Abstract

With the development of 5G, it is widely accepted that 5G will adopt a system architecture that supports ultradense network (UDN) deployments. In this architecture, a user will be covered by a large number of small cell base stations (SBSs). However, selecting an SBS for handover is a great challenge. To address this challenge, the emerging content-oriented Named Data Networking (NDN) offers attractive advantages, such as name-based routing. In this paper, a request-based handover strategy (RBHS) is presented to improve user-perceived performance and achieve an optimal allocation of resources, and a caching mechanism based on users' requests is introduced to support it. The proposed caching mechanism and access network selection mechanism were validated using ndnSIM. Simulation results demonstrate that, compared with access network selection based on SINR, our proposed strategy achieves around a 30% higher cache hit rate and 20% more traffic reduction.

1. Introduction

Mobile data traffic and the number of mobile devices have been growing exponentially, and monthly global mobile data traffic was expected to surpass 15 exabytes in 2018, which poses a significant challenge to mobile communication systems [16]. The currently deployed fourth-generation mobile communication system (4G) is unable to meet this challenge. The fifth-generation mobile communication system (5G) has therefore been proposed, aiming, compared with 4G, at 1000 times higher mobile data volume, 10 times more connected devices, 10 times higher typical end-user data rates, 10 times the spectral efficiency, 5 times lower latency, and 25 times the average cell throughput [3]. To achieve these requirements, the 5G cellular architecture should support ultradense network (UDN) deployments [4–6]. The UDN means that, within the coverage of a macro base station (Macro), the density of SBSs using low-power radio transmission technology will reach more than 10 times the existing SBS deployment density, the distance between SBSs will be 10 meters or less [7, 8], the number of users per square kilometer will reach 25,000 [9], and the ratio of active users to SBSs will approach 1:1 in the future [10]. Nevertheless, choosing the optimal small cell base station (SBS) to connect to in such an ultradense environment, so as to relieve the burden on the links between the Macro and the backbone, is a great challenge.

In this paper, we propose a request-based handover strategy (RBHS). The goals of this strategy are to allocate network resources optimally, reduce data traffic, and decrease latency, thereby improving the user experience. In RBHS, we focus on the analysis of user requests. To analyze and compute over user requests, we introduce Named Data Networking (NDN) into the 5G cellular architecture. NDN is a new Internet architecture initiated by the National Science Foundation (NSF) in 2010 [7, 8]. In NDN, each router is equipped with a fixed amount of memory to cache content, which distinguishes NDN from traditional IP networks. NDN runs a requester-driven communication model: a client first sends out an interest packet for the desired content, and a router that holds the same content in its local cache returns the content within a data packet. Taking the exponentially growing mobile data traffic into account, we also introduce a cache module into the SBS as well as the Macro. By caching popular contents in the SBS and Macro, the corresponding requests no longer need to be traced back to the server, which saves a considerable amount of redundant traffic. In summary, the strategy in this paper is divided into two parts: a caching mechanism and a handover mechanism.

The rest of the paper is organized as follows: Section 2 summarizes the related work. Section 3 introduces our system model which includes the caching mechanism and the handover mechanism. The RBHS is presented in Section 4. The evaluation setting, metrics, impact factors, and simulation results of RBHS are discussed in Section 5. Finally, Section 6 concludes the paper.

2. Related Work

As the amount of mobile data traffic continues to rise along with the explosive growth of mobile devices [16], 5G has become a hot topic, and more and more researchers are paying attention to it [1, 2]. METIS (Mobile and Wireless Communications Enablers for the Twenty-Twenty Information Society) is an integrated project partly funded by the European Commission under the FP7 research framework and is considered the 5G flagship project [2, 3]. Moreover, the 863 Program in China launched a 5G major project, phase I and phase II, in June 2013 and March 2014, respectively [3]. At present, countries around the world are holding wide-ranging discussions on the 5G development vision, application requirements, candidate frequencies, and key technical indexes. Through these joint efforts, the 5G vision and capability requirements have become basically clear. Standardization of 5G has been in full gear since early 2016 [2] and was expected to be finalized in 2018. According to [16], 5G will use a system architecture that supports ultradense network (UDN) deployment, which may consist of different types of infrastructure elements (BSs), such as macro-, micro-, and pico-BSs. Low-power BSs such as pico-BSs will be used to enhance coverage and capacity by serving areas much smaller than a macro-BS coverage area. The UDN offers multiple options for satisfying application requirements [1, 3].

Therefore, in such a complex environment, it is wise to introduce NDN into the 5G cellular architecture. NDN is a content-centric architecture that provides name-based routing [11, 12]. Based on this characteristic, we can easily obtain user request information, which cannot be done in IP networks. NDN has several attractive advantages, such as network load reduction, low dissemination latency, and energy efficiency. To realize these benefits of the NDN paradigm, the content caching mechanism plays the most important role. The solution proposed in [13], called Hamlet, differs from previous work in that it helps users decide what information to keep, and for how long, based on a probabilistic estimate of what is cached in the neighborhood. The works [14, 15] propose a collaborative caching scheme guided by traffic engineering (TECC) for the emerging content-centric networks. The work [16] develops a popularity-based coordinated caching strategy named the Effective Multipath Caching (EMC) scheme. The works [17–20] introduce caches into small cell base stations.

Considering that users are likely to take a more active role in 5G (e.g., selecting the set of serving base stations, performing advanced interference rejection, or exploiting local cooperation) [2], we suggest taking the user request into both the caching policy and the handover policy. In this work, we develop a request-based handover strategy that adopts the NDN network architecture. To the best of our knowledge, very few studies have tried to do this.

3. System Model and Problem Statement

Since NDN is newly introduced into 5G, we first sketch out the 5G cellular architecture using NDN in this section. Then, combined with our optimization objective, we illustrate the key issues of the caching mechanism and the handover mechanism.

3.1. An Overview of the 5G Cellular Architecture Using NDN

UDN is a promising network densification architecture for the 5G era, aiming at a spectrum-efficient and energy-efficient solution that copes with a large number of devices and the huge mobile data traffic of future wireless applications. As illustrated in Figure 1, the 5G network will further miniaturize and distribute the existing small cells, and their distribution will become even denser: the density of base station deployment will increase by more than 10 times. That is, users will be more likely to be covered by multiple SBSs (small cell base stations) simultaneously, and UE (user equipment) can reselect its SBS. We suggest taking the users' requests into consideration by introducing the NDN architecture.

As is well known, NDN is a protocol stack [7, 8] that is easier to manage and achieves better performance than the IP protocol stack. There are two types of packets in NDN: interest and data. When a user requests content, it sends out an interest packet containing the name of the desired content. Data packets are the reply messages issued by nodes that hold data matching that name. Data is transmitted only in response to an interest and consumes that interest. NDN has three main data structures: the Forwarding Information Base (FIB), the Content Store (CS), and the Pending Interest Table (PIT). The FIB is used to forward interest packets toward potential sources of matching data. The CS is similar to the buffer memory of an IP router but uses a different replacement policy. The PIT keeps track of interests forwarded upstream toward content sources, so that returned data can be sent downstream to its requesters.
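The interaction of these three structures can be sketched as follows. This is an illustrative Python sketch, not the actual ndnSIM/NFD implementation; the class and method names are our own.

```python
class Router:
    """Minimal sketch of NDN interest processing (illustrative only)."""

    def __init__(self):
        self.cs = {}     # Content Store: name -> cached data
        self.pit = {}    # Pending Interest Table: name -> requesting faces
        self.fib = {}    # Forwarding Information Base: name prefix -> next-hop face

    def on_interest(self, name, in_face):
        if name in self.cs:                      # cache hit: reply directly
            return ("data", name, self.cs[name])
        if name in self.pit:                     # already pending: aggregate the interest
            self.pit[name].append(in_face)
            return None
        self.pit[name] = [in_face]               # record and forward upstream via FIB
        next_hop = self.fib.get(self.longest_prefix(name))
        return ("forward", name, next_hop)

    def longest_prefix(self, name):
        """Longest-prefix match on '/'-separated content names."""
        parts = name.split("/")
        while parts:
            prefix = "/".join(parts)
            if prefix in self.fib:
                return prefix
            parts.pop()
        return ""
```

A second interest for the same pending name is aggregated in the PIT rather than forwarded again, which is what makes in-network caching effective.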

In our strategy, we propose to install the FIB, CS, and PIT in the SBS and Macro to realize the NDN architecture. Obviously, a handover may cause a data packet to be returned to a location that is no longer reachable (the previously connected SBS). To receive such lost data packets, the UE must initiate a recovery mechanism by retransmitting its interest packets. As a result, handover in the NDN architecture may increase the retransmission probability and introduce significant latency. Solving this problem is the main difficulty of introducing NDN into 5G. We note that a user can simultaneously connect to several wireless access technologies and seamlessly move between them (see media independent handover, or vertical handover, IEEE 802.21, also expected in future 4G releases). We therefore suggest that, in 5G, the UE may simultaneously connect to the previous SBS and the newly selected SBS, disconnecting from the previous SBS once its last pending request has finished. The handover process is shown in Figure 2.

Moreover, we suggest adding a reconnection list to the interest packet, which lists the SBSs that can be reconnected to. Besides the interest packet and the data packet, we add a confirmation packet, which is used to request the data from an SBS after reconnection. The details of the three types of packets are described in Figure 3. In addition, to facilitate decision making, we add two tables, the neighbor cache table (NCT) and the neighbor state table (NST), shown in Figure 4. The NCT records the content data cached by nearby SBSs, and the NST records the state of nearby SBSs.
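To make the extended packet formats and tables concrete, the following sketch models them as simple Python structures; all field names and the example table contents are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class InterestPacket:
    name: str                                              # name of requested content
    reconnection_list: list = field(default_factory=list)  # SBS ids that can be reconnected to

@dataclass
class ConfirmationPacket:
    """Sent by the UE after reconnection to fetch the pending data."""
    name: str
    previous_sbs: str

# Neighbor Cache Table (NCT): neighbor SBS id -> names of content it caches
nct = {"SBS1": {"/d1", "/d2"}, "SBS3": {"/d1"}}

# Neighbor State Table (NST): neighbor SBS id -> current load (connected UEs)
nst = {"SBS1": 12, "SBS3": 4}
```

With these tables, a local SBS can answer "which neighbor already caches the requested name, and how loaded is it?" without contacting the neighbors at decision time.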

In the next subsections, we discuss the caching and handover mechanisms.

3.2. Caching Mechanism

Based on the characteristics of the 5G cellular architecture and the NDN architecture, we observe that when popular contents are cached in an SBS (or the Macro) and a user requests the same content, the SBS can directly deliver the content to the user without contacting the server. For example, as shown in Figure 5, when UE1 requests content data D1, UE1 can directly get D1 from SBS1 or the Macro without sending the request to the server.

Since every SBS is equipped with limited storage space to cache content data, how to effectively improve the cache hit rate attracts our attention. The key technical issues of the caching mechanism fall into the following two questions [4]:

What to cache?

There are various contents on the Internet, and the cache space of an SBS is limited. It is hence important to decide what content to cache, taking content popularity into account. Moreover, adjacent SBSs do not necessarily have to cache similar contents, since the users they serve differ and they can share and exchange contents. Improving the diversity of the cached content is thus of vital importance for augmenting the cache hit ratio.

How to cache?

Caching policies, which decide what to cache and when to release caches, are crucial for overall caching performance; their goal is to augment the hit ratio. Accordingly, the current popularity, the trend of popularity, the storage size, and the location of replicas should all be involved in the strategy.

3.3. Access Network Selection Mechanism

In wireless communication systems, the dense distribution of small cell base stations leads to repetitive network coverage, and in the 5G era this feature will become increasingly pronounced. Under repetitive coverage, there are diverse SBSs to connect to; that is, every user can reselect another SBS or the Macro. Therefore, the main topic of the handover mechanism can be divided into two parts:

When to reselect the SBS?

As shown in Figure 5, when UE2 requests content data D6, UE2 will connect to SBS2 and obtain content data D6 from the CS of SBS2. After that, if UE2 requests content data D1, then UE2 will first consider reconnecting to SBS1, SBS3, or Macro.

How to reselect SBS?

In the above case, when UE2 decides to reselect an SBS, it obviously needs to choose one among SBS1, SBS3, and the Macro. We suggest taking both the condition of the UE and the condition of the base stations into account.

4. RBHS: The Effective Handover Strategy

With the objective of reducing the traffic between the Macro and the server, we need to attain a high cache hit rate. We argue that the timing of handover and the cache update mechanism are both crucial to this end.

Our strategy process is shown in Figure 6.

When a UE sends an interest packet to the local SBS, the local SBS first checks the NCT according to the reconnection list in the interest packet. If no match is found, the local SBS adds the interest to the PIT and forwards the request to the next hop according to the FIB; otherwise, it triggers the handover mechanism.

When the handover mechanism is triggered, the local SBS computes the rank of each SBS in the reconnection list and lets the UE reconnect to the best one. After reconnection, the UE sends a confirmation packet to that SBS to obtain the requested data.

We treat the receipt of content as the trigger of the caching mechanism. When an SBS receives data, it first checks the PIT. If there is no matching entry, it drops the data; otherwise, it adds the data to the CS. If the CS is full, it replaces the lowest-ranked data, and then it sends the data to the UE according to the PIT and FIB.
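This data-receipt procedure can be sketched as follows; the function name and the `value` ranking callable are our own illustration of the described behavior, with the ranking itself defined in Section 4.1.

```python
def on_data(name, payload, cs, pit, cs_capacity, value):
    """Sketch of the caching trigger when an SBS receives a data packet.

    cs:       dict name -> payload (Content Store)
    pit:      dict name -> list of downstream faces awaiting this name
    value:    callable name -> rank score; the lowest-valued entry is evicted
    Returns the (face, name, payload) deliveries, or None if the data is dropped.
    """
    if name not in pit:                          # no matching pending interest: drop
        return None
    if name not in cs:
        if len(cs) >= cs_capacity:               # CS full: evict the lowest-ranked data
            victim = min(cs, key=value)
            del cs[victim]
        cs[name] = payload                       # cache the new data
    faces = pit.pop(name)                        # satisfy and clear the pending interest
    return [(face, name, payload) for face in faces]
```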

4.1. Request-Based Caching Mechanism (RBCM)

Following the discussion in Section 3.2, we need to decide which data should be cached in the limited cache space, as well as the update strategy that becomes active when the cache space is full.

As illustrated in Figure 5, the local SBS sends the UE's request to the next hop only if the content is cached neither in the local SBS nor in its neighbors. Hence, if incoming data matches a PIT entry, the data is stored in the CS and also sent to the requesting UE according to the PIT and FIB. The question is: when the CS is full, how should new incoming data be stored? Clearly, we must replace some content in the CS with the new data. It is therefore meaningful to rank the data in the CS by a factor called "value", ensuring that the most valuable contents remain in the caching space.

As is well known, the goal of increasing the traffic saving is equivalent to storing the most popular data in the CS. Different data have different popularity, and the probability of a UE requesting each data item differs; however, we cannot observe the popularity directly. Because the more popular a data item is, the more cache hits it receives, we take the cache hit count into account when computing the "value" factor. We define a hit count $h_i$ for each data item $D_i$ stored in the CS, which indicates the popularity of the data: whenever a UE's request matches $D_i$, we set $h_i = h_i + 1$. As we noted, the popularity of data changes over time; even if the popularity of a data item is high in this period, it may decline in the next. Hence, the time at which the data was stored should also be taken into account.

Since the UE can reselect its access network based on the request, it can reach all the data cached in the local SBS and in the SBSs in the reconnection list. In other words, from the UE's viewpoint, the CSs of the local SBS and of the SBSs in the reconnection list can be seen as a whole; increasing the cache hit rate therefore means improving the diversity across adjacent SBSs. If a data item in the CS is also stored in the CS of an adjacent SBS, the value of that data decreases: the more replicas exist in the CSs of adjacent SBSs, the less the value of the data.

Considering the above, the mathematical expression of the value of data item $D_i$ is as follows:

$$V_i = \frac{h_i}{R(t_i, t)\,(r_i + 1)} \qquad (1)$$

In formula (1), $h_i$ denotes the hit count of $D_i$, and $r_i$ is the number of replicas of $D_i$ existing in the adjacent SBSs. $t$ is the current time, $t_i$ is the time when $D_i$ was stored, and $R(t_i, t)$ represents the total number of the UE's requests from $t_i$ to $t$. Thus $h_i / R(t_i, t)$ indicates the popularity of $D_i$ over $[t_i, t]$.

After calculating the values of all data stored in the SBS, we can rank the data by value; the values are periodically updated. If new incoming data matches the PIT and is to be stored in the CS while the CS is full, the data with the minimum value is deleted to make room for the new data.
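Assuming our reconstruction of formula (1), the value computation and the eviction choice can be sketched as:

```python
def cache_value(hits, total_requests, replicas):
    """Value of a cached item under our reading of formula (1):
    popularity (hits over requests in the storage window), discounted
    by the number of replicas in adjacent SBSs."""
    if total_requests == 0:
        return 0.0
    popularity = hits / total_requests           # h_i / R(t_i, t)
    return popularity / (replicas + 1)           # more neighbor replicas -> less value

def eviction_victim(entries):
    """entries: name -> (hits, total_requests, replicas).
    Returns the name with the minimum value, i.e., the item to replace."""
    return min(entries, key=lambda n: cache_value(*entries[n]))
```

Note how an item that is popular locally but widely replicated nearby can still be evicted, which is exactly the diversity pressure described above.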

4.2. Request-Based Handover Mechanism (RBHM)

Following the discussion in Section 3.3, we need to decide the best timing of reselection and how to choose the SBS. Based on the RBCM, we assume that the data in the adjacent SBSs are sufficiently diverse; from the UE's viewpoint, the CSs of all adjacent SBSs can be seen as a whole. In fact, however, each SBS exists individually, so to get the data from an SBS directly, we must select an SBS that has already stored the data. According to this analysis, the best timing of handover is right after the UE decides to request some new content that makes the currently connected network no longer the best access network, and the request should be the most important input to the handover decision. The channel capacity of a device is partly determined by its RSS; in general, the RSS depends on the distance between the UE and its attached BS and, to some extent, reflects mobility. Hence, the RSS should be taken into consideration. As the SINR and the load condition of the SBS also affect the performance of the access SBS, we take the SINR and the load condition into consideration as well.

The UE periodically measures the RSS and SINR and considers only SBSs whose RSS and SINR exceed the thresholds. It first orders these SBSs by RSS, selects the top $n$, and records the RSS rank of candidate $j$ as $a_j$. It then orders the top $n$ SBSs by SINR and records the SINR rank as $b_j$. Finally, it adds the top $n$ SBSs to the reconnection list in the interest packet ($n$ is the length of the reconnection list). From the interest packet, the local SBS obtains the reconnection candidates. We define the weighted RSS ($W_{rss}$) and weighted SINR ($W_{sinr}$) for each candidate $j$, expressed as

$$W_{rss}^{j} = \frac{n - a_j + 1}{n}, \qquad W_{sinr}^{j} = \frac{n - b_j + 1}{n}$$

The range of $W_{rss}^{j}$ and $W_{sinr}^{j}$ is $[1/n, 1]$.

Next, we define the weighted data ($W_{data}$) for each candidate. From the predefined neighbor cache table (NCT), which records the content data stored in the neighbors, we can find out whether candidate $j$ has the requested data or not. $W_{data}^{j}$ is expressed as

$$W_{data}^{j} = \begin{cases} 1, & \text{candidate } j \text{ caches the requested data} \\ 0, & \text{otherwise} \end{cases}$$

The range of $W_{data}^{j}$ is $\{0, 1\}$.

We also define the weighted load ($W_{load}$) for each candidate, using the predefined neighbor state table (NST), which records the number of UEs each SBS is connected to. For candidate $j$, let its load be $l_j$ and the maximum connection number be $L_{max}$. $W_{load}^{j}$ is expressed as

$$W_{load}^{j} = 1 - \frac{l_j}{L_{max}}$$

The range of $W_{load}^{j}$ is $[0, 1]$.

We then use the parameter $C_{u,j}$ to represent the benefit of UE $u$ connecting to candidate $j$. Based on the aforementioned analysis, $C_{u,j}$ is expressed as

$$C_{u,j} = \alpha W_{rss}^{j} + \beta W_{sinr}^{j} + \gamma W_{data}^{j} + \delta W_{load}^{j}$$

We utilize the Analytic Hierarchy Process (AHP) [21] to calculate the weights $\alpha$, $\beta$, $\gamma$, and $\delta$. The AHP is a structured technique for organizing and analyzing complex decisions, based on mathematics and psychology. Rather than prescribing a "correct" decision, the AHP helps decision makers find one that best suits their goal and their understanding of the problem. In our setting, compared with $W_{rss}$, $W_{sinr}$, and $W_{load}$, $W_{data}$ is strongly preferred; compared with $W_{sinr}$ and $W_{load}$, $W_{rss}$ is moderately preferred; and $W_{sinr}$ is as important as $W_{load}$.
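Putting the four weighted factors together, the candidate scoring can be sketched as follows. The weight values are illustrative placeholders consistent with the stated AHP preferences ($W_{data}$ dominant, then $W_{rss}$), not the exact weights derived in the paper.

```python
def score_candidate(rank_rss, rank_sinr, has_data, load, max_conn, n,
                    weights=(0.20, 0.10, 0.60, 0.10)):
    """Benefit C_{u,j} of reconnecting to one candidate SBS (illustrative sketch).

    rank_rss, rank_sinr: 1-based ranks among the n candidates (1 = best)
    has_data:            whether the NCT says this candidate caches the content
    weights:             (alpha, beta, gamma, delta) -- AHP-derived in the paper;
                         these are placeholder values with W_data dominant.
    """
    w_rss = (n - rank_rss + 1) / n           # best rank -> 1, worst -> 1/n
    w_sinr = (n - rank_sinr + 1) / n
    w_data = 1.0 if has_data else 0.0        # binary, from the NCT
    w_load = 1.0 - load / max_conn           # lighter load -> higher score
    alpha, beta, gamma, delta = weights
    return alpha * w_rss + beta * w_sinr + gamma * w_data + delta * w_load

def best_candidate(candidates):
    """candidates: sbs_id -> keyword-argument dict for score_candidate."""
    return max(candidates, key=lambda s: score_candidate(**candidates[s]))
```

With these weights, a lightly loaded SBS with excellent signal generally still loses to one that actually caches the requested content, matching the request-first design.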

From the above, we can obtain $C_{u,j}$, which represents the benefit of UE $u$ reconnecting to candidate SBS $j$. After calculating $C_{u,j}$ for all candidates of UE $u$, we find the maximum, and UE $u$ reconnects to the corresponding SBS. Since the requested content has already been stored in the reconnection SBS, we can get the requested content from the reconnection SBS instead of sending a request to the server.

5. Simulation Results

In this section, we implement the handover strategy of RBHS in the ndnSIM simulator [22].

5.1. Simulation Setting

Considering the particular features we add to the NDN architecture, we have modified the source code of ndnSIM. The basic configurations are explained as follows.

Simulation environment: We set the simulation environment to a densely populated area. We assume that all SBSs are evenly distributed around the Macro and all UEs are randomly distributed in this area. For simplicity, the mobility model of the UE is set to a random walk at low speed.

Performance metric: We take the traffic-saving rate (TSR) as the dominant metric, reflecting the importance of saving the overall traffic between the Macro and the server. The TSR is the ratio of the average amount of traffic reduced by adopting RBHS to the amount of traffic generated when SBSs are reselected according to SINR. In our context, the traffic is measured as the number of incoming packets. The cache hit rate indicates how often an SBS has the data in its CS and can directly deliver it to the UE without contacting the server, which also represents traffic saving, so we report it as well.

The compared handover strategy: The main idea of our strategy is to be request-based: the cache hit rate and the traffic-saving rate are increased by the request-based mechanism, relieving the burden on the link between the Macro and the backbone caused by growing mobile data traffic and the UDN. We therefore take the handover strategy based on SINR and load condition as the comparison baseline.

Input data: We generate synthetic input data as follows. Let $D$ denote the set of content items. All requests are independently and identically distributed within the set $D$.

The requests of each UE follow the Zipf–Mandelbrot law, also known as the Pareto–Zipf law. The probability mass function is given by

$$f(k; N, q, s) = \frac{1/(k+q)^{s}}{H_{N,q,s}},$$

where $H_{N,q,s}$ is given by

$$H_{N,q,s} = \sum_{i=1}^{N} \frac{1}{(i+q)^{s}}.$$

In the formula, $N$ is the total number of data items, $k$ is the rank of the data, and $q$ and $s$ are the parameters of the distribution; $s$ is the skewness factor indicating the concentration degree of the arriving requests.
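A synthetic request trace following this law can be generated with only the standard library. Parameter names follow the formula above; the sampling approach (weighted choice over ranks) is our own illustration.

```python
import random

def zipf_mandelbrot_pmf(N, q, s):
    """Probability of requesting the item of rank k = 1..N."""
    weights = [1.0 / (k + q) ** s for k in range(1, N + 1)]
    H = sum(weights)                     # normalizing constant H_{N,q,s}
    return [w / H for w in weights]

def generate_requests(num_requests, N=1000, q=0.0, s=0.7, seed=None):
    """Draw i.i.d. content ranks from the Zipf-Mandelbrot distribution."""
    rng = random.Random(seed)
    pmf = zipf_mandelbrot_pmf(N, q, s)
    ranks = list(range(1, N + 1))
    return rng.choices(ranks, weights=pmf, k=num_requests)
```

With the default setting of the paper (N = 1000, s = 0.7, and q = 0 as our assumption, which reduces to plain Zipf), low ranks dominate the trace, so a small cache can still capture a large share of requests.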

Impact factors and default setting: To explore the effectiveness and scalability of RBHS, we consider several impact factors, including the cache size, the request pattern, the content population, and the size of the reconnection list attached to the interest packet.

The default setting is as follows. In terms of the size of chunks, we describe the cache size as the relative cache size, i.e., the proportion of the cache size per SBS to the total size of all data. The relative cache size at each SBS is set to 3%, the total number of data items is N = 1000, the skewness factor is s = 0.7, and the size of the reconnection list is 3.

5.2. Experiment Results

Impact of cache size: We conduct the experiment in the range of relative cache size from 1% to 5%, while other parameters follow the default setting.

Figure 7 compares the cache hit rate and the traffic-saving rate (TSR) achieved by the two strategies. Our proposed RBHS significantly outperforms the baseline strategy based on SINR.

Impact of the request pattern: As explained before, the requests follow the Zipf–Mandelbrot law. The parameter $s$ is the key factor of the Zipf–Mandelbrot law, indicating the degree of concentration of the requests: the larger $s$ is, the fewer data items cover the majority of the requests.

In Figure 8, we test the impact of the request pattern on the effectiveness of the two strategies, varying the parameter $s$ from 0.5 to 1.0 under the default setting. As shown in Figure 8, the more concentrated the requests are, the more effective the strategy is.

Impact of the data population: To examine the scalability of RBHS, we use test data covering a large range of data scales, with the number of items varying from 500 up to 20,000.

Given that the relative cache size is fixed at 3%, we obtain the results in Figure 9. The saved traffic increases smoothly as the scale enlarges, which means RBHS will achieve even better performance when deployed in a large network, considering that UE requests for data are growing exponentially nowadays.

Impact of the reconnection list: As mentioned in Section 4.2, by setting the size of the reconnection list, the cache size of an SBS can be viewed as extended to up to (list size × cache size), or somewhat less, because copies may exist in the adjacent SBSs. As shown in Figure 10, the cache hit rate increases with the size of the reconnection list and then tends to be stable. We believe the cache hit rate saturates because the benefit of a larger reconnection list is limited by the variety of the CS contents in the SBSs.

6. Conclusion and Future Work

In this paper, we propose a request-based handover strategy, divided into a caching mechanism and a handover mechanism, to deal with the UDN in 5G. For the sake of analyzing user requests, we introduce NDN into the 5G cellular architecture. The simulation results demonstrate that our proposed RBHS is effective and scalable.

In the future, since 5G includes related topics such as D2D communication, we plan to develop a handover strategy that takes D2D communication into account, which makes the conditions to consider more complicated.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work described in this paper was fully supported by “the Fundamental Research Funds for the Central Universities” (no. 2017JBM005).