Abstract

Mobile edge computing (MEC), a promising paradigm, brings services closer to users by leveraging the available resources in an edge network. The crux of MEC is to reasonably allocate resources to satisfy the computing requirements of each node in the network. In this paper, we investigate the service migration problem of the offloading scheme in a power-constrained network consisting of multiple mobile users and fixed edge servers. We propose an affinity propagation-based, clustering-assisted offloading scheme that takes into account the users’ mobility prediction and the sociality association between mobile users and edge servers. The clustering results provide the candidate edge servers, which greatly reduces the complexity of observing all edge servers and decreases the rate of service migration. In addition, the available resources of the candidate edge servers and the channel conditions are considered to optimize the offloading scheme and guarantee the quality of service. Numerical simulation results demonstrate that our offloading strategy can enhance the data processing capability of power-constrained networks and achieve computing load balance.

1. Introduction

The arrival and evolution of 5G deliver a transformative path toward an ultimately high-quality end-user experience. Such a high-speed, high-capacity, low-latency 5G network can well meet the demands of the increasingly data-intensive applications of the current Internet of Things (IoT) and Artificial Intelligence (AI) era. Numerous applications require high-speed Internet connectivity and high computation power, which cannot be provided by a mobile device with limited memory and storage capacity [1–3]. In such situations, it is feasible to transfer resource-intensive tasks to external platforms such as cloud, grid, and edge servers. This process is known as task offloading, which decides when and how a task should be offloaded to an external platform for execution.

During the development of 5G, the mobile edge computing (MEC) technique has been playing an important role [4, 5]. Massive smart edge devices in various IoT systems harbor huge distributed computing capabilities [6, 7]. The reasonable utilization of this distributed capability can reduce the amount and frequency of information transferred to a distant centralized cloud server, which greatly decreases transfer latency and avoids cloud-server processing latency. Although existing research can reduce the delay in some scenarios, many severe problems remain to be solved. In particular, how to migrate services as mobile users move is a critical problem. If users move far away from the MEC server responding to their requests, the result can be significant quality of service (QoS) and quality of experience (QoE) degradation and service interruption due to the long transmission latency of the offloaded task [8]. To ensure service continuity, services need to be migrated to nearby MEC servers that cover a user’s current location when the user moves out of the previous service coverage.

The migration of user-generated data in edge networks involves transmission costs, user mobility, transmission resources, etc. Thus, an efficient edge server selection algorithm is needed to select the optimal target edge server. In general, two factors should be taken into account: users’ trajectories and QoS utility. On the one hand, existing research rarely exploits users’ trajectory data and the prediction of their movement, adopting a random mobility model instead [9]. However, users’ mobility patterns (e.g., direction and velocity) have a significant influence on the construction of the candidate edge server set, and users’ trajectory data can be used to predict their movement. On the other hand, the existing literature pays less attention to the effect of QoS utility (network latency, energy consumption, and cost) on the selection of edge servers during service migration and, therefore, rarely selects the edge server with the highest QoS utility [10–13]. Without considering users’ trajectory data and QoS utility, the accuracy of edge server selection and the efficiency of service migration decrease. To develop a QoS-aware algorithm that improves edge server selection, we must address challenges such as how to integrate users’ trajectory data and QoS utility into the server selection algorithm.

In this paper, we investigate an offloading scheme that considers users’ trajectory data, sociality associations, and QoS parameters. First, we employ a Kalman filtering algorithm to predict users’ trajectories; then, we compute the sociality association between mobile users and edge servers from their historical connection relationships. By using the mobility prediction and the sociality association results as input parameters of an affinity propagation algorithm, we propose a mobility-aware and sociality-associate clustering algorithm (MASACA), in which MEC servers are divided into candidate sets associated with different users. Moreover, we devise a QoS utility function for each MEC server based on its available computation resources and the quality of its channel links, which is used to determine the appropriate MEC server for a user to offload tasks to. The main contributions of this paper are summarized as follows: (1) A mobility model of mobile users is considered and combined with a Kalman filter to predict the locations of users. (2) The sociality association is taken into account, which guarantees the continuity of service. (3) Clustering has been shown to achieve efficient and reliable management and reduce data congestion; we construct an AP-based algorithm to obtain stable clusters by considering the trajectory data and the sociality association of mobile users.

The paper is organized as follows: Section 2 reviews related work in the literature. Section 3 presents the system model and problem formulation. Section 4 gives the details of AP-MASACA. The QoS utility function encompassing the outage probability and average coverage is derived in Section 5. Section 6 evaluates the performance of the proposed scheme by comparing it with existing offloading schemes. Finally, Section 7 concludes the paper.

2. Related Work

In order to optimize the delay and energy consumption of mobile devices, computational offloading strategies are adopted in mobile edge computing [14]. Mobility is a key challenging topic in MEC systems, which affects decisions in several domains such as caching, connected vehicles, and especially computation offloading [15]. However, the mobility of mobile users and the limited coverage of edge servers can result in significant network performance degradation, a dramatic drop in QoS, and even interruption of ongoing edge services [16]. Usually, the types of UE mobility can be categorized as random mobility, short-term predictable mobility, and fully known mobility, depending on whether the future location of the UE is known. Because of the limited resources of servers, the mobility of users, and the low latency requirements of service requests, computation offloading and service migrations are expected to occur regularly in MEC systems. The authors in [17] proposed a glimpse mobility prediction model based on the seq2seq model, which provides useful coarse-grain mobility information for Mobility-Aware Deep Reinforcement Learning training. However, traditional offloading approaches (e.g., auction-based and game-theory approaches) cannot adjust the strategy according to a changing environment when dealing with the mobility problem. To maintain system performance over the long term, an online user-centric mobility management scheme is proposed in [18] to maximize the edge computation performance while keeping the energy consumption of the user’s communication low by using Lyapunov optimization and multiarmed bandit theories. These methods overcome the challenges of mobility in MEC and contribute to reducing service delays or device energy consumption. However, minimizing smart devices’ energy consumption and minimizing tasks’ execution time do not always coincide and may conflict in IoT.

Besides delay and energy consumption, MEC system performance can be further improved by utilizing recent developments in social networks [13, 19] and energy harvesting methods. The Social IoT (SIoT) has been proposed as a promising way to create and maintain collaborative relations among smart SIoT devices. The idea of SIoT is to build social collaborative networking for smart IoT devices to achieve locally distributed data processing according to the rules set by their owners [20]. The collaboration between SIoT and MEC is crucial to minimize network communication and computation, since SIoT collaboration can locally process user requests and further reduce the communication and computation in MEC networks. Nevertheless, load sharing among SIoT devices, MEC, and remote servers brings about new challenges for the communication-computation trade-off, cross-layer design in SIoT, and the forwarding-aggregation trade-off [14, 21]. To address this issue, Wang et al. [22] formulated a new optimization problem, the SIoT Collaborative Group and Device Selection Problem (SCGDSP), and proposed Optimal Collaborative Group Selection (OCGS) for a fundamental SCGDSP to find the intrinsic properties of collaborative group construction under the deployment of SIoT devices in a grid street map, reducing the running time and transmission delay. However, one of the main challenges of exploiting social attributes is the selfish nature of users: some users respond only to requests they are interested in and do not care about other users’ needs. The authors in [23] proposed a physical-social-based cooperative cache framework to maximize the social group utility and meet the data requests of users. Considering the diversity of users, the authors of [24, 25] combined MEC with the mobile crowdsensing approach and proposed a social-based method to optimize content sharing among users by exploiting their mobility and sociality, which improved the performance of a content-sharing scenario.

There is no common definition of social strength in current research on SIoT. Jung et al. [26] proposed a social strength prediction model, which infers social connections among smart objects and predicts the strength of those connections using the co-usage data of the objects via two major components: (1) entropy-based and (2) distance-based social strength computations. These two components capture different properties of the co-usage of objects, namely, diversity and spatiotemporal features, which are essential factors contributing to the value of social strength. Wang et al. [27] put forward a method that measures social strength in terms of available storage capacity and betweenness centrality. In this paper, we propose a definition that utilizes the information entropy to calculate social strength.

3. System Model and Problem Formulation

In this section, we give the system model and computation model of the task offloading and then formulate the offloading problem. For the sake of clarity, we summarize the notations in Table 1.

3.1. System Model

We consider a multiuser MEC model, as shown in Figure 1, with a set of mobile users (MUs) denoted as and a set of edge servers denoted as . According to their physical positions, users are clustered into different service cells, each surrounding an edge server. Assume that each MU has computation tasks. Denote as the task set of , where each is independent and decomposable. For each task, a mobile user can process it locally or offload it to the corresponding edge server within its service cell.

However, the service cell of an MU may change due to the MU’s mobility. To guarantee seamless service, service migration is necessary. Specifically, as shown in Figure 2, an MU initially belonged to the service cell surrounding one edge server and is now within the service cell surrounding another. If it offloaded computing tasks to the former edge server before moving out of the initial cell and the computation results were not returned before it entered the current cell, the computation results need to be migrated from the former edge server to the current one and then returned to the MU.

Thus, there are three computation offloading models: local execution, offloading without service migration, and offloading with service migration. In the following sections, we investigate how to select among these three computation offloading models to minimize the time delay for completing all tasks of all mobile users under the guarantee of successful offloading.

3.2. Computation Offloading Model

Define as the input size of task for , the size of computation results returned to for , and the total number of CPU clock cycles required to complete task .

3.2.1. Local Computing Model

Let denote the computation capability of , which is dependent on the intrinsic nature of the MU, i.e., CPU cycles per second. The time for locally processing the request of task at is expressed as

Assume that the energy consumption per second when the CPU is working is . Then, the energy consumption of locally executing the task can be given as
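Since the display equations are not reproduced above, the following is a minimal reconstruction of the standard local-execution model; the symbols are assumptions introduced here rather than the paper’s own notation ($c_{i,j}$: CPU cycles required by task $j$ of MU $i$; $f_i^{l}$: the MU’s local CPU frequency; $\kappa_i$: the per-second working power):

\[
t_{i,j}^{l} = \frac{c_{i,j}}{f_i^{l}}, \qquad
e_{i,j}^{l} = \kappa_i\, t_{i,j}^{l} = \kappa_i \frac{c_{i,j}}{f_i^{l}} .
\]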

3.2.2. Offloading without Service Migration

Three phases are needed to offload a computation task to an edge server without service migration: (i) the MU transmits the task request to an edge server; (ii) the edge server executes the computing task; and (iii) the edge server sends the computing results back to the MU.

Assume that the wireless channel bandwidth between and is . Let represent the transmission power of for transmitting tasks to . The transmission rate between and can be calculated by where represents the transmission channel fading coefficient, is the distance from to , is the standard path loss propagation exponent, and denotes the power of the additive white Gaussian noise (AWGN). Then, the delay of the transmission phase from to for the task can be given as
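A plausible sketch of the Shannon-capacity-based transmission rate and the resulting uplink delay, using assumed symbols ($B_{i,k}$: bandwidth between MU $i$ and server $k$; $p_{i,k}$: transmit power; $h_{i,k}$: fading coefficient; $d_{i,k}$: distance; $\alpha$: path-loss exponent; $\sigma^2$: AWGN power; $d_{i,j}^{\mathrm{in}}$: input size of task $j$):

\[
r_{i,k} = B_{i,k} \log_2\!\left(1 + \frac{p_{i,k}\,|h_{i,k}|^2\, d_{i,k}^{-\alpha}}{\sigma^2}\right), \qquad
t_{i,j,k}^{\mathrm{up}} = \frac{d_{i,j}^{\mathrm{in}}}{r_{i,k}} .
\]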

The computing capability of an edge server is denoted as . Let denote the actual CPU frequency when edge server executes the task received from , where . Then, the time for edge server processing the request of task from can be calculated as

After completing the computation task, edge server needs to send back the computation results to . The time delay for this transmission phase is expressed as

Then, the total time delay for offloading task to an edge server without service migration can be computed as

Assume that the energy consumption per second for during the idle duration and the receiving phase is denoted as and , respectively. In total, the energy consumption at for executing the task on the MEC server without service migration can be formulated as
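Under the same assumed notation, the MEC execution delay, the result-return delay, the total delay of offloading without migration, and the MU-side energy (transmitting, idling, receiving) can be sketched as follows; $f_{i,k}$ is the CPU frequency that server $k$ allocates to MU $i$, $d_{i,j}^{\mathrm{out}}$ the result size, $r_{k,i}$ the downlink rate, and $\kappa_i^{\mathrm{id}}$, $\kappa_i^{\mathrm{rx}}$ the idle and receiving powers:

\[
t_{i,j,k}^{\mathrm{exe}} = \frac{c_{i,j}}{f_{i,k}}, \qquad
t_{i,j,k}^{\mathrm{down}} = \frac{d_{i,j}^{\mathrm{out}}}{r_{k,i}}, \qquad
t_{i,j,k}^{\mathrm{off}} = t_{i,j,k}^{\mathrm{up}} + t_{i,j,k}^{\mathrm{exe}} + t_{i,j,k}^{\mathrm{down}},
\]
\[
e_{i,j,k}^{\mathrm{off}} = p_{i,k}\, t_{i,j,k}^{\mathrm{up}} + \kappa_i^{\mathrm{id}}\, t_{i,j,k}^{\mathrm{exe}} + \kappa_i^{\mathrm{rx}}\, t_{i,j,k}^{\mathrm{down}} .
\]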

3.2.3. Offloading with Service Migration

If moves from the cell surrounding to the current cell surrounding before the computing task is completed, one more migration phase is needed to transmit the computation results from edge server to the current edge server . The time delay of this migration phase is determined by the transmission rate between these two edge servers as follows: where is a fixed value because of the optical fiber communications between edge servers. Then, forwards the computation results to . Similar to (6), the transmission time can be given as where the transmission rate can be calculated in a similar way to (3). Thus, the total time delay for offloading with service migration can be calculated as

Similar to (8), the energy consumption at for executing the task by offloading with service migration can be calculated as
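Likewise, a hedged sketch of the migration case: the results travel from the original server $k$ to the current server $k'$ over a wired link of fixed rate $r_{k,k'}$ and are then forwarded to the MU at the downlink rate $r_{k',i}$ (again, all symbols are assumptions):

\[
t_{i,j}^{\mathrm{mig}} = \frac{d_{i,j}^{\mathrm{out}}}{r_{k,k'}}, \qquad
t_{i,j,k,k'}^{\mathrm{off,m}} = t_{i,j,k}^{\mathrm{up}} + t_{i,j,k}^{\mathrm{exe}} + t_{i,j}^{\mathrm{mig}} + \frac{d_{i,j}^{\mathrm{out}}}{r_{k',i}},
\]
\[
e_{i,j,k,k'}^{\mathrm{off,m}} = p_{i,k}\, t_{i,j,k}^{\mathrm{up}} + \kappa_i^{\mathrm{id}}\big(t_{i,j,k}^{\mathrm{exe}} + t_{i,j}^{\mathrm{mig}}\big) + \kappa_i^{\mathrm{rx}}\, \frac{d_{i,j}^{\mathrm{out}}}{r_{k',i}} .
\]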

3.3. Problem Formulation

For convenience, we define a binary variable that equals 1 if of is offloaded to and 0 otherwise. When of is offloaded to , we define to be 1 if service migration from to is necessary and 0 otherwise. Thus, the time delay for completing the task for can be expressed as

The total time delay for completing all tasks of all users can be given by

In summary, the problem of minimizing the total time delay by optimizing the transmission power matrix , the offloading matrix , and the migration matrix can be formulated as follows: where

Constraint is the computing capacity limitation of the edge server, and constraint is the power limitation, with being the maximum tolerable transmission power at . Constraints and indicate that and are binary variables. Constraint requires that each task be either offloaded to an edge server in its entirety or executed locally as a whole. Constraint ensures that service migration occurs between one-hop edge servers. Constraint is the energy constraint at , with being the maximum available energy of . Due to the integer constraints, the problem is a mixed-integer programming problem and is NP-hard. In order to reduce the complexity of solving this NP-hard problem, in the following sections, we propose a mobility-aware and sociality-associate affinity propagation clustering algorithm and then design an offloading scheme based on the clustering results.
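For readability, a condensed sketch of the optimization problem described above, written with the assumed indicator variables $x_{i,j,k}$ (offloading) and $y_{i,j,k}$ (migration); constants such as $F_k$, $p_i^{\max}$, and $E_i^{\max}$ are placeholders for the capacity, power, and energy limits named in the constraints:

\[
t_{i,j} = \Big(1 - \sum_{k} x_{i,j,k}\Big) t_{i,j}^{l}
 + \sum_{k} x_{i,j,k}\Big[(1 - y_{i,j,k})\, t_{i,j,k}^{\mathrm{off}} + y_{i,j,k}\, t_{i,j,k,k'}^{\mathrm{off,m}}\Big],
\qquad
T = \sum_{i}\sum_{j} t_{i,j},
\]
\[
\min_{\mathbf{P},\,\mathbf{X},\,\mathbf{Y}} \; T
\quad \text{s.t.} \quad
\sum_{i}\sum_{j} x_{i,j,k} f_{i,k} \le F_k,\;\;
0 \le p_{i,k} \le p_i^{\max},\;\;
\sum_{k} x_{i,j,k} \le 1,\;\;
x_{i,j,k}, y_{i,j,k} \in \{0,1\},\;\;
\sum_{j} e_{i,j} \le E_i^{\max},
\]
with an additional constraint restricting migration to one-hop neighboring servers.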

4. Mobility-Aware and Sociality-Associate Clustering Algorithm (MASACA)

As shown in Figure 3, an MU may meet different edge servers during its movement toward its destination. One way to decrease the complexity of solving the problem is to cut down the number of candidate edge servers. In this section, we propose an affinity propagation- (AP-) based clustering algorithm that utilizes the mobility prediction and sociality association results.

An MU may decide to randomly offload its tasks to an adjacent edge server at the moment the tasks arrive. In this case, service migration may occur due to the movement of the MU. When the server completes the computing tasks, the MU may have moved to a remote position and the connection with this server may be lost, leading to the recalculation of tasks and additional overhead. To avoid excessive service migration and decrease the loss rate of computation results, we use the Kalman filtering method for trajectory prediction. According to the prediction results, the mobile user can select the best timing to offload its tasks to a better edge server.

4.1. Mobility Prediction

The movement of a mobile user is considered as a traditional dynamic system satisfying the following state equation at time : where is the state vector, equal to , representing the user’s real-time position and velocity in a two-dimensional space; is the transition matrix; is the input signal from the previous time step, namely, the moving velocity of the user; is the input matrix; is the process noise, usually assumed to be Gaussian with ; and is the process noise covariance matrix. The observation model is denoted as where is the observation vector, is the observation matrix, and is the observation noise, also modeled as Gaussian with . is the observation noise covariance matrix. Note that and describe deviations, such as collision, slippage, and friction, that are caused by objective conditions. Thus, in practice, the process noise covariance and the observation matrix might change within each time step and are assumed to be temporally and spatially independent of each other.
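In standard Kalman filter notation (an assumption, since the paper’s own symbols are not shown here), the state and observation models read:

\[
\mathbf{x}_t = \mathbf{A}\,\mathbf{x}_{t-1} + \mathbf{B}\,\mathbf{u}_{t-1} + \mathbf{w}_{t-1}, \qquad \mathbf{w}_{t-1} \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}),
\]
\[
\mathbf{z}_t = \mathbf{H}\,\mathbf{x}_t + \mathbf{v}_t, \qquad \mathbf{v}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R}),
\]

where $\mathbf{x}_t = [x_t, y_t, \dot{x}_t, \dot{y}_t]^{\mathsf{T}}$ stacks the two-dimensional position and velocity.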

According to the above analysis, let denote the estimate of when the measurement of is obtained. The dynamic estimation process includes two steps: (i) the time update phase follows the equations below, where denotes the predicted value of a variable, and represent the matrix transpose and inverse, respectively, and is the error covariance matrix; (ii) the measurement update phase is as follows, where matrix is the Kalman filtering gain and represents the identity matrix.
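The two phases correspond to the standard Kalman filter recursions, restated here for reference in the same assumed notation. Time update:

\[
\hat{\mathbf{x}}_t^{-} = \mathbf{A}\,\hat{\mathbf{x}}_{t-1} + \mathbf{B}\,\mathbf{u}_{t-1}, \qquad
\mathbf{P}_t^{-} = \mathbf{A}\,\mathbf{P}_{t-1}\,\mathbf{A}^{\mathsf{T}} + \mathbf{Q}.
\]

Measurement update:

\[
\mathbf{K}_t = \mathbf{P}_t^{-}\mathbf{H}^{\mathsf{T}}\big(\mathbf{H}\,\mathbf{P}_t^{-}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\big)^{-1}, \qquad
\hat{\mathbf{x}}_t = \hat{\mathbf{x}}_t^{-} + \mathbf{K}_t\big(\mathbf{z}_t - \mathbf{H}\,\hat{\mathbf{x}}_t^{-}\big), \qquad
\mathbf{P}_t = \big(\mathbf{I} - \mathbf{K}_t\,\mathbf{H}\big)\,\mathbf{P}_t^{-}.
\]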

Thus, the Kalman filtering prediction algorithm starts with an initial predicted state and a certain error covariance, and then executes the time update and the measurement update iteratively until a certain termination condition is satisfied. Finally, we obtain the predicted trajectory of each mobile user, including the position and velocity information at each time slot.
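The following Python sketch illustrates the iteration just described under a constant-velocity model; the time step, the noise covariances, and the function name are illustrative assumptions, not values from the paper, and the control input is omitted since the velocity is carried in the state.

  import numpy as np

  # Minimal sketch of the Kalman-filter trajectory estimation described above.
  # The constant-velocity model, the time step dt, and the covariances Q and R
  # are illustrative assumptions, not the paper's exact parameters.
  def kalman_track(observations, dt=1.0, q=0.01, r=1.0):
      """Filter noisy 2-D position observations; return [px, py, vx, vy] estimates."""
      A = np.array([[1.0, 0.0, dt, 0.0],   # state transition (constant velocity)
                    [0.0, 1.0, 0.0, dt],
                    [0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
      H = np.array([[1.0, 0.0, 0.0, 0.0],  # only the position is observed
                    [0.0, 1.0, 0.0, 0.0]])
      Q = q * np.eye(4)                    # process noise covariance (assumed)
      R = r * np.eye(2)                    # observation noise covariance (assumed)
      x = np.array([observations[0][0], observations[0][1], 0.0, 0.0])
      P = np.eye(4)                        # initial error covariance
      estimates = []
      for z in observations:
          x = A @ x                        # time update (prediction)
          P = A @ P @ A.T + Q
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # measurement update
          x = x + K @ (np.asarray(z) - H @ x)
          P = (np.eye(4) - K @ H) @ P
          estimates.append(x.copy())
      return estimates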

4.2. Sociality Association

In this subsection, we consider the influence of the sociality association on choosing appropriate edge servers for offloading tasks. Since there is an acquaintance effect, we mainly take historical connections into account and use the Renyi entropy to describe the selection preference. A higher entropy represents higher uncertainty about a specific selection. Specifically, the Renyi entropy is denoted as where is a general parameter of the Renyi entropy. Different values of correspond to different entropies; for example, when , it reduces to the Shannon entropy as . And is the probability that communicates with . A smaller value of means a higher possibility that will offload its tasks to the server with the maximum value of .
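For reference, the Renyi entropy of order $\beta$ over the selection probabilities (with $p_{i,k}$ the probability that MU $i$ communicates with server $k$; both symbols are assumed here) is

\[
H_{\beta}(u_i) = \frac{1}{1-\beta}\,\log \sum_{k=1}^{M} p_{i,k}^{\beta}, \qquad
\lim_{\beta \to 1} H_{\beta}(u_i) = -\sum_{k=1}^{M} p_{i,k} \log p_{i,k},
\]

which recovers the Shannon entropy in the limit $\beta \to 1$.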

Let represent the social association matrix between edge servers and mobile users, where indicates that communicates with during . For example, is the association vector for , showing that has connected with at , with at and , and with at , and has never connected with . The set of requesting users is defined for edge server . Let denote the communication frequency between and , e.g., . Define as the total number of times communicates with all edge servers. Thus, the probability is calculated as

Similarly, , which denotes the social association between and all users, can be easily obtained and calculated as
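A small Python sketch of this step, computing the contact frequencies, the resulting probabilities, and the Renyi entropy from a binary connection-history tensor; the array layout and parameter names are assumptions for illustration.

  import numpy as np

  # Illustrative sketch of the sociality-association computation described above.
  # `history` is a (num_users x num_servers x num_slots) binary array: entry 1
  # means user i communicated with server k in slot t (assumed layout).
  def association_probabilities(history):
      """Per-user probability of communicating with each edge server."""
      freq = history.sum(axis=2).astype(float)        # contact counts n_{i,k}
      totals = freq.sum(axis=1, keepdims=True)        # total contacts per user
      return np.divide(freq, totals, out=np.zeros_like(freq), where=totals > 0)

  def renyi_entropy(p, beta=2.0, eps=1e-12):
      """Renyi entropy of one user's server-selection distribution p."""
      p = p[p > eps]
      if np.isclose(beta, 1.0):                       # Shannon limit
          return float(-np.sum(p * np.log(p)))
      return float(np.log(np.sum(p ** beta)) / (1.0 - beta))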

4.3. AP Clustering

To accelerate the determination of connections with appropriate edge servers, we propose the mobility-aware and sociality-associate clustering algorithm. Affinity propagation- (AP-) based clustering is employed because it does not require the number of clusters to be specified in advance. To strike a balance between accuracy and complexity, the update window is set to slots.

4.3.1. Similarity Definition

The AP algorithm takes the similarity, , as a measure describing the resemblance between nodes: when node has a higher similarity with than with . In the mobility model, the similarity tends to be higher when two nodes are closer to each other. Thus, we define the similarity as the reciprocal of the average distance between nodes over the future slots according to the mobility prediction results.

Thus, we have

The smaller the distance, the larger the similarity, and hence the higher the possibility that and belong to the same cluster. Assume that the distance between any two different nodes is larger than 1; thus, the maximum value of the similarity is set to 1 such that . Note that refer to the subscripts of mobile users and are the subscripts of edge servers.
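One plausible formalization of this similarity, with $\hat{\mathbf{p}}_i(t)$ the predicted position of node $i$ at future slot $t$ (fixed positions for edge servers) and $W$ the prediction window (assumed symbols), is

\[
d(i,k) = \frac{1}{W} \sum_{t=1}^{W} \big\| \hat{\mathbf{p}}_i(t) - \hat{\mathbf{p}}_k(t) \big\|_2, \qquad
s(i,k) = \frac{1}{d(i,k)} \le 1 .
\]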

4.3.2. AP-Based Mobility-Aware and Sociality-Associate Clustering Algorithm

During the clustering, we generally use subscripts to denote all nodes in the system. In addition to the similarity measures, there are two other important messages transferred between nodes [28, 29]: (i) the responsibility, , is sent from node to candidate exemplar node , indicating how well node is suited to serve as the exemplar of node ; (ii) the availability, , is sent from candidate exemplar node to node , reflecting how appropriate it would be for node to choose node as its exemplar. The initial values are given according to the social association, i.e., for the case of and , calculate and ; otherwise, the initial value is , e.g., .

The update formulas for responsibility and availability are stated as

In order to avoid numerical oscillations, the AP algorithm introduces a convergence coefficient when updating the messages. Each message is set to times its value from the previous iteration plus times the current update value. The attenuation (damping) coefficient is a number between 0 and 1; in our simulation, is used.

Availabilities and responsibilities are combined to select exemplars: the maximum value of identifies the exemplar. Upon convergence, each node’s cluster head (CH) is defined by
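For reference, the standard affinity propagation updates from [28, 29], written in the (assumed) notation of this section, together with the damping rule and the exemplar decision:

\[
r(i,k) \leftarrow s(i,k) - \max_{k' \neq k}\big\{ a(i,k') + s(i,k') \big\},
\]
\[
a(i,k) \leftarrow \min\Big\{ 0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max\{0, r(i',k)\} \Big\}, \qquad
a(k,k) \leftarrow \sum_{i' \neq k} \max\{0, r(i',k)\},
\]
\[
m^{\mathrm{new}} = \lambda\, m^{\mathrm{old}} + (1-\lambda)\, m^{\mathrm{update}}, \qquad
\mathrm{CH}(i) = \arg\max_{k}\big\{ a(i,k) + r(i,k) \big\} .
\]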

A detailed AP-based clustering algorithm is shown in Algorithm 1.

Input:
  •The user set and edge servers set ;
  •The positions of edge servers ;
  •History connection relationship between and ;
Output:
  •: the partition of all nodes into clusters;
  •: the set of cluster centers;
 1: function KALMANFILTERING
 2:  Initialize and ;
 3:  for Each do
 4:   Time Update according to (19);
 5:   Measurement Update according to (20);
 6:  end for
 7:  Output mobility prediction results:
 8:  .
 9: end function
 10: function PROBABILITYCOMPUTATION
 11:  Build matrix and ;
 12:  Calculate , , then compute according to (22);
 13:  Calculate , , then compute according to (23);
 14:  Output and .
 15: end function
 16: function APCLUSTERING,,,
 17:  Calculate the distances based on (24);
 18:  Calculate the similarities based on (25);
 19:  Initialize to be zero;
 20:  Initialize according to the type of nodes;
 21:  Update the responsibility based on (26);
 22:  Update the availability based on (27);
 23:  ;
 24:  Calculate ;
 25:  ;
 26:  Output and
 27: end function
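The core AP step of Algorithm 1 can be sketched in Python as below; this is a generic damped affinity propagation implementation (after [28, 29]), with the similarity matrix construction, the damping value, and the iteration count treated as assumptions rather than the paper’s exact settings.

  import numpy as np

  # Compact sketch of the AP clustering step of Algorithm 1: standard damped
  # affinity propagation.  Similarities, damping, and iteration count are
  # illustrative assumptions.
  def ap_cluster(S, damping=0.9, max_iter=200):
      """S: (n x n) similarity matrix; S[i, k] measures how well k suits as the
      exemplar of i, with the diagonal holding the preferences.
      Returns each node's exemplar (cluster head) index."""
      n = S.shape[0]
      R = np.zeros((n, n))                  # responsibilities r(i, k)
      A = np.zeros((n, n))                  # availabilities  a(i, k)
      for _ in range(max_iter):
          # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
          AS = A + S
          idx = np.argmax(AS, axis=1)
          first = AS[np.arange(n), idx]
          AS[np.arange(n), idx] = -np.inf
          second = AS.max(axis=1)
          R_new = S - first[:, None]
          R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
          R = damping * R + (1 - damping) * R_new
          # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
          Rp = np.maximum(R, 0)
          np.fill_diagonal(Rp, R.diagonal())      # keep r(k,k) itself in the sum
          A_new = Rp.sum(axis=0)[None, :] - Rp
          diag = A_new.diagonal().copy()          # a(k,k) is not clipped at zero
          A_new = np.minimum(A_new, 0)
          np.fill_diagonal(A_new, diag)
          A = damping * A + (1 - damping) * A_new
      # each node's cluster head maximizes a(i,k) + r(i,k)
      return np.argmax(A + R, axis=1)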

5. QoS-Aware Offloading Scheme

According to the AP clustering results, each MU has a high probability of offloading its tasks to the edge servers within the same cluster. When designing the clustering algorithm, we considered the influence of historical connections and mobility prediction. To realize the objective given in , we take two more factors into account, namely, the service ratio and the channel conditions between MUs and edge servers.

Although making edge servers execute all computation tasks would overload them, edge servers are expected to serve as many mobile users as possible. In addition, in most cases, users prefer to communicate with edge servers that have more available resources rather than ones trapped in resource competition. Therefore, as the available resources of edge servers decrease, their corresponding social strength may decrease accordingly. To visually evaluate the quality of computation offloading based on AP-MASACA, we define a QoS parameter, the average service ratio, as follows: where represents the service ratio of the -th edge server, represents the actual number of mobile devices attached to the -th edge server (assuming that every mobile user has only one device), and represents the actual number of mobile devices served by the edge server.
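One plausible reading of this definition, with $N_k^{\mathrm{served}}$ and $N_k^{\mathrm{attached}}$ denoting the numbers of devices served by and attached to server $k$ (assumed symbols) and $M$ the number of servers, is

\[
\eta_k = \frac{N_k^{\mathrm{served}}}{N_k^{\mathrm{attached}}}, \qquad
\bar{\eta} = \frac{1}{M} \sum_{k=1}^{M} \eta_k .
\]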

By applying the MASACA strategy, the system cost of computation offloading can be decreased. When a signal propagates through wireless channels, it undergoes deleterious effects mainly characterized by path loss, multipath fading, and shadowing, and the computation links between mobile users and edge servers may be interrupted. The SINR outage probability is an important channel quality measure for communication links operating over composite fading/shadowing channels; it is defined as the probability that the received SNR drops below a certain threshold for a given average SNR [30]:

Note that computing requires the SNR distribution. Considering that the SNR is affected by the link distance, the SNR can be obtained as follows: where represents the transmission power, represents the path loss, and represents the shadowing gain factor. And represents the noise power of the additive white Gaussian noise (AWGN) in a wireless communication environment.
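In generic form (the paper’s exact closed-form expression over the composite lognormal/gamma model in [31] is not reproduced here), the outage probability and the distance-dependent SNR can be sketched as

\[
P_{\mathrm{out}}(\gamma_{\mathrm{th}}) = \Pr\{\gamma < \gamma_{\mathrm{th}}\} = \int_{0}^{\gamma_{\mathrm{th}}} f_{\gamma}(x)\,\mathrm{d}x, \qquad
\gamma = \frac{P_t\, \xi}{PL(d)\, \sigma_n^{2}},
\]

where $P_t$, $PL(d)$, $\xi$, and $\sigma_n^{2}$ stand for the transmission power, path loss, shadowing gain, and AWGN power, respectively (assumed symbols).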

Furthermore, according to the composite lognormal/gamma PDF introduced in [31], we can obtain the final expression of the outage probability as follows:

We define the connection ratio to visually describe the performance in maintaining sustainable connections with edge servers in different cases:

6. Experiments and Result Analysis

In this section, we conduct simulation experiments to verify the efficiency of our proposed offloading scheme by comparing it with an existing method employed in [32, 33], where mobile users randomly offload computation tasks to edge servers they have discovered.

In our simulation, MEC servers are uniformly distributed within an area with a side length of 200 m. The simulation settings are listed in Table 2. Under different numbers of MEC servers and different moving speeds of mobile users, we conduct experiments to show the performance in terms of the time delay, the service ratio, and the connection ratio. The results are averaged over 1000 independent runs.

Figure 4 shows the influence of the number of mobile users on the average time delay. We can see that the average offloading time delay increases with the expansion of mobile users’ scale for each method. This is an intuitive result because the computation capability of MEC servers is limited and more users need to wait for a specific period before being serviced. For the same number of mobile users, our proposed scheme obtains a lower average delay, outperforming the original random offloading method. This demonstrates the effectiveness of our proposed scheme.

Figure 5 shows that the number of mobile users serviced by edge servers increases with the expansion of the mobile users’ scale. Compared with the case in which mobile users offload computation tasks randomly, applying AP-MASACA brings more chances for mobile users to be served. At the same time, the system performance is not entirely stable as the number of mobile users increases, which means that the available resources of some edge servers may not be fully utilized and the offloading strategy can be further optimized.

Figure 6 shows the connection ratio under different velocities in the single-user scenario. We can see that the connection ratio of each method declines with the increase in the number of computation tasks. This is because the competition for resources becomes more intense as the task amount increases. Meanwhile, the decline rate of our proposed scheme is slower than that of the original random case. For the same number of computation tasks, our proposed scheme has a higher connection ratio than the original random method. This demonstrates that our proposed scheme can improve the sustainability of connections between MUs and MEC servers. Moreover, the larger the number of MEC servers, the higher the connection ratio. By comparing the results shown in Figures 6(a) and 6(b), we can see that the connection ratio decreases with increasing speed. This is because the distance between the MU and the selected MEC servers changes faster at higher speeds.

Figure 7 shows the connection ratio in the multiuser scenario. As the number of MEC servers increases, the system performance improves, and the performance in the multiuser scenario is better than that in the single-user scenario. Figure 7 also presents the connection ratio of multiple users when the moving speed is 80 km/h. The simulation results show that the connection ratio is improved by the proposed method, especially when mobile users move at high speed.

7. Conclusion

In this paper, we design a task offloading strategy that considers mobility, sociality, and QoS factors. First, we employ the Kalman filtering algorithm to predict mobile users’ trajectories; then, we compute the sociality association parameter between mobile users and edge servers based on their historical connection relationships. By applying the mobility prediction results and the sociality association parameter as input parameters of the affinity propagation algorithm, we propose the mobility-aware and sociality-associate clustering algorithm (MASACA). Using the clustering results as the candidate edge server set, we devise a QoS utility function for a given edge server based on the available service ratio and the quality of the channel link. Finally, based on the designed QoS utility function, we select the candidate edge server with the highest QoS utility as the target edge server for task offloading. Numerical simulation results demonstrate that our offloading strategy can enhance the data processing capability of power-constrained networks and cut down the computation delay.

Data Availability

In the simulation section, we use the social network data “usense” provided at http://www.crawdad.org/copelabs/usense/20170127/index.html. In the data set, we employ the distances between users, the encounter durations, and other related data as the input parameters of our proposed clustering algorithm.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities under Grant 2019JBZ001 and the Key Project of the National Natural Science Foundation of China under Grant 61931001.