Abstract

With the emergence and development of communication technology and the new computing paradigm of mobile edge computing (MEC), fast response and ultralow latency face ever higher requirements. Nevertheless, owing to the low penetration and coverage of MEC networks, it is difficult to guarantee the large-scale connection needs of all user groups in Industry 4.0. In addition, user mobility is closely related to the network connection between edge nodes (ENs) and mobile devices (MDs) in Industry 4.0: the frequent movement of MDs interrupts the computation offloading process and destabilizes the channel, which degrades network performance. Hence, this paper constructs an edge network environment for MEC-based industrial internet of things (IIoT) and tackles the aforementioned problem by maximizing the utility of the entire system, jointly considering energy consumption, time delay, and computing resource cost. To solve this problem, this paper proposes a mobility-aware offloading and resource allocation scheme (MAORAS). The scheme first employs the Lagrange multiplier method to solve the computing resource allocation problem; then, a noncooperative game between MDs is established and the existence of a Nash equilibrium (NE) is proven. Simulation results demonstrate that the MAORAS optimization scheme significantly improves system utility.

1. Introduction

With the exponential growth of mobile application data in IIoT, there is an urgent demand for real-time response, which makes it difficult for MDs with limited battery and computing resources to meet the low-latency and real-time user-experience requirements of resource-intensive tasks [1, 2]. MEC has emerged to meet this demand, sinking rich communication and computing resources down to industrial devices. It extends the computing power of MDs while greatly reducing the burden on the backhaul links of cloud computing centers [3, 4], which means that computation-intensive tasks, including object detection, face recognition, and model training, can be offloaded to MEC servers for processing.

Nevertheless, the penetration and coverage of MEC networks are relatively low, and it is difficult to guarantee the large number of connections demanded by dense user groups. The limited computing resources of an MEC server may also increase task execution delay; therefore, it is critical to optimize computing resources and offloading mechanisms in order to decrease communication costs and increase system utility [5].

Multiuser resource allocation and offloading optimization in MEC have always been research hotspots. In terms of optimization goals, the literature mainly falls into three categories: one aims to minimize time delay, another to reduce energy consumption (sometimes jointly with delay), and the third to maximize system utility. With regard to solutions, there are online optimization algorithms based on Lyapunov optimization, offloading strategies based on Q-learning, resource allocation solutions based on genetic algorithms, and so on.

Optimizing offloading decisions and the corresponding resource allocation is essentially a decision-making problem among multiple participants, i.e., MDs and MEC servers, and sometimes cloud centers, so game theory is widely utilized to make offloading decisions in MEC scenarios. One characteristic of MDs in edge computing networks is mobility, which makes the network highly dynamic and the optimization problem more complex. Generally, when a mobile user moves from the original cell to a neighboring cell, a handover occurs, contributing additional latency and even packet loss, which triggers retransmissions and in turn further increases latency. Current game-theory-based MEC studies have the following shortcomings: (1) most of the literature fails to consider the task migration caused by user mobility; (2) only single scenarios are considered, not multiuser, multiserver scenarios; and (3) game-theory-based computation offloading and allocation schemes are mostly static allocations.

This paper studies the computation offloading and resource allocation problems in MEC networks under multiuser mobile scenarios to address the issues above. The main contributions of the article are as follows:
(i) The mobility of each user is characterized by the user's stay time in the area, and the residence time is assumed to follow an exponential distribution. Energy consumption, time delay, and computing cost are comprehensively considered, and resource allocation and offloading decisions are jointly optimized to maximize system utility.
(ii) The system utility maximization problem is formulated as an NP-hard problem and further divided into two subproblems, which are solved by the Lagrange multiplier method and game theory, respectively.
(iii) The performance of the mobility-aware offloading and resource allocation scheme (MAORAS) is evaluated by comparing it with other baseline algorithms. Simulation results show that MAORAS achieves better performance.

This paper is organized as follows: Section 2 introduces the related work, whereas Section 3 presents the system framework and formulates the optimization problem. In Section 4, the problem solution and the proposed algorithm are presented. Experiments and simulations are performed in Section 5, and finally, the paper is summarized in Section 6.

2. Related Work

One of the most significant characteristics of MEC networks is that MDs move with their users rather than remaining stationary, which makes the communication network unstable. At the same time, frequent link disconnections cause users' task transfers to fail, resulting in wasted energy, increased latency, and reduced user satisfaction.

Existing research has focused on resource allocation according to user mobility. Tang et al. proposed a DQN-based resource allocation scheme to solve the dynamic service prediction problem in a high-mobility environment [6]. For each edge data center (EDC), a vector autoregressive (VAR) model was used to predict the future load in order to allocate computation resources. This method uses the historical load time series of an EDC and its neighbors to predict the load, make decisions, and determine management measures based on the estimated resource usage of each EDC, improving EDC resource utilization as much as possible while guaranteeing user service quality. For problems such as how many resources to allocate in an EDC [7], where to place different application services, and when to activate various resource management operations, a location-aware workload prediction method based on LSTM was proposed. The method predicts the workload of an edge data center over a short horizon by exploiting the correlation of workload changes among nearby data centers. To address road traffic congestion, Shinkuma et al. proposed a new predictive road-traffic information delivery system architecture that ensures real-time delivery and prediction accuracy without overloading the computational and network resources [8]. The system, in which predicted road traffic information is forwarded to users, was validated using numerical tests on real datasets and realistic network simulators.

Existing machine learning methods each have their own advantages for predicting user mobility [9]. Subramaniam and Kaur explored the operational efficiency of various machine learning methods [10], such as the multilayer perceptron, random forest, and support vector machine, running on mobile edge computing platforms. Ojima and Fujii evaluated the stability of the communication link between users and edge computing nodes through factors such as transmission power, path loss, shadowing gain, standard deviation, and received power [11]. A Kalman filter is used to predict user mobility and select the nearest edge computing node to complete the delivery of task results. Nasrin and Xie proposed a new handoff scheme for the increased communication delay caused by user mobility [12]. In their study of mobility-based offloading, when a user leaves cellular coverage, the MD sends a report to the base station that contains relays, cell identities, and user speed from neighboring macro- and micro-base stations. The user's priority is then assigned based on three parameters: user speed, the delay requirements of the computing task, and the resource availability of mobile edge computing nodes. Finally, the base station decides whether to migrate the task to a mobile edge computing node or to the remote cloud, based on resource availability.

However, the above research methods lack effective network environment awareness techniques for MEC networks. Moreover, they do not consider how the offloading overhead and migration cost of user equipment change as the user's location changes.

In a wireless communication network with multiple users, the shared communication and computing resources are contended for, and game theory can effectively solve the decision-making problem of multiple participants over a common objective, so that each user satisfies its own communication needs to the greatest extent while the performance of the network is improved. The device-selection problem of computation offloading was studied in [13], where a potential game for edge computing is constructed in which each device selfishly optimizes its own payoff; the computation offloading strategy optimization problem is studied and a game-based method is developed to tackle this complex problem. Reference [14] proposed a dynamic-game-based resource allocation scheme to allocate resources to various devices, and blockchain technology is utilized to record the whole resource transaction process to ensure confidentiality and anonymity.

In the scenario of a NOMA-based MEC network, Wang et al. considered users and MEC servers as leaders and followers, respectively, and established a Stackelberg game [15]. Each user minimizes the total energy consumption of task offloading and local computing by optimizing the task allocation coefficient and transmit power, while the MEC server allocates computing resources to process offloaded tasks with the ultimate goal of minimizing execution delay. For the interference-aware multiuser computation offloading problem in MEC networks [16], a Nash-Q-learning computation offloading algorithm based on Nash equilibrium and reinforcement learning was designed; simulation results show that the algorithm effectively reduces system overhead under different system parameters. The performance of MEC networks is also significantly impacted by frequency division and intra-area interference [17], where user computation offloading, CPU capacity adjustment, transmit power regulation, and network interference were treated as a game. In this game the utility of each mobile user is maximized, and the game was proven to be an exact potential game with a Nash equilibrium. Simulation results show that the algorithm effectively improves the performance of the MEC system.

3. System Architecture and Problem Formulation

3.1. System Architecture

Consider a multiuser, multi-base-station edge computing network system, as illustrated in Figure 1. The scene consists of a macro base station (MBS) and multiple small base stations (SBSs). An MEC server is deployed on each SBS, providing powerful computing capability for the SBS. Each user moves within the coverage of a base station. The sets of users and of edge servers are defined, and the computation offloading strategy of all users is expressed as a vector of binary decisions: a value of 1 indicates that the task of a user is offloaded to the MEC server, and 0 means the task is processed locally. Each MD is described by the data size of its task, its offloading decision variable, the computing resources required to complete the task, and the time the user stays in the area.
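Because the original equation images are missing from this version, the following notation sketch restates the model with explicit symbols; all symbols below are assumed placeholders rather than the authors' original notation:
\[ \mathcal{N} = \{1,\dots,N\} \ \text{(MDs)}, \qquad \mathcal{M} = \{1,\dots,M\} \ \text{(MEC servers)}, \]
\[ \text{MD } i: \ \left(d_i,\, a_i,\, C_i,\, t_i\right), \qquad a_i \in \{0,1\}, \]
where \(d_i\) is the task data size, \(a_i\) the offloading decision (\(a_i=1\): offload to the MEC server, \(a_i=0\): local processing), \(C_i\) the CPU cycles required to complete the task, and \(t_i\) the user's stay time in the area.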

3.2. Local Computing Model


When the task of the i-th MD is executed locally, the local computing delay is expressed as follows, where one parameter is the number of CPU cycles required per unit of data and the other is the computing capacity of the i-th MD.
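A plausible reconstruction of the dropped equation under the standard local-computing model, where \(C_i = c\,d_i\) (total required CPU cycles, with \(c\) cycles per unit of data) and \(f_i^{\mathrm{loc}}\) (local CPU frequency) are assumed symbols:
\[ T_i^{\mathrm{loc}} = \frac{C_i}{f_i^{\mathrm{loc}}} = \frac{c\, d_i}{f_i^{\mathrm{loc}}}. \]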

To evaluate the energy consumed by an MD during local processing, the standard energy consumption model is used, in which the energy consumption coefficient depends on the chip architecture. Therefore, when the task is performed locally, the energy consumption of the i-th MD is as follows.
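A hedged sketch of the energy model described above, using the common \(\kappa f^2\)-per-cycle approximation; \(\kappa\) is the chip-dependent coefficient (an assumed symbol) and the other symbols follow the assumed notation above:
\[ E_i^{\mathrm{loc}} = \kappa \left(f_i^{\mathrm{loc}}\right)^{2} C_i. \]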

3.3. MEC Computing Model

When an MD chooses to offload its task to the MEC server, the delay includes the uplink transmission delay and the MEC server processing delay. Since the size of the computing result is much smaller than the input data, the return delay of the result can be ignored. According to the Shannon formula, the uplink transmission rate can be calculated as follows, where the quantities involved are the channel bandwidth of the base station, the transmit power of the i-th MD, the channel gain between the i-th MD and the base station, the interference from other users on the channel, and the noise power.
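A plausible reconstruction of the Shannon-rate expression described above, with \(B\), \(p_i\), \(h_i\), \(I_i\), and \(\sigma^2\) as assumed symbols for the bandwidth, transmit power, channel gain, interference, and noise power, respectively:
\[ r_i = B \log_2\!\left(1 + \frac{p_i h_i}{I_i + \sigma^{2}}\right). \]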

The uplink transmission delay between the i-th MD and the MEC server can be expressed as follows.
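Under the same assumed notation, the uplink transmission delay is presumably the task data size over the uplink rate:
\[ T_i^{\mathrm{tr}} = \frac{d_i}{r_i}. \]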

After the task is uploaded to the MEC server, the computing delay of task processing can be expressed as follows, where the divisor is the computing resource allocated to the i-th MD by the MEC server.
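A hedged reconstruction of the MEC processing delay, with \(f_i\) denoting the computing resource the MEC server allocates to the \(i\)-th MD (assumed symbol):
\[ T_i^{\mathrm{mec}} = \frac{C_i}{f_i}. \]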

In summary, the total computation offloading delay can be expressed as follows:
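Accordingly, under the assumed notation, the total offloading delay is presumably the sum of the two components above:
\[ T_i^{\mathrm{off}} = T_i^{\mathrm{tr}} + T_i^{\mathrm{mec}} = \frac{d_i}{r_i} + \frac{C_i}{f_i}. \]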

In terms of energy consumption, only the energy consumption of the user in the transmission link is considered.
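A minimal sketch of the transmission energy, assuming it is the transmit power held over the upload duration (assumed notation as above):
\[ E_i^{\mathrm{tr}} = p_i\, T_i^{\mathrm{tr}} = \frac{p_i\, d_i}{r_i}. \]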

3.4. User Mobility Model

In the user mobility model, we assume that the stay time of each user in the area obeys the probability density function given below, whose parameter is the average stay time of the i-th MD; due to the heterogeneity of MDs, the average stay time differs between users, and we assume it obeys a Gaussian distribution.
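A plausible form of the exponential stay-time distribution described above, with \(\bar{t}_i\) the average stay time of the \(i\)-th MD (assumed symbol):
\[ f(t_i) = \frac{1}{\bar{t}_i}\, e^{-t_i/\bar{t}_i}, \qquad t_i \ge 0. \]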

3.5. Problem Formulation

In this section, a utility function is established that considers the three indicators of delay, energy consumption, and computing resource cost. We use the saved time and energy consumption as the benefit of the MD and use the computing resource cost and migration cost as the overhead. Benefit minus cost is the utility function of the MD.

Due to the user's mobility, the stay time of the user in the area also affects the user's offloading decision and utility function. Therefore, the utility function is discussed in three situations according to the user's stay time.

Case 1. The user stays in the area for only a short time and leaves before the upload of the task to the MEC server completes, i.e., the stay time is shorter than the uplink transmission delay. In this case the user cannot complete the transmission of the entire computing task, so the task can only be processed locally; the probability of Case 1 and the corresponding utility function are given below.
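Given the exponential stay-time model, Case 1 corresponds to \(t_i < T_i^{\mathrm{tr}}\); a hedged sketch of its probability and utility follows (setting the utility to zero reflects the assumption that the benefit is measured relative to local processing):
\[ P_i^{1} = \Pr\!\left(t_i < T_i^{\mathrm{tr}}\right) = 1 - e^{-T_i^{\mathrm{tr}}/\bar{t}_i}, \qquad U_i^{1} = 0. \]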

Case 2. The user completes the upload of the computing task but leaves the coverage area before the MEC server finishes execution, i.e., the stay time lies between the uplink transmission delay and the total offloading delay. In this case the user must pay an additional migration cost, which is assumed to depend only on the data size. The probability of Case 2 and the utility function are given below, where one pair of coefficients weights the delay and energy-saving revenue and the other pair weights the migration cost and the computing resource cost.
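A heavily hedged sketch of the Case 2 probability and utility, assuming the benefit is the weighted delay and energy saved relative to local processing and the overhead is the migration cost plus the computing resource cost; \(\lambda_i^{t},\lambda_i^{e}\) (saving weights), \(\mu_i^{m}\) (migration-cost weight), and \(\mu_i^{f}\) (resource-price weight) are assumed coefficients:
\[ P_i^{2} = \Pr\!\left(T_i^{\mathrm{tr}} \le t_i < T_i^{\mathrm{off}}\right) = e^{-T_i^{\mathrm{tr}}/\bar{t}_i} - e^{-T_i^{\mathrm{off}}/\bar{t}_i}, \]
\[ U_i^{2} = \lambda_i^{t}\!\left(T_i^{\mathrm{loc}} - T_i^{\mathrm{off}}\right) + \lambda_i^{e}\!\left(E_i^{\mathrm{loc}} - E_i^{\mathrm{tr}}\right) - \mu_i^{m} d_i - \mu_i^{f} f_i. \]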

Case 3. After the MEC server completes the task, the user still remains in the area, i.e., the stay time is no shorter than the total offloading delay. The probability of Case 3 and the utility function in this case are given below.
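Under the same assumptions, Case 3 drops the migration term:
\[ P_i^{3} = \Pr\!\left(t_i \ge T_i^{\mathrm{off}}\right) = e^{-T_i^{\mathrm{off}}/\bar{t}_i}, \qquad U_i^{3} = \lambda_i^{t}\!\left(T_i^{\mathrm{loc}} - T_i^{\mathrm{off}}\right) + \lambda_i^{e}\!\left(E_i^{\mathrm{loc}} - E_i^{\mathrm{tr}}\right) - \mu_i^{f} f_i. \]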

Combining (10), (11), and (12), we obtain the following expression:
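One plausible combined form, assuming the utility of an offloading user is the probability-weighted sum of the three cases and that a locally processing user has zero relative utility (assumed notation as above):
\[ U_i = a_i \left( P_i^{1} U_i^{1} + P_i^{2} U_i^{2} + P_i^{3} U_i^{3} \right). \]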

The objective function of the system is defined as the sum of all user utility functions, expressed as

The first two constraints indicate that the computing resources allocated by the MEC server to users are nonnegative and that the total computing resources allocated to users cannot exceed the maximum value. The next constraints indicate that each task can only be processed locally or offloaded to the MEC server, and that an MD can only offload to one MEC server. The final constraint indicates that each user's utility function must be greater than or equal to 0, so that the revenue generated exceeds that of local processing, which makes MEC offloading meaningful.
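A sketch of the system utility maximization problem implied by the constraint description above; the constraint labels C1-C5 and the symbol \(F^{\max}\) (the MEC server's total computing capacity) are assumptions introduced here for readability:
\[ \max_{\{a_i\},\{f_i\}} \ \sum_{i \in \mathcal{N}} U_i \]
\[ \text{s.t.} \quad \mathrm{C1:}\ f_i \ge 0, \quad \mathrm{C2:}\ \sum_{i} a_i f_i \le F^{\max}, \quad \mathrm{C3:}\ a_i \in \{0,1\}, \quad \mathrm{C4:}\ \text{each MD offloads to at most one MEC server}, \quad \mathrm{C5:}\ U_i \ge 0. \]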

4. Proposed Scheme

4.1. Problem Decomposition

In this section, we propose the MAORAS optimization scheme to solve the above optimization problem. The offloading decision is a binary variable, while the computing resource allocation is a continuous variable, so the optimization problem is a mixed-integer nonlinear programming (MINLP) problem and thus NP-hard. According to the objective function and constraints, the constraints on the offloading decision are decoupled from the constraints on the computing resource allocation, and the optimization problem is decomposed into two subproblems. The first subproblem is computing resource allocation under a given offloading decision.

The second subproblem is the offloading decision under a given computing resource allocation.

The first subproblem is solved with the Lagrange multiplier method, and a noncooperative game is then used to solve the second subproblem.

4.2. Computing Resource Allocation

According to Equations (13) and (14), the objective function can be expressed as follows:

Since the objective function maximizes the benefit of the system and the local time delay and energy consumption are fixed values, the objective function can be transformed into minimizing the total cost of the system, that is,

The first and second derivatives of the objective function are obtained, and we get

The second derivative of the objective with respect to the allocated computing resource is always greater than 0, and the constraints are linear; therefore, problem (18) is a convex optimization problem. According to the Karush-Kuhn-Tucker (KKT) conditions, the optimal solution can be obtained by the Lagrange multiplier method.

Problem (18) is formulated as a partial Lagrangian function, where the Lagrangian multiplier associated with the computing resource constraint is nonnegative.

Then, the first derivative of the Lagrangian with respect to the allocated computing resource is obtained, which can be expressed as

When the first derivative is set to 0, the optimal computing resource allocation is obtained, and the Lagrange multiplier can be updated by gradient descent according to the iteration number and the step size. The iteration converges when the change in the multiplier does not exceed the maximum tolerance value.
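A hedged reconstruction of the derivation steps in this subsection, presumably corresponding to the equations referenced as (22) and (23) in Algorithm 1. It assumes the per-user cost relevant to resource allocation is the weighted MEC computing delay plus a price proportional to the allocated resource; the coefficients \(\lambda_i^{t}\) and \(\mu_i^{f}\), the multiplier \(\eta\), the iteration index \(k\), the step size \(\epsilon\), and the tolerance \(\zeta\) are all assumed symbols:
\[ L\!\left(\{f_i\},\eta\right) = \sum_{i:\, a_i = 1}\left(\lambda_i^{t}\,\frac{C_i}{f_i} + \mu_i^{f} f_i\right) + \eta\left(\sum_{i:\, a_i=1} f_i - F^{\max}\right), \qquad \eta \ge 0, \]
\[ \frac{\partial L}{\partial f_i} = -\frac{\lambda_i^{t} C_i}{f_i^{2}} + \mu_i^{f} + \eta = 0 \;\;\Rightarrow\;\; f_i^{*} = \sqrt{\frac{\lambda_i^{t} C_i}{\mu_i^{f} + \eta}}, \]
\[ \eta^{(k+1)} = \left[\eta^{(k)} + \epsilon\left(\sum_{i:\, a_i=1} f_i^{*} - F^{\max}\right)\right]^{+}, \qquad \text{stop when } \left|\eta^{(k+1)} - \eta^{(k)}\right| \le \zeta. \]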

Therefore, the resource allocation algorithm in this section is shown in Algorithm 1.

1: Initialization: number of iterations , maximum tolerance value
2: Calculate according to formula (22).
3: Update Lagrange multipliers according to formula (23).
4: If
5: then algorithm convergence, return
6: else
7: 
8: end if
9: Repeat steps 1-8 until the algorithm converges or the maximum number of iterations is reached
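A minimal Python sketch of Algorithm 1 under the assumptions of the preceding block; the cost model, the closed-form allocation, and all parameter names here are illustrative, not the authors' implementation.

import numpy as np

def allocate_resources(lam_t, C, mu_f, F_max, step=0.01, tol=1e-4, max_iter=1000):
    """Dual (sub)gradient resource allocation for the offloading users.

    lam_t : array of delay-saving weights (assumed coefficients)
    C     : array of required CPU cycles of the offloaded tasks
    mu_f  : array of computing-resource price coefficients
    F_max : total computing capacity of the MEC server
    """
    eta = 0.0                                   # Lagrange multiplier
    for _ in range(max_iter):
        # Closed-form allocation from the stationarity condition
        f = np.sqrt(lam_t * C / (mu_f + eta))
        # Gradient update of the multiplier, projected to eta >= 0
        eta_new = max(0.0, eta + step * (f.sum() - F_max))
        if abs(eta_new - eta) <= tol:           # convergence criterion
            return f, eta_new
        eta = eta_new
    return f, eta

# Example usage with illustrative numbers
if __name__ == "__main__":
    lam_t = np.array([1.0, 1.0, 1.0])
    C = np.array([2e9, 3e9, 1e9])               # required CPU cycles
    mu_f = np.array([1e-10, 1e-10, 1e-10])      # resource price (illustrative)
    f, eta = allocate_resources(lam_t, C, mu_f, F_max=40e9)
    print(f, eta)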
4.3. Computation Offloading Strategy

In the previous section, the Lagrange multiplier method was adopted to solve the computing resource allocation problem. In this section, a noncooperative game model is adopted to solve the task offloading strategy problem. The game is defined by three elements: the set of devices participating in the game, the offloading strategies, and the utility functions. For a given user, the strategies of all other users form the opponents' strategy profile, and each user evaluates its own utility function given that profile. When no participant can improve its utility function by unilaterally changing its offloading strategy, the participants have converged to a Nash equilibrium state.

Definition 1. An offloading strategy profile is called a Nash equilibrium of the game if, for every user and every alternative strategy, the user's utility under the equilibrium strategy is no smaller than under the alternative, given the other users' strategies.

Definition 2. If there exists a potential function for the game such that, whenever a user unilaterally changes its offloading strategy, the change in the potential function equals the change in that user's utility function, then the game is an exact potential game.

Definition 3. If the game is an exact potential game and the strategy set is finite, then the game has the finite improvement property, so a Nash equilibrium must exist.
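In symbols (an assumed but standard notation, with \(a_i\) the strategy of user \(i\) and \(a_{-i}\) the strategies of the others), Definitions 1 and 2 read:
\[ U_i\!\left(a_i^{*}, a_{-i}^{*}\right) \ge U_i\!\left(a_i, a_{-i}^{*}\right) \quad \forall\, a_i,\ \forall\, i \quad \text{(Nash equilibrium)}, \]
\[ \Phi\!\left(a_i', a_{-i}\right) - \Phi\!\left(a_i, a_{-i}\right) = U_i\!\left(a_i', a_{-i}\right) - U_i\!\left(a_i, a_{-i}\right) \quad \text{(exact potential game)}. \]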

Argument 1. The game is an exact potential game, and a potential function exists; the two MEC-processing cases (with and without migration) both represent benefits generated by MEC processing, so they can be combined in the discussion.

Proof. When a user updates its offloading strategy, we verify that the potential function satisfies the condition in two cases: (1) the offloading strategy of the user changes from local processing to MEC processing, and (2) the offloading strategy changes from MEC processing to local processing. In both cases, the change in the potential function equals the change in the user's utility. To sum up, a potential function exists; according to Definition 3, every potential game with a finite strategy space has a Nash equilibrium, which means the players can reach a Nash equilibrium state by changing their offloading strategies a finite number of times.

4.4. MAORAS Algorithm Scheme

Combining the computing resource allocation algorithm of Section 4.2 with the computation offloading strategy of Section 4.3, the MAORAS algorithm scheme is proposed in Algorithm 2. Each mobile user makes the best offloading decision according to its current state and, after multiple iterations, converges to the Nash equilibrium state, maximizing the system utility.

1: Initialization:
2: Mobile user set , MEC server set , offloading strategy , utility function , game model
3: End Initialization
4: Input:
5: For each user , do
6:  Calculate .
7: Calculate by Lagrange multiplier method.
8: if , then update
9: else
10:  if ,
11:   update
12:  else
13:   update
14:  end if
15: end if
16: End for
17: Output: Initialized offloading strategy .
18: Initialization: iteration
19:
20: While do
21:  calculate ;
22:  if
23:   then update .
24:  else
25: End While
26: Output: Optimal computing resource allocation , and Nash equilibrium offloading strategy .
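A compact Python sketch of the game-theoretic update phase of Algorithm 2 (lines 18-26), written under the assumption that each user's utility can be evaluated by a helper utility(i, a) for a given strategy profile a; the helper and the loop structure are illustrative, not the authors' code.

def best_response_iteration(N, utility, a_init, max_iter=100):
    """Iterate best responses until no user wants to deviate (Nash equilibrium)."""
    a = list(a_init)                       # current offloading profile, a[i] in {0, 1}
    for _ in range(max_iter):
        changed = False
        for i in range(N):
            # Evaluate both strategies of user i with the others' strategies fixed
            candidates = []
            for s in (0, 1):
                trial = list(a)
                trial[i] = s
                candidates.append((utility(i, trial), s))
            best_u, best_s = max(candidates)
            if best_s != a[i] and best_u > utility(i, a):
                a[i] = best_s              # user i deviates to its best response
                changed = True
        if not changed:                    # no unilateral improvement: Nash equilibrium
            break
    return a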

The MAORAS algorithm scheme consists of two main parts:
(1) Offloading strategy preprocessing, corresponding to lines 1-17 of the pseudocode. The offloading strategy of every mobile user is first initialized to local processing; then, according to each user's stay time in the area, users are divided into three levels; finally, an initial offloading strategy is obtained.
(2) Game-theory-based offloading strategy update, corresponding to lines 18-26 of the pseudocode. Each mobile user traverses its set of offloading strategies and takes the one with the largest utility function as its best response in the current iteration, which can be expressed by the formula sketched below.
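Expressed in the assumed notation, the best response of user \(i\) in iteration \(k\) is:
\[ a_i^{(k+1)} = \arg\max_{a_i \in \{0,1\}} \ U_i\!\left(a_i,\, a_{-i}^{(k)}\right). \]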

After a finite number of iterations, no mobile user wants to unilaterally change its offloading strategy to improve its utility function, and the final set of Nash equilibrium offloading strategies is recorded.

5. Simulation Results and Analysis

In this section, we set the environmental parameters of the simulation experiment and evaluate the performance of the MAORAS algorithm scheme by comparing it with other baseline algorithms. The specific details are as follows.

5.1. Simulation Environment

The experiments use Python 3.7 and MATLAB R2018b as simulation tools; the operating system is Windows 10, the CPU is an Intel(R) Core(TM) i5-9500 @ 3.00 GHz, and the memory size is 16 GB.

5.2. Parameter Settings

The scenario assumed in this article is a square area in which users holding computation-demanding devices move randomly. The tasks of the MDs can be executed locally or offloaded to the MEC servers. Due to the heterogeneity of MDs, the number of mobile users is randomly selected from 5 to 50. Each region is equipped with a separate MEC server. The remaining parameters are shown in Table 1.

5.3. Algorithm Convergence

Figure 2 analyzes the convergence of mobile users' offloading decisions. For a fixed number of users, 5 mobile users are randomly selected and the changes in their offloading strategies are observed. The abscissa is the number of algorithm iterations, and the ordinate is the offloading decision.

In Figure 2, after a limited number of iterations, the Nash equilibrium strategy is obtained and the algorithm reaches a state of convergence. Initially, all MDs execute their computation tasks locally. User 1 always executes its computing tasks locally; User 2 and User 3 settle on MEC offloading and local processing, respectively, after 9 iterations; User 4 reaches the Nash equilibrium state after 7 iterations; and User 5 settles on MEC offloading after only one strategy change. In general, after a finite number of iterations, no user wants to unilaterally change strategy, and the system reaches a state of convergence.

Figures 3 and 4 show how the system overhead and the utility functions of five random users change as the number of iterations increases. System overhead refers to the sum of delay, energy consumption, computing cost, and migration cost caused by user computation offloading. The user utility function refers to the cost of local processing minus the cost of MEC offloading. If the utility is positive, it is more cost-effective to offload to the MEC server, and the user's utility function is higher. Conversely, if it is negative, MEC offloading costs more, so on balance the user prefers to process locally, and its utility function is set to 0.

According to the change in each user's utility function, we can infer the user's attributes and make some reasonable guesses. For example, User 1 always insists on local processing; we speculate that the user may be moving at high speed and stays in the area too briefly to upload the entire computing task before leaving. For User 3, after many iterations the cost of MEC offloading remains higher than that of local processing, so we can infer that the device has better performance and stronger computing power. For User 5, which settles on MEC processing after only one iteration, it can be speculated that the MEC server provides it with abundant computing resources.

5.4. Algorithm Performance

The performance of the proposed MAORAS algorithm scheme is compared with the following algorithms. (1) Computing resource average allocation (CRAA): all users process tasks on the MEC server, and each user gets the same computing resources. (2) Lagrange resource allocation method (LRAM): computing resources are allocated according to the Lagrange multiplier method. (3) Game-based distributed task offloading (GDTO): a game-theory-based computing resource and offloading optimization scheme that does not consider user mobility. (4) Genetic-algorithm-based allocation algorithm (GAAA): the user mobility model is the same as in this paper, but a genetic algorithm is used.

Figure 5 presents the change in total system utility as the number of users increases under the different algorithms. In the beginning, the utility of the CRAA and LRAM algorithms increases with the number of users. When the number of users reaches 20, the system utility begins to show a downward trend, because the influx of a large number of users increases the load of the MEC server: the computing resources allocated to each user become smaller and smaller, and the transmission bandwidth of the system is occupied, causing network congestion. Meanwhile, if tasks are offloaded to MEC servers indiscriminately, the cost of MEC offloading gradually exceeds the cost of local processing, and the final utility becomes negative. Comparing CRAA and LRAM, it can be found that the computing resource allocation scheme using the Lagrange multiplier method outperforms the traditional average allocation scheme.

Since GDTO does not consider user mobility, as the number of MDs increases, the additional devices can only execute locally, and the number of tasks offloaded or migrated to the MEC server hardly changes. Hence, the GDTO curve tends to stabilize after the number of devices reaches 25.

The total system utility of GAAA and MAORAS increases with the number of users. The system utility grows slowly at first but continues to increase and shows a convergence trend: limited computing resources can only serve a portion of the computing tasks, while the increase in the number of MDs provides diversity and selectivity for offloading. It can be seen that MAORAS outperforms GAAA because the GAAA algorithm is prone to premature convergence.

Figure 6 shows the effect of MEC computing capability on system utility for the different algorithms, where the computing capability of the MEC server varies from 30 GHz to 50 GHz.

It shows that the final converged system utility of the five algorithms increases with computing capability. Compared with CRAA, the LRAM algorithm adopts the Lagrange multiplier method, allocates computing resources more reasonably, and improves the system utility. The GDTO algorithm jointly optimizes the offloading decision and computing resource allocation but does not consider user mobility; although the performance of the MEC server improves, migration costs are not reduced. GAAA and MAORAS, which consider user mobility, perform well for users who stay only briefly: more computing resources are provided to ensure that computation offloading completes before the user leaves, thereby saving migration costs. MAORAS improves system utility by 5.3% compared with GAAA.

In Figure 7, the system utility of the five algorithms decreases as the migration cost increases. The differences between the algorithms are initially small, but the gap subsequently becomes larger and larger. The MAORAS algorithm proposed in this paper has the slowest decline rate, indicating that the scheme can significantly reduce the probability of migration. For CRAA and LRAM, all tasks are offloaded or migrated to the MEC server, and the migration of many tasks greatly reduces system revenue. Although GDTO optimizes the offloading decision, it does not consider the migration cost caused by user mobility. It can be concluded that ignoring user mobility greatly increases migration cost and reduces system utility.

6. Conclusion

In this paper, a mobility-aware edge computing scenario is considered, with the aim of maximizing system benefit by optimizing offloading decisions and resource allocation. The Lagrange multiplier method is used to allocate computing resources; for the offloading decision, it is proved that the game model has a Nash equilibrium by establishing the game relationship between the participants, determining the utility function of the game problem, and constructing a potential function. Simulation results show that the proposed MAORAS algorithm effectively improves the system utility. The performance of computation offloading is related not only to user mobility but also to network status and routing. In future work, we plan to predict the state of the edge computing network, including traffic prediction, and perform routing based on the prediction to reduce latency.

Data Availability

Data are available on request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was sponsored by the National Natural Science Foundation of China under grant numbers 62271264, 61901191, 61972207, and 42175194 and the Shandong Provincial Natural Science Foundation (Grant No. ZR2020LZH005).