Abstract

Mobile edge computing is a new computing paradigm that extends cloud computing capabilities to the edge network, supporting computation-intensive applications such as face recognition, natural language processing, and augmented reality. Notably, computation offloading is a key technology of mobile edge computing that improves mobile devices’ performance and users’ experience by offloading local tasks to edge servers. In this paper, the problem of computation offloading under multiuser, multiserver, and multichannel scenarios is researched, and a computation offloading framework is proposed that considers the quality of service (QoS) of users, server resources, and channel interference. This framework consists of three levels. (1) In the offloading decision stage, the offloading decision is made based on the beneficial degree of computation offloading, which is measured by comparing the total cost of local computing on the mobile device with that of computing on the edge-side server. (2) In the edge server selection stage, the candidate servers are comprehensively evaluated and selected for computation offloading by a multiobjective decision based on the Analytic Hierarchy Process based on Covariance (Cov-AHP). (3) In the channel selection stage, a multiuser and multichannel distributed computation offloading strategy based on the potential game is proposed by considering the influence of channel interference on the users’ overall overhead. The corresponding multiuser and multichannel task scheduling algorithm is designed to maximize the overall benefit by finding the Nash equilibrium point of the potential game. Extensive experimental results show that the proposed framework can greatly increase the number of beneficial computation offloading users and effectively reduce energy consumption and time delay.

1. Introduction

With the development of artificial intelligence and Internet of Things (IoT) technology, a large number of computation-intensive and time-sensitive mobile applications have appeared on mobile terminals, such as face recognition, natural language processing, and augmented reality. However, the limited computing capability and energy storage of mobile terminals cannot support intensive computing or complete high-energy tasks. The training processes of many applications must be deployed on cloud service platforms, so the large amounts of training data generated on mobile terminals need to be transmitted to the cloud through the core network, resulting in a sharp increase in the load on the already congested core network. Application transmission delay and transmission energy consumption also increase greatly, which causes service requests to fail and the QoS of users to decline.

Currently, mobile edge computing (MEC) [13] has become an important solution to the above challenges. However, in the MEC scheme based on the public cloud [35], accessing mobile cloud services through the wireless channel results in a large channel blocking rate and delay. Meanwhile, the MEC scheme based on the cloudlet [6] can only connect to cloud services via WiFi, which imposes a severe spatial limitation.

In response to the above limitations, an edge server-based MEC solution has been proposed and widely used. As shown in Figure 1, the edge server in this solution is deployed on the wireless LAN side, which shortens the distance between servers and users. This solution can use computation offloading to expand the service capabilities of mobile devices, provide nearby localized computing and storage resources for mobile devices, reduce data transmission costs, and meet the need for fast, interactive responses. Here, mobile devices and users are collectively referred to as mobile edge nodes, and the combination of a wireless base station and an edge server is referred to as an edge server node. The edge server nodes can further offload preprocessing results to the remote cloud service center through the mobile core network. Therefore, the key technology of the edge server-based MEC solution is computation offloading, and formulating a reasonable and efficient computation offloading framework is an issue that urgently needs to be solved: when many users offload computing tasks to the same edge server through the same channel at the same time, congestion and greater delay result. The evaluation indicators for computation offloading include energy consumption and service delay.

In order to minimize the energy consumption and service delay of mobile users, the main contributions of this paper are listed as follows:
(1) The concept of beneficial computation offloading is proposed. The overall cost of the mobile edge computing system is defined as the weighted sum of the energy consumption, the corresponding transmission delay of computation offloading, and the task processing of all edge nodes. When the total cost of computation offloading is less than the total cost of local computing, beneficial computation offloading is obtained. Beneficial computation offloading is a prerequisite for users to make computation offloading decisions.
(2) A Cov-AHP strategy for the multiobjective server offloading decision is proposed. Considering the transmission time, energy consumption, and servers’ residual resources, the candidate servers for undertaking offloading are comprehensively evaluated by the covariance judgment matrix. The experimental results show that the Cov-AHP algorithm is objective and can effectively realize load balancing.
(3) A multiuser multichannel distributed computation offloading algorithm based on the potential game is proposed. The computation offloading optimization for multiple users in a multichannel wireless interference scenario is an NP-hard problem. According to the number of beneficial computation offloading users and the total system cost, the efficiency of the Nash equilibrium is quantified. Users can formulate an interaction mechanism based on the group strategy to achieve the maximum benefit and the highest resource utilization. The algorithm maintains its computation offloading performance as the number of users increases.

The remainder of this paper is organized as follows. Section 2 reviews the related studies. Section 3 introduces the system model and describes the problem statement and solution. In Sections 4, 5, and 6, a three-level strategy for the computation offloading framework is illustrated in detail. The experiment and result analysis are discussed in Section 7. The final section summarizes the paper and discusses future work.

2. Related Work

Research on computation offloading can be divided into three types. In the mode of a single user and a single available edge server, a dynamic computation migration algorithm (LODCO) based on Lyapunov optimization [7] is proposed to optimize the execution delay of the application. In [8], the optimization of mobile device energy is formulated as a constrained Markov decision process, and a computational migration decision algorithm is proposed to balance the execution delay and the energy consumption of the mobile terminal. The literature [9] formulated the computational migration decision problem as nonconvex quadratically constrained programming and proposed a semidefinite relaxation-based random heuristic algorithm to significantly reduce the overall cost of the system. In order to balance the energy consumption and delay during the migration process, the literature [10] adopted a cost function to measure migration request aggregation and proposed an online algorithm considering both energy consumption and QoS. A deep learning model based on network packet information to evaluate the quality of experience (QoE) of users and servers was proposed in [11].

In the mode of multiple users and a single available edge server, an offloading algorithm based on distributed deep learning is proposed in [5], which can provide approximately optimal offloading decisions for multiple mobile device users and a single edge server in MEC. A classification-based energy-saving computation migration algorithm (EECO) is proposed in [3, 12], which can save about 15% of energy consumption in comparison. A three-step algorithm is proposed in [13] that designs semidefinite relaxation (SDR), alternating optimization (AO), and sequential adjustment (SA) strategies to achieve a joint optimization scheme for computing and communication resources. A novel approach to formulating cost-efficient fault tolerance strategies for multitenant service-based systems is proposed in [14].

In the mode of multiple users and multiple available edge servers, the linear relaxation method and the semidefinite relaxation (SDR) method are used to minimize the total task execution delay and energy consumption in MEC with multiple edge servers [15]. The optimal computation distribution among multiple edge servers is given in [16]. In order to maximize the long-term utility, a model-free reinforcement learning offloading mechanism (Q-learning) is proposed in [17]. An adaptive computation offloading method with both macrobase stations (MABSs) in 5G and roadside units (RSUs) in the Internet of Connected Vehicles (IoCV) to optimize the task execution delay and energy consumption of the edge system is proposed in [18]. A two-phase computation offloading optimization method to maximize the resource utilization of ECUs, minimize the execution time, and balance privacy preservation and execution performance is proposed in [19]. A blockchain-enabled computation offloading method to achieve tradeoffs among minimizing ECDs’ task offloading time, optimizing energy consumption, and maintaining load balance for IoT is devised in [20]. An optimal approach to solving the dynamic QoS edge user allocation (EUA) problem and a heuristic approach for quickly finding suboptimal solutions to large-scale instances of the dynamic QoS EUA problem are proposed in [21].

Additionally, multichannel radio interference has attracted much attention. The literature [3] utilizes game theory to realize efficient distributed channel allocation and computation offloading in multichannel wireless interference environments. A two-layer optimization method based on orthogonal frequency division multiplexing (OFDM) is proposed in [22, 23] to solve the problem of subcarrier allocation and task offloading for multiple users accessing multiple available edge servers. A game-theoretic method that models the edge user allocation (EUA) problem as a potential game and a novel decentralized algorithm that can solve the EUA problem effectively are proposed in [24]. An optimal method for the EUA problem is proposed in [25], where the EUA problem is modelled as a bin packing problem and the Lexicographic Goal Programming technique is adopted. A metric that can measure community-diversified influence is proposed in [26]. Three novel QoS-aware service selection approaches for composing multitenant SBSs that achieve three different multitenancy maturity levels are presented in [27]. A novel strategy, CFT4MTS (Criticality-Based Fault Tolerance for Multi-Tenant SBSs), which formulates cost-effective fault tolerance for multitenant SBSs by providing redundancy for the critical component services, is proposed in [14]. A novel Web API recommendation method called keywords-based and compatibility-aware API recommendation, based on a weighted API correlation graph, is proposed in [28]. A multidimensional quality ensemble-driven recommendation method based on the Locality-Sensitive Hashing and Order Preference by Similarity to Ideal Solution techniques is proposed in [29]. The original stochastic problem is transformed into a deterministic optimization problem by adopting stochastic optimization techniques, and an energy-efficient dynamic offloading algorithm is proposed in [30]. A blockchain-based computation offloading method for edge computing in 5G networks is proposed in [31].

In summary, single-user scenarios are an idealized abstraction of computation offloading. In contrast, multiuser, multiserver, and multichannel scenarios are far more complicated, involving multiple users’ offloading decisions, computing resource requirements, edge server selection, and interference across multiple channels. This paper focuses on the problem of computation offloading under multiuser, multiserver, and multichannel scenarios and proposes a computation offloading framework considering the QoS of users, server resources, and channel interference. This framework consists of three stages: (1) the offloading decision stage; (2) the server selection stage; (3) the channel selection stage. In each stage, a corresponding solution model is proposed to optimize the user’s decision.

3. System Model and Problem Statement

In the Mobile Edge Cloud Service model, we consider N mobile users and K edge server nodes, where each user has a computation-intensive task [32] and each edge server node is composed of a base station and an edge server. Mobile users can offload computing tasks to edge servers through the base station. We also consider M wireless channels between the mobile users and each edge server node. Table 1 lists the parameters used in this paper. Computing tasks that are offloaded to the same edge server node are transmitted over different channels. Each mobile user evaluates the computing task characteristics, energy reserve, computing capability, and network communication quality to make a computation offloading decision, choosing between local computing and edge server-side computing. In the edge server-side computing mode, the mobile user searches for a suitable edge server for which offloading the computing task is beneficial [33]. We call this beneficial computation offloading, which can reduce energy consumption, shorten delay, and guarantee the QoS of users. Then, we search for a channel with high bandwidth and low interference to the selected edge server. The computation task is offloaded to the edge server, and the result is returned to the edge user through the channel. Communication and computation are the two most important components of the Mobile Edge Cloud Service model, and the model is described as follows.

In order to effectively represent different scenarios in the edge server-based MEC scheme, this paper proposes a Mobile Edge Cloud Service model, including the communication model and computation model.

3.1. Mobile Edge Cloud Service Communication Model

Each edge-end user can process tasks locally or offload computing tasks to the edge server over a wireless channel. Assume that the number of edge-end users that may perform computation offloading is N and that there are M wireless channels between the edge-end users and the edge server node. User n’s computation offloading strategy is defined as

Then, is the set of computation offloading decisions of all users. According to the Shannon formula, when the edge user offloads the computing task to the edge server, the uplink transmission rate is given as follows, where and are the transmission power and channel gain of the user communicating with the base station, is the channel bandwidth of the wireless transmission process, and , in which is the background white noise interference and is the communication interference from other channels. The channel capacity calculated by the Shannon theorem is the maximum achievable data transmission rate; usually, the actual data transmission rate is less than the channel capacity. Using the Shannon theorem to evaluate the achievable data transmission rate, the minimum delay of data transmission can be obtained. This minimum delay value can be used as a threshold for offloading decisions.
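As a concrete illustration, the uplink rate bound above can be computed directly from the Shannon formula (a minimal sketch; the function and parameter names are ours, not the paper’s):

```python
import math

def uplink_rate(bandwidth_hz, tx_power, channel_gain, noise_power, interference):
    """Shannon-capacity upper bound on the uplink data rate (bits/s).

    SINR = p * g / (sigma^2 + I): received signal power over background
    white noise plus co-channel interference from other offloading users.
    """
    sinr = tx_power * channel_gain / (noise_power + interference)
    return bandwidth_hz * math.log2(1.0 + sinr)
```

As the formula shows, interference from other users on the same channel directly lowers the achievable rate, which is why channel selection matters in later sections.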

3.2. Mobile Edge Cloud Service Computation Model

The computation model includes local computing and edge server-side computing. The compute-intensive tasks for edge users are defined as , where is the data amount that the user needs to complete the task [34] and is the number of CPU clock cycles required to complete the task [35].

3.2.1. Local Computing

When the edge user’s decision is local computing, the edge user executes the computation task locally. Assume is the computing capability of the edge user (i.e., the clock frequency of the edge user’s CPU, in Hz). is the energy consumption of each CPU cycle, which can be obtained by the measurement method in [36]. Thus, we can obtain the execution time and energy consumption of the local computing task as

For the total cost of the local computing task, we have that

Here, the two parameters indicate the weights that the edge user assigns to computation time and energy consumption, respectively, in decision-making. The user can flexibly set the two weights according to the energy consumption requirements and delay sensitivity of the scenario, thereby dynamically adjusting the overall cost of the system.
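The local cost described above (execution time, energy, and their weighted sum) can be sketched as follows, with illustrative symbol names:

```python
def local_cost(cycles, cpu_freq, energy_per_cycle, w_time, w_energy):
    """Weighted total cost of executing a task locally.

    cycles: CPU cycles the task needs; cpu_freq: device clock rate (Hz);
    energy_per_cycle: per-cycle energy (J), e.g. measured as in [36];
    w_time / w_energy: the user's delay and energy weights.
    """
    t_local = cycles / cpu_freq          # local execution time (s)
    e_local = cycles * energy_per_cycle  # local energy consumption (J)
    return w_time * t_local + w_energy * e_local
```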

3.2.2. Edge Server Side Computing

When the edge-side user’s decision is edge server-side computing, the edge-side user offloads the computation-intensive task to the edge server through the wireless channel, and the time and energy consumption overheads of the offloading transmission process are defined as

Among them, is the tail energy generated in wireless transmission. On the edge server side, the computing capability of the edge server is the clock frequency ; then, the time to perform the computing task on the edge server node is given as

According to (5), (6), and (7), we can compute the total cost of edge server-side computations as

Here, the time and energy cost of sending the computation results back from the edge server node to the edge node [22] is ignored, because for many intensive computing applications that need to be offloaded (e.g., face recognition and virtual reality), the size of the data set fed back to the user is often several orders of magnitude smaller than the size of the input data set.
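A corresponding sketch of the edge server-side total cost, assuming the uplink rate has already been obtained from the communication model and ignoring the downlink of the (much smaller) result as stated above:

```python
def edge_cost(data_bits, cycles, rate, tx_power, tail_energy, server_freq,
              w_time, w_energy):
    """Weighted total cost of offloading a task to the edge server.

    data_bits: input data to upload; rate: achievable uplink rate (bits/s);
    tail_energy: extra energy spent in the radio tail state after sending.
    """
    t_tx = data_bits / rate               # uplink transmission time (s)
    e_tx = tx_power * t_tx + tail_energy  # transmission energy incl. tail (J)
    t_exec = cycles / server_freq         # execution time on the edge server (s)
    return w_time * (t_tx + t_exec) + w_energy * e_tx
```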

3.3. Problem Statement and Solution

Therefore, the computation offloading of the Mobile Edge Cloud Service model involves three problems:
(1) How to decide whether the computing task is completed locally or offloaded to the edge server?
(2) How to determine the appropriate edge server for computation offloading?
(3) How to choose the right channel to achieve the highest wireless transmission efficiency?

In order to solve the above problems, the computation offloading optimization framework proposed in this paper consists of three levels.

3.3.1. Offloading Decision Stage

At this level, we propose a beneficial computation offloading decision strategy. According to this strategy, the overall costs of edge user local computing and edge server-side computing are compared. Then, the computing mode is determined based on the beneficial degree of computation offloading.

3.3.2. Edge Server Selection Stage

After the edge user decides on computation offloading, we propose a server selection strategy, which considers the transmission time, transmission energy consumption, and remaining CPU resources of the edge servers. Then, we use the Cov-AHP multiobjective decision method to evaluate the candidate edge servers and select one for offloading according to the final weight.

3.3.3. Channel Selection Stage

To solve the problem of signal interference caused by multiple users simultaneously selecting the same edge server for computation offloading, we propose a multiuser multichannel distributed computation offloading strategy based on the potential game. In detail, the Nash equilibrium point is defined as the optimal solution for the combined optimization problem of channel selection and beneficial computation offloading.

4. Offloading Decision Stage

At this level, we present the beneficial computation offloading decision strategy. The weighted sum of the energy consumption of the edge user to offload and process the computing task and the corresponding transmission and processing delay is defined as the total cost of the system for the edge node to complete the task. We propose a beneficial computation offloading decision strategy, which minimizes the overall cost of the system by optimizing the task offloading decision .

Definition 1. Beneficial computation offloading: if and only if the overall cost of edge server-side computing is less than the total cost of local computing, we call such computation offloading beneficial to the user, i.e., . Here, we construct an indicator function . When event is true, then ; otherwise, . The edge user chooses the offloading decision if and only if the computing task satisfies beneficial offloading.
In summary, the beneficial computation offloading problem boils down to maximizing the number of users performing beneficial edge computation and minimizing the overall cost of performing all computational tasks, so the objective function and constraints are defined as
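Putting the two cost models together, the beneficial-offloading indicator of Definition 1 can be sketched as a single comparison (parameter names are illustrative, not the paper’s notation):

```python
def beneficial_offload(data_bits, cycles, rate, tx_power,
                       f_local, f_server, e_per_cycle, w_t, w_e):
    """Indicator for Definition 1: offload iff the edge-side total cost
    is strictly lower than the local total cost."""
    cost_local = w_t * cycles / f_local + w_e * cycles * e_per_cycle
    t_tx = data_bits / rate
    cost_edge = w_t * (t_tx + cycles / f_server) + w_e * tx_power * t_tx
    return cost_edge < cost_local
```

With a fast uplink and a much faster server, offloading wins; when the achievable rate collapses (e.g., due to interference), local computing becomes preferable.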

5. Edge Server Selection Stage

At this level, we present the Cov-AHP- (Analytic Hierarchy Process based on Covariance-) based server selection strategy. When a task satisfies beneficial computation offloading, we consider the transmission energy consumption, the transmission delay, and the remaining resources of the edge servers to select a server that meets the computation offloading condition. The novelty of the Cov-AHP-based method is that the feasibility of its evaluation scheme no longer depends on the experience of experts but is judged from the relationship between the schemes themselves and the target layer. This approach can greatly reduce subjectivity while objectively evaluating the connection between the schemes and the goals.

Firstly, we need to address the evaluation of server selection. In order to overcome the influence of subjective human judgment, this paper builds a judgment matrix based on covariance and a Cov-AHP-based server selection strategy [37], whose calculation results and ranking are unique. The strategy includes four steps.

5.1. Establishing a Hierarchical Structure

The Cov-AHP-based server selection strategy is established according to the progressive order of the target layer, the criteria layer, and the scheme layer. The Cov-AHP-based server selection strategy is shown in Figure 2.

The target layer is the ultimate goal that the strategy will ultimately achieve. In this paper, our ultimate goal is to choose the most suitable server for computing offload. The criteria layer is the element that depends on the evaluation of the server. We select three elements: transmission delay, transmission energy consumption, and remaining CPU resources of the server. The scheme layer is the servers that can be used for computing offload.

5.2. Constructing Judgment Matrix

For a system that has alternatives, there is a certain objective relationship between its constituent elements, which can be expressed by covariance. The basic idea of Cov-AHP is to construct a judgment matrix reflecting the relative importance of each element. Then, based on the covariance matrix, we can obtain the weight of the relative importance of quantitative indicators between the relative layers during the analytic hierarchy process.

Suppose that the value of scheme 1 corresponding to element is and the value of scheme 2 corresponding to element is , then the covariance of and is and we have . The covariance matrix of element is expressed as
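The covariance matrix of the criteria can be built directly from the per-scheme values, for example as follows (a minimal sketch using the population covariance; names are ours):

```python
def covariance(xs, ys):
    """Population covariance of two criterion vectors over the schemes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def covariance_matrix(columns):
    """columns[j] lists criterion j's value for every candidate server;
    returns the symmetric k x k covariance matrix of the criteria."""
    k = len(columns)
    return [[covariance(columns[i], columns[j]) for j in range(k)]
            for i in range(k)]
```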

Dividing the covariances of each column pairwise and then combining the transformed products of all paired elements into one, i.e., , the judgment matrix is finally constructed as

Then, the judgment matrix is used to calculate the weight of each element. The principle of the analytic hierarchy process shows that the eigenvector corresponding to the largest eigenvalue of the judgment matrix B is the weight vector of the elements. The square root method is used to solve for this eigenvector as follows:
(i) Calculate the product of the elements of each row of the judgment matrix B: 
(ii) Calculate the k-th root of each row product : 
(iii) Normalize to obtain the weight of each element: 

Then, the weight vector of the matrix can be obtained as .
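The square root method above can be sketched as follows (taking the geometric mean of each row of the judgment matrix and normalizing; `math.prod` requires Python 3.8+):

```python
import math

def square_root_weights(B):
    """Square-root (geometric-mean) approximation of the principal
    eigenvector of a k x k judgment matrix B, normalized to sum to 1."""
    k = len(B)
    roots = [math.prod(row) ** (1.0 / k) for row in B]  # k-th root of row products
    total = sum(roots)
    return [r / total for r in roots]
```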

5.3. Consistency Test

Multiply the judgment matrix B by the weight vector to obtain a k-order column vector , and then, according to the formula , we can get the largest eigenvalue of the judgment matrix . Among them, represents the i-th component of the column vector .

The indicator that measures the deviation of the judgment matrix is calculated as

The random consistency ratio is calculated as

Among them, is a random consistency standard (Table 2).

When , it is generally considered that the judgment matrix has satisfactory consistency; otherwise, the judgment value needs to be adjusted until the consistency check is passed.
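A sketch of the consistency test, assuming the judgment matrix B and the weight vector w come from the previous steps; the RI table holds the commonly tabulated random consistency indices:

```python
# Commonly used random consistency indices RI(k) for k = 1..9.
SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
            6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(B, w):
    """CR = CI / RI, with CI = (lambda_max - k) / (k - 1), where
    lambda_max is estimated from the components of B * w."""
    k = len(B)
    Bw = [sum(B[i][j] * w[j] for j in range(k)) for i in range(k)]
    lam_max = sum(Bw[i] / (k * w[i]) for i in range(k))
    ci = (lam_max - k) / (k - 1)
    ri = SAATY_RI[k]
    return ci / ri if ri > 0 else 0.0
```

A perfectly consistent matrix yields lambda_max = k, hence CR = 0; the judgment matrix is accepted when CR < 0.1.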

5.4. Weight Integration and Server Selection

Suppose the weights of the elements of the criteria layer relative to the target layer are set according to the needs of the edge user, and the weight of each scheme relative to each element of the criteria layer is , , and . Then, the weight of each scheme relative to the target layer is

As can be seen from the above formula, the scheme with the largest composite weight will be the preferred offloading server.
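The weight integration and final selection can be sketched as follows (the names and data layout are our own):

```python
def select_server(criteria_weights, scheme_weights):
    """criteria_weights[j]: weight of criterion j for the target layer;
    scheme_weights[j][i]: weight of server i under criterion j.
    Returns the index of the server with the largest composite weight."""
    num_servers = len(scheme_weights[0])
    totals = [sum(cw * sw[i] for cw, sw in zip(criteria_weights, scheme_weights))
              for i in range(num_servers)]
    return max(range(num_servers), key=totals.__getitem__)
```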

6. Channel Selection Stage

At this level, we present the channel selection strategy. After the edge server has been selected, it is necessary to determine how to select an appropriate wireless channel for computation offloading, assuming that there is a set of available wireless channels between the edge user and the edge server node. Then, the user decision is . Multiple users simultaneously selecting the same wireless channel for beneficial computation offloading can cause severe signal interference.

Defining the total cost of user n as , the objective function of the multiuser multichannel computing offload decision problem is defined as

We use the potential game [38] to analyze the multiuser multichannel distributed computation offloading decision problem: a potential function is constructed to prove that the problem satisfies the potential game condition, so a Nash equilibrium point exists. Here, the Nash equilibrium point [38] is defined as the optimal solution to the NP-hard [39] multiuser multichannel distributed computation offloading problem.

6.1. Game Analysis of Multiuser Multichannel Distributed Computation Offloading

In the distributed computation offloading decision, the set of computation offloading decisions of all users except edge-side user is . Each user chooses either local computing or edge server-side computing to reduce its own overall overhead, which is given as

According to equations (4) and (8), the mathematical expression of the overall cost of user can be derived as

Then, we express the above problem as a strategic game ; the set of edge user strategies conforming to Nash equilibrium is . According to the definition of Nash equilibrium, if the multiuser system is in equilibrium, no user can unilaterally change its strategy to further reduce its overhead, i.e.,

In this formula, denotes the decision after the edge user changes.

6.2. Proof of the Existence of Multiuser Multichannel Distributed Computation Offloading Nash Equilibrium Points

The potential game is a subset of strategic games: each player continually approaches the optimum of the objective function and finds its optimal solution after a finite number of iterations, and every potential game admits a potential function. Here, we need to construct a potential function to prove that the target problem is a potential game problem, from which the existence of a Nash equilibrium point follows. From (4), (8), and (10), we can get equivalent to

According to (2), we then have that the interference in the wireless channel has an extremum when the edge-side user n implements the beneficial computation offloading:

It can be seen that when the wireless channel interference is sufficiently low, it is beneficial for the user to adopt the edge server-side computation mode; otherwise, the user should perform the computation task locally. Based on the above results, we know that the channel interference has an extreme value, which satisfies the potential game condition. The following potential function is constructed to prove that multiuser multichannel computation offloading satisfies the potential game:
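The interference extremum can be made concrete by inverting the Shannon rate: offloading remains beneficial only while the uplink still reaches the minimum rate implied by the cost comparison. A sketch, where r_min and the parameter names are our own assumptions:

```python
def interference_threshold(bandwidth, tx_power, gain, noise, r_min):
    """Largest co-channel interference under which the uplink rate still
    reaches r_min (bits/s); above this extremum, edge server-side
    computing stops being beneficial and local computing is preferable.
    Derived by solving B*log2(1 + p*g/(noise + I)) >= r_min for I."""
    return tx_power * gain / (2.0 ** (r_min / bandwidth) - 1.0) - noise
```

A more demanding minimum rate shrinks the tolerable interference, matching the intuition that beneficial offloading requires a sufficiently clean channel.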

The potential game has the finite improvement property (FIP) [38]: its improvement paths have finite length, and the game players reach a Nash equilibrium after a finite number of iterations. When the game is in Nash equilibrium, any edge user that updates its current decision will increase the overall overhead, i.e.,

The objective function is proved to be a potential function in three cases.

Case 1. (). According to (2), the user overhead is inversely proportional to the data uplink rate , that is, proportional to the channel interference. This implies that . Since , according to (22), (23), and (24), it is obvious that

Case 2. (). Since , and , when the user changes its decision from local computing to mobile edge computing, the overall overhead increases; that is, the channel interference is greater than the maximum extreme value of beneficial mobile edge computing interference:

Case 3. (). By an argument similar to that of Case 2, since , and , .
For all three cases, the definition of the potential game is satisfied. Then, we can get from .
Combined with the above proof, it can be known that the target problem (multiuser multichannel computation offloading problem in the Mobile Edge Cloud Service strategy) is a potential game problem, and there is a Nash equilibrium point. The Nash equilibrium point can be used as the optimal value for multiuser multichannel mobile edge computing task offloading.

6.3. Multiuser Multichannel Task Scheduling Algorithm
6.3.1. Algorithm Design

We design the multiuser multichannel task scheduling algorithm according to the finite improvement property of the potential game and ensure that any asynchronous response update process reaches a Nash equilibrium within a finite number of iterations. The whole flow of Algorithm 1 is described as follows:

Step 1: initialization
Step 2: all computing tasks are done locally, i.e.
Step 3: end initialization
Step 4: repeat for each user n and server node in each decision slot
Step 5:   transmit the pilot signal on the chosen channel m to the mobile cloud server base stations
Step 6:   receive the information on the received powers of all channels from each mobile edge user
Step 7:   compute the best response set at the base stations
Step 8:   if the best response set is nonempty then
Step 9:     send an RTU message to the cloud to contend for the decision update opportunity
Step 10:    if the UP message is received from the cloud then
Step 11:      choose the best response decision for the next slot
Step 12:    else choose the original decision for the next slot
Step 13:    end if
Step 14:  else choose the original decision for the next slot
Step 15:  end if
Step 16: until the END message is received from the mobile cloud server base stations

Specifically, each user synchronizes with the clock signal from the wireless base station. The time slot used to update the computation offloading decision is called a decision period, and each decision period includes two phases:

Radio interference measurement phase: at this stage, interference is measured on different channels to select an appropriate channel for access. In the current decision slot, each edge node user who selects the mobile edge computation offloading mode (i.e., ) transmits the pilot signal on its selected channel m, and the total received power of each channel is then measured at the radio base station. The power information received on all channels is fed back to the edge node users. Therefore, each user n can determine the interference on its channel from other users as

The interference received on the channel currently selected by the edge user is equal to the measured total power minus the signal power. For other channels that do not transmit the pilot signal, the interference received is equal to the measured total power.

Offloading decision update phase: in this phase, we exploit the finite improvement property of the multiuser computation offloading game by letting one edge user at a time perform a decision update. Based on the interference information measured on the different channels, each edge user first calculates its best response update set.

Then, if the best response set is nonempty (i.e., user n can improve its offloading decision), user n sends a request-to-update (RTU) message to the edge server node to indicate that it wants to contend for the decision update opportunity. Otherwise, user n does not compete for an update in the next decision slot and keeps its current offloading decision unchanged. The edge server node selects the user with the highest priority among those who sent an RTU and sends it an update-permission (UP) command allowing it to update its decision in the next time slot. Users who do not receive the UP command do not update their decisions in the next time slot.
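A minimal sketch of one such decision slot follows. The user names, option labels, and cost values are hypothetical placeholders, not the paper's overhead function; only the control flow (non-empty best response set, RTU contention, a single UP grant) mirrors the description above:

```python
import random

def best_response(costs, current_choice):
    """Return the set of options with strictly lower cost than the current one.

    costs -- dict mapping each offloading option ('local' or a channel id)
             to the user's overall cost under that option.
    """
    current_cost = costs[current_choice]
    return {a for a, c in costs.items() if c < current_cost}

# One decision slot: hypothetical (costs, current decision) for three users
users = {
    'u1': ({'local': 5.0, 'ch0': 3.2, 'ch1': 4.1}, 'local'),
    'u2': ({'local': 2.0, 'ch0': 2.5, 'ch1': 2.8}, 'local'),
    'u3': ({'local': 6.0, 'ch0': 4.0, 'ch1': 3.5}, 'ch0'),
}

# Users with a non-empty best response set contend by sending an RTU message.
contenders = [u for u, (costs, cur) in users.items() if best_response(costs, cur)]
# The server grants a single UP; only that user updates in the next slot.
winner = random.choice(contenders) if contenders else None
print(contenders)  # ['u1', 'u3'] -- u2 cannot improve, so it keeps 'local'
```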

6.3.2. Analysis of Convergence and Solvability

From the finite improvement property (FIP) of the potential game, the algorithm converges to a Nash equilibrium of the multiuser multichannel computation offloading game within a finite number of decision slots. In the simulation experiment, when the edge server receives no RTU message from any edge user in a decision slot, the game has reached a Nash equilibrium. The edge server then broadcasts the END message to all edge users, indicating that the updating process of the computation offloading decision has terminated. We analyze the convergence and solvability of the distributed computation offloading algorithm by bounding the number of decision slots that the algorithm requires.

In each decision slot, each edge user executes steps 4–15 of Algorithm 1 in parallel. Since most operations involve only basic arithmetic, the dominant part is computing the best response set in step 7, which involves computing and sorting the measurement data of the M channels, with a computational complexity of O(M log M). So, the computational complexity of each decision slot is O(M log M). Assuming that C decision slots are required to terminate the algorithm, the total computational complexity of the distributed computation offloading algorithm is O(C · M log M). Let

where , , and are the channel interference extremum, transmission power, and channel gain, respectively. Because we need decision slots to converge, we have the following inference.

When and are nonnegative integers for any , the distributed computation offloading algorithm will terminate within at most decision slots, i.e.,

Proof. According to (23),

In a decision slot, assume that an edge user updates its current decision to , and this decision leads to a reduction in the overall cost of the user, i.e., . According to the definition of the potential function, the potential function will also be reduced by at least , i.e.,

We consider three cases: (a) ; (b) ; (c) .

While , we can see from equation (26) that

Since are integers for any , then

According to the above formula,

While , we can get by formula (28) that

Since Qk is an integer for any ,

According to the above formula,

While , through a similar argument to the second case, we get

Therefore, according to (33)–(41), we know that the algorithm will terminate by driving the potential function to a minimum point within at most decision slots:

For the general case, the numerical results of the previous section indicate that the distributed computation offloading algorithm also converges quickly, and the number of convergence decision slots increases (almost) linearly as the number of users N increases. The inference in this section further indicates that the distributed computation offloading algorithm converges quickly under normal conditions and has a quadratic upper bound on convergence time. Note that in the simulation experiment, the transmission power and channel gain are nonnegative (i.e., ), so we know that . The nonnegativity condition ensures that each user has the opportunity to implement a beneficial computation offloading (otherwise, the user can only ever choose local computing). Therefore, the algorithm makes sense only when and are nonnegative integers. , , and can be obtained from known conditions, and then the algorithm can be solved.
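The finite improvement argument can be illustrated with a toy congestion-style game: one user updates per decision slot (mimicking the single UP grant), and the asynchronous best-response dynamics stop after finitely many slots, once no user can improve. The cost model here (unit channel cost times co-channel load, plus fixed local costs) is an assumption for illustration, not the paper's overhead function:

```python
def cost_of(choice, decisions, base_cost):
    """Hypothetical per-user cost: fixed local cost, or channel load."""
    if choice == 'local':
        return base_cost
    load = sum(1 for d in decisions.values() if d == choice)
    return 1.0 * load  # unit channel cost times number of co-channel users

def run_until_nash(decisions, base_costs, channels, max_slots=100):
    """Asynchronous best-response updates; one user updates per decision slot."""
    for slot in range(1, max_slots + 1):
        updated = False
        for user, cur in decisions.items():
            options = ['local'] + channels
            def cost(c):
                trial = dict(decisions, **{user: c})
                return cost_of(c, trial, base_costs[user])
            best = min(options, key=cost)
            if cost(best) < cost(cur):
                decisions[user] = best  # only this user receives the UP grant
                updated = True
                break
        if not updated:
            return decisions, slot  # no RTU sent: a Nash equilibrium
    return decisions, max_slots

decisions, slots = run_until_nash(
    {'u1': 'local', 'u2': 'local', 'u3': 'local'},
    {'u1': 3.0, 'u2': 3.0, 'u3': 1.5},
    channels=['ch0', 'ch1'])
print(decisions, slots)
# {'u1': 'ch0', 'u2': 'ch1', 'u3': 'local'} 3
```

In this toy instance the two high-local-cost users spread over the two channels and the third stays local, so the potential-minimizing equilibrium is reached in three decision slots, consistent with the finite bound derived above.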

7. Simulation and Analysis

7.1. Experiment Settings

In this experiment, the face recognition algorithm [40] is used as the computation task, and MATLAB is used to simulate the computation offloading framework proposed in this paper. We set up 5 edge server nodes in each experimental scenario, comprising 5 wireless base stations and 5 edge servers, with 30 virtual machines running on each edge server; the computing power of each virtual machine is set to 10 GHz [40]. The coverage of the base station is 100 meters [27], edge users are randomly distributed within the coverage [27], and each task is executed by a single virtual machine. The data parameters [3, 33, 41–44] used in the simulation experiment are shown in Table 3.

7.2. Analysis of Simulation Results

In this section, we analyze the simulation results and discuss the performance of the framework proposed.

7.2.1. Performance Analysis of Beneficial Computation Offloading Decision Strategy

We consider two indicators, the number of beneficial computation offloading users and the real-time system overhead, and use them to evaluate the beneficial computation offloading decision strategy adopted in this paper. Three experiments are set up for analysis: (1) measuring how the number of beneficial computation offloading users changes with the decision slot among the 50 users performing computation offloading; (2) measuring how the users' real-time system overhead changes with the decision slot among the same 50 users; (3) comparing how the number of beneficial computation offloading users changes with the number of users under different computation offloading decision strategies (random computation offloading decision, beneficial computation offloading decision, and full computation offloading decision). All test data in this paper are averages over 100 trials.

Figure 3 shows the dynamic process of the number of beneficial computation offloading users under the beneficial computation offloading decision strategy: the strategy increases the number of beneficial computation offloading users in the system and converges to an equilibrium. Figure 4 shows the dynamic process of the overall system cost of the mobile device users under the same strategy: the strategy keeps the total overhead of the mobile device user system decreasing during computation offloading until it eventually converges to an equilibrium. Figure 5 shows that, provided each edge service node in the mobile edge network has sufficient computing resources, the number of beneficial computation offloading users increases with the number of users, and the computation offloading decision strategy used in this paper improves on the other two strategies by 30%.

7.2.2. Performance Analysis of Cov-AHP Based Server Selection Strategy

We focus on the stability evaluation of the Cov-AHP-based server selection strategy used in this paper. Two experiments are set up: (1) compare the server weight of different server selection strategies in the same region [45] with the change of the task execution time; (2) compare the server weight of the server selection strategy used in this paper with the change of the task execution time in different regions. In this experiment, we set the performance parameters of the server to be more realistic, and the performance of the server has advantages and disadvantages. The specific parameters are shown in Table 4.

Figure 6 shows how the server weights under different server selection strategies fluctuate with the task execution time in the same area. The Cov-AHP-based server selection strategy effectively reduces the weight fluctuation of each server in the same area, so that the users' offloading decisions remain relatively stable, and it also effectively achieves load balancing among servers in the same area.

Figure 7 shows how the server weights under the server selection strategy adopted in this paper vary with the task execution time in different regions, where the regions depend on the users' locations. The Cov-AHP-based strategy effectively maintains the weight of each server across different regions, keeping the weight fluctuation small within each region, which indicates that it can efficiently achieve load balancing among servers in different areas within the coverage of the base station.

7.2.3. Performance Analysis of Multiuser Multichannel Distributed Computation Offloading Strategy Based on Potential Game

In this section, we focus on two important indicators, the user's computation offloading overhead and the task completion time, during channel selection and computation offloading, and use them to evaluate the potential-game-based multiuser multichannel distributed computation offloading strategy adopted in this paper. Two experiments are set up: (1) comparing the total user overhead of different channel selection strategies [2] during computation offloading; (2) comparing the time that different channel selection strategies [2] take to complete the computation task. Among them, the centralized computation offloading algorithm uses a global optimization method to compute the overall cost of centralized computation offloading and requires the edge server to interact continuously with the edge users; it has been proved able to effectively find approximately optimal solutions to complex combinatorial optimization problems.

Figure 8 shows how the total user overhead of different channel selection strategies for completing the computation offloading process varies as the number of users increases. The potential-game-based multiuser multichannel distributed computation offloading strategy has a clear advantage over random computation offloading and full local computing, roughly halving the overhead on average. The centralized computation offloading strategy achieves user overhead almost identical to, or even slightly better than, the strategy in this paper, but it pays a large price in delay. Figure 9 shows that the task completion time of every channel selection strategy increases with the number of users, and the strategy used in this paper greatly reduces the task completion time compared with the other strategies. This is because our strategy makes the most suitable decision for each user based on that user's own channel conditions, whereas the centralized offloading decision must collect all users' information for centralized analysis, which greatly increases the additional delay and degrades the users' QoS.

8. Conclusions

This paper proposes a multilevel computation offloading optimization framework for multiuser multichannel multiserver scenarios in MEC to meet the needs of computation-intensive applications on mobile devices. From the perspective of edge users, delay and energy consumption are used as the basis for the computation offloading decision, and the concept of beneficial computation offloading is proposed. Through the Cov-AHP multiobjective decision-making strategy, we comprehensively evaluate the candidate edge servers and select the appropriate server for computation offloading. We prove that the multiuser multichannel distributed computation offloading satisfies the potential game conditions, which implies that the potential game always has a Nash equilibrium point, and we design a distributed computation offloading algorithm that reaches this Nash equilibrium. The simulation results show that the proposed offloading framework is more stable than similar methods, effectively reduces the delay and the overall energy consumption cost of the edge client, and improves both the execution speed of computation offloading and the standby time of the mobile device.

In future work, we will consider how edge devices optimize computation offloading in ad hoc networks, which will make it possible for users to share computing resources in more urgent situations.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant no. U1603261) and the Natural Science Foundation Program of Xinjiang Province (Grant no. 2016D01A080).