Abstract

Offloading the computations of mobile devices (MDs) to the cloud is an effective way to overcome local resource constraints. However, cloud servers are usually located far away from MDs, leading to a long response time. Edge cloud servers (ECSs), being closer to MDs, provide a shorter response time. In this paper, we propose a computing offloading game between MDs and ECSs and prove the existence of a Stackelberg equilibrium in the game. In addition, we propose two algorithms, F-SGA and C-SGA, for delay-sensitive and compute-intensive applications, respectively. F-SGA reduces the response time by making decisions quickly, whereas C-SGA obtains the optimal decision that achieves the equilibrium. Both proposed algorithms can adjust the computing resources and utilities of system users by controlling the offloading parameters. The simulation results show that the game significantly saves the computing resources and response time of both the MD and the ECSs during the computing offloading process.

1. Introduction

The popularity of the Internet of Things (IoT) allows people to enjoy the convenience of the Internet in most scenarios of daily life. For mobile devices (MDs) in particular, network services provide convenience and new functionality. However, computationally intensive applications such as augmented reality [1] and face recognition [2] consume a large amount of energy and computing time on MDs. Moreover, MDs are mobile and portable but have limited CPU performance and battery power. Mobile cloud computing (MCC) is therefore seen as an effective way to alleviate the shortage of local resources by offloading computations to cloud infrastructure with abundant computational power [3–5]. A popular approach is to offload computing tasks to public clouds such as Windows Azure and Amazon EC2. Although MCC provides considerable cloud resources, it cannot guarantee a low response time, and the resulting delay degrades the user experience.

Edge cloud computing (ECC) is promising for mobile computing offloading; it is also considered an enabler of 5G mobile networks because edge servers are located near the edge of the network [6, 7], and it has been extensively studied in recent years [8, 9]. As illustrated in Figure 1, edge cloud servers (ECSs) are closer to users, which greatly reduces the time for data transmission. Therefore, offloading computations to ECSs, even though they have fewer resources than the cloud, is considered a more advantageous solution. Some previous works on ECC focus on reducing energy consumption, such as [10, 11]. Although [12] considers the energy consumption of MDs and cloud servers in computing offloading, it does not analyze the computing performance of MDs and cloud servers. Refs. [13, 14] focus on enhancing computing performance. However, the fact that ECSs are usually leased in real-life scenarios has rarely attracted the attention of researchers.

In this paper, we consider the equilibrium problem in the computing offloading process, which takes into account not only the needs of MDs but also the maximum benefits of service providers. For this reason, we propose a strategy for computing offloading by computing the equilibrium between a MD and ECSs. Liu et al. [15] considered the computing offloading process between a remote cloud server and several ECSs, which concerns the benefits of different levels of service providers. However, they did not pay attention to the needs of mobile users, which should also be taken seriously because of their mobility, portability, and limited resources [3]. Ref. [16] considered a mobile user that maximizes its performance by choosing among wireless access points for computing offloading, but the benefit side of mobile users was not considered. In this paper, we mainly focus on the features of mobile users, which include mobility, cost, and so on. We design an efficient computing offloading strategy for the scenario in which a MD executes computing offloading through ECSs. Suppose a MD needs to offload its computing tasks to a set of ECSs. The MD needs to negotiate an offloading policy with the ECSs to optimize the offloading efficiency of both the MD and the ECSs. For this reason, we propose a strategy between the MD and the ECSs, and we formulate the computing offloading process between them as a Stackelberg game. In particular, the MD increases its computing efficiency by offloading complicated computation to the ECSs, and the ECSs obtain revenue by performing the computations offloaded by the MD. An equilibrium between the benefits of the MD and the ECSs is achieved through our proposed strategy. We make three important contributions in this paper.
(i) Considering a realistic offloading scenario, we formulate the interaction of the MD and the ECSs during the computing offloading process as a Stackelberg game.
(ii) We prove the existence of an equilibrium in the Stackelberg game. Furthermore, we propose C-SGA for computing the equilibrium, and we design F-SGA for delay-sensitive applications, which greatly reduces response time.
(iii) We verify the performance of the proposed algorithms via simulation experiments. The results show that the proposed algorithms increase the efficiency of computing offloading. In addition, we provide a detailed analysis of how the performance of the strategy changes with the values of the different parameters.

The rest of the paper is organized as follows. We present the related work in Section 2. We present our system model and formulate the problem in Section 3. We analyze the Stackelberg game of our model in Section 4. We present our algorithms in Section 5. Performance evaluation is provided in Section 6. We conclude this paper in Section 7.

2. Related Work

Much previous work has been done on computing offloading [17–19]. The emergence of computing offloading techniques can be traced back to the concept of Cyber Foraging [20], which reduces the computation, storage, and energy burden of MDs by offloading their tasks to nearby servers with sufficient resources. The main goals of computing offloading include expanding CPU capacity, saving the energy of MDs, reducing service delays, and saving computational cost. Most early computing offloading techniques used static partitioning schemes, relying on programmers to statically divide the application into two parts: one part executed on the MD and the other executed on the server. Li et al. [21] proposed a partitioning approach based on energy consumption: the communication energy consumption depends on the size of the transmitted data and the network bandwidth, while the computational energy consumption depends on the number of instructions of the program. They obtained an optimized program partitioning based on the computation and communication consumption. Yang et al. [22] comprehensively considered the use of multiple resources, including CPU, memory, and communication costs, and offloaded some tasks of the MD to a nearby laptop with adequate resources.

MAUI [23] was proposed to provide a general dynamic computing offloading solution and minimize the burden on developers: the programmer only needs to divide the application into local methods and remote methods without having to specify an offloading decision for each program. In order to solve the problem of excessive transmission delay in wide area networks (WANs), researchers have considered offloading the tasks of MDs to infrastructure that is closer to the information source. Satyanarayanan et al. [24] first proposed the concept of the Cloudlet, defined as a trusted, resource-rich computing device or group of computing devices that provides computing to nearby mobile terminals. Patel et al. [25] proposed the concept of mobile edge computing (MEC), which provides powerful computing capability in the wireless access network close to mobile users. MEC runs at the edge of the network and is logically independent of the rest of the network, which is important for applications with high security requirements [26, 27]. In addition, ECSs are particularly suited to massive analyses and mass data. At the same time, since ECSs are geographically close to users, the delay in responding to user requests is greatly reduced, and the possibility of congestion in the transmission network and the core network is also reduced [28]. There have been several previous studies on computing offloading using ECSs [29–31]. Wang et al. [32] proposed a MEC-WPT design for computing offloading by considering a multiuser mobile edge cloud system. Ref. [33] explored how edge computing infrastructure can improve latency and energy consumption relative to the cloud by analyzing multiple types of mobile applications, demonstrating that the use of edge computing platforms in WiFi and LTE networks can significantly improve the latency of interactive and compute-intensive applications. Sardellitti et al. [34] proposed a QoS-based incentive mechanism for mobile data offloading; such incentive mechanisms are used in quality-aware systems to stimulate user participation and enhance system robustness [35]. Neto et al. [36] designed a mobile offloading framework with an original decision engine, which can significantly reduce energy consumption.

Some previous work has conducted extensive research on game theory, including games that can be applied to computing offloading [37, 38]. Chen et al. [39] proposed an approach to computing offloading for mobile cloud computing and designed a computation offloading mechanism that achieves a Nash equilibrium of the game. Wang et al. [40] proposed an analysis framework based on evolutionary game theory. Xu et al. [41] designed a security-aware incentive mechanism for computation offloading by using game theory and epidemic theory.

3. System Model

Following the primary deployment model of MEC [25], we consider an edge system consisting of a set of ECSs and several MDs. We assume that a MD decides to offload its computations to a set of ECSs. For each ECS, we denote its total computation and the amount of computation offloaded to it from the MD.

3.1. Mobile Device

We assume a computation offloading profile for the MD. Given this profile, the local remaining (unoffloaded) computation is determined, as is the computation that would be performed locally if no offloading took place. The cost of local computing for the mobile device depends on these quantities and on a modeling parameter. A payment profile is likewise defined; given this profile, the payment for offloading is determined by the offloading unit price and the amount of offloaded computation. We consider the time consumed by performing computing offloading as part of the offloading cost, which includes the time to compute the local remaining computations and the time to transfer the data needed for offloading. The MD is characterized by its computational capability and its transmission capacity. The total time consumed by offloading thus combines the time for computing the local remaining computations, which depends on the complexity of the computation, and the time for transferring the data needed for offloading, which depends on a coefficient relating the offloaded computation to the amount of data that must be transferred.

Therefore, the cost of the MD for performing computing offloading is determined by the computing time, the transmission time, and the payment to the ECSs. We define the local utility of the MD as the cost reduction achieved by performing offloading.
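For concreteness, a cost structure of this general form can be sketched as follows; the symbols used here (x_i for the computation offloaded to ECS i, p_i for the unit price, L for the MD's total computation, f_m and r_m for the MD's computational and transmission capabilities, and gamma, delta for the complexity and data-size coefficients) are illustrative placeholders rather than the paper's original notation.

\begin{align}
  T_{\mathrm{comp}}(x) &= \frac{\gamma\left(L - \sum_i x_i\right)}{f_m}
    && \text{time for the remaining local computation}\\
  T_{\mathrm{tx}}(x) &= \frac{\delta \sum_i x_i}{r_m}
    && \text{time for transferring the offloaded data}\\
  C_{\mathrm{off}}(x, p) &= T_{\mathrm{comp}}(x) + T_{\mathrm{tx}}(x) + \sum_i p_i x_i
    && \text{total offloading cost}\\
  U_{\mathrm{MD}}(x, p) &= C_{\mathrm{loc}} - C_{\mathrm{off}}(x, p)
    && \text{utility: cost reduction, with } C_{\mathrm{loc}} \text{ the purely local cost}
\end{align}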

3.2. Edge Servers

ECSs usually have their own computations to perform, as shown in Figure 1. When the ECSs decide whether to perform computing offloading for the MD, they must take their own computations into account. Each ECS is characterized by the unit revenue it earns for performing its own computations and by its computational capability. Similar to previous work [42], we ignore the time for transmitting the computation results.

The profit of each ECS is determined by the revenue from the computation performed for its own users, the payment received for the offloaded computation, and the cost of performing the computation.

The utility of each ECS, defined as the profit improvement obtained by performing computing offloading, is the difference between its profit with offloading and its profit without offloading.
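As an illustration of how such a utility can behave, consider the following hedged form, where x_i is the computation ECS i accepts for offloading, p_i the unit payment, r_i the unit revenue from its own users, f_i its computational capability, and c_i a cost coefficient; these symbols and the quadratic cost term are assumptions made for illustration only, not the paper's exact model.

\begin{align}
  U_i(x_i) &= \underbrace{p_i x_i}_{\text{payment received}}
            \;-\; \underbrace{r_i x_i}_{\text{revenue forgone from own users}}
            \;-\; \underbrace{\frac{c_i x_i^{2}}{f_i}}_{\text{convex computing cost}}
\end{align}

A form like this is consistent with the behavior described later: the ECS offloads nothing when the offered payment does not exceed its unit revenue, and otherwise offloads more as the payment grows.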

3.3. Problem Formulation

The interaction between the MD and the ECSs is formulated as a Stackelberg game. Our strategy has two steps, as shown in Figure 2. In the first step, the MD proposes a payment profile. In the second step, the ECSs respond with the corresponding amounts of computation for offloading.

We assume that the MD first gives an initial payment; the optimal decision of each ECS is then obtained by solving its utility-maximization problem, given in (13).

Next, after receiving the decisions of the ECSs, we obtain the optimal strategy by maximizing the utility of the MD given in (10).
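The resulting two-level structure can be summarized, under the illustrative notation introduced above, as the following leader-follower problem; the bounds and constraint sets are placeholders for those defined in the original formulation.

\begin{align}
  \text{(follower, each ECS } i\text{):}\quad
    & x_i^{*}(p_i) = \arg\max_{0 \le x_i \le X_i} \; U_i(x_i; p_i) \\
  \text{(leader, the MD):}\quad
    & \max_{p_{\min} \le p_i \le p_{\max}} \; U_{\mathrm{MD}}\bigl(p, x^{*}(p)\bigr)
\end{align}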

4. Stackelberg Game Analysis

As discussed above, the MD determines the payment profile for computation offloading to the ECSs, while the ECSs provide the corresponding amounts of computation. We model the problem as a two-step Stackelberg game between the MD and the ECSs based on noncooperative game theory. Explicitly, as shown in Figure 2, the MD acts as the leader, proposing the payment profile. As followers, the ECSs adjust their decisions accordingly.

4.1. Stackelberg Game Design

In this work, we denote the optimal strategy of each ECS, obtained by solving problem (13), the corresponding optimal strategy profile, and the optimal payment for each ECS. A Nash equilibrium of the Stackelberg game is a strategy profile in which none of the ECSs can further improve its profit by unilaterally changing its strategy.

Definition 1. A strategy profile is a Nash equilibrium of the Stackelberg game if it satisfies the following conditions:

where the payment of each ECS is restricted to lie between a lower bound and an upper bound.
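In the illustrative notation above, Definition 1 can be read as requiring a pair (p*, x*) such that neither side benefits from a unilateral deviation:

\begin{align}
  U_{\mathrm{MD}}\bigl(p^{*}, x^{*}(p^{*})\bigr) &\ge U_{\mathrm{MD}}\bigl(p, x^{*}(p)\bigr)
    && \forall\, p \text{ with } p_{\min} \le p_i \le p_{\max}, \\
  U_i\bigl(x_i^{*}; p_i^{*}\bigr) &\ge U_i\bigl(x_i; p_i^{*}\bigr)
    && \forall\, x_i \in [0, X_i],\; \forall\, i.
\end{align}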

4.2. Nash Equilibrium Analysis

In this part, we analyze the proposed Stackelberg game and prove the existence and uniqueness of its Nash equilibrium through the following lemmas.

Lemma 2. The strategy profile set of the ECSs is nonempty, convex, and compact.

Proof. First, according to the characteristics of the ECSs, we calculate the first-order derivative of each ECS's utility with respect to its strategy, given in (16), and then the corresponding second-order derivative. Setting (16) to zero yields the optimal strategy of each ECS, given in (18). Setting this optimal strategy to zero yields the lower bound of the payment for each ECS, and setting it to its maximum value yields the upper bound. Therefore, the strategy set of each ECS is a nonempty, convex, and compact interval, and Lemma 2 is proved.

Lemma 3. Each ECS has a unique optimal strategy after receiving the MD's strategy.

Proof. Obviously, if the MD offers a payment above the upper bound, the ECS performs all of its computation for the MD; if the payment is below the lower bound, the ECS does not perform any offloading.
When the payment lies between the two bounds, the expression of the optimal strategy is given by (18). Relation (21) implies that the optimal strategy is an increasing function of the payment, which means that the higher the payment the MD offers, the more computation the ECS provides. Equation (22) implies that the ECS utility is a concave function of its strategy. Since the second-order derivative is strictly negative, the utility is strictly concave in the strategy, and thus the optimal strategy is unique. Lemma 3 is proved.
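As a worked example under the illustrative quadratic utility sketched in Section 3.2 (an assumption, not the paper's exact expression), the first-order condition gives

\begin{align}
  \frac{\partial U_i}{\partial x_i} = p_i - r_i - \frac{2 c_i x_i}{f_i} = 0
  \quad\Longrightarrow\quad
  x_i^{*}(p_i) = \frac{f_i\,(p_i - r_i)}{2 c_i},
  \qquad
  \frac{\partial^2 U_i}{\partial x_i^{2}} = -\frac{2 c_i}{f_i} < 0,
\end{align}

so the best response is unique, increases linearly with the payment p_i, and is zero whenever p_i \le r_i, which matches the monotonicity and concavity arguments above.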

Lemma 4. Given the optimal strategies of the ECSs, the MD has a unique optimal strategy.

Proof. We first compute the first-order and second-order partial derivatives of the MD utility with respect to the payment, given in (23) and the subsequent equations. We then define two auxiliary matrices and express the Hessian matrix of the MD utility in terms of them. For any nonzero column vector z, the quadratic form of the Hessian can be bounded using (23) together with the signs of the model parameters, which yields inequality (32). From (32), the MD utility is strictly concave, and thus (15) is a convex optimization problem. Since the quadratic form is strictly negative, the maximizer is unique, and it is the optimal strategy of the MD. Lemma 4 is proved.

With Lemmas 2, 3, and 4 in place, we prove the following theorem.

Theorem 5. A unique Nash equilibrium exists among the MD and the ECSs in our proposed Stackelberg game.

5. Algorithm Design

In this section, we present the algorithms; the overall process of the strategy is shown in Figure 3. We propose two algorithms, F-SGA and C-SGA, for delay-sensitive services and compute-intensive services, respectively. Generally, delay-sensitive services require a strict response time, which requires the system to make decisions quickly, while compute-intensive services require a large amount of computing by the servers. F-SGA, a fast Stackelberg game algorithm, can make decisions quickly due to its simple decision mechanism. C-SGA, a complex Stackelberg game algorithm, is slower than F-SGA in decision-making speed, but it provides a more accurate price and computation amount to maximize the benefits of the MD and the ECSs. F-SGA and C-SGA are described in detail as follows.

5.1. F-SGA

We propose F-SGA to quickly achieve an equilibrium, as shown in Algorithm 1. Because the algorithm can quickly obtain the optimal objective for the MD and the ECSs, it is suitable for computing offloading of latency-sensitive applications.

Input:  
Output:  .
1: if    then
2:Let ;
3:Let ;
4:Let ;
5: else
6:Let ;
7: end if
8: for all   such that   do
9:Calculate according to (18);
10: end for
11: Calculate according to (10);

As discussed before, we set two points corresponding to the lower and upper bounds of the payment and initialize them accordingly. We now describe F-SGA in detail. The ECSs first send their payment information to the MD. After receiving it, F-SGA proceeds by comparing the difference between the two points. When the difference in payment reported by the ECSs is small, we simply compromise on the payment for computing offloading; when the difference is large, F-SGA still determines the payment quickly, but in a more detailed way, as sketched below.
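The following Python sketch illustrates only this decision structure under stated assumptions: best_response, md_utility, and choose_detailed_payment are hypothetical placeholder names standing in for equation (18), equation (10), and the more detailed branch of Algorithm 1, respectively, and a single scalar payment is used instead of a per-ECS profile for simplicity.

# Illustrative sketch of the F-SGA decision structure; the helper names below
# are hypothetical placeholders, not the paper's implementation.

def best_response(price, ecs):
    # Placeholder for the ECS's optimal offloaded computation, equation (18).
    raise NotImplementedError

def md_utility(price, offloads):
    # Placeholder for the MD's utility, equation (10).
    raise NotImplementedError

def choose_detailed_payment(p_low, p_high):
    # Placeholder for the more detailed payment choice used when the gap is large.
    raise NotImplementedError

def f_sga(p_low, p_high, ecs_list, gap_threshold):
    # One-shot decision with no iteration, so the response time stays low.
    if p_high - p_low <= gap_threshold:
        price = (p_low + p_high) / 2              # small gap: compromise directly
    else:
        price = choose_detailed_payment(p_low, p_high)
    offloads = [best_response(price, e) for e in ecs_list]
    return price, offloads, md_utility(price, offloads)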

5.2. C-SGA

The algorithm proposed in Algorithm 1 needs less time and has a faster response speed, but it directly determines the payment, which leads to an increase in the cost of computing offloading. We next propose C-SGA, which uses fine-grained iteration to obtain the optimal strategy. C-SGA is shown in Algorithm 2.

Input:  
Output:  .
1: Let ;
2: Let ;
3: while    do
4:for all   such that   do
5:Calculate according to (18);
6:end for
7:Calculate according to (10);
8:for all   such that   do
9:Calculate according to (18);
10:end for
11:Calculate according to (10);
12:if    then
13:Let ;
14:else
15:Let ;
16:end if
17: end while
18: Let ;
19: for all   such that   do
20:Calculate according to (18);
21: end for
22: Calculate according to (10);

In contrast, C-SGA only needs the ECSs to compare the bounds of the payment, which not only greatly reduces the computation of the ECSs but also obtains a more optimized payment. Therefore, C-SGA is suitable for situations where the ECSs have fewer computation resources. The process of C-SGA is summarized as follows; a sketch of the search procedure is given after the list.
(1) The MD first sends an initial value of the payment. With this strategy, the ECSs compute the optimal computation for offloading based on (18) and send the results to the MD.
(2) After receiving the results, the MD calculates the utility of the strategy at the two candidate payments and compares them. The comparison indicates on which side of the interval the optimal value must lie, and the interval bounds are updated accordingly.
(3) The MD and the ECSs continue to perform procedures (1) and (2) until the interval is narrower than a threshold that controls the accuracy of the algorithm.
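The Python sketch below illustrates the interval-narrowing idea behind C-SGA under the same assumptions: best_response and md_utility are hypothetical placeholders for equations (18) and (10), eps stands for the accuracy threshold, a single scalar payment replaces the per-ECS profile, and the two-probe comparison relies on the strict concavity of the MD utility established in Lemma 4.

# Illustrative sketch of the C-SGA search; the helper names and the probe rule
# are assumptions standing in for the paper's exact steps.

def best_response(price, ecs):
    # Placeholder for the ECS's optimal offloaded computation, equation (18).
    raise NotImplementedError

def md_utility(price, offloads):
    # Placeholder for the MD's utility, equation (10).
    raise NotImplementedError

def c_sga(p_low, p_high, ecs_list, eps):
    # Narrow [p_low, p_high] until it is shorter than eps, then use the midpoint.
    while p_high - p_low > eps:
        p1 = p_low + (p_high - p_low) / 3         # two interior probe payments
        p2 = p_high - (p_high - p_low) / 3
        u1 = md_utility(p1, [best_response(p1, e) for e in ecs_list])
        u2 = md_utility(p2, [best_response(p2, e) for e in ecs_list])
        if u1 < u2:
            p_low = p1                            # the optimum lies to the right of p1
        else:
            p_high = p2                           # the optimum lies to the left of p2
    price = (p_low + p_high) / 2
    offloads = [best_response(price, e) for e in ecs_list]
    return price, offloads, md_utility(price, offloads)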

Theorem 6. The proposed Algorithm 2 can reach a unique Nash equilibrium.

Proof. Since the MD's optimization problem is strictly convex (its utility is strictly concave), the optimal strategy is unique. We can obtain this optimal payment by Algorithm 2, which maximizes the utility of the MD. Then, each ECS determines its corresponding strategy to maximize its own utility. Consequently, the MD obtains a definitive optimal utility. Therefore, Algorithm 2 can reach a unique Nash equilibrium.

6. Performance Evaluation

We use extensive simulations to verify our proposed strategy and algorithms. We set up 3 ECSs as the default configuration for our simulations, with a total computation of 100. We consider that the MD and the ECSs are placed over a region: the ECSs are located at grid points, while the MDs are placed uniformly at random. In the simulations, we also fix the modeling parameter and the computation complexity parameter.

6.1. The Utility of the Mobile Device

In this part, we analyze the changes in the utility of the MD under different conditions. First, we analyze the relationship between the utility of the MD and the revenue of the ECSs. Figure 4 shows the utility of the MD as the unit revenue that the ECSs obtain from their own users varies. As shown in the figure, regardless of whether the parameter is 0.4, 0.6, or 0.8, as the unit revenue increases, the ECSs prefer providing computation to their own users. As a result, the amount of computation provided for offloading is reduced, and therefore the utility of the MD decreases.

Then we analyze how the payment of the MD affects its utility. Figure 5 shows the relationship between the payment from the MD and the utility of the MD under different values of the parameter. As the payment from the MD increases, its utility increases at first; after reaching a peak, the utility drops. This is because, as the payment increases, the ECSs provide more computation; however, as the payment keeps increasing, the cost of the MD also rises, so the utility is reduced.

6.2. The Utility of the Edge Servers

Figure 6 shows the average utility of the ECSs as the unit revenue varies. With the increase of the unit revenue, the utility of the ECSs increases. The high utility of the ECSs is attributed to the fact that they are willing to provide computation to their own users, leading to efficient computation offloading. Interestingly, the ECSs achieve less utility when the revenue is large enough. As we know, ECSs are profit driven to provide computation for their users. However, a large unit revenue is a double-edged sword for the ECSs: although the edge servers are more willing to serve their own users, it may result in less computing offloading.

Figure 7 shows the trend of the ECSs' utility as a function of the payment from the MD under three different parameter settings. As the figure shows, the utility of the ECSs first increases as the payment from the MD increases and then decreases after reaching a peak. Initially, the increase in the payment raises the utility of the ECSs, but this also increases the cost of the MD, so the utility does not continue to increase.

6.3. Offloading Computations

In order to evaluate how the unit revenue impacts the computation for offloading, we set three different unit revenues in the simulation. As can be seen from Figure 8, the trends under the different unit revenues are similar: the computation for offloading increases as the payment increases. In other words, in all three cases, the offloaded computation grows as the payment from the MD grows. Note that when the unit revenue is greater than the MD's given payment, the ECSs do not perform any offloading.

Figure 9 shows the relationship between the offloaded computation and the utility of the MD, as well as the relationship between the offloaded computation and the utility of the ECSs. The simulation results show that, within a certain range, the greater the offloaded computation, the higher the utility of the MD; at the same time, the utility of the ECSs increases. This is expected: under the proposed strategy, when the offloaded computation increases, the utility of the MD increases accordingly, and the ECSs also benefit. We can therefore say that, within a certain range, the more computation is offloaded, the more both sides benefit.

6.4. The Impact of the Utility and Response Time

We are interested in the impact of the parameter that represents the accuracy of the proposed strategy, so we set a series of values in the range of 0.2 to 1.5. Figure 10 shows the simulation results. As this value increases, the accuracy of our strategy increases, and the utility of the MD and the profit of the ECSs both increase. A more accurate calculation of the benefits of offloading is thus more beneficial to both the MD and the ECSs.

Figure 11 shows the response time of F-SGA and C-SGA under the same conditions during computing offloading. As can be seen from the figure, as the accuracy parameter increases, the response time of C-SGA increases, because C-SGA needs more rounds of judgment and iteration. In contrast, the response time of F-SGA remains essentially constant at a low level, since F-SGA always makes decisions quickly regardless of the value of the parameter.

7. Conclusion

In this paper, we proposed a game for computing offloading between a MD and ECSs. We provided a Stackelberg game theoretic analysis and proved the existence of the equilibrium in the Stackelberg game. Furthermore, we proposed two algorithms for different scenarios and provided the upper and lower bounds of the payment. Moreover, multiple optimization results were obtained by adjusting the model parameters. The simulation results showed that the game is effective in improving the utility of both the MD and the ECSs.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants no. 61602155, no. 61871430, no. U1604155, and no. 61370221; in part by the Henan Science and Technology Innovation Project under Grant no. 174100510010; in part by the industry-university research project of Henan Province under Grant no. 172107000005; and in part by the basic research projects in the University of Henan Province under Grant no. 19zx010.