Research Article  Open Access
Meiwen Li, Qingtao Wu, Junlong Zhu, Ruijuan Zheng, Mingchuan Zhang, "A Computing Offloading Game for Mobile Devices and Edge Cloud Servers", Wireless Communications and Mobile Computing, vol. 2018, Article ID 2179316, 10 pages, 2018. https://doi.org/10.1155/2018/2179316
A Computing Offloading Game for Mobile Devices and Edge Cloud Servers
Abstract
Computing offloading from mobile devices (MDs) to the cloud is an effective way to overcome local resource constraints. However, cloud servers are usually located far away from MDs, leading to long response times. Edge cloud servers (ECSs), being closer to MDs, provide shorter response times. In this paper, we propose a computing offloading game for MDs and ECSs and prove the existence of a Stackelberg equilibrium in the game. In addition, we propose two algorithms, FSGA and CSGA, for delay-sensitive and compute-intensive applications, respectively. FSGA reduces the response time by making decisions quickly, while CSGA obtains an optimal decision by computing the equilibrium. Both algorithms can adjust the computing resources and the utilities of system users by controlling the parameters of computing offloading. The simulation results show that the game significantly saves the computing resources and response time of both the MD and the ECSs during the computing offloading process.
1. Introduction
The popularity of the Internet of Things (IoT) allows people to enjoy the convenience of the Internet in most scenarios of daily life. For mobile devices (MDs) in particular, network services provide convenience and new functional possibilities. However, computation-intensive applications such as augmented reality [1] and face recognition [2] consume a large amount of energy and computing time on MDs. Moreover, the MD is characterized by mobility and portability, but also by poor CPU performance and limited battery power. Mobile cloud computing (MCC) is therefore seen as an effective way to solve the shortage of local resources by offloading computations to cloud infrastructure with remarkable computational power [3–5]. The popular approach is to offload computing tasks to public clouds such as Windows Azure and Amazon EC2. Although MCC provides considerable cloud resources, it cannot guarantee a low response time, and the resulting delay degrades the user experience.
Edge cloud computing (ECC) is promising for mobile computing offloading and is also considered a promoter of 5G mobile networks because edge servers are located near the edge of the network [6, 7]; it has been extensively studied in recent years [8, 9]. As illustrated in Figure 1, edge cloud servers (ECSs) are closer to users, which greatly reduces the time for data transmission. Therefore, offloading computations to ECSs, even though they have fewer resources than remote clouds, is considered a more advantageous solution. Some previous works on ECC focus on reducing energy consumption, such as [10, 11]. Although [12] considers the energy consumption of MDs and cloud servers in computing offloading, it does not analyze their computing performance. Refs. [13, 14] focus on enhancing computing performance. However, the fact that ECSs are usually leased in real-life scenarios has rarely attracted the attention of researchers.
In this paper, we consider the equilibrium problem during the computing offloading process, which takes into account not only the needs of MDs but also the benefits of service providers. To this end, we propose a strategy for computing offloading that computes the equilibrium between a MD and ECSs. Liu et al. [15] considered the computing offloading process between a remote cloud server and several ECSs, which concerns the benefits of different levels of service providers. However, they did not pay attention to the needs of mobile users, which should also be taken seriously because of their mobility, portability, and limited resources [3]. Ref. [16] considered mobile users that maximize their performance by choosing wireless access points for computing offloading, but it did not consider the benefit of mobile users. In this paper, we mainly focus on the features of mobile users, including mobility, cost, and so on. We design an efficient computing offloading strategy for the scenario in which a MD executes computing offloading through ECSs. Suppose a MD needs to offload its computing tasks to a set of ECSs. The MD must negotiate an offloading policy with the ECSs to optimize the offloading efficiency of both sides. We therefore formulate the computing offloading process between the MD and the ECSs as a Stackelberg game. In particular, the MD increases its computing efficiency by offloading complicated computations to the ECSs, and the ECSs obtain revenue by performing the computations offloaded by the MD. Our proposed strategy achieves an equilibrium between the benefits of the MD and those of the ECSs.
We make three important contributions in this paper.
(i) Considering a realistic offloading scenario, we formulate the interaction of the MD and the ECSs during the computing offloading process as a Stackelberg game.
(ii) We prove the existence of an equilibrium in the Stackelberg game. Furthermore, we propose CSGA for computing the equilibrium, and we design FSGA for delay-sensitive applications, which greatly reduces the response time.
(iii) We verify the performance of the proposed algorithms via simulation experiments. The results show that the proposed algorithms increase the efficiency of computing offloading. In addition, we perform a detailed analysis of the performance of the strategy under different parameter values.
The rest of the paper is organized as follows. We present the related work in Section 2. We present our system model and formulate the problem in Section 3. We analyze the Stackelberg game of our model in Section 4. We present our algorithms in Section 5. Performance evaluation is provided in Section 6. We conclude this paper in Section 7.
2. Related Work
Much previous work has been done on computing offloading [17–19]. The emergence of computing offloading techniques can be traced back to the concept of Cyber Foraging [20], which reduces the computation, storage, and energy burden of MDs by offloading their tasks to nearby servers with sufficient resources. The main goals of computing offloading include expanding CPU capacity, saving the energy consumption of MDs, reducing service delays, and saving computational cost. Most early computing offloading techniques used static partitioning schemes, relying on programmers to statically divide an application into two parts: one executed on the MD and the other executed on the server. Li et al. [21] proposed a partitioning approach based on energy consumption, in which the communication energy consumption depends on the size of the transmitted data and the network bandwidth, and the computational energy consumption depends on the number of instructions of the program. They obtained an optimized program partitioning based on the consumption of computation and communication. Yang et al. [22] proposed a comprehensive consideration of the use of multiple resources, including CPU, memory, and communication costs, and offloaded some tasks of the MD to a nearby laptop with adequate resources.
MAUI [23] was proposed to provide a common dynamic computing offloading solution and minimize the burden on developers: the programmer only needs to divide the application into local methods and remote methods, without having to make an offloading decision for each program. To solve the problem of excessive transmission delay in wide area networks (WANs), researchers have considered offloading the tasks of MDs to infrastructure closer to the information source. Satyanarayanan et al. [24] first proposed the concept of the Cloudlet, defined as a trusted, resource-rich computing device, or a group of computing devices, that provides computing to nearby mobile terminals. Patel et al. [25] proposed the concept of MEC, which provides powerful computing capability in the wireless access network close to mobile users. MEC runs at the edge of the network and is logically independent of the rest of the network, which is important for applications with high security requirements [26, 27]. In addition, ECSs are particularly suited to massive analyses and mass data. At the same time, since ECSs are geographically close to users, the delay in responding to user requests is greatly reduced, and the possibility of network congestion in the transmission network and the core network is also reduced [28]. There have been some previous studies on computing offloading using ECSs [29–31]. Wang et al. [32] proposed a MEC-WPT design for computing offloading by considering a multiuser mobile edge cloud system. Ref. [33] explored how edge computing infrastructure can improve latency and energy consumption relative to the cloud by analyzing multiple types of mobile applications, and demonstrated that edge computing platforms in Wi-Fi and LTE networks can significantly improve the latency of interactive and compute-intensive applications. Sardellitti et al. [34] proposed a QoS-based incentive mechanism for mobile data offloading. Such incentive mechanisms are used in quality-aware systems to stimulate user participation and enhance system robustness [35]. Neto et al. [36] designed a mobile offloading framework with an original decision engine, which can significantly reduce energy consumption.
Some previous works have conducted extensive research using game theory, and several games can be applied to computing offloading [37, 38]. Chen et al. [39] proposed an approach to computing offloading for mobile cloud computing and designed a computation offloading mechanism that achieves a Nash equilibrium of the game. Wang et al. [40] proposed an analysis framework based on evolutionary game theory. Xu et al. [41] designed a security-aware incentive mechanism for computation offloading by using game theory and epidemic theory.
3. System Model
Following the primary deployment method for MEC [25], the edge system we consider in this paper consists of a set of ECSs and several MDs. We assume that a MD decides to offload its computations to a set $N = \{1, 2, \ldots, n\}$ of ECSs. $C_i$ denotes the total computation capacity of ECS $i$ for all $i \in N$, and $c_i$ denotes the computation offloaded from the MD to ECS $i$.
3.1. Mobile Device
We assume that the computation offloading profile is $\mathbf{c} = (c_1, c_2, \ldots, c_n)$. Given $\mathbf{c}$, the local remaining unoffloaded computation is $l = L - \sum_{i \in N} c_i$, where $L$ denotes the total computation of the MD, all of which is performed locally when no offloading takes place. The cost of local computing for the mobile device is $E_l = \delta l$, where $\delta$ is the modeling parameter. The payment profile is $\mathbf{p} = (p_1, p_2, \ldots, p_n)$. Given $\mathbf{p}$, the payment for offloading is $P = \sum_{i \in N} P_i$, where the payment for ECS $i$ is determined by the offloading unit price $p_i$ and the computation amount $c_i$, i.e., $P_i = p_i c_i$. We consider the time consumed by performing computing offloading as part of the offloading cost, which includes the time to compute the local remaining computation and the time to transfer the data needed by computing offloading. $f_m$ denotes the computational capability of the MD, and $r_m$ denotes the transmission capacity of the MD. The total time consumed by performing offloading is $T = t_c + t_t$, where $t_c = \lambda l / f_m$ represents the time of computing the local remaining computation, with $\lambda$ representing the complexity of the computation, and $t_t = \mu \sum_{i \in N} c_i / r_m$ represents the time taken to transfer the data needed by computing offloading, with $\mu$ representing the coefficient of the amount of data required for transferring the computations.

Therefore, the cost of the MD for performing computing offloading is determined by the computing time, the transmission time, and the payment to the ECSs, i.e., $Z = t_c + t_t + \sum_{i \in N} p_i c_i$. We define the local utility as the cost reduction achieved by performing offloading, i.e., $U_m = Z_0 - Z$ (10), where $Z_0$ denotes the cost when the entire computation is performed locally.
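The cost and utility definitions above can be sketched numerically. This is a minimal illustration assuming linear time and payment terms; the parameter names (`lam` for the complexity, `mu` for the data coefficient, `f_m`, `r_m`) and all default values are illustrative stand-ins, not values from the paper.

```python
# Sketch of the MD-side cost model: computing time for the remaining local
# load, transfer time for the offloaded load, plus the payment to the ECSs.

def md_cost(L, c, p, lam=1.2, mu=0.3, f_m=2.0, r_m=4.0):
    """Cost of the MD for a given offloading profile.

    L: total computation of the MD
    c: list of computations offloaded to each ECS
    p: list of unit prices paid to each ECS
    """
    l = L - sum(c)                          # remaining local computation
    t_c = lam * l / f_m                     # local computing time
    t_t = mu * sum(c) / r_m                 # data transfer time
    payment = sum(pi * ci for pi, ci in zip(p, c))
    return t_c + t_t + payment

def md_utility(L, c, p, **kw):
    """Local utility: cost reduction relative to computing everything locally."""
    z0 = md_cost(L, [0] * len(c), [0] * len(p), **kw)
    return z0 - md_cost(L, c, p, **kw)
```

Under these defaults, offloading 30 of 100 units saves more local computing time than it adds in transfer time and payment, so the utility is positive.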
3.2. Edge Servers
ECSs usually have their own computations to perform, as shown in Figure 1. When the ECSs decide whether to perform computing offloading for the MD, they must take their own computations into account. $g_i$ denotes the revenue for one unit of computation that ECS $i$ performs for its own users, and $f_i$ denotes the computational capability of ECS $i$. Similar to previous work [42], we ignore the time for transmitting the computation results.

The profit of ECS $i$ is given by $V_i = g_i d_i + p_i c_i - e_i$, where $d_i$ is the computation for its own users, and $e_i$ denotes the cost of computing $d_i + c_i$.

The utility of each ECS $i$, which is the profit improvement obtained by performing computing offloading, is defined as $U_i = V_i(c_i) - V_i(0)$.
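The ECS-side trade-off can likewise be sketched. The quadratic cost term below is an assumption made so that the utility is strictly concave in the accepted load (as Lemma 3 later requires); the paper's exact cost expression is not reproduced here, and the parameters `g_i` and `a_i` are illustrative.

```python
# Sketch of the ECS utility: payment received, minus revenue forgone from
# its own users, minus an assumed quadratic extra computing cost.

def ecs_utility(c_i, p_i, g_i=0.5, a_i=0.05):
    """Profit improvement of ECS i from accepting c_i units at unit price p_i.

    p_i * c_i   : payment received from the MD
    g_i * c_i   : revenue forgone by not serving the ECS's own users
    a_i * c_i^2 : extra computing cost of the offloaded load
    """
    return p_i * c_i - g_i * c_i - a_i * c_i ** 2
```

For a price above the unit revenue (here `p_i > g_i`), the utility first rises and then falls in `c_i`, so an interior optimum exists.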
3.3. Problem Formulation
The strategy between the MD and the ECSs is formulated as a Stackelberg game. Our strategy has two steps, as shown in Figure 2. In the first step, the MD proposes a payment profile, denoted by $\mathbf{p} = (p_1, \ldots, p_n)$. In the second step, the ECSs respond with the corresponding amounts of computation for offloading, denoted by $\mathbf{c} = (c_1, \ldots, c_n)$.

We assume that the MD first gives an initial payment; then the optimal decision of each ECS $i$ is obtained by solving the following optimization problem:

$$\max_{0 \le c_i \le C_i} U_i(c_i, p_i). \quad (13)$$

Next, we obtain the optimal strategy by maximizing the utility of the MD after receiving the decisions of the ECSs, i.e.,

$$\max_{\mathbf{p}} U_m(\mathbf{p}, \mathbf{c}^*(\mathbf{p})), \quad (15)$$

where $U_m$ is the utility of the MD in (10) and $\mathbf{c}^*(\mathbf{p})$ is the ECSs' optimal response to $\mathbf{p}$.
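The two-step structure described above can be illustrated with a toy bilevel search: for each candidate payment, the follower (an ECS) plays its best response, and the leader (the MD) keeps the payment maximizing its own utility. All utility forms and parameter values here are illustrative assumptions, not the paper's expressions.

```python
# Toy bilevel search mirroring the leader-follower structure of the game.

def ecs_best_response(p_i, g_i=0.5, a_i=0.05, cap=10.0):
    """Maximizer of the assumed follower utility (p_i - g_i)*c - a_i*c^2
    over 0 <= c <= cap."""
    c = (p_i - g_i) / (2 * a_i)          # interior stationary point
    return min(max(c, 0.0), cap)         # clipped to the feasible interval

def leader_utility(p_i, c_i, b=1.0):
    """Assumed leader utility: per-unit saving b minus unit price, times load."""
    return (b - p_i) * c_i

def leader_search(prices, b=1.0):
    """Second step: keep the candidate payment with the best leader utility."""
    return max(prices, key=lambda p: leader_utility(p, ecs_best_response(p), b))
```

Under the defaults, `leader_search([0.6, 0.7, 0.75, 0.8, 0.9])` returns 0.75: the leader balances a higher offloaded load against a higher price.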
4. Stackelberg Game Analysis
As discussed above, the MD determines the payment profile for offloading computation to the ECSs, while the ECSs provide the corresponding amounts of computation. We model the problem as a two-step Stackelberg game between the MD and the ECSs based on noncooperative game theory. Explicitly, as shown in Figure 2, the MD acts as the leader proposing the payment profile, and the ECSs, as followers of the MD, adjust their corresponding decisions.
4.1. Stackelberg Game Design
In this work, we denote by $c_i^*$ the optimal strategy of ECS $i$, i.e., the solution of the optimization problem (13), and by $\mathbf{c}^* = (c_1^*, \ldots, c_n^*)$ the optimal strategy profile. In addition, $p_i^*$ denotes the optimal payment for ECS $i$. A Nash equilibrium of the Stackelberg game is defined as a profile $(\mathbf{p}^*, \mathbf{c}^*)$ in which none of the ECSs can further improve its profit by unilaterally changing its strategy.
Definition 1. A strategy profile $(\mathbf{p}^*, \mathbf{c}^*)$ is a Nash equilibrium of the Stackelberg game if it satisfies the following conditions:

$$U_i(c_i^*, p_i^*) \ge U_i(c_i, p_i^*) \quad \text{for all } 0 \le c_i \le C_i,\ i \in N,$$
$$U_m(\mathbf{p}^*, \mathbf{c}^*(\mathbf{p}^*)) \ge U_m(\mathbf{p}, \mathbf{c}^*(\mathbf{p})) \quad \text{for all } p_i^{\min} \le p_i \le p_i^{\max},$$

where $p_i^{\max}$ is the upper bound of the payment $p_i$, and $p_i^{\min}$ is the lower bound.
4.2. Nash Equilibrium Analysis
In this part, we analyze the equilibrium of the proposed Stackelberg game and prove the existence and uniqueness of its Nash equilibrium through the following lemmas.
Lemma 2. The strategy profile set of the ECSs is nonempty, convex, and compact.
Proof. First, according to the characteristics of the ECSs, we calculate the first-order derivative of the utility $U_i$ with respect to $c_i$, given in (16), and from it the second-order derivative, given in (17). Setting (16) to zero yields the optimal strategy $c_i^*$ of ECS $i$ in (18). The lower bound $p_i^{\min}$ of the payment of ECS $i$ is then obtained by setting $c_i^*$ to zero, and the upper bound $p_i^{\max}$ by setting $c_i^*$ to the capacity $C_i$. The strategy set of each ECS is therefore the nonempty, convex, and compact interval $[0, C_i]$. Thus Lemma 2 is proved completely.
Lemma 3. The ECS has a unique optimal strategy after receiving the mobile’s strategy.
Proof. Obviously, if the MD offers a payment at or above the upper bound $p_i^{\max}$, ECS $i$ will perform all of its computation for the MD. However, if the payment is at or below the lower bound $p_i^{\min}$, the ECS does not perform the offloading.
When the payment lies strictly between the two bounds, the optimal strategy $c_i^*$ is given by (18), and we obtain the derivatives in (21) and (22). Evidently, relation (21) implies that $c_i^*$ is an increasing function of the payment $p_i$, which means that the higher the payment the MD offers, the more computation the ECS provides. Equation (22) implies that the utility $U_i$ is a concave function of $c_i$. Since the second-order derivative is strictly negative, $U_i$ is strictly concave in $c_i$, and thus the strategy $c_i^*$ is unique and optimal. Lemma 3 is proved.
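Lemma 3 can be checked numerically under an assumed concave utility: the grid-search best response below is unique and increases with the payment, as the monotonicity argument states. The quadratic utility form and its parameters are illustrative, not the paper's exact expressions.

```python
# Grid check of the Lemma 3 properties under an assumed concave ECS utility.

def ecs_utility(c, p_i, g_i=0.5, a_i=0.05):
    """Assumed follower utility: linear gain minus quadratic computing cost."""
    return (p_i - g_i) * c - a_i * c ** 2

def grid_best_response(p_i, cap=10.0, steps=1000):
    """Best response found by exhaustive search over a grid of [0, cap]."""
    grid = [cap * k / steps for k in range(steps + 1)]
    return max(grid, key=lambda c: ecs_utility(c, p_i))
```

For payments 0.6, 0.8, and 1.0 the grid best responses are 1.0, 3.0, and 5.0: strictly increasing in the payment, as the higher price makes the ECS accept more load.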
Lemma 4. For the optimal strategies of ECSs, the MD has a unique optimal strategy.
Proof. The first-order partial derivative of the MD utility $U_m$ with respect to $p_i$ is given in (23), and the second-order partial derivative in (24). We define two auxiliary matrices, from which the Hessian matrix $\mathbf{H}$ of $U_m$ is obtained. Furthermore, for any nonzero column vector $\mathbf{z}$, we evaluate the quadratic form $\mathbf{z}^{T}\mathbf{H}\mathbf{z}$. From (23) and the signs of the model parameters, every term of the quadratic form is negative, so we obtain $\mathbf{z}^{T}\mathbf{H}\mathbf{z} < 0$, as stated in (32). From inequality (32), $U_m$ is strictly concave, and then (15) is a convex optimization problem. Since the Hessian is strictly negative definite, the maximizer $\mathbf{p}^*$ is unique, and it is the optimal strategy of the MD. Lemma 4 is proved.
With Lemmas 2, 3, and 4 in place, we prove the following theorem.
Theorem 5. A unique Nash equilibrium exists among the MD and the ECSs in our proposed Stackelberg game.
5. Algorithm Design
In this section, the process of the strategy is shown in Figure 3. We propose two algorithms, FSGA and CSGA, for delay-sensitive services and compute-intensive services, respectively. Generally, delay-sensitive services require strict response times, which demands that the system make decisions quickly, whereas compute-intensive services require a large amount of computing by the servers. FSGA, a fast Stackelberg game algorithm, can make decisions quickly due to its simple decision mechanism. CSGA, a complex Stackelberg game algorithm, is slower than FSGA in decision-making speed, but it provides a more accurate price and amount of computation, maximizing the benefit of the MD and the ECSs. FSGA and CSGA are described in detail as follows.
5.1. FSGA
We propose FSGA to quickly achieve the equilibrium, as shown in Algorithm 1. Because the algorithm can quickly obtain the optimal objective for the MD and the ECSs, it is suitable for computing offloading of latency-sensitive applications.

As discussed before, we maintain two endpoints, denoted $p_a$ and $p_b$, initialized to the lower and upper payment bounds, respectively. The ECSs first send their payment bounds to the MD. After receiving them, FSGA proceeds by comparing the difference between $p_a$ and $p_b$: when the difference is small, FSGA simply settles on a compromise payment for computing offloading; when the difference is large, FSGA still determines the payment quickly, but in a more fine-grained manner.
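The FSGA idea described above (the precise rule is Algorithm 1, not reproduced here) can be sketched as a one-shot decision: settle on a compromise when the gap between the payment endpoints is small, and otherwise probe a few candidates in a single coarse scan. The threshold, probe count, and `utility` callback are illustrative assumptions.

```python
# One-shot payment decision in the spirit of FSGA: no iterative refinement,
# so the response time stays low regardless of the required accuracy.

def fsga_payment(p_a, p_b, utility, threshold=0.1, probes=5):
    """Pick a payment between endpoints p_a and p_b quickly."""
    if p_b - p_a <= threshold:
        return (p_a + p_b) / 2               # small gap: quick compromise
    # larger gap: a single coarse scan over evenly spaced candidates
    step = (p_b - p_a) / (probes - 1)
    candidates = [p_a + k * step for k in range(probes)]
    return max(candidates, key=utility)
```

With a concave utility such as `u(p) = (1 - p) * (p - 0.5)`, the coarse scan over [0.5, 1.0] lands on 0.75, while a narrow interval is resolved immediately by the midpoint.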
5.2. CSGA
The algorithm proposed in Algorithm 1 needs less time and has a faster response speed, but it determines the payment directly, which leads to an increase in the cost of computing offloading. We next propose CSGA, which uses a fine-grained iteration to obtain the optimal strategy. CSGA is shown in Algorithm 2.

In contrast, CSGA only needs the ECSs to compare the bounds of the payment, which not only greatly reduces the computation of the ECSs but also obtains a more optimized payment. Therefore, CSGA is suitable for situations where the ECSs have limited computation resources. The process of CSGA is summarized as follows.
(1) The MD first sends the initial value of the payment. Given the payment profile, the ECSs compute the optimal computation for offloading based on (18) and send the computing results to the MD.
(2) After receiving the results, the MD calculates the utilities of the strategy at the two endpoints $p_a$ and $p_b$ and compares them. If the utility at one endpoint is smaller, the optimal value must be located in the subinterval nearer the other endpoint, and the endpoints are updated accordingly.
(3) The MD and the ECSs repeat steps (1) and (2) until $p_b - p_a < \varepsilon$, where the accuracy parameter $\varepsilon$ makes our algorithm more accurate.
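The iterative interval search summarized in steps (1)-(3) can be sketched as follows. A golden-section update stands in for the update rule stated precisely in Algorithm 2; `utility` plays the role of the MD utility evaluated at the ECSs' best responses, and `eps` is the accuracy parameter.

```python
# Interval-shrinking search in the spirit of CSGA: repeatedly compare the
# utility at two interior points and discard the worse side until the
# payment interval [p_a, p_b] is narrower than the accuracy parameter eps.

def csga_payment(p_a, p_b, utility, eps=1e-4):
    phi = (5 ** 0.5 - 1) / 2                 # golden-ratio step
    while p_b - p_a > eps:
        x1 = p_b - phi * (p_b - p_a)
        x2 = p_a + phi * (p_b - p_a)
        if utility(x1) < utility(x2):        # optimum lies in [x1, p_b]
            p_a = x1
        else:                                # optimum lies in [p_a, x2]
            p_b = x2
    return (p_a + p_b) / 2
```

For a unimodal (strictly concave) utility this converges to the unique maximizer, at the cost of more evaluation rounds than FSGA, which matches the response-time comparison reported in Section 6.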
Theorem 6. The proposed Algorithm 2 can reach a unique Nash equilibrium.
Proof. Since $U_m$ is strictly concave, the optimal strategy $\mathbf{p}^*$ is unique. We can obtain $\mathbf{p}^*$ by Algorithm 2 to maximize $U_m$. Then, each ECS $i$ can determine the corresponding $c_i^*$ to maximize its utility. Consequently, the MD can also obtain a definitive optimal utility with $\mathbf{c}^*$. Therefore, Algorithm 2 can reach a unique Nash equilibrium.
6. Performance Evaluation
We use extensive simulations to verify our proposed strategy and algorithms. We set up 3 ECSs as the initial default setting for our simulations, with a total ECS computation of 100. We consider that the MD and the ECSs are placed over a region: the ECSs are located at grid points, while the MDs are placed uniformly at random. In the simulations, we fix the modeling parameter $\delta$ and the computation complexity parameter $\lambda$.
6.1. The Utility of the Mobile Device
In this part, we analyze the changes in the utility of the MD under different conditions. First, we analyze the relationship between the utility of the MD and the revenue of the ECSs. Figure 4 shows the utility of the MD when the unit revenue that the ECSs obtain from their own users is varied. As shown in the figure, regardless of whether the parameter is set to 0.4, 0.6, or 0.8, as the unit revenue increases, the ECSs prefer to provide their computation to their own users. As a result, the amount of computation available for offloading is reduced, and the utility of the MD therefore decreases.
Next, we analyze how the payment of the MD affects its utility. Figure 5 shows the correlation between the payment from the MD and the utility of the MD under different values of the parameter. As the payment from the MD increases, its utility increases at first, but after reaching a peak, the utility of the MD drops. This is because, as the payment increases, the ECSs provide more computation; however, as the payment continues to increase, the cost of the MD increases, so its utility is reduced.
6.2. The Utility of the Edge Servers
Figure 6 shows the average utility of the ECSs when the unit revenue is varied. As the unit revenue increases, the utility of the ECSs increases, since the ECSs become more willing to provide computation to their own users, leading to efficient computation overall. Interestingly, the ECSs achieve less utility when the revenue is large enough. As we know, ECSs are profit-driven in providing computation for their users, but a large unit revenue is a double-edged sword for them: although the edge servers are more willing to serve their own users, it may result in less computing offloading.
Figure 7 shows the trend of the ECSs' utility as a function of the payment from the MD under three different settings of the unit revenue. As the figure shows, the utility of the ECSs begins to increase as the payment of the MD increases, and it begins to decrease after reaching a peak. At the beginning, the utility of the ECSs increases with the payment, but the growing payment also increases the cost of the MD, so it does not continue to increase.
6.3. Offloading Computations
To evaluate how the unit revenue impacts the computation for offloading, we set three different unit revenues for the simulation. As can be seen from Figure 8, the trends under the different unit revenues are similar, and the computation for offloading increases as the payment increases. In other words, in all three cases, the computation increases as the payment from the MD increases. Note that when the unit revenue is greater than the MD's given payment, the ECSs will not perform any offloading.
Figure 9 shows the relationship between the computation for offloading and the utility of the MD, as well as the relationship between the computation and the utility of the ECSs. The simulation results show that, within a certain range, the greater the computation for offloading, the higher the utility of the MD, while the utility of the ECSs increases at the same time. This is well understood: under the strategy, when the computation for offloading increases, the utility of the MD increases accordingly, and the ECSs also benefit. Within a certain range, the more computation is offloaded, the more beneficial it is to both sides.
6.4. The Impact of the Utility and Response Time
We are interested in the impact of the accuracy parameter $\varepsilon$ of the proposed strategy, so we set a series of values in the range of 0.2 to 1.5. Figure 10 shows the simulation results. As the value of the parameter increases, the accuracy of our strategy increases, and the utility of the MD and the profit of the ECSs increase, making it more beneficial to both the MD and the ECSs to calculate the benefits of offloading.
Figure 11 shows the response times of FSGA and CSGA under the same conditions during computing offloading. As can be seen from the figure, as the parameter increases, the response time of CSGA increases, whereas the response time of FSGA remains the same and stays at a low level. This is because, as the parameter increases, CSGA needs to make more rounds of judgment and iteration, resulting in an increased response time, while FSGA always makes decisions quickly, regardless of the value of the parameter.
7. Conclusion
In this paper, we proposed a game for computing offloading between a MD and ECSs. We provided a Stackelberg game-theoretic analysis and proved the existence of the equilibrium in the Stackelberg game. Furthermore, we proposed two algorithms for different scenarios and derived the upper and lower bounds of the payment. Moreover, multiple optimization results were obtained by regulating the model parameters. The simulation results showed that the game is effective in improving the utility of both the MD and the ECSs.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants nos. 61602155, 61871430, U1604155, and 61370221; in part by the Henan Science and Technology Innovation Project under Grant no. 174100510010; in part by the Industry-University Research Project of Henan Province under Grant no. 172107000005; and in part by the Basic Research Projects in the Universities of Henan Province under Grant no. 19zx010.
References
 W. Zhang, B. Han, and P. Hui, “On the networking challenges of mobile Augmented Reality,” in Proceedings of the 2017 ACM SIGCOMM Workshop on Virtual Reality and Augmented Reality Network, VR/AR Network 2017, pp. 24–29, USA. View at: Google Scholar
 Y. Shen, M. Yang, B. Wei, C. T. Chou, and W. Hu, “Learn to Recognise: Exploring Priors of Sparse Face Recognition on Smartphones,” IEEE Transactions on Mobile Computing, vol. 16, no. 6, pp. 1705–1717, 2017. View at: Publisher Site  Google Scholar
 D. Liu, L. Khoukhi, and A. Hafid, “PredictionBased Mobile Data Offloading in Mobile Cloud Computing,” IEEE Transactions on Wireless Communications, vol. 17, no. 7, pp. 4660–4673, 2018. View at: Publisher Site  Google Scholar
 Qingtao Wu, Xulong Zhang, Mingchuan Zhang, Ying Lou, Ruijuan Zheng, and Wangyang Wei, “Reputation Revision Method for Selecting Cloud Services Based on Prior Knowledge and a Market Mechanism,” The Scientific World Journal, vol. 2014, Article ID 617087, 9 pages, 2014. View at: Publisher Site  Google Scholar
 Q. Wu, M. Zhang, R. Zheng, and W. Wei, “A QoSsatisfied Prediction Model for Cloudservice Composition Based on Hidden Markov Model,” International Journal of Online Engineering (iJOE), vol. 9, no. 3, p. 67, 2013. View at: Publisher Site  Google Scholar
 M. Olsson, C. Cavdar, P. Frenger, S. Tombaz, D. Sabella, and R. Jantti, “5GrEEn: Towards Green 5G mobile networks,” in Proceedings of the 2013 IEEE 9th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 212–216, Lyon, France, October 2013. View at: Publisher Site  Google Scholar
 N. Cheng, W. Xu, W. Shi et al., “AirGround Integrated Mobile Edge Networks: Architecture, Challenges, and Opportunities,” IEEE Communications Magazine, vol. 56, no. 8, pp. 26–32, 2018. View at: Google Scholar
 X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multiuser computation offloading for mobileedge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808, 2015. View at: Publisher Site  Google Scholar
 L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for mobile computing,” in Proceedings of the 35th Annual IEEE International Conference on Computer Communications, IEEE INFOCOM 2016, USA, April 2016. View at: Publisher Site  Google Scholar
 F. Liu, P. Shu, H. Jin et al., “Gearing resourcepoor mobile devices with powerful clouds: architectures, challenges, and applications,” IEEE Wireless Communications Magazine, vol. 20, no. 3, pp. 14–21, 2013. View at: Publisher Site  Google Scholar
 S. Tayade, P. Rost, A. Maeder, and H. D. Schotten, “Devicecentric energy optimization for edge cloud offloading,” in Proceedings of the 2017 IEEE Global Communications Conference (GLOBECOM 2017), pp. 1–7, Singapore, December 2017. View at: Publisher Site  Google Scholar
 K. Kumar and Y.H. Lu, “Cloud computing for mobile users: can offloading computation save energy?” The Computer Journal, vol. 43, no. 4, Article ID 5445167, pp. 51–56, 2010. View at: Publisher Site  Google Scholar
 C. Wu, B. Yang, W. Zhu, and Y. Zhang, “Toward High Mobile GPU Performance Through Collaborative Workload Offloading,” IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 2, pp. 435–449, 2018. View at: Publisher Site  Google Scholar
 X. Tao, K. Ota, M. Dong, H. Qi, and K. Li, “Performance guaranteed computation offloading for mobileedge cloud computing,” IEEE Wireless Communications Letters, vol. 6, no. 6, pp. 774–777, 2017. View at: Publisher Site  Google Scholar
 Y. Liu, C. Xu, Y. Zhan, Z. Liu, J. Guan, and H. Zhang, “Incentive mechanism for computation offloading using edge computing: a Stackelberg game approach,” Computer Networks, vol. 129, pp. 399–409, 2017. View at: Publisher Site  Google Scholar
 S. Josilo and G. Dan, “A game theoretic analysis of selfish mobile computation offloading,” in Proceedings of the IEEE INFOCOM 2017  IEEE Conference on Computer Communications, pp. 1–9, Atlanta, GA, USA, May 2017. View at: Publisher Site  Google Scholar
 S. Yang, D. Kwon, H. Yi, Y. Cho, Y. Kwon, and Y. Paek, “Techniques to minimize state transfer costs for dynamic execution offloading in mobile cloud computing,” IEEE Transactions on Mobile Computing, vol. 13, no. 11, pp. 2648–2660, 2014. View at: Publisher Site  Google Scholar
 M. Chen, Y. Hao, M. Qiu, J. Song, D. Wu, and I. Humar, “Mobilityaware caching and computation offloading in 5G ultradense cellular networks,” Sensors, vol. 16, no. 7, pp. 974–987, 2016. View at: Publisher Site  Google Scholar
 K. Sucipto, D. Chatzopoulos, S. Kostap, and P. Hui, “Keep your nice friends close, but your rich friends closer  Computation offloading using NFC,” in Proceedings of the 2017 IEEE Conference on Computer Communications, INFOCOM 2017, USA, May 2017. View at: Google Scholar
 M. Satyanarayanan, “Pervasive computing: vision and challenges,” IEEE Personal Communications, vol. 8, no. 4, pp. 10–17, 2001. View at: Publisher Site  Google Scholar
 Z. Li, C. Wang, and R. Xu, “Computation offloading to save energy on handheld devices: A partition scheme,” in Proceedings of the 2nd International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, CASES 2001, pp. 238–246, USA, November 2001. View at: Google Scholar
 K. Yang, S. Ou, and H.H. Chen, “On effective offloading services for resourceconstrained mobile devices running heavier mobile internet applications,” IEEE Communications Magazine, vol. 46, no. 1, pp. 56–63, 2008. View at: Publisher Site  Google Scholar
 E. Cuervoy, A. Balasubramanian, D.K. Cho et al., “MAUI: making smartphones last longer with code offload,” in Proceedings of the 8th Annual International Conference on Mobile Systems, Applications and Services (MobiSys '10), pp. 49–62, New York, NY, USA, June 2010. View at: Publisher Site  Google Scholar
 M. Satyanarayanan, V. Bahl, R. Caceres, and N. Davies, “The Case for VMbased Cloudlets in Mobile Computing,” IEEE Pervasive Computing, 2011. View at: Publisher Site  Google Scholar
 M. Patel, B. Naughton, C. Chan et al., “Mobileedge computing introductory technical white paper,” White Paper, Mobileedge Computing (MEC) industry initiative, 2014. View at: Google Scholar
 J. Ni, K. Zhang, X. Lin, and X. S. Shen, “Securing fog computing for Internet of Things applications: challenges and solutions,” IEEE Communications Surveys & Tutorials, vol. 20, no. 1, pp. 601–628, 2018.
 S. N. Shirazi, A. Gouglidis, A. Farshad, and D. Hutchison, “The extended cloud: review and analysis of mobile edge computing and fog from a security and resilience perspective,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 11, pp. 2586–2595, 2017.
 Z. Chen, R. Klatzky, D. Siewiorek et al., “An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance,” in Proceedings of the Second ACM/IEEE Symposium on Edge Computing (SEC 2017), pp. 1–14, San Jose, California, October 2017.
 F. Wang and X. Zhang, “Dynamic computation offloading and resource allocation over mobile edge computing networks with energy harvesting capability,” in Proceedings of the 2018 IEEE International Conference on Communications (ICC 2018), pp. 1–6, Kansas City, MO, May 2018.
 L. Tang and S. He, “Multi-user computation offloading in mobile edge computing: a behavioral perspective,” IEEE Network, vol. 32, no. 1, pp. 48–53, 2018.
 K. Guo, M. Yang, Y. Zhang, and Y. Ji, “An efficient dynamic offloading approach based on optimization technique for mobile edge computing,” in Proceedings of the 2018 6th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), pp. 29–36, Bamberg, March 2018.
 F. Wang, J. Xu, X. Wang, and S. Cui, “Joint offloading and computing optimization in wireless powered mobile-edge computing systems,” IEEE Transactions on Wireless Communications, vol. 17, no. 3, pp. 1784–1797, 2018.
 W. Hu, Y. Gao, K. Ha et al., “Quantifying the impact of edge computing on mobile applications,” in Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems (APSys 2016), pp. 1–8, Hong Kong, August 2016.
 S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio and computational resources for multicell mobile-edge computing,” IEEE Transactions on Signal and Information Processing over Networks, vol. 1, no. 2, pp. 89–103, 2015.
 C. H. Liu, J. Fan, P. Hui, J. Crowcroft, and G. Ding, “QoI-aware energy-efficient participatory crowdsourcing,” IEEE Sensors Journal, vol. 13, no. 10, pp. 3742–3753, 2013.
 J. L. Neto, S. Yu, D. F. Macedo, J. M. Nogueira, R. Langar, and S. Secci, “ULOOF: a user level online offloading framework for mobile edge computing,” IEEE Transactions on Mobile Computing, vol. 17, no. 11, pp. 2660–2674, 2018.
 V. Cardellini, V. De Nitto Persone, V. Di Valerio et al., “A game-theoretic approach to computation offloading in mobile cloud computing,” Mathematical Programming, vol. 157, no. 2, Ser. B, pp. 421–449, 2016.
 D. Nowak, T. Mahn, H. AlShatri, A. Schwartz, and A. Klein, “A generalized Nash game for mobile edge computation offloading,” in Proceedings of the 2018 6th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), pp. 95–102, Bamberg, March 2018.
 X. Chen, “Decentralized computation offloading game for mobile cloud computing,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 4, pp. 974–983, 2015.
 Y. Wang, A. Nakao, and A. V. Vasilakos, “Heterogeneity playing key role: modeling and analyzing the dynamics of incentive mechanisms in autonomous networks,” ACM Transactions on Autonomous and Adaptive Systems (TAAS), vol. 7, no. 3, 2012.
 J. Xu, L. Chen, K. Liu, and C. Shen, “Designing security-aware incentives for computation offloading via device-to-device communication,” IEEE Transactions on Wireless Communications, vol. 17, no. 9, pp. 6053–6066, 2018.
 J. Guo, Z. Song, Y. Cui, Z. Liu, and Y. Ji, “Energy-efficient resource allocation for multi-user mobile edge computing,” in Proceedings of the 2017 IEEE Global Communications Conference (GLOBECOM 2017), pp. 1–7, Singapore, December 2017.
Copyright
Copyright © 2018 Meiwen Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.