Wireless Communications and Mobile Computing
Volume 2018, Article ID 1324897, 10 pages
https://doi.org/10.1155/2018/1324897
Research Article

Dynamic Service Request Scheduling for Mobile Edge Computing Systems

Computer School, Beijing Information Science and Technology University (BISTU), Beijing 100101, China

Correspondence should be addressed to Ying Chen; chenying@bistu.edu.cn

Received 20 April 2018; Accepted 3 July 2018; Published 13 September 2018

Academic Editor: Kok-Seng Wong

Copyright © 2018 Ying Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Nowadays, mobile services (applications) running on terminal devices are becoming increasingly computation-intensive. Offloading the service requests from terminal devices to cloud computing can be a good solution, but it puts a high burden on the network. Edge computing, which places servers at the edge of the network, is an emerging technology to solve this problem. Dynamic scheduling of offloaded service requests in mobile edge computing systems is a key issue, and it faces challenges due to the dynamic nature and uncertainty of service request patterns. In this article, we propose a Dynamic Service Request Scheduling (DSRS) algorithm, which makes request scheduling decisions to optimize the scheduling cost while providing performance guarantees. The DSRS algorithm can be implemented in an online and distributed way. We present mathematical analysis showing that the DSRS algorithm can achieve an arbitrary tradeoff between scheduling cost and performance. Experiments are also carried out to show the effectiveness of the DSRS algorithm.

1. Introduction

With the rapid development of Information Technology and the increasing adoption of terminal devices [1], the mobile services (applications) running on terminal devices are becoming more and more complex and computation-intensive [2, 3]. However, the computing capacity and battery life of terminal devices are generally limited, and these devices cannot afford to process all service requests locally. To solve this problem, some studies propose to offload the service requests from terminal devices to cloud computing, which has more computing resources and larger capacity [4–8]. Nevertheless, the cloud is usually located remotely, far away from terminal devices. Besides, with the increasing popularity of mobile services running on terminal devices, scheduling all the offloaded service requests to the cloud can put a significant burden on the networks [9, 10]. To meet this challenge, recent studies have proposed to place edge servers with computing capacities at the edge of the networks, in close proximity to terminal devices. Mobile Edge Computing (MEC) is an emerging technology based on this idea [11–14], and it has drawn extensive attention from both academia and industry [15–18].

One of the key issues in MEC research is how to schedule service requests [2, 19, 20]: when a large number of service requests are offloaded, how should they be scheduled among multiple MEC systems so as to reduce the scheduling cost while providing performance guarantees? Intuitively, there exist tradeoffs between scheduling cost and performance. Moreover, the service request scheduling problem among multiple MEC systems is challenging for several reasons. Firstly, as terminal devices move and the service environment varies over time [21, 22], making dynamic request scheduling decisions under uncertain request patterns and a changing environment is a great challenge [23]. Secondly, both the number of terminal devices and the number of mobile services are rising dramatically, making the service request scheduling problem more complicated.

Some existing studies have investigated the service request scheduling problem in MEC systems. Reference [24] modelled the server in the MEC system as a single queue. Reference [2] assumed the offloaded service requests arrived at the MEC system according to a Poisson process. These works assumed that the request arrivals followed a certain distribution. However, in reality, the request arrival process is highly dynamic, and the statistics of request arrivals can hardly be obtained or precisely predicted [25, 26]. Besides, as the number of terminal devices and mobile services increases, traditional centralized optimization techniques such as combinatorial optimization and dynamic programming may suffer from high complexity and result in long execution times.

In this article, we introduce a dynamic online service request scheduling mechanism which requires no prior knowledge of the statistics of request arrivals. Specifically, the request scheduling among multiple MEC systems is formulated as an optimization problem whose goal is to minimize the request scheduling cost while providing performance guarantees. Based on Lyapunov optimization techniques, we propose a Dynamic Service Request Scheduling (DSRS) algorithm. DSRS uses a parameter $V$ to control the tradeoff between scheduling cost and queue length. Mathematical analysis is presented which proves that DSRS is $O(1/V)$-optimal with respect to the average scheduling cost while still bounding the average queue length by $O(V)$. Experiments are also conducted which demonstrate that DSRS can make dynamic control decisions to adapt to variable environments and achieve the tradeoff between scheduling cost and queue length.

The remainder of this article is organized as follows. In Section 2, we present the system model for dynamic request scheduling among multiple MEC systems and formulate the optimization problem. In Section 3, based on Lyapunov optimization techniques, we propose an online Dynamic Service Request Scheduling algorithm. Theoretical analysis of the scheduling algorithm is presented in Section 4. Experiments are conducted to evaluate the efficiency and effectiveness of the scheduling algorithm in Section 5. We conclude this article in Section 6.

2. System Model

2.1. Overview

Consider $N$ mobile edge computing (MEC) systems. Each MEC system has an edge server virtualized into $S$ virtual machines to process the offloaded requests of $S$ types of services from the terminal devices [25, 27]. More specifically, the $s$-th virtual machine on each edge server serves the offloaded requests for the $s$-th type of service. Let $\mathcal{S} = \{1, 2, \dots, S\}$ be the collection of indexes for services (applications) and $\mathcal{N} = \{1, 2, \dots, N\}$ be the collection of indexes for the edge servers in the MEC systems. Without loss of generality, the edge servers in different MEC systems are supposed to be heterogeneous. We consider a time-slotted model, and the length of a time slot is denoted by $\tau$. The main notations in this section are listed in Table 1.

Table 1: Notations and definitions.
2.2. Problem Formulation
2.2.1. Service Request Scheduling

In each time slot $t$, a number of service requests for the $S$ types of services are offloaded. Let $A_s(t)$ be the number of requests for service $s$ offloaded to the MEC systems in time slot $t$. In this article, we require no prior knowledge of the statistics of $A_s(t)$, which are generally hard to obtain or precisely predict in real life. $x_{s,n}(t)$ represents the number of requests for service $s$ that are scheduled to the edge server in the $n$-th MEC system in time slot $t$; $x_{s,n}(t)$ is the request scheduling control variable. It should be satisfied that

$$\sum_{n \in \mathcal{N}} x_{s,n}(t) = A_s(t), \quad x_{s,n}(t) \ge 0, \quad \forall s \in \mathcal{S}. \quad (1)$$
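As a concrete illustration, the scheduling constraint above (every offloaded request of a service is assigned to exactly one edge server, with nonnegative assignments) can be checked with a small helper. This is a sketch of our own; the function and array names are not from the paper:

```python
import numpy as np

def feasible(x, A):
    """Check the scheduling constraint: for each service s, the requests
    scheduled across all edge servers sum to the arrivals A[s], and no
    entry of the schedule x is negative.

    x: S x N schedule matrix; A: length-S arrival vector."""
    x, A = np.asarray(x), np.asarray(A)
    return bool(np.all(x >= 0)) and bool(np.all(x.sum(axis=1) == A))
```

For example, `feasible([[2, 3]], [5])` holds, while `feasible([[2, 2]], [5])` does not, because one request is lost.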

The request scheduling method in our article will make use of the diversity of different MEC systems to provide service in order to reduce scheduling cost while providing performance guarantees.

2.2.2. Scheduling Cost

Let $c_{s,n}(t)$ be the unit cost of scheduling requests for service $s$ to the $n$-th MEC system. $c_{s,n}(t)$ can be different among different services $s$ and different MEC systems $n$. It can also vary across time due to other factors such as traffic, wireless fading, the available resources, etc. The request scheduling cost of service $s$ in time slot $t$ can be calculated as $\sum_{n \in \mathcal{N}} c_{s,n}(t)\, x_{s,n}(t)$. The total scheduling cost for all the services can be expressed as

$$C(t) = \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} c_{s,n}(t)\, x_{s,n}(t). \quad (2)$$

Instead of studying the instantaneous scheduling cost, we focus on the long-term average cost. The time-average scheduling cost across time slots can be expressed as

$$\bar{C} = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[C(t)\right], \quad (3)$$

which is the minimization objective of the request scheduling problem in this article.

2.2.3. Performance

Queueing delay is one of the most important performance metrics. According to Little's Law, queueing delay is proportional to the number of requests waiting in the queue. Thus, we seek to reduce the queue length and maintain a low congestion state. Let $Q_{s,n}(t)$ represent the queue length of service $s$ in the $n$-th MEC system in time slot $t$, and let $\mu_{s,n}(t)$ denote the number of requests for service $s$ that can be served by the $n$-th MEC system in time slot $t$. Thus, the queue length evolves as

$$Q_{s,n}(t+1) = \max\left[Q_{s,n}(t) - \mu_{s,n}(t), 0\right] + x_{s,n}(t). \quad (4)$$
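The queue evolution above is a standard max-plus update and can be sketched in a few lines. This is our own illustration, not the authors' code; array names are assumptions:

```python
import numpy as np

def update_queues(Q, mu, x):
    """One-slot queue update: Q(t+1) = max[Q(t) - mu(t), 0] + x(t).

    Q: S x N current queue lengths; mu: S x N per-slot service capacities;
    x: S x N requests scheduled in this slot. Returns the S x N queue
    lengths at the start of the next slot."""
    return np.maximum(Q - mu, 0) + x
```

Note that requests scheduled in slot t join the queue after service, so a backlog can never go negative even when the capacity exceeds the queue.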

To reduce queueing delay and maintain system stability, we seek to bound the average queue length. Let the time-average queue length across the slots be represented by $\bar{Q}$. The service request scheduling method in this article bounds the average queue length as

$$\bar{Q} = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[Q_{s,n}(t)\right] < \infty. \quad (5)$$

2.2.4. Unified Framework

To combine scheduling cost and performance, the request scheduling problem in this article is formulated as

$$\min_{x_{s,n}(t)} \; \bar{C} \quad (6)$$

subject to constraints (1), (5).

Solving problem (6) offline requires future information (such as request arrival information and scheduling cost information), which is generally hard to obtain or precisely predict in practice. Thus, we propose an online Dynamic Service Request Scheduling algorithm to solve the problem, which will be shown in Section 3.

3. Dynamic Request Scheduling Algorithm Design

In this section, based on the Lyapunov optimization framework [28], we decompose the original optimization problem into a series of independent subproblems. Then, we design a Dynamic Service Request Scheduling algorithm to solve these subproblems in a distributed way.

3.1. Problem Transformation Using Lyapunov Techniques

Based on Lyapunov optimization techniques, we define $Q(t) = \left(Q_{s,n}(t)\right)_{s \in \mathcal{S}, n \in \mathcal{N}}$ as the queue length matrix of the MEC systems. Then, we define the Lyapunov function $L(Q(t))$ as follows, which is a scalar measure of the queue congestion state in the system:

$$L(Q(t)) = \frac{1}{2} \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} Q_{s,n}(t)^2. \quad (7)$$

A small value of $L(Q(t))$ indicates that the queue lengths of all MEC systems are small, which represents a low congestion state of the MEC systems according to Little's Law. In order to reduce the queue length and maintain system stability, we seek to keep the Lyapunov function at a small value. Then, we define the conditional Lyapunov drift

$$\Delta(Q(t)) = \mathbb{E}\left[L(Q(t+1)) - L(Q(t)) \mid Q(t)\right]. \quad (8)$$

By reducing the value of $\Delta(Q(t))$, we can push the Lyapunov function towards a small value. To integrate scheduling cost and queue length in the MEC systems, we define the drift plus cost according to the Lyapunov optimization framework, which is expressed as

$$\Delta(Q(t)) + V\, \mathbb{E}\left[C(t) \mid Q(t)\right]. \quad (9)$$

The parameter $V \ge 0$ can be considered as the tradeoff parameter between the scheduling cost and the queue length, which can be determined by service providers or users according to their requirements in real applications. Next, in Theorem 1, we show that the drift plus cost is upper bounded if the service arrival rate is upper bounded.

Theorem 1 (bounding drift plus cost). In each time slot $t$, under any algorithm, for all possible values of $Q(t)$ and any parameter value $V \ge 0$, if there exists a peak value $A^{\max}$ that upper bounds the number of requests arriving in each time slot, the drift plus cost can be upper bounded by

$$\Delta(Q(t)) + V\,\mathbb{E}\left[C(t) \mid Q(t)\right] \le B + V\,\mathbb{E}\left[C(t) \mid Q(t)\right] + \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} Q_{s,n}(t)\, \mathbb{E}\left[x_{s,n}(t) - \mu_{s,n}(t) \mid Q(t)\right], \quad (10)$$

where $B$ is a constant.

Proof. By squaring both sides of (4) and applying the inequality that $(\max[a - b, 0] + c)^2 \le a^2 + b^2 + c^2 + 2a(c - b)$, we have

$$Q_{s,n}(t+1)^2 \le Q_{s,n}(t)^2 + \mu_{s,n}(t)^2 + x_{s,n}(t)^2 + 2 Q_{s,n}(t)\left(x_{s,n}(t) - \mu_{s,n}(t)\right).$$

Dividing both sides by 2, taking expectations conditioned on $Q(t)$, and summing over $s \in \mathcal{S}$ and $n \in \mathcal{N}$, it can be obtained that

$$\Delta(Q(t)) \le \frac{1}{2} \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[\mu_{s,n}(t)^2 + x_{s,n}(t)^2 \mid Q(t)\right] + \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} Q_{s,n}(t)\, \mathbb{E}\left[x_{s,n}(t) - \mu_{s,n}(t) \mid Q(t)\right].$$

Since it holds that $x_{s,n}(t) \le A_s(t) \le A^{\max}$, and defining $\mu^{\max}$ as the upper bound of $\mu_{s,n}(t)$ over all the time slots, we can obtain that

$$\frac{1}{2} \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[\mu_{s,n}(t)^2 + x_{s,n}(t)^2 \mid Q(t)\right] \le \frac{SN}{2}\left((A^{\max})^2 + (\mu^{\max})^2\right).$$

By adding $V\,\mathbb{E}[C(t) \mid Q(t)]$ to both sides and letting $B$ take the value of $\frac{SN}{2}\left((A^{\max})^2 + (\mu^{\max})^2\right)$, we can obtain (10).

3.2. Dynamic Request Scheduling Algorithm

Following the design principles of Lyapunov optimization techniques, we design an efficient Dynamic Service Request Scheduling (DSRS) algorithm to minimize the upper bound of the drift plus cost in each time slot $t$. By decomposing this minimization problem into a series of independent subproblems, our DSRS algorithm optimizes the average scheduling cost concurrently in a distributed way. In addition, it will be proven that the DSRS algorithm can achieve a long-term time-average scheduling cost that is arbitrarily close to the optimal value while maintaining the stability of the MEC systems.

In each time slot $t$, based on the current queue length matrix $Q(t)$ of the MEC systems, the DSRS algorithm makes request scheduling decisions to minimize the R.H.S. of (10). Since $B$ and the terms involving $\mu_{s,n}(t)$ can be considered as constants in the optimization problem, we can rewrite the minimization of the upper bound as

$$\min_{x_{s,n}(t)} \; \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \left(V c_{s,n}(t) + Q_{s,n}(t)\right) x_{s,n}(t) \quad (19)$$

subject to

$$\sum_{n \in \mathcal{N}} x_{s,n}(t) = A_s(t), \quad x_{s,n}(t) \ge 0, \quad \forall s \in \mathcal{S}. \quad (20)$$

As the request scheduling decisions are independent among different services, the above centralized minimization problem (19) can be decomposed into the following subproblem (21) for each service $s$, i.e.,

$$\min_{x_{s,n}(t)} \; \sum_{n \in \mathcal{N}} \left(V c_{s,n}(t) + Q_{s,n}(t)\right) x_{s,n}(t) \quad (21)$$

subject to

$$\sum_{n \in \mathcal{N}} x_{s,n}(t) = A_s(t), \quad x_{s,n}(t) \ge 0. \quad (22)$$

Problem (21) can be regarded as a generalized min-weight problem, where the number of requests scheduled to each MEC system is weighted by the value of $V c_{s,n}(t) + Q_{s,n}(t)$. Therefore, for each service $s$, the optimal solution is to schedule all the requests to the MEC system with the minimum value of $V c_{s,n}(t) + Q_{s,n}(t)$; i.e.,

$$x_{s,n^*}(t) = A_s(t), \quad n^* = \arg\min_{n \in \mathcal{N}} \left\{V c_{s,n}(t) + Q_{s,n}(t)\right\}, \quad (23)$$

where $x_{s,n}(t) = 0$ for all $n \ne n^*$.
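The per-service min-weight rule above reduces to a single arg-min over the servers. A minimal sketch of one scheduling decision, with function and variable names of our own choosing:

```python
import numpy as np

def dsrs_schedule(A, Q, c, V):
    """Per-slot DSRS decision. For each service s, send all A[s] arriving
    requests to the server n* minimizing the weight V*c[s,n] + Q[s,n].

    A: length-S arrivals; Q, c: S x N queue lengths and unit costs;
    V: cost/queue tradeoff parameter. Returns the S x N schedule x."""
    S, N = Q.shape
    weights = V * c + Q                   # penalty factor per (service, server)
    n_star = np.argmin(weights, axis=1)   # best server for each service
    x = np.zeros((S, N))
    x[np.arange(S), n_star] = A           # schedule all of A[s] there
    return x
```

With empty queues the cheapest server wins; as its backlog grows, the arg-min shifts to costlier but less loaded servers, which is exactly the cost/congestion balancing described in the Remark below.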

Remark. There exist tradeoffs between the scheduling cost and the queue length of the MEC systems. Scheduling all the service requests to the MEC system with the lowest cost can reduce the overall scheduling cost; however, the queue length of that MEC system can become very large. The DSRS algorithm combines scheduling cost and queue length, and $V c_{s,n}(t) + Q_{s,n}(t)$ can be regarded as the penalty factor for each MEC system. Recall that $V$ controls the tradeoff between scheduling cost and queue length. The intuition of the optimal scheduling policy obtained by the DSRS algorithm is to minimize the penalty function of the MEC systems in each time slot. In this way, the DSRS algorithm can reduce both the scheduling cost and the queue length. In addition, by changing the value of $V$, the DSRS algorithm can achieve an arbitrary tradeoff between scheduling cost and queue length.

After the scheduling decisions are determined, the queue length updates according to (4). The detailed algorithm is shown in Algorithm 1.

Algorithm 1: Dynamic Service Request Scheduling (DSRS).

4. Algorithm Analysis

In this section, we present mathematical analysis of the bounds on the time-average queue length and scheduling cost of our DSRS algorithm. It can be proven that our algorithm can achieve a scheduling cost arbitrarily close to the optimal value while maintaining the stability of the MEC systems. Let $\bar{Q}$ denote the long-term time-average queue length,

$$\bar{Q} = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[Q_{s,n}(t)\right]. \quad (24)$$

We state in Lemma 2 that if the arrival process is independent and identically distributed (i.i.d.) over time slots, there exists a randomized policy which can achieve the minimum cost defined in (3), where the control decision follows a certain fixed probability distribution independent of the queue length matrix $Q(t)$.

Lemma 2. For any service request arrival rate $\lambda \in \Lambda$, where $\Lambda$ is the capacity region of the system, if the arrival process is i.i.d. over time slots, there exists a randomized policy $\Pi$ that determines the control decision $x^{\Pi}_{s,n}(t)$ in each time slot and achieves the following:

$$\mathbb{E}\left[C^{\Pi}(t)\right] = \bar{C}^{opt}(\lambda), \qquad \mathbb{E}\left[x^{\Pi}_{s,n}(t) - \mu_{s,n}(t)\right] \le 0, \quad \forall s, n, \quad (25)$$

where $\bar{C}^{opt}(\lambda)$ denotes the minimum time-average cost under the arrival rate $\lambda$.

Proof. Lemma 2 can be proven by Caratheodory's theorem in [28]; we omit the detailed proof here for brevity.

Since it is assumed that the service request arrival rate is upper bounded, there also exist an upper bound and a lower bound of the objective $\bar{C}$. Then, we derive the bounds on the queue length and scheduling cost of the DSRS algorithm based on Lemma 2.

Theorem 3. Assume that there exists $\epsilon > 0$ satisfying $\lambda + \epsilon \mathbf{1} \in \Lambda$; then, under our DSRS algorithm, for any value of the parameter $V > 0$, the time-average queue length defined in (24) is bounded as

$$\bar{Q} \le \frac{B + V \left(\bar{C}^{opt}(\lambda + \epsilon \mathbf{1}) - \bar{C}^{\min}\right)}{\epsilon}, \quad (26)$$

where $\bar{C}^{\min}$ is a lower bound of the time-average scheduling cost. Furthermore, the time-average system scheduling cost can be bounded as

$$\bar{C} \le \bar{C}^{opt}(\lambda) + \frac{B}{V}, \quad (27)$$

which shows that the cost derived by our DSRS algorithm can approach the optimal value by increasing the parameter $V$. Here, $B$ is the constant defined in Theorem 1.

Proof. Since it holds that $\lambda + \epsilon \mathbf{1} \in \Lambda$, we can obtain from Lemma 2 that there exists a randomized policy $\Pi$ which satisfies

$$\mathbb{E}\left[C^{\Pi}(t)\right] = \bar{C}^{opt}(\lambda + \epsilon \mathbf{1}), \quad (28)$$

$$\mathbb{E}\left[x^{\Pi}_{s,n}(t) - \mu_{s,n}(t)\right] \le -\epsilon, \quad \forall s, n. \quad (29)$$

As our DSRS algorithm achieves the minimum value of the R.H.S. of (10) among all feasible policies (including policy $\Pi$), it can be obtained that

$$\Delta(Q(t)) + V\,\mathbb{E}\left[C(t) \mid Q(t)\right] \le B + V\,\mathbb{E}\left[C^{\Pi}(t) \mid Q(t)\right] + \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} Q_{s,n}(t)\, \mathbb{E}\left[x^{\Pi}_{s,n}(t) - \mu_{s,n}(t) \mid Q(t)\right]. \quad (30)$$

Substituting (28) and (29) into the R.H.S. of (30), taking expectations on both sides, and then using iterated expectations, we can yield

$$\mathbb{E}\left[L(Q(t+1))\right] - \mathbb{E}\left[L(Q(t))\right] + V\,\mathbb{E}\left[C(t)\right] \le B + V\,\bar{C}^{opt}(\lambda + \epsilon \mathbf{1}) - \epsilon \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[Q_{s,n}(t)\right]. \quad (31)$$

Moving $V\,\mathbb{E}[C(t)]$ to the R.H.S. of (31), it can be obtained that

$$\mathbb{E}\left[L(Q(t+1))\right] - \mathbb{E}\left[L(Q(t))\right] \le B + V\,\bar{C}^{opt}(\lambda + \epsilon \mathbf{1}) - V\,\mathbb{E}\left[C(t)\right] - \epsilon \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[Q_{s,n}(t)\right]. \quad (32)$$

To be general, we assume the queues are empty when $t = 0$. By summing both sides of (32) over $t \in \{0, 1, \dots, T-1\}$ and applying the facts that $L(Q(T)) \ge 0$ and $\mathbb{E}[C(t)] \ge \bar{C}^{\min}$, we can obtain

$$\epsilon \sum_{t=0}^{T-1} \sum_{s \in \mathcal{S}} \sum_{n \in \mathcal{N}} \mathbb{E}\left[Q_{s,n}(t)\right] \le T B + T V \left(\bar{C}^{opt}(\lambda + \epsilon \mathbf{1}) - \bar{C}^{\min}\right). \quad (33)$$

Dividing both sides of (33) by $\epsilon T$ and taking a limit as $T \to \infty$ yield (26).

By summing both sides of (31) over $t \in \{0, 1, \dots, T-1\}$ and applying the facts that $L(Q(T)) \ge 0$ and $Q_{s,n}(t) \ge 0$, it can be obtained that

$$V \sum_{t=0}^{T-1} \mathbb{E}\left[C(t)\right] \le T B + T V\, \bar{C}^{opt}(\lambda + \epsilon \mathbf{1}). \quad (34)$$

Dividing both sides of (34) by $V T$, we have

$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[C(t)\right] \le \frac{B}{V} + \bar{C}^{opt}(\lambda + \epsilon \mathbf{1}). \quad (35)$$

Taking a limit of (35) as $T \to \infty$, applying Lebesgue's dominated convergence theorem, and letting $\epsilon \to 0$ yield (27).

Remark. Theorem 3 shows that our DSRS algorithm can achieve a tradeoff between the time-average scheduling cost and queue length. According to (27), the gap between the time-average scheduling cost obtained by our DSRS algorithm and the optimal value is within $B/V$. By setting the value of $V$ sufficiently large, the DSRS algorithm can approach the optimal scheduling cost. However, a large $V$ will cause a large queue backlog in the MEC systems. Nevertheless, the queue length obtained by our DSRS algorithm is also bounded according to (26), and constraint (5) can be satisfied for any finite value of $V$.

Then, we analyze the time complexity of the DSRS algorithm. According to Algorithm 1, in each of the two inner loops (lines 5-10 and lines 11-17), the DSRS algorithm traverses each edge server once. Therefore, each inner loop terminates in $O(N)$ operations, where $N$ is the number of edge servers. For the outer loop (lines 1-18), since the request scheduling of different services is independent, it terminates in $O(S)$ iterations, where $S$ is the number of services. Thus, the time complexity of the DSRS algorithm is $O(SN)$.

5. Evaluation

In this section, we conduct experiments to evaluate our DSRS algorithm. First, we analyze the impact of parameters. Then, we present comparison experiments which show the effectiveness of our DSRS algorithm.

In the experiments, we consider 4 MEC systems, each with an edge server providing services for the offloaded requests. There are two types of heterogeneous services. For each service $s$, the request arrival process is generated according to a Poisson distribution with arrival rate $\lambda_s$ [29]. Note that the DSRS algorithm actually requires no knowledge of the statistical information of request arrivals. Without loss of generality, we assume the MEC systems are heterogeneous, with different computing capacities $\mu_{s,n}$ for $n \in \{1, 2, 3, 4\}$. The unit scheduling cost of each MEC system is set to be positively related to its computing capacity.
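A setup of this shape can be mimicked with a short simulation loop. The concrete numbers below (arrival rates, capacities, cost scale, and $V$) are illustrative placeholders of our own choosing, since the paper's exact values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

S, N, T, V = 2, 4, 1000, 50.0                   # services, MEC systems, slots, tradeoff (illustrative)
lam = np.array([20.0, 30.0])                    # Poisson arrival rates (illustrative)
mu = np.tile(np.array([10.0, 15.0, 20.0, 25.0]), (S, 1))  # heterogeneous capacities (illustrative)
c = 0.1 * mu                                    # unit cost positively related to capacity

Q = np.zeros((S, N))                            # queue length matrix
costs = []
for t in range(T):
    A = rng.poisson(lam)                        # offloaded requests this slot
    n_star = np.argmin(V * c + Q, axis=1)       # DSRS min-weight decision per service
    x = np.zeros((S, N))
    x[np.arange(S), n_star] = A
    costs.append((c * x).sum())                 # scheduling cost of this slot
    Q = np.maximum(Q - mu, 0) + x               # queue update
print("avg cost:", np.mean(costs), "final total queue:", Q.sum())
```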

5.1. Parameter Analysis
5.1.1. Effect of Tradeoff Parameter

Figures 1 and 2 show the time-average scheduling cost and queue length of the MEC systems with different values of $V$. In Figure 1, it can be seen that the scheduling cost decreases as the value of $V$ increases, which is in accordance with (27) in Theorem 3. This is because as $V$ increases, more weight is put on the scheduling cost, and the DSRS algorithm schedules more service requests to the MEC systems with lower unit costs in order to reduce the overall scheduling cost. However, Figure 2 shows that the queue length rises with the increase of $V$, which is consistent with (26) in Theorem 3. Nevertheless, the queue length gradually stabilizes as $V$ increases further. Together, Figures 1 and 2 show that the DSRS algorithm can make a tradeoff between scheduling cost and queue length by adjusting the value of $V$.

Figure 1: Scheduling cost with different values of $V$.
Figure 2: Queue length with different values of $V$.
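The qualitative tradeoff shown in Figures 1 and 2 can be reproduced by sweeping the parameter $V$ in a toy simulation. All numeric values below are illustrative assumptions of ours, not the paper's settings:

```python
import numpy as np

def run(V, T=3000, seed=0):
    """Simulate T slots of DSRS with tradeoff parameter V; return the
    time-average scheduling cost and time-average total queue length."""
    rng = np.random.default_rng(seed)
    S, N = 2, 4
    lam = np.array([20.0, 30.0])                  # illustrative arrival rates
    mu = np.tile(np.array([10.0, 15.0, 20.0, 25.0]), (S, 1))
    c = 0.1 * mu                                  # cost grows with capacity
    Q = np.zeros((S, N))
    cost = qlen = 0.0
    for _ in range(T):
        A = rng.poisson(lam)
        n = np.argmin(V * c + Q, axis=1)          # DSRS min-weight decision
        x = np.zeros((S, N))
        x[np.arange(S), n] = A
        cost += (c * x).sum()
        qlen += Q.sum()
        Q = np.maximum(Q - mu, 0) + x
    return cost / T, qlen / T

for V in (1.0, 10.0, 100.0):
    print(V, run(V))
```

In this sketch, a larger $V$ yields a lower average cost but a larger average backlog, mirroring the trends discussed above.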
5.1.2. Effect of Service Request Arrival Rate

We analyze the effect of the service request arrival rate on the scheduling cost and queue length. In the experiments, for each service $s$, we scale the service request arrival rate $\lambda_s$ up or down by a factor $\alpha$ and consider three different cases with different values of $\alpha$. Figures 3 and 4 show that both the scheduling cost and the queue length increase as the request arrival rate increases. Nevertheless, the queue length can stabilize quickly as the service request arrival rate increases. This shows that our DSRS algorithm can dynamically adjust the request scheduling decisions according to different service request arrivals and maintain the stability of the MEC systems.

Figure 3: Scheduling cost with different arrival rates.
Figure 4: Queue length with different arrival rates.
5.1.3. Effect of Unit Scheduling Cost

To analyze the effect of the unit scheduling cost on the MEC systems, we scale the unit scheduling cost up or down by a factor $\beta$ and consider three different cases with different values of $\beta$. We can see from Figure 5 that the overall scheduling cost rises as the unit scheduling cost increases, since the scheduling cost of each request increases. In Figure 6, we can see that the queue length of the MEC systems also increases with the unit scheduling cost. The reason is that our DSRS algorithm tries to achieve a low scheduling cost by scheduling more requests to the MEC systems with smaller unit scheduling costs. However, this leads to a larger queue backlog in some MEC systems.

Figure 5: Scheduling cost with different unit scheduling costs.
Figure 6: Queue length with different unit scheduling costs.
5.2. Comparison Experiment

We conduct a comparison experiment between our DSRS algorithm and a Randomized algorithm to evaluate the effectiveness of the DSRS algorithm. The Randomized algorithm schedules the service requests to randomly chosen MEC systems. The scheduling costs and queue lengths of the two algorithms are shown in Figures 7 and 8, respectively.

Figure 7: Scheduling cost under different algorithms.
Figure 8: Queue length under different algorithms.

We can see from Figure 7 that the scheduling cost of our DSRS algorithm is smaller than that of the Randomized algorithm, which shows the effectiveness of our DSRS algorithm in reducing cost. In Figure 8, we can observe that the queue length of the Randomized algorithm is slightly smaller than that of our DSRS algorithm at the very beginning. However, as time goes by, the queue length of the Randomized algorithm increases continuously, while the queue length of our DSRS algorithm stabilizes quickly and remains at a small level. The reason is that our DSRS algorithm can adjust scheduling decisions dynamically according to the current queue backlog and maintain a low congestion state in the MEC systems. Together, Figures 7 and 8 show the effectiveness of our DSRS algorithm in optimizing both scheduling cost and queue length.
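A toy version of this comparison can be written as follows. The parameter values are our own illustrative assumptions, not the paper's; the sketch typically reproduces the qualitative cost gap discussed above:

```python
import numpy as np

def simulate(policy, T=2000, V=50.0, seed=1):
    """Run T slots under 'dsrs' or 'random' scheduling; return the
    time-average scheduling cost and the final total queue length."""
    rng = np.random.default_rng(seed)
    S, N = 2, 4
    lam = np.array([20.0, 30.0])                  # illustrative arrival rates
    mu = np.tile(np.array([10.0, 15.0, 20.0, 25.0]), (S, 1))
    c = 0.1 * mu                                  # cost grows with capacity
    Q = np.zeros((S, N))
    total_cost = 0.0
    for _ in range(T):
        A = rng.poisson(lam)
        if policy == "dsrs":
            n = np.argmin(V * c + Q, axis=1)      # min-weight decision
        else:
            n = rng.integers(0, N, size=S)        # random server per service
        x = np.zeros((S, N))
        x[np.arange(S), n] = A
        total_cost += (c * x).sum()
        Q = np.maximum(Q - mu, 0) + x
    return total_cost / T, Q.sum()

print("DSRS:", simulate("dsrs"))
print("Random:", simulate("random"))
```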

6. Conclusion

In this article, we study dynamic request scheduling for MEC systems. We formulate it as an optimization problem whose goal is to optimize the scheduling cost while providing performance guarantees. We propose the DSRS algorithm to solve the optimization problem, which transforms it into a series of subproblems and solves each one efficiently in a distributed way. Mathematical analysis is presented which demonstrates that the DSRS algorithm can approach the optimal scheduling cost while bounding the queue length. Parameter analysis experiments and comparison experiments are both conducted to verify the effectiveness of the DSRS algorithm.

Data Availability

Most of the simulation data supporting this study are included within the article. Further information about the data is available from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 61370065 and no. 61502040), the Key Research and Cultivation Projects at Beijing Information Science and Technology University (no. 5211823411), Beijing Municipal Program for Excellent Teacher Promotion (no. PXM2017 014224 000028), and the Supplementary and Supportive Project for Teachers at Beijing Information Science and Technology University (no. 5111823401).

References

  1. Q. Ye and W. Zhuang, “Distributed and Adaptive Medium Access Control for Internet-of-Things-Enabled Mobile Networks,” IEEE Internet of Things Journal, vol. 4, no. 2, pp. 446–460, 2017.
  2. M. Jia, J. Cao, and W. Liang, “Optimal cloudlet placement and user to cloudlet allocation in wireless metropolitan area networks,” IEEE Transactions on Cloud Computing, vol. 5, no. 4, pp. 755–737, 2017.
  3. L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for mobile computing,” in Proceedings of the 35th Annual IEEE International Conference on Computer Communications (IEEE INFOCOM 2016), pp. 1–9, April 2016.
  4. B.-G. Chun, S. Ihm, P. Maniatis et al., “CloneCloud: Elastic execution between mobile device and cloud,” in Proceedings of the 6th ACM EuroSys Conference on Computer Systems (EuroSys '11), pp. 301–314, ACM, New York, NY, USA, April 2011.
  5. S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, “ThinkAir: dynamic resource allocation and parallel execution in the cloud for mobile code offloading,” in Proceedings of the IEEE INFOCOM, pp. 945–953, March 2012.
  6. M. S. Gordon, D. A. Jamshidi, S. Mahlke, Z. M. Mao, and X. Chen, “COMET: Code offload by migrating execution transparently,” in Proceedings of the 10th USENIX Conference on Operating Systems Design and Implementation, pp. 93–106, Berkeley, Calif, USA, 2012.
  7. J. Kwak, Y. Kim, J. Lee, and S. Chong, “DREAM: Dynamic Resource and Task Allocation for Energy Minimization in Mobile Cloud Systems,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, pp. 2510–2523, 2015.
  8. S. Deng, L. Huang, J. Taheri, and A. Y. Zomaya, “Computation offloading for service workflow in mobile cloud computing,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 12, pp. 3317–3329, 2015.
  9. X. Lyu, W. Ni, H. Tian et al., “Optimal schedule of mobile edge computing for internet of things using partial information,” IEEE Journal on Selected Areas in Communications, vol. 35, no. 11, pp. 2606–2615, 2017.
  10. J. Liu, S. Wang, A. Zhou, F. Yang, and R. Buyya, “Availability-aware Virtual Cluster Allocation in Bandwidth-Constrained Datacenters,” IEEE Transactions on Services Computing, 2017.
  11. W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.
  12. Y. Yu, J. Zhang, and K. B. Letaief, “Joint subcarrier and CPU time allocation for mobile edge computing,” in Proceedings of the IEEE Global Communications Conference (GLOBECOM), pp. 1–6, December 2016.
  13. Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, “Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 5994–6009, 2017.
  14. S. Wang, J. Xu, N. Zhang, and Y. Liu, “A Survey on Service Migration in Mobile Edge Computing,” IEEE Access, vol. 6, pp. 23511–23528, 2018.
  15. Y. Kim, J. Kwak, and S. Chong, “Dual-Side Optimization for Cost-Delay Tradeoff in Mobile Edge Computing,” IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1765–1781, 2018.
  16. Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 12, pp. 3590–3605, 2016.
  17. S. Wang, Y. Zhao, L. Huang, J. Xu, and C.-H. Hsu, “QoS prediction for service recommendations in mobile edge computing,” Journal of Parallel and Distributed Computing, 2017.
  18. J. Zheng, Y. Cai, Y. Wu, and X. S. Shen, “Dynamic computation offloading for mobile cloud computing: A stochastic game-theoretic approach,” IEEE Transactions on Mobile Computing, 2018.
  19. C. You, K. Huang, H. Chae, and B.-H. Kim, “Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1397–1411, 2017.
  20. H. Tan, Z. Han, X. Li, and F. C. Lau, “Online job dispatching and scheduling in edge-clouds,” in Proceedings of the IEEE INFOCOM 2017 - IEEE Conference on Computer Communications, pp. 1–9, Atlanta, Ga, USA, May 2017.
  21. S. Deng, L. Huang, D. Hu, J. L. Zhao, and Z. Wu, “Mobility-Enabled Service Selection for Composite Services,” IEEE Transactions on Services Computing, vol. 9, no. 3, pp. 394–407, 2016.
  22. Q. Ye and W. Zhuang, “Token-Based Adaptive MAC for a Two-Hop Internet-of-Things Enabled MANET,” IEEE Internet of Things Journal, vol. 4, no. 5, pp. 1739–1753, 2017.
  23. R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, and K. K. Leung, “Dynamic service migration and workload scheduling in edge-clouds,” Performance Evaluation, vol. 91, pp. 205–228, 2015.
  24. Y. Nan, W. Li, W. Bao et al., “Adaptive Energy-Aware Computation Offloading for Cloud of Things Systems,” IEEE Access, vol. 5, pp. 23947–23957, 2017.
  25. Z. Zhou, F. Liu, H. Jin, B. Li, B. Li, and H. Jiang, “On arbitrating the power-performance tradeoff in SaaS clouds,” in Proceedings of the 32nd IEEE Conference on Computer Communications (IEEE INFOCOM 2013), pp. 872–880, Italy, April 2013.
  26. S. Ren, Y. He, and F. Xu, “Provably-efficient job scheduling for energy and fairness in geographically distributed data centers,” in Proceedings of the 32nd IEEE International Conference on Distributed Computing Systems (ICDCS 2012), pp. 22–31, China, June 2012.
  27. K. Ha, P. Pillai, W. Richter et al., “Just-in-time provisioning for cyber foraging,” in Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '13), pp. 153–166, New York, NY, USA, 2013.
  28. M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems, Morgan & Claypool, 2010.
  29. E. Chlebus and J. Brazier, “Nonstationary Poisson modeling of web browsing session arrivals,” Information Processing Letters, vol. 102, no. 5, pp. 187–190, 2007.