Abstract

Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of their MDs but also save energy by offloading mobile applications to the cloud. However, MCC faces an energy-efficiency problem because the wireless channels are time-varying while offloading is being executed. In this paper, we address the issue of energy-efficient scheduling for the wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that dynamically chooses a channel for data transmission based on the queue backlog and channel statistics. We then show that the proposed scheduling algorithm achieves a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed scheduling algorithm reduces the time-average energy consumption for offloading compared to existing algorithms.

1. Introduction

Mobile devices (MDs) are increasingly becoming an essential part of human life [1]. As highly effective and convenient tools, MDs are not bound by time or place. However, the limited computational power, storage space, and battery lifetime of existing MDs significantly limit their ability to execute resource-intensive applications [2]. Fortunately, mobile cloud computing (MCC), which combines cloud computing with the mobile internet, can provide services via an Infrastructure as a Service (IaaS) platform [3]. Using such an IaaS platform, on the one hand, the performance of MDs can be improved by offloading mobile applications to cloud servers. On the other hand, as users offload mobile applications to the cloud, the amount of data transmitted over both wired and wireless networks grows rapidly, and the resulting communication overhead significantly drains the MD's battery [4]. Therefore, reducing the energy consumption of transmission is one of the most significant issues in MCC.

The issue of energy efficiency for data transmission in wireless networks was investigated in [5–7]. In [5], Zafer and Modiano used a continuous-time optimal-control formulation and Lagrangian duality to derive an optimal transmission schedule that dynamically adapts the rate over time to channel variations in order to minimize the transmission energy cost. In [6], Fu and van der Schaar investigated structure-aware online learning for energy-efficient and delay-sensitive transmission. In [7], Neely exploited time-varying channel conditions to design an energy-efficient control (EEC) algorithm based on Lyapunov optimization. However, all of the above papers consider only a single user transmitting data over a time-varying channel. To this end, further work [8, 9] has focused on energy efficiency in the multiuser multichannel scenario. In [8], Li and Neely considered a wireless base station serving users through time-varying channels and proposed a dynamic channel acquisition algorithm (DCAA) in which all mobile users dynamically share all wireless channels. In [9], Xiang et al. used a discrete-time stochastic dynamic program to propose an approximate dynamic programming (ADP) scheme that dynamically selects channels and schedules data transmission under time-varying channel conditions. The basic idea of these schemes is to optimize the energy efficiency of downlink data transmission between base stations and MDs.

All of the above work focuses on wireless downlinks, whereas the energy efficiency of the wireless uplink has received far less attention. In fact, uploading data consumes more energy than downloading data [10]; hence, reducing the energy consumption of uploading is the key to saving energy on MDs in an MCC system. In [11], Ra et al. implemented the Lyapunov optimization framework on MDs in a multiple-wireless-uplink environment and designed a stable and adaptive link selection algorithm (SALSA) with time-varying control values to meet different applications' delay tolerances. SALSA automatically selects channels and uses channel state information to decide whether and when to defer a transmission. However, once SALSA has selected a channel for a user, that user keeps the dedicated channel while the application is executing. In this way, a user might miss better transmission opportunities on other channels, which limits the reduction of the MD's energy consumption. For example, consider two users (1 and 2) uploading data over two time-varying channels (A and B). Suppose SALSA allocates channel A to user 1 and channel B to user 2 at the beginning of uploading. Assume that, after a period of time, user 1 has a low service rate on channel A and a high service rate on channel B, while user 2 has a low service rate on channel B and a high service rate on channel A. Under SALSA, on the one hand, users may defer data transfer until their own channel's service rate is high enough, rather than being reallocated to another channel. On the other hand, as the queue backlog increases, users may end up transmitting data to the cloud at a low service rate. SALSA does provide an energy-delay tradeoff, but it incurs high energy consumption because it cannot reallocate channels, so users may upload data at a low service rate once the queue backlog becomes large.

In this paper, based on an MCC system with an IaaS platform, we address the issue of energy-efficient scheduling for resource-intensive applications that need to upload large amounts of data to the cloud, such as mHealth, mobile commerce, and mobile office. In our scheduling algorithm, the mobile user chooses appropriate channels to transmit application data packets based on the queue backlog and channel statistics in every time unit. In the above example, our scheduling algorithm can transmit user 1's data over channel B when the service rate of channel B is higher than that of channel A. Compared with the SALSA algorithm, our scheduling algorithm has a lower queue backlog because it can transmit a user's data over any channel that offers a high service rate. Thus, for the same control parameter V, which controls the tradeoff between energy consumption and queue backlog, our scheduling algorithm consumes less energy than the SALSA algorithm. Our main contributions are as follows. (i) Adopting the framework of Lyapunov optimization, we propose a two-time dynamic offloading (T2DO) algorithm that decreases the energy consumption of MDs by considering the queue backlog and channel states. (ii) We demonstrate that the proposed algorithm approaches the optimal energy consumption within an O(1/V) deviation, at the cost of an O(V) average queue backlog. (iii) We compare the performance of our T2DO algorithm with the SALSA, random, and minimum-delay algorithms using simulation. The results show that, by appropriately choosing V, the T2DO algorithm outperforms the other three algorithms, achieving smaller time-average energy consumption while maintaining queue stability.

The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 presents the system model and problem statement for the MCC. In Section 4, based on Lyapunov optimization, we propose the T2DO algorithm, which makes a tradeoff between queue backlog and energy consumption. The performance analysis of the T2DO algorithm is given in Section 5. In Section 6, we compare the performance of the T2DO algorithm with the SALSA, random, and minimum-delay algorithms using simulation. Section 7 concludes the paper and provides future directions.

2. Related Work

MCC is an emerging technology that extends the capabilities of MDs through offloading. According to [12], not all application offloading extends the battery life of MDs. Thus, a large body of previous work [10, 13–16] has investigated application offloading schemes. In [10], Altamimi et al. proposed an energy model for task offloading to the cloud, which minimizes power consumption on the MD under the responsiveness and accuracy constraints of interactive perception tasks. In [13], Chen formulated the decision problem of computation offloading among mobile users as a decentralized computation offloading game and proposed a game-theoretic computation offloading mechanism to save energy on the MD. In [14], Yang et al. discussed an execution offloading approach that reduces the size of the transferred state and proposed an execution offloading scheme that significantly improves MCC performance in terms of execution time and energy consumption. In [15], Zhang et al. investigated collaborative task execution between an MD and its cloud clone for mobile applications under a stochastic wireless channel. In [16], Zhang et al. proposed an energy-optimal execution strategy for MCC under a stochastic wireless channel: for mobile execution, it minimizes the computation energy by dynamically configuring the clock frequency of the chip; for cloud execution, it minimizes the transmission energy by optimally scheduling data transmission across the stochastic wireless channel.

With offloading, the amount of data transferred between the MD and the cloud increases rapidly, so higher data transmission energy consumption is incurred, especially over a poor wireless channel. Several studies have examined the energy cost of data transmission for offloading. In [17], Kumar and Lu considered a fixed computation schedule with a fixed data-rate model on the MD for the wireless channel. In [18], Huang et al. presented a dynamic offloading algorithm that transfers data so as to save energy on the MD while meeting the application execution time. In [19], Tilevich and Kwon presented an approach that determines which functionality to offload at runtime, maximizing efficiency by automating program transformation. In [9], Xiang et al. presented flexible link selection and data transmission scheduling and proposed a scalable approximate dynamic programming (ADP) algorithm based on a discrete-time stochastic dynamic program to reduce the average energy consumed for delivering a packet.

In real-world applications, uplink power consumption dominates the wireless power budget because of the RF power required for reliable transmission over long distances; thus the energy efficiency of uploading data is a key issue in MCC, given the limited energy of MDs. Several researchers have worked on energy-optimal scheduling for wireless uplinks. In [20], Katranaras et al. considered clustered cooperation and investigated effective techniques for managing intercluster interference on the uplink to improve users' performance in terms of both spectral and energy efficiency. In [21], Miao et al. developed an energy-efficient scheme with significantly lower complexity than iterative approaches in an uplink OFDMA system; the scheme allocates the system bandwidth among all users to optimize energy efficiency across the whole network. In [22], Liu et al. proposed a dynamic carrier aggregation (DCA) scheme to improve the energy efficiency of uplink communications. In [23], Deb and Monogioudis proposed LeAP, a measurement-data-driven machine learning paradigm for power control to manage uplink interference in LTE. The authors in [11] derived an energy-efficient scheduling scheme that automatically selects channels and uses channel state information to decide whether and when to defer data uploading. However, it cannot reselect channels while an application is executing, even when other channels offer a higher service rate than the current channel.

This paper investigates energy-efficient scheduling for wireless uplinks in MCC. Compared to previous work, this paper differs in several respects. First, we employ a multiqueue model and allow the MD to reselect an appropriate channel for data transmission while an application is executing. Second, we provide a theoretical framework for data uploading in which the MD first determines the amount of data to transfer for each offloading application so as to balance the queue lengths among all queues, and then chooses appropriate channels to transmit its data according to the queue backlog and channel statistics in every time unit. Finally, we prove that the proposed scheduling achieves an explicit [O(1/V), O(V)] energy-queue tradeoff for wireless uplinks in the MCC system.

3. System Model and Problem Statement

In this section, we first present the model for resource-intensive application offloading in the MCC system. Then, we formulate the energy-efficiency problem for wireless uplinks. Finally, we use mHealth as an instance to illustrate the process of the proposed algorithm.

3.1. System Model

We consider an MCC system with an IaaS platform that owns a server cluster to support mobile users. Without loss of generality, we use one server to represent the server cluster. When users offload mobile applications, the server allocates an appropriate number of virtual machines to execute these applications. For more details about this MCC system, one can refer to [24–26]. The MCC system operates in discrete time units of fixed length T, and we use t to index the time units. We consider a user who offloads N heterogeneous applications to the cloud through K time-varying channels. The data generated by each offloading application is processed in a corresponding queue Q_n, n = 1, ..., N, which operates in a discrete-time-unit manner. For any application n, Q_n(t) represents the backlog of data to be transmitted from the MD to the cloud at the beginning of time unit t. Let A(t) = (A_1(t), ..., A_N(t)) be the vector of newly generated data of the offloading applications in time unit t, referred to as the data arrival rates, which enter the corresponding queues. Suppose each A_n(t) is Poisson distributed with mean λ_n, so that E[A_n(t)] = λ_n. The system model is shown in Figure 1.
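As a concrete illustration of the model above, the following minimal Python sketch sets up the multiqueue state just described; the names (N, K, T, lam, arrivals) and the specific values are illustrative assumptions rather than the paper's exact notation (Section 6 uses 800 Kbit/s arrivals per application).

import numpy as np

rng = np.random.default_rng(0)

N = 3              # number of offloading applications (queues); illustrative value
K = 4              # number of time-varying uplink channels (Section 6 uses four)
T = 1.0            # length of one time unit in seconds
lam = np.array([300.0] * N)   # illustrative Poisson means of A_n(t) in Kbit per time unit

Q = np.zeros(N)    # queue backlogs Q_n(t), assumed empty at the first time unit

def arrivals():
    # Newly generated data A_n(t) for each application in one time unit,
    # drawn from a Poisson distribution with mean lam_n.
    return rng.poisson(lam).astype(float)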

Let S(t) = (S_1(t), ..., S_K(t)) be the vector of current channel states of the K channels during time unit t. After acquiring the channel states, the MD chooses appropriate queues to transmit their data packets in each time slot of length τ, where each time unit contains T/τ slots. Let μ_{n,k}(t) be the service rate of application n when supported by channel k, k = 1, ..., K. The service rate of queue n in time unit t is then obtained by accumulating, over all channels, the rate μ_{n,k}(t) weighted by x_{n,k}(t), where x_{n,k}(t) denotes the number of slots in which queue n is supported by channel k in time unit t.

According to [27], the power state of the MD is divided into an active state and an idle state. Let p_a(S_k(t)) be the power of the MD in the active state when it is supported by channel k with current channel state S_k(t) in time unit t, and let p_I be the idle power of the MD. The energy consumption e(t) of the user in time unit t is then the sum of the energy consumed in the active slots and the energy consumed in the idle slots.
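Continuing the sketch above, the following function illustrates this two-state accounting of per-time-unit service and energy; the slot structure and the helper name service_and_energy are our own, the active power is assumed constant across channel states for simplicity, and the power values follow the simulation settings of Section 6, which cite [27].

SLOT = 0.1                        # slot length tau in seconds (Section 6 uses 0.1 s)
SLOTS_PER_UNIT = round(T / SLOT)  # number of transmission slots per time unit
P_ACTIVE = 1.680                  # active-state power in watts [27]
P_IDLE = 0.594                    # idle-state power in watts [27]

def service_and_energy(schedule, mu):
    # schedule: list of length SLOTS_PER_UNIT; entry (n, k) means queue n transmits
    #           on channel k in that slot, None means the MD stays idle.
    # mu:       N x K array of per-slot service rates mu_{n,k}(t) in Kbit/slot,
    #           held fixed within the time unit since channel states vary per unit.
    served = np.zeros(N)          # data drained from each queue in this time unit (Kbit)
    energy = 0.0                  # energy e(t) consumed in this time unit (joules)
    for decision in schedule:
        if decision is None:
            energy += P_IDLE * SLOT
        else:
            n, k = decision
            served[n] += mu[n, k]
            energy += P_ACTIVE * SLOT
    return served, energy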

3.2. Problem Statement

In the following, we assume that the MD can accurately estimate the unfinished data in its queues; Q_n(t) is the unfinished data of offloading application n remaining at the MD at time unit t. Each queue evolves according to the following queueing dynamics: the backlog at the beginning of the next time unit equals the current backlog minus the amount of data served in time unit t (truncated at zero), plus the newly arrived data A_n(t). Throughout the paper, we require all queues to be stable, meaning that the time-average expected total queue length is finite, where the expectation is taken over the randomness of the arrivals and channel states.
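A minimal sketch of this queue update, continuing the earlier notation; the max-with-zero recursion is the standard Lyapunov-style queueing dynamic and is assumed here rather than copied from the paper's displayed equation.

def update_queues(Q, served, A):
    # Q_n(t+1) = max(Q_n(t) - served_n(t), 0) + A_n(t)
    return np.maximum(Q - served, 0.0) + A

def time_average_backlog(history):
    # Stability requires the time-average expected total backlog to remain finite;
    # empirically we track (1/t) * sum over time of sum_n Q_n(t).
    return float(np.mean([q.sum() for q in history]))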

In practice, dead spots or coverage holes usually cause disconnection (i.e., WLAN and 3G are not available). In this situation, a huge queue backlog or even queue instability might result, because the mobile application can no longer be offloaded to the cloud. In fact, this problem can be tackled by temporarily stopping the cloud services; we will treat this case as a joint optimization of local execution and transmission cost in future work. Moreover, MCC faces a channel-competition problem when several users offload mobile applications to the cloud at the same time. This problem has been treated in [28, 29] and is beyond the scope of this paper. In this paper, we consider only a single mobile user offloading mobile applications from the MD to the cloud.

The focus of our work is energy-optimal scheduling for data uploading over time-varying wireless channels. We call every feasible policy that ensures (4) a stable policy and use e* to denote the infimum time-average energy consumption over all stable policies. The time-average energy consumption of a feasible policy is then the long-run average of the expected per-time-unit energy consumption e(t) incurred on the MD by that policy.

The objective is to find a stable policy that reduces the number of transmission slots in every time unit t so as to minimize the time-average energy consumption of the MD. We refer to this as the energy consumption minimization (ECM) problem in the remainder of the paper.

4. A Two-Time Dynamic Offloading Algorithm

In this section, we first analyze the ECM problem using Lyapunov optimization. Then, we describe the T2DO algorithm and discuss the corresponding insights and implementation-related issues.

4.1. The ECM Problem Analysis Using Lyapunov Optimization

We first define the Lyapunov function L(Q(t)) to measure the aggregate queue backlog in the system. Next, we define the Lyapunov drift Δ(t) as the expected change in the Lyapunov function from one time unit to the next. Following the Lyapunov optimization approach, we add the expected energy consumption over the time unit, weighted by the control parameter V (i.e., a penalty function), to the drift in (7), which leads to the drift-plus-penalty term. The key step is to obtain an upper bound on this term. Before further discussing the drift-plus-penalty term, we first present a lemma from [30], which is used in deriving the upper bound of the drift-plus-penalty term.
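For concreteness, the standard forms of these quantities in the Lyapunov optimization literature (e.g., [7, 30]) are sketched below; the paper's own displayed equations are not reproduced here, so the exact weights may differ, and the notation Q_n(t), e(t), and V follows Section 3.

L(\mathbf{Q}(t)) = \frac{1}{2}\sum_{n=1}^{N} Q_n(t)^2,
\qquad
\Delta(t) = \mathbb{E}\big[\, L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \mid \mathbf{Q}(t) \,\big],

and the drift-plus-penalty term to be bounded and then minimized in every time unit is

\Delta(t) + V\,\mathbb{E}\big[\, e(t) \mid \mathbf{Q}(t) \,\big].

Algorithm 1 is obtained by opportunistically minimizing an upper bound on this term in each time unit.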

Lemma 1. If y, x, b, and A are nonnegative and y = max(x − b, 0) + A, then y² ≤ x² + b² + A² − 2x(b − A).

Based on (7) and Lemma 1, we derive a theorem, which characterizes the upper bound for our case.

Theorem 2. For any given control parameter V ≥ 0 and under any possible scheduling actions, the drift-plus-penalty term satisfies the upper bound in (8), where B is a finite constant determined by the maximum arrival and service rates.

Proof. Squaring both sides of the queueing dynamics (1) and applying Lemma 1 yields (9). Summing (9) over all queues and using the boundedness of the arrival and service rates, we obtain (10). Substituting (10) into (7) gives (11). Therefore, by defining the constant B accordingly and adding the penalty term to both sides of (11), we obtain (8).

4.2. The T2DO Algorithm Design (See Algorithm 1)

According to [18], the design principle of the Lyapunov framework is to minimize the upper bound of the drift-plus-penalty term; that is, in every time unit t, the scheduling decision is chosen to minimize the right-hand side of (8). In fact, the scheduling decision only affects the energy consumption e(t) and the service rates of the queues in time unit t; hence we can minimize the right-hand side of (8) by minimizing a simplified term that weighs the V-scaled energy cost against the backlog-weighted service rates, as in Algorithm 1.

Input: queue backlogs, channel states, and the control parameter V
Output: the queue and channel scheduled for transmission in each slot of time unit t
(1) At the beginning of time unit t, monitor the queue backlogs.
(2) Monitor the channel states between the cloud and the MD.
(3) The mobile agent chooses the queue whose data to transmit so as to minimize expression (*).
(4) if transmitting is favored by (*) then
(5)   The mobile agent chooses the channels on which to transmit the data so as to minimize expression (**).
(6) else
(7)   Stay idle for energy conservation.
(8) end if
(9) Update the queues using (1).

We now describe the system implementation of the T2DO algorithm.

The T2DO algorithm works on two different time scales. The MD assigns the service rate of each offloading application at the beginning of every time unit t, and then chooses the channel on which to transmit data in every time slot. These two time scales are important from an implementation perspective, because the MD's decision interval T is usually much longer than the channels' data transmission slot τ.

The proposed T2DO algorithm has two important properties. First, the MD's assignment decision remains unchanged for a whole time unit, so we can increase T to reduce the computational overhead at the MD. Second, a channel can defer the transmission of data if its channel state is too poor: the MD may choose not to transmit data in a particular time slot or even a whole time unit, even if its queues are backlogged, because of the low data transmission rates available. A sketch of the resulting per-slot decision is given below.
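The following Python sketch, continuing the notation of the earlier sketches, shows one way this per-slot decision could be realized under the drift-plus-penalty rule. The weight Q[n] * mu[n, k] - V * (P_ACTIVE - P_IDLE) * SLOT is our reading of expressions (*) and (**), not a verbatim copy of the paper's elided formulas, and t2do_time_unit is a hypothetical helper.

V = 40000.0   # control parameter V trading energy against queue backlog

def t2do_slot_decision(Q, mu, V):
    # Pick the (queue, channel) pair with the largest positive weight, or stay idle.
    best, best_weight = None, 0.0
    for n in range(N):
        for k in range(K):
            # Backlog-weighted service gain minus the V-scaled extra cost of being active.
            weight = Q[n] * mu[n, k] - V * (P_ACTIVE - P_IDLE) * SLOT
            if weight > best_weight:
                best, best_weight = (n, k), weight
    return best   # None means deferring transmission saves energy in this slot

def t2do_time_unit(Q, mu, A):
    # One time unit: decide slot by slot, then update the queues (see earlier sketches).
    schedule = [t2do_slot_decision(Q, mu, V) for _ in range(SLOTS_PER_UNIT)]
    served, energy = service_and_energy(schedule, mu)
    return update_queues(Q, served, A), energy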

4.3. A Practical Instance for T2DO Algorithm

In this section, we take mHealth as an instance to show the process of the T2DO algorithm. mHealth combines wearable body sensors [31] and an MD (i.e., a mobile-based medical monitoring device) to provide more effective and affordable healthcare services. Medical organizations focusing on hypertension, diabetes, geriatrics, or chronic diseases can take care of patients at home instead of in the hospital. However, one of the disadvantages of eHealth is that different medical technologies are often supported by different vendors, which leads to poor interoperability. In addition, dedicated MDs are required, and, constrained by the computational power, storage space, and battery life of existing MDs, it is hard for them to execute resource-intensive applications. In this case, cloud computing, as an IaaS platform, can help offload the eHealth services from MDs to the cloud. This paradigm not only improves the performance and compatibility of MDs but also raises the possibility of providing more accurate offsite personalized medical diagnosis and treatment [32].

In MCC-based mHealth, as shown in Figure 2, the MD first connects to physiological body sensors to collect data such as blood pressure, temperature, heart rate, and electrocardiograph readings. Then, the MD manages all sensing data acquired from the wearable body sensors and uploads the physiological data to the cloud. Finally, the cloud stores and processes these physiological data. Once the diagnostic decision on the cloud is finished, the treatment plan is sent back to the MD. In this setting, our T2DO algorithm can reduce the energy consumed in uploading data from the MD to the cloud, which is the major energy consumption on the MD for mHealth.

Next, we show how the T2DO algorithm reduces the energy consumption of uploading for mHealth. Here, we employ a mobile agent as the entity that manages and uploads the physiological data; the mobile agent is thus the executor of our T2DO algorithm. When the MD obtains physiological data from the wearable body sensors, the mobile agent first estimates the queue backlog of these physiological data and monitors the channel states between the MD and the cloud at the beginning of time unit t. Second, the mobile agent chooses queues to transmit data so as to balance the queue lengths among all queues. Third, the mobile agent decides whether the data of those queues should be transmitted in the current time unit and how much of that data can be transmitted, based on expression (*). Finally, according to the amount of transmitted and arriving data, the mobile agent updates the queues based on the queueing dynamics. Following this process, the T2DO algorithm saves energy on the MD by postponing data transmission when the channel states are poor.

In this paper, we take only mHealth as an example to show how our T2DO algorithm reduces the energy consumption of uploading. In fact, the T2DO algorithm can also be applied to other mobile applications such as mobile commerce, mobile learning, and mobile office.

5. Performance Analysis

According to [7], we characterize the optimal time-average energy consumption with the lemma below, which can be achieved by a stationary randomized algorithm that stabilizes the queues.

Lemma 3. For any data arrival rate vector within the capacity region Λ of the system and any time unit t, there exists a stationary randomized control policy that chooses an appropriate queue to transmit its data packets in each slot, stabilizes all queues, and attains the minimum time-average energy consumption, where Λ denotes the capacity region of the system.

Lemma 3 shows that, by using a stationary randomized algorithm, it is possible to achieve the minimum time-average energy consumption for a given data arrival rate vector.

Based on Theorem 2 and Lemma 3, we derive a theorem that bounds the time-average energy consumption and queue backlog achieved by the T2DO algorithm.

Theorem 4. Suppose there exists an ε > 0 such that the arrival rate vector increased by ε in every component still lies within the capacity region; then, under the T2DO algorithm, the time-average energy consumption and queue backlog satisfy the bounds in (14) and (15), where 1 denotes the vector of all 1's and e* and e_max are the optimal and maximum energy consumption of a stationary randomized control, respectively.
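In the standard drift-plus-penalty analysis, bounds of this type take the following form; this is a reconstruction under the usual assumptions (B is the constant from Theorem 2 and ε the capacity-region slack), not a verbatim copy of the paper's (14) and (15):

\limsup_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{n=1}^{N}\mathbb{E}\big[Q_n(\tau)\big] \;\le\; \frac{B + V\,(e_{\max}-e^{*})}{\epsilon},
\qquad
\limsup_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}\big[e(\tau)\big] \;\le\; e^{*} + \frac{B}{V}.

Under this form, the time-average energy approaches the optimum as O(1/V) while the time-average backlog grows as O(V), consistent with the statement of Theorem 4 and the discussion following its proof.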

Proof. Since the arrival rate vector lies strictly inside the capacity region, it follows from Lemma 3 that there exists a stationary and randomized policy that achieves (16), where e* is the optimal energy consumption corresponding to that rate vector. Substituting (16) into (8) yields (17). Taking the expectation of (17) over the queue backlogs gives (18). Summing (18) over time, we obtain (19). Then, using the nonnegativity of the Lyapunov function and the finiteness of its initial value, we obtain (20). Dividing both sides of (20) by the elapsed time gives (21), and taking a lim sup as the time horizon goes to infinity proves (14).
Moreover, based on (19), using the nonnegativity of the Lyapunov function and the boundedness of the energy consumption, we can derive (22). Summing (22) over time and dividing both sides by the elapsed time gives (23). Taking a lim sup as the time horizon goes to infinity, applying Lebesgue's dominated convergence theorem, and then taking the appropriate limit, we obtain inequality (15).

Equation (15) demonstrates that the energy consumption can be made arbitrarily close to the optimum by choosing an arbitrarily large V (i.e., it approaches the optimum within an O(1/V) deviation). At the same time, according to (14), the proposed algorithm incurs an O(V) average queue backlog. Thus, by appropriately selecting the control parameter V, we can achieve a desired tradeoff between energy consumption and queue backlog.

From the perspective of the algorithm's performance, choosing the value of the parameter V is very important. The following theorem gives the range of admissible values of V.

Theorem 5. Assume that μ_{n,k}(t) is the service rate of application n supported by channel k, k = 1, ..., K, and that e(t) is the energy consumption of offloading an application during time unit t. Then the admissible values of V form an interval whose upper limit is determined by the queue backlog, the best-channel service rate, and the per-unit energy consumption.

Proof. Suppose the MD transmits the data of some queue n in time unit t with a certain service rate. According to the T2DO algorithm, the transmission decision satisfies (24). Let some channel be in the best state, with the corresponding maximum service rate. If the V-scaled energy cost outweighs the backlog-weighted service gain, the MD may defer the transmission to save energy; otherwise, the MD transmits data. In this case, (24) implies (25). From (25) we can derive the upper bound on V, and, by the Lyapunov optimization approach, V must be nonnegative. Thus, we may safely conclude that the reasonable range of values allowed for V is the interval between these two bounds.

6. Simulation Results

In this section, we take mHealth as the data uploading scenario to evaluate the performance of the proposed algorithm, as shown in Figure 3. Each MD uploads physiological data to the cloud over four time-varying channels. According to [27], the powers of the MD in the active and idle states are set to 1.680 W and 0.594 W, respectively. The average service rate of each channel is set to 1000 Kbit/s. The time unit is set to 1 s and the slot to 0.1 s. We assume the data arrival rate of each application equals 800 Kbit/s. The simulation time is set to 1000 s. Each application is assumed to have an empty queue at the first time unit.
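A compact simulation skeleton broadly consistent with these settings is sketched below, reusing the helpers from the earlier sketches; the i.i.d. per-time-unit scaling of the channel rate is our own simplifying assumption, since the paper does not specify the channel-state distribution.

SIM_SECONDS = 1000               # simulation time of 1000 s, i.e., 1000 time units
MEAN_RATE = 1000.0 * SLOT        # 1000 Kbit/s average channel rate expressed per slot

def run_simulation(V_value):
    global V
    V = V_value
    Q = np.zeros(N)
    total_energy, history = 0.0, []
    for _ in range(SIM_SECONDS):
        # Channel states vary per time unit; modeled here as i.i.d. scaling of the mean rate.
        mu = MEAN_RATE * rng.uniform(0.2, 1.8, size=(N, K))
        Q, energy = t2do_time_unit(Q, mu, arrivals())
        total_energy += energy
        history.append(Q.copy())
    return total_energy / SIM_SECONDS, time_average_backlog(history)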

We first characterize the tradeoff between energy consumption and queue backlog for the T2DO algorithm. Figure 3 plots the tradeoff between energy consumption and queue backlog for data transfer. It is clear that the energy consumption falls quickly at first and then descends slowly, while the time-average queue backlog grows linearly with V. Hence, the parameter V controls the energy-delay tradeoff of the T2DO algorithm. The results in Figure 3 are consistent with Theorem 4. In particular, there exists a sweet spot of V (e.g., around the value used in the following experiments), beyond which increasing V yields little additional energy saving yet significantly increases the average queue backlog.

Next, with V fixed to 40000, we vary the data arrival rate from 0.3 Mbit/s to 0.6 Mbit/s. Figure 4 compares the energy consumption of our T2DO algorithm with the random, SALSA, and minimum-delay algorithms. Compared with the random and minimum-delay algorithms, the proposed T2DO algorithm consumes noticeably less energy. The reason is that our T2DO algorithm can find a channel in good condition on which to transmit the data by postponing the communication for a while. Furthermore, the proposed T2DO algorithm provides better performance than the SALSA algorithm. This is because, in the proposed algorithm, the user can transmit data over any channel with better channel conditions, while in the SALSA algorithm the user is supported only by a dedicated channel.

Figure 5 compares the average queue backlog of the four algorithms. The average queue backlog of the proposed T2DO algorithm is larger than that of the random and minimum-delay algorithms, which confirms that the T2DO algorithm trades delay for energy. Furthermore, the proposed T2DO algorithm outperforms the SALSA algorithm in queue backlog. This is because the T2DO algorithm effectively increases the service rate by dynamically choosing appropriate channels for data transmission in each time unit, and a higher service rate results in a lower queue backlog.

Finally, we compare the energy efficiency of the T2DO algorithm with that of the random, SALSA, and minimum-delay algorithms. For ease of comparison, we define a performance metric η, the average energy consumption per unit of transmitted data, obtained by dividing the total energy consumed by the total amount of data delivered. The data arrival rate is fixed to 0.9 Mbit/s, the simulation time is set to 1000 s, and we vary V from 0 to 100000. Figure 6 compares the values of η under the four algorithms. It shows that the random and minimum-delay algorithms consume about 1.5 joules per Mbit of transmitted data. The η of the T2DO and SALSA algorithms is lower than that of the random and minimum-delay algorithms: since T2DO and SALSA are designed according to Lyapunov optimization, they can wait for good channel conditions by postponing communication for a while. It is also interesting to note that the η of the T2DO and SALSA algorithms tends to decrease as the parameter V increases, and that the T2DO algorithm provides better performance than SALSA. The reason is that the parameter V controls the tradeoff between energy consumption and queue backlog, so the energy consumption decreases as V increases. Moreover, the T2DO algorithm effectively reduces the queue backlog by transmitting data over whichever channel has better conditions; thus, for the same parameter V, our T2DO algorithm achieves a lower energy consumption per unit of data than the SALSA algorithm.
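As a small complement, the metric can be computed from the outputs of such a simulation as follows; dividing total energy by total delivered data is our reading of the elided definition (Figure 6 reports results in joules per Mbit).

def energy_per_mbit(total_energy_joules, total_delivered_kbit):
    # eta: average energy consumed per Mbit of data uploaded to the cloud.
    return total_energy_joules / (total_delivered_kbit / 1000.0)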

7. Conclusion

In this paper, we investigated the energy-efficiency problem of uploading data when applications are offloaded over time-varying channels in MCC. Using Lyapunov optimization, we presented a T2DO algorithm that achieves a tradeoff between energy consumption and queue backlog for offloading. The T2DO algorithm first allocates data to the appropriate queue and then chooses appropriate channels to transmit the data packets based on the queue backlog and channel statistics in every time unit. Moreover, we took mHealth as an example to illustrate the process of the proposed algorithm in a real application. The simulation results demonstrate the correctness and effectiveness of the proposed algorithm.

Nevertheless, this study leaves some practical issues open. For instance, the delay constraint is another key concern for mobile applications, besides computational performance and energy efficiency, and different mobile applications have different delay requirements. In follow-up work, we will characterize the energy consumption caused by uploading various applications and explore alternative ways to reduce transmission energy while meeting their delay requirements. We also plan to evaluate the performance of our T2DO algorithm by running real mobile applications, such as mHealth, mobile office, mobile commerce, and mobile learning.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61173017 and the National High-Tech R&D Program under Grant no. 2014AA01A701.