Abstract

Mobile edge computing (MEC) has delivered impressive results for computationally intensive mobile applications by offloading computation to a nearby server to limit the energy usage of user equipment (UE). However, choosing which pool of application components to offload, given the volume of data to transfer and the communication latency, is an intricate problem. In this article, we introduce a novel energy-efficient offloading scheme based on deep neural networks. The proposed scheme trains an intelligent decision-making model that picks a robust pool of application components. The selection is based on factors such as the remaining UE battery power, network conditions, the volume of data transfer, the energy required by the application components, communication delays, and computational load. We design a cost function that captures all of these factors, compute the cost of every conceivable combination of component offloading decisions, pick the robust decisions over an extensive dataset, and train a deep neural network as a substitute for the associated exhaustive computations. Simulation results illustrate that our proposed scheme performs well in terms of accuracy, root mean square error (RMSE), mean absolute error (MAE), and UE energy usage.

1. Introduction

Nowadays, technology is advancing rapidly, along with numerous applications. Future wireless communication encompasses a variety of application scenarios such as augmented reality and cognitive assistance. Billions of smart devices are equipped with considerable computational resources and memory but are also required to perform an ever larger number of tasks. To tackle these challenges, researchers proposed a substantial solution, namely, mobile cloud computing (MCC). While providing a solution, it also imposes an extra load on backhaul and radio mobile networks, which results in high or variable latency to remote data centers. The concept of mobile edge computing (MEC) therefore emerged, which offers users nearby storage and computing resources. It also reduces the load on network resources, the energy consumption of user equipment (UE), and the network delay. Within MEC, resource allocation and offloading approaches directly affect the performance of the framework, which has become a research trend recently.

In [1], a two-phase traffic distribution method was proposed to jointly optimize mobile edge server (MES) computing resources and channel bandwidth in order to minimize latency. In [2], a game-theoretic algorithm was offered to jointly optimize channel bandwidth and MES computing resources to minimize energy consumption and overall time. Within MEC, enhancing the performance of mobile applications mainly relies on effective task offloading decisions [3–6]. Thus, offloading decision-making has shown promising results over the past few years [7–12]. The authors in [11] considered the capacity limitations of backhaul links along with the real-time and maximum delay constraints of users and presented an offloading strategy to minimize the overall network energy consumption. In [12], the authors presented an energy-aware computation offloading strategy that weighs energy consumption against time delay. The authors also incorporated the residual battery energy of the smart device into the weighting factor between delay and energy consumption, decreasing the consumption of the entire system. Both approaches mentioned above neglected the allocation of computing resources and the limited spectrum. In [13], the authors presented an adaptive resource allocation and task offloading system for MEC. The proposed algorithm utilized a deep reinforcement learning (DRL) technique to decide whether or not a task should be offloaded and to allocate computing resources for that task. However, this technique has drawbacks such as difficult-to-tune parameters and long training time. For the scenario with multiple resources, [14] proposed a particle swarm optimization task scheduling algorithm aimed at minimizing the energy consumption of edge terminal equipment across multiple resources. In [15], a privacy-aware computation offloading algorithm based on Lyapunov optimization theory is proposed. The authors in [16] considered deep learning task offloading and proposed a group sparse beamforming structure based on mixed L1/L2 norms to deploy deep learning applications while improving network energy consumption.

While there is a range of literature on computation offloading for single-user single-cell MEC frameworks, the vast majority of earlier machine learning (ML)-based offloading works presume either an infinite quantity of accessible communication and computation resources in the cloudlet or coarse-grained computation offloading [1, 2]. This motivates us to propose a new offloading method that utilizes deep neural networks to achieve energy efficiency. The proposed approach trains an intelligent decision-making model that selects a reliable set of application components based on various factors, including remaining battery power, network conditions, data transfer volume, energy requirements of the components, communication delays, and computational load. To accomplish this, a cost function is developed that considers all these factors, and then the most robust offloading decisions are chosen from a large dataset. Finally, a deep neural network is trained to serve as a more efficient substitute for the exhaustive computations required by the cost function.

The main contributions of this article are the following:
(i) To deal with the offloading issue, an effective technique is proposed that selects an optimal subset of components to offload to the MES. The proposed technique calculates the cost of executing a component on both the MES end and the local end; the cost is a decision-dependent variable used to find the best offloading procedure for the particular states of a component.
(ii) An efficient computation offloading system based on a supervised feed-forward architecture has been created, and a random offloading scheme (ROS) and a total offloading scheme (TOS) are employed to evaluate the cost consumption and accuracy rate.
(iii) The performance of our proposed strategy is demonstrated through numerical simulations, which validate that it achieves the lowest cost compared to alternative methods and exhibits the lowest slope curve under a parameter-constrained mathematical model.

The rest of this paper is organized as follows: Section 2 outlines the related works. Section 3 discusses the proposed model and methodology, followed by the algorithm simulation and experimental results presented in Sections 4 and 5. Finally, the "Conclusion" section concludes this paper.

2. Related Works

Several methods have been presented to handle mobile offloading problems in dynamic situations. The approaches presented in the literature are based on optimization methods that aid in allocating MEC resources [17, 18]. Nevertheless, an MEC-based system is typically quite complex and is sometimes difficult to describe in a mathematical formulation. Similarly, the optimization problem is primarily expressed based on a snapshot of the system and must be reformulated whenever the situation eventually changes. Besides, most conventional optimization approaches require a large number of iterations to find a local optimum, let alone the global optimum.

Even though mobile cloud computing (MCC) attempts to push the limits of mobile applications by using centralized resources to accomplish computational offloading, mobile edge computing (MEC) goes further by placing a key portion of remote operations directly on nearby infrastructure. These resources, typically situated at the logical edge of a network, might consist of LTE base stations where routers provide shared resources [19]. Mitsis et al. [20] presented a usage-based pricing mechanism and a user risk-based, behavior-aware data offloading decision-making scheme in a UAV-assisted MEC system. Given the pricing mechanism, prospect-theoretic utility functions are formulated to capture users' decision-making behaviors. Moreover, the theory of the tragedy of the commons is utilized to model the UAV's resource utilization. Each participant in the game is expected to maximize its utility function in a noncooperative manner. However, due to the growing number of UAVs, network resource management faces challenges such as power control, spectrum allocation, and task allocation.

Deep learning [12] has achieved astonishing performance. It has surpassed traditional machine learning techniques across virtually all areas of artificial intelligence, including speech recognition [21], computer vision [22], natural language processing [23], and so on. Compared with traditional machine learning-based offloading techniques, such as the offloading scheme based on the Markov decision process in [2], deep learning offers two advantages: (i) remarkably high achievable accuracy in decision-making and (ii) very fast computation at test time once the model is trained.

Reinforcement learning is another prominent method suitable for distributed decision-making. In this technique, autonomous agents acquire the most appropriate action using the penalties and rewards obtained during each round of play. Since agents do not know in advance which action is best to take, they learn by balancing exploration of untried actions with exploitation of the knowledge gathered from actions already taken. Simply put, agents use trial and error to maximize their payoff over the horizon. Prominent reinforcement learning approaches include learning automata, Q-learning, and Roth-Erev. Reinforcement learning tactics are particularly well suited to learning in the minority game (MG) [24], since such approaches can adapt to the joint action of other agents in the presence of information deficiency [25].

Moreover, in the MEC literature, a considerable amount of research is available on mitigating the transmission latency issue and offering more optimized system performance. In [26], an RL framework based on a deep Q-learning approach was utilized to estimate the action-value function. The authors also provided a strategy to obtain the overhead-aware optimal computation offloading. Each user can learn through interactions with the surrounding environment and then approximate its performance in the form of a value function. The user can then make an overhead-aware choice between edge computing and local computing according to its condition. Another study also utilized a reinforcement learning-based technique called deep Q-network [27]. The key contribution of that work was to automatically learn the offloading decision to improve system performance and greatly decrease latency and energy consumption.

In [28], the authors studied long-term throughput maximization for a multicell multiuser MEC framework. Rather than focusing solely on the two key issues of energy and latency minimization, a novel strategy is presented from the service provider's perspective to improve system-wide throughput under latency limits by jointly considering user agreement and the allocation of communication and computing resources. Additionally, the Markov decision process (MDP) is applied to model the queuing conditions of mobile devices as well as MEC servers.

In addition to latency and energy issues, interest in computation offloading itself has risen. In [28], a distributed deep learning-based offloading (DDLO) algorithm is presented to alleviate the offloading problem. This approach uses multiple parallel DNNs to produce offloading decisions; a shared replay memory stores newly produced offloading decisions, which are then used to train and improve all DNNs. The authors of [29] presented an MEC-enabled long-term evolution (LTE) framework and analyzed the impact of various vehicular communication modes on the performance of task offloading. They employed the widely used deep Q-learning method to select the optimal target MEC server, helping to maximize the utility of the offloading scheme subject to specified delay limits. Further, they suggested an effective redundant offloading algorithm to improve task offloading reliability.

Finally, this motivates us to design a flexible deep learning-based offloading technique. In our approach, we assume that mobile users remain stationary while offloading mobile tasks to edge devices, so the communication between edge devices and mobile users is always consistent; therefore, we do not include user mobility in our approach. The network scenario becomes considerably more challenging under high user mobility.

3. Methodology

The execution of an application can be divided into several stages. Each stage, together with its associated data, forms a component of the application. A component can be installed either on the mobile edge server or on the local end. An effective offloading method should choose an optimal subset of components to offload to the MES rather than offloading everything. The proposed approach consists of the following key stages: first, calculating the execution cost of a component on the MES end and on the local end, respectively; second, defining the cost function of an offloading procedure, where the value of the cost depends on the offloading decision; third, finding the best offloading procedure for particular component states using an exhaustive method. The best offloading procedures and their corresponding component states are then taken as the outputs and inputs of our training dataset, respectively. Lastly, a deep neural network is trained on this dataset so that the best offloading scheme can be obtained for any component states. Moreover, the Abbreviations section lists all the notation used in this section.

3.1. Implementation of Local End Cost

The local end cost comprises execution time and energy consumption. The work in [30] showed that the execution time can be computed from the amount of input data required by a single component. However, that approach neglected the fact that the processed data and the input data are not identical in size. If we assume that the output data of a component becomes the input data of the next component, then the amount of work of a component can be expressed in terms of its input data size, where the work is measured in CPU clock cycles and the number of clock cycles the microprocessor needs to process each byte is measured in cycles per byte; this parameter was studied in [31]. A data augmentation factor is also introduced, because a single component may process the same input data several times, so the processed data and the input data differ in size. Consequently, if a component is installed and executed on the local UE end, its execution time is the time required to complete this amount of work, obtained by dividing the work by the CPU rate of the UE, measured in million instructions per second (MIPS). The energy consumed by this amount of work is obtained by multiplying the work by the UE's unit power consumption per CPU cycle, measured in mAh. Given the total energy of the UE, the remaining energy available for the next component can then be calculated by subtracting the energy already consumed.

Once the energy usage and execution time have been computed as above, the local end execution cost of a component is obtained as their weighted sum, where the two weight coefficients balance the energy usage and the time delay in the local end cost function, respectively.
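To make this model concrete, the following minimal Python sketch implements one plausible reading of the local-end cost. The symbol names (gamma for cycles per byte, mu for the data augmentation factor, f_ue for the UE CPU rate, p_ue for per-cycle power, and lambda1/lambda2 for the energy and time weights) and the toy values are illustrative assumptions, not the paper's exact notation.

```python
# Illustrative sketch of the local-end cost model described above. All symbol
# names and default values are assumptions for demonstration, not the paper's.

def local_cost(d_in, gamma, mu, f_ue, p_ue, lambda1, lambda2, e_remaining):
    """Return (cost, time, energy, remaining energy, output size) for executing
    one application component locally on the UE."""
    work = gamma * d_in              # CPU cycles needed: (cycles per byte) * input bytes
    d_out = mu * d_in                # output size after applying the augmentation factor
    t_local = work / f_ue            # execution time = work / UE CPU rate
    e_local = work * p_ue            # energy = work * per-cycle power consumption
    e_left = e_remaining - e_local   # energy left for the next component
    cost = lambda1 * e_local + lambda2 * t_local   # weighted sum of energy and time
    return cost, t_local, e_local, e_left, d_out


if __name__ == "__main__":
    # Toy numbers chosen only to exercise the function.
    cost, t, e, e_left, d_out = local_cost(
        d_in=2e6, gamma=500, mu=1.2, f_ue=1e9, p_ue=1e-10,
        lambda1=0.5, lambda2=0.5, e_remaining=10.0)
    print(f"cost={cost:.3f}, time={t:.3f}s, energy={e:.3f}, remaining={e_left:.3f}")
```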

3.2. Implementation of MES End Cost

Besides local execution, the user equipment can also offload a component to a remote end, such as the MES, for execution. Similar to the local end, the MES end also incurs an execution time, but it is much smaller than the local execution time; it can be computed in the same way as the local execution time, using the MES CPU rate instead. The time spent transferring data from the UE to the MES must also be kept in mind. This time is determined by the UE's mobile Internet environment; at this stage, we consider only the most commonly used 4G environment, which is deployed with orthogonal frequency division multiple access (OFDMA) technology. With this kind of access, the download and upload speeds depend on the number of transmission subcarriers and the bandwidth. Assuming the same additive white Gaussian noise (AWGN) channel is used for uplink and downlink, the achievable uplink and downlink data rates can be computed as in [32] from the bandwidth, the path loss exponent, the distance between the MES and the UE, the number of subcarriers allocated to the UE-to-MES transmission, the noise power, the MES and UE transmission powers, the channel fading coefficients of the downlink and uplink, the required bit error rates of the downlink and uplink, and the SNR margin needed to obtain the target bit error rate with quadrature amplitude modulation. Using these rates, the time taken for the UE to send the component input data to the MES can be obtained; it also depends on the offloading decision of the previous component, which determines whether this transfer is needed for the execution of the current component.

If the previous component was executed on the MES, then its output, which is the input of the current component, does not need to be transferred between the MES and the UE, so the time consumed is zero. If, on the other hand, the previous component was executed locally, the data transfer is required. Likewise, once the execution of a component is completed, its output has to be sent back to the UE if the next component is set to execute locally, whereas no communication is necessary if the next component is set to execute on the MES. Finally, the time consumed for the UE to receive the component output data from the MES is obtained accordingly.

Lastly, the cost of MES end execution is obtained as a weighted sum of the execution time and the uplink and downlink transfer times, where the weights balance the respective time components.
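A hedged Python sketch of the MES-end cost follows. The rate formula is a generic OFDMA/AWGN, Shannon-style approximation in the spirit of [32]; all parameter names, the exact functional form, and the toy values are assumptions made only for illustration.

```python
import math

# Illustrative MES-end cost sketch. The rate model is a generic OFDMA/AWGN
# approximation; the exact notation and constants are assumptions, not the paper's.

def link_rate(n_sub, bandwidth, tx_power, fading, distance, alpha, noise, snr_margin):
    """Achievable data rate over n_sub subcarriers (Shannon-style, bits/s)."""
    snr = tx_power * fading * distance ** (-alpha) / (snr_margin * noise)
    return n_sub * bandwidth * math.log2(1.0 + snr)

def mes_cost(d_in, d_out, work, f_mes, prev_on_mes, next_on_mes,
             r_up, r_down, w_exec, w_up, w_down):
    """Weighted MES-end cost: execution time plus conditional transfer times."""
    t_exec = work / f_mes                              # execution time on the MES
    t_up = 0.0 if prev_on_mes else d_in / r_up         # upload only if previous ran locally
    t_down = 0.0 if next_on_mes else d_out / r_down    # download only if next runs locally
    return w_exec * t_exec + w_up * t_up + w_down * t_down

if __name__ == "__main__":
    # Toy example with data sizes in bits.
    r_up = link_rate(n_sub=16, bandwidth=180e3, tx_power=0.2, fading=1.0,
                     distance=100.0, alpha=3.0, noise=1e-13, snr_margin=2.0)
    r_down = link_rate(n_sub=16, bandwidth=180e3, tx_power=1.0, fading=1.0,
                       distance=100.0, alpha=3.0, noise=1e-13, snr_margin=2.0)
    c = mes_cost(d_in=2e6, d_out=2.4e6, work=1e9, f_mes=10e9,
                 prev_on_mes=False, next_on_mes=False,
                 r_up=r_up, r_down=r_down, w_exec=1.0, w_up=1.0, w_down=1.0)
    print(f"uplink={r_up/1e6:.2f} Mbit/s, downlink={r_down/1e6:.2f} Mbit/s, cost={c:.3f}")
```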

3.3. Design a Cost Function

A component can be executed either remotely or locally, and equations (5) and (11) give the corresponding costs, respectively. To formalize the offloading cost function efficiently, we introduce the cost of a single component as a decision-dependent combination of the two.

The cost of an offloading procedure is then the sum of the costs of all executed components.

Suppose the offloading decisions of all components form a decision vector, where the total number of components is fixed. Our method then aims to find the decision vector that minimizes the total cost above.
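The sketch below illustrates this minimization as a brute-force enumeration over all binary decision vectors (0 = local, 1 = MES). The per-component cost here is a hypothetical stand-in for the real costs of equations (5) and (11); only the search structure is the point of the example.

```python
from itertools import product

# Sketch of the exhaustive minimization implied above: enumerate every offloading
# decision vector and keep the cheapest one. component_cost() is a placeholder.

def component_cost(decision, prev_decision, condition):
    """Hypothetical per-component cost; replace with the real local/MES costs."""
    d_in, comp_id, distance, bandwidth = condition
    if decision == 0:                                # local execution
        return 1.0 * d_in / 1e6
    transfer = 0.0 if prev_decision == 1 else d_in / bandwidth
    return 0.2 * d_in / 1e6 + 0.001 * distance + transfer

def total_cost(decisions, conditions):
    cost, prev = 0.0, 0                              # input data initially resides on the UE
    for a, cond in zip(decisions, conditions):
        cost += component_cost(a, prev, cond)
        prev = a
    return cost

def best_decision(conditions):
    n = len(conditions)
    return min(product((0, 1), repeat=n), key=lambda d: total_cost(d, conditions))

if __name__ == "__main__":
    conditions = [(2e6, i, 120.0, 5e6) for i in range(6)]   # toy conditions
    d = best_decision(conditions)
    print("optimal decisions:", d, "cost:", round(total_cost(d, conditions), 4))
```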

4. Simulation of Algorithm

To compute the efficient offloading model described above, a deep ANN structure is used in this article. The most important step is generating the training dataset for our ANN model. The steps to collect the training dataset are given below:
(i) The offloading decision of a component depends on its condition; thus, the component conditions and the related offloading decisions must be the inputs and outputs of the ANN, respectively.
(ii) A component possesses a condition describing its current mobile environment, consisting of the amount of input data, the component number, the distance between the MES and the UE, and the bandwidth.
(iii) Suppose there are n components in total, and we randomly produce a condition for every component, so we obtain n different conditions. Each component has two offloading choices; therefore, there are 2^n distinct offloading states. Applying the exhaustive approach, we compute the cost of each state and choose the best one, which minimizes equation (17).
(iv) If the ANN is trained accurately and we give the conditions as input, it should output an accurate offloading scheme.
(v) We repeat the procedure of step (iii) a number of times; we thereby obtain different sets of conditions and the related best offloading procedures, which form the rows of the training data (see the code sketch below).
Each training data row consists of the conditions of all components together with the preferred robust component offloading scheme. Hence, the input is composed of 4n features, since each condition consists of 4 condition features, and the number of units of the input layer is set accordingly. Expanding equation (16) according to these conditions gives the corresponding labels.

For instance, the first element of a row is the condition of component 1, which is randomly generated in the corresponding iteration of step (v), and the offloading scheme obtained in the same iteration serves as the label.
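The following self-contained sketch shows one way the training data could be generated along the lines of steps (i)-(v): random 4-feature conditions per component, exhaustive labelling, and one flattened row per iteration. The cost function, feature ranges, and sizes are toy assumptions standing in for the paper's cost of equation (16).

```python
import numpy as np
from itertools import product

# Sketch of the training-data generation in steps (i)-(v): random conditions per
# component, exhaustive labelling, one flattened row per iteration. cost_of() is
# a toy placeholder for the real cost function.

rng = np.random.default_rng(0)
N_COMPONENTS = 6          # n components, each with a 4-feature condition
N_ROWS = 1000             # number of training rows to generate

def cost_of(decisions, conditions):
    """Toy stand-in cost: offloading is cheap for large inputs but pays a
    distance/bandwidth penalty. Replace with the real cost function."""
    cost = 0.0
    for a, (d_in, _, dist, bw) in zip(decisions, conditions):
        cost += (0.2 * d_in / 1e6 + dist / bw) if a else d_in / 1e6
    return cost

def optimal_decisions(conditions):
    return min(product((0, 1), repeat=len(conditions)),
               key=lambda d: cost_of(d, conditions))

def random_conditions():
    # (input size, component id, UE-MES distance, bandwidth) per component
    return [(rng.uniform(1e5, 5e6), i, rng.uniform(10, 300), rng.uniform(1e6, 2e7))
            for i in range(N_COMPONENTS)]

X, Y = [], []
for _ in range(N_ROWS):
    conds = random_conditions()
    X.append(np.ravel(conds))                 # 4*n input features per row
    Y.append(optimal_decisions(conds))        # n binary labels per row
X, Y = np.asarray(X, dtype=np.float32), np.asarray(Y, dtype=np.float32)
print(X.shape, Y.shape)                       # (1000, 24) and (1000, 6)
```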

The feed-forward architecture is shown in Figure 1 and consists of fully connected layers. When we feed the training dataset into our ANN, we set a fixed batch size, so each batch is a set of component conditions. Although the training data is generated from a limited number of conditions, the pretrained ANN is capable of predicting the robust offloading decision for any set of conditions.
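A minimal sketch of such a network is given below. The 4n-input, two 128-unit hidden layer, n-output structure follows the description in this section and in Section 5; the choice of PyTorch, the Adam optimizer, the binary cross-entropy loss, and all hyperparameters are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative fully connected feed-forward network: 4*n condition inputs,
# two hidden layers of 128 units, and n sigmoid outputs (one offloading
# decision per component). Hyperparameters are assumptions.

N_COMPONENTS = 6
model = nn.Sequential(
    nn.Linear(4 * N_COMPONENTS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_COMPONENTS), nn.Sigmoid(),   # per-component offload probability
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train(X, Y, epochs=50, batch_size=64):
    """X: (rows, 4n) flattened conditions, Y: (rows, n) optimal 0/1 decisions."""
    dataset = TensorDataset(torch.as_tensor(X), torch.as_tensor(Y))
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()
    return model

def predict(conditions_row):
    """Threshold the sigmoid outputs to obtain a 0/1 offloading decision vector."""
    with torch.no_grad():
        return (model(torch.as_tensor(conditions_row).float()) > 0.5).int().tolist()
```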

Moreover, deep learning is applied to the offloading policies, where computations are managed according to the data flow in the network. The deep model acts as a smart agent based on the data provided. Data can be lost over low-resource communication channels; however, the loss function and stochastic gradient descent can cope with this problem with only a negligible loss of accuracy.

4.1. Computational Complexity

The proposed method is chosen to address the energy usage of application components, communication delays, and computational load, and its main benefit is low complexity compared to benchmark solutions, e.g., the random and total offloading schemes. In particular, the complexity of random offloading grows exponentially, on the order of 2^n, as the number n of fine-grained components of an application grows, whereas the complexity of the proposed method is essentially determined by the number of neurons in a hidden layer, which indicates the scale of the learning model.
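The back-of-the-envelope comparison below illustrates the gap: evaluating every offloading combination requires 2^n cost evaluations, whereas a single forward pass through a network with two 128-unit hidden layers (the size described in Section 5) costs a roughly linear number of multiply-accumulate operations in n. The operation counts are coarse estimates for illustration only.

```python
# Toy illustration of the complexity argument above. Exhaustively evaluating all
# offloading combinations costs 2**n evaluations, while one DNN forward pass with
# two 128-unit hidden layers costs roughly (4n*128 + 128*128 + 128*n) MACs.

for n in (4, 8, 16, 24):
    exhaustive = 2 ** n                                    # decision combinations
    forward_pass = (4 * n) * 128 + 128 * 128 + 128 * n     # approximate MAC count
    print(f"n={n:>2}: exhaustive={exhaustive:>10,}  dnn_forward~{forward_pass:,}")
```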

5. Experiment Settings

We performed the experiments with an NVIDIA GTX TITAN Xp GPU. We randomly produced conditions for every component to obtain the training dataset. We trained with different batch sizes, namely, 16, 32, 64, 128, 256, 512, and 1020; the average batch size is 100 MB. We assume that mobile users remain stationary while offloading mobile tasks to edge devices, so that the communication between edge devices and mobile users is always consistent; therefore, we did not include user mobility in our approach, noting that the network scenario becomes more challenging when user mobility is greater.

We chose three offloading strategies based on the literature and assessed their efficiency; the three schemes are described below (minimal stand-ins for the two baselines are sketched after this list):
(i) The random offloading scheme (ROS) [33] randomly chooses which components to offload, irrespective of the volume of data transfer needed, the remote and local resources, and the network conditions.
(ii) The total offloading scheme (TOS) [33] is a coarse-grained method that transfers the entire computation to the MES. This policy does not require any offloading decision because all computations are offloaded.
(iii) The deep learning-based offloading policy considers the amount of data transfer needed and the network conditions. It applies a deep neural network with 2 hidden layers and 128 units in each layer.
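The two baseline policies are simple enough to sketch directly; the functions below are our own minimal stand-ins that take the same per-component conditions as the learned model but either ignore them (ROS) or offload everything (TOS).

```python
import random

# Minimal stand-ins for the two baseline policies used in the comparison.

def random_offloading(conditions, seed=None):
    """ROS: each component is offloaded or kept local uniformly at random."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in conditions]

def total_offloading(conditions):
    """TOS: coarse-grained policy that offloads every component to the MES."""
    return [1 for _ in conditions]

if __name__ == "__main__":
    conds = [(2e6, i, 120.0, 5e6) for i in range(6)]
    print("ROS:", random_offloading(conds, seed=42))
    print("TOS:", total_offloading(conds))
```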

Table 1 gives the network parameters applied in this article. Most parameter values were similar to those in [34], but the CPU rates were specified according to the assumption that the MES has a greater CPU rate than the UE. For comparison purposes, we measure the predictive performance as the agreement between each component's predicted offloading policy and its actual optimal offloading policy, averaged over the total number of components. Step (iii) of the previous section demonstrated how to obtain the true optimal offloading policy.
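One plausible reading of this performance measure is the fraction of components whose predicted decision matches the exhaustively computed optimum, as in the short sketch below (the function name is our own).

```python
# Fraction of components whose predicted offloading decision equals the optimum.

def offloading_accuracy(predicted, optimal):
    """predicted, optimal: equal-length 0/1 decision vectors over all components."""
    assert len(predicted) == len(optimal)
    correct = sum(int(p == o) for p, o in zip(predicted, optimal))
    return correct / len(optimal)

print(offloading_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))   # 0.75
```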

Figure 2 depicts the prediction accuracy of our deep model for various sample sizes. It shows that the model's prediction accuracy improves as the number of samples increases. When the number of samples reaches 50, the root mean square error (RMSE) and mean absolute error (MAE) fall below half, which shows that the model can make accurate predictions. However, when the sample size approaches a thousand, the prediction accuracy soon deteriorates, indicating that the model's performance is not proportional to the sparsity. We compared our algorithm's cost consumption and accuracy rate with ROS and TOS, where the accuracy rate is the proportion of components that are correctly offloaded. Figure 3 compares the accuracy rate of our algorithm with ROS and TOS, calculated using training datasets with 500, 1000, and 1500 samples, respectively. We found that as the sample size grows larger, our algorithm's prediction accuracy improves. As a result, given a big sample size, the ANN can create a more accurate offloading scheme.

Figure 3 also shows that our proposed algorithm performs better than the other schemes. ROS and TOS offload either randomly or entirely, which lowers their accuracy rates. In contrast, our proposed scheme improves the accuracy with the help of a deep neural network that takes various conditions into account, such as energy, data volume, and network condition. Considering this multitude of factors has significantly improved the overall accuracy under various conditions, as shown in Figure 3.

The performance of the proposed scheme with respect to energy and time delay is shown in Figure 4. Figure 4(a) shows that our proposed scheme outperforms TOS and ROS for all numbers of samples; it improves the energy consumption by 57% and 87% compared to TOS and ROS, respectively. Similarly, Figure 4(b) compares the time consumption of our proposed model with the other schemes. The higher accuracy rate of our proposed scheme also contributes to a lower time delay: our proposed scheme incurs a lower time delay than TOS and ROS and improves the performance by 26% and 43%, respectively.

Figure 5 depicts the time complexity of our algorithm versus TOS and ROS for different sample sizes. It can be observed from Figure 5 that the time complexity of random offloading grows exponentially as the number of samples grows. In contrast, the complexity of the proposed method grows slowly, as reflected by the gentle slope of its curve. Therefore, our proposed scheme outperforms the others in terms of learning capacity.

Figure 6 shows the cost consumption of our algorithm versus TOS and ROS for various sample sizes. With our technique, the application runs at the lowest cost compared to the alternative techniques. Another noteworthy point is that our method exhibits the lowest slope curve, indicating that its offloading performance degrades only slightly as the prediction scenario becomes more complicated.

The combined effect of energy consumption and time delay, weighted by λ1 and λ2, respectively, is depicted in Figure 7; we call this the cost of consumption. To check the impact of energy and time on performance, different values of λ1 and λ2 are examined. It is observed that for a lower number of samples, the energy weight λ1 reduces the cost of consumption more than the time weight λ2, as can be seen in Figure 7 for 2000 and 4000 samples, where the lower cost of consumption is observed. However, for the remaining sample sizes, the time weight λ2 plays a more significant role in decreasing the cost of consumption than the energy weight λ1. We conclude that both time and energy affect the overall performance and contribute to the cost of consumption in different ways. This property allows user preferences regarding energy and time to be accommodated: for energy-constrained applications, a higher value of λ1 can be used, while for delay-intolerant applications, λ2 can be preferred over λ1. In this way, the variability of users' requirements can be effectively satisfied.

6. Conclusion

In this article, we described a new method to intelligently offload application components to the edge server, applying extensive mathematical modeling and a deep neural network technique. We designed a cost function for executing the application on the user equipment (UE) as well as on the cloudlet server, given the network conditions, accessible computation resources, delays, and energy usage. By considering all these factors in the cost function, the introduced method is highly comprehensive and produces high accuracy in robust decision-making for the offloading issue in MEC. Through a careful evaluation of the cost function, energy usage, and accuracy, we illustrated that our method is more accurate and employs an advanced, state-of-the-art technique. To avoid complex computation and make the decision-making process quicker, we trained deep neural networks on datasets created from the designed mathematical model, in which all the significant features are included in the cost function formula. Our model obtained approximately a 2.3% reduction in the total cost and approximately a 3.1% reduction in energy usage compared with previous techniques. Finally, we obtained high accuracy with a low number of neurons in the deep neural networks.

Abbreviations

:The amount of work of component
:Time required to complete the amount of work
:The energy usage on amount of work
λ1, λ2:Coefficients of energy and time weights, respectively
:Uplink data rate
:Downlink data rate
:The offloading procedure cost function.

Data Availability

The data/numbers are generated by the simulation to check validity. Therefore, no supporting dataset is available.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61932005 and in part by the 111 Project of China under Grant B16006.