Review Article | Open Access
Zhiguo Qu, Yilin Wang, Le Sun, Dandan Peng, Zheng Li, "Study QoS Optimization and Energy Saving Techniques in Cloud, Fog, Edge, and IoT", Complexity, vol. 2020, Article ID 8964165, 16 pages, 2020. https://doi.org/10.1155/2020/8964165
Study QoS Optimization and Energy Saving Techniques in Cloud, Fog, Edge, and IoT
With the increase in users' demands for high quality of service (QoS), more and more efficient service computing models have been proposed. The development of cloud computing, fog computing, and edge computing brings a number of challenges, e.g., QoS optimization and energy saving. We conduct a comprehensive survey on QoS optimization and energy saving in cloud computing, fog computing, edge computing, and IoT environments. We summarize the main challenges and analyze the corresponding solutions proposed by existing works. This survey aims to help readers gain a deeper understanding of the concepts of the different computing models and of the techniques for QoS optimization and energy saving in these models.
With the development of the Internet, more and more computing techniques have been developed, and an increasing amount of data needs to be processed. The growth of users' requirements has driven the development of different types of computing models, such as cloud computing, fog computing, and edge computing.
Cloud computing is an early computing model that has made great contributions to data processing. It provides convenient and quick network access to shared configurable resources, such as networks and servers. In addition, provisioning and releasing these resources requires little administration effort or interaction with service providers. The structure of cloud computing is shown in Figure 1.
Due to the development of the IoT and people's increasing needs, IoT systems based on cloud computing face some limitations: cloud computing cannot perform well under large-scale or heterogeneous conditions. Therefore, a new computing model called fog computing was developed on the basis of cloud computing. Compared with cloud computing, the main advantage of fog computing is that it extends cloud resources to the network edge, which facilitates the management of resources and services. The structure of fog computing is shown in Figure 2.
Edge computing allows operations to be performed at the edge of a network. It encompasses all the computing and network resources between data sources and cloud data centers. In edge computing, the flow of computing is bidirectional, and things at the edge can both consume and produce data; that is, they can not only request services from the cloud but also carry out computing jobs for the cloud. The structure of edge computing is shown in Figure 3.
The most popular embodiment of edge computing is mobile edge computing (MEC), which performs computation-intensive and delay-sensitive tasks for mobile devices. Its underlying idea is to pool the large amount of free computing power and storage resources located at the edge of a network. The European Telecommunications Standards Institute was the first to define MEC as a computing model. MEC provides information technology and cloud computing capabilities at the network edge.
The IoT is created by the diffusion of sensors, actuators, and other devices in communication-driven networks. The development of wireless technologies, such as wireless sensor networks and actuator nodes, promotes the development of IoT technology. As the IoT develops, its applications have gradually expanded to cover increasingly wide domains, but the aim has always been to make computers perceive information.
This paper investigates the important papers related to these computing models. For each paper, we point out the problems it aims to solve and introduce the solutions it proposes. The main contributions of this paper are as follows: (1) we conduct a comprehensive survey on the techniques of QoS optimization and energy saving in different computing models, (2) we classify papers according to the problems solved by the reviewed works, and (3) we compare and summarize the main features of each type of paper. The rest of this paper is organized as follows: Section 2 studies five energy saving techniques under different computing models, and Section 3 concludes this paper.
2. QoS Optimization and Energy Saving Techniques in Different Computing Models
In this section, we introduce the main works of QoS optimization and energy saving techniques in different computing models. We categorize these works in terms of the means they use to achieve the objective of QoS optimization and energy saving, which are (1) quality of service (QoS) guarantee or service-level agreement (SLA) assurance, (2) resource management and allocation, (3) scientific workflow execution, (4) server optimization, and (5) load balancing.
2.1. QoS Guarantee or SLA Assurance
Improving QoS or reducing SLA violations can effectively guarantee the transmission bandwidth, reduce the transmission delay, and reduce the packet loss rate of data. Striking a balance between QoS and limited resources can achieve energy saving.
2.1.1. Cloud Computing
Mazzucco et al.  let cloud service providers obtain the maximum benefit by reducing power consumption. In addition, they introduced and evaluated a policy that dynamically allocates powered-on servers, which optimizes users' experience while consuming the least amount of power. He et al.  proposed a service-based system supporting keyword search, in which different search keywords represent different tasks. This method can help nonprofessional service users build service-oriented systems. Sun et al.  proposed a cloud service selection method to measure and aggregate the nonlinear relationships between criteria, and designed a priority-based framework to determine the criteria relationships and weights when historical information is insufficient. Mazzucco and Dyachuk  were also committed to maximizing cloud service providers' profits. They proposed a dynamic allocation strategy for switching servers on and off. The strategy not only gives users good service but also reduces power consumption. The number of live servers determines the state of the system, but starting or shutting down a server cannot be done in a flash, so the switching time must be taken into account. Given the short time required for a server switch, formula (1)  represents the cost of changing the number of running servers per unit time. To give users a good experience, the paper further uses a forecasting method to accurately predict users' time-varying needs:

\[ \text{cost} = \frac{n(t)}{t} \sum_{j=1}^{l} \left( c_j + k\, e_j \right), \tag{1} \]

where t represents the observation time, n(t) the number of servers whose state changes over that time, c_j the cost of the state change of hardware component j, e_j the energy consumed per unit time to change its state, k the average time to change the state of a server, and l the number of components.
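As a minimal sketch of the reconstructed formula (1), the per-unit-time switching cost can be computed from the per-component costs; the function and parameter names below are ours, not the paper's:

```python
def switching_cost_per_unit_time(n_changes, t, component_costs, component_energies, k):
    # Cost of one server switch: for each hardware component j, a state-change
    # cost c_j plus the energy e_j it draws per unit time, over the average
    # switching duration k.
    per_switch = sum(c + k * e for c, e in zip(component_costs, component_energies))
    # Average the cost of all n(t) state changes over the observation window t.
    return n_changes / t * per_switch

# Example: 4 switches over 2 time units, one component with c=1.0, e=0.5, k=2.0.
cost = switching_cost_per_unit_time(4, 2.0, [1.0], [0.5], 2.0)  # 4.0
```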
Mazzucco et al.  and Mazzucco and Dyachuk  both explore strategies for reducing the power cost of running a data center by changing the on-off state of its servers. Both strategies maximize the users' experience and save energy at the same time. The difference is that Mazzucco and Dyachuk  believe it is impossible to accurately predict how users' needs change over time, so their strategy is fault-tolerant compared with the earlier one. He et al.  proposed three QoS-aware service selection methods that can compose multitenant service-based systems. These three methods achieve three degrees of multitenancy maturity and are more efficient than the traditional single-tenant approach. Sun et al.  proposed a unified semantic model to describe cloud services. The model extends the basic structure of the Unified Service Description Language and defines a transaction module that models the rating system for cloud services from various perspectives, which improves the model's ability to rank services. In addition, an annotation system is put forward to enrich the language's expressiveness. Wang et al.  proposed a fault-tolerant strategy based on multitenant service criticality, which provides redundancy for key component services and evaluates the criticality of each component service to determine the optimal fault-tolerant policy. Therefore, the quality of the multitenant service-based system can be guaranteed. Mustafa et al.  leveraged workload consolidation to improve energy efficiency by placing incoming jobs on as few servers as possible. The concept of SLA is also used to minimize total SLA violations. Since a change in workload changes the required CPU utilization over time, an integral (formula (2) ) represents the total energy E consumed by the operation of a server S:

\[ E = \int_{t_0}^{t_1} P\bigl(u(t)\bigr)\, dt, \tag{2} \]

where P(u) is the amount of power consumed by the server at CPU utilization u at time t.
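The integral in formula (2) can be approximated numerically from a sampled utilization trace; the sketch below uses the trapezoidal rule and an illustrative linear power model (the wattages are our assumptions, not values from the paper):

```python
def server_energy(utilization_trace, power_model, dt):
    """Approximate E = integral of P(u(t)) dt over a sampled CPU-utilization
    trace, using the trapezoidal rule with a fixed sampling interval dt."""
    powers = [power_model(u) for u in utilization_trace]
    return sum((a + b) / 2 * dt for a, b in zip(powers, powers[1:]))

# A common linear model: idle power plus a utilization-proportional term.
energy = server_energy([0.0, 1.0], lambda u: 100 + 150 * u, 1.0)  # 175.0
```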
Bi et al.  first established a self-managing architecture for cloud data centers. The architecture is suitable for multilevel web application services and has a virtualization mechanism. Then, a hybrid queuing model is proposed to decide the number of virtual machines (VMs) in each layer of the application service environment. Formula (3)  represents the local profit made by the ith virtualized application service environment. Finally, a constrained nonlinear optimization problem and a heuristic hybrid optimization algorithm are proposed, both of which can earn more revenue and meet the requirements of different customers:

\[ \text{Profit}_i = B_i - P_i - L_i - C_i, \tag{3} \]

where B_i, P_i, L_i, and C_i, respectively, represent the total benefit, penalty, loss, and cost of the VMs.
Singh et al.  proposed a technology named STAR that manages resources autonomously in the cloud computing environment and reduces SLA violations, thereby improving the cost efficiency of cloud services. Beloglazov and Buyya  proposed a system to manage energy in cloud data centers. By continuously consolidating VMs and dynamically redistributing them, the system saves energy while providing a high QoS level. Guazzone et al.  proposed an automatic resource management system (see Figure 4) that provides guaranteed QoS levels and reduces energy consumption. The resource manager of the framework in Figure 4 combines virtualization technologies, which deploy each application to an independent VM, with control-theoretic technologies, which automatically manage computer performance and energy consumption. In addition, the resource manager consists of several independent components named the Application Manager, the Physical Machine Manager, and the Migration Manager. Unlike traditional static methods, this method can adapt to changing workloads dynamically and achieves remarkable results in reducing QoS violations. Sun et al.  established a model to simplify cloud resource allocation decisions and realize the autonomous allocation of resources. The optimal resource configuration can be obtained, so QoS requirements can be well met. Siddesh and Srinivasa  explored dynamic resource allocation and SLA assurance. They proposed a framework that handles heterogeneous workload types by dynamically planning computing capacity and assessing risks, and that uses scheduling methods to reduce the risk of SLA violations and maximize revenue in resource allocation.
Garg et al.  proposed a resource allocation strategy for dynamic VM allocation. The strategy improves resource utilization, increases providers' profits, and reduces SLA violations. Jing et al.  proposed a new dynamic allocation technique using a hybrid queuing model, which meets customers' different performance requirements by providing virtualized resources to each layer of virtualized application services. All these methods can reasonably configure resources in the cloud data center, improve system performance, reduce the additional costs of using resources, and meet the required QoS.
Qi et al.  proposed a QoS-aware VM scheduling strategy named QVMS. The scheduling problem is first transformed into a multiobjective optimization problem, and the optimal VM migration scheme is then found with a genetic algorithm. The strategy effectively manages resources in cyber-physical systems, reducing energy consumption and improving QoS levels. Qi et al.  also proposed a time-aware service recommendation strategy that improves traditional locality-sensitive hashing. The strategy emphasizes the influence of dynamic factors on QoS and the protection of user privacy.
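A genetic search over VM placements, as QVMS uses, can be sketched in a few lines. The encoding (one host index per VM), the weighted two-term objective, and the operator parameters below are illustrative assumptions of ours, not the paper's actual design:

```python
import random

def qos_aware_schedule(vms, hosts, energy, latency, generations=50, pop=20, seed=0):
    """Toy GA over VM placements: a plan assigns each VM a host index, and
    plans evolve against a combined energy + latency cost (sketch only)."""
    rng = random.Random(seed)

    def cost(plan):
        return sum(energy(v, h) + latency(v, h) for v, h in zip(vms, plan))

    # Random initial population of placement plans.
    popn = [[rng.randrange(len(hosts)) for _ in vms] for _ in range(pop)]
    for _ in range(generations):
        popn.sort(key=cost)                       # elitism: keep cheapest plans
        survivors = popn[:pop // 2]
        children = []
        for _ in range(pop - len(survivors)):
            a, b = rng.sample(survivors, 2)       # one-point crossover
            cut = rng.randrange(len(vms))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutation: move one VM
                child[rng.randrange(len(vms))] = rng.randrange(len(hosts))
            children.append(child)
        popn = survivors + children
    return min(popn, key=cost)
```

With a cost that makes host 0 cheapest for every VM, the search converges to the all-zero plan.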
Table 1 shows a summary of the abovementioned works. Solving the problems in Table 1 can improve QoS in the cloud computing environment. Server management refers to dynamically allocating powered-on servers. Workload consolidation refers to combining work to save energy. VM management refers to reasonable scheduling or integration of VMs to achieve better performance. Self-management refers to resources managing themselves, which achieves higher efficiency. Resource management refers to the correct allocation of resources to reduce waste. Service management is about making reasonable service choices.
2.1.2. Fog Computing
Gu et al.  used fog computing to process the large amount of data generated by medical devices and built the Fog Computing Supported Medical Cyber-Physical System (FC-MCPS). To reduce the cost of FC-MCPS, they jointly studied base station association, task assignment, and VM placement. The problem is modeled as a mixed integer linear program (MILP), and a two-stage heuristic algorithm based on linear programming (LP) is proposed to solve it. Ni et al.  proposed a resource allocation approach based on fog computing, which enables users to select resources independently while taking into account the price and the time required to finish the job. Formula (4)  defines the credibility of a resource as perceived by the user who interacts with it:

\[ T = w_1 Q_1 + w_2 Q_2 + w_3 Q_3 + w_4 Q_4, \qquad \sum_{m=1}^{4} w_m = 1, \tag{4} \]

where the weights w_1, ..., w_4, which can be determined by the user or the actual situation, weight Q_1, Q_2, Q_3, and Q_4: the response speed of the corresponding index service, the efficiency of execution, the speed of restart, and the reliability, respectively.
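The weighted credibility score of the reconstructed formula (4) is a one-liner; the function name and normalized score ranges below are our assumptions:

```python
def credibility(weights, response_speed, efficiency, restart_speed, reliability):
    """Weighted credibility of a fog resource (sketch of formula (4)).
    weights is (w1, w2, w3, w4) summing to 1; each quality index is assumed
    to be a normalized score in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    w1, w2, w3, w4 = weights
    return w1 * response_speed + w2 * efficiency + w3 * restart_speed + w4 * reliability

# A user who values response speed most might pick weights (0.4, 0.2, 0.2, 0.2).
score = credibility((0.4, 0.2, 0.2, 0.2), 1.0, 0.5, 0.5, 0.5)  # ~0.7
```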
2.1.3. Edge Computing
Wei et al.  proposed a unified framework for sustainable edge computing to save energy, including distributed and renewable energy. The architecture combines the energy supply system with edge services, which makes full use of renewable energy and provides better QoS. Lai et al.  proposed an optimized allocation method for edge users. The method not only maximizes the amount of resources allocated to users but also considers users' dynamic QoS levels, which makes the edge user allocation problem more general and improves the quality of experience.
Xu et al.  used blockchain to improve traditional crowdsourcing technology. Firstly, they proposed a mobile crowdsourcing framework that uses blockchain technology to protect user privacy. Then, they used a dynamic programming strategy with a clustering algorithm to classify requesters. Finally, they generated service policies that balance profits and energy consumption.
Rolik et al.  proposed a method to build an IoT infrastructure framework based on microclouds. The method helps users use resources rationally, reduces the cost of managing infrastructure, and improves consumers' quality of life. He et al.  proposed a dynamic network slicing strategy in which network slices can be dynamically adjusted according to time-varying resource demands. This method improves the utilization of the underlying resources and better meets different QoS demands. Yao and Ansari  proposed an algorithm to determine the number of VMs to be rented and to control the power supply, so that the cost of the system can be minimized and the QoS improved. Formula (5)  expresses the delay requirement of QoS: the total delay, composed of the wireless transmission delay and the fog processing delay, must not exceed the computation deadline of each task:

\[ d_i^{c} + d_i^{w} \le D_i, \quad \forall i \in N, \tag{5} \]

where the superscripts c and w, respectively, denote fog processing and wireless transmission, i denotes a location, d_i^{c} represents the processing delay, d_i^{w} the wireless transmission delay, D_i the deadline, and N the set of locations.
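The per-location delay constraint of the reconstructed formula (5) translates directly into a feasibility check; the list-based interface below is our own framing:

```python
def meets_deadline(processing_delays, transmission_delays, deadlines):
    """QoS delay constraint of formula (5): at every location i, fog processing
    delay plus wireless transmission delay must not exceed the task deadline."""
    return all(p + w <= d
               for p, w, d in zip(processing_delays, transmission_delays, deadlines))

ok = meets_deadline([1.0, 2.0], [1.0, 0.5], [3.0, 3.0])   # True: both fit
bad = meets_deadline([2.0], [2.0], [3.0])                 # False: 4.0 > 3.0
```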
2.2. Resource Management and Allocation
Rational allocation of resources is an effective means to save energy.
2.2.1. Cloud Computing
Wang et al.  introduced a distributed multiagent allocation method to assign VMs to physical machines. The method realizes VM consolidation while considering migration costs, and a VM migration mechanism based on local negotiation is proposed to avoid unnecessary migration costs. Hassan et al.  formulated a general problem and proposed a heuristic algorithm with optimal parameters. Under this formulation, resources can be allocated dynamically to meet applications' QoS requirements, and the algorithm minimizes the cost of dynamic resource allocation. Wu et al.  proposed a scheduling algorithm based on dynamic voltage and frequency scaling in cloud computing. The algorithm allocates resources for executing tasks and realizes a low-power network infrastructure. Compared with other schemes, this scheme saves more energy without sacrificing execution performance.
Sarbazi-Azad and Zomaya  used two task consolidation heuristics to save energy: MaxUtil, which aims at better resource utilization, and Energy-Conscious Task Consolidation, which reduces energy consumption. Both methods promote the concurrent execution of multiple tasks and improve energy efficiency. Hsu et al.  proposed a task consolidation technique to minimize energy consumption. Formula (6)  defines the energy consumption of VM j in the cluster from time t_0 to t_1, and formula (7)  defines the total energy consumption of a virtual cluster over the same period. In addition, the proposed technique limits CPU usage and merges tasks within virtual clusters. Once a task migration happens, the energy cost model takes the network latency into account. Sarbazi-Azad and Zomaya  and Hsu et al.  both maximize the benefit of cloud resources by using task merging techniques. Sarbazi-Azad and Zomaya  use a greedy algorithm called MaxUtil, whereas Hsu et al.  take into account the network latency associated with task migration, achieving a 17% improvement over MaxUtil:

\[ E_j = \int_{t_0}^{t_1} e_j(t)\, dt, \tag{6} \]

\[ E_{\text{cluster}} = \sum_{j=1}^{n} E_j, \tag{7} \]

where e_j(t) is the energy consumption of VM j per unit time and n is the number of VMs in the cluster.
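Formulas (6) and (7) compose straightforwardly: per-VM energy is the integral of a consumption rate, and cluster energy is the sum over VMs. A discrete-sampling sketch (names ours):

```python
def vm_energy(rate_trace, dt):
    """Energy of one VM over a sampled per-unit-time consumption trace:
    a discrete reading of formula (6) with sampling interval dt."""
    return sum(r * dt for r in rate_trace)

def cluster_energy(vm_traces, dt):
    """Total energy of the virtual cluster: the sum over its n VMs, formula (7)."""
    return sum(vm_energy(trace, dt) for trace in vm_traces)

# Two VMs sampled twice at dt = 0.5: energies 1.0 and 2.0, cluster total 3.0.
total = cluster_energy([[1.0, 1.0], [2.0, 2.0]], 0.5)
```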
Hsu et al.  proposed an energy-aware task consolidation technology. Based on the characteristics of most cloud systems, a 70% CPU-usage threshold is used to govern task consolidation among virtual clusters. This technology is very effective in reducing the energy consumed in cloud systems by merging tasks. Panda and Jana  proposed a multicriteria task consolidation algorithm that considers not only the time needed to process jobs but also the utilization rate of VMs, which makes it more energy efficient. Wang and Su  proposed a resource allocation algorithm to handle the wide range of communication between nodes in a cloud environment. The algorithm uses recognition technology to dynamically assign jobs to nodes according to their computing ability and storage factors, and its dynamic hierarchy reduces traffic during resource allocation. Lin et al.  proposed a dynamic auction approach for resource allocation. The approach ensures that even with many users and resources, providers earn reasonable profits and computing resources are allocated correctly. Yazir et al.  proposed a new method to manage resources dynamically and autonomously. Firstly, resource management is split into jobs, each executed by an autonomous node. Secondly, the autonomous nodes configure resources using a method called PROMETHEE. Krishnajyothi  proposed a framework that processes tasks in parallel to solve the problem of low efficiency when large tasks are submitted. Compared with a static framework, this framework can dynamically allocate VMs, thus reducing costs and task processing time. Lin et al.  proposed a method to allocate resources dynamically by using thresholds.
Because this method uses threshold values, it can optimize the reallocation of resources, improve resource usage, and reduce cost. Xu et al.  proposed a data placement strategy named IDP for the data generated by IoT devices to achieve reasonable data placement. In this way, the privacy of these data can be protected while resources are allocated reasonably. Jo et al.  proposed a computation offloading framework for 5G networks. The framework transfers the computing burden to the cloud, thus reducing clients' computing load and the communication cost.
Table 2 shows a summary of the abovementioned works. The problem of resource allocation and management in cloud computing can be divided into problems in Table 2. VM management is about a reasonable configuration of VMs. Resource allocation represents the dynamic and flexible allocation of resources. Task integration refers to combining tasks to save energy and improve efficiency.
2.2.2. Fog Computing
Yin et al.  established a new job scheduling model that uses containers. To make sure that jobs finish on time, a job scheduling algorithm is developed, which also optimizes the number of tasks that can run concurrently on fog computing nodes. The paper further proposes a redistribution mechanism to shorten task delays, and these methods are very effective in reducing them. Aazam and Huh  established a framework for effective resource administration in fog computing. Because there are various types of objects and devices, the connections between them may be volatile, so a method for predicting and administrating resources is proposed that accounts for the fact that any object or device can stop using resources at any time. Cuong et al.  studied the joint resource allocation and carbon footprint minimization problem in fog data centers. Formula (8)  denotes the energy consumption of the servers, and a distributed algorithm is proposed to solve the resulting large-scale optimization problem:

\[ P_s = P_{\text{idle}} + \bigl(P_{\text{peak}} - P_{\text{idle}}\bigr)\, \frac{\kappa y}{C}, \tag{8} \]

where P_s represents the power supply required by the servers in a data center; y represents the video stream; κ denotes a conversion factor that converts the video stream into workload; C represents the data center's load capacity; and P_idle and P_peak, respectively, represent the idle power and peak power of the servers.
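The reconstructed formula (8) is the familiar linear idle-to-peak power model, with utilization given by the converted video workload κ·y over capacity C; a sketch with names of our choosing:

```python
def server_power(y, kappa, capacity, p_idle, p_peak):
    """Power drawn by a data center's servers under video-stream workload y
    (sketch of formula (8)): idle power plus a share of the idle-to-peak
    range proportional to utilization kappa * y / capacity."""
    utilization = kappa * y / capacity
    assert 0.0 <= utilization <= 1.0, "workload exceeds data center capacity"
    return p_idle + (p_peak - p_idle) * utilization

# Half-loaded center idling at 100 W with a 200 W peak draws 150 W.
p = server_power(50.0, 1.0, 100.0, 100.0, 200.0)
```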
Jia et al.  studied computing resource allocation in a three-tier fog computing network. The resource allocation problem is first transformed into a bilateral matching optimization problem, and then a bilateral matching approach is proposed that improves system performance and obtains higher cost efficiency. Zhang et al.  proposed a joint optimization framework under fog computing to allocate the finite computing resources of fog nodes. The framework achieves the best allocation and effectively improves network performance. Tan et al.  presented a method to allocate computing and communication resources. The method transfers computing jobs to the remote cloud and to nearby nodes, reducing the computation load and energy of edge nodes. Vasconcelos et al.  developed a platform to allocate resources accessible to client devices in a fog computing environment, using the resources of devices near the host to meet applications' needs for rapid response to computing resources. Aazam et al.  presented a method to estimate and manage resources in fog computing based on fluctuations in customer abandonment probability, the type and price of services, and other factors.
Table 3 shows a summary of the abovementioned works. The problems in Table 3 are also derived from resource allocation and management problems. Task allocation represents the scheduling and redistribution of tasks. Resource allocation is still about the dynamic and flexible allocation of resources. Low latency refers to taking short time to configure and manage resources, which can improve efficiency.
2.2.3. Edge Computing
Tung et al.  proposed a new framework for resource allocation based on market needs. The resources come from edge nodes (ENs) with limited heterogeneous capabilities and are allocated to multiple competing services at the network edge. Generating a market equilibrium solution by reasonably pricing ENs yields the maximum utilization of edge computing resources. Xu et al.  proposed a strategy to jointly optimize offloading and privacy protection. The strategy first shifts tasks to improve the resource utilization of resource-limited edge cells and then balances QoS performance against privacy protection to achieve joint optimization.
Xu et al.  proposed an offloading strategy for edge computing under 5G networks that uses blockchain technology. The optimal strategy is obtained by a balanced offloading method. It solves the problem of data loss under transmission delay, which is caused by user equipment's uneven resource requirements. Xu et al.  proposed a computation offloading method named EACO to reduce energy consumption in smart computing models. Figure 5 shows the architecture of smart edge computing, where the shortest path is used to offload tasks. EACO uses genetic algorithms to reduce the energy consumed in operating edge computing nodes and to improve the efficiency of performing complex computing tasks. Xu et al.  proposed a computation offloading strategy for edge computing that protects the privacy of connected-vehicle networks. They first analyzed the privacy conflicts of tasks and then designed the communication route to obtain routing vehicles, achieving multiobjective optimization. Yeting et al.  proposed a unique resource allocation mechanism that takes each individual task, rather than the whole service, as the basis for resource allocation. It reduces the packet loss rate and saves energy by offloading services.
Chen et al.  studied the problem of multiuser computation offloading in an MEC environment with multichannel wireless interference, and developed a distributed computation offloading algorithm that performs well even with a large number of users. Gao et al.  built a binary quadratic program to assign tasks in a mobile cloud computing environment. Two heuristic algorithms are presented to obtain the optimal solution, and both effectively solve the task assignment problem. Xu et al.  proposed an offloading method using blockchain technology. It guards against data loss when offloading tasks in edge computing and solves the problem of disproportionate resource requests caused by the limited load capacity of edge computing equipment during task transfer. Yifei et al.  proposed a model-free reinforcement learning framework to solve the computation offloading problem. The model can be applied to computation offloading with time-varying computing requests.
Barcelo et al.  expressed the service allocation problem  as a minimum-cost mixed-flow problem that can be solved by LP. Solving this problem addresses unbalanced network load and end-to-end service delay, and it also resolves the excessive power consumption caused by the centralized cloud architecture. Angelakis et al.  assigned services' resource requirements to the heterogeneous network interfaces of devices, so that a large number of services can use more heterogeneous network interfaces.
Li et al.  proposed a communication framework for 5G and studied the problem of allocating power and channels so that the signal data in the channels remain available and the total energy efficiency is maximized. Formula (9)  shows how to calculate the energy efficiency of the system:

\[ EE = \sum_{s \in \mathcal{S}} \sum_{c \in \mathcal{C}} \eta_{s,c} + \sum_{a \in \mathcal{A}} \sum_{c \in \mathcal{C}} \eta_{a,c}, \tag{9} \]

where η_{s,c} and η_{a,c}, respectively, denote the energy efficiency of sensor s and actuator a on channel c, and the sets of sensors, actuators, and channels are, respectively, represented as 𝒮, 𝒜, and 𝒞.
Liu et al.  studied efficient resource allocation in wirelessly powered IoT. In this method, users are first grouped into accessible channels, and then the power distribution of users grouped in the same channel is studied to improve network throughput. The method can allocate finite resources to a large group of users. Ejaz and Ibnkahla  proposed a multiband resource allocation framework for cognitive 5G IoT. In the highly dynamic IoT environment, the multiband method manages resources more flexibly and reduces more energy consumption. In addition, a multilevel reconfiguration approach is proposed to allocate resources reasonably for applications with different QoS needs. Colistra et al.  proposed a distributed, optimal protocol to allocate resources in heterogeneous IoT. Because the protocol adapts well to changes in network topology, it can distribute resources evenly among nodes. Jian et al.  proposed a multilevel resource allocation algorithm for IoT communication using advanced technology. The algorithm uses a hierarchical structure and achieves a fast data processing rate and very low latency in both saturated and unsaturated environments.
Zheng and Liu  proposed a new algorithm to allocate bandwidth dynamically for controlling remote computers in the IoT. This method can reduce the error of signal reconstruction under the same bandwidth and make the bandwidth allocation of IoT more reasonable. Gai and Qiu  used reinforcement learning mechanisms to allocate resources to achieve high Quality of Experience. This method can effectively solve the resource allocation problems caused by the mismatch of service quality and complex service providing condition in the IoT.
Table 4 shows a summary of the above works. The problem in Table 4 represents the realization of dynamic and flexible allocation of resources. The resources here can represent channels, bandwidth, and power.
2.3. Scientific Workflow Execution
Executing scientific workflows efficiently, especially in heterogeneous environments, can reduce resource waste and energy costs. This can be achieved by allocating resources reasonably and deploying VMs dynamically.
2.3.1. Cloud Computing
Xu et al.  proposed a resource allocation method called EnReal to address energy consumption, which dynamically deploys VMs to execute scientific workflows. Bousselmi et al.  proposed an energy-aware scheduling method for executing scientific workflows in cloud computing. First, a workflow partitioning algorithm for energy minimization is presented, which achieves high parallelism without huge energy consumption. Then, a heuristic cat swarm optimization algorithm is proposed for the created partitions, which minimizes the total energy consumption and the execution time of the workflows. Sonia et al.  proposed a multiobjective workflow scheduling method with a hybrid particle swarm optimization algorithm, together with a method for dynamic voltage and frequency scaling that lets the processors work at any voltage level, so as to minimize energy consumption during workflow scheduling. Both Bousselmi et al.  and Sonia et al.  use scheduling to execute scientific workflows and study energy consumption. The difference is that Bousselmi et al.  focus on compute-intensive tasks, while Sonia et al.  focus on workflow scheduling on heterogeneous computing systems.
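The idea behind dynamic voltage and frequency scaling can be illustrated with the classic CMOS model, where dynamic power scales as C·V²·f and execution time as cycles/f, so task energy is C·V²·cycles; the scheduler then picks the lowest-energy level that still meets the deadline. The level table and constants below are illustrative, not from the surveyed papers:

```python
def dvfs_task_energy(cycles, freq, voltage, c_eff=1.0):
    """Energy of one task under DVFS: dynamic power C*V^2*f times the
    execution time cycles/f, which reduces to C*V^2*cycles."""
    power = c_eff * voltage ** 2 * freq
    time = cycles / freq
    return power * time

def pick_level(cycles, deadline, levels):
    """Choose the lowest-energy (voltage, frequency) pair that still finishes
    the task by its deadline, as in slack-reclaiming DVFS scheduling."""
    feasible = [(v, f) for v, f in levels if cycles / f <= deadline]
    return min(feasible, key=lambda vf: dvfs_task_energy(cycles, vf[1], vf[0]))

# A tight deadline forces the fast high-voltage level; slack lets the
# scheduler drop to the slower, lower-energy level.
levels = [(1.0, 1e9), (1.2, 2e9)]
fast = pick_level(1.5e9, 1.0, levels)   # (1.2, 2e9)
slow = pick_level(1.5e9, 2.0, levels)   # (1.0, 1e9)
```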
Cao  established a scientific workflow scheduling algorithm with the objective of energy saving. The algorithm enables service providers to gain high profits while reducing users’ overhead. Li et al.  proposed a cloud-based scheduling algorithm that minimizes the cost of executing workflows within a specified deadline; in addition, the rented VMs are adjusted to save further cost. Khaleel and Zhu  proposed a scheduling algorithm that takes scientific workflows as a model to make full use of cloud resources and save energy. Shi et al.  designed a flexible resource allocation and job scheduling mechanism for scientific workflows. Because this mechanism executes scientific workflows within prescribed budgets and deadlines, it outperforms other mechanisms.
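The deadline-constrained cost minimization that several of these schedulers perform can be sketched in miniature. The VM types, speeds, and prices below are assumptions for illustration, not values from Li et al.; billing is per started hour, as in typical public clouds.

```python
import math

# Hedged sketch of deadline-constrained cost minimization: among
# hypothetical VM types, rent the cheapest one whose speed still finishes
# the workflow before the deadline (billing per started hour).

def cheapest_vm(workload, deadline_h, vm_types):
    """workload in compute units; vm_types: name -> (units/hour, $/hour)."""
    feasible = []
    for name, (speed, price) in vm_types.items():
        hours = workload / speed
        if hours <= deadline_h:
            cost = math.ceil(hours) * price   # pay for each started hour
            feasible.append((cost, name))
    return min(feasible)[1] if feasible else None

vms = {"small": (10, 1.0), "medium": (25, 2.5), "large": (60, 7.0)}
choice = cheapest_vm(workload=100, deadline_h=5, vm_types=vms)
print(choice)  # "medium": "small" misses the deadline, "large" costs more
```

The interesting cases are exactly the ones the surveyed works target: the fastest VM is rarely the cheapest feasible one, so naive deadline-driven provisioning overspends.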
Table 5 summarizes the abovementioned works. The problems in Table 5 derive from scientific workflow execution. VM deployment refers to the rational allocation of VMs. Workflow scheduling refers to reducing scheduling energy and time, as well as scheduling workflows on heterogeneous systems. Cost reduction refers to reducing the cost of workflow execution. Effective implementation refers to executing scientific workflows within a specified budget and deadline.
2.4. Server Optimization
Server optimization is also a good way to save energy. It can be achieved by shutting down unnecessary servers, consolidating servers, or scheduling tasks reasonably. Unlike QoS optimization, which studies how to give users a better experience and meet their needs, server optimization aims to reduce the number of servers in use, improve their energy efficiency, and consolidate them.
2.4.1. Cloud Computing
Ge et al.  proposed a game-theoretic method that transforms the energy minimization problem into a congestion game in which all mobile devices are players. The method chooses a server to offload computation tasks, optimizing QoS levels while saving energy. Wang et al.  proposed a MapReduce-based multitask scheduling algorithm to save energy. The model is a two-layer model that considers both the impact of server performance changes on energy consumption and the limitation of network bandwidth. In addition, a local search operator is designed, on which a two-layer genetic algorithm is based. The algorithm can schedule tens of thousands of tasks in the cloud and achieve large-scale optimization. Yanggratoke et al.  proposed a generic gossip protocol for allocating resources in cloud environments and developed an instantiation of it that enables server consolidation, so that resources can be reallocated among servers to meet changing load patterns.
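The core mechanism of gossip-based allocation can be shown in a few lines. This is a generic pairwise-averaging illustration, not Yanggratoke et al.'s actual protocol: in each round a random pair of nodes exchanges state and averages their loads, so all nodes drift toward the global mean with no central coordinator.

```python
import random

# Minimal gossip sketch: repeated pairwise averaging drives every node's
# load toward the global mean (here 50), without central coordination.
random.seed(1)
loads = [90.0, 10.0, 50.0, 30.0, 70.0]   # assumed initial node loads

for _ in range(200):                     # gossip rounds
    i, j = random.sample(range(len(loads)), 2)
    avg = (loads[i] + loads[j]) / 2      # pairwise state exchange + average
    loads[i] = loads[j] = avg

print(loads)  # every node ends up close to the global mean of 50
```

Pairwise averaging conserves the total load while shrinking its variance each round, which is what makes gossip protocols attractive at cloud scale: each node only ever talks to one peer at a time.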
2.5. Load Balancing
Load balancing can help save energy by managing the number of active servers and allocating resources reasonably.
2.5.1. Cloud Computing
Paya and Marinescu  introduced an operation model that balances load and scales applications in the cloud to save energy. The principle of this model is to define an operating regime and keep as many running servers as possible within it; servers with no tasks to perform are put to sleep, thus reducing energy consumption. Justafort et al.  studied the workload placement problem across cloud computing environments and proposed a method for the VM placement problem, so that the carbon footprint can be effectively reduced. Panwar and Mallick  proposed an algorithm that dynamically manages the load and effectively distributes the total incoming requests among VMs; through efficient and uniform utilization of resources, it achieves a uniform distribution of load among servers. Yang et al.  proposed a power management mechanism to balance the load, in which the system monitors VMs and dynamically allocates resources. Yang et al.  also proposed an optimization system for better dynamic resource allocation, which balances the load of VMs running on multiple physical machines; under this system, VMs can be migrated automatically to adjust high and low loads without interrupting services. Yang et al. [89, 90] manage VMs to achieve load balancing, dynamically allocating resources and migrating VMs to balance workloads across physical machines. The difference is that the former integrates a dynamic resource allocation approach with OpenNebula, while the latter focuses on avoiding service outages during VM migration.
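The sleep-based policy described above can be sketched as follows. The capacity and power figures are assumptions for illustration: each server handles up to CAP tasks, draws ACTIVE_W when busy and IDLE_W when awake but idle, and a sleeping server draws approximately nothing, so the policy sleeps every server that has no work.

```python
# Sketch of a sleep-based energy policy (assumed capacity/power figures).
CAP, ACTIVE_W, IDLE_W = 100, 200.0, 120.0

def plan(total_tasks, n_servers):
    """Return (servers awake, servers asleep, total power draw)."""
    needed = -(-total_tasks // CAP)          # ceil division
    awake = min(max(needed, 1), n_servers)   # keep at least one awake
    asleep = n_servers - awake
    # Busy servers draw ACTIVE_W; the policy avoids paying IDLE_W by
    # putting every workless server to sleep instead of leaving it idle.
    return awake, asleep, awake * ACTIVE_W

awake, asleep, power = plan(total_tasks=250, n_servers=10)
print(awake, asleep, power)  # 3 awake, 7 asleep, 600.0 W
```

For comparison, keeping all ten servers awake at this load would draw 3 × 200 W plus 7 × 120 W = 1440 W, which is the idle-power waste that sleep scheduling eliminates.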
Table 6 summarizes the abovementioned works. The problems in Table 6 come from load balancing. Server management concerns controlling the number of servers running in the system. Workload management is the rational allocation of workloads or tasks. VM management refers to configuring VM resources and migrating VMs to adjust loads.
2.5.2. Fog Computing
Xu et al.  proposed a method called “DRAM” to dynamically allocate resources in fog computing environments, avoiding both overly high and overly low loads. The method first analyzes the load balance of different kinds of computing nodes; it then designs a fog resource allocation method that achieves load balance by allocating resources statically and migrating services dynamically. Oueis et al.  studied the load balancing problem in fog computing, in which several users need to offload computations and all of their demands must be handled by a local computing cluster; a custom fog clustering algorithm is proposed to solve this problem.
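Threshold-based migration of the kind DRAM performs can be sketched briefly. The utilization band, node capacity, and unit-sized services below are assumptions for illustration, not DRAM's actual parameters: services move one at a time from the most-loaded fog node to the least-loaded one until every node sits inside the band.

```python
# Hedged sketch of threshold-based load balancing in the spirit of DRAM.
LOW, HIGH = 0.2, 0.8               # target utilization band (assumed)
CAP = 100                          # uniform node capacity (assumed)

def rebalance(nodes):
    """nodes: list of per-node load units; mutated toward the band."""
    for _ in range(1000):          # safety bound on migration steps
        hi, lo = max(nodes), min(nodes)
        if hi / CAP <= HIGH and lo / CAP >= LOW:
            break                  # all nodes inside the band
        # migrate one service (one load unit) from hottest to coolest node
        nodes[nodes.index(hi)] -= 1
        nodes[nodes.index(lo)] += 1
    return nodes

nodes = rebalance([95, 10, 40, 55])
print(nodes)  # loads pulled inside the 20-80% utilization band
```

Migrating from the hottest to the coolest node is the simplest greedy rule; DRAM additionally combines static allocation with dynamic migration, which this sketch does not attempt to reproduce.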
Wang et al.  established an energy-saving system architecture for the industrial IoT. In addition, they developed a sleep scheduling and wake-up protocol that predicts sleep intervals, providing a better way to save energy.
This paper presented a comprehensive study of QoS optimization and energy saving in cloud computing, edge computing, fog computing, and IoT models. We summarized five main problems and analyzed the solutions proposed by existing works. Through this survey, we aim to help readers gain a deeper understanding of the concepts of different computing models and of the techniques for QoS optimization and energy saving in these models.
The investigated papers focus on ensuring QoS, reducing SLA violations, and managing resources. For QoS assurance and SLA violation reduction, the main solution is efficient VM management, which meets customers’ requirements through reasonable scheduling and consolidation of VMs. Most resource management techniques rely on reasonable scheduling of resources, which reduces the waste of VMs, servers, and traffic.
This manuscript is an extension of “A Survey of QoS Optimization and Energy Saving in Cloud, Edge, and IoT,” presented at the 9th EAI International Conference on Cloud Computing.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant no. 61702274), Natural Science Foundation of Jiangsu Province (Grant no. BK20170958), and PAPD.
- Q. Zhang, L. Cheng, and R. Boutaba, “Cloud computing: state-of-the-art and research challenges,” Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, 2010.
- W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.
- M. Iorga, L. Feldman, R. Barton, M. J. Martin, N. S. Goren, and C. Mahmoudi, “Fog computing conceptual model,” Tech. Rep., Recommendations of the National Institute of Standards and Technology, Gaithersburg, MD, USA, 2018.
- Nebbiolo, “Fog vs edge computing,” Tech. Rep., Nebbiolo Technologies Inc., Milpitas, CA, USA, 2018.
- C. T. Do, N. H. Tran, C. Pham, M. G. R. Alam, J. H. Son, and C. S. Hong, “A proximal algorithm for joint resource allocation and minimizing carbon footprint in geo-distributed fog computing,” in Proceedings of the 2015 International Conference on Information Networking (ICOIN), pp. 324–329, IEEE, Siem Reap, Cambodia, January 2015.
- S. Vashi, J. Ram, J. Modi, S. Verma, and C. Prakash, “Internet of things (IoT): a vision, architectural elements, and security issues,” in Proceedings of the 2017 International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC), pp. 492–496, IEEE, Coimbatore, India, February 2017.
- M. Mazzucco, D. Dyachuk, and R. Deters, “Maximizing cloud providers’ revenues via energy aware allocation policies,” in Proceedings of the 2010 IEEE 3rd International Conference on Cloud Computing, pp. 131–138, IEEE, Miami, FL, USA, July 2010.
- Q. He, R. Zhou, X. Zhang et al., “Keyword search for building service-based systems,” IEEE Transactions on Software Engineering, vol. 43, no. 7, pp. 658–674, 2016.
- L. Sun, H. Dong, O. K. Hussain, F. K. Hussain, and A. X. Liu, “A framework of cloud service selection with criteria interactions,” Future Generation Computer Systems, vol. 94, pp. 749–764, 2019.
- M. Mazzucco and D. Dyachuk, “Optimizing cloud providers revenues via energy efficient server allocation,” Sustainable Computing: Informatics and Systems, vol. 2, no. 1, pp. 1–12, 2012.
- Q. He, J. Han, F. Chen et al., “Qos-aware service selection for customisable multi-tenant service-based systems: maturity and approaches,” in Proceedings of the 2015 IEEE 8th International Conference on Cloud Computing, pp. 237–244, IEEE, New York, NY, USA, July 2015.
- L. Sun, J. Ma, H. Wang, Y. Zhang, and J. Yong, “Cloud service description model: an extension of usdl for cloud services,” IEEE Transactions on Services Computing, vol. 11, no. 2, pp. 354–368, 2015.
- Y. Wang, Q. He, D. Ye, and Y. Yang, “Formulating criticality-based cost-effective fault tolerance strategies for multi-tenant service-based systems,” IEEE Transactions on Software Engineering, vol. 44, no. 3, pp. 291–307, 2017.
- S. Mustafa, K. Bilal, S. U. R. Malik, and S. A. Madani, “Sla-aware energy efficient resource management for cloud environments,” IEEE Access, vol. 6, pp. 15004–15020, 2018.
- J. Bi, H. Yuan, M. Tie, and W. Tan, “Sla-based optimisation of virtualised resource for multi-tier web applications in cloud data centres,” Enterprise Information Systems, vol. 9, no. 7, pp. 743–767, 2015.
- S. Singh, I. Chana, and R. Buyya, “Star: sla-aware autonomic management of cloud resources,” IEEE Transactions on Cloud Computing, p. 1, 2017.
- A. Beloglazov and R. Buyya, “Energy efficient resource management in virtualized cloud data centers,” in Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, pp. 826–831, IEEE Computer Society, Melbourne, Australia, May 2010.
- M. Guazzone, C. Anglano, and M. Canonico, “Energy-efficient resource management for cloud computing infrastructures,” in Proceedings of the 2011 IEEE Third International Conference on Cloud Computing Technology and Science, pp. 424–431, IEEE, Athens, Greece, November 2011.
- Y. Sun, J. White, and S. Eade, “A model-based system to automate cloud resource allocation and optimization,” in Proceedings of the International Conference on Model Driven Engineering Languages and Systems, pp. 18–34, Springer, Valencia, Spain, October 2014.
- G. Siddesh and K. Srinivasa, “Sla-driven dynamic resource allocation on clouds,” in Proceedings of the International Conference on Advanced Computing, Networking and Security, pp. 9–18, Springer, Surathkal, India, December 2011.
- S. K. Garg, S. K. Gopalaiyengar, and R. Buyya, “Sla-based resource provisioning for heterogeneous workloads in a virtualized cloud datacenter,” in Proceedings of the International Conference on Algorithms and Architectures for Parallel Processing, pp. 371–384, Springer, Melbourne, Australia, October 2011.
- J. Bi, Z. Zhu, and H. Yuan, “Sla-aware dynamic resource provisioning for profit maximization in shared cloud data centers,” in Proceedings of the International Conference on High Performance Networking, Computing and Communication Systems, pp. 366–372, Springer, Singapore, May 2011.
- L. Qi, Y. Chen, Y. Yuan, S. Fu, X. Zhang, and X. Xu, “A Qos-aware virtual machine scheduling method for energy conservation in cloud-based cyber-physical systems,” World Wide Web, vol. 4, no. 3, pp. 1–23, 2019.
- A. Beloglazov, J. Abawajy, and R. Buyya, “Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing,” Future Generation Computer Systems, vol. 28, no. 5, pp. 755–768, 2012.
- L. Qi, R. Wang, C. Hu, S. Li, Q. He, and X. Xu, “Time-aware distributed service recommendation with privacy-preservation,” Information Sciences, vol. 480, pp. 354–364, 2019.
- Z. Zhai, B. Cheng, Y. Tian, J. Chen, L. Zhao, and M. Niu, “A data-driven service creation approach for end-users,” IEEE Access, vol. 4, pp. 9923–9940, 2016.
- L. Gu, D. Zeng, S. Guo, A. Barnawi, and Y. Xiang, “Cost efficient resource management in fog computing supported medical cyber-physical system,” IEEE Transactions on Emerging Topics in Computing, vol. 5, no. 1, pp. 108–119, 2015.
- L. Ni, J. Zhang, C. Jiang, C. Yan, and K. Yu, “Resource allocation strategy in fog computing based on priced timed petri nets,” IEEE Internet of Things Journal, vol. 4, no. 5, pp. 1216–1228, 2017.
- L. Wei, T. Yang, F. C. Delicato et al., “On enabling sustainable edge computing with renewable energy resources,” IEEE Communications Magazine, vol. 56, no. 5, pp. 94–101, 2018.
- P. Lai, Q. He, G. Cui et al., “Edge user allocation with dynamic quality of service,” in Proceedings of the International Conference on Service-Oriented Computing, pp. 86–101, Springer, Toulouse, France, October 2019.
- X. Xu, Q. Liu, X. Zhang, J. Zhang, L. Qi, and W. Dou, “A blockchain-powered crowdsourcing method with privacy preservation in mobile environment,” IEEE Transactions on Computational Social Systems, vol. 6, no. 6, pp. 1407–1419, 2019.
- O. Rolik, E. Zharikov, and S. Telenyk, “Microcloud-based architecture of management system for IoT infrastructures,” in Proceedings of the 2016 Third International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S&T), pp. 149–151, IEEE, Kharkiv, Ukaraine, October 2016.
- W. He, S. Guo, Y. Liang, R. Ma, X. Qiu, and L. Shi, “Qos-aware and resource-efficient dynamic slicing mechanism for internet of things,” Computers, Materials & Continua, vol. 61, no. 3, pp. 1345–1364, 2019.
- J. Yao and N. Ansari, “Qos-aware fog resource provisioning and mobile device power control in IoT networks,” IEEE Transactions on Network and Service Management, vol. 16, no. 1, pp. 167–175, 2018.
- W. Wang, Y. Jiang, and W. Wu, “Multiagent-based resource allocation for energy minimization in cloud computing systems,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 2, pp. 1–16, 2016.
- K. Krishnajyothi, “Parallel data processing for effective dynamic resource allocation in the cloud,” International Journal of Computer Applications, vol. 70, no. 22, pp. 1–4, 2013.
- M. M. Hassan, B. Song, M. S. Hossain, and A. Alamri, “Efficient resource scheduling for big data processing in cloud platform,” in Proceedings of the International Conference on Internet and Distributed Computing Systems, pp. 51–63, Springer, Calabria, Italy, September 2014.
- C.-M. Wu, R.-S. Chang, and H.-Y. Chan, “A green energy-efficient scheduling algorithm using the dvfs technique for cloud datacenters,” Future Generation Computer Systems, vol. 37, no. 7, pp. 141–147, 2014.
- Z. Wang and X. Su, “Dynamically hierarchical resource-allocation algorithm in cloud computing environment,” The Journal of Supercomputing, vol. 71, no. 7, pp. 2748–2766, 2015.
- W.-Y. Lin, G.-Y. Lin, and H.-Y. Wei, “Dynamic auction mechanism for cloud resource allocation,” in Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, pp. 591-592, IEEE Computer Society, Victoria, Australia, May 2010.
- Y. O. Yazir, C. Matthews, R. Farahbod et al., “Dynamic resource allocation in computing clouds using distributed multiple criteria decision analysis,” in Proceedings of the 2010 IEEE 3rd International Conference on Cloud Computing, pp. 91–98, IEEE, Washington, DC, USA, July 2010.
- W. Lin, J. Z. Wang, C. Liang, and D. Qi, “A threshold-based dynamic resource allocation scheme for cloud computing,” Procedia Engineering, vol. 23, no. 5, pp. 695–703, 2011.
- X. Xu, S. Fu, L. Qi et al., “An IoT-oriented data placement method with privacy preservation in cloud environment,” Journal of Network and Computer Applications, vol. 124, pp. 148–157, 2018.
- J. Bokyun, P. Md. Jalil, L. Daeho, and S. Doug Young, “Efficient computation offloading in mobile cloud computing for video streaming over 5g,” Computers, Materials & Continua, vol. 61, no. 2, pp. 439–463, 2019.
- H. Sarbazi-Azad and A. Y. Zomaya, “Energy-efficient resource utilization in cloud computing,” in Large Scale Network-Centric Distributed Systems, pp. 377–408, Wiley-IEEE Press, Hoboken, NJ, USA, 2014.
- C.-H. Hsu, K. D. Slagter, S.-C. Chen, and Y.-C. Chung, “Optimizing energy consumption with task consolidation in clouds,” Information Sciences, vol. 258, no. 3, pp. 452–462, 2014.
- C.-H. Hsu, S.-C. Chen, C.-C. Lee et al., “Energy-aware task consolidation technique for cloud computing,” in Proceedings of the 2011 IEEE Third International Conference on Cloud Computing Technology and Science, pp. 115–121, IEEE, Athens, Greece, 2011.
- S. K. Panda and P. K. Jana, “An efficient task consolidation algorithm for cloud computing systems,” in Proceedings of the International Conference on Distributed Computing and Internet Technology, pp. 61–74, Springer, Bhubaneswar, India, January 2016.
- L. Yin, J. Luo, and H. Luo, “Tasks scheduling and resource allocation in fog computing based on containers for smart manufacturing,” IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4712–4721, 2018.
- M. Aazam and E.-N. Huh, “Dynamic resource provisioning through fog micro datacenter,” in Proceedings of the 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), pp. 105–110, IEEE, St. Louis, MO, USA, March 2015.
- B. Jia, H. Hu, Y. Zeng, T. Xu, and Y. Yang, “Double-matching resource allocation strategy in fog computing networks based on cost efficiency,” Journal of Communications and Networks, vol. 20, no. 3, pp. 237–246, 2018.
- H. Zhang, Y. Xiao, S. Bu, D. Niyato, F. R. Yu, and Z. Han, “Computing resource allocation in three-tier IoT fog networks: a joint optimization approach combining stackelberg game and matching,” IEEE Internet of Things Journal, vol. 4, no. 5, pp. 1204–1215, 2017.
- J. Tan, T.-H. Chang, and T. Q. Quelc, “Minimum energy resource allocation in fog radio access network with fronthaul and latency constraints,” in Proceedings of the 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5, IEEE, Kalamata, Greece, June 2018.
- D. R. de Vasconcelos, R. M. de Castro Andrade, and J. N. de Souza, “Smart shadow–an autonomous availability computation resource allocation platform for internet of things in the fog computing environment,” in Proceedings of the 2015 International Conference on Distributed Computing in Sensor Systems, pp. 216-217, IEEE, Fortaleza, Brazil, June 2015.
- M. Aazam, M. St-Hilaire, C.-H. Lung, and I. Lambadaris, “Pre-fog: IoT trace based probabilistic resource estimation at fog,” in Proceedings of the 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 12–17, IEEE, Las Vegas, NV, USA, January 2016.
- N. D. Tung, L. L. Bao, and B. Vijay, “Price-based resource allocation for edge computing: a market equilibrium approach,” IEEE Transactions on Cloud Computing, p. 1, 2018.
- X. Xu, C. He, Z. Xu, L. Qi, S. Wan, and M. Z. A. Bhuiyan, “Joint optimization of offloading utility and privacy for edge computing enabled IoT,” IEEE Internet of Things Journal, 2019.
- X. Xu, Y. Chen, X. Zhang, Q. Liu, X. Liu, and L. Qi, “A blockchain-based computation offloading method for edge computing in 5G networks,” Software: Practice and Experience, 2019.
- X. Xu, Y. Li, T. Huang et al., “An energy-aware computation offloading method for smart edge computing in wireless metropolitan area networks,” Journal of Network and Computer Applications, vol. 133, pp. 75–85, 2019.
- X. Xu, Y. Xue, L. Qi et al., “An edge computing-enabled computation offloading method with privacy preservation for internet of connected vehicles,” Future Generation Computer Systems, vol. 96, pp. 89–100, 2019.
- G. Yeting, L. Fang, X. Nong, and C. Zhengguo, “Task-based resource allocation bid in edge computing micro datacenter,” Computers, Materials & Continua, vol. 61, no. 2, pp. 777–792, 2019.
- X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808, 2016.
- B. Gao, L. He, X. Lu, C. Chang, K. Li, and K. Li, “Developing energy-aware task allocation schemes in cloud-assisted mobile workflows,” in Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, pp. 1266–1273, IEEE, Liverpool, UK, October 2015.
- X. Xu, X. Zhang, H. Gao, Y. Xue, L. Qi, and W. Dou, “Become: blockchain-enabled computation offloading for IoT in mobile edge computing,” IEEE Transactions on Industrial Informatics, 2019.
- W. Yifei, W. Zhaoying, G. Da, and Y. F. Richard, “Deep q-learning based computation offloading strategy for mobile edge computing,” Computers, Materials & Continua, vol. 59, no. 1, pp. 89–104, 2019.
- M. Barcelo, A. Correa, J. Llorca, A. M. Tulino, J. L. Vicario, and A. Morell, “IoT-cloud service optimization in next generation smart environments,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 12, pp. 4077–4090, 2016.
- B. Cheng, M. Wang, S. Zhao, Z. Zhai, D. Zhu, and J. Chen, “Situation-aware dynamic service coordination in an IoT environment,” IEEE/ACM Transactions On Networking, vol. 25, no. 4, pp. 2082–2095, 2017.
- V. Angelakis, I. Avgouleas, N. Pappas, and D. Yuan, “Flexible allocation of heterogeneous resources to services on an IoT device,” in Proceedings of the 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 99-100, IEEE, Hong Kong, China, April 2015.
- S. Li, Q. Ni, Y. Sun, G. Min, and S. Al-Rubaye, “Energy-efficient resource allocation for industrial cyber-physical IoT systems in 5g era,” IEEE Transactions on Industrial Informatics, vol. 14, no. 6, pp. 2618–2628, 2018.
- X. Liu, Z. Qin, Y. Gao, and J. A. McCann, “Resource allocation in wireless powered IoT networks,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4935–4945, 2019.
- W. Ejaz and M. Ibnkahla, “Multi-band spectrum sensing and resource allocation for IoT in cognitive 5g networks,” IEEE Internet of Things Journal, vol. 5, no. 1, pp. 150–163, 2017.
- G. Colistra, V. Pilloni, and L. Atzori, “Task allocation in group of nodes in the IoT: a consensus approach,” in Proceedings of the 2014 IEEE International Conference on Communications (ICC), pp. 3848–3853, IEEE, Sydney, Australia, June 2014.
- J. Li, Q. Sun, and G. Fan, “Resource allocation for multiclass service in IoT uplink communications,” in Proceedings of the 2016 3rd International Conference on Systems and Informatics (ICSAI), pp. 777–781, IEEE, Shanghai, China, November 2016.
- L. I. Zheng and K. H. Liu, “Dynamic bandwidth resource allocation algorithm in internet of things and its application,” Computer Engineering, vol. 38, no. 17, pp. 16–19, 2012.
- K. Gai and M. Qiu, “Optimal resource allocation using reinforcement learning for IoT content-centric services,” Applied Soft Computing, vol. 70, pp. 12–21, 2018.
- X. Xu, W. Dou, X. Zhang, and J. Chen, “Enreal: an energy-aware resource allocation method for scientific workflow executions in cloud environment,” IEEE Transactions on Cloud Computing, vol. 4, no. 2, pp. 166–179, 2016.
- K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, “Energy efficient partitioning and scheduling approach for scientific workflows in the cloud,” in Proceedings of the 2016 IEEE International Conference on Services Computing (SCC), pp. 146–154, IEEE, San Francisco, CA, USA, June 2016.
- Y. Sonia, C. Rachid, K. Hubert, and G. Bertrand, “Multi-objective approach for energy-aware workflow scheduling in cloud computing environments,” The Scientific World Journal, vol. 2013, Article ID 350934, 13 pages, 2013.
- F. Cao, Efficient Scientific Workflow Scheduling in Cloud Environment, Southern Illinois University, Carbondale, IL, USA, 2014.
- Z. Li, J. Ge, H. Hu, W. Song, H. Hu, and B. Luo, “Cost and energy aware scheduling algorithm for scientific workflows with deadline constraint in clouds,” IEEE Transactions on Services Computing, vol. 11, no. 4, pp. 713–726, 2015.
- M. Khaleel and M. M. Zhu, “Energy-aware job management approaches for workflow in cloud,” in Proceedings of the 2015 IEEE International Conference on Cluster Computing, pp. 506-507, IEEE, Chicago, IL, USA, September 2015.
- J. Shi, J. Luo, F. Dong, and J. Zhang, “A budget and deadline aware scientific workflow resource provisioning and scheduling mechanism for cloud,” in Proceedings of the 2014 IEEE 18th International Conference on Computer Supported Cooperative Work in Design (CSCWD), pp. 672–677, IEEE, Hsinchu, Taiwan, January 2014.
- Y. Ge, Y. Zhang, Q. Qiu, and Y.-H. Lu, “A game theoretic resource allocation for overall energy minimization in mobile cloud computing system,” in Proceedings of the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design, pp. 279–284, ACM, Redondo Beach, CA, USA, August 2012.
- X. Wang, Y. Wang, and C. Yue, “An energy-aware bi-level optimization model for multi-job scheduling problems under cloud computing,” Soft Computing, vol. 20, no. 1, pp. 303–317, 2016.
- R. Yanggratoke, F. Wuhib, and R. Stadler, “Gossip-based resource allocation for green computing in large clouds,” in Proceedings of the 2011 7th International Conference on Network and Service Management, pp. 1–9, IEEE, Paris, France, October 2011.
- A. Paya and D. C. Marinescu, “Energy-aware load balancing and application scaling for the cloud ecosystem,” IEEE Transactions on Cloud Computing, vol. 5, no. 1, pp. 15–27, 2015.
- V. D. Justafort, R. Beaubrun, and S. Pierre, “A hybrid approach for optimizing carbon footprint in intercloud environment,” IEEE Transactions on Services Computing, vol. 12, no. 2, pp. 186–198, 2016.
- R. Panwar and B. Mallick, “Load balancing in cloud computing using dynamic load management algorithm,” in Proceedings of the 2015 International Conference on Green Computing and Internet of Things (ICGCIoT), pp. 773–778, IEEE, Noida, India, October 2015.
- C.-T. Yang, K.-C. Wang, H.-Y. Cheng, C.-T. Kuo, and W. C. C. Chu, “Green power management with dynamic resource allocation for cloud virtual machines,” in Proceedings of the 2011 IEEE International Conference on High Performance Computing and Communications, pp. 726–733, IEEE, Alberta, Canada, September 2011.
- C.-T. Yang, H.-Y. Cheng, and K.-L. Huang, “A dynamic resource allocation model for virtual machine management on cloud,” in Proceedings of the International Conference on Grid and Distributed Computing, pp. 581–590, Springer, Gangneug, Korea, December 2011.
- X. Xu, F. Shucun, C. Qing et al., “Dynamic resource allocation for load balancing in fog environment,” Wireless Communications & Mobile Computing, vol. 2018, no. 2, Article ID 6421607, 15 pages, 2018.
- J. Oueis, E. C. Strinati, and S. Barbarossa, “The fog balancing: load distribution for small cell cloud computing,” in Proceedings of the 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), pp. 1–6, IEEE, Glasgow, UK, May 2015.
- K. Wang, Y. Wang, Y. Sun, S. Guo, and J. Wu, “Green industrial internet of things architecture: an energy-efficient perspective,” IEEE Communications Magazine, vol. 54, no. 12, pp. 48–54, 2016.
Copyright © 2020 Zhiguo Qu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.