Abstract

On-demand computing capacity and efficient service delivery are the major benefits of cloud systems. When resource availability in a single data center is limited, additional resources are drawn from a group of cloud providers. The federation scheme dynamically increases resource availability in response to service requests. This dynamic increase in resource count leads to excessive energy consumption, higher cost, and carbon footprint emissions. Hence, reducing the number of active resources is a major requirement, since existing optimized cloud resource models maximize profit without considering the energy mix and CO2 emissions. This paper proposes a novel migration method to reduce carbon emissions and energy consumption. In the initial stage of the proposed work, data centers are categorized on the basis of MIPS and cost prior to job allocation, which offers scalable and efficient services and resources to the cloud user. Then, the job with the maximum size is allotted to a VM only if its capacity is less than the cumulative capacity of the data centers. A novel migration scheme based on overutilization and underutilization levels provides services to the user even if a particular VM fails. The proposed work efficiently maintains resource availability and maximizes the profit of the cloud providers associated with the federated cloud environment. A comparative analysis of the proposed algorithm with existing methods regarding response time, accuracy, profit, carbon emission, and energy consumption confirms its effectiveness in a federated cloud environment.

1. Introduction

Cloud computing is an architectural framework that helps to construct service-based applications and provides cost-effective outsourcing to dynamic service environments. Among the cloud service models, infrastructure as a service (IaaS) allows resources to be shared with customers in virtual machine (VM) form. The resource provisioning scheme greatly affects the profit of IaaS providers. Assuring the quality of service (QoS) constraints in the service level agreement (SLA) agreed upon with customers depends on effective resource management policies. Depending on the resource performance level, relaxation of QoS constraints is required to process a number of requests simultaneously. The trade-off between the QoS guarantee and the number of admitted requests requires a dynamic increase of available resources. This increase requires coordination between providers, which is referred to as cloud federation. When on-demand requests arrive and no idle resources remain in a federated cloud environment, the provider must decide between increasing the spot price and terminating spot VMs. Research studies address several policies to govern this decision-making process.

The extension of cloud computing introduces the cloud data center as a new research area concerning minimum energy consumption and revenue improvement. Rising service demand increases the complexity, size, and energy consumption of data center management. The evolution of virtualization in research studies reduces energy consumption effectively. Social and political pressure forces the analysis of CO2 emissions to protect the environment from the consequences of excessive energy consumption. Processor time and energy consumption are interrelated, and this relationship plays a major role in energy consumption measurement. Research studies introduce energy-aware approaches such as power-efficient server prediction, optimal workload placement, and application scheduling. During the implementation of energy-aware models, the estimation of energy consumption is based on the assumption that power consumption is proportional to processor time. However, the energy-aware VM model shows that processor time alone is not a sufficient criterion to estimate power accurately.

VM management with established rules keeps the energy and carbon footprint as low as possible. Based on the energy objectives, an ad hoc VM placement algorithm includes the data center and workload properties to reduce the consumption level. However, the workload characteristics and hardware refresh cycles make each data center unique, which is not appropriate for an ad hoc VM placement algorithm, since its ad hoc nature prevents upgrading according to the data center properties. Alternatively, a VM allocation algorithm is flexible enough to address these complexities and dynamic changes. Based on specific hardware and SLAs, the energy-aware algorithm provides additional energy savings through a fine-tuning process. The requirements for implementing the fine-tuning process are as follows: (i) the VM model is extensible with new SLA declarations for new services, and (ii) it is adaptable to the data center. The utilization of constraint programming (CP) and related algorithms addresses the user requirements, namely, performance and fault tolerance. However, the lack of energy models counteracts the explicit energy-related concerns that play a major role in data center upgrades and capacity improvement.

The adaptability of a host in a data center to another host depends on resource profile creation based on load management schemes. Each load management scheme should account for heterogeneity, hardware diversity, fluctuations in load patterns, and energy consumption. The centralized approaches proposed in traditional research studies are not scalable enough, since they require monitoring of multiple distributed hosts, which is complex when the data center is under stress. Increasing the resource utilization level efficiently reduces the cloud operational cost. The lack of proper management of virtualization within cloud data centers degrades cloud operating performance. The rise of the VM migration concept supports effective resource management and eliminates the need for human supervision. The major categories of migration are live and nonlive. Unlike nonlive migration, live migration enables dynamic load management on data centers and allows moving a VM from one host to another without suspending application services. The energy consumption of VMs in the idle state and data degradation are the major issues in a federated cloud environment. Hence, the research work proposed in this paper reduces energy consumption and improves profit with a novel VM migration model. The contributions are as follows:
(i) The grouping of data centers based on their MIPS and cost prior to job allocation supports scalable service delivery to cloud users
(ii) The migration based on the VM status (overutilized/underutilized) proposed in this work is responsible for service provision during failure conditions
(iii) The novel migration and federated cloud environment creation in this paper maintain resource availability to increase profit

This paper is organized as follows. Section 2 describes the related works on energy-efficient scheduling and migration techniques for the federated cloud environment. Section 3 discusses the proposed workload-based VM placement/migration and priority-based job allocation algorithm. Section 4 presents the performance analysis of the proposed algorithm over the existing migration techniques. Finally, Section 5 presents the conclusion.

2. Related Works

Efficient service delivery using resource provisioning under on-demand conditions is governed by the paradigm called cloud computing. Coordination between service providers increases resource availability dynamically and plays a major role in the construction of a federated cloud environment. Toosi et al. [1] illustrated policies to increase resource utilization and profit. The providers participating in the federated cloud environment had diverse preferences that made uniform decision policies unsuitable. Large data transfers in the federated cloud environment consumed more time. Celesti et al. [2] analyzed the potential of cloud federation architectures and proposed a more efficient cloud service provisioning strategy based on a web TV test case. The challenges observed from satellite applications are the consideration of application deployment and migrating services for spanning clouds. Paraiso et al. [3] presented a federated multicloud environment that addressed three foundations: an open service model, a configurable architecture, and infrastructure services. Cloud federations depend on the aggregation of IaaS providers with their own capabilities. Kertesz et al. [4] introduced a cloud management solution based on the utilization level of cloud brokers. They utilized an integrated monitoring approach that enabled enhanced provider selection and cloud service execution. Federation also provides elastic on-demand computing capacity to support service delivery. Petri et al. [5] evaluated federation establishment and determined the impact of policies on the system status by using a CometCloud-based implementation supported by special gateways. The large-scale deployment of data centers to deliver various services to users consumed considerable energy and power. Hence, development techniques were required to construct eco-friendly cloud computing.

Wadhwa and Verma [6] proposed a new technique based on the carbon dioxide (CO2) emission rate to reduce energy consumption in data centers. The distributed data centers used in this technique have different energy sources and carbon footprint rates that affected VM placement. Wadhwa and Verma [7] proposed the carbon-efficient VM placement and migration (CEPM) algorithm for the optimization of the VM placement problem; the current utilization level of servers was considered to improve efficiency. Metrics such as the efficiency and scalability of the data center and the performance of hosted applications depend on resource allocation. Ferdaus et al. [8] presented the necessary background and characteristics of various components for VM placement and migration techniques. VM management by rule-based approaches was responsible for minimizing the energy consumed by each data center, but such approaches could not deliver additional energy savings. Dupont et al. [9] proposed the energy-aware VM placement algorithm called Plug4Green, which computes a suitable location for each VM and the state of the servers under large sets of constraints. Strictly ensuring the QoS constraints caused maximum energy consumption; hence, a suitable technique was required to balance minimum energy consumption with an accurate assurance of QoS constraints. Horri and Dastghaibyfard [10] proposed a novel study of QoS-aware VM consolidation to provide the necessary trade-off between QoS constraints and energy consumption. They adopted a new method based on historical CPU and VM memory data. The increasing scalability of cloud data centers maximized energy consumption, and a suitable resource allocation strategy was required to reduce it.

Traditional data center models did not include energy consumption as a key parameter in their configuration. Dupont et al. [11] presented an energy-aware framework for VM reallocation. They employed constraint programming (CP) and entropy-based approaches to achieve model flexibility. The evolution of high-throughput computing resource consolidation models effectively reduced energy consumption but suffered from idle resource problems. Ding [12] discussed the particle swarm optimized tabu (PSO-T) search to improve the resource utilization level in order to reduce energy consumption. The PSO-T algorithm turns off sparse servers to reduce power and time consumption. The sustainability of the cloud model was a major concern for improving energy efficiency and reducing CO2 emissions. Wajid et al. [13] considered the usefulness of sustainable cloud models to enable the conception and development of new techniques. The optimization of assets associated with sustainable cloud models was an important requirement to improve energy efficiency. Volk et al. [14] discussed the energy-efficient approach of the Eco2Clouds project under minimum energy consumption and CO2 emissions. The cloud environment monitoring was based on gathering energy consumption data under workload variations. Based on processor time and VM instances, the cost of virtualization varied, since the cooling and energy costs for data centers exceeded the purchasing cost. Kim et al. [15] measured the energy consumption level based on processing events. They utilized a scheduling algorithm to provide the necessary resources on an energy budget basis. Improving the cloud service utilization level required a resource optimization approach in addition to scheduling.

Simultaneous multiple cloud service requests were handled by parallel processing, which required suitable resource allocation and task scheduling. Li et al. [16] proposed resource allocation algorithms with preemptable tasks that adjusted the allotted resources dynamically. Periodic updates regarding task execution were required for the dynamic operation, and the makespan and energy consumption were higher in preemptive models. Nesmachnow et al. [17] introduced a meta-broker (level I) and local providers (level II) to schedule all the received tasks. They investigated the energy consumption and capacity of resources in multicore processing systems. Advertising an illusion of unlimited resources to customers required high quality and reliability levels, which leads to excessive energy consumption. Benyi et al. [18] proposed a pliant system-based VM scheduling approach to reduce energy consumption. They designed a CloudSim-based simulation environment to evaluate the performance of the pliant system. The nonadaptability of scheduling algorithms to uncertain and dynamic conditions made job scheduling a problematic task. Miranda et al. [19] presented the scheduling problems and reviewed the scheduling algorithms that provide solutions under uncertainty. VM migration techniques move a VM from one physical machine to another in order to reduce time consumption and service degradation. Soni and Kalra [20] discussed various migration techniques (offline and live migration) to reduce the overall time consumption. They also presented various live migration scenarios for better performance with minimum bandwidth. Distinct participants with their own objectives required a multiagent system with negotiation capabilities.

Leite et al. [21] proposed a server consolidation approach called the federated application provisioning (FAP) strategy to manage the power consumption level in virtualized federated cloud environments. The major objective of the consolidation approach was to provide a better trade-off between QoS constraint satisfaction and minimum energy consumption than the trivial approach. The increase in processors raised the chance of failure occurrence. An application running in a cloud environment was represented by a workflow, but the inclusion of link failures and service provision failures violated the robustness of the computing environment. Singh and Kinger [22] discussed failure removal with a fault tolerance mechanism (FTM). Failure detection was considered an important stage in the virtualization process, and it required an effective control scheme with parametric guidance. Balancing and consolidating workloads with energy minimization required problem-solving techniques. Garcia and Nafarrate [23] proposed a novel load-balancing heuristic algorithm to migrate VMs from overutilized hosts to underutilized hosts. Increasing the cloud resource utilization level reduced the operational cost effectively. Liaqat et al. [24] surveyed migration techniques and policies to identify future challenges in the VM domain. They proposed a queue-based migration model for memory page migration. The incorporation of virtualization and consolidation approaches did not provide the necessary balance between minimum energy consumption and better execution performance. Kaur and Chana [25] proposed the green cloud scheduling model (GCSM), which exploited the heterogeneity of tasks and resources by using a scheduler unit. The GCSM facilitated energy efficiency and prevented degradation from the provider perspective; from the client perspective, it achieved the execution of tasks within the time limit.
Efficient resource utilization was a challenging issue in improving task execution performance. Rathor et al. [26] provided best fit and worst fit techniques to improve the resource utilization level, but at a high cost. Kruekaew and Kimpan [27] discussed an artificial bee colony algorithm with three scheduling scenarios, first come first serve (FCFS), shortest job first (SJF), and longest job first (LJF), to improve resource management performance. The migration and placement techniques in traditional approaches did not consider the utilization scenarios (overutilization/underutilization) and the job priority levels, which leads to excessive energy consumption and less profit. Recent advances in cloud data centers and their energy-efficient methods are discussed in [28–31].

3. Preserving Resource Handiness and Exigency-Based Migration (PRH-EM)

This section discusses the implementation of the novel techniques to maintain resource availability in order to deliver services to cloud users efficiently. Figure 1 shows the flow of the proposed preserving resource handiness and exigency-based migration (PRH-EM) algorithm for profit maximization with less energy consumption.

The PRH-EM algorithm contains successive processes: federated environment creation, grade-based VM placement, job allocation, and exigency-based migration. Initially, the data centers are federated into four groups based on MIPS and cost. Then, the workload on each federated data center (FDC) and the capacity of each VM are measured. The capacity of the VM is compared with the FDC workload, and the VM is placed into the FDC if its capacity is less than the FDC workload. Then, the jobs are allotted to the respective VMs in the updated host/VM list within the same or a different FDC. When jobs are allotted to VMs in the next FDC, a threshold value indicating the utilization level is computed. Based on this threshold, the overutilized and underutilized VMs are separated. Finally, the jobs are migrated from one VM to another according to their capacity.

3.1. Federation

Each service provider is assumed to be autonomous with its own customers. During demand conditions, the federation mechanism helps the providers handle overloads. The federation model contains the cloud exchange service at its center. The providers send queries to the exchange service to identify the available resources. The cloud exchange service generates a list of providers with their MIPS and cost values. Requests are then redirected to suitable providers identified from the MIPS and price list.

The component responsible for decision-making regarding the allocation of additional resources is called the cloud coordinator. The simultaneous measurement of MIPS and cost for each data center available in the cloud environment is responsible for federated environment creation. For each data center, the proposed algorithm computes the cost of executing one million instructions (MI). The overall cost of the MIPS for each data center is then measured. The computed cost and MIPS are each separated into three limits: low, medium, and high. Depending on the MIPS and cost, the federation comprises four types as follows:
(i) Type I: more than high MIPS and less than low cost
(ii) Type II: more than high MIPS and less than medium cost
(iii) Type III: more than medium MIPS and less than medium cost
(iv) Type IV: more than low MIPS and less than medium cost

Algorithm 1 for the federation scheme is listed as follows:

(1)Collect the data centers (DC)
(2)Compute the MIPS of each data center and the cost per one MI
(3)Compute the cost of the MIPS of each data center
(4)Split the MIPS and cost into three limits:
   MIPS: {low, medium, high}
   Cost: {low, medium, high}
(5)if (MIPS > high MIPS) and (cost < low cost) then
(6)  Assign the data center to Type I
(7)else if (MIPS > high MIPS) and (cost < medium cost) then
(8)  Assign the data center to Type II
(9)else if (MIPS > medium MIPS) and (cost < medium cost) then
(10)  Assign the data center to Type III
(11)else
(12)  Assign the data center to Type IV

The federated data centers are assumed to have enough resources to handle the various jobs. However, changes in resource availability and the virtualization process make the job assignment uncertain. The proposed algorithm addresses this issue to maintain resource availability and deliver services during failure conditions.
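The grouping rule of Algorithm 1 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `DataCenter` fields and the threshold values passed to `classify` are assumed for the example.

```python
# Sketch of the Algorithm 1 federation scheme; the DataCenter fields and
# threshold values are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    mips: float          # processing capacity in MIPS
    cost_per_mi: float   # cost per million instructions

def classify(dc, mips_limits, cost_limits):
    """Assign a data center to federation Type I-IV from its MIPS and cost.

    mips_limits and cost_limits are (low, medium, high) tuples."""
    m_low, m_med, m_high = mips_limits
    c_low, c_med, c_high = cost_limits
    if dc.mips > m_high and dc.cost_per_mi < c_low:
        return "Type I"
    if dc.mips > m_high and dc.cost_per_mi < c_med:
        return "Type II"
    if dc.mips > m_med and dc.cost_per_mi < c_med:
        return "Type III"
    return "Type IV"  # more than low MIPS and less than medium cost

dcs = [DataCenter("DC1", 9000, 0.1), DataCenter("DC2", 9000, 0.4),
       DataCenter("DC3", 6000, 0.4), DataCenter("DC4", 3000, 0.4)]
groups = {dc.name: classify(dc, (2000, 5000, 8000), (0.2, 0.5, 0.8)) for dc in dcs}
print(groups)  # DC1 -> Type I, DC2 -> Type II, DC3 -> Type III, DC4 -> Type IV
```

Note that the four conditions are checked from the cheapest, fastest type downward, so each data center lands in the most favorable type it qualifies for.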

3.2. Grade-Based VM Placement

The VM placement in federated data centers is the second stage of the proposed PRH-EM algorithm. After all the data centers are federated by geographic location, the carbon footprint for each data center is computed as follows:

CF_cloud = r_carbon × PUE × E_IT (1)

where CF_cloud represents the carbon footprint of the cloud, r_carbon is the carbon footprint rate of the data center's energy source, E_IT is the energy consumed by the IT devices, and PUE describes the power usage effectiveness, which is the ratio of the overall power consumption of the data centers (P_DC) to the power consumed by the IT devices (P_IT) within the holding time, that is, PUE = P_DC/P_IT. The arrival of a new request initiates its allocation to a host depending on the carbon level. The estimated carbon level is considered as the workload for the federated cloud data centers. Algorithm 2 proposed in this paper is referred to as grade-based VM placement since it follows the descending order of workload.

(1)Compute the workload (W_i) for all the FDCs (i = 1, 2, …, n)
(2)Load the VM_list and compute the carbon level using (1)
(3)Arrange the Host_list in descending order of capacity
(4)if (C_VM < W_FDC) then
(5)  Place the VM into the host
(6)  Set the VM status as placed
(7)  Compute the remaining capacity of the VM
(8)  Update the Host_list and VM_list
(9)else
(10)  Select the next data center (FDC_next)
(11)  Repeat from step 2
(12)Update the Host_list and VM_list
(13)Place all VMs to the hosts

The workload computation for each data center is the initial stage of the VM placement algorithm. Then, the carbon level is estimated for each machine in the VM list. Based on the workload and the carbon level estimation, the host list is created with the highest-capacity host in the first place and the lowest-capacity host in the last place. Then, the capacity of the VM is compared with the workload of the selected FDC to locate the VM on a specific host. If the capacity of the selected VM is less than the FDC workload, the corresponding VM is allotted to the selected host. Otherwise, the next FDC is selected and the procedure is repeated from the list formation. During the comparison, the host list and VM list are periodically updated to track the specific host for each VM. The grade-based VM placement proposed in this paper is responsible for effective service delivery under failure conditions. However, the execution of the job is also a major concern in an energy-efficient cloud environment.
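The core placement rule described above can be sketched as follows. This is a simplified standalone sketch: the FDC names, workload values, and VM capacities are assumed for illustration, and the carbon-level bookkeeping is omitted.

```python
# Minimal sketch of the grade-based placement idea: FDC workloads are sorted
# in descending order, and each VM is placed on the first FDC whose workload
# exceeds the VM's capacity; otherwise the next FDC is tried. The names and
# values below are illustrative assumptions.
def grade_based_placement(vm_capacities, fdc_workloads):
    """Return {vm_index: fdc_name} for VMs that fit; others stay unplaced."""
    ordered = sorted(fdc_workloads.items(), key=lambda kv: kv[1], reverse=True)
    placement = {}
    for i, c_vm in enumerate(vm_capacities):
        for fdc, workload in ordered:
            if c_vm < workload:       # place only if capacity < FDC workload
                placement[i] = fdc
                break                 # otherwise fall through to the next FDC
    return placement

vms = [120.0, 80.0, 300.0]
fdcs = {"FDC1": 100.0, "FDC2": 250.0, "FDC3": 60.0}
print(grade_based_placement(vms, fdcs))  # VM 2 fits nowhere and stays unplaced
```

In this sketch, VMs 0 and 1 both land on the highest-workload FDC, while VM 2 exceeds every FDC workload and would be handed to the next federation round.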

3.3. Job Allocation

The major assumption for an energy-efficient cloud model is that each job is allocated to one cloud only, without replication. If multiple jobs arise in a specific instant, high-priority jobs are executed first. Then, according to the capacity of the VMs, the other jobs are allotted to the other VMs. A particular data center includes the routers and switches responsible for transporting traffic between the servers and the outside world. The interconnection of processors is static in nature, and their utilization levels change periodically depending on the size of the job executed. Large I/O data transfers between them cause overload conditions. The brokers associated with the VMs monitor the capacity and job size to allocate each job to the VM with maximum capacity. During overload conditions (whether or not the VM capacity limit is reached), the brokers exchange their monitoring control with each other to provide constant resource availability.

Algorithm 3 for job allocation is described as follows:

(1)Collect the job list (Job_list)
(2)Compute the size of each job (S_j)
(3)Arrange the jobs in descending order based on size
(4)Compute the priority levels based on the job sizes
(5)Assign the priority (P_j) to each job
(6)Compute the VM capacity (C_VM)
(7)Load the high-priority job to the VM with maximum capacity
(8)for each job in Job_list
(9)  Collect the VMs from the VM_list
(10)  if (C_VM > S_j) then
(11)    Allocate the job to the VM
(12)    Compute the remaining capacity of the VM
(13)    Update the Host_list and VM_list
(14)  else
(15)    Allocate the job to the nonallotted list

Initially, the jobs to be executed are collected and their sizes are estimated. Then, the jobs are arranged in descending order, with the maximum size first and the minimum size last. Next, a priority level is assigned to each job based on its size. The capacity of the VMs is estimated in parallel with the priority assignment. Then, the job size is compared with the VM capacity in the initial round of execution. If the capacity of the VM is greater than the job size, the corresponding job is allotted to that VM from the VM list. The remaining capacity is then computed, and the remaining jobs are placed on the nonallotted list for further processing. Algorithm 4 for the job allocation of the nonallotted list is described as follows:

(1)Collect the nonallotted jobs
(2)Identify the broker (B_j) corresponding to the job
(3)Identify the host from the Host_list associated with the broker in the same FDC
(4)if (a VM with sufficient capacity is available) then
(5)  Allot the job to the VM monitored by B_j
(6)else
(7)  Move the job to the next FDC (FDC_next)
(8)Update the Host_list and VM_list
(9)end if
(10)Execute the job

The jobs on the nonallotted list are collected, and the corresponding broker and host are identified. Then, the job is switched to the next broker within the same FDC if any VM with the necessary capacity is available, continuing until the last job on the list. If any job remains on the list because the capacity limit has been reached, the next FDC is selected for job allocation. Finally, the host list and VM list are updated after allocation. During demand conditions (when the capacity limit is reached) in job execution, the jobs executing on a VM are migrated from one VM to another in order to deliver services to the user effectively.
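The size-descending priority rule of Algorithm 3 can be sketched as follows. The job sizes, VM names, and capacities are illustrative assumptions; the broker and FDC handoff of Algorithm 4 is abstracted into the returned nonallotted list.

```python
# Sketch of size-based priority allocation: jobs are considered largest-first,
# and each job goes to the VM with the most remaining capacity if it fits;
# jobs that fit nowhere go to the nonallotted list for a later broker/FDC.
# All names and numbers below are illustrative assumptions.
def allocate_jobs(job_sizes, vm_capacities):
    """Return (allocation {job: vm}, nonallotted [job]); capacities shrink as jobs land."""
    caps = dict(vm_capacities)
    allocation, nonallotted = {}, []
    for job, size in sorted(job_sizes.items(), key=lambda kv: kv[1], reverse=True):
        vm = max(caps, key=caps.get)      # VM with maximum remaining capacity
        if caps[vm] > size:               # allocate only if C_VM > S_j
            allocation[job] = vm
            caps[vm] -= size              # update the remaining capacity
        else:
            nonallotted.append(job)       # handled later in the same/next FDC
    return allocation, nonallotted

alloc, rest = allocate_jobs({"J1": 50, "J2": 90, "J3": 200}, {"VM1": 100, "VM2": 120})
print(alloc, rest)  # J3 exceeds every capacity and lands on the nonallotted list
```

Processing the largest job first means the biggest jobs claim the roomiest VMs before smaller jobs fragment the remaining capacity.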

3.4. Exigency-Based Migration

Traditional VM migration schemes violated resource availability. Alternatively, the exigency-based migration algorithm proposed in this paper estimates the utilization level of each VM by its job. Based on the utilization level, two conditions arise: overutilization and underutilization. Migrating jobs from VMs in underutilized status converts the idle VMs into sleep mode, which reduces unnecessary energy consumption. Similarly, jobs in overutilized status are migrated to the maximum-capacity VM, which provides an immediate response and reduces the time delay. Algorithms 5 and 6 for the overutilized and underutilized conditions are described as follows:

Algorithm 5 (overutilized condition):
(1)Load the VM_list
(2)Compute the threshold values (T_max, T_min) for each VM
(3)if (U_VM > T_max) then
(4)  Assign the VM as overutilized
(5)  List the remaining VMs with sufficient capacity (VM_cap)
(6)  if (VM_cap is available in the same FDC) then
(7)    Migrate the overutilized job to the VM_cap
(8)  else
(9)    Migrate the job to the next type
(10)Compute the remaining capacity of the VM
(11)Update the Host_list and VM_list

Algorithm 6 (underutilized condition):
(1)Identify the underutilized VMs (U_VM < T_min)
(2)List the remaining VMs with sufficient capacity in the FDC (VM_cap)
(3)if ((VM_cap is available) && (VM not in underutilized list)) then
(4)  Migrate the underutilized job to the VM_cap
(5)else
(6)  Migrate the job to the next type
(7)Set the VM free and switch it to sleep mode
(8)Update the Host_list and VM_list

The VMs allotted for the execution of jobs are extracted from the VM list. The threshold values for the categorization of the VMs are computed to identify whether each VM is overutilized or underutilized. If the utilization of a VM exceeds the maximum threshold value, the corresponding VM is placed on the overutilized list. The job executed by that VM is then migrated to a remaining VM in the specific FDC. Alternatively, if the utilization of the selected VM is less than the minimum threshold value, it is regarded as underutilized. The jobs executed by the underutilized VM are migrated to the next VM with sufficient capacity. Migration based on the utilization level maintains resource availability and reduces energy consumption effectively. VM failure detection and service failover are the two main principles in failure handling. In general, failure detection is the process of identifying that something has gone wrong and informing the appropriate administrator so that it can be addressed. System monitoring and failure detection are two different things: failure detection checks are frequent and quick (for example, every 5 seconds) and are typically more constrained in what they check as a result.

Failure tests in API Connect simply examine the availability of the web server and database. In contrast, monitoring checks are performed less frequently and are more likely to examine factors such as CPU, RAM, and disk space utilization. The outcomes of these checks can then be tracked for historical trend analysis, for example, to find memory leaks.
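The threshold-based classification at the core of Algorithms 5 and 6 can be sketched as follows. The threshold values and utilization figures are illustrative assumptions; the subsequent migration and sleep-mode steps are not shown.

```python
# Sketch of the exigency classification: a VM whose utilization exceeds T_max
# is overutilized and its job should move to a VM with spare capacity; a VM
# below T_min is underutilized, so its job is consolidated elsewhere and the
# VM can be put to sleep. The threshold values are illustrative assumptions.
T_MAX, T_MIN = 0.8, 0.2

def classify_vms(utilization):
    """Split VMs into (overutilized, underutilized, normal) by threshold."""
    over = [vm for vm, u in utilization.items() if u > T_MAX]
    under = [vm for vm, u in utilization.items() if u < T_MIN]
    normal = [vm for vm in utilization if vm not in over and vm not in under]
    return over, under, normal

over, under, normal = classify_vms({"VM1": 0.95, "VM2": 0.5, "VM3": 0.1})
print(over, under, normal)  # ['VM1'] ['VM3'] ['VM2']
```

Only the two extreme lists trigger migration; VMs in the normal band keep their jobs, which avoids needless migration traffic.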

4. Performance Analysis

This section discusses the performance of the proposed preserving resource handiness and exigency-based migration (PRH-EM) algorithm regarding accuracy and response time variations for various numbers of jobs and VMs. The comparative analysis of the proposed PRH-EM with the best/worst fit, greedy [26], and ABC approaches [27] confirms its effectiveness in an energy-efficient cloud environment. Besides, the profit variations for the nonfederated totally in-house (NFTI), federation-aware outsourcing-oriented (FAOO), and federation-aware profit-oriented (FAPO) [1] strategies and the proposed PRH-EM are also presented.

4.1. Response Time

The overall time from the arrival of a new request to the response from the particular machine is the response time. The comparison between the proposed PRH-EM and the existing best/worst fit and greedy [26] approaches regarding the response time shows that the hybrid processes of the proposed approach, namely, VM placement, scheduling, and migration, reduce the response time efficiently.

Figure 2 shows the response time variations for various numbers of VMs. Increasing the number of VMs reduces the response time effectively in the traditional approaches. However, grade-based placement and exigency-based migration provide a constant resource availability level that reduces the response time further compared to the others. The comparison shows that the PRH-EM offers 28% and 50% reductions in response time compared to the greedy approaches.

4.2. Accuracy

The measure of how well the data centers are utilized over the overall time limit refers to the accuracy of the proposed system. The accuracy of the FDCs in the proposed PRH-EM, validated for various numbers of VMs, shows better performance compared to traditional approaches. Table 1 shows the accuracy variations for each FDC over various VM counts.

For the minimum number of VMs (25), FDC 4 shows better performance than the other FDCs. Similarly, FDC 3 provides better performance for higher numbers of VMs. Although the increase in VMs reduces the accuracy of the FDCs, it remains within a considerable level, and constant resource availability and efficient delivery to the user during failure conditions are achieved by PRH-EM.

4.3. Makespan

The time consumed for the number of tasks executed within an FDC is represented by the makespan. The increase in tasks requires more VMs, which consumes more time for service delivery. However, the migration based on exigency conditions reduces the makespan effectively.

Figures 3 and 4 show the makespan performance for various numbers of jobs and VMs, respectively. The increase in job size and VM count increases the makespan value, but the provision of migration and grade-based VM placement reduces the makespan effectively. The comparative analysis shows that the PRH-EM scheme reduces the makespan by 40% and 8.3% for the minimum and maximum numbers of jobs, respectively, compared to the ABC-SJF approach. Similarly, the PRH-EM reduces the makespan by 11.76% and 25% compared to the ABC-LJF approach.

4.4. Energy Consumption

The energy model in this paper is based on the assumption that processor utilization and energy consumption are directly proportional to each other. The utilization level for a particular resource is expressed as

U = Σ_{j=1}^{N} U_j (2)

where N is the number of VMs and U_j is the utilization level of a particular VM_j.

The energy consumption of a particular VM depends on the utilization level, the power consumption (P_max) at the peak overload (100% utilization), and the power consumption (P_min) in the active mode (1% utilization), and it is expressed as

E_VM = (P_max − P_min) × U + P_min (3)

The energy variations with the CPU utilization level for the proposed PRH-EM and the existing MD_MMT [28] show the effective reduction of energy consumption in PRH-EM.

Figure 5 shows the comparative analysis of PRH-EM with MD_MMT regarding energy consumption. The proposed PRH-EM reduces the energy consumed by the group of data centers by 15.79% at the minimum (20%) utilization level.
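The linear utilization-to-power relationship underlying this energy model can be sketched as follows. The P_max and P_min wattages are illustrative assumptions, not figures from the paper.

```python
# Linear power model in the style of the utilization and energy relations
# above; the P_max / P_min figures (watts) are illustrative assumptions.
def total_utilization(vm_utils):
    """U = sum of the utilization levels U_j of the N VMs."""
    return sum(vm_utils)

def vm_power(u, p_max=250.0, p_min=170.0):
    """P(U) = (P_max - P_min) * U + P_min, for utilization U in [0, 1]."""
    return (p_max - p_min) * u + p_min

print(vm_power(1.0))  # peak load: 250.0 W
print(vm_power(0.0))  # near-idle floor: 170.0 W
```

The model makes the motivation for consolidation visible: even a near-idle VM draws P_min, so migrating its job away and sleeping the host saves that floor power.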

4.5. Carbon Emission

The carbon emission level depends on the PUE level and the number of VMs used according to equation (1). The carbon utilization level is analyzed for the proposed PRH-EM and the existing carbon-efficient placement and migration (CEPM) and round-robin (RR) migration algorithms [6]. Among the existing methods, CEPM offers a lower carbon emission level than RR.

Figure 6 shows the comparative analysis of the carbon emission level variations for each VM request. The increase in the number of requests increases the carbon footprint level linearly. The comparative analysis shows that PRH-EM reduces the emission level by 20% for the maximum number of VM requests (200) compared to CEPM.
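The PUE-based carbon estimate used throughout can be illustrated with a small calculation. The carbon rate r_carbon (kg CO2 per kWh) and the power and energy figures below are assumed values for the example, not measurements from the paper.

```python
# Illustrative PUE-based carbon-level calculation; r_carbon and all power and
# energy figures are assumed values, not taken from the paper.
def pue(p_total_kw, p_it_kw):
    """Power usage effectiveness: total facility power over IT power."""
    return p_total_kw / p_it_kw

def carbon_footprint(r_carbon, p_total_kw, p_it_kw, e_it_kwh):
    """Carbon level estimate: r_carbon * PUE * E_IT (kg CO2)."""
    return r_carbon * pue(p_total_kw, p_it_kw) * e_it_kwh

cf = carbon_footprint(r_carbon=0.5, p_total_kw=150.0, p_it_kw=100.0, e_it_kwh=1000.0)
print(cf)  # 0.5 * 1.5 * 1000 = 750.0 kg CO2
```

A PUE closer to 1.0 (less facility overhead per watt of IT load) lowers the estimate proportionally, which is why placement favors data centers with better PUE and cleaner energy sources.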

4.6. Profit Analysis

Profit is the deviation between the revenue achieved and the cost required to achieve that revenue over the same time. The mathematical formulation of profit is expressed as

Profit = Revenue − Cost (4)

The experimental analysis of the profit variation for the trivial and FAPO strategies [1] shows that the FAPO model provides more profit.

Figure 7 shows the comparative analysis of the profit variation for the existing FAPO and proposed PRH-EM models with different percentages of spot request counts. The comparative analysis shows that PRH-EM increases the profit by 0.9% compared to FAPO for the minimum percentage of spot requests, owing to the exigency-based migration and grade-based VM placement.

5. Conclusion

This paper discussed the various issues in scheduling/migration schemes for maintaining resource availability in a federated cloud environment. A novel VM migration algorithm is proposed to provide the trade-off between the reduction in energy consumption and profit maximization. Initially, the available data centers are categorized based on their MIPS and cost values. This categorization prior to job allocation has a great impact on maintaining resource availability. The comparison between the cumulative workload capacity of the data centers and the individual VM capacity offers immediate migration during demand conditions, which prevents resource loss. The overutilization- and underutilization-based migration reduced the number of active resources, which directly reduced energy consumption and carbon emissions. The proposed work maintained constant resource availability and service delivery to users during VM failure conditions, which increased the profit level. The comparative analysis of the proposed algorithm with existing methods assured the effectiveness of the proposed work in a federated cloud environment.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.