Abstract

Application tasks with heterogeneous resource requirements increase both the energy cost and the revenue of a cloud service provider (CSP) that provisions resources on demand. Enhancing CSP profit while containing energy cost is therefore a challenging task. Most existing approaches consider the task deadline violation rate rather than the performance cost and server size ratio during profit estimation, which impacts CSP revenue and causes high service cost. To address this issue, we develop two algorithms for profit maximization and adequate service reliability. First, a belief propagation-influenced cost-aware asset scheduling approach is derived based on the data analytic weight measurement (DAWM) model for effective performance and server size optimization. Second, the multiobjective heuristic user service demand (MHUSD) approach is formulated based on the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena for adequate service reliability. The DAWM model classifies prominent servers to preserve the server resource usage and cost during an effective resource slicing process by considering each machine execution factor (remaining energy, energy and service cost, workload execution rate, service deadline violation rate, cloud server configuration (CSC), service requirement rate, and service level agreement violation (SLAV) penalty rate). The MHUSD algorithm measures the user demand service rate and cost based on the USD and CSP profit estimation models by considering service demand weight, tenant cost, and energy cost. The simulation results show that the proposed system accomplishes an average revenue gain of 35%, cost reduction of 51%, and profit gain of 39% over the state-of-the-art approaches.

1. Introduction

Nowadays, cloud computing has become a backbone for government, enterprise, and education sectors because it provides continuous resource (memory, CPU, and bandwidth) allocation services that ensure application service reliability. The cloud service supplier shares resources among end-users based on the value of a cost function (CF) to meet the demanded system performance. Many service suppliers estimate the server cost based on the bandwidth usage rate (BUR) and the energy usage rate (EUR). As per the Gartner report, the cloud service provider (CSP) market is expected to grow to approximately 331.2 billion dollars by 2022 [1]. The global cloud report [2] projects a 623.3-billion-dollar market for data computation in 2023. Statistical analysis states that cloud computing has a notable impact on the Internet of Things (IoT), blockchain, and soft computing measurement systems with artificial intelligence models. Tasks are divided into subtasks with relative attribute definitions through DAG theory. The DAG approach shows a prominent impact when dealing with complex workflow applications such as systematic mathematical applications [3–5]. Data analytic languages and platforms such as Hive and Pig [6–8] handle MapReduce model queries. Thus, the importance of DAG theory has changed tremendously over the past decade, since it influences the service execution time and resource usage. This problem is formulated as NP-hard [9], and many heuristic approaches have resolved it through resource usage consolidation [10–12].

Each machine enables a list of resource attributes (e.g., CPU, RAM size, and hard disk space) provided by the CSP. In our solution, the cloud resource cost is optimized by estimating user service demands (such as CPU, IOPS, memory, and storage). For instance, an online incremental learning method has been designed in [13–15] to estimate the service completion time based on heuristic algorithms that allocate arriving service requests to the correct VM. However, these approaches have not considered server size and machine resource usage rates, which causes performance delays. Therefore, our approach considers the CSC size, effective resource management of machines, and resource autoscaling methods, which are not present in state-of-the-art approaches. Several examinations have been carried out to design effective resource allocation methods that reduce allocation cost while satisfying service request requirements. Most current studies [16] have not considered pricing models and data analysis models; some on-demand pricing models are considered but with an inadequate measurement index. Several recent studies [17] recognize the importance of both on-demand data analytical models and reserved pricing models to minimize resource allocation costs. In contrast, our solution assesses the server resource capacity rate, profit, and cost based on the data analysis model. The user service demand measurement algorithm is essential for profit maximization by autoscaling the resource allocation certainty.

The aim of our research is to design a novel profit optimization model for CSPs to enhance revenue maximization (RM) while maintaining a reliable quality of service (QoS). The profit optimization model must account for the active server count, cost, and speed to meet end-user satisfaction, which influences service continuity. Without a precise profit optimization model, the profit, service quality, and revenue generation factors are all affected. CSP revenue maximization has therefore become a billion-dollar question in the competitive service computing market because of application tasks with heterogeneous resource requirements.

To address the listed issues, we develop two algorithms for profit maximization and adequate service reliability. First, a belief propagation-influenced cost-aware asset scheduling approach is derived based on the data analytic weight measurement (DAWM) model for effective performance and server size optimization. Second, the multiobjective heuristic user service demand (MHUSD) approach is formulated based on the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena for adequate service reliability. The DAWM model classifies prominent servers to preserve the server resource usage and cost during an effective resource slicing process by considering each machine execution factor (remaining energy, energy and service cost, workload execution rate, service deadline violation rate, cloud server configuration (CSC), service requirement rate, and service level agreement violation (SLAV) penalty rate). The MHUSD algorithm measures the user demand service rate and cost based on the USD and CSP profit estimation models by considering service demand weight, service tenant cost, and machine energy cost.

1.1. Key Contributions

The trade-off between cost optimization and revenue maximization models is extensively examined in Section 2. Our manuscript's key contributions are summarized as follows:
(1) We develop a data analytic weight measurement (DAWM) approach to optimize the service quality and price of the CSP during an effective resource slicing process by considering each machine's cost, revenue, and profit.
(2) We develop a multiobjective heuristic user service demand (MHUSD) algorithm based on the CSP profit estimation model and the user service demand (USD) model to measure the user demand service rate and cost by considering service demand weight, service tenant cost, and machine energy cost. The MHUSD algorithm also considers the maximum bearable wait time of the end-user to maximize CSP revenue and optimize operational energy cost.
(3) Simulation results confirm the advantages of the proposed approaches in terms of the revenue enhancement rate and the CSP's profit attributes. The impacts of the key mathematical factors are analyzed theoretically and practically.

The rest of the manuscript is organized as follows. Section 2 briefly explains the research gaps and problem statements of extant approaches. Section 3 describes the proposed system and its mathematical models with the algorithms in detail. Section 4 evaluates the investigation outcomes, and Section 5 concludes the manuscript.

2. Related Work

This section examines related research work, which is classified into three categories: profit maximization, green data center, and graph theory-based task consolidation approaches.

2.1. Profit Maximization

Several profit maximization methods have been proposed for the sustainability of green computing. The current scenario and requirement analysis of revenue can be observed in Figure 1. In [18], a broker management system has been designed to maximize the VM cost and minimize the user cost. The authors formulate the multiserver configuration cost as a profit maximization problem, and a heuristic method has been designed to solve it. The delay-sensitive workload dimensionality has been examined based on a novel online heuristic approach to optimize the system's cost and profit [19]. Subsequently, the offline problem is formulated as NP-hard and has been resolved with a linear programming concept. In [20], a dynamic cost charging method has been designed to fix specific prices for servers as per the resource demand, and a pricing approach has been designed to regulate the prices dynamically as demand changes. In [21, 22], the service penalty is diminished and the profit is enhanced by a VM replacement approach formulated as a mixed-integer nonlinear program, which is NP-hard; subsequently, a novel heuristic method has been designed to optimize the penalties and profits.

CSP profit maximization approaches have been extensively examined in this literature survey. In [23], the authors designed a stochastic programming scheme for the subscription of computing resources to maximize service providers' profit under user request uncertainty. In [24], a profit control policy has been designed to assess machine computing capacity, which decides how to maximize the service provider profit. In [25–27], an SLA-based resource allocation problem has been formulated with a profit maximization objective considering three dimensions (processing, storage, and communication). In [28], a service request (SR) distribution approach is designed to enhance the profit with an adequate quality of service rate as per the service demand. In [29], the authors addressed the service provider revenue maximization problem by consolidating the service tenant cost and power consumption cost. A joint optimization scheduling model has been designed to manage delay-tolerant batch services based on pricing decisions to maximize service provider revenue [30]. In [31], the authors designed a model to maximize the service provider revenue based on the machine's tenant cost, resource demand size, and application workload. A suitable online algorithm has been designed for the geo-distributed cloud with an adaptive VM resource cost scheme to maximize the service provider revenue [32]. In [33], the relationship between load balance, revenue, and cost has been exploited to maximize the service provider revenue beyond state-of-the-art approaches. In [34, 35], a virtual resource rental strategy has been designed based on tenant cost, task urgency, and task uncertainty to enhance provider profit.

A hill-climbing algorithm has been designed to estimate customer service satisfaction by analyzing demand marks and profit fluctuations [36]. It assesses customer satisfaction from the economic growth ratio by leveraging the cloud server configuration (CSC), task arrival rate, and profit fluctuations. The CSC therefore directly impacts the cloud user service satisfaction rate, and inadequate customer satisfaction in turn has a direct impact on the service request arrival rate. However, the lack of an accurate decision-making and data analysis system affects the server's profit and performance cost. A profit estimation model has been designed by considering the CSC, service requirement rate, SLA, SLAV penalty rate, energy cost, tenant cost, and current CSP margin profit [37]. A server task execution speed-based power usage model is also designed to assess the CSP profit.

2.2. Green Data Centre

In [38], a mixed-integer linear program has been designed for resource allocation to optimize the data center cost and energy consumption. Green computing achieves proficient processing and resource usage by limiting energy consumption. An enhanced ant colony approach for optimal VM execution has been developed to reduce energy consumption and optimize the cost of the cloud environment [39–42]. The particle swarm optimization (PSO) approach resolves the task allocation problem by consolidating the data center count and task demand. In distributed computing, resources have to be scheduled effectively to achieve a high performance rate; accordingly, the multitarget PSO approach remains preferable for enhancing resource usage rates. This approach effectively increases resource usage and lessens energy and makespan. The outcomes delineated that the multiobjective particle swarm optimization (MOPSO) strategy performs better than the concerned existing models. A VM scheduling approach has been designed based on multidimensional resource constraints, for example link capacity, to diminish the number of active PMs and preserve energy utilization. A two-step heuristic approach resolves VM scheduling through migration and VM positioning models [43, 44]; the designed method reduced the execution time compared with extant systems in a simulation platform. Resource overloading is still an issue, and live migration does not preserve VM performance. In [45], an energy-aware resource allocation approach has been investigated to improve the energy productivity of a server farm without SLA negotiations. A resource scheduling strategy with a genetic method has been proposed to improve resource usage and save energy expense in distributed computing [46, 47]. It utilizes a migration approach dependent on three load degrees (CPU usage, network throughput, and disk I/O pace). The algorithm succeeds in improving resource usage, and the energy saved by run-time resource scheduling is high. An energy preservation system is classified by sorting the resources into four distinct categories (CPU, memory, storage, and networks). Additionally, the authors designed a unique resource scheduling system based on energy streamlining of cloud resources with an assessment technique [48]. The study [49] evaluates every machine's fitness value, which helps assess the machine rank based on the performance and resource usage rate. However, the machine rank evaluation process consumes more time, which influences the performance, and the task scheduling policy leads to high performance cost. The complexity rate is high over large-scale frameworks.

2.3. Graph Theory-Based Resource/Task Scheduling

A directed acyclic graph (DAG) has been used for task scheduling by considering PM capacity and task resource weight to formulate the problem [50]. Here, a matrix identifies the task execution time of all VMs under different instances. Traditional DAG-based models do not iteratively consider the entire cost during the data analysis measurement. To address all these issues, we design a data analytic weight measurement (DAWM) approach to optimize a cloud service provider's quality and price during an effective resource slicing process by considering each machine's cost, revenue, and profit. Subsequently, we design a multiobjective heuristic user service demand (MHUSD) algorithm based on the CSP profit estimation model and the user service demand (USD) model to measure the user demand service rate and cost by considering service demand weight, service tenant cost, and machine energy cost.

3. DAWM System Model

A belief propagation-influenced data analysis model is designed for CSP profit maximization by formulating a DAG-based task and resource scheduling policy, as shown in Figure 2. The CSP receives service requests from cloud users and, by default, offers three service modes (on-demand, advanced reservation, and spot resource allocation), which helps slice the resources as per the resource demand. For each received service request, the CSP assesses its demand, cost, performance, profit, and required server size factors. The CSP consolidates the overprovisioned machines by optimizing the service execution cost and machine asset usage. The cloud service supplier runs the data utility analytic method on the machines to classify high- and low-resource-usage-rate machines, preserve the CDC usage and performance cost, and avoid instant repudiations/migrations.

The framework classifies adaptive servers after the first iteration by applying an exact data analytic weight measurement (DAWM) model. First, a belief propagation-influenced cost-aware asset scheduling approach based on the DAWM model effectively optimizes the performance cost and server size. The DAWM model classifies prominent servers to preserve the server resource usage and cost during an effective resource slicing process by considering each machine execution factor (remaining energy, energy and service price, workload execution rate, service deadline violation rate, cloud server configuration (CSC), service requirement rate, and service level agreement violation (SLAV) penalty rate). Second, the multiobjective heuristic user service demand (MHUSD) approach is processed based on the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena for adequate service reliability. The MHUSD algorithm predicts the user demand service rate and cost based on the USD and CSP profit estimation models by considering service demand weight, service tenant cost, and machine energy cost. The USD model estimates the resource service demand to estimate the profit, the revenue gain, and the system's performance cost. The CSP profit estimation model helps assess the service profit by forecasting the server's performance cost, energy usage, and resource tenant cost. Each subsection describes a subcomponent of the framework mathematically and theoretically.

3.1. Cloud Service Provider Model

The CSP offers various services to cloud end-users. For instance, in Infrastructure as a Service (IaaS), resources are offered as VMs to meet end-user satisfaction by running their applications. A user service request (USR) is submitted to the service provider, which runs on a multiserver system to deliver the response for the received service requests. Consider a multiserver system (MSS) with $m$ homogeneous servers of execution speed $s$, modeled as an M/M/m queuing system. Assume that the MSS framework receives user service requests at a rate of $\lambda$. The service time is $x = r/s$, where $r$ refers to the required instruction count to execute the USR, with mean $\bar{r}$, so the mean service time is $\bar{x} = \bar{r}/s$ and the service rate of a USR is denoted as $\mu = s/\bar{r}$. The server utilization rate is estimated with equation (1), and it is denoted with $\rho$:

$$\rho = \frac{\lambda}{m\mu} = \frac{\lambda \bar{r}}{m s},$$

where $p_k$ refers to the probability of $k$ service requests executing at the servers. In case there are no tasks/service requests, the probability of zero service requests is

$$p_0 = \left[\sum_{k=0}^{m-1} \frac{(m\rho)^k}{k!} + \frac{(m\rho)^m}{m!\,(1-\rho)}\right]^{-1}.$$

Subsequently, $P_q$ is the probability that newly arrived SRs must wait because the server system is busy executing the assigned tasks:

$$P_q = \frac{p_m}{1-\rho},$$

where $p_m = p_0 (m\rho)^m / m!$ refers to the probability that all $m$ servers are occupied by SRs. The probability density function of the service waiting time $W$ is defined with equation (5):

$$f_W(t) = (1 - P_q)\,u(t) + m\mu\, p_m\, e^{-(1-\rho)m\mu t},$$

where $u(t)$ is the unit impulse function.
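To make the queuing quantities above concrete, the following minimal Python sketch computes the utilization, the idle probability, and the waiting probability under the standard M/M/m assumptions that this model appears to follow; the symbol names (lam for the arrival rate, r_bar for the mean instruction count, s for the server speed, m for the server count) are our own notation rather than the paper's.

```python
from math import factorial

def mms_metrics(lam, r_bar, s, m):
    """Standard M/M/m quantities assumed by the multiserver model.
    lam: service-request arrival rate, r_bar: mean instruction count per request,
    s: execution speed of each server, m: number of homogeneous servers."""
    mu = s / r_bar                      # service rate of one server
    rho = lam / (m * mu)                # server utilization, must be < 1 for stability
    assert rho < 1, "system is overloaded"
    # probability that the system is empty (no service requests)
    p0 = 1.0 / (sum((m * rho) ** k / factorial(k) for k in range(m))
                + (m * rho) ** m / (factorial(m) * (1 - rho)))
    p_m = p0 * (m * rho) ** m / factorial(m)   # probability all m servers are busy
    p_wait = p_m / (1 - rho)            # probability a new request must wait (Erlang C)
    return rho, p0, p_wait

print(mms_metrics(lam=40.0, r_bar=1.0, s=1.0, m=50))
```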

Figure 3 illustrates the DAG task classification and scheduling scheme, which is accomplished by evaluating the cost price per unit of the machine, magnified by the amount of time required for task completion. For instance, let $n_i$ be the number of VMs of type $i$ with weight $w_i$, $i = 1, \ldots, k$. Let $T$ be the time required to finish all the tasks on the set of VMs through the DAG-based approach. The collected value per unit time is $\sum_{i=1}^{k} n_i w_i$. Accordingly, the complete performance weight is $W$, and it is characterized as

$$W = T \sum_{i=1}^{k} n_i w_i.$$

3.2. Service Level Agreement Model

The SLA is a method that maintains a trade-off between the price and the service quality agreed between the end-user and the CSP. Here, the required service attribute is executed within the response time to meet the application deadline. Let $a$ be the service cost per unit, $d$ the penalty cost for any SLA violation, $c$ the constant weight of the SLA, $s_0$ the expected service processing speed, and $W$ the waiting time of the service request. Three conditions are listed with respect to the SLA waiting-time bounds $W_1$ and $W_2$ (with $W_1 < W_2$):
(1) If $W$ is lower than $W_1$, the provider delivers a high-quality, reliable service.
(2) If $W$ lies in the $[W_1, W_2]$ interval, the service quality is moderate.
(3) If $W$ is longer than $W_2$, the service is free because the service request waited too long in the queue.

Equation (7) is used to assess the prognosticated service charge of the CSP based on five parameters: $(a, d, s_0, c, \bar{W})$. Here, $a$ refers to the service cost per unit, $d$ refers to the SLAV penalty cost, $s_0$ refers to the expected service speed, $c$ refers to the SLA constant weight, and $\bar{W}$ is the average service waiting time.
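The three SLA conditions can be read as a piecewise service charge. The sketch below is a hypothetical Python rendering of that reading: the waiting-time bounds w1 and w2 and the linear penalty form are assumptions, since the exact equation (7) is not recoverable from the text.

```python
def service_charge(a, d, w, w1, w2):
    """Hypothetical piecewise SLA charge following the three conditions above.
    a: service cost per unit, d: SLAV penalty cost per unit of excess waiting,
    w: observed waiting time, w1/w2: lower/upper SLA waiting-time bounds (assumed)."""
    if w <= w1:                 # high-quality, reliable service: full charge
        return a
    if w <= w2:                 # moderate quality: charge reduced by the penalty
        return max(a - d * (w - w1), 0.0)
    return 0.0                  # waited too long: the service is free

print(service_charge(a=1.0, d=0.5, w=1.2, w1=1.0, w2=3.0))  # -> 0.9
```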

3.3. User Service Satisfaction Model

User service satisfaction (USS) is estimated in two ways: quality of service (QoS) and price of service (PoS). QoS describes the discrepancy between the user's expectation $E$ (how the SR should be served) and the user's perception $P$ (how the service is actually performed). The user's quality of service is evaluated with

$$\mathrm{QoS} = \frac{P}{E}.$$

The ratio of the expected cost to the actual cost is the fundamental expression used to assess the price of service (PoS) in equation (9). Here, $C_e$ and $C_a$ refer to the expected cost and the actual cost, respectively:

$$\mathrm{PoS} = \frac{C_e}{C_a}.$$

(1) If $C_a = C_e$, then $\mathrm{PoS} = 1$, which shows there is no impact on user satisfaction.
(2) If $C_a > C_e$, then the service cost is higher than expected ($\mathrm{PoS} < 1$), and satisfaction decreases as the actual price increases.
(3) If $C_a < C_e$, then the service cost is lower than expected ($\mathrm{PoS} > 1$), and satisfaction increases as the actual price decreases.

The USS is defined as the product of the price of service and the quality of service, as given in equation (10):

$$\mathrm{USS} = \mathrm{QoS} \times \mathrm{PoS},$$

where QoS and PoS are calculated with equations (8) and (9), respectively.
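As a small illustration of equations (8)-(10), the following Python sketch composes USS from the two ratios described above; the exact ratio forms are our reading of the garbled equations and should be treated as assumptions.

```python
def user_service_satisfaction(perception, expectation, expected_cost, actual_cost):
    """USS as the product of quality of service and price of service (equations (8)-(10)).
    The ratio forms below are our reading of the original equations (assumptions)."""
    qos = perception / expectation          # >1 means the service exceeded expectations
    pos = expected_cost / actual_cost       # =1 no impact, <1 overpriced, >1 a bargain
    return qos * pos

print(user_service_satisfaction(perception=0.9, expectation=1.0,
                                expected_cost=10.0, actual_cost=12.0))
```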

3.4. User Demand Service Estimation Model and Algorithm

The assessment of the user service demand weight factor plays an essential role in optimizing the cost of the cloud service provider, and it is estimated with equation (11):

$$\theta = \sum_{j \in A} w_j \frac{P_j}{E_j},$$

where $A$ refers to the list of service attributes, $w_j$ refers to the weight of service attribute $j$, $P_j$ refers to the attribute perception, and $E_j$ refers to the attribute expectation.

The service demand is formulated as the constant basic demand plus the product of the potential demand and the user service demand weight factor. It is defined as

$$D = \alpha + \beta\,\theta,$$

where $\alpha$ and $\beta$ refer to the constant basic demand and the constant potential demand, respectively. Both values must be greater than zero, that is, $\alpha, \beta > 0$.
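The following Python sketch illustrates one plausible reading of equations (11) and (12): a weighted sum of perception/expectation ratios feeding a demand that is linear in the weight factor. The function names and the additive form with alpha and beta are assumptions.

```python
def demand_weight_factor(attributes):
    """Equation (11) as we read it: a weighted sum of perception/expectation ratios.
    attributes: list of (weight, perception, expectation) tuples."""
    return sum(w * (p / e) for w, p, e in attributes)

def service_demand(alpha, beta, theta):
    """Equation (12) as we read it: basic demand plus potential demand scaled by theta.
    alpha and beta must both be positive constants."""
    assert alpha > 0 and beta > 0
    return alpha + beta * theta

theta = demand_weight_factor([(0.5, 0.9, 1.0), (0.3, 1.1, 1.0), (0.2, 0.8, 1.0)])
print(service_demand(alpha=10.0, beta=100.0, theta=theta))
```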

The MHUSD algorithm (Algorithm 1) assesses the user service demand adequately. Lines 1-2 define the required parameters and attributes for the estimation of the user service demand. Line 4 assesses each service attribute of the cloud service provider and also checks the CSP set. Line 5 checks that the gap between the lower and upper bound values is not less than the threshold. Line 6 estimates the median value of the service attribute demand. Line 7 assesses the demand at the median value, which should not be less than 0; this refers to the user service demand of the attribute at the middle-range value, and the other two variables refer to the higher and lower values of the user service demand rate. Lines 12–15 update the concerned values at each iteration.

input: CSP service attribute set A = {a_1, a_2, …, a_n}
output: user demand service D_u
(1)Initialize the constant basic demand α and the potential demand β;
(2)Int u; define the demand range D_low, D_high and the median D_mid;
(3)for each attribute a_i ∈ A do
(4) Estimate the attribute weight factor θ_i with equation (11);
(5) while (D_high − D_low) ≥ u do
(6)  D_mid ← (D_low + D_high)/2;
(7)  if D_u(a_i, D_mid) ≥ 0 then
(8)   Assign D_high ← D_mid;
(9)  else
(10)   Assign D_low ← D_mid;
(11)  end
(12)  Update D_low and D_high;
(13)  Estimate D_u(a_i) with equation (12);
(14) end
(15) Confine D_u(a_i) as the potential value for the next iteration;
(16) Return the user demand service value D_u.
(17)end
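Algorithm 1, as reconstructed above, behaves like a bisection search over the admissible demand range. The Python sketch below captures that control flow under our reading; the feasibility test demand_fn and the tolerance tol are hypothetical stand-ins for the stripped conditions.

```python
def estimate_user_demand(demand_fn, low, high, tol=1e-3):
    """Bisection-style search sketching the MHUSD loop in Algorithm 1 (our reading).
    demand_fn(d) should be >= 0 once the candidate demand d satisfies the service
    attribute constraint; low/high bracket the admissible demand range."""
    while high - low > tol:
        mid = (low + high) / 2.0          # median of the current demand range
        if demand_fn(mid) >= 0:
            high = mid                     # constraint met: tighten the upper bound
        else:
            low = mid                      # constraint violated: raise the lower bound
    return (low + high) / 2.0

# usage with a hypothetical constraint: demand is feasible once it covers 75 units
print(estimate_user_demand(lambda d: d - 75.0, low=0.0, high=200.0))
```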
3.5. CSP Profit Estimation Model

The CSP profit is assessed based on the gap between the revenue gained by providing services to users and the monetary cost of processing the user SRs. Equation (13) is defined as a function of the server number and the server speed (i.e., N and m). The average revenue of the CSP is estimated as the product of the expected charge of an SR and the user service demand:

$$R = E[C] \cdot D,$$

where $D$ refers to the USD based on the user service attribute values and $E[C]$ is the expected service charge. The CSP cost is defined as the paid infrastructure tenant cost plus the power cost of system operation, and it is assessed with equation (15). The server energy consumption is estimated with equation (17):

$$P = \rho\, P_d + P_s,$$

where $\rho$ refers to the server usage, $P_d$ refers to the dynamic power usage, and $P_s$ refers to the static power usage. Assuming that $\delta$ refers to the energy usage cost per unit at processing time $t$, the electricity bill is defined as

$$B = \delta\, P\, t.$$

The CSP profit at time $t$ is described as the revenue minus the rental and electricity costs, and it is estimated with equation (18):

$$\mathrm{Profit}(t) = R(t) - \big(C_{\mathrm{rent}}(t) + B(t)\big),$$

where $C_{\mathrm{rent}}(t)$ is the infrastructure tenant (rental) cost over $t$.
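A minimal Python sketch of the profit bookkeeping in equations (13)-(18), under our reading of the model: revenue is the expected charge times the user service demand, and the cost is the rental plus the electricity bill from the linear power model. All parameter names are our own and the numbers are placeholders.

```python
def csp_profit(expected_charge, demand, rent_per_server, m,
               utilization, p_dynamic, p_static, energy_price, hours):
    """Profit per equations (13)-(18) as we read them (all names are our own).
    Revenue = expected charge per request x user service demand; cost = rental + electricity."""
    revenue = expected_charge * demand
    power = utilization * p_dynamic + p_static            # per-server power draw
    electricity = m * power * energy_price * hours        # electricity bill over the period
    rental = m * rent_per_server * hours                  # infrastructure tenant cost
    return revenue - (rental + electricity)

print(csp_profit(expected_charge=0.9, demand=5000, rent_per_server=0.1, m=50,
                 utilization=0.8, p_dynamic=0.2, p_static=0.05, energy_price=0.12, hours=1))
```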

3.5.1. CSP Profit Maximization Factor

The probability of having $k$ SRs in the system is described with equation (19). A Taylor series approximation is used to assess the CSP profit as follows:

The maximized CSP profit is assessed as follows:

3.6. DAG Task Scheduling Methodology

Tasks are assigned through a computational method under the DAG-based process by considering the framework's performance weight, as can be observed in Figure 4. We characterize a graph $G = (V, E)$, where each vertex $v \in V$ represents a corresponding task that executes sequentially on a machine, and each edge in $E$ represents a priority (precedence) connection among tasks arising from data dependency. A task is not initiated until its preceding task has finished.

Because of the dissimilar conditions in the cloud, each PM's capability differs. Therefore, we consider an execution time matrix $Ex$ to identify and keep track of each task's processing time on every VM. Here, we do not rely on weight and performance factors alone to measure the resources; in our system, we deliberately utilize the matrix to measure the execution time on the various VMs rather than a constant weight factor. As per the data analysis model dataset, we measure each level of the convolution network with a DAG-based Spark workflow. Specifically, each Spark stage corresponds to a vertex, and the connection between two stages corresponds to a directed edge. The vertices of the DAG with zero in-degree are regarded as stages that complete in parallel. The scheduling procedure is performed recursively and forwards the outcome to the next stage of the DAG. According to equation (23), we measure the maximum execution time of all processing stages running in parallel, which recursively updates the task finish time:

$$T_{\mathrm{finish}}(v) = \max_{u \in \mathrm{pred}(v)} T_{\mathrm{finish}}(u) + t(v).$$
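The recursive finish-time rule can be sketched as a topological traversal in which a stage starts only after its slowest predecessor completes. The Python example below illustrates this reading of equation (23); the stage names and times are hypothetical.

```python
from functools import lru_cache

def makespan(stage_time, preds):
    """Finish time of a DAG of stages: zero in-degree stages start in parallel and a
    stage finishes only after the slowest of its predecessors (our reading of eq. (23)).
    stage_time: dict stage -> execution time; preds: dict stage -> list of predecessors."""
    @lru_cache(maxsize=None)
    def finish(v):
        earliest_start = max((finish(u) for u in preds.get(v, [])), default=0.0)
        return earliest_start + stage_time[v]
    return max(finish(v) for v in stage_time)

times = {"s1": 3.0, "s2": 2.0, "s3": 4.0, "s4": 1.0}   # hypothetical stage times
deps = {"s3": ["s1", "s2"], "s4": ["s3"]}               # s1 and s2 run in parallel
print(makespan(times, deps))   # -> 8.0: max(3, 2) + 4 + 1
```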

3.7. Estimation of Optimal Price

The price-demand function estimates the optimal price of a service by considering the trade-off between the service price and the corresponding service demand based on the service request mode, such as on-demand service, reserved service, and spot instance service. It is formulated in terms of $D_{od}$, which refers to the price-demand of the on-demand service, and $D_{rs}$, which refers to the price-demand of the reserved service; similarly, for the price, $p_{od}$ refers to the price of the on-demand service and $p_{rs}$ refers to the price of the reserved service.

Theorem 1. Let us assume that the CSP considers $T$ units of time. If the service price is $p$ and the average service execution time is $\bar{t}$, then the anticipated service price can be derived as in the proof below.

Proof. The CSP considers $T$ units of time, and the optimal price is measured with the average service execution time $\bar{t}$; the service request price within the given time interval is defined accordingly.
The probability distribution function of the service execution time and the resulting expected price are then derived. Hence, the theorem is proved, and the forecast service arrival demand is obtained approximately.
The maximum forecast service price must be measured such that the loss of server profit is bounded; the probability of the expected server profit loss and, subsequently, the probability of the forecast service price follow from this condition.

3.8. Estimating Optimal Price

In Algorithm 2, the partial derivative of the profit function with respect to the price is used. It formulates an accurate service price even though the service arrival rate is high, with a low profit loss. Lines 1–3 define the input variables, and line 4 applies the models to all arrived service requests. Lines 6–9 estimate the optimal price demand, and lines 10–19 estimate the optimal price value based on equations (31) and (13).

input: service request set, least price p_min, server usage ρ
output: optimal price of service p*
(1)Let p ← p_min;
(2)p_min: least price, ρ: server usage;
(3)p* ← p_min;
(4)for each arrived service request SR_i do
(5)  Estimate the price-demand D(p) and the revenue R(p) using equations (30) and (13);
(6)  if D(p) > 0 then
(7)   p ← p_min;
(8)   Estimate R(p) using (13) and D(p) with p;
(9)  end
(10)  while R(p + Δp) ≥ R(p) do
(11)   p ← p + Δp;
(12)   Estimate R(p) using (13) and (31);
(13)   if R(p) > R(p*) then
(14)    p* ← p;
(15)   else
(16)    break;
(17)   end
(18)  end
(19)  Return the optimal price of service p*;
(20)end
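Algorithm 2 searches for the price that maximizes profit given a price-demand relation. Since the exact derivative-based update cannot be recovered from the text, the Python sketch below substitutes a simple price sweep over the same objective; the linear price-demand model in the usage line is purely illustrative.

```python
def optimal_price(profit_fn, p_min, p_max, step=0.01):
    """Sketch of the optimal-price search in Algorithm 2 (our reading): scan candidate
    prices from the least acceptable price upward and keep the profit-maximizing one.
    profit_fn(p) combines the price-demand model and the profit model (eqs. (13), (31))."""
    best_p, best_profit = p_min, profit_fn(p_min)
    p = p_min
    while p <= p_max:
        if profit_fn(p) > best_profit:
            best_p, best_profit = p, profit_fn(p)
        p += step
    return best_p, best_profit

# hypothetical linear price-demand: demand = 1000 - 40 * price, unit cost 2
print(optimal_price(lambda p: (p - 2.0) * max(1000 - 40 * p, 0.0), p_min=2.0, p_max=25.0))
```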
3.9. DAWM Algorithm for Cloud Server Size and Cost Analysis

Algorithm 3 assesses the server size and the performance cost. It assesses the customer satisfaction from the machine economic growth ratio by leveraging the cloud server configuration (CSC), also called the server size, the task arrival rate, and the performance cost of the machine. The CSC has a direct impact on the cloud user service satisfaction rate, and inadequate customer satisfaction in turn has a direct impact on the service request arrival rate. Line 1 defines the essential input parameters to accomplish the objectives. Lines 2–5 assess the service execution cost using equation (13) and update the machine matrices Ex and C for effective prognostication of the server configuration size. Lines 6 and 7 update all machine execution speed rates and maintain them in an array. Lines 8 to 10 assess the performance cost in association with the CSC, the service resource requirement rate, the SLAV penalty rate, and the energy and resource tenant cost. Lines 12–15 update the iterative value to mitigate the performance rate and the system execution cost.

input: (1) Host set: H = {h_1, h_2, …, h_n},
(2)Ex : execution time matrix of hosts
(3)C : cost weight matrix of hosts/VMs
output: performance cost of server PC
(1)Let PC ← 0, Speed ← [ ]
(2)for each host h_i ∈ H do
(3)    Find the minimum cost-effective host using equations (6) and (13)
(4)    Update Ex[h_i] and C[h_i]
(5)end
(6)Speed ← execution speed rates of all hosts
(7)Update Ex, C
(8)for each host h_i, i = 1 to n, do
(9)    PC_i = cost(CSC, service requirement rate, SLAV penalty rate, energy and tenant cost)
(10)    PC ← PC + PC_i
(11)end
(12)Update the iterative value
(13)for each iteration do
(14)    Mitigate the performance rate and the system execution cost
(15)end
(16)Return performance cost of server PC
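The following Python sketch gives one possible reading of the DAWM cost aggregation in Algorithm 3: each host contributes its cheapest execution cost from the Ex and C matrices plus the penalty, energy, and tenant terms. The aggregation form and all parameter names are assumptions.

```python
def server_performance_cost(ex, c, penalty_rate, energy_cost, tenant_cost):
    """Sketch of the DAWM performance-cost aggregation in Algorithm 3 (our reading).
    ex[h][v]: execution time of task type v on host h; c[h][v]: cost weight of host h
    for v; the SLAV penalty, energy, and tenant terms are added per host as assumptions."""
    total = 0.0
    for h in range(len(ex)):
        # cheapest execution cost this host can offer across task types
        host_cost = min(ex[h][v] * c[h][v] for v in range(len(ex[h])))
        total += host_cost + penalty_rate + energy_cost + tenant_cost
    return total

ex = [[2.0, 3.0], [4.0, 1.5]]           # execution-time matrix (2 hosts x 2 task types)
c  = [[0.5, 0.4], [0.3, 0.6]]           # cost-weight matrix
print(server_performance_cost(ex, c, penalty_rate=0.1, energy_cost=0.2, tenant_cost=0.3))
```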

4. Experimental Result Analysis

The proposed DAWM is simulated with real data in MATLAB R2017b; the system specifications are 8 GB DDR4 memory and an Intel Core i7-6700HQ CPU at 2.6 GHz. We consider DAGs consisting of 25–150 sensors. Each network comprises data centres whose capacity varies from 5000 to 75000 GHz. The number of active servers, the constant energy consumption of an idle server, the usage-dependent energy consumption range of an active server, the energy price, the link bandwidth and transmission delay between sensors, the (non-static) revenue gain, the per-service execution bandwidth, the computing demand, and the execution time of each service are varied within the configured ranges. The simulation parameters related to power cost, constant workloads, CSC, service requirement rate, SLAV penalty rate, energy cost, tenant cost, and current CSP margin profit are listed in Table 1.

Figure 5 illustrates the average execution time required to process the user service requests. It is compared with four recently published state-of-the-art approaches (SPEA2, COMCPM, NSGA-II, and OMCPM). It is noticed that the proposed approach has a higher performance rate than the remaining approaches: it is 41.2%, 55.56%, 59.89%, and 61.52% faster than SPEA2, COMCPM, NSGA-II, and OMCPM, respectively.

Figure 6 illustrates the profit, revenue, and cost of the proposed system and of the SPEA2, COMCPM, NSGA-II, and OMCPM approaches. The proposed system achieves moderately higher revenue, by 10%, 8.1%, 8.9%, and 8.91%, than the SPEA2, COMCPM, NSGA-II, and OMCPM approaches, respectively. Subsequently, our approach achieves 2.31%, 2.01%, 1.7%, and 1.37% higher profit than the four approaches, since it estimates the demand of each service request and analyses the machine performance before assigning the load. The first reason is that the user service request (USR) is submitted to the service provider, which runs on a multiserver system to deliver the response for the received service requests, and the CSP assesses the machine data with our deep learning data analytical model; this enables accurate decisions that enhance system performance by preserving the service cost and enhance the revenue gain by consolidating each machine's performance. The second reason is that the tasks are scheduled based on DAG theory, which influences the energy and resources of the system and thereby enhances the revenue and optimizes the service request cost.

Figure 7 shows the impact of user service demand flexibility. We can observe that even when the active cloud server count (from 15 to 75) and the processing speed of the active servers are high, there is no impact on the service execution demand rate. If the server count increases, the user service demand execution rate does not increase; it sometimes stays stable to cope with a reliable quality of service at adequate computing performance. If the USD is high, the server system is frequently unable to meet the service demand requirement synchronously. In such cases, if the customer waits for a long time, the USD rate becomes low due to the reduced service demand. Usually, the USD may remain constant when the USD market is stable and not affected by third-party factors.

Figure 8 shows the CSP profit outcomes. As we can observe, the profit rate drastically decreases when the active servers are increased from 15 to 75, and a high server processing speed has no impact, contrary to our expectation. The profit ratio increases when the USD rate increment exceeds the cost of the newly activated servers. The revenue enhancement and server size factors do not impact the server cost, but the USD diminishes with the decrement of the CSP profit. Consequently, the profit returns to a stable level when the USD becomes constant. Figure 9 shows the comparative study of server processing speed. The server processing speed decreases when the server size increases because the computation size is fixed, which restricts the execution of the services; an increased server count therefore decreases the system's service execution speed. Figure 10 illustrates the increased profit when the server size and USD rates are increased. USDs requiring high computation lead to enhanced CSP profit. We can observe that the USD is moderate due to the server size enhancement. We also noticed that if there are fewer active servers but the server speed is high, the profit increases. If we maintain a constant computing capacity, the server speed is impacted by the increase of the active server count, which causes a decrease in the profit. Therefore, if the server size peaks while the speed remains constant, the energy cost is saved and the CSP profit is impacted.

Figure 11 shows the comparative analysis of the server size and profit obtained by regulating the server speed and the USD rate. To assess the outcomes, we used the parameters listed in Table 1. If we increase this regulating value, the active server size becomes lower under USD certainty. The profit is impacted when the energy cost is high, which influences the service execution speed and diminishes the CSP profit.

Table 2 shows the comparative study with respect to all state-of-the-art approaches. The proposed system achieves an outstanding profit gain of 35.5% on average. This profit is accomplished thanks to the data analysis model, and the performance rate of our system also remains higher than that of the existing approaches. The machine performance and execution cost measurement estimations play an essential role in gaining an adequate, noticeable profit for the CSP.

Table 3 illustrates our approach's simulation outcomes with respect to the unit price and the average execution time. The service price, service price-demand, maximum average service arrival rate, error rate, and user cost are assessed against the average service execution time.

5. Conclusion

The proposed approach has been designed based on a belief propagation-influenced analytical data model to enhance the CSP profit through a DAG-based task and resource scheduling policy. It optimizes the CDC asset usage rate by consolidating overprovisioned machines. Cloud service suppliers run the data utility analytic method on machines with low resource usage rates to preserve the CDC usage and performance cost and to avoid instant repudiations/migrations.

It initially recognizes feasible servers after the first iteration by applying the data analytic weight measurement (DAWM) model. The DAWM model optimizes the cloud service provider's average cost by 51% by considering each machine's cost and revenue during an effective resource slicing process. The multiobjective heuristic user service demand (MHUSD) algorithm accomplishes an average server performance improvement of 41% and an average CSP revenue gain of 35% due to the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena, while providing adequate service reliability. It considers the service demand weight, service tenant cost, and machine energy cost. The MHUSD algorithm also considers the maximum bearable wait time of the end-user to maximize the CSP revenue and optimize the operational energy cost. With the Google cloud tracer, the optimized average system profit is 590$, and the service execution speed is 4.5 sec/MIPS with the m2.6X large core system. The simulation results show that our system has a faster average service execution speed than the remaining approaches: 41.2%, 55.56%, 59.89%, and 61.52% faster than SPEA2, COMCPM, NSGA-II, and OMCPM, respectively. Subsequently, the proposed system achieves moderately higher revenue, by 10%, 8.1%, 8.9%, and 8.91%, than the SPEA2, COMCPM, NSGA-II, and OMCPM approaches and higher profit by 2.31%, 2.01%, 1.7%, and 1.37% than these state-of-the-art approaches.

Data Availability

No data were used to support the findings of this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.