OPTCLOUD: An Optimal Cloud Service Selection Framework Using QoS Correlation Lens
Cloud computing has grown into a dominant computing paradigm over the last few years. Due to the explosive increase in the number of cloud services, QoS (quality of service) has become an important factor in service filtering. Moreover, comparing cloud services with similar functionality but different performance metrics is a nontrivial problem. Therefore, optimal cloud service selection is quite challenging and extremely important for users. In existing approaches to cloud service selection, users must express their preferences in a quantitative form; given the fuzziness and subjectivity involved, it is difficult for users to state clear preferences. Moreover, many QoS attributes are not independent but interrelated; therefore, the existing weighted summation method cannot accommodate correlations among QoS attributes and produces inaccurate results. To resolve this problem, we propose a cloud service framework that takes the user’s preferences and chooses the optimal cloud service based on the user’s QoS constraints. We propose a cloud service selection algorithm, based on principal component analysis (PCA) and the best-worst method (BWM), which eliminates the correlations between QoS attributes and provides users with the cloud services offering the best QoS values. Finally, a numerical example is presented to validate the effectiveness and feasibility of the proposed methodology.
With the advent of service computing, cloud computing has evolved into a growing computing paradigm that is revolutionizing the way computing, storage, and on-demand services are managed and delivered. Cloud computing offers three service models to its users: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Deployments are categorized into four types: private cloud, public cloud, hybrid cloud, and community cloud [2, 3]. The cornerstone of cloud computing is that users can access cloud services from anywhere, at any time, on a subscription basis. Cloud computing provides a “pay-as-you-use” pricing model, where cloud users are charged only for the resources they consume. Many of the world’s largest IT firms (including IBM, eBay, Microsoft, Google, and Amazon) have moved their existing business solutions to the cloud because of the advantages it offers.
A growing number of cloud service providers (CSPs) now offer their customers a wide range of options for selecting the cloud service that best fits their individual functional needs. Many CSPs offer identical services, but at varied pricing and quality levels and with a wide range of additional options and features; for instance, a provider may be cheap for storage but expensive for computing. Because of the wide range of cloud service options available, it can be difficult for customers to determine which CSP is best suited to meet their specific needs. Incorrect selection of a CSP can lead to service failure, data security or integrity breaches, and noncompliance with cloud storage standards.
Cloud service selection usually involves matching customer needs to the features of the cloud services offered by various CSPs. The growing number of CSPs and their variable service offerings, pricing, and quality have made it difficult to compare them and choose the one best suited to a user’s needs. To determine which CSP is the best fit for a cloud user’s needs, a wide range of evaluation criteria for distinct cloud services from multiple CSPs must be considered. Recently, multicriteria decision-making (MCDM) has emerged as one of the most efficient decision-making tools, showing its potential to solve real-world problems [6–10]. Thus, selecting the best CSP is a difficult MCDM problem in which various choices must be reviewed and ranked against a variety of criteria based on specific user preferences [11–13].
In recent years, considerable effort has been dedicated to the cloud service selection problem [11, 14, 15]. The existing methodologies in the literature rarely take into account the correlation of QoS criteria. Indeed, QoS criteria are often correlated with each other, e.g., a strong positive correlation between availability and successability and a strong negative correlation between response time and throughput. If a cloud service has a short response time, it is also likely to have high throughput. For this reason, weighted summation of QoS data may repeat computation over correlated QoS attribute information. As the number of QoS attributes grows, the degree of repeated computation rises, and the calculation time increases accordingly. In this situation, it is difficult for existing cloud service selection methods to assess the QoS value of a cloud service accurately and efficiently.
In this scenario, we propose a new framework called “optimal cloud (OPTCLOUD)” for the assessment and selection of the best cloud service based on QoS values. The primary objective of the OPTCLOUD model is to reduce the number of selection criteria without significant information loss and to keep the cloud service evaluation process expressive and easy. In view of these challenges, we introduce an efficient and accurate evaluation method for cloud services based on PCA-BWM. Here, PCA is used to reduce the data dimension and eliminate the correlation among QoS criteria, while BWM is used to determine the weight of each QoS criterion based on user preference [18, 19]. Note that basic PCA calculates the weight of each principal component from the QoS information dataset alone, which is too objective. For this reason, we integrate the BWM method with PCA to simplify the cloud service selection process. In general, this contribution provides a faster and more effective method that minimizes the limitations of previous studies, i.e., subjectivity, high computational requirements, and multicollinearity. To the best of our knowledge, this is the first time that PCA and BWM have been directly applied to cloud service selection problems.
The significant contributions of this paper are listed as follows:
(1) A novel “OPTCLOUD” framework is proposed for measuring cloud service alternatives according to their offered QoS criteria values.
(2) A technique is proposed that is both efficient and reliable for removing correlations between QoS criteria in complicated decision-making problems.
(3) The experimental results demonstrate the feasibility of the proposed methods.
The remainder of this article is organized as follows. Section 2 reviews the related work. A motivational example is discussed in Section 3. Section 4 introduces background knowledge relevant to our work. The proposed cloud service ranking scheme is discussed in Section 5. Section 6 explains the proposed methodology for optimal cloud service selection. In Section 7, a numerical case study and a set of experiments are included to demonstrate the feasibility of the proposed methodology, and the results and their validation are discussed. Finally, Section 8 presents concluding remarks and future scope.
2. Related Works
This section compares and contrasts our work with previous efforts in order to demonstrate how our approach differs from those already in use for cloud service selection. In general, all proposed techniques select cloud services based on customer preferences and QoS criteria. These cloud service decision-making methods fall into two categories: MCDM-based and non-MCDM-based cloud service selection methods.
2.1. MCDM-Based Cloud Service Selection Methods
A thorough review of the literature reveals that the application of MCDM-based techniques to cloud service selection and ranking has received a significant amount of attention. Frequently used MCDM techniques include the analytic hierarchy process (AHP), the analytic network process, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Simple Additive Weighting (SAW), multiattribute utility theory, the best-worst method (BWM), and outranking. Using modified DEA and SDEA models, the author shows how to pick the best cloud service from among a variety of options based on the needs of the customer. Using a fuzzy ontology, a new fuzzy decision-making framework capable of handling fuzzy information and finding the best cloud service has been proposed; its authors use fuzzy AHP and TOPSIS to calculate QoS weights and measure cloud service performance. These decision-making methods can be divided into stochastic, deterministic, and fuzzy methods depending on the type of data they use and on whether a single decision-maker or multiple decision-makers (a group) participate in the decision process. Table 1 presents an overview of the cloud service selection work based on MCDM methods.
2.2. Non-MCDM-Based Cloud Service Selection Methods
In this subsection, we review the existing literature on service selection methods that are not based on MCDM. These methods include optimization techniques, game theory, graph theory, descriptive logic, collaborative filtering, and linear programming.
Lang et al. proposed a Delphi method for identifying and classifying the QoS criteria for cloud service provider evaluation. A framework called TRUSS was presented for the identification of trustworthy cloud services, and a brokerage-based cloud service selection framework has also been proposed. Somu et al. suggested a hypergraph-based method for cloud service selection and also developed a preliminary set-based hypergraph technique for service selection. Another work identifies the best cloud service by excluding less dependable services based on QoS criteria. Ding et al. proposed a collaborative filtering approach for time-aware service recommendation, with the objective of expediting the identification of cloud service providers with a higher level of customer satisfaction. For nonfunctional requirements, Ma et al. suggested a collaborative QoS model for identifying the optimal cloud service, and Wang et al. developed a game-theoretic strategy for QoS-aware cloud service selection.
2.3. Differences between Our Research and Existing Work
To the best of our knowledge, only a few research studies consider the correlation between QoS attributes. In one paper, the author considers the correlation between different QoS attributes, but how to handle these correlations is not discussed. A business service-based correlation model has been presented; however, it covers only the modeling aspect, and the model is not applied in the service selection process. A PCA-based Web service selection method has also been proposed; by using PCA, it tries to eliminate the correlation between QoS attributes, but the author does not consider the QoS attribute weight information that plays an important role in the service selection process. Another paper discusses a method for selecting multimedia services based on weighted PCA. However, this work has two shortcomings. First, it lacks adaptability in weight assignment for QoS attributes, i.e., how to assign appropriate QoS weight information accurately and efficiently. Second, it applies uniform standardization to all QoS attributes, which introduces discrepancies between negative and positive attributes.
To address the shortcoming of existing work, we have developed a PCA-BWM-based scheme for optimal cloud service selection based on correlation aspects. Our work differs in several ways from the existing cloud service selection methodology. On the one hand, we use the PCA method to exclude the correlation between different service quality attributes and reduce the impact of false or artificial QoS attribute information. On the other hand, in order to assign a weight to the various QoS attributes efficiently and precisely, we use the BWM method. The BWM is a subjective method that considers the subjective preference of cloud customers in the cloud service selection process. This study combines the objective and subjective aspects to achieve a better assessment result in cloud service selection problems.
3. Motivational Example
Suppose cloud service providers satisfy the cloud customer’s functional requirement and are ready to deliver their services with QoS parameters, i.e., availability, latency, response time, throughput, etc. These QoS parameters are not independent but correlate with one another. In this situation, selecting an efficient and accurate cloud service among service providers based on these QoS criteria becomes a challenging task for a cloud user.
Our hypothesis was tested using the real-world QWS dataset of 2507 real services with 9 QoS parameters: response time (RT), availability (Ava), successability (Succ), throughput (Th), reliability (Re), compliance (Com), best practices (BP), latency (Lat), and documentation (Doc). We computed the correlation coefficient matrix of the QWS dataset, displayed in Table 2, and visualize the correlations between QoS parameters with regression lines in Figure 1. In this figure, we can observe a strong positive correlation between successability and availability, with a value of 0.9892. Moreover, there is a positive correlation between best practices and reliability, with a value of 0.6895. Furthermore, response time and throughput show the strongest negative correlation, with a value of −0.2530. This finding indicates that certain QoS parameters associated with cloud services are not independent but correlated. As a result, current cloud service selection methods are ineffective in this case, leading to inaccurate results when selecting cloud services.
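Such correlations are straightforward to compute. The sketch below uses numpy's `corrcoef` on hypothetical QoS samples; the numbers are illustrative and are not taken from the QWS dataset:

```python
import numpy as np

# Hypothetical QoS samples for five services; columns are
# response time (ms), availability (%), successability (%), throughput (req/s).
qos = np.array([
    [120.0, 95.0, 94.0, 18.0],
    [300.0, 88.0, 86.0,  9.0],
    [ 80.0, 99.0, 99.0, 25.0],
    [450.0, 80.0, 78.0,  5.0],
    [200.0, 92.0, 91.0, 14.0],
])

# Pearson correlation matrix; rowvar=False treats columns as variables.
corr = np.corrcoef(qos, rowvar=False)
print(np.round(corr, 3))
```

On this toy data, availability and successability correlate strongly and positively, while response time and throughput correlate negatively, mirroring the pattern observed in the QWS dataset.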
4. Background Knowledge
4.1. Best-Worst Method
Rezaei introduced the best-worst method (BWM) in 2015 as a new MCDM method for determining the relative importance of QoS criterion weights. Compared to other well-known MCDM methods like AHP, this novel method finds more consistent results with fewer pairwise comparisons. For n QoS criteria, the AHP method requires n(n − 1)/2 pairwise comparisons, while the BWM needs only 2n − 3.
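The difference in comparison counts is easy to check; a tiny sketch contrasting the two formulas:

```python
# Number of pairwise comparisons as a function of the criteria count n:
# AHP fills a full pairwise matrix, BWM needs only the best-to-others
# and others-to-worst vectors.
def ahp_comparisons(n: int) -> int:
    return n * (n - 1) // 2

def bwm_comparisons(n: int) -> int:
    return 2 * n - 3

for n in (4, 8, 12):
    print(n, ahp_comparisons(n), bwm_comparisons(n))
# For the 8 criteria used later in the case study: AHP needs 28, BWM only 13.
```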
4.2. Principal Component Analysis
PCA is a powerful multivariate statistical procedure proposed by Pearson. It is a dimensionality reduction technique that shrinks a dataset without much loss of information. PCA transforms a large number of interrelated variables into a set of linearly uncorrelated variables known as principal components. The principal components are calculated by identifying the eigenvalues of the covariance matrix of the original dataset. Note that the number of principal components can be equal to or less than the number of original variables. The transformation is performed so that the first principal component captures the maximum variance, and each succeeding component captures the remaining variance while being uncorrelated with the preceding components.
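A minimal PCA sketch along these lines, using eigendecomposition of the covariance matrix; the function and the random data are illustrative:

```python
import numpy as np

def pca(data: np.ndarray, n_components: int):
    """Minimal PCA via eigendecomposition of the covariance matrix."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    components = centered @ eigvecs[:, :n_components]
    explained = eigvals / eigvals.sum()      # variance contribution ratios
    return components, explained

data = np.random.default_rng(0).normal(size=(50, 4))
# Make two columns correlated so PCA has redundancy to remove.
data[:, 1] = data[:, 0] * 0.9 + data[:, 1] * 0.1
scores, ratio = pca(data, 2)
print(scores.shape, np.round(ratio, 3))
```

The resulting component scores are mutually uncorrelated, which is exactly the property exploited later for QoS criteria.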
5. The Proposed Cloud Service Ranking Scheme
The objective of this study is to introduce a new cloud service selection scheme that chooses among the available cloud service alternatives according to the needs of cloud users. The proposed scheme has three major parts: (a) the basic idea of the cloud service selection scheme, (b) the OPTCLOUD framework, and (c) the schematic diagram. This section gives a detailed description of the proposed framework and explores the schematic diagram; the following subsections describe each part in detail.
5.1. Basic Idea
In essence, a cloud service selection problem leverages the correlation information among various QoS attributes, which involves both objective and subjective aspects. The objective aspect is derived from the QoS values of the cloud services, while the subjective aspect is collected from cloud customer preferences over the different QoS attributes.
To assess the objective aspect of the QoS attribute value, the PCA method is used to minimize dimensionality through the analysis of covariance between the QoS attributes. It transforms a series of correlated QoS attributes into an independent principal component without losing too much information.
The objective aspect alone, however, is not enough to find a suitable cloud service. Because individual preference over different QoS attributes plays an important role in cloud service selection problems, our proposed scheme calculates the weight of the QoS attributes by applying the best-worst method (BWM), which yields more consistent results with fewer pairwise comparisons than the popular AHP method. Finally, the integration of the BWM and PCA methods provides a trade-off between computational complexity and accuracy and yields a more rational, consistent selection of cloud services.
5.2. Proposed OPTCLOUD Framework
This section introduces the proposed broker-based framework (OPTCLOUD) for cloud services, as shown in Figure 2. The framework consists of three distinct components: (i) cloud broker, (ii) cloud benchmark service provider, and (iii) cloud service repository. The framework relies on quality of service (QoS) information that comes from multiple sources, such as the service provider’s specifications and third-party monitoring services. Furthermore, we assume that the cloud service repository keeps track of the available cloud service providers. The third-party monitoring services keep track of all registered cloud services and run benchmark tests against all available cloud services to collect QoS performance data. The performance data are kept in the QoS repository, and the QoS performance data are used by the cloud broker to recommend appropriate cloud services to users.
(i) Cloud broker: the suggested framework is built around the cloud broker. It performs a variety of activities, as illustrated in Figure 2, including cloud service discovery and cloud service ranking. It interacts with the cloud service repository to filter the cloud services that meet the cloud user’s requirements. The cloud broker’s service ranking module ranks the filtered cloud services according to the significance of each QoS parameter provided by the cloud user; for each cloud service, a ranking is generated using the proposed methodology. The cloud service discovery module is used to discover and store information about the various cloud services available.
(ii) Cloud benchmark service provider: this component is a third party that audits or monitors cloud services continuously. It performs benchmark tests on available cloud services against QoS criteria such as availability, reliability, throughput, and efficiency. It runs an extensive testing process on each QoS criterion several times in order to verify the QoS claims made by the cloud service providers. Cloud benchmark service providers such as CloudSpectator, CloudHarmony, and others analyze cloud services on a regular basis and put the results in a cloud service repository.
(iii) Cloud service repository: a database that stores information about cloud service providers and their services over various QoS attributes. This is where data from cloud providers and third-party monitoring services on service standards and performance are stored. The repository is used by the cloud broker to prescreen candidate services that meet the customer’s needs.
5.3. Schematic Diagram
Figure 3 shows the suggested schematic diagram. The overall process of cloud service selection includes four important steps:
(1) Determine the evaluation criteria and the cloud service alternatives used in the service selection process.
(2) Utilize the best-worst method to calculate the weights of the evaluation criteria.
(3) Eliminate correlation between QoS criteria using the PCA method.
(4) Combine BWM and PCA to evaluate the cloud service alternatives and rank them based on their performance values.
6. The Proposed Cloud Service Selection and Ranking Methodology
This section illustrates the suggested methodology using a schematic diagram, followed by a detailed procedure for selecting the most appropriate cloud service among the eligible alternatives.
6.1. Cloud Service Selection Methodology
In this subsection, we introduce the PCA-BWM-based technique for selecting the best cloud service among the many available cloud alternatives.
6.1.1. Construct a Decision Matrix
We create an m × n decision matrix DM, in which m represents the number of eligible cloud service alternatives, denoted by A = {A1, A2, …, Am}, that satisfy the cloud customer’s functional and nonfunctional requirements, and n represents the number of QoS criteria, Q = {Q1, Q2, …, Qn}, for determining the best cloud service provider. It is shown as DM = [qij]m×n, where qij represents the QoS value delivered by alternative Ai on QoS criterion Qj.
6.1.2. Apply Best-Worst Method for QoS Criteria Weight Calculation
In the decision-making process, the calculation of the criteria weights is a critical phase that has a direct impact on the ranking of alternatives. In most cases, the criteria are not given equal weight; each criterion has a different value depending on the needs of a cloud user. Because of this, determining the weight of each criterion is essential. According to the literature, the majority of authors employ the AHP approach to calculate the criteria weights. However, due to AHP’s limitations, we compute the criteria weights using BWM.
According to Rezaei, MCDM problems need to be evaluated against a set of criteria in order to choose the best option. However, BWM works in a different way. The decision-maker first identifies the best and the worst criteria and then makes pairwise comparisons between the best/worst criteria and the other criteria. In the BWM, a linear programming max-min problem is solved to determine the criteria weights, and the consistency ratio (CR) allows the decision-maker to confirm the validity of the resulting weights.
The best-worst technique is used to determine the weight of QoS criteria. It consists of six steps:
Step 1. Make a list of criteria. The decision-maker establishes a list of n criteria against which all potential cloud service providers will be evaluated.
Step 2. The cloud customer and the decision-maker work together to figure out which QoS criteria are the best (or the most preferred) and which are the worst (or the least preferred) out of all of the other criteria.
Step 3. To estimate the best criterion’s preference over all other criteria (best-to-others), use a scale of 1–9 as given in Table 3. The resulting preference vector would look like AB = (aB1, aB2, …, aBn), where aBj depicts the priority of the best criterion B over criterion j, and aBB = 1.
Step 4. Similarly, using Table 3, determine the priority of all other decision criteria over the worst criterion (others-to-worst). The resulting preference vector would be AW = (a1W, a2W, …, anW)T, where ajW denotes the preference of criterion j over the worst criterion W, and aWW = 1.
Step 5. Finally, the optimal weights (w1*, w2*, …, wn*) associated with the QoS criteria are determined by solving models (4) and (5), which minimize the maximum of {|wB − aBjwj|, |wj − ajWwW|} over all j, subject to Σj wj = 1 and wj ≥ 0 for all j. Here, ξ* is the optimal objective value, which is required to estimate the consistency ratio of the pairwise comparisons. The weight of the best QoS criterion is wB, and the weight of the worst criterion is wW.
Step 6. Use (7) to calculate the consistency ratio (CR): CR = ξ*/CI, where CI denotes the consistency index, as depicted in Table 4. The consistency ratio ranges from 0 to 1; CR values near zero are considered more consistent, while those near one are less so.
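The six steps above can be sketched in code. The sketch below solves the linearized form of the BWM model (minimize ξ subject to |wB − aBjwj| ≤ ξ and |wj − ajWwW| ≤ ξ) with scipy's `linprog`; the function name and preference vectors are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best_to_others, others_to_worst, best, worst):
    """Solve the linearized BWM model: minimise xi subject to
    |w_best - a_Bj * w_j| <= xi and |w_j - a_jW * w_worst| <= xi."""
    n = len(best_to_others)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # objective: minimise xi
    A_ub, b_ub = [], []

    def add_abs(i, coef, j):
        # encode |w_i - coef * w_j| <= xi as two linear inequalities
        for sign in (1.0, -1.0):
            row = np.zeros(n + 1)
            row[i] = sign
            row[j] = -sign * coef
            row[-1] = -1.0
            A_ub.append(row)
            b_ub.append(0.0)

    for j in range(n):
        if j != best:
            add_abs(best, best_to_others[j], j)
        if j not in (best, worst):
            add_abs(j, others_to_worst[j], worst)

    A_eq = np.ones((1, n + 1))
    A_eq[0, -1] = 0.0                            # sum of weights equals 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# Hypothetical best-to-others and others-to-worst vectors for four criteria,
# with criterion 0 as best and criterion 2 as worst.
w, xi = bwm_weights([1, 2, 8, 4], [8, 4, 1, 2], best=0, worst=2)
print(np.round(w, 3), round(xi, 3))
# The best criterion receives the largest weight; a fully consistent
# preference vector drives xi (and hence CR) to zero.
```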
6.1.3. PCA-BWM-Based Cloud Service Selection Method
The various steps of the proposed PCA-BWM methodology are described as follows.
Step 7. Standardize the original decision matrix.
Due to their vast diversity, the values of QoS criteria are estimated in different measuring units and ranges. This may lead to inconsistency during comparison. Therefore, in this step, each QoS criterion value of the matrix is normalized to accomplish a uniform comparison.
We classify the QoS criteria into positive and negative criteria. The positive criteria include throughput, availability, and reputation, whereas the negative criteria include response time and cost. Note that a higher value signifies higher quality for a positive criterion but lower quality for a negative criterion. Thus, we must normalize them separately in order to eliminate the inconsistency between negative and positive criteria. For this purpose, we use the max-min normalization approach, which converts every criterion value to the same scale.
The positive and negative criteria are normalized using (8) and (9), respectively: rij = (qij − qjmin)/(qjmax − qjmin) for positive criteria (8), and rij = (qjmax − qij)/(qjmax − qjmin) for negative criteria (9), where qjmin and qjmax are the minimum and maximum values of QoS criterion Qj among all the cloud services. The normalized decision matrix is represented as R = [rij]m×n.
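A minimal sketch of this separate max-min normalization for positive and negative criteria; the data and function name are hypothetical:

```python
import numpy as np

def normalize(decision_matrix, negative_cols):
    """Max-min normalisation; negative (cost-type) criteria are inverted
    so that larger normalised values always mean better quality."""
    dm = np.asarray(decision_matrix, dtype=float)
    lo, hi = dm.min(axis=0), dm.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard constant columns
    norm = (dm - lo) / span
    neg = list(negative_cols)
    norm[:, neg] = (hi[neg] - dm[:, neg]) / span[neg]
    return norm

# Columns: response time (negative), availability, throughput (positive).
dm = [[120, 95, 18],
      [300, 88,  9],
      [ 80, 99, 25]]
norm = normalize(dm, negative_cols=[0])
print(np.round(norm, 3))
```

After normalization, every entry lies in [0, 1], and the best service on each criterion scores 1 regardless of whether the raw criterion was cost-type or benefit-type.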
Step 8. Calculate the correlation coefficient matrix.
In this step, we compute the correlation coefficient matrix C = [cjk]n×n of the normalized decision matrix R = [rij], where cjk reflects the correlation coefficient between criteria Qj and Qk and can be denoted as cjk = Σi (rij − r̄j)(rik − r̄k) / √(Σi (rij − r̄j)² · Σi (rik − r̄k)²) (11), where r̄j is the mean value of criterion Qj over all cloud services.
Step 9. Calculate the eigenvalues and eigenvector of the correlation coefficient matrix and find the principal component.
By solving the characteristic equation |C − λI| = 0, where C is the correlation coefficient matrix, we calculate the eigenvalues of C and sort them as λ1 ≥ λ2 ≥ … ≥ λn ≥ 0. We also calculate the eigenvector ui = (ui1, ui2, …, uin) corresponding to each eigenvalue λi, i = 1, 2, …, n, and normalize it so that Σj uij² = 1, where uij is the jth component of eigenvector ui.
Step 10. Compute the contribution ratio and cumulative contribution ratio of the principal components.
In this step, we calculate the contribution ratio and the cumulative contribution ratio using (12) and (13), respectively: ηi = λi / Σj λj (12) and η(p) = Σi=1..p λi / Σj λj (13). We select the first p principal components Y1, Y2, …, Yp whose cumulative contribution value exceeds a predefined threshold. Each principal component is found as Yi = R ui, where R is the normalized matrix. The final principal components Y1, …, Yp are used to replace the original QoS criteria Q1, …, Qn. These principal components are independent of each other and simplify the cloud service selection process while preserving accuracy. We then compute the comprehensive evaluation value of the principal components using the contribution ratios as weights.
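The component-selection rule might be sketched as follows, assuming a cumulative contribution threshold of 85% (a common choice, assumed here since the paper leaves the threshold unspecified); the eigenvalues are hypothetical:

```python
import numpy as np

def select_components(eigvals, threshold=0.85):
    """Keep the leading components whose cumulative contribution
    ratio first reaches the threshold."""
    eigvals = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    contrib = eigvals / eigvals.sum()            # contribution ratios (12)
    cumulative = np.cumsum(contrib)              # cumulative ratios (13)
    p = int(np.searchsorted(cumulative, threshold) + 1)
    return p, contrib, cumulative

# Hypothetical eigenvalues of an 8-criteria correlation matrix.
eigvals = [3.9, 2.1, 1.2, 0.4, 0.25, 0.1, 0.04, 0.01]
p, contrib, cum = select_components(eigvals)
print(p, np.round(cum, 3))
```

For these eigenvalues, the first three components already carry about 90% of the variance, so the eight correlated criteria collapse to three uncorrelated ones, matching the reduction seen in the case study.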
Step 11. Integrate the BWM weight of criteria with PCA and determine the final comprehensive value.
We calculate a weighted normalized matrix R′ = [wj rij], where wj is the weight of the jth QoS criterion determined using the BWM method. The principal components are then recomputed from this weighted matrix. Finally, we calculate the total comprehensive value determined by PCA as the contribution-weighted sum of the retained principal components. We sort the final values in descending order and rank the alternatives based on their comprehensive values; the cloud service with the highest comprehensive value is selected as the best cloud service.
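Putting the PCA-BWM steps together, the scoring stage could be sketched as below. The function name, threshold, and data are illustrative assumptions; the aggregation weights the retained principal components by their contribution ratios, as described above:

```python
import numpy as np

def pca_bwm_scores(norm_matrix, weights, threshold=0.85):
    """Sketch of the combined ranking step: apply BWM weights to the
    normalised decision matrix, project onto the retained principal
    components, and aggregate with the contribution ratios."""
    weighted = norm_matrix * weights             # BWM-weighted criteria
    corr = np.corrcoef(weighted, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = eigvals / eigvals.sum()
    p = int(np.searchsorted(np.cumsum(contrib), threshold) + 1)
    components = weighted @ eigvecs[:, :p]       # principal component scores
    return components @ contrib[:p]              # comprehensive value per service

rng = np.random.default_rng(1)
norm = rng.random((6, 4))                        # 6 services, 4 normalised criteria
w = np.array([0.4, 0.3, 0.2, 0.1])               # hypothetical BWM weights
scores = pca_bwm_scores(norm, w)
ranking = np.argsort(scores)[::-1]               # highest comprehensive value first
print(np.round(scores, 3), ranking)
```

The service at `ranking[0]` is the one the scheme would recommend; with real data, `norm` would come from the max-min normalization step and `w` from the BWM step.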
7. Case Study with Experiments Analysis
This section evaluates the proposed methodology’s efficacy using a real-world QoS dataset. Cloud services share many aspects with Web services, particularly in terms of quality of service (QoS); hence, we have utilized the publicly available QWS dataset as a benchmark for cloud services. This dataset was created by Eyhab Al-Masri at the University of Guelph. The QWS dataset has been widely accepted across the research community and used in evaluation studies of the QoS-based service selection problem. It includes 2507 real Web services with their quality values over nine parameters: response time, availability, throughput, successability, reliability, latency, best practices, compliance, and documentation. The dataset comprises a variety of Web services collected from the real Web using the Web Service Crawler Engine (WSCE); all of these Web services were obtained from publicly available sources on the Internet, such as service portals, search engines, and UDDI registries. In our experiments, we use only version 1.0 of the QWS dataset because it provides ratings and classification. Version 1.0 contains 364 Web services, each with a set of nine quality factors measured using commercial benchmark tools. Using the QWS dataset, the following subsection illustrates a case study, after which an experiment is carried out to test the practicality of the proposed methodology. The results of the experiment demonstrate that our methodology performs favorably against the compared methods.
7.1. An Illustrative Case Study
For this case study, we selected 10 alternative cloud services with the same functionality from the QWS dataset. Eight quality criteria are used to evaluate these services: response time (Q1), availability (Q2), throughput (Q3), successability (Q4), reliability (Q5), compliance (Q6), best practices (Q7), and latency (Q8). Of these, response time and latency are considered negative criteria, and the remaining ones are treated as positive criteria. For ease of discussion, we refer to the ten cloud services as CSP1, CSP2, …, CSP10. Table 5 shows the decision matrix with ten cloud service alternatives and eight QoS criteria.
7.1.2. Find Relative Weight of QoS Attributes Using BWM Method
At this point, we utilize the BWM to determine the relative importance of the eight quality of service criteria. The best and the worst criteria are determined from all QoS criteria with the help of the cloud user. Here, we assume that the best criterion is response time (Q1) and the worst criterion is throughput (Q3). A relative preference (between 1 and 9) is given for the best criterion over all other criteria (Q1, Q2, …, Q8), and a relative preference of the other criteria over the worst criterion is also provided, as shown in Table 6. We obtained the weights of the criteria using (4) and (5): Q1 = 0.276, Q2 = 0.174, Q3 = 0.028, Q4 = 0.116, Q5 = 0.063, Q6 = 0.109, Q7 = 0.152, and Q8 = 0.082, with ξ* = 0.042. Equation (7) is used to find the consistency ratio; the value CR = 0.01 indicates high consistency.
7.1.3. Application of PCA-BWM Method
Here, each QoS criterion is different in terms of unit and range. To remove inconsistencies in QoS information of cloud services, the normalization is performed by using the max-min normalization, and the positive and negative criteria are standardized separately with (8) and (9). The normalized matrix of these cloud services is shown in Table 7. The calculated correlation coefficient matrix between QoS criteria using (11) is shown in Table 8. This table shows a strong positive correlation (0.9769) between successability and availability while a negative linear correlation (−0.488) between response time and throughput.
Table 9 illustrates the eigenvalues of the correlation matrix, their contribution ratios, and the cumulative contribution ratio. Here, we can see that the cumulative contribution rate of the first three components (Figure 4) is high enough. Therefore, the first three principal components replace the original criteria for the comprehensive evaluation, as shown in Table 10. Thus, we successfully reduce the number of evaluation criteria from 8 to 3, and the retained components are independent of each other.
Now, the values of the three independent principal components are calculated as follows:
We then construct a new comprehensive evaluation function as the weighted sum of the three principal components, with each component weighted by its contribution ratio. The comprehensive value of each alternative is calculated with this evaluation function, and the results are shown in Table 11. The cloud service alternative with the highest comprehensive value is selected as the best cloud service alternative.
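Projecting the normalized matrix onto the retained components and combining them into a comprehensive value can be sketched as follows. The decision matrix here is illustrative, and weighting each component score by its (renormalized) contribution ratio is the common PCA-based evaluation scheme assumed in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
Xn = rng.random((6, 8))                     # illustrative normalized matrix

R = np.corrcoef(Xn, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3                                       # number of retained components
Xc = Xn - Xn.mean(axis=0)                   # center before projecting
F = Xc @ eigvecs[:, :k]                     # component scores F1..Fk per service
contrib = eigvals[:k] / eigvals[:k].sum()   # renormalized contribution weights
score = F @ contrib                         # comprehensive evaluation value
ranking = np.argsort(-score)                # best alternative first
```

The alternative at `ranking[0]` is the one with the highest comprehensive value, i.e., the selected cloud service.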
In this subsection, we carry out a set of experiments to evaluate the suitability of the proposed methodology. For these experiments, we generated synthetic data based on the QWS dataset by varying the number of cloud service providers and QoS criteria.
7.2.1. Comparison with Other Existing MCDM Methods
We compared our results with other popular MCDM methods, namely, PCA, BWM_TOPSIS, and AHP_TOPSIS [17, 33, 51]. Figure 5 shows the experimental results of the different methods. In most cases, the rankings produced by the proposed methodology closely match those obtained with the other techniques, which supports the accuracy of the proposed approach.
7.2.2. Measuring Execution Time with respect to the Number of QoS Attributes
This experiment evaluates the average execution time of the proposed methodology for different numbers of QoS criteria. The experimental results are shown in Figure 6. In this experiment, the number of QoS criteria varies between 3 and 20, and the number of cloud service providers varies between 10 and 35. The execution time increases only slowly with the number of cloud service providers, indicating that running time is weakly affected by this parameter. In contrast, Figure 6 shows that running time grows noticeably with the number of QoS criteria, because the covariance matrix and the priority weights are computed over all QoS criteria.
7.2.3. Reducing the Dimensionality of the Selection Criteria
A large number of QoS criteria are involved in the cloud service selection process, which is difficult to handle without computational support. The primary goal of the proposed OPTCLOUD framework is to reduce the dimensionality of the selection criteria without significant information loss and to keep the cloud service evaluation process simple. In this experiment, we use the data of eight cloud services from the QWS dataset. The experimental results are shown in Figure 7. Here, we can see that the number of principal components is always smaller than the number of original QoS criteria. These results confirm that the proposed methodology reduces the evaluation criteria and simplifies the cloud service selection process.
7.2.4. Sensitivity Analysis of Result
This subsection validates the robustness and efficiency of the suggested scheme using sensitivity analysis. To carry it out, we check how the ranking of cloud service providers may change under different weight values. We execute the whole process under various circumstances and determine the ranks of the cloud service providers for each case by evaluating the effect of changes in the criterion weights.
We conducted the sensitivity analysis by swapping the weight of each criterion with that of another criterion, creating fifteen distinct experiments, each with a unique name (E1–E15). In each experiment, we ran the proposed methodology on the data from our case study (Section 7) and recorded the resulting rankings. Figure 8 shows the outcomes of the 15 experiments. CSP3 emerged as the best service in 14 out of 15 experiments, and CSP2 held second place in 13 out of 15 experiments. The sensitivity analysis shows that the rank of a cloud service provider follows the weight of the associated criteria. Therefore, we can infer that the suggested method is reliable and ranks alternatives rationally in accordance with the preferences expressed by stakeholders.
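The pairwise weight-swapping procedure can be sketched as below. A simple weighted-sum score stands in for the full PCA-BWM pipeline, and the decision matrix and baseline weights are illustrative; with six criteria, pairwise swaps yield C(6, 2) = 15 experiments, matching the count reported above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
S = rng.random((5, 6))                  # 5 providers x 6 criteria (illustrative)
w = np.array([0.30, 0.25, 0.18, 0.12, 0.10, 0.05])  # baseline weights

def best_provider(weights):
    """Index of the top-ranked provider under a weighted-sum score."""
    return int(np.argmax(S @ weights))

baseline = best_provider(w)
experiments = []
for i, j in itertools.combinations(range(len(w)), 2):  # 15 swap experiments
    w2 = w.copy()
    w2[i], w2[j] = w2[j], w2[i]         # swap one pair of criterion weights
    experiments.append(best_provider(w2))

# fraction of experiments in which the baseline winner keeps first place
stability = sum(e == baseline for e in experiments) / len(experiments)
print(len(experiments), stability)
```

A high stability fraction indicates that the top-ranked provider is insensitive to perturbations of the criterion weights, which is the robustness property the analysis checks.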
Finding the best cloud service for users is challenging when many QoS criteria are involved. Most QoS criteria are correlated, a fact ignored by existing works. In this study, we analyzed the effects of correlation among QoS criteria and proposed a novel cloud service selection methodology that combines PCA and BWM. The proposed work differs from existing research in several ways. First, we reduce the number of QoS criteria to simplify the process of selecting cloud services. Second, the method removes the correlation between different QoS criteria and produces more reliable selection results. This contribution provides a new OPTCLOUD framework for the cloud service selection process. The proposed scheme demonstrates its feasibility and efficiency through a series of experiments with real datasets. Finally, we compared it with other methods to show that the proposed methodology outperforms them. However, the proposed work has a shortcoming: the methodology retains only 87.22% of the total information, which represents a significant loss. This opens a possible future extension of our work: improving the PCA step so that dimensionality is reduced while losing a minimum amount of information.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
All authors declare that they have no conflicts of interest.
References
P. Mell and T. Grance, The NIST Definition of Cloud Computing, NIST, MD, USA, 2011.
R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility,” Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
M. Armbrust, A. Fox, R. Griffith et al., “A view of cloud computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58, 2010.
R. Buyya, C. S. Yeo, and S. Venugopal, “Market-oriented cloud computing: vision, hype, and reality for delivering it services as computing utilities,” in Proceedings of the 2008 10th IEEE International Conference on High Performance Computing and Communications, IEEE, Dalian, China, September 2008.
M. Saha, S. K. Panda, and S. Panigrahi, “A hybrid multi-criteria decision making algorithm for cloud service selection,” International Journal of Information Technology, vol. 13, no. 4, pp. 1417–1422, 2021.
M. S. Kumar, A. Tomar, and P. K. Jana, “Multi-objective workflow scheduling scheme: a multi-criteria decision making approach,” Journal of Ambient Intelligence and Humanized Computing, vol. 12, no. 12, Article ID 10808, 2021.
A. Tomar and P. K. Jana, “A multi-attribute decision making approach for on-demand charging scheduling in wireless rechargeable sensor networks,” Computing, vol. 103, no. 8, pp. 1677–1701, 2021.
A. Tomar, R. Anwit, and P. K. Jana, “An efficient scheme for on-demand energy replenishment in wireless rechargeable sensor networks,” in Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 125–130, IEEE, Udupi, India, September 2017.
S. K. Pande, S. K. Panda, and S. Das, “A customer-oriented task scheduling for heterogeneous multi-cloud environment,” International Journal of Cloud Applications and Computing, vol. 6, no. 4, pp. 1–17, 2016.
S. K. Panda, I. Gupta, and P. K. Jana, “Allocation-aware task scheduling for heterogeneous multi-cloud systems,” Procedia Computer Science, vol. 50, pp. 176–184, 2015.
S. K. Garg, S. Versteeg, and R. Buyya, “A framework for ranking of cloud computing services,” Future Generation Computer Systems, vol. 29, no. 4, pp. 1012–1023, 2013.
R. R. Kumar, S. Mishra, and C. Kumar, “A novel framework for cloud service evaluation and selection using hybrid MCDM methods,” Arabian Journal for Science and Engineering, vol. 43, pp. 1–16, 2017.
R. K. Tiwari and R. Kumar, “G-TOPSIS: a cloud service selection framework using Gaussian TOPSIS for rank reversal problem,” The Journal of Supercomputing, vol. 77, no. 1, pp. 523–562, 2021.
U. Z. Rehman, F. K. Hussain, and O. K. Hussain, “Towards multi-criteria cloud service selection,” in Proceedings of the 2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, pp. 44–48, IEEE, Seoul, Korea (South), June 2011.
M. Whaiduzzaman, A. Gani, N. B. Anuar, M. Shiraz, M. N. Haque, and I. T. Haque, “Cloud service selection using multicriteria decision analysis,” The Scientific World Journal, vol. 2014, Article ID 459375, 10 pages, 2014.
G. Kang, J. Liu, M. Tang, and B. Cao, “Web service selection algorithm based on principal component analysis,” Journal of Electronics, vol. 30, no. 2, pp. 204–212, 2013.
L. Qi, W. Dou, and J. Chen, “Weighted principal component analysis-based service selection method for multimedia services in cloud,” Computing, vol. 98, no. 1-2, pp. 195–214, 2016.
K. Pearson, “Principal components analysis,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 6, no. 2, p. 559, 1901.
J. Rezaei, “Best-worst multi-criteria decision-making method,” Omega, vol. 53, pp. 49–57, 2015.
T. L. Saaty, “Decision-making with the AHP: why is the principal eigenvector necessary,” European Journal of Operational Research, vol. 145, no. 1, pp. 85–91, 2003.
T. L. Saaty and L. G. Vargas, “The analytic network process,” Decision Making with the Analytic Network Process, Springer, Berlin, Germany, pp. 1–40, 2013.
M. Behzadian, S. O. Khanmohammadi, M. Yazdani, and J. Ignatius, “A state-of the-art survey of TOPSIS applications,” Expert Systems with Applications, vol. 39, no. 17, Article ID 13069, 2012.
A. Afshari, M. Mojahed, and R. M. Yusuff, “Simple additive weighting approach to personnel selection problem,” International Journal of Innovation, Management and Technology, vol. 1, no. 5, p. 511, 2010.
J. S. Dyer, “MAUT—multiattribute Utility Theory,” The Measurement and Analysis of Housing Preference and Choice, Springer, Berlin, Germany, pp. 265–292, 2005.
B. Roy, “The outranking approach and the foundations of electre methods,” Readings in Multiple Criteria Decision Aid, Springer, Berlin, Germany, pp. 155–183, 1990.
C. Jatoth, G. R. Gangadharan, and U. Fiore, “Evaluating the efficiency of cloud services using modified data envelopment analysis and modified super-efficiency data envelopment analysis,” Soft Computing, vol. 21, no. 23, pp. 7221–7234, 2017.
L. Sun, J. Ma, Y. Zhang, H. Dong, and F. K. Hussain, “Cloud-FuSeR: fuzzy ontology and MCDM based cloud service selection,” Future Generation Computer Systems, vol. 57, pp. 42–55, 2016.
M. Lang, M. Wiesche, and H. Krcmar, “Criteria for selecting cloud service providers: a Delphi study of quality-of-service attributes,” Information & Management, vol. 55, no. 6, pp. 746–758, 2018.
M. Tang, X. Dai, J. Liu, and J. Chen, “Towards a trust evaluation middleware for cloud service selection,” Future Generation Computer Systems, vol. 74, pp. 302–312, 2017.
R. R. Kumar, B. Kumari, and C. Kumar, “CCS-OSSR: a framework based on hybrid MCDM for optimal service selection and ranking of cloud computing services,” Cluster Computing, vol. 24, no. 2, pp. 867–883, 2021.
S. Ding, Y. Li, D. Wu, Y. Zhang, and S. Yang, “Time-aware cloud service recommendation using similarity-enhanced collaborative filtering and ARIMA model,” Decision Support Systems, vol. 107, pp. 103–115, 2018.
A. Tripathi, I. Pathak, and D. P. Vidyarthi, “Integration of analytic network process with service measurement index framework for cloud service provider selection,” Concurrency and Computation: Practice and Experience, vol. 29, no. 12, p. e4144, 2017.
S. Singh and J. Sidhu, “Compliance-based multi-dimensional trust evaluation system for determining trustworthiness of cloud service providers,” Future Generation Computer Systems, vol. 67, pp. 109–132, 2017.
A. Jaiswal and R. Mishra, “Cloud service selection using TOPSIS and fuzzy TOPSIS with AHP and ANP,” in Proceedings of the 2017 International Conference on Machine Learning and Soft Computing, pp. 136–142, Ho Chi Minh City, Vietnam, January 2017.
F. Nawaz, M. R. Asadabadi, N. K. Janjua, O. K. Hussain, E. Chang, and M. Saberi, “An MCDM method for cloud service selection using a Markov chain and the best-worst method,” Knowledge-Based Systems, vol. 159, pp. 120–131, 2018.
M. Abdel-Basset, M. Mohamed, and V. Chang, “NMCDA: a framework for evaluating cloud computing services,” Future Generation Computer Systems, vol. 86, pp. 12–29, 2018.
N. Yadav and M. S. Goraya, “Two-way ranking based service mapping in cloud environment,” Future Generation Computer Systems, vol. 81, pp. 53–66, 2018.
C. Jatoth, G. Gangadharan, U. Fiore, and R. Buyya, “SELCLOUD: a hybrid multi-criteria decision-making model for selection of cloud services,” Soft Computing, vol. 23, no. 13, pp. 1–15, 2018.
H. Ma, Z. Hu, K. Li, and H. Zhu, “Variation-aware cloud service selection via collaborative QoS prediction,” IEEE Transactions on Services Computing, vol. 14, no. 6, 2019.
A. Hussain, J. Chun, and M. Khan, “A novel framework towards viable cloud service selection as a service (CSSaaS) under a fuzzy environment,” Future Generation Computer Systems, vol. 104, pp. 74–91, 2020.
R. R. Kumar, M. Shameem, and C. Kumar, “A computational framework for ranking prediction of cloud services under fuzzy environment,” Enterprise Information Systems, vol. 16, no. 1, pp. 1–21, 2021.
D. Lin, A. C. Squicciarini, V. N. Dondapati, and S. Sundareswaran, “A cloud brokerage architecture for efficient cloud service selection,” IEEE Transactions on Services Computing, vol. 12, no. 1, pp. 144–157, 2016.
N. Somu, G. R. M.R., K. Kirthivasan, and S. S. V.S., “A trust centric optimal service ranking approach for cloud service selection,” Future Generation Computer Systems, vol. 86, pp. 234–252, 2018.
N. Somu, K. Kirthivasan, and S. S. V.S., “A computational model for ranking cloud service providers using hypergraph based techniques,” Future Generation Computer Systems, vol. 68, pp. 14–30, 2017.
S. Ding, Z. Wang, D. Wu, and D. L. Olson, “Utilizing customer satisfaction in ranking prediction for personalized cloud service selection,” Decision Support Systems, vol. 93, pp. 1–10, 2017.
P. Wang and X. Du, “QoS-aware service selection using an incentive mechanism,” IEEE Transactions on Services Computing, vol. 12, no. 2, pp. 262–275, 2016.
F. Wagner, A. Klein, B. Klöpper, F. Ishikawa, and S. Honiden, “Multi-objective service composition with time- and input-dependent QoS,” in Proceedings of the 2012 IEEE 19th International Conference on Web Services, pp. 234–241, IEEE, HI, USA, June 2012.
Y. Luo, Y. Fan, and H. Wang, “Business correlation-aware modelling and services selection in business service ecosystem,” International Journal of Computer Integrated Manufacturing, vol. 26, no. 8, pp. 772–785, 2013.
E. Al-Masri and Q. H. Mahmoud, “The QWS Dataset,” 2008.
S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1-3, pp. 37–52, 1987.
P. You, S. Guo, H. Zhao, and H. Zhao, “Operation performance evaluation of power grid enterprise using a hybrid BWM-TOPSIS method,” Sustainability, vol. 9, no. 12, p. 2329, 2017.