Mobile Information Systems
Volume 2016 (2016), Article ID 2517052, 9 pages
http://dx.doi.org/10.1155/2016/2517052
Research Article

mCSQAM: Service Quality Assessment Model in Mobile Cloud Services Environment

Department of Computer Science and Engineering, Kyung Hee University, Yongin, Republic of Korea

Received 26 April 2016; Accepted 2 August 2016

Academic Editor: Yeong M. Jang

Copyright © 2016 Young-Rok Shin and Eui-Nam Huh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Cloud computing is a technology that extends existing IT capabilities and meets growing requirements. Recently, the cloud computing paradigm has been moving toward mobile environments with advances in mobile networks and personal devices. With the emergence of the mobile cloud concept, the number of providers offering various mobile cloud services has increased rapidly. Despite this development, most service providers deliver their services according to their own policies. In other words, quality criteria for mobile cloud service assessment have not yet been clearly established. To solve this problem, several studies have proposed models for service quality assessment, but they did not consider a sufficiently broad set of metrics. Even where existing work considers various metrics, it does not account for newly generated Service Level Agreements. In this paper, we propose a mobile cloud service quality assessment model called mCSQAM and verify it through case studies. To suit the mobile cloud, the proposed assessment model is adapted from ISO/IEC 9126, an international standard for software quality assessment. mCSQAM can assess service quality and determine the ranking of services. Furthermore, if a Cloud Service Broker incorporates mCSQAM, appropriate services can be recommended to service users based on user and service conditions.

1. Introduction

Cloud Service Providers (CSPs) should provide reliable and consistent quality of service to Cloud Service Customers (CSCs). For that reason, CSPs need a quality assessment model based on credible quality indicators that can be measured quantitatively. A mobile cloud service is a service that supports various activities on mobile devices, such as smartphones and tablet PCs, using cloud storage or computing resources [1]. Most early interest and development in cloud computing focused on computing resources for enterprises and research institutes. With advances in mobile networks and personal mobile devices, customer demand has grown rapidly for content sharing services such as social network services (SNS). In other words, the number of cloud-computing-based services is increasing rapidly, and it has become easy to use the internet in mobile service environments via smartphones [2].

Despite the development of mobile cloud services, problems remain. There are no standardized metrics or quality assessment model for mobile cloud services, so CSPs perform quality assessment according to their own policies. In this case, only providers benefit, because it is very difficult to assess service quality systematically. Therefore, it is necessary to measure service quality quantitatively using quality indicators and an assessment model for mobile cloud services.

In general, a mobile cloud service provides web-based applications to users. For that reason, the assessment indicators of ISO/IEC 9126, an international standard for software quality assessment, can be applied to mobile cloud services. We therefore determine service quality metrics according to the features of mobile cloud services, and we propose a service quality assessment model named the mobile cloud service quality assessment model (mCSQAM), which assesses service quality according to quality metric priorities and determines the ranking of services. mCSQAM can also recommend appropriate services, given user and service conditions, through a Cloud Service Broker (CSB) in a collaborative cloud computing environment. In this paper, we additionally perform an initial validation of the proposed model by evaluating a case study based on the mobile cloud service quality value (mCSQV).

The rest of this paper is organized as follows. Section 2 presents related work on mobile cloud quality assessment. Section 3 defines terms related to mobile cloud computing, namely mobile device, mobile cloud computing, and mobile cloud service. Section 4 reviews the international standard ISO/IEC 9126 and its quality model and metrics in order to select metrics and match them to the features of mobile cloud services. After reviewing and selecting the metrics from ISO/IEC 9126, we present our proposed service assessment model, mCSQAM, with case study examples in Section 5. Finally, we conclude with a summary and future work in Section 6.

2. Related Work

Although user-centric infrastructures have been built for the cloud, the concept and domain of cloud services are not exactly defined or standardized. For that reason, a Service Level Agreement (SLA) is generally used to guarantee the quality of cloud services. An SLA is the part of a standardized service contract in which a service is formally defined and particular aspects of the service are agreed between the CSP and CSC; a common feature of an SLA is a contracted delivery time. In South Korea, a standards body named the Telecommunications Technology Association (TTA) has defined an association standard for cloud computing SLAs. In that standard document, availability, performance, security, serviceability, and so forth are categorized as cloud service quality characteristics and suggested as quality metrics [3].

According to [3], many cloud computing features were defined and applied to cloud services. Even with these efforts from the TTA, some factors still do not apply exactly to cloud computing, and unbalanced contracts can result. Therefore, in this paper we propose a mobile cloud service quality assessment model that also takes the CSC's perspective. To address the aforementioned problem, we refer to the international standard ISO/IEC 9126, which defines a model for quality assessment of general software. Functionality, reliability, efficiency, usability, maintainability, and portability are its main characteristics for measuring and assessing software quality, and these 6 main characteristics include various subcharacteristics [4]. Although it has been developed systematically over a long period, the main purpose of ISO/IEC 9126 is software quality assessment, and because so many quality characteristic categories and metrics are defined, it is difficult to apply it directly to mobile cloud service quality assessment. For that reason, we propose mCSQAM, which considers mobile cloud service features while referring to ISO/IEC 9126.

Various studies on cloud service quality assessment are ongoing. ISO/IEC 25010 is used to establish a quality model in [5], and a service quality model was proposed in [6] that describes how well cloud services respond. Furthermore, several frameworks have been proposed for cloud service quality evaluation. References [7, 8] proposed frameworks of cloud service quality evaluation systems for activating the cloud service ecosystem and for service delivery. Another framework, named QoE4CLOUD [9], divides services into 4 layers to consider and assess quality. QoS and QoE metrics were defined in [10–13] using a quality model for SaaS cloud computing.

However, the existing studies have some problems. For quality assessment, a quality model must consider various metrics and scenarios; however, [6] considered only reliability for service quality assessment. Similarly, [7, 8] focused only on security. Furthermore, when performing quality assessment, the quality model has to reflect drastically changing service conditions and user requirements; however, [11, 12] do not consider newly generated SLAs, which will have different quality metrics and weights than previous ones. Also, [9, 10, 13] only suggested quality metrics for SaaS cloud computing without including a method for quality assessment. Thus, we propose a mobile cloud service quality assessment model named mCSQAM, together with suggested quality metrics based on the international standard ISO/IEC 9126.

3. Define Related Terms of Mobile Cloud Computing

In this section, we briefly and clearly define mobile device, mobile cloud computing, and mobile cloud service as follows.

3.1. Mobile Device

A mobile device is defined as a device that has mobility and portability and can generally access the internet. Mobile devices have limited hardware resources.

3.2. Mobile Cloud Computing

Mobile cloud computing refers to the overall technology for providing services from the cloud to mobile devices. The mobile cloud is generally composed of a Data Storage Server and a Data Processing Server; this configuration serves as the infrastructure. Although the mobile device has few resources itself, the service customer can use additional functions in the cloud server. Thus, the mobile device need only perform simple operations with its own resources. The following statements define mobile cloud computing:
(i) Mobile cloud computing is the technical and functional support for processing mobile cloud services.
(ii) The supporting components for the platform service are servers, storage, networks, controlling devices, and so forth.
(iii) There are several types, such as IaaS, PaaS, and SaaS, based on the range of support in the platform.

3.3. Mobile Cloud Service

CSCs can use a wide range of content and operational software on their mobile devices through cloud services via the internet. In this sense, the mobile cloud service denotes the manner and mode of service delivery through cloud infrastructure. Generally, a mobile cloud service requires assurance in functionality (suitable, interoperable services and devices, accurate service delivery, and secure information systems and communication), in efficiency (in-time response and resource provisioning on mobile nodes), in usability (an operable environment for mobile services), and in reliability (fault-tolerant services and resources). For that reason, we derive the features of mobile cloud services as in Figure 1 and use the derived features to determine the metrics for quality assessment in Section 4.

Figure 1: Features of mobile cloud service.

4. Define Quality Metrics for Service Assessment Model

In this section, we review the main characteristics of ISO/IEC 9126 for establishing a quality assessment model, and we determine the quality metrics for our mobile cloud service quality assessment model, which includes 4 main metrics and 8 submetrics drawn from the ISO/IEC 9126 quality model. Because the ISO/IEC 9126 quality model targets software, we need to transform it to consider the features of mobile cloud services. As a result, we finally determined the metrics for our quality model after matching them with the features of mobile cloud services, as in Figure 2; the following are descriptions of the finally determined metrics.

Figure 2: Mapping of mobile cloud service features and ISO/IEC 9126 quality characteristics.
4.1. Functionality

Functionality is a set of attributes that bear on the existence of a set of functions and their specified properties. In this paper, functionality, denoted by FU, is the metric for the degree to which the provided functionality meets expressed or implied needs under given conditions when services are provided. In other words, it is a metric of accuracy and suitability that measures whether mobile cloud services are correctly provided. According to user needs, a mobile cloud service is responsible for serving accurate outputs and making it easy to complete the function. We choose 4 submetrics: suitability (SU), accuracy (AC), interoperability (IO), and security (SEC).

Suitability is an attribute that bears on the presence and appropriateness of a set of functions for specified tasks. For calculating the value of this metric, we define a term SU denoting suitability as in the following equation:

Accuracy (accurateness) is an attribute that bears on the provision of right or agreed results or effects. For calculating the value of this metric, we define a term AC denoting accuracy as in the following equation:

Interoperability is an attribute that bears on its ability to interact with specified systems. For calculating the value of this metric, we define a term IO denoting interoperability as in the following equation:

Security is an attribute that bears on the ability to prevent unauthorized access or alteration, whether accidental or deliberate, to programs or data. For calculating the value of this metric, we define a term SEC denoting security, applied when a security problem occurs, as in the following equation:

4.2. Reliability

Reliability is a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time. Reliability, denoted by RE, matters because most mobile cloud services run on mobile devices while all user data is stored in cloud storage over the network; since mobile cloud services depend on network conditions, reliability is an important metric for service quality evaluation. Reliability has several submetrics, such as maturity, fault tolerance, and recoverability. Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of (one or more faults within) some of its components.

In the reliability, we define a term of FT denoting fault tolerance that can be calculated as in the following equation:

4.3. Usability

Usability is a set of attributes that bear on the effort needed for use and on the individual assessment of such use by a stated or implied set of users. Usability, denoted by US, is the degree to which software or a service can be used by specified users to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. For the mobile cloud service, usability is the metric for evaluating learnability, operability, understandability, and so forth. When consumers use mobile cloud services, the service must be easy to control and access and must satisfy the user. We choose only operability (OP) for mCSQAM. Operability is an attribute that bears on the users' effort for operation and operation control. As in the following equation, operability measures how many proper functions are provided to the user through mobile cloud service operation and control:

4.4. Efficiency

Efficiency is a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used under stated conditions. Efficiency is a metric measuring relative performance for the amount of service used under regulated conditions, and it assesses time behavior (TB) and resource utilization (RU) for mobile cloud services. Time behavior is an attribute that bears on response and processing times and on throughput rates in the performance of its function; this metric measures the ratio of execution time to total invocation time. We define a term TB that denotes time behavior to calculate the value of this metric as in the following equation:

Resource utilization is an attribute that bears on the amount of resources used and the duration of such use in performing its function. This metric measures the ratio of the amount of allocated resources to the predefined resources. For calculating the value of this metric, we define a term RU that denotes resource utilization as in the following equation:

When the mobile cloud service is provided, it must satisfy client requirements such as response time and throughput by utilizing resources well; service quality must not be allowed to degrade below the SLA because of poor resource provisioning. For that reason, time behavior and resource utilization are chosen for calculating the value of efficiency.
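Since the equations themselves are not reproduced here, the two ratio definitions described in prose above can be sketched as follows; the function and parameter names are our own illustrative choices, not from the original model:

```python
def time_behavior(execution_time, total_invocation_time):
    # TB: ratio of execution time to the total invocation time,
    # per the prose definition in Section 4.4.
    return execution_time / total_invocation_time

def resource_utilization(allocated_resources, predefined_resources):
    # RU: ratio of the amount of allocated resources to the
    # predefined (agreed) resources.
    return allocated_resources / predefined_resources

tb = time_behavior(30.0, 100.0)        # 0.3
ru = resource_utilization(6.0, 8.0)    # 0.75
```

Both values are dimensionless ratios in [0, 1] when the service stays within its agreed bounds, which makes them directly usable as normalized submetric scores.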

4.5. Portability and Maintainability

Portability is a set of attributes that bear on the ability of software to be transferred from one environment to another; it is the usability of the same software in different environments. The prerequisite for portability is a generalized abstraction between the application logic and system interfaces. When software or a service with the same functionality is produced for several platforms, portability is the key issue for reducing development cost. Its metrics evaluate the adaptability and installability of the mobile cloud service: the service must adapt effectively to various environments and devices, and it should be easy to install and remove. Maintainability is a set of attributes that bear on the effort needed to make specified modifications and is also an important metric for mobile cloud services; its submetrics include analyzability, changeability, stability, testability, and maintainability compliance. However, these submetrics are difficult to map to mobile cloud service requirements. For that reason, we exclude portability and maintainability from the quality evaluation model, mCSQAM.

5. The Proposed Method: mCSQAM

In this paper, we propose a quality assessment model to validate service quality and recommend the best service through cloud broker environments, which make it easier to deliver proper services to customers from among many cloud providers.

5.1. Analytic Hierarchy Process (AHP)

The Analytic Hierarchy Process (AHP) [14] is known as one of the effective Multicriteria Decision Making (MCDM) methods and was originally developed by Thomas L. Saaty. AHP selects among alternatives through reasoned evaluation, providing systematic analysis and stepwise derivation from pairwise comparisons of various measures. Using its mathematical methodology, AHP can consider not only quantitative evaluation measures but also qualitative assessment measures. Furthermore, it has been widely used for decision making with numerous, complex measures thanks to its simple calculation and ease of understanding.

To make a decision in an organized way and generate priorities, we decompose the decision into the following 4 steps:
(i) Define the problem and determine the kind of knowledge sought.
(ii) Structure the decision hierarchy from the top with the goal of the decision, then the objectives from a broad perspective, through the intermediate levels (criteria on which subsequent elements depend), to the lowest level, which is usually a set of alternatives.
(iii) Construct a set of pairwise comparison matrices. Each element in an upper level is used to compare the elements in the level immediately below with respect to it.
(iv) Use the priorities obtained from the comparisons to weigh the priorities in the level immediately below. Do this for every element. Then, for each element in the level below, add its weighted values to obtain its overall or global priority. Continue this process of weighting and adding until the final priorities of the alternatives in the bottommost level are obtained.
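The priority-derivation part of these steps can be sketched as follows. This is a minimal illustration using the standard column-normalization approximation of the AHP priority vector (not the exact procedure of this paper); the pairwise comparison values are illustrative assumptions chosen on Saaty's 1-9 scale:

```python
def ahp_priorities(matrix):
    # Approximate AHP priority vector: normalize each column of the
    # pairwise comparison matrix, then average each row.
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]

# Illustrative pairwise comparisons for FU vs. RE vs. US vs. EF,
# with FU judged twice as important as each of the others.
comparisons = [
    [1,   2,   2,   2],
    [1/2, 1,   1,   1],
    [1/2, 1,   1,   1],
    [1/2, 1,   1,   1],
]
weights = ahp_priorities(comparisons)  # [0.4, 0.2, 0.2, 0.2]
```

With these assumed judgments, the derived priorities happen to match the 0.4/0.2 weighting scheme used in the scenarios of Section 5.3.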

5.2. System Model for mCSQAM

Figure 3 shows the components of our system model: the Quality Monitor (QM), Quality Assessment Performer (QAP), Quality Balancer (QB), and SLA Generator. The role of the QM is to measure the given quality metrics, store them in a DB, and propagate the monitoring results to the QAP. After receiving the monitoring results, the QAP calculates service quality using the AHP method. Using the assessment result, the QB adjusts the quality metric weights to balance service quality. After the quality metric weights are determined, a new SLA is generated for use between the CSP and CSC; this newly generated SLA is also used in the next quality assessment.

Figure 3: Components of mCSQAM.
5.3. Scenarios for Evaluation of mCSQAM

We evaluate our model and show the quality assessment results using generated service scenarios whose services have different quality-related components, as shown in Figure 4. For the case study, we assume that the service quality value of each mobile cloud service is as schematically given in Figure 4. We also assume that each service has different weights so that the case study can assess services in detail; thus FU, RE, US, and EF have different weight values according to user requirements. Under these assumptions, we can find which metric most affects the quality of the 4 mobile cloud services and compare them relatively.

Figure 4: Quality measure value for each mobile cloud service.

The following steps are showing how we conduct quality assessment procedure.

Step 1 (applying weights to each quality metric). We assign a weight value to each submetric of Functionality (FU) as shown in Table 1.

Table 1: Weight value for submetrics of FU.

The submetrics of efficiency (EF) are likewise assigned weights as shown in Table 2.

Table 2: Weight value for submetrics of EF.

Reliability (RE) and Usability (US) each have just 1 submetric, so the weight of that submetric is 1. Although weight values can vary depending on user requirements, it is difficult to choose weights that suit all general cases; thus, in this paper we assume the metrics have fixed weights set at design time. However, in order to compare the quality assessment results in different cases and apply the model properly to real cases, we evaluate our model by assigning different weight values in 4 different scenarios. To generate the scenario cases, we assign weight values to the 4 main metrics, FU, RE, US, and EF, such that the most important one is set to 0.4 and the others are set to 0.2 in each case. We thus have 4 differently weighted scenario services, as shown in Figure 4. After applying the weights, the mCSQV (mobile cloud service quality value) is finally calculated as the product of the service quality measure values and weight values, as in the following equation:
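Since the equation is not reproduced here, the calculation described in prose, taking the products of per-metric quality values and weights and accumulating them, can be sketched as a weighted sum; this form is our assumption, and the sample numbers are illustrative only:

```python
def mcsqv(quality_values, weights):
    # mCSQV assumed as the weighted sum of per-metric quality values:
    # sum over metrics of q_i * w_i.
    return sum(q * w for q, w in zip(quality_values, weights))

# Illustrative values for (FU, RE, US, EF) with the FU-heavy
# 0.4/0.2/0.2/0.2 weighting used in the scenarios.
score = mcsqv([0.4, 0.2, 0.1, 0.3], [0.4, 0.2, 0.2, 0.2])  # 0.28
```

Because the weights sum to 1, the resulting mCSQV stays on the same scale as the individual quality values, which keeps scores comparable across services.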

Step 2 (calculating the result after applying weights to submetrics). The submetrics of FU have the weight values in Table 1. In the evaluation, the weight values of the submetrics are determined randomly, as shown in (10); our model can thus adapt to dynamic changes in user or service requirements. After applying the above settings, we obtain the quality values in the following equation for the 4 scenario cases. Considering only functionality (FU), Service 2 provides the best quality, and the results rank the mobile cloud services as Service 2 > Service 4 > Service 1 > Service 3:

The weight of FT, the submetric of RE, is 1, since RE has only one submetric. After calculation for this case, the services have values of 0.212, 0.364, 0.152, and 0.273. If a user or service considers only reliability, the best service is Service 2, and the mobile cloud services are ranked as Service 2 > Service 4 > Service 1 > Service 3:

The weight of OP, the submetric of US, is 1, as it is a single submetric. After calculation for this case, the services have the values in the following equation for the 4 scenario cases. If the user or service considers only usability, the best service is Service 3, and the mobile cloud services are ranked as Service 3 > Service 4 > Service 2 > Service 1:

The value 0.3 is assigned as the weight of TB, and 0.4 as the weight of RU. After calculation for this case, the services have the values in the following equation for the 4 scenario cases. Considering only efficiency, Service 1 has the best quality, and the mobile cloud services are ranked as Service 1 > Service 2 > Service 4 > Service 3:

Figure 5 shows the quality values after applying the submetric weights for the 4 scenarios. We can see in Figure 5 that Service 2 is best for FU and RE, Service 3 for US, and Service 4 for EF. However, Figure 5 determines the ranking of services by considering only one main quality metric at a time. To obtain a comprehensive quality assessment result and select the best service, we also need to consider different weights for the main quality metrics, FU, RE, US, and EF.

Figure 5: mCSQV comparison after applying submetrics weight.

Step 3 (calculating the result after applying weights to the main metrics). To calculate the final service assessment value, we construct a matrix from the results of the previous steps, as in the following matrix; after multiplying the main-metric weights by this matrix, we obtain the final service assessment values:

To calculate the final service quality assessment, we assigned different weights for each case according to user requirements: a user who considers functionality most important gives FU the highest weight for the final assessment; likewise, a user who wants a reliable service sets RE's weight highest, and usability and efficiency are handled in the same way. Figure 6 shows the assigned weights for each case: the weight of the metric considered most important is 0.4, twice as large as the others, and the weights of the others are set equally to 0.2.

Figure 6: Main metrics weight for each case.

When functionality is considered the most important metric (Case_FU in Figure 6), only FU's weight is 0.4 and the others are 0.2. To get the final service quality assessment value, we multiply the weight vector by the matrix as follows:

The result of the quality assessment: the best service is Service 2, and the mobile cloud services are ranked as Service 2, Service 4, Service 1, and Service 3. In other words, if functionality is considered in the service selection process, users should choose Service 2.
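The multiplication in Step 3 can be sketched as follows. The per-metric matrix values here are hypothetical (the paper's matrix is not reproduced), except the RE column, which uses the values reported in Step 2; the other columns are chosen so that each single-metric ranking from Step 2 holds and the Case_FU weighting reproduces the ranking stated above:

```python
def final_scores(metric_matrix, weight_vector):
    # Multiply the (services x main metrics) result matrix from Step 2
    # by the main-metric weight vector: one score per service.
    return [sum(q * w for q, w in zip(row, weight_vector))
            for row in metric_matrix]

# Hypothetical per-metric scores (FU, RE, US, EF) for Services 1-4.
matrix = [
    [0.23, 0.212, 0.21, 0.30],   # Service 1
    [0.33, 0.364, 0.25, 0.28],   # Service 2
    [0.18, 0.152, 0.29, 0.19],   # Service 3
    [0.26, 0.273, 0.26, 0.23],   # Service 4
]
case_fu = [0.4, 0.2, 0.2, 0.2]   # Case_FU weights from Figure 6
scores = final_scores(matrix, case_fu)
ranking = sorted(range(1, 5), key=lambda s: -scores[s - 1])
# ranking == [2, 4, 1, 3]
```

The remaining cases (Case_RE, Case_US, Case_EF) are the same call with the 0.4 entry of the weight vector moved to the corresponding metric.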

If a user considers reliability the most important, the weights are set as in Case_RE in Figure 6. As a result, Service 2 again has the best quality assessment value, 0.306. The assessment ranks the mobile cloud services as Service 2 (0.306), Service 4 (0.267), Service 1 (0.221), and Service 3 (0.205).

The third case is for usability, and the weight vector is set to Case_US in Figure 6. As the result of the quality assessment, Service 2 is ranked first with 0.283, Service 4 second with 0.266, and Services 3 and 1 third and fourth with 0.235 and 0.215, respectively:

The last case considers efficiency the most important; its weight vector is Case_EF in Figure 6. In this case, Service 2 again shows the best quality in the assessment result, and the mobile cloud services are ranked as Service 2, Service 4, and Service 3:

From the previous results, including the cases with a single metric weight set to 1, we can observe that it is difficult to determine synthetically which service is the best. So we applied different weights for each case, which yields the mobile cloud service quality assessment results shown in Figure 7. As the comprehensive assessment result, Service 2 shows the best quality in all cases.

Figure 7: Service ranking for each case.

Many quality models have been proposed for measuring cloud services, so we compared our proposed mCSQAM with existing quality evaluation methods and models. Table 3 shows the comparison results among quality models.

Table 3: Comparison of quality measuring methods.

A quality model was mentioned in [9, 10]; however, those works did not include metric definitions for the quality model, so they cannot measure service quality accurately and in detail. Other studies [7, 8, 11, 12] defined 2-10 quality measures for their models but focused on only one characteristic. In contrast, our proposed model, mCSQAM, includes 8 submetrics within 4 main metrics.

Our proposed model, mCSQAM, also categorizes the quality measuring levels into 4 main metrics and 8 submetrics. By applying this hierarchical architecture, we expect quality assessment to produce more accurate results. In contrast, no other model considers a hierarchical architecture for quality assessment except QoE4CLOUD [9]. QoE4CLOUD was proposed as a framework with 4 layers: System/Hardware QoS, Network QoS, Application QoS (QoE), and Business QoS (QoBiz). Even though the layers separate the various QoS concerns, the framework is not sufficient to assess service quality because its metrics are not clearly defined.

Some studies [6–8] did not consider the mobile cloud service environment. Even though other studies [9–13, 15] considered cloud services in general, they focused only on SaaS or IaaS, excluding the mobile environment. Our model is therefore appropriate for assessing the quality of mobile cloud services.

6. Conclusion and Future Work

Cloud computing has become an important paradigm and is shifting toward the mobile cloud with the spread of mobile networks. Currently, many Cloud Service Providers offer different services with different quality attributes under their own policies. With the growing number of cloud offerings, some research has addressed the quality assessment of cloud services; however, most quality assessment research does not consider the characteristics of the mobile environment.

To solve the aforementioned problem, we determined quality metrics from ISO/IEC 9126 according to the properties of mobile cloud services. ISO/IEC 9126 is an international standard for software quality assessment; however, because it is difficult to apply directly to mobile cloud services, we propose a mobile cloud service quality assessment model named mCSQAM that transforms the ISO/IEC 9126 quality model.

This work presents the first architecture, mCSQAM, to systematically measure the quality metrics selected in Section 4 and rank mobile cloud services based on these metrics. To verify our quality assessment model, we proposed an Analytic Hierarchy Process (AHP) based method that can assess mobile cloud services under differing quality requirements.

We believe mCSQAM represents a significant step toward enabling accurate quality measurement, and we expect that mCSQAM combined with a Cloud Service Broker can provide service recommendations to Cloud Service Customers through appropriate mobile cloud service selection. However, the quality metrics in our proposed model are measured quantitatively on the system side; for that reason, our model needs to be extended and supplemented with qualitative assessment in the near future. To this end, we will consider the Service Measurement Index (SMI) from the Cloud Service Measurement Initiative Consortium.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research was supported by the MSIP (Ministry of Science, ICT & Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2016-(H8501-16-1015)) supervised by the IITP (Institute for Information & communications Technology Promotion).

References

  1. X. Li, H. Zhang, and Y. Zhang, “Deploying mobile computation in cloud service,” in Proceedings of the 1st International Conference on Cloud Computing (CloudCom '09), pp. 301–311, Beijing, China, December 2009.
  2. N. Fernando, S. W. Loke, and W. Rahayu, “Mobile cloud computing: a survey,” Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
  3. Telecommunications Technology Association (TTA), “Quality Factor for Establishing Cloud Computing Service Level Agreement,” 2010.
  4. International Organization for Standardization (ISO), “ISO/IEC 9126: Information Technology—Software Quality Characteristics and Metrics,” 1997.
  5. A. Ravanello, J.-M. Desharnais, L. E. B. Villalpando, A. April, and A. Gherbi, “Performance measurement for cloud computing applications using ISO 25010 standard characteristics,” in Proceedings of the Joint Conference of the 24th International Workshop on Software Measurement (IWSM '14) and the 9th International Conference on Software Process and Product Measurement (Mensura '14), pp. 41–49, Rotterdam, The Netherlands, October 2014.
  6. Z. Raghebi and M. R. Hashemi, “A new trust evaluation method based on reliability of customer feedback for cloud computing,” in Proceedings of the 10th International ISC Conference on Information Security and Cryptology (ISCISC '13), pp. 1–6, IEEE, Yazd, Iran, August 2013.
  7. H. Jeon and K.-K. Seo, “A framework of cloud service quality evaluation system for activating cloud service ecosystem,” Advanced Science and Technology Letters, vol. 35, pp. 97–100, 2013.
  8. H. Jeon, Y.-G. Min, and K.-K. Seo, “A framework of performance measurement of cloud service infrastructure system for service delivery,” in Advanced Science and Technology Letters (Cloud and Super Computing 2014 Conference), vol. 46, pp. 142–145, December 2014.
  9. E. Kafetzakis, H. Koumaras, M. A. Kourtis, and V. Koumaras, “QoE4CLOUD: a QoE-driven multidimensional framework for cloud environments,” in Proceedings of the International Conference on Telecommunications and Multimedia (TEMU '12), pp. 77–82, Chania, Greece, August 2012.
  10. S. Shah and S. Buch, “Identification of cloud computing service quality indicators with its expected involvement in cloud computing services and its performance issues,” International Journal on Recent and Innovation Trends in Computing and Communication, vol. 3, no. 7, pp. 4569–4572, 2015.
  11. G. Copil, D. Trihinas, H.-L. Truong et al., “ADVISE—a framework for evaluating cloud service elasticity behavior,” in Proceedings of the 12th International Conference on Service-Oriented Computing (ICSOC '14), pp. 275–290, Paris, France, November 2014.
  12. G. Copil, H.-L. Truong, D. Moldovan et al., “Evaluating cloud service elasticity behavior,” in Proceedings of the 12th International Conference on Service-Oriented Computing (ICSOC '14), pp. 275–290, November 2014.
  13. S. Al-Shammari and A. Al-Yasiri, “Defining a metric for measuring QoE of SaaS cloud computing,” in Proceedings of the 15th Annual Post Graduate Symposium on the Convergence of Telecommunications, Networking and Broadcasting (PGNET '14), Liverpool, UK, June 2014.
  14. T. L. Saaty, “Decision making with the analytic hierarchy process,” International Journal of Services Sciences, vol. 1, no. 1, pp. 83–98, 2008.
  15. S. Banerjee and S. Jain, “A survey on Software as a Service (SaaS) using quality model in cloud computing,” International Journal of Engineering and Computer Science, vol. 3, no. 1, pp. 3598–3602, 2014.