Mathematical Problems in Engineering

Volume 2014, Article ID 141930, 9 pages

http://dx.doi.org/10.1155/2014/141930

## Research on Virtual Machine Response Time Prediction Method Based on GA-BP Neural Network

College of Information Science & Technology Engineering, Northeastern University, Shenyang, China

Received 17 March 2014; Revised 18 May 2014; Accepted 1 June 2014; Published 17 June 2014

Academic Editor: Qingsong Xu

Copyright © 2014 Jun Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Cloud applications provide access to a large pool of virtual machines for building high-quality applications that satisfy customers' requirements. A difficult issue is how to predict virtual machine response time, because it determines when dynamically scalable virtual machines should be adjusted. To address this critical issue, this paper proposes a virtual machine response time prediction method based on a genetic algorithm-back propagation (GA-BP) neural network. First, we predict component response times from past virtual machine component usage data: the number of concurrent requests and the response time. Then, we predict the virtual machine service response time. The results of large-scale experiments show the effectiveness and feasibility of our method.

#### 1. Introduction

A critical issue that cloud application providers must solve is how to allocate and manage virtual resources, because virtual machines scale dynamically in the cloud environment [1]. Cloud application customers expect to receive a certain level of service performance as specified in the SLA (service level agreement) [2]. Response time is an important indicator in the SLA. Providers dynamically adjust resources by checking whether the service response time exceeds the threshold set by the SLA. However, this may lead to hysteresis and cause frequent adjustment of resources. A dynamic resource allocation scheme instead adjusts resources only when multiple predicted response time values exceed the SLA threshold. Therefore, predicting service component response time has high reference value for determining the right time to adjust resources. Traditional service component response time prediction methods do not consider the dynamic scalability of the cloud environment [3]; therefore, they are not applicable to service component response time prediction in the cloud. It has become an urgent task to explore an effective virtual machine service component response time prediction method.

In this paper, we adopt the GA-BP neural network model to predict virtual machine component response time. It does not need an explicit mathematical model of the relationship between component concurrent requests and response time. The model only requires training to establish the mapping relationship in the network, similar to a "black box" method. The BP neural network has several advantages: nonlinear mapping ability, a reduced error between predicted and real values, and an increased speed of system processing [4]. The GA-BP neural network model overcomes the BP algorithm's problems of slow convergence and local optimal solutions, and it overcomes the genetic algorithm's limitation of reaching only an approximate optimum in a short period. Thus the model has not only the self-learning ability of the neural network but also the global searching ability of the genetic algorithm.

In this paper, we propose a virtual machine service component response time prediction model based on the GA-BP neural network. We collect three kinds of data: the number of component concurrent requests, the initial configuration parameters of the virtual machine, and the component response times. Then, we predict the total number of component concurrent requests according to the weight of every virtual machine. Finally, we predict the service component response time on the virtual machine.

The experiments indicate that the GA-BP network model is more effective than the traditional BP network model and that the service component response time prediction algorithm based on the GA-BP network is more accurate than typical algorithms. The algorithm can serve as a reference for choosing the time of resource adjustment decisions.

The rest of the paper is organized as follows. Section 2 gives a brief overview of existing research related to service response time model and BP neural network. Section 3 presents component response time prediction algorithm in detail. Section 4 shows the experimental results and Section 5 concludes the paper.

#### 2. Related Work

At present, research on service response time prediction lies mainly in the field of Web services. The method in [5] uses time series analysis and achieved good service response time prediction results in the fields of Web services and grid computing. However, it does not apply to cloud applications, because cloud applications are dynamically scalable and the number of virtual machines and resources is constantly changing [3], which breaks the developing pattern of service response time. What is more, the method needs to build a model of response time, which degrades performance.

The method in [6] uses queuing theory to predict response time. Queuing theory is a mathematical theory for studying stochastic service systems [7]. The work in [8] builds a target system model, a simplified system, and an abstract system. It assumes that arrivals obey a Poisson process and that service times obey an exponential distribution. In fact, the real environment does not completely obey these assumptions.

The BP neural network is a common method of training artificial neural networks to predict service response time [4]. The work in [9] uses a BP neural network to predict time delay. The network learns from many inputs for a desired output. It is effective at nonlinear approximation, but the convergence of the algorithm is very slow and it may suffer from the local minimum problem. The genetic algorithm is a search heuristic that mimics the process of natural selection [10]. It overcomes the BP algorithm's problems of slow convergence and local minima.

Previous studies have used the genetic algorithm and BP neural network to predict traffic and futures prices, but they have not been used to predict virtual machine service response time in the cloud. Therefore, this paper proposes a response time prediction method based on the GA-BP neural network. It overcomes the shortcomings of previous work, which could not adapt to dynamic resource scalability or achieve sufficient accuracy. Moreover, it can satisfy the service performance specified in the SLA.

#### 3. Virtual Machine Response Time Prediction Model

##### 3.1. Cloud Application Architecture

Figure 1 shows the cloud application architecture. A service can be implemented by combining several virtual machines in the cloud application. The same set of components is deployed on several virtual machines, and a load-balanced virtual resource pool is composed of several virtual machines. QoS is affected by the performance of other components on the same virtual machine and by the performance of other virtual machines in the cloud application.

##### 3.2. Response Time Prediction Model

In this section, we propose a virtual machine service component response time prediction model based on the GA-BP neural network to handle the random volatility of concurrent requests and the dynamic scalability of virtual machines. Figure 2 shows the response time prediction procedure. First, we predict every component's response time on the virtual machine. Then we predict the virtual machine's total service component response time.

The virtual machine component response time prediction process is divided into two situations:

(1) the collected data can predict the response time of an old virtual machine (one that has been used before);

(2) the collected data cannot directly predict the response time of a new virtual machine (one that has not been used before).

There are three steps to predict the new virtual machine component response time.

*Step 1.* Collect the number of component concurrent requests and the component response times.

*Step 2.* Use the historical data to find the old virtual machine that is most similar to the new virtual machine.

*Step 3.* Put the number of component concurrent requests into the prediction model and calculate the new virtual machine's component response time.

The prediction method for an old virtual machine is similar to that for a new one; because the old virtual machine already has a lot of historical data for predicting component response time, we omit the second step (the similarity search). After predicting the component response times for the next period, we calculate the service response time by weighting every virtual machine's component response times.

#### 4. Component Response Time Prediction Model

In this section, we propose a virtual machine component response time prediction model based on the GA-BP neural network. In Section 4.1, we use a similarity computation model to find the most similar virtual machine and use it to learn the relationship between component concurrent requests and component response time. In Section 4.2, we determine the structure of the BP neural network, its critical parameters, and the genetic operators. In Section 4.3, we set up the genetic algorithm parameters to predict the component response time accurately. In Section 4.4, the service component response time is calculated.

##### 4.1. Virtual Machine Configuration Similarity Computation Model

Suppose there are $n$ old virtual machines. If we add a new virtual machine, its configuration data will be put into the database. We choose performance parameters to calculate similarity. The set of old virtual machines is denoted as $S = \{VM_1, VM_2, \ldots, VM_n\}$. The performance parameters of virtual machine $VM_i$ are denoted as $p_{i1}, p_{i2}, \ldots, p_{im}$, $i = 1, 2, \ldots, n$. The original virtual machine characteristic matrix is denoted as follows:

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1m} \\ p_{21} & p_{22} & \cdots & p_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nm} \end{bmatrix}.$$

Performance parameters include CPU speed, memory capacity, I/O rate, and network bandwidth. We normalize the virtual machine performance parameter configuration matrix and use the weighted Euclidean distance to calculate the similarity of two virtual machines. The computation formula is denoted as follows:

$$d(VM_i, VM_j) = \sqrt{\sum_{k=1}^{m} w_k \left( p_{ik} - p_{jk} \right)^2},$$

where $w_k$ is the weight of the $k$th performance parameter.
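The similarity computation can be sketched in Python as follows. This is a minimal illustration of the weighted Euclidean distance above, not the authors' implementation; the function names and the convention "smaller distance means more similar" are assumptions drawn from the surrounding text.

```python
import numpy as np

def weighted_similarity(vm_a, vm_b, weights):
    """Weighted Euclidean distance between two normalized VM
    configuration vectors (CPU speed, memory, I/O rate, bandwidth).
    A smaller distance means the two machines are more similar."""
    vm_a, vm_b, weights = map(np.asarray, (vm_a, vm_b, weights))
    return float(np.sqrt(np.sum(weights * (vm_a - vm_b) ** 2)))

def most_similar_vm(new_vm, old_vms, weights):
    """Return the index of the old VM with the smallest weighted
    distance to the new VM's normalized configuration vector."""
    dists = [weighted_similarity(new_vm, vm, weights) for vm in old_vms]
    return int(np.argmin(dists))
```

For example, with equal weights over four normalized parameters, the old VM whose configuration vector matches the new VM exactly is selected as the most similar one.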

We calculate the similarity to find the most similar virtual machine, because we need a lot of data to train the relationship between the number of virtual machine component concurrent requests and the component response time.

##### 4.2. BP Neural Network Construction

Suppose $n$ components are deployed on the virtual machine and there are $N$ sample groups. The relationship to be learned is the mapping from the numbers of component concurrent requests to the component response times.

The network has multiple inputs and multiple outputs: the inputs are the numbers of component concurrent requests and the outputs are the component response times. Let the hidden layer have $h$ neurons and the input vector be $x = (x_1, x_2, \ldots, x_n)$. The weight from input node $i$ to hidden node $j$ is $w_{ij}$ and the hidden-layer threshold is $\theta_j$; the weight from hidden node $j$ to output node $k$ is $v_{jk}$ and the output-layer threshold is $\gamma_k$. With hidden-layer activation function $f_1$ and output-layer activation function $f_2$, the output of hidden node $j$ is

$$h_j = f_1\left( \sum_{i=1}^{n} w_{ij} x_i - \theta_j \right),$$

and the output of the BP neural network is

$$y_k = f_2\left( \sum_{j=1}^{h} v_{jk} h_j - \gamma_k \right).$$

Before building the BP neural network for predicting component response times, we need to determine the topology structure and critical parameters of the network.

*Layer*. Hecht-Nielsen proved that a BP neural network with one hidden layer can approximate any continuous function on a closed interval [11]. We use a BP neural network with one hidden layer.

*Hidden Nodes*. The number of hidden-layer neurons affects the accuracy and speed of solving the problem [12]. We use the cut-and-trial method to determine the number of hidden-layer neurons, starting from the empirical formula

$$h = \sqrt{m + n} + a,$$

in which $m$ and $n$ are the numbers of input and output nodes (both equal to the number of components) and $a$ is a constant.
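The empirical formula gives only a starting point for the cut-and-trial search. A one-line sketch, assuming the common convention that $a$ is a small integer:

```python
import math

def hidden_neurons(n_inputs, n_outputs, a=4):
    """Empirical starting point for the hidden-layer size:
    h = sqrt(n_inputs + n_outputs) + a, where a is a small constant.
    The result is then refined by cut-and-trial around this value."""
    return round(math.sqrt(n_inputs + n_outputs) + a)
```

In practice one would train networks with hidden-layer sizes in a window around this estimate and keep the size with the smallest validation error, as the paper does in Table 1.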

*Activation Function*. The hidden layer uses the *tansig* sigmoid function, and the output layer uses the linear *purelin* function. *Tansig* is denoted as follows:

$$f_1(x) = \frac{2}{1 + e^{-2x}} - 1.$$

Its first-order derivative is denoted as follows:

$$f_1'(x) = 1 - f_1(x)^2.$$

The *purelin* function is denoted as follows:

$$f_2(x) = x.$$
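The two transfer functions and the tansig derivative can be written directly from the formulas above; this sketch mirrors the MATLAB names *tansig* and *purelin* used in the text.

```python
import numpy as np

def tansig(x):
    """Hyperbolic-tangent sigmoid used in the hidden layer:
    f1(x) = 2 / (1 + exp(-2x)) - 1, mathematically equal to tanh(x)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def tansig_deriv(x):
    """First derivative of tansig: f1'(x) = 1 - f1(x)^2."""
    y = tansig(x)
    return 1.0 - y * y

def purelin(x):
    """Linear transfer function used in the output layer: f2(x) = x."""
    return x
```

The linear output layer lets the network emit unbounded response-time values while the bounded tansig hidden layer supplies the nonlinearity.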

*Initial Weight and Threshold*. The initial weights and thresholds affect convergence ability and training time. In this paper, the initial weights and thresholds are random numbers between −1 and 1.

*Learning Rate*. The learning rate decides the weight variation in each training cycle. We choose 0.01 as the learning rate to ensure network stability.

*Learning Algorithms*. The standard BP algorithm suffers from slow convergence and local minima. In this paper, we use a fast learning algorithm based on L-M (Levenberg-Marquardt) [13].

The error index function is $E(w) = \frac{1}{2} \sum_i e_i^2(w)$, where $w_k$ is the network weight vector at iteration $k$, and the weight adjustment formula is

$$w_{k+1} = w_k - \left( J^T J + \mu I \right)^{-1} J^T e.$$

Here $J$ is the Jacobian matrix of the network errors differentiated with respect to the weights, $e$ is the network error vector, $I$ is the unit matrix, and $\mu$ is a coefficient that is adjusted adaptively.

If the error index function decreases, we accept $w_{k+1}$ and decrease $\mu$; otherwise we discard the step and increase $\mu$.
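A single L-M weight update can be sketched from the adjustment formula above. This is an illustrative step only (the adaptive control of $\mu$ and the Jacobian computation are omitted); the function name is an assumption.

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt weight update:
    delta_w = -(J^T J + mu * I)^(-1) J^T e,
    where J is the Jacobian of the network errors with respect to the
    weights, e is the error vector, and mu is the damping coefficient.
    Large mu behaves like gradient descent; small mu like Gauss-Newton."""
    JtJ = J.T @ J
    g = J.T @ e                       # gradient direction J^T e
    A = JtJ + mu * np.eye(JtJ.shape[0])
    return -np.linalg.solve(A, g)     # weight increment w_{k+1} - w_k
```

After each step, the training loop would evaluate the error index: if it decreased, keep the new weights and shrink `mu`; otherwise restore the old weights and grow `mu`.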

The set of sample pairs of component concurrent request counts and component response times is $\{(x_i, y_i)\}$, $i = 1, 2, \ldots, N$. The total samples are divided into $N_1$ training samples and $N_2$ testing samples:

$$N = N_1 + N_2.$$

First of all, we train on the $N_1$ groups of component concurrent requests and response times to establish the mapping relation. Then, we use the numbers of component concurrent requests from group $N_1 + 1$ to group $N$ to predict the component response times.

The training sample network error is denoted as follows:

$$e_i = y_i - \hat{y}_i.$$

The network total error must be less than $\varepsilon_1$:

$$E = \frac{1}{2} \sum_{i=1}^{N_1} e_i^2 < \varepsilon_1.$$

The mean square error of the testing samples must be less than $\varepsilon_2$:

$$\frac{1}{N_2} \sum_{i=N_1+1}^{N} e_i^2 < \varepsilon_2.$$

Because we are not sure of the exact numbers of concurrent requests and the sample response times, even if $\varepsilon_1$ is small we still cannot guarantee that the testing error meets the $\varepsilon_2$ requirement.

##### 4.3. Genetic Algorithm Parameters Setup

The genetic algorithm optimizes the BP network's component response time prediction weights to overcome some shortcomings of the BP neural network, so that we can accurately predict component response time. Optimizing the network weights with the genetic algorithm requires designing the following main parameters.

*Encoding System*. There are two chromosome encoding methods: binary encoding and real encoding. Real encoding is shorter than binary encoding, and it does not need to switch back and forth between encoding and decoding [14, 15]. We use real encoding to improve computation efficiency.

*Chromosome*. The chromosome is the initial connection weight and threshold vector of the BP neural network. For a BP network structure of $n$-$h$-$m$, the numbers of weights are $nh$ (input to hidden) and $hm$ (hidden to output), and the numbers of hidden-layer and output-layer thresholds are $h$ and $m$. The length of the chromosome is therefore

$$L = nh + hm + h + m.$$

The chromosome consists of $L$ real numbers, and the scope of every real number is $[-1, 1]$.

*Number of Initial Population*. If the initial population size $M$ is small, it will not benefit weight and threshold crossover and mutation and will decrease the diversity of the population. On the contrary, if it is too large, the running efficiency of the genetic algorithm will decrease. Therefore, we set $M = 30$.

*Fitness Function*. The fitness function is an index for judging chromosome quality and is the only measure that guides the genetic algorithm search. The overall training error should be as small as possible, and the genetic algorithm maximizes the objective function value as the fitness function. We calculate the squared sum of the component response time errors over the training samples according to formula (12), and the adaptive value of a chromosome is

$$F_j = \frac{1}{E_j},$$

where $j$ is the chromosome number.
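The reciprocal-error fitness can be sketched as follows; the small constant in the denominator is an assumption added only to guard against division by zero when a chromosome fits the training data exactly.

```python
import numpy as np

def fitness(predicted, actual):
    """Chromosome fitness: the reciprocal of the squared-error sum of
    the network that the chromosome encodes, so that a smaller training
    error yields a larger fitness value for the GA to maximize."""
    sse = float(np.sum((np.asarray(predicted) - np.asarray(actual)) ** 2))
    return 1.0 / (sse + 1e-12)  # epsilon avoids division by zero
```

A chromosome whose decoded network predicts the training responses more closely thus receives a strictly larger fitness.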

*Selection Operator and Selection Probability*. We select the fittest individuals to continue genetic manipulation. In this paper, we adopt the roulette wheel selection method. The selection probability of chromosome $j$ is

$$P_j = \frac{F_j}{\sum_{k=1}^{M} F_k}.$$
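Roulette wheel selection is a standard GA operator; a minimal sketch, with the function name assumed:

```python
import numpy as np

def roulette_select(fitnesses, rng=None):
    """Roulette-wheel selection: chromosome j is drawn with
    probability P_j = F_j / sum_k F_k, so fitter chromosomes are
    proportionally more likely to enter the next generation."""
    if rng is None:
        rng = np.random.default_rng()
    f = np.asarray(fitnesses, dtype=float)
    p = f / f.sum()
    return int(rng.choice(len(f), p=p))
```

A chromosome with zero fitness is never selected, while one holding the entire fitness mass is always selected.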

*Crossover Operator and Crossover Probability*. We randomly select two chromosomes $X_1$ and $X_2$ and, with crossover probability $P_c$, cross their weights and thresholds to form two new individuals. We use the arithmetic crossover operation to generate the two new individuals:

$$X_1' = \alpha X_1 + (1 - \alpha) X_2, \qquad X_2' = (1 - \alpha) X_1 + \alpha X_2,$$

where $\alpha$ is a parameter. We adopt the nonuniform arithmetic crossover operator, in which $\alpha$ is generated anew at each crossover. If the crossover probability is large, the convergence speed will be fast; on the contrary, if it is small, algorithm performance will be reduced. In this paper, $P_c$ is 0.4.
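The arithmetic crossover above is a convex combination of two real-coded chromosomes; a short sketch (function name assumed):

```python
import numpy as np

def arithmetic_crossover(parent1, parent2, alpha):
    """Arithmetic crossover of two real-coded chromosomes:
    child1 = alpha * p1 + (1 - alpha) * p2,
    child2 = (1 - alpha) * p1 + alpha * p2.
    With the nonuniform variant, alpha is drawn afresh per crossover."""
    p1 = np.asarray(parent1, dtype=float)
    p2 = np.asarray(parent2, dtype=float)
    return alpha * p1 + (1 - alpha) * p2, (1 - alpha) * p1 + alpha * p2
```

Because both children are convex combinations of the parents, every gene stays inside the interval spanned by the parents, so the $[-1, 1]$ bound on the weights is preserved.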

*Mutation Operator and Mutation Probability*. We randomly select a chromosome and, with mutation probability $P_m$, select a weight or threshold to mutate. The individual is $X = (x_1, x_2, \ldots, x_L)$. The mutation point is $x_k$ and its value range is $[x_{\min}, x_{\max}]$. After the mutation operation, we obtain a new individual $X'$. The new gene value is denoted as

$$x_k' = \begin{cases} x_k + (x_{\max} - x_k)\, r \left( 1 - \dfrac{t}{T} \right), \\[2ex] x_k - (x_k - x_{\min})\, r \left( 1 - \dfrac{t}{T} \right), \end{cases}$$

where $r$ is a random number in $[0, 1]$, $t$ is the current number of iterations, and $T$ is the maximum number of evolution generations.

If the mutation probability is too large, algorithm performance will be reduced; if it is too small, the species diversity will be poor. In this paper, $P_m$ is 0.01, consistent with the experimental setup in Section 5.2.2.
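The nonuniform mutation of a single gene can be sketched as follows. The shrink exponent `b` and the 50/50 choice between the upward and downward branch are common conventions and are assumptions here.

```python
import numpy as np

def nonuniform_mutate(gene, lo, hi, t, t_max, rng=None, b=1.0):
    """Non-uniform mutation of one real-coded gene in [lo, hi]:
    the perturbation magnitude shrinks as generation t approaches
    t_max, so the search is wide early and fine-grained late."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random()
    shrink = (1.0 - t / t_max) ** b
    if rng.random() < 0.5:
        return gene + (hi - gene) * r * shrink   # move toward upper bound
    return gene - (gene - lo) * r * shrink       # move toward lower bound
```

At `t == t_max` the shrink factor is zero and the gene is returned unchanged, while the result always stays inside `[lo, hi]`.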

*Termination Condition*. We adopt the maximum number of evolution generations as the termination condition. In this paper, it is 100.

##### 4.4. Service Response Time Calculation

According to the previous sections, Algorithm 1 shows the steps of the component response time prediction procedure.

Suppose the components are deployed on $n$ virtual machines and $r_{ij}$ requests are transmitted to component $j$ on virtual machine $i$. The response time of virtual machine $i$ is denoted as

$$T_i = \sum_{j} \frac{r_{ij}}{\sum_{j} r_{ij}}\, t_{ij},$$

where $t_{ij}$ is the predicted response time of component $j$ on virtual machine $i$.

We calculate the weighted sum of the virtual machines' response times, where each weight is the virtual machine's share of the total requests. Therefore, the service response time is

$$T = \sum_{i=1}^{n} \frac{R_i}{\sum_{k=1}^{n} R_k}\, T_i,$$

where $R_i$ is the total number of requests on virtual machine $i$.
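The request-weighted aggregation can be sketched in a few lines; the function name and argument layout are assumptions, but the arithmetic follows the weighted-sum description above.

```python
import numpy as np

def service_response_time(vm_times, vm_requests):
    """Service-level response time as the request-weighted average of
    the per-VM response times: each VM's weight is its share of the
    total concurrent requests routed to it by the load balancer."""
    t = np.asarray(vm_times, dtype=float)
    r = np.asarray(vm_requests, dtype=float)
    return float(np.sum(t * r / r.sum()))
```

For example, two VMs with predicted response times of 1 s and 2 s receiving equal request shares yield a service response time of 1.5 s; shifting three quarters of the requests to the faster VM lowers it to 1.25 s.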

#### 5. Experimental Evaluation

##### 5.1. Experimental Setup

We deploy six virtual machines on the server, each with two virtual CPUs. Then, we deploy three service components on each of the six virtual machines. A load balancer dispatches requests to the virtual machines according to a configured ratio, and the clients send requests to the load balancer. Finally, we monitor the number of virtual machine concurrent requests and the response time every 30 seconds and store the monitoring data in the database. After the server runs for several minutes, we add a new virtual machine.

##### 5.2. GA-BP Neural Network Structure

When we calculated similarity, the weight vector was $w = (w_1, w_2, w_3, w_4)$, whose components represent CPU, memory, I/O, and bandwidth. We found that the most similar virtual machine was VM3, so we used the numbers of VM3 component concurrent requests and the response times to predict the service component response times of VM7 (the new VM). We predicted the VM7 component response times with both the traditional BP algorithm and the GA-BP algorithm and evaluated the new algorithm. There were 300 samples on VM3; 80% of the samples were training samples and 20% were verification samples. The normalization formula is

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$

where $x'$ is the normalized value and $x$ is the current raw value.
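The min-max normalization applied to the monitoring samples is a one-liner; this sketch assumes the usual target range of [0, 1]:

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization of raw monitoring samples to [0, 1]:
    x' = (x - min(x)) / (max(x) - min(x)). The same min/max would be
    reused to rescale predictions back to real response times."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

Normalizing both the request counts and the response times keeps the network inputs inside the sensitive region of the tansig activation and speeds up training.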

###### 5.2.1. BP Neural Network Structure

The BP network training function is *trainlm*. The learning rate is 0.1. The performance function is MSE (mean square error). The training target error is 0.0001. The largest number of iterations is 1000. We adopt cut-and-trial to determine the optimal number of hidden-layer nodes. Table 1 shows the experimental results: when the number of hidden-layer nodes is 10, the network has the minimum error. Figure 3 shows the BP neural network training performance when we train the 240 training samples. When the BP neural network has been trained for 36 steps, the validation performance is 0.00012949.

###### 5.2.2. GA-BP Neural Network Structure

We set the genetic algorithm parameters as follows: the population size is 30; the maximum number of generations is 100; the crossover probability is 0.4; the mutation probability is 0.01; the performance function is MSE; and the training target error is 0.0001. Figure 4 shows that when the GA-BP neural network has been trained for 8 steps, the validation performance is 0.00012949. The convergence rate is faster than that of the traditional BP neural network.

##### 5.3. Result Analysis of Comparative Experiment

We use the GA-BP neural network model to predict the response times of 21 components on 7 virtual machines. The next service response time prediction value is 0.985 s, and the real response time is 0.997 s. Under the same experimental conditions, we also use the BP neural network, time series, and queuing theory to predict the service response time over 60 seconds. Figure 5 shows the prediction error of every method; when the predicted time is less than the real time, the error is negative, and vice versa. Figure 6 compares the GA-BP neural network predicted response time with the real response time. Figure 7 shows the differences between the BP neural network predicted response time and the real response time. Figures 8 and 9 show the traditional time series and queuing theory predicted response times in contrast to the real response time. Time series and queuing theory are not as accurate as the GA-BP and BP neural networks, because the BP neural network reduces the error between the predicted and real values and increases the speed of system processing. What is more, the GA-BP neural network overcomes the local optimal solution and the merely approximate optimum reached in a short period. Therefore, it is more stable than the other methods over the whole prediction process.

#### 6. Conclusion and Future Work

The dynamic scalability of the cloud environment disrupts the established pattern of service response time, so traditional methods cannot satisfy prediction accuracy requirements. In this paper, we propose a novel method to predict virtual machine response time based on the GA-BP neural network. We predict the component response times on each VM and calculate the virtual machine service response time from them. The extensive experimental results show the effectiveness and efficiency of our framework.

For future work, we will investigate more techniques for improving the prediction accuracy.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This work is supported by grants from National Natural Science Foundation of China (no. 61300019).

#### References

1. M. Armbrust, A. Fox, R. Griffith et al., "A view of cloud computing," *Communications of the ACM*, vol. 53, no. 4, pp. 50–58, 2010.
2. M. Alhamad, T. Dillon, and E. Chang, "Conceptual SLA framework for cloud computing," in *Proceedings of the 4th IEEE International Conference on Digital Ecosystems and Technologies (DEST '10)*, pp. 606–610, Dubai, United Arab Emirates, April 2010.
3. G. Box, G. Jenkins, and G. Reinsel, *Time Series Analysis*, Holden-Day, San Francisco, Calif, USA, 1970.
4. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," *Nature*, vol. 323, no. 6088, pp. 533–536, 1986.
5. X. Zheng, J. Zhao, and Z. Cheng, "Web service response time dynamic prediction approach," *Journal of Chinese Computer Systems*, no. 8, pp. 1570–1574, 2011.
6. Y.-J. Chiang and Y.-C. Ouyang, "Profit optimization in SLA-aware cloud service with a finite capacity queuing model," *Mathematical Problems in Engineering*, vol. 2014, Article ID 534510, 11 pages, 2014.
7. U. N. Bhat, *An Introduction to Queueing Theory: Modeling and Analysis in Applications*, Birkhäuser, Boston, Mass, USA, 2007.
8. K. Xiong and H. Perros, "Service performance and analysis in cloud computing," in *Proceedings of the IEEE World Congress on Services*, pp. 693–700, Los Angeles, Calif, USA, July 2009.
9. J. Yi, Q. Wang, D. Zhao, and J. T. Wen, "BP neural network prediction-based variable-period sampling approach for networked control systems," *Applied Mathematics and Computation*, vol. 185, no. 2, pp. 976–988, 2007.
10. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," *IEEE Transactions on Evolutionary Computation*, vol. 6, no. 2, pp. 182–197, 2002.
11. R. Hecht-Nielsen, *Neurocomputing*, Addison-Wesley, Boston, Mass, USA, 1991.
12. J. S. Judd, *Neural Network Design and the Complexity of Learning*, California Institute of Technology, 1988.
13. N. Yamashita and M. Fukushima, "On the rate of convergence of the Levenberg-Marquardt method," *Computing*, vol. 15, supplement, pp. 239–249, 2001.
14. C. Z. Janikow and Z. Michalewicz, "An experimental comparison of binary and floating point representations in genetic algorithms," in *Proceedings of the 4th International Conference on Genetic Algorithms*, pp. 31–36, 1991.
15. Z. Michalewicz, C. Z. Janikow, and J. B. Krawczyk, "A modified genetic algorithm for optimal control problems," *Computers and Mathematics with Applications*, vol. 23, no. 12, pp. 83–94, 1992.