Zhang Xiaonan, Yang Junfeng, Du Siliang, Huang Shudong, "A New Method on Software Reliability Prediction", Mathematical Problems in Engineering, vol. 2013, Article ID 385372, 8 pages, 2013. https://doi.org/10.1155/2013/385372
A New Method on Software Reliability Prediction
Abstract
Relevant data collected during the software life cycle can be used to analyze and predict software reliability. First, the major disadvantages of current software reliability models are discussed. Then, based on an analysis of the classic PSO-SVM model and the characteristics of software reliability prediction, improvement measures for the PSO-SVM model are proposed and the improved model is established. Finally, simulation results show that, compared with classic models, the improved model has better prediction precision, better generalization ability, and lower dependence on the number of samples, which makes it more applicable to software reliability prediction.
1. Introduction
Reliability is an important software quality characteristic related to the probability that a system works without failure over a period of time in a certain environment. Estimating or predicting the reliability level is a very important task, since this level can be used to plan test, deployment, and maintenance activities. Modeling and predicting software reliability is therefore a crucial issue.
Different types of software reliability prediction models consider different elements of the software project, such as the specification and codification of the programs, and are usually based on characteristics of the testing activity. Some of those models consider the time between failures [1–3]. Others consider the coverage of a test criterion [4–7]. A criterion can be viewed as a predicate to be satisfied by the test cases and can be used to evaluate test sets [8]. The advantage of models based on coverage is that they are independent of the operation profile. However, models based on time are most commonly used. Due to their general nonlinear function mapping capabilities, artificial neural networks (ANNs) have received increasing attention in time series forecasting [9–11]. These works show that ANN nonparametric models produce better results than traditional ones. However, most of those works explore only models based on time. In addition, ANN design relies heavily on experience, its theory is not rigorous or easily interpreted, and it easily converges to a local minimum. So the applicability of artificial neural networks to software reliability prediction is very limited.
Recently, a novel machine learning technique, called support vector machine (SVM), has drawn much attention in the fields of pattern classification and regression forecasting. SVM was first introduced by Vapnik and his colleagues in 1995 [5]. SVM is a classification method grounded in statistical learning theory. The algorithm derives from the linear classifier and solves the binary classification problem; it was later extended to nonlinear problems, that is, finding the optimal hyperplane (largest margin) to separate the sample set. It is an approximate implementation of the structural risk minimization (SRM) principle of statistical learning theory, rather than the empirical risk minimization (ERM) method [5].
Compared with traditional neural networks, SVM uses structural risk minimization to avoid problems such as overfitting, the curse of dimensionality, and local minima. SVM has been successfully used for machine learning with large and high-dimensional data sets. These attractive properties make SVM a promising technique. This is due to the fact that the generalization property of an SVM does not depend on the complete training data but only on a subset thereof, the so-called support vectors. SVM has now been applied in many fields [12–14]. However, the essence of SVM training is solving a convex quadratic programming problem with linear equality constraints. The classic methods for solving such nonlinear programs, such as the Newton method and the quasi-Newton method, are computationally expensive, so the prediction effect is not ideal.
In order to overcome the limitations of SVM mentioned above, researchers have applied particle swarm optimization (PSO) to the training of the SVM [15, 16]. PSO is an intuitive and easy-to-implement algorithm from the swarm intelligence community. To replace the need for numeric solvers, a PSO algorithm based on chaos searching (CPSO), which improves the convergence speed and the ability to search for the global optimum, was proposed and shown to be feasible for solving the SVM quadratic programming problem; however, that research targets large sample sets and is ineffective for the small sample data available in early software reliability prediction.
In this paper, based on an analysis of the classic PSO-SVM model and the characteristics of software reliability prediction, we propose concrete improvement measures and establish the improved PSO-SVM model. This paper is organized as follows: Section 2 summarizes the classic PSO-SVM model. Section 3 analyzes the characteristics of software reliability prediction and the applicability of PSO-SVM. Section 4 proposes the specific improvement measures. Section 5 presents the flow chart of the improved model. Section 6 describes the results of two comparative simulation experiments. Finally, Section 7 concludes the paper.
2. Traditional PSO-SVM Characteristics Analysis
The traditional PSO-SVM model uses the PSO algorithm to optimize the model parameter and kernel parameter of the SVM and improves prediction accuracy by searching for the optimal parameter values. SVM was first proposed as a maximum-margin classification algorithm and then gradually extended to the field of nonlinear regression. SVM nonlinear regression prediction is similar to the classification problem: outputs are calculated according to a given decision function and then used for prediction. The regression problem retains the main feature of the maximum-margin algorithm, namely minimizing a convex function, and the nonlinear function can be obtained by learning a linear machine in the kernel feature space; the difference is mainly reflected in the given data set.
Suppose that a given data set is {(x_i, y_i), i = 1, 2, ..., l}, where x_i ∈ R^n is the input vector and y_i ∈ R is the target value.

The original SVM regression problem can be expressed as

min_(w,b,ξ,ξ*)  (1/2)||w||^2 + C Σ_(i=1)^l (ξ_i + ξ_i*)
s.t.  y_i − (w · x_i + b) ≤ ε + ξ_i,
      (w · x_i + b) − y_i ≤ ε + ξ_i*,
      ξ_i ≥ 0, ξ_i* ≥ 0, i = 1, ..., l.

Since the dimension of the feature space is high and the objective function is non-differentiable, the dot-product kernel trick and Wolfe duality theory are introduced to facilitate the calculation, transforming the original problem into its dual quadratic programming problem. Specifically, the Lagrangian function is first constructed as

L = (1/2)||w||^2 + C Σ_i (ξ_i + ξ_i*) − Σ_i α_i (ε + ξ_i − y_i + w · x_i + b) − Σ_i α_i* (ε + ξ_i* + y_i − w · x_i − b) − Σ_i (η_i ξ_i + η_i* ξ_i*),

and then the partial derivative with respect to each primal variable is calculated and the result is substituted back into the original problem.

The Lagrangian function is required to be minimized over the primal variables; thus

∂L/∂w = 0  ⇒  w = Σ_i (α_i − α_i*) x_i,
∂L/∂b = 0  ⇒  Σ_i (α_i − α_i*) = 0,
∂L/∂ξ_i = 0  ⇒  C − α_i − η_i = 0,
∂L/∂ξ_i* = 0  ⇒  C − α_i* − η_i* = 0.

The regression function can then be directly expressed as

f(x) = Σ_(i=1)^l (α_i − α_i*) K(x_i, x) + b.

When 0 < α_i < C, the corresponding sample lies exactly on the lower ε-boundary, so b = y_i − Σ_j (α_j − α_j*) K(x_j, x_i) − ε.

When 0 < α_i* < C, the sample lies on the upper ε-boundary, so b = y_i − Σ_j (α_j − α_j*) K(x_j, x_i) + ε.
When SVM solves nonlinear regression problems, a nonlinear mapping maps the input space into a high-dimensional feature space, and linear regression is then performed in that high-dimensional space. In order to reduce the sensitivity to prediction error, the objective function of the nonlinear regression model is defined with the ε-insensitive loss function, and slack variables are introduced to ignore fitting errors of less than ε, which ensures that the model has a global minimum and reliable generalization performance.
In the original problem, (1/2)||w||^2 is the regularization part, whose role is to make the function smoother in order to enhance generalization ability; ξ_i and ξ_i* reflect the margin of error of the training points, and Σ_i (ξ_i + ξ_i*) reflects the empirical risk of the model; the error penalty factor C is the model parameter that determines the balance between the empirical risk and the regularization part.
The dual problem is a quadratic programming problem, where K(x_i, x_j) is called the kernel function; common choices include the linear kernel, the polynomial kernel, the RBF kernel, and the sigmoid kernel. The RBF kernel can fully reflect the nonlinear characteristics of software reliability, so it is used in the construction of the prediction model.
Consider the RBF kernel function:

K(x_i, x_j) = exp(−||x_i − x_j||^2 / (2σ^2)),

where σ is the kernel width parameter.
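As a concrete illustration (not the authors' code), the RBF kernel above can be computed for a whole sample set at once in numpy:

```python
# Pairwise RBF kernel matrix, K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)).
# sigma is the kernel width parameter that the PSO later tunes.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Kernel matrix between the rows of X1 and the rows of X2."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)          # squared Euclidean distances
    return np.exp(-sq / (2.0 * sigma**2))

X = np.array([[0.0], [1.0], [2.0]])
K = rbf_kernel(X, X, sigma=1.0)
# The diagonal is always 1 because ||x_i - x_i|| = 0.
```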
When PSO is used to optimize C and σ in the SVM model, the population is constantly updated from the best local positions toward the best global position in the iterative process. Assume that the population size is m; the position of particle i in d-dimensional space is x_i = (x_i1, x_i2, ..., x_id), its speed is v_i = (v_i1, v_i2, ..., v_id), its optimal local location is p_i, and the best global position is p_g. The specific update rules are as follows.
Speed:

v_id(t+1) = w · v_id(t) + c1 · r1 · (p_id − x_id(t)) + c2 · r2 · (p_gd − x_id(t)).

Location:

x_id(t+1) = x_id(t) + v_id(t+1),

where t is the current iteration number, c1 and c2 are the acceleration factors (c1 weights the particle's own memory, c2 the influence of the other particles, which together draw each particle toward p_i and p_g), w is the inertia weight, and r1 and r2 are uniformly distributed random numbers in [0, 1] used to simulate slight disturbances.
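The update rules above can be sketched as a minimal PSO loop (a standard implementation under assumed hyperparameters w, c1, c2, not the paper's exact configuration):

```python
# Minimal PSO minimizer implementing the speed/location updates above.
import numpy as np

rng = np.random.default_rng(42)

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    p = x.copy()                                       # local bests p_i
    p_fit = np.array([f(xi) for xi in x])
    g = p[np.argmin(p_fit)].copy()                     # global best p_g
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)  # speed update
        x = np.clip(x + v, lo, hi)                         # location update
        fit = np.array([f(xi) for xi in x])
        better = fit < p_fit
        p[better], p_fit[better] = x[better], fit[better]
        g = p[np.argmin(p_fit)].copy()
    return g, float(p_fit.min())

# Usage: minimize the sphere function on [-5, 5]^2.
best, best_val = pso_minimize(lambda z: float(np.sum(z**2)),
                              (np.full(2, -5.0), np.full(2, 5.0)))
```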
3. Model Applicability and Improved Measures Analysis
The traditional PSO-SVM model has many outstanding advantages that match the characteristics of software reliability prediction, as shown in Table 1.

Although the traditional PSO-SVM prediction model has many advantages, the PSO and SVM algorithms have inherent weaknesses and deficiencies, so this paper proposes corresponding improvement strategies to obtain an optimal software reliability prediction model. The model's shortcomings and the corresponding improvement measures are shown in Table 2.

4. Improved Model
4.1. Block Population Initialization Measure
Particle swarm optimization (PSO) is a global optimization search algorithm, so it should quickly search for and obtain the optimal value. However, the traditional particle swarm is randomly generated over the whole region, which cannot fully guarantee that the population is dispersed throughout the search space. If the search space is divided into many blocks, this non-uniformity can be improved. The main idea is to distribute the particles almost evenly: assuming that the number of particles is m, the range of each dimension j is divided into m small areas

[L_j + (i − 1)(U_j − L_j)/m, L_j + i(U_j − L_j)/m], i = 1, 2, ..., m,

where L_j and U_j are the lower and upper bounds of the values in dimension j; then the initial position of particle i in dimension j is

x_ij = L_j + (i − 1 + r)(U_j − L_j)/m,

where r is a random number in [0, 1].
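A sketch of this block initialization (the function name and vectorized form are illustrative; the placement rule is the one described above, with particle i drawn from the i-th sub-interval of each dimension):

```python
# Block population initialization: split each dimension's range [L_j, U_j]
# into m equal sub-intervals and place particle i randomly inside the
# i-th sub-interval, so the swarm covers the whole search space.
import numpy as np

rng = np.random.default_rng(7)

def block_init(lo, hi, m):
    """Return an (m, dim) array of block-initialized particle positions."""
    dim = lo.shape[0]
    i = np.arange(m)[:, None]        # block index of each particle
    width = (hi - lo) / m            # sub-interval width per dimension
    r = rng.random((m, dim))         # random offset inside the block
    return lo + (i + r) * width

pos = block_init(np.array([0.0, -1.0]), np.array([10.0, 1.0]), m=5)
# Particle i's first coordinate lies in [2*i, 2*(i+1)).
```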
4.2. Adaptive Inertia Factor Measure
The inertia factor in the PSO algorithm gives the particle velocity update a historical memory, adjusting the historical speed against the local and global optimal terms in order to balance global search ability against local search ability. When the iteration begins, a larger inertia weight enhances the global search capability (the larger the weight, the larger the search area); in the later stage, a smaller inertia weight enhances local search ability, which is conducive to fine local search. If both the early and the late search phases can be prolonged, the overall algorithm performance improves, so the adaptive weight update method is as follows:

w(t) = w_max − (w_max − w_min)(t/T)^k,

where t is the current iteration, T is the maximum iteration number, and k is the power discussed below. Suppose that w_max and w_min are 0.9 and 0.1. The corresponding inertia weight curve is shown in Figure 1.
In Figure 1, each curve expresses the inertia weight as a function of the iteration number for a different power k. Compared with other values, when the power is 6, the particle search time is the longest. This method keeps the inertia weight large in the initial iterations and small in the later ones, which extends both the global and local search phases, strengthens the search ability, and balances global against local search.
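A sketch of the adaptive inertia weight (the power-law form is an assumption matching the description: near w_max early, dropping to w_min late, with k = 6 the power that Figure 1 reports as giving the longest search time):

```python
# Adaptive inertia weight: stays near w_max for most of the run, then
# drops sharply to w_min near the end, extending both search phases.
def inertia_weight(t, t_max, w_max=0.9, w_min=0.1, k=6):
    """Inertia weight at iteration t out of t_max iterations."""
    return w_max - (w_max - w_min) * (t / t_max) ** k

# At the midpoint of a 100-iteration run, w is still close to w_max.
w_mid = inertia_weight(50, 100)
```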
4.3. Non-Evolution-Number Mutation Measure
The mutation mechanism comes from the genetic algorithm and is mainly used to overcome convergence at a local minimum during iteration. Standard PSO easily converges at a local optimal solution in high-dimensional function optimization problems, and the non-evolution number of the particles can be used to determine whether the swarm has entered a local optimum. Therefore, if the non-evolution number and a mutation operator are introduced into PSO, they can serve as the selection criterion for the mutation time in order to overcome the local minimum problem. The specific strategy is as follows.
(1) Calculate the fitness changing rate (abbreviated as FCR hereinafter): the FCR is the relative change of the fitness of p_g (the historical optimal particle position) between the current iteration and the previous one.
(2) Count the non-evolution number: at the beginning of the evolution, the non-evolution number is N = 0; the fitness changing threshold value is e; the non-evolution limit is N_max; the mutation probability is p_m. In the iterative process, the non-evolution number is determined by the fitness changing rate as follows: if FCR < e, then N = N + 1; otherwise N = 0.
If the non-evolution number N exceeds the limit N_max, the swarm is considered trapped, and the mutation operation is performed according to the mutation probability p_m, perturbing the selected coordinates with a random number in [0, 1]. This improvement makes the particles continue to approach the global optimum when they converge at a local minimum during training.
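The counting and mutation logic can be sketched as follows (the function names and the multiplicative perturbation are illustrative assumptions, not the paper's exact operator):

```python
# Non-evolution counter and mutation trigger for escaping local optima.
import random

def update_counter(n, fcr, e):
    """Grow the non-evolution count while the fitness changing rate stays
    below the threshold e; reset it once the swarm improves again."""
    return n + 1 if fcr < e else 0

def maybe_mutate(position, n, n_limit, p_m, rng=random):
    """When the swarm has stalled for more than n_limit iterations, perturb
    each coordinate with probability p_m to escape the local optimum."""
    if n <= n_limit:
        return position
    return [x * (1.0 + 0.5 * rng.random()) if rng.random() < p_m else x
            for x in position]
```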
4.4. LSSVM Measure
Least squares support vector machine (LSSVM) introduces two main modifications. First, a least squares linear loss function is introduced, which replaces the inequality constraints of SVM with equality constraints; second, solving a set of linear equations replaces the quadratic programming, which avoids the ε-insensitive loss function and greatly improves the learning efficiency and training accuracy.
The standard SVM problem can be simplified as follows:

min_(w,b,e)  (1/2)||w||^2 + (γ/2) Σ_(i=1)^l e_i^2
s.t.  y_i = w · φ(x_i) + b + e_i, i = 1, ..., l.

It should be noted that the single equality constraint in LSSVM replaces the two inequality constraints in SVM; correspondingly, the slack terms in the objective function are replaced by the squared errors e_i^2. Following the standard method of transforming the SVM into its dual problem, the LSSVM dual reduces through the derivation to the linear system

[ 0    1^T       ] [ b ]   [ 0 ]
[ 1    K + I/γ   ] [ α ] = [ y ],

where K_ij = K(x_i, x_j) and I is the identity matrix.

The decision function is

f(x) = Σ_(i=1)^l α_i K(x_i, x) + b.
Since only linear equations must be solved, α and b in the decision function can be obtained directly from the system above, which greatly reduces the computation and makes the model simpler.
5. The Flow Chart of the Improved PSO-LSSVM Model
The flow chart of the improved PSO-LSSVM prediction model is shown in Figure 2; the dashed part indicates the improved process of the PSO-LSSVM model.
6. Simulation Comparison
In order to evaluate the performance of the new model and compare it with the traditional models, the simulation experiment is set up as follows. Taking a military software system as an example, thirteen module indexes and module defect numbers are shown in Table 3.

SN is the module number; LOC is the module size in lines of code; FO is the module output; FI is the module input; PATH is the number of module control flow paths; FAULTS is the number of module defects.
In order to evaluate the prediction accuracy of the optimized model, we carry out two experiments. Experiment 1: all 13 data samples are divided into two parts, where the first 10 are used as the training set and the last 3 as the test set. Experiment 2: the first 6 are used as the training set and the last 3 again as the test set, to evaluate the model with fewer training samples.
After the training samples and test samples are normalized [17], respectively, we input them into the BP network model, the traditional PSO-SVM model, and the optimized PSO-LSSVM model. The BP prediction model uses the momentum factor method; the number of hidden nodes is 18, and the training objective is 0.00001. In accordance with the cross-validation algorithm and the depth search algorithm, after 2 rounds of selection, the traditional PSO-SVM prediction model's penalty parameter C and kernel parameter σ are obtained. Both the traditional and the optimized PSO-SVM models use the RBF kernel function. In the model training process, the error curves of the BP prediction model and the optimized PSO-LSSVM model are shown in Figures 3 and 4, respectively.
We can see from the figures that the training error of the improved PSO-LSSVM prediction model decreases rapidly and converges after about 200 iterations; however, the BP prediction model meets the training requirement only after 1733 iterations, significantly more than the improved PSO-LSSVM prediction model.
After training, the three methods yield prediction models for the sample data; therefore, we can input the prediction sample data into each model to forecast. Because the BP prediction model is greatly influenced by its initial parameters, in order to reduce randomness, we take the average of 10 consecutive runs. The average percentage prediction errors and prediction results are shown in Table 4; the comparison result is shown in Table 5.
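The comparison metric can be made explicit as follows (the defect counts used in the usage line are hypothetical, not the values from Table 4):

```python
# Average percentage prediction error over the test modules:
# mean of |actual - predicted| / actual, expressed as a percentage.
def avg_percentage_error(actual, predicted):
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical defect counts for three test modules:
err = avg_percentage_error([10.0, 8.0, 12.0], [9.0, 8.4, 11.4])
```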


7. Conclusion
Because the improved PSO-LSSVM prediction model uses optimized model parameters and kernel parameters, its prediction accuracy is much higher than that of the traditional PSO-SVM model and the BP prediction model. As the number of training samples decreases, the prediction accuracy of the improved PSO-LSSVM model remains significantly higher than that of the traditional PSO-SVM and BP models, owing to its good generalization performance with fewer training samples. Thus, the improved PSO-LSSVM prediction model outperforms the traditional PSO-SVM and BP prediction models in both training efficiency and prediction accuracy. Since prediction sample sets in software reliability prediction are currently small and costly to obtain, the proposed model has important practical significance and may become the preferred prediction method for projects with few samples.
References
[1] L. C. Briand, W. L. Melo, and J. Wüst, "Assessing the applicability of fault-proneness models across object-oriented software projects," IEEE Transactions on Software Engineering, vol. 28, no. 7, pp. 706–720, 2002.
[2] T. M. Khoshgoftaar and R. M. Szabo, "Using neural networks to predict software faults during testing," IEEE Transactions on Reliability, vol. 45, no. 3, pp. 456–462, 1996.
[3] N. Cristianini and J. S. Taylor, An Introduction to Support Vector Machines, Zhengguo Translation, Electronic Industry Press, Beijing, China, 2004.
[4] X. Li and S. Yanhua, "SVM early prediction of software reliability," Hefei University of Technology, vol. 7, no. 7, pp. 859–862, 2007.
[5] F. Zhe, "SVR-based software reliability prediction model," Computer Engineering and Applications, vol. 43, no. 13, pp. 120–123, 2007.
[6] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[7] U. Paquet and A. P. Engelbrecht, "A new particle swarm optimizer for linearly constrained optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 227–233, Canberra, Australia, December 2003.
[8] U. Paquet and A. P. Engelbrecht, "Training support vector machines with particle swarms," in Proceedings of the International Joint Conference on Neural Networks, pp. 1593–1598, July 2003.
[9] L. Xiaodong, L. Xiangdong, and W. Rui, Particle Swarm Algorithm and Its Application, Liaoning University Press, Shenyang, China, 2007.
[10] C. Fun, "Integrated model of software reliability analysis and research," Computer Science, vol. 36, no. 4, pp. 181–184, 2009.
[11] Z. Yanyan and G. Wei, "Summary of software reliability engineering," Computer Science, vol. 36, no. 2, pp. 20–25, 2009.
[12] A. Widodo and B. S. Yang, "Application of nonlinear feature extraction and support vector machines for fault diagnosis of induction motors," Expert Systems with Applications, vol. 33, no. 1, pp. 241–250, 2007.
[13] Q. Wu, "The forecasting model based on wavelet ν-support vector machine," Expert Systems with Applications, vol. 36, no. 4, pp. 7604–7610, 2009.
[14] Q. Wu, "The hybrid forecasting model based on chaotic mapping, genetic algorithm and support vector machine," Expert Systems with Applications, vol. 37, no. 2, pp. 1776–1783, 2010.
[15] Q. Wu, "A hybrid-forecasting model based on Gaussian support vector machine and chaotic particle swarm optimization," Expert Systems with Applications, vol. 37, no. 3, pp. 2388–2394, 2010.
[16] L. Yushi, "Understanding the network-centered command information system," Command Information System and Technology, vol. 1, no. 1, pp. 1–4, 2010 (in Chinese).
[17] L. Yu and Z. Wenyu, "Command and control technology for unmanned combat vehicles," Command Information System and Technology, vol. 2, no. 6, pp. 6–9, 2011 (in Chinese).
Copyright
Copyright © 2013 Zhang Xiaonan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.