Research Article  Open Access
Guo Yangming, Zhang Lu, Cai Xiaobin, Ran Congbao, Zhai Zhengjun, Ma Jiezhong, "Time Series Adaptive Online Prediction Method Combined with Modified LSSVR and AGO", Mathematical Problems in Engineering, vol. 2012, Article ID 985930, 12 pages, 2012. https://doi.org/10.1155/2012/985930
Time Series Adaptive Online Prediction Method Combined with Modified LSSVR and AGO
Abstract
Fault or health condition prediction of complex systems has attracted increasing attention in recent years. Complex systems often exhibit complex dynamic behavior and uncertainty, which makes it difficult to establish a precise physical model. Therefore, the time series of a complex system is used for prediction in practice. Aiming at time series online prediction, we propose a new method in this paper to improve prediction accuracy, based on grey system theory and an incremental learning algorithm. In this method, the accumulated generating operation (AGO) is first applied to the raw time series to improve data quality and regularity; then prediction is conducted by a modified LSSVR model, which simplifies the calculation process with incremental learning; finally, the inverse accumulated generating operation (IAGO) is performed to obtain the prediction results. The results of the prediction experiments indicate preliminarily that the proposed scheme is an effective prediction approach, offering good prediction precision and less computing time. The method will be useful in actual applications.
1. Introduction
Prediction techniques are essential to ensure the operational safety of complex systems. However, it is not easy to establish precise physical models for complex systems. In practice, time series-based prediction methods have therefore attracted increased attention [1–7].
Compared with other reported methods, Support Vector Regression (SVR) is based on statistical learning theory and the structural risk minimization principle [8–10]; it has a global optimum and exhibits better accuracy in nonlinear and nonstationary time series prediction via kernel functions. However, for large sample data, the quadratic programming (QP) problem becomes more complex. Thus, Least Squares Support Vector Regression (LSSVR) was proposed by Suykens et al. [11]. In LSSVR, the inequality constraints are replaced by equality constraints, which effectively reduces the calculation time. Consequently, LSSVR has received more attention in time series forecasting [12–16].
Because a great number of uncertain elements exist in practical applications, an online prediction scheme [17, 18] is applied to match the actual training and prediction conditions and thereby achieve better prediction results. Moreover, the incompleteness, noisiness, and inconsistency of the raw time series sample data reduce prediction accuracy. In these cases, two problems should be considered simultaneously: one is how to improve prediction accuracy; the other is how to reduce the training and prediction time.
Some researchers have combined several methods to improve prediction performance [19, 20], fusing their advantages while avoiding their drawbacks. Following this idea, we propose a scheme to address the above problems. In the new scheme, we integrate the accumulated generating operation (AGO) [21] with a modified LSSVR-based model. It includes the following two aspects. (1) Based on the grey system theory [21], we conduct AGO on the raw time series to improve data quality and regularity, and finally obtain the results using the inverse accumulated generating operation (IAGO). (2) We set up a new adaptive online prediction model based on LSSVR, which utilizes an incremental learning algorithm to enrich the information and modifies the LSSVR model into a simpler form to reduce the prediction calculation time.
The remainder of the paper is organized as follows. Section 2 gives a brief introduction to LSSVR; Section 3 proposes the new scheme in detail, including the AGO-based prediction strategy and the simple modified prediction model based on the incremental learning algorithm; Section 4 presents the experiments and results analysis; conclusions are given in Section 5.
2. A Brief Introduction of LSSVR
Consider a training sample data set $\{(x_i, y_i)\}_{i=1}^{N}$ with input data $x_i \in \mathbb{R}^n$ and output $y_i \in \mathbb{R}$, where $N$ denotes the number of training samples. The goal of LSSVR is to obtain a function of the following form:
\[ f(x) = w^{T}\varphi(x) + b, \tag{2.1} \]
where the nonlinear mapping function $\varphi(\cdot)$ maps the input data into a higher-dimensional feature space. This means the nonlinear fitting problem in the input space is replaced by a linear fitting problem in the high-dimensional feature space. Here $w$ is the weight vector and $b$ is the bias constant.
According to the structural risk minimization principle, $w$ and $b$ can be found via a constrained convex optimization as follows:
\[ \min_{w,b,e} J(w,e) = \frac{1}{2}w^{T}w + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2 \quad \text{s.t.} \quad y_i = w^{T}\varphi(x_i) + b + e_i, \; i = 1,\dots,N, \tag{2.2} \]
where $\frac{\gamma}{2}\sum_{i=1}^{N} e_i^2$ is the loss function, $e_i$ are the slack variables, and $\gamma > 0$ is a regularization parameter. Equation (2.2) can be transformed into dual form with the Lagrange function
\[ L(w,b,e,\alpha) = J(w,e) - \sum_{i=1}^{N} \alpha_i \left( w^{T}\varphi(x_i) + b + e_i - y_i \right), \tag{2.3} \]
where $\alpha_i$ are the Lagrange multipliers.
It is obvious that the optimal solution of (2.2) satisfies the Karush-Kuhn-Tucker (KKT) conditions, so the optimality conditions are as follows:
\[ \frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{i=1}^{N}\alpha_i \varphi(x_i), \quad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{N}\alpha_i = 0, \quad \frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad \frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow w^{T}\varphi(x_i) + b + e_i - y_i = 0. \tag{2.4} \]
After eliminating $w$ and $e$ from (2.4), we can obtain the solution from the following linear equations:
\[ \begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \tag{2.5} \]
where $\mathbf{1}$ is an $N$-dimensional vector of all ones, $I$ is an $N \times N$ identity matrix, $y = (y_1,\dots,y_N)^{T}$, $\alpha = (\alpha_1,\dots,\alpha_N)^{T}$, and $\Omega_{ij} = \varphi(x_i)^{T}\varphi(x_j) = K(x_i,x_j)$.
In LSSVR, the optimization problem is thus simplified to a set of linear equations instead of the more complex quadratic programming problem of SVR, so the computational complexity decreases significantly. Equation (2.5) can be factorized into a positive definite system [22], and the solutions for $\alpha$ and $b$ are easily obtained. The LSSVR function estimator can then be expressed as follows:
\[ f(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b. \tag{2.6} \]
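As an illustration of how (2.5) and (2.6) are used in practice, the bordered linear system can be solved directly with standard numerical routines. The following is a minimal NumPy sketch, not the authors' Matlab/LS-SVMlab implementation; the function names, the Gaussian RBF kernel, and the parameter values are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2):
    """Gaussian RBF kernel matrix K[i, j] = exp(-||x1_i - x2_j||^2 / sigma2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma2)

def lssvr_fit(X, y, gamma=10.0, sigma2=1.0):
    """Solve the linear system (2.5):
        [ 0   1^T           ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma   ] [ alpha ] = [ y ]
    instead of a QP, as described in Section 2."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, multipliers alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma2=1.0):
    """Estimator (2.6): f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b
```

With a moderate regularization parameter the model reproduces a smooth training target closely, since the fitting residual equals $\alpha_i/\gamma$ by the KKT condition $\alpha_i = \gamma e_i$.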
3. Proposed Adaptive Online Prediction Scheme for Time Series
3.1. AGO-Based Prediction Method for Time Series
Time series prediction models employ the historical data values to extrapolate the future values. If a time series has a regular pattern, then a value of the series should be a function of previous values. If $y$ is the target value that we are trying to model and predict, and $y_t$ is the value of $y$ at time $t$, then the goal is to create a prediction model of the following form:
\[ y_t = f\left(y_{t-1}, y_{t-2}, \dots, y_{t-n}\right) + \varepsilon_t, \tag{3.1} \]
where $y_{t-1}$ is the value of $y$ for the previous observation, $y_{t-2}$ is the value two observations ago, and $\varepsilon_t$ represents noise that does not follow a predictable pattern. Thus, good prediction accuracy always depends on the high quality of the time series data.
In 1982, the famous Chinese scholar Deng [21] proposed the grey system theory, which has since been used widely in many fields. As the core of grey prediction theory, the accumulated generating operation (AGO) and the inverse accumulated generating operation (IAGO) are the main methods, providing a manageable approach to treating disorganized evidence; that is, the main advantage of AGO is that it can reduce the disturbance caused by stochastic factors. AGO fully reveals the rules hidden in the raw time series and enhances the regularity of the time series data.
Consider a raw time series $x^{(0)} = \left(x^{(0)}(1), x^{(0)}(2), \dots, x^{(0)}(N)\right)$, and take the AGO of $x^{(0)}$ as follows:
\[ x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 1, 2, \dots, N. \tag{3.2} \]
Then $x^{(1)} = \left(x^{(1)}(1), x^{(1)}(2), \dots, x^{(1)}(N)\right)$ is the new time series.
After the new time series $x^{(1)}$ is formed, it can be used to predict future values based on the previous and current values. The previous and current values of the time series are used as input for the following prediction model:
\[ x^{(1)}(t+p) = f\left(x^{(1)}(t), x^{(1)}(t-1), \dots, x^{(1)}(t-m+1)\right), \tag{3.3} \]
where $p$ represents the number of steps ahead to predict, $f(\cdot)$ is the prediction model, and $m$ is the size of the regressor. According to (3.3), we can obtain the training samples.
In this paper, we use the following strategy to predict the next data-point value: each predicted value joins the sample data set as known data. This prediction process is called the incremental algorithm. The model can be constructed with one-step prediction ($p = 1$):
\[ \hat{x}^{(1)}(t+1) = f\left(x^{(1)}(t), x^{(1)}(t-1), \dots, x^{(1)}(t-m+1)\right). \tag{3.4} \]
The regressor of the model is defined as the vector of inputs $\left(x^{(1)}(t), x^{(1)}(t-1), \dots, x^{(1)}(t-m+1)\right)$.
After obtaining the sequence of predicted values, we can compute the real prediction results by IAGO:
\[ \hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1), \quad k = 2, 3, \dots \tag{3.5} \]
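The AGO/IAGO pair in (3.2) and (3.5) amounts to a cumulative sum and its first difference, which invert each other exactly. A minimal NumPy sketch (the function names are our own):

```python
import numpy as np

def ago(x0):
    """1-AGO (3.2): x1[k] = sum of x0[1..k] -- smooths random disturbances."""
    return np.cumsum(x0)

def iago(x1):
    """IAGO (3.5): x0[k] = x1[k] - x1[k-1], with x0[1] = x1[1]."""
    return np.diff(x1, prepend=0.0)

x0 = np.array([2.0, 3.0, 1.5, 4.0])   # toy raw series
x1 = ago(x0)                          # accumulated series [2.0, 5.0, 6.5, 10.5]
assert np.allclose(iago(x1), x0)      # round trip recovers the raw series
```

In the proposed scheme the prediction model is trained and evaluated on $x^{(1)}$, and only the final predictions are mapped back through IAGO.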
3.2. Modified LSSVR Model Based on Incremental Learning
For adaptive online prediction, real-time performance is the basic requirement: in adaptive online prediction based on an incremental learning algorithm, newly predicted data are constantly added, and the total computing time of training and prediction should be less than the sampling period. In this case, the most important issue is computing the Lagrange multipliers $\alpha$ and the bias term $b$, which affect the computing time because they must be recalculated whenever new sample data are added. After analyzing the research results of Engel et al. [23], we develop a simple prediction model based on LSSVR to solve this problem.
Suppose the training sample data are described by the input $x_i \in \mathbb{R}^n$ and output $y_i \in \mathbb{R}$. At time $t$, the sample data set is $\{(x_i, y_i)\}_{i=1}^{t}$. As time goes on, new sample data join. We give a new approach to eliminate the bias term $b$.
Let $w_a = \begin{bmatrix} w \\ b/c \end{bmatrix}$ and $\varphi_a(x_i) = \begin{bmatrix} \varphi(x_i) \\ c \end{bmatrix}$, where $\varphi(x_i)$ is the mapping of the $i$th input at time $t$ and $c$ is a constant, so that $w_a^{T}\varphi_a(x_i) = w^{T}\varphi(x_i) + b$. Then the optimization problem (2.2) is rewritten as follows:
\[ \min_{w_a,e} J(w_a,e) = \frac{1}{2}w_a^{T}w_a + \frac{\gamma}{2}\sum_{i=1}^{t} e_i^2 \quad \text{s.t.} \quad y_i = w_a^{T}\varphi_a(x_i) + e_i, \; i = 1,\dots,t. \tag{3.6} \]
The solution of (3.6) is also obtained by constructing the Lagrange function:
\[ L(w_a,e,\alpha) = J(w_a,e) - \sum_{i=1}^{t} \alpha_i\left(w_a^{T}\varphi_a(x_i) + e_i - y_i\right), \tag{3.7} \]
where $\alpha_i$ is the $i$th Lagrange multiplier. The new optimality conditions are as follows:
\[ \frac{\partial L}{\partial w_a} = 0 \Rightarrow w_a = \sum_{i=1}^{t}\alpha_i\varphi_a(x_i), \quad \frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad \frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow w_a^{T}\varphi_a(x_i) + e_i - y_i = 0. \tag{3.8} \]
After eliminating $w_a$ and $e$, the solution of (3.8) is given by the following set of linear equations:
\[ \left(\Omega_a + \gamma^{-1} I\right)\alpha = y. \tag{3.9} \]
Let
\[ H = \Omega_a + \gamma^{-1} I, \qquad (\Omega_a)_{ij} = \varphi_a(x_i)^{T}\varphi_a(x_j) = K(x_i,x_j) + c^2, \tag{3.10} \]
where $I$ is a $t \times t$ identity matrix. Equation (3.9) can be rewritten as follows:
\[ \alpha = H^{-1} y. \tag{3.11} \]
The new regression function is presented as follows:
\[ f(x) = \sum_{i=1}^{t} \alpha_i \left(K(x, x_i) + c^2\right). \tag{3.12} \]
According to (3.11) and (3.12), when new sample data are added, the calculation only needs to recompute the parameter $\alpha$.
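Under the augmented-feature construction above, which absorbs the bias into a constant feature $c$, training reduces to a single linear solve for $\alpha$. A hedged NumPy sketch, with an assumed Gaussian RBF kernel and illustrative parameter values:

```python
import numpy as np

def rbf(X1, X2, sigma2):
    """Gaussian RBF kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma2)

def modified_lssvr_fit(X, y, gamma=10.0, sigma2=1.0, c=1.0):
    """Bias-free LSSVR (3.9)-(3.11): the augmented kernel is
    K(x_i, x_j) + c^2, so only alpha = H^{-1} y must be solved for."""
    n = len(y)
    H = rbf(X, X, sigma2) + c**2 + np.eye(n) / gamma
    return np.linalg.solve(H, y)

def modified_lssvr_predict(X_train, alpha, X_new, sigma2=1.0, c=1.0):
    """Regression function (3.12): f(x) = sum_i alpha_i (K(x, x_i) + c^2)."""
    return (rbf(X_new, X_train, sigma2) + c**2) @ alpha
```

Compared with (2.5), there is no bordered row and no bias unknown, which is what makes the incremental update in the next step a pure matrix-growth problem.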
At time $t+1$, $\alpha$ can be calculated via (3.11), that is, $\alpha = H_{t+1}^{-1} y_{t+1}$. Here, $H_{t+1}^{-1}$ can be computed quickly from $H_t^{-1}$ with the block matrix approach [24, 25].
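Growing $H_{t+1}^{-1}$ from $H_t^{-1}$ when one sample arrives can be sketched with the standard block-matrix (Schur complement) inversion identity; this is our own sketch of the idea, not code from [24, 25]:

```python
import numpy as np

def grow_inverse(Hinv, v, d):
    """Given Hinv = H^{-1} (t x t), return the inverse of the bordered matrix
        [[H,   v],
         [v.T, d]]
    via the Schur complement s = d - v^T H^{-1} v. This costs O(t^2)
    per new sample instead of the O(t^3) of re-inverting from scratch."""
    u = Hinv @ v                    # H^{-1} v
    s = d - v @ u                   # scalar Schur complement
    t = len(v)
    out = np.empty((t + 1, t + 1))
    out[:t, :t] = Hinv + np.outer(u, u) / s
    out[:t, t] = -u / s
    out[t, :t] = -u / s
    out[t, t] = 1.0 / s
    return out
```

Here $v$ holds the augmented-kernel values between the new input and the stored inputs, and $d$ is the new diagonal entry $K(x_{t+1},x_{t+1}) + c^2 + \gamma^{-1}$.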
From the above description, it is obvious that the modified prediction model is simple, and it may reduce the computing time owing to its smaller number of parameters.
4. Experiments and Results Analysis
We perform two simulation experiments and an application experiment to evaluate the proposed scheme. All the experiments use Matlab R2011b with the LSSVMlab1.8 toolbox (the software can be downloaded from http://www.esat.kuleuven.be/sista/lssvmlab) under the Windows XP operating system.
In this paper, we use the prediction Root Mean Squared Error (RMSE) [26] as the evaluation criterion, which is defined as follows:
\[ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}, \tag{4.1} \]
where $n$ is the number of sample data points and $\hat{y}_i$ and $y_i$ are the predicted and actual values, respectively.
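The criterion (4.1) can be sketched in a few lines of NumPy:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root mean squared error (4.1) over n points."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
```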
4.1. Simulation Experiments and Results Analysis
In order to validate the performance of the proposed AGO-based method described in Section 3.1 and the modified LSSVR model based on incremental learning described in Section 3.2, we perform two simulation experiments. In the experiments, the sample time series has 75 data values, which come from a complex avionics system (shown in Figure 1; dimension omitted).
The data points 1 to 50 of the time series are taken to form 45 initial training samples. The first sample consists of points 1 to 6, with the first 5 points as the input vector and the 6th point as the output. The second sample consists of points 2 to 7, with points 2 through 6 as the input vector and the 7th point as the output. In this way, we obtain 45 training samples from the first 50 data points. We then predict time series data no. 51 to no. 75 using the trained model. The performance is measured by the prediction mean RMSE (PrMR), training time (TrTime), and prediction time (PrTime). The experiments are repeated 100 times and the average results are reported.
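The sliding-window sample construction described above can be sketched as follows; the synthetic series stands in for the real avionics data, which is not reproduced here:

```python
import numpy as np

def make_windows(series, m):
    """Turn a 1-D series into (input window of length m, next value) pairs:
    sample i is (series[i : i+m], series[i+m]), as in Section 4.1."""
    series = np.asarray(series)
    X = np.array([series[i : i + m] for i in range(len(series) - m)])
    y = series[m:]
    return X, y

series = np.arange(1, 51)        # stand-in for the first 50 data points
X, y = make_windows(series, m=5)
# 50 points with a window of 5 yield exactly 45 training samples
```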
In simulation Experiment I, we compare the proposed AGO-based method with the traditional LSSVR presented in [4]. Here, the Gaussian RBF kernel is adopted as the kernel function, and the parameters are jointly optimized with the traditional gridding search method, where the search range of $\gamma$ and $\sigma^2$ is [0.1, 200]. The prediction results are shown in Figure 2 (A-LSSVR denotes the proposed AGO-based method in the figure) and Table 1.
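A gridding search of the kind used here simply evaluates every candidate parameter pair and keeps the one with the lowest validation error. A generic sketch, where the `fit` and `score` callables are hypothetical placeholders for a model trainer and an error measure:

```python
import numpy as np

def grid_search(fit, score, X_tr, y_tr, X_val, y_val, grid):
    """Exhaustive joint search over (gamma, sigma2) pairs drawn from `grid`:
    fit on the training split, keep the pair with the lowest validation error."""
    best_params, best_err = None, np.inf
    for gamma in grid:
        for sigma2 in grid:
            model = fit(X_tr, y_tr, gamma, sigma2)
            err = score(model, X_val, y_val)
            if err < best_err:
                best_params, best_err = (gamma, sigma2), err
    return best_params, best_err
```

In practice the grid would be a coarse logarithmic sweep of the search range, refined around the best cell.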
From Table 1 and Figure 2, we can see that the proposed AGO-based method has better prediction accuracy. This may be because the AGO-based method avoids the random disturbances existing in the raw time series and improves the regularity of the time series.
In simulation Experiment II, we compare two incremental learning-based methods: one uses the proposed modified LSSVR model, the other the traditional LSSVR model. The other settings are the same as in Experiment I. The prediction results are reported in Table 2.

The results in Table 2 show that the modified LSSVR model reduces the computing time further while keeping the same prediction accuracy as the traditional LSSVR model. Compared with the results of the traditional LSSVR in [4] (shown in Table 1), the incremental learning-based methods have better prediction accuracy, improved by 10.25%. This may be because the prediction information is used more fully by the incremental learning algorithm.
4.2. Application Experiment and Results Analysis
In the application experiment, the sample time series has 2000 data values, which come from a certain complex electronic system, as shown in Figure 3.
We set the first 1000 time series data as training samples, and any 11 consecutive points are taken as a sample, where the first 10 points compose an input sample vector and the last one is the output; that is, in the application experiment we have 990 training samples. We then predict time series data no. 1001 to no. 2000 using the trained prediction model. The test is also repeated 100 times.
The application experiment is executed using the proposed method presented in Section 3 (called the Proposed Method) and the traditional incremental learning-based LSSVR method (called the Traditional Method). The prediction performance is measured by training time (TrTime), prediction time (PrTime), and prediction mean RMSE (PrMR). The results are shown in Figures 4 and 5 and Table 3. In addition, we also utilize the traditional gridding search method to jointly optimize all the parameters, and the search range of $\gamma$ and $\sigma^2$ is [0.1, 200].

Figures 4 and 5 and Table 3 show that the proposed method achieves better prediction accuracy with lower computing time. This may be because modifying the LSSVR model reduces the number of model parameters, while AGO improves the data quality and regularity. Thus, the proposed method may have high practical value.
5. Conclusions
In this paper, we address adaptive online prediction based on the LSSVR model. We utilize two approaches to gain better prediction accuracy. First, we apply the accumulated generating operation (AGO) to the raw time series to improve its quality, which avoids random disturbances and improves the regularity of the time series. Second, we modify the traditional LSSVR model into a simpler form to streamline incremental learning, which helps to reduce the computing time in the process of adaptive online training and prediction.
We conduct three prediction experiments to demonstrate the effectiveness of the proposed method. The results show that the prediction method has better prediction performance, and it is suitable for adaptive online prediction in practice.
Acknowledgments
The authors would like to thank Professor Ming J. Zuo, the Director of Reliability Research Laboratory of University of Alberta, for his useful comments. This work is supported by the National Basic Research Program of China (973 Program), the National Natural Science Foundation of China (no. 61001023 and no. 61101004), Shaanxi Natural Science Foundation (2010JQ8005), and Aviation Science Foundation of China (2010ZD53039).
References
 W. Caesarendra, A. Widodo, P. H. Thom, B. S. Yang, and J. D. Setiawan, “Combined probability approach and indirect data-driven method for bearing degradation prognostics,” IEEE Transactions on Reliability, vol. 60, no. 1, pp. 14–20, 2011.
 D. Liu, Y. Peng, and X. Y. Peng, “Online adaptive status prediction strategy for data-driven fault prognostics of complex systems,” in Proceedings of the Prognostics and System Health Management Conference (PHM '11), pp. 1–6, Shenzhen, China, May 2011.
 M. Pecht and R. Jaai, “A prognostics and health management roadmap for information and electronics-rich systems,” IEICE Fundamentals Review, vol. 3, no. 4, pp. 25–32, 2010.
 J. Qu and M. J. Zuo, “An LSSVR-based algorithm for online system condition prognostics,” Expert Systems with Applications, vol. 39, no. 5, pp. 6089–6102, 2012.
 M. Qi and G. P. Zhang, “Trend time-series modeling and forecasting with neural networks,” IEEE Transactions on Neural Networks, vol. 19, no. 5, pp. 808–816, 2008.
 V. V. Gavrishchaka and S. Banerjee, “Support vector machine as an efficient framework for stock market volatility forecasting,” Computational Management Science, vol. 3, no. 2, pp. 147–160, 2006.
 Y. M. Guo, C. B. Ran, X. L. Li, and J. Z. Ma, “Adaptive online prediction method based on LSSVR and its application in an electronic system,” Journal of Zhejiang University Science C, vol. 13, no. 12, pp. 881–890, 2012.
 V. Kecman, Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models, MIT Press, Cambridge, Mass, USA, 2001.
 J. V. Hansen and R. D. Nelson, “Neural networks and traditional time series methods: a synergistic combination in state economic forecasts,” IEEE Transactions on Neural Networks, vol. 8, no. 4, pp. 863–873, 1997.
 V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
 J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle, Least Squares Support Vector Machines, World Scientific Publishing, 2002.
 G. Hebin and G. Xiaoqing, “Application of least squares support vector regression in network flow forecasting,” in Proceedings of the 2nd International Conference on Computer Engineering and Technology (ICCET '10), pp. V7342–V7345, April 2010.
 Y. H. Zhao, P. Zhong, and K. N. Wang, “Application of least squares support vector regression based on time series in prediction of gas,” Journal of Convergence Information Technology, vol. 6, no. 1, pp. 243–250, 2011.
 Y. Guo, Z. Zhai, and H. Jiang, “Weighted prediction of multi-parameter chaotic time series using least squares support vector regression (LSSVR),” Journal of Northwestern Polytechnical University, vol. 27, no. 1, pp. 83–87, 2009.
 A. Shilton, D. T. H. Lai, and M. Palaniswami, “A division algebraic framework for multidimensional support vector regression,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 40, no. 2, pp. 517–528, 2010.
 Y. M. Guo, X. L. Li, G. H. Bai, and J. Z. Ma, “Time series prediction method based on LSSVR with modified Gaussian RBF,” in Neural Information Processing, vol. 7664 of Lecture Notes in Computer Science, Part 2, pp. 9–17, 2012.
 H. Liu and Z. H. Sun, “An online algorithm for support vector machine based on subgradient projection,” Control Engineering and Applied Informatics, vol. 13, no. 3, pp. 18–24, 2011.
 S. Shalev-Shwartz, Online Learning: Theory, Algorithms, and Applications, Ph.D. thesis, The Hebrew University, July 2007.
 W. M. Tang, “New forecasting model based on grey support vector machine,” Journal of Systems Engineering, vol. 21, no. 4, pp. 410–413, 2006.
 D. H. Zhan, Y. X. Bi et al., “Power load forecasting method based on series grey neural network,” Systems Engineering - Theory & Practice, no. 23, pp. 128–132, 2004.
 J. L. Deng, The Primary Methods of Grey System Theory, Huazhong University of Science and Technology Press, Wuhan, China, 2004.
 F. Ojeda, J. A. K. Suykens, and B. De Moor, “Low rank updated LS-SVM classifiers for fast variable selection,” Neural Networks, vol. 21, no. 2-3, pp. 437–449, 2008.
 Y. Engel, S. Mannor, and R. Meir, “Sparse online greedy support vector regression,” in Proceedings of the 13th European Conference on Machine Learning, Berlin, Germany, 2002.
 J. K. Zhang, Linear Model's Parameters Estimation Method and Its Improvement, National University of Defense Technology Press, Changsha, China, 1992.
 C. C. Cowen, Linear Algebra for Engineering and Science, West Pickle Press, West Lafayette, Ind, USA, 1996.
 M. Y. Ye, X. D. Wang, and H. R. Zhang, “Chaotic time series forecasting using online least squares support vector machine regression,” Acta Physica Sinica, vol. 54, no. 6, pp. 2568–2573, 2005.
Copyright
Copyright © 2012 Guo Yangming et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.