The Scientific World Journal

Volume 2015, Article ID 589093, 14 pages

http://dx.doi.org/10.1155/2015/589093

## Chaos Time Series Prediction Based on Membrane Optimization Algorithms

^{1}School of Radio Management Technology Research Center, Xihua University, Chengdu 610039, China

^{2}School of Computer Science and Technology, Sichuan Police College, Luzhou 646000, China

Received 26 June 2014; Revised 27 August 2014; Accepted 27 August 2014

Academic Editor: Shifei Ding

Copyright © 2015 Meng Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper puts forward a chaotic time series prediction model based on a membrane computing optimization algorithm; the model uses the algorithm to optimize simultaneously the parameters of phase space reconstruction and of the least squares support vector machine (LS-SVM). Accurately predicting the trend of parameters of the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is therefore used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).

#### 1. Introduction

Chaotic time series is a kind of nonlinear dynamic phenomenon between determinism and randomness, for which the Lyapunov exponent is adopted to decide whether a time series is chaotic or not; that is, a time series is chaotic if its largest Lyapunov exponent is greater than zero [1]. Because it can be widely applied in real life, such as network traffic, earthquake prediction, and weather forecasting [2–5], chaotic time series prediction has become a hot spot, and many interesting results have been provided by researchers in recent years [6, 7].
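The positive-Lyapunov-exponent criterion above can be illustrated on a textbook system. The following sketch (not from the paper; the logistic map and its parameters are chosen purely for illustration) estimates the Lyapunov exponent of the logistic map by averaging the log of the local stretching rate along an orbit; for the fully chaotic parameter r = 4 the exponent is known to equal ln 2 > 0.

```python
import numpy as np

# Estimate the Lyapunov exponent of the logistic map x_{n+1} = r*x*(1-x)
# as the orbit average of ln|f'(x_n)| = ln|r*(1 - 2x_n)|.
def logistic_lyapunov(r=4.0, x0=0.3, n_transient=1000, n_iter=100_000):
    x = x0
    # discard a transient so the orbit settles onto the attractor
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_iter

lam = logistic_lyapunov()
print(lam)  # close to ln 2 ≈ 0.693, positive, so the series is chaotic
```

A positive result of this kind is exactly the chaos test mentioned in the text; a periodic orbit would instead give a nonpositive average.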

Initially, traditional statistical fitting methods, such as the autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) models, were used in chaotic time series prediction. However, due to their inherent linearity assumptions, these conventional mathematical tools are not well suited for dealing with ill-defined and uncertain systems. With the recent development of chaos theory, numerous nonlinear systems have been identified to be chaotic despite their seemingly random behaviors. Among prediction methods for such systems, the local model is an important one for chaotic time series: it projects the chaotic time series into a multidimensional phase space, which is then divided into several subspaces where the mapping function is approximated by means of local approximation [8–10]. Chaotic time series prediction based on nonlinear models shows, in general, superior performance over the traditional statistical fitting methods. As another alternative for dealing with nonlinear systems, the support vector machine (SVM) was proposed in [11, 12] based on the principles of statistical VC (Vapnik-Chervonenkis) dimension theory and structural risk minimization. SVM handles well problems involving nonlinearity, the curse of dimensionality, and small samples, and it has been widely used in face recognition, speech recognition [13–15], and so forth. Because of its universal approximation capability, the least squares support vector machine (LS-SVM) [16] has recently been applied to predict chaotic time series [17, 18]. In this model, firstly, the phase space reconstruction technique of chaos theory is used to reconstruct the nonlinear data; then least squares support vector machine regression is applied in the multidimensional phase space.

Formally, the phase space reconstruction method is determined by a delay time and an embedding dimension; that is, for a given time series $\{x_i\}_{i=1}^{N}$ ($N$ is the number of data points), by using delay time $\tau$ and embedding dimension $m$, the phase points after reconstruction of the time series are $X_i = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau})$, $i = 1, 2, \ldots, M$, where $\tau$ is the delay time, $m$ is the embedding dimension, and $M = N - (m-1)\tau$ is the number of phase space points [19]. Accordingly, the prediction value at the next time based on LS-SVM can be expressed as $\hat{x}_{i+(m-1)\tau+1} = f(X_i)$, where $f(\cdot)$ is the regression estimation function.
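The delay-coordinate construction just described can be sketched in a few lines; this is a minimal illustration of the standard embedding, with a toy sine series chosen only as an example input.

```python
import numpy as np

# Delay-coordinate phase space reconstruction: given a scalar series
# x_1..x_N, delay tau, and embedding dimension m, build the phase points
# X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}); there are M = N - (m-1)*tau
# such points.
def delay_embed(x, tau, m):
    x = np.asarray(x)
    M = len(x) - (m - 1) * tau          # number of phase points
    return np.column_stack([x[j * tau : j * tau + M] for j in range(m)])

x = np.sin(0.1 * np.arange(100))        # toy series, N = 100
X = delay_embed(x, tau=5, m=3)
print(X.shape)                          # (90, 3), since M = 100 - 2*5 = 90
```

Each row of `X` is one phase point; the LS-SVM regression then maps each row to the series value one step beyond its last coordinate.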

In applications, there are two key problems in the prediction model based on LS-SVM. One is the choice of the delay time ($\tau$) and embedding dimension ($m$) in the process of phase space reconstruction. The other is the selection of the kernel function and its relevant parameters [20]. Phase space reconstruction is used to unfold the trace of the evolution of the chaotic time series without singularities; namely, the chaotic time series is projected into a multidimensional phase space. The kernel function is associated with learning and modeling on the reconstructed data set so as to forecast future values accurately. A large number of studies have shown that the selection of the delay time ($\tau$) and embedding dimension ($m$) in phase space reconstruction has a direct impact on the prediction results for chaotic time series [21]. If $\tau$ is too small, neighboring elements of the phase space vectors are strongly correlated and there is information redundancy; if it is too large, information is lost and the trajectory of the signal exhibits a folding phenomenon. Similarly, if $m$ is too small, it is not enough to reveal the detailed structure of the chaotic system; if $m$ is too large, the calculation becomes complicated and the impact of noise grows.
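One widely used heuristic for the delay-time trade-off just described is to take $\tau$ at the first local minimum of the average mutual information (AMI) between $x_t$ and $x_{t+\tau}$; this is offered only as an illustration of how a $\tau$ that is neither too small nor too large can be found, and is not necessarily the criterion adopted in this paper. The histogram bin count and the toy noisy sine series below are arbitrary choices.

```python
import numpy as np

# Average mutual information between x_t and x_{t+tau}, estimated from a
# joint 2-D histogram; redundancy (tau too small) gives high AMI, while
# the first local minimum marks a delay with little shared information.
def ami(x, tau, bins=16):
    a, b = x[:-tau], x[tau:]
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
x = np.sin(0.2 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
amis = [ami(x, t) for t in range(1, 20)]    # amis[t-1] is AMI at delay t

tau = None
for t in range(1, len(amis) - 1):
    if amis[t] < amis[t - 1] and amis[t] < amis[t + 1]:
        tau = t + 1                          # first local minimum
        break
print(tau)
```

For this periodic toy series the first AMI minimum falls near a quarter period, which matches the intuition that coordinates separated by $\tau$ should be informative but not redundant.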

LS-SVM learning performance is largely dependent on the choice of kernel function. A large number of studies have shown that, in the absence of a priori knowledge about a specific problem, the overall performance of the radial basis function (RBF) kernel model is better than that of other kernel models, and hence this paper selects the RBF kernel as the kernel function of the LS-SVM. In the model there are then two parameters, the cost factor ($\gamma$) and the kernel parameter ($\sigma$), that need to be identified. The cost factor $\gamma$ is generally used to control the trade-off between model complexity and approximation error. The kernel parameter $\sigma$ reflects the structure of the high-dimensional feature space and affects the generalization ability of the system: when the value of $\sigma$ is too small, an over-learning phenomenon with poor generalization occurs, while when the value of $\sigma$ is too large, an under-learning phenomenon emerges [22]. Currently, there are mainly two ideas for optimizing the parameters of the phase space reconstruction and the LS-SVM. One is to optimize the parameters separately, as shown in Figure 1: firstly, the optimal delay time ($\tau$) and embedding dimension ($m$) of the phase space are selected independently [19, 23–28] or at the same time [27, 29, 30]; then the parameters $\gamma$ and $\sigma$ of the LS-SVM are selected by the gradient descent method [31], genetic algorithm (GA) [32], particle swarm optimization (PSO) [33], and so forth. The other idea is to optimize the parameters jointly, that is, to take the parameters ($\tau$, $m$, $\gamma$, $\sigma$) as a whole and carry out the optimization [34].
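To make the role of the cost factor and RBF kernel parameter concrete, the following sketch trains an LS-SVM regressor on an embedded toy series. It uses Suykens' standard formulation, in which training reduces to one linear system; the particular values of the cost factor and kernel width, and the sine test series, are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# RBF (Gaussian) kernel matrix between row sets A and B.
def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# LS-SVM regression: solve the linear system
#   [ 0      1^T         ] [ b     ]   [ 0 ]
#   [ 1   K + (1/gamma)I ] [ alpha ] = [ y ]
# where gamma is the cost factor and sigma the kernel width.
def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

# one-step-ahead prediction on a delay-embedded toy series
x = np.sin(0.3 * np.arange(120))
m, tau = 3, 1
M = len(x) - (m - 1) * tau - 1
X = np.column_stack([x[j * tau : j * tau + M] for j in range(m)])
y = x[(m - 1) * tau + 1 : (m - 1) * tau + 1 + M]
predict = lssvm_fit(X, y)
```

Shrinking `gamma` strengthens the ridge term `(1/gamma)I` (a simpler, smoother model), while `sigma` sets the kernel's length scale, mirroring the over-/under-learning behavior described above; an optimizer such as the membrane algorithm would search over $(\tau, m, \gamma, \sigma)$ around exactly this training step.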