The Scientific World Journal
Volume 2015, Article ID 589093, 14 pages
http://dx.doi.org/10.1155/2015/589093
Research Article

Chaos Time Series Prediction Based on Membrane Optimization Algorithms

1School of Radio Management Technology Research Center, Xihua University, Chengdu 610039, China
2School of Computer Science and Technology, Sichuan Police College, Luzhou 646000, China

Received 26 June 2014; Revised 27 August 2014; Accepted 27 August 2014

Academic Editor: Shifei Ding

Copyright © 2015 Meng Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper puts forward a chaotic time series prediction model based on a membrane computing optimization algorithm; the model uses the algorithm to optimize simultaneously the parameters of phase space reconstruction and of the least squares support vector machine (LS-SVM). Accurately predicting the trend of the parameters of the electromagnetic environment is an important basis for spectrum management and helps decision makers adopt an optimal course of action. The model is therefore used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, it is compared with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best under three error measures: normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).

1. Introduction

A chaotic time series is a nonlinear dynamic phenomenon lying between determinism and randomness; the Lyapunov exponent is commonly used to decide whether a time series is chaotic, a series being chaotic if its largest Lyapunov exponent is greater than zero [1]. Because of its wide applicability in real life, for example to network traffic, earthquake prediction, and weather forecasting [2–5], chaotic time series prediction has become a hot topic, and many interesting results have been provided by researchers in recent years [6, 7].

Initially, traditional statistical fitting methods, such as the autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) models, were used for chaotic time series prediction. However, due to their inherent linearity assumptions, these conventional mathematical tools are not well suited to ill-defined and uncertain systems. With recent developments in chaos theory, numerous nonlinear systems have been identified as chaotic despite their apparently random behavior. Among nonlinear approaches, the local model is an important method for chaotic time series: the series is projected into a multidimensional phase space, which is then divided into several subspaces in which the mapping function is approximated by local approximation [8–10]. Chaotic time series prediction based on nonlinear models generally shows superior performance over the traditional statistical fitting methods. As another alternative for nonlinear systems, the support vector machine (SVM) was proposed in [11, 12], based on the principles of statistical VC (Vapnik-Chervonenkis) dimension theory and structural risk minimization. SVM handles problems such as nonlinearity and the curse of dimensionality well and performs strongly on small samples; it has been widely used in face recognition, speech recognition [13–15], and so forth. Because of its universal approximation capability, the least squares support vector machine (LS-SVM) [16] has recently been applied to chaotic time series prediction [17, 18]. In such a model, the phase space reconstruction technique of chaos theory is first used to reconstruct the nonlinear data; then LS-SVM regression is applied in the multidimensional phase space.

Formally, phase space reconstruction is determined by the delay time and the embedding dimension; that is, for a given time series $\{x_i\}_{i=1}^{N}$ ($N$ is the number of data points), with delay time $\tau$ and embedding dimension $m$, the phase points after reconstruction are $X_j = (x_j, x_{j+\tau}, \ldots, x_{j+(m-1)\tau})$, $j = 1, 2, \ldots, M$, where $M = N - (m-1)\tau$ is the number of phase space points [19]. Accordingly, the prediction of the next value based on LS-SVM can be expressed as $\hat{x}_{j+(m-1)\tau+1} = f(X_j)$, where $f(\cdot)$ is the regression estimate function.

In applications, there are two key problems in the LS-SVM-based prediction model. One is the choice of the delay time ($\tau$) and embedding dimension ($m$) in the process of phase space reconstruction. The other is the selection of the kernel function and its parameters [20]. Phase space reconstruction is used to unfold the evolution trace of the chaotic time series without singularity; namely, the chaotic time series is projected into a multidimensional phase space. The kernel function is used to learn from and model the reconstructed data set so as to forecast future values accurately. A large number of studies have shown that the selection of the delay time ($\tau$) and embedding dimension ($m$) has a direct impact on the prediction results [21]. If $\tau$ is too small, neighboring components of the delay vectors become redundant; if it is too large, information is lost and the signal trajectory exhibits folding. Similarly, if $m$ is too small, the detailed structure of the chaotic system cannot be represented; if $m$ is too large, the computation becomes complicated and the impact of noise increases.

LS-SVM learning performance largely depends on the choice of kernel function. Many studies have shown that, in the absence of prior knowledge about the specific problem, the radial basis function (RBF) kernel generally outperforms other kernels; hence this paper adopts the RBF kernel for the LS-SVM. The model then has two parameters to identify: the cost factor ($\gamma$) and the kernel parameter ($\sigma$). The cost factor controls the trade-off between model complexity and approximation error. The kernel parameter reflects the structure of the high-dimensional feature space and affects the generalization ability of the system: when $\sigma$ is too small, overfitting and poor generalization occur, while when $\sigma$ is too large, underfitting emerges [22]. Currently, there are two main approaches to optimizing the parameters of the phase space reconstruction and the LS-SVM. One is to optimize the parameters separately, as shown in Figure 1: first, the optimal delay time ($\tau$) and embedding dimension ($m$) are selected independently [19, 23–28] or simultaneously [27, 29, 30]; then the LS-SVM parameters $\gamma$ and $\sigma$ are selected by the gradient descent method [31], genetic algorithm (GA) [32], particle swarm optimization (PSO) [33], and so forth. The other approach is to optimize the parameters jointly, that is, to treat ($\tau$, $m$, $\gamma$, $\sigma$) as a whole during optimization [34].

Figure 1: The flow chart of chaotic time series prediction.

Membrane systems, presented in [35] and also called P systems, are bio-inspired computing models belonging to the broader family of biological or natural computing [36, 37]; they form a distributed and parallel computing model with a hierarchical structure. Recently, membrane systems have been widely applied in many fields, such as gasoline blending scheduling, radar emitter signal analysis, and image skeletonizing [38–40]. This paper uses a cell-like membrane computing optimization algorithm to optimize simultaneously the parameters of the phase space reconstruction and LS-SVM (namely, $\tau$, $m$, $\gamma$, and $\sigma$). Accurately predicting the trend of the parameters of the electromagnetic environment is an important basis for spectrum management, helping decision makers develop an optimal action program. The model presented in this paper is therefore used to predict the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band.

The rest of this paper is organized as follows. Section 2 briefly reviews phase space reconstruction, LS-SVM regression, and membrane computing. Section 3 describes in detail the parameters joint optimization algorithm of the prediction model. Section 4 applies the proposed prediction model to predicting the parameters of the electromagnetic environment. Conclusions are given in Section 5.

2. Preliminaries

2.1. Phase Space Reconstruction and LS-SVM Regression

Let the time series be $\{x_i\}_{i=1}^{N}$; after phase space reconstruction, the points in phase space can be expressed as [41, 42]
$$X_j = (x_j, x_{j+\tau}, \ldots, x_{j+(m-1)\tau}), \quad j = 1, 2, \ldots, M,$$
where $M = N - (m-1)\tau$ is the number of phase space points, $\tau$ denotes the delay time, and $m$ is the embedding dimension.
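The delay-coordinate embedding above can be sketched in a few lines. This is a minimal NumPy illustration; the function name and the example values are ours, not from the paper:

```python
import numpy as np

def reconstruct_phase_space(series, m, tau):
    """Delay-coordinate embedding: map a scalar series x_1..x_N to
    points X_j = (x_j, x_{j+tau}, ..., x_{j+(m-1)tau}).
    Returns an (M, m) array with M = N - (m - 1) * tau points."""
    series = np.asarray(series, dtype=float)
    M = len(series) - (m - 1) * tau        # number of reconstructed points
    if M <= 0:
        raise ValueError("series too short for the given m and tau")
    return np.array([series[j : j + (m - 1) * tau + 1 : tau]
                     for j in range(M)])

# Example: N = 10, m = 3, tau = 2 gives M = 10 - 4 = 6 phase points
X = reconstruct_phase_space(np.arange(10), m=3, tau=2)
```

Each row of `X` is one phase point; the label paired with row $j$ in training is the next sample after the row's last component.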

Assume given sample data $\{(x_k, y_k)\}_{k=1}^{n}$, where $x_k \in \mathbb{R}^m$ is the sample input and $y_k \in \mathbb{R}$ is the sample output. The regression principle of LS-SVM can be explained as follows:
$$y(x) = w^T \varphi(x) + b,$$
where $\varphi(\cdot)$ is a nonlinear mapping from the input space to the feature space, $w$ is the vector of weight coefficients, and $b$ is a bias constant.

The optimal hyperplane is determined by the maximum geometric margin. Hence the LS-SVM regression problem can be transformed as follows [43]:
$$\min_{w, b, e} \; \frac{1}{2} w^T w + \frac{\gamma}{2} \sum_{k=1}^{n} e_k^2 \quad \text{s.t.} \quad y_k = w^T \varphi(x_k) + b + e_k, \quad k = 1, \ldots, n,$$
where $e_k$ are the error variables and $\gamma > 0$ is the regularization hyperparameter. Finding the optimal decision function amounts to determining the parameters $w$ and $b$.

Introducing Lagrange multipliers, one can establish the Lagrange function as follows:
$$L(w, b, e; \alpha) = \frac{1}{2} w^T w + \frac{\gamma}{2} \sum_{k=1}^{n} e_k^2 - \sum_{k=1}^{n} \alpha_k \left( w^T \varphi(x_k) + b + e_k - y_k \right),$$
where $\alpha_k$ are the Lagrange multipliers. The conditions for optimality are given by
$$\frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{k=1}^{n} \alpha_k \varphi(x_k), \quad
\frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{k=1}^{n} \alpha_k = 0, \quad
\frac{\partial L}{\partial e_k} = 0 \Rightarrow \alpha_k = \gamma e_k, \quad
\frac{\partial L}{\partial \alpha_k} = 0 \Rightarrow w^T \varphi(x_k) + b + e_k - y_k = 0.$$
After elimination of the variables $w$ and $e$, a set of linear equations is obtained:
$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix},$$
where $\Omega_{kl} = \varphi(x_k)^T \varphi(x_l)$, $I$ denotes the identity matrix, $\mathbf{1} = (1, \ldots, 1)^T$, $\alpha = (\alpha_1, \ldots, \alpha_n)^T$, and $y = (y_1, \ldots, y_n)^T$.

Then, the LS-SVM regression model is expressed as
$$y(x) = \sum_{k=1}^{n} \alpha_k \varphi(x)^T \varphi(x_k) + b.$$
By Mercer's theorem, the inner products of the mapping function can be replaced by any kernel function $K(\cdot,\cdot)$ satisfying the Mercer condition:
$$K(x_k, x_l) = \varphi(x_k)^T \varphi(x_l).$$
This finally results in the following LS-SVM model for function regression:
$$y(x) = \sum_{k=1}^{n} \alpha_k K(x, x_k) + b.$$
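Training an LS-SVM thus amounts to solving one $(n+1)\times(n+1)$ linear system. The following NumPy sketch implements that formulation with an RBF kernel; the function names and the toy sine data are illustrative, not from the paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    # K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM system
        [ 0   1^T         ] [ b ]   [ 0 ]
        [ 1   K + I/gamma ] [ a ] = [ y ]
    for the multipliers a and bias b."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                       # alpha, b

def lssvm_predict(Xnew, X, alpha, b, sigma):
    # y(x) = sum_k alpha_k K(x, x_k) + b
    return rbf_kernel(Xnew, X, sigma) @ alpha + b

# Toy check: fit a noiseless sine and evaluate the training fit
X = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
alpha, b = lssvm_fit(X, y, gamma=100.0, sigma=0.5)
yhat = lssvm_predict(X, X, alpha, b, sigma=0.5)
```

Because the constraints are equalities, training needs only one linear solve, which is what makes LS-SVM cheap enough to sit inside an outer parameter-search loop.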

As shown in Figure 1, the prediction model based on phase space reconstruction and LS-SVM regression has two main steps. First, select the delay time ($\tau$), embedding dimension ($m$), and LS-SVM parameters ($\gamma$ and $\sigma$); then use the phase space reconstruction technique with the chosen $\tau$ and $m$ to build the training sample pairs. Assuming the time series is $\{x_i\}_{i=1}^{N}$, the training sample set of attributes is
$$X_j = (x_j, x_{j+\tau}, \ldots, x_{j+(m-1)\tau}), \quad j = 1, 2, \ldots, M-1,$$
and the corresponding set of labels is $\{x_{j+(m-1)\tau+1}\}$. Second, predict future points: select the attribute sample of the previous time step in phase space as input, and use the trained LS-SVM model to obtain the predicted value for the next moment.

2.2. Membrane Computing

Membrane computing (namely, P systems) arises as a new model of computation, inspired by the way cells are structured into vesicles and abstracting the chemical reactions taking place inside them [44]. It is a branch of molecular computing that aims to develop biologically motivated models and paradigms, and there has been a flurry of research activity in this area in recent years [45]. Because of the maximal parallelism built into the models, P systems have great potential for implementing massively concurrent systems efficiently, which could allow us to attack currently intractable problems.

A membrane system of degree $m$ can be expressed as
$$\Pi = (V, T, C, \mu, w_1, \ldots, w_m, (R_1, \rho_1), \ldots, (R_m, \rho_m)),$$
where $V$ is an alphabet whose elements are called objects; $T \subseteq V$ denotes the output alphabet; $C \subseteq V$ is the set of catalysts, which do not change in the course of evolution but whose participation is required by some reactions; $\mu$ is the membrane structure consisting of $m$ membranes; $w_1, \ldots, w_m$ denote the multisets of objects in the regions of the membrane structure; and $(R_1, \rho_1), \ldots, (R_m, \rho_m)$ are the sets of rules, in which $R_i$ and $\rho_i$ denote the rules and the priority relation over the rules, respectively.

In general, a P system contains three core elements: a membrane structure, multisets of objects, and evolution rules. Given the membrane structure, evolution rules, and initial objects, the system applies its rules in a nondeterministic, maximally parallel manner; when no rule can be applied any longer, the system halts. A typical membrane system consists of cell-like membranes placed inside a unique "skin" membrane. Multisets of objects, usually strings of symbols, and a set of evolution rules are placed inside the regions delimited by the membranes. Each object can be transformed into other objects, can pass through a membrane, or can dissolve or create membranes. The evolution between system configurations is done nondeterministically by applying the rules in parallel to all objects able to evolve [46]. Figure 2 shows a simple membrane structure diagram: the skin membrane, which is the outermost membrane, separates the system from its environment; several membranes, each defining a region, are placed inside it; elementary membranes contain no further membranes. Each region forms a distinct compartment of the membrane structure and contains a multiset of objects together with its rules.

Figure 2: Simple membrane structure diagram.

3. Parameters Joint Optimization Algorithm Based on Membrane Computing

The optimization algorithm based on cell-like membrane computing is an important branch of membrane computing. It is an intelligent optimization algorithm inspired by the mechanism and function of biological cells and built on the existing framework of membrane computing. Its steps generally comprise establishing the membrane structure, generating objects, evolving them, and so forth. Figure 3 shows the structure of the P-LSSVM prediction model: the initial objects serve as the initial parameters of the prediction model and are substituted into the phase space reconstruction and LS-SVM model; the parameters joint optimization algorithm based on membrane computing is then used to decide the best combination of parameters.

Figure 3: The structure of P-LSSVM prediction model.
3.1. The Establishment of the Cellular Membrane Structure and the Generation of Objects

As shown in Figure 4, this paper adopts a two-layer membrane structure: a skin membrane contains the elementary membranes, and initial objects are generated in each membrane. Generally, a P system encodes objects as characters or character strings; real-number encoding is adopted here, which avoids the trouble of decoding. For instance, an object is a real vector whose four components denote $\tau$, $m$, $\gamma$, and $\sigma$, respectively. We regard each object as a candidate solution of the optimization problem. Each membrane evolves according to its own rules, and all membranes execute in parallel. The final optimal result, that is, the optimal solution, is output through the skin membrane.

Figure 4: Membrane structure.
3.2. Construct the Fitness Function

The goal of the cell-like membrane computing optimization algorithm is to find the most suitable combination of parameters ($\tau$, $m$, $\gamma$, and $\sigma$) in order to establish the optimal forecasting model. In this paper, we use the root mean square prediction error (RMSE) to construct the fitness function; that is,
$$\text{fitness} = \sqrt{\frac{1}{l} \sum_{i=1}^{l} (x_i - \hat{x}_i)^2},$$
where $l$ denotes the number of prediction points and $x_i$, $\hat{x}_i$ represent the real and predicted values, respectively.
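The RMSE fitness is a one-liner; a small sketch (function name ours):

```python
import numpy as np

def fitness(y_true, y_pred):
    """Fitness of an object = RMSE of its predictions; smaller is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Example: per-point errors of (-1, 1, -1, 1) give RMSE = 1
f = fitness([1, 2, 3, 4], [2, 1, 4, 3])
```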

3.3. Operation Rules

The basic rules of cellular membrane computing optimization method are selection, crossover, mutation, and communication [47]. The specific form is as follows.

Selection rule: the selection rule copies objects to the next generation according to their "size"; here size does not mean the physical size of particles in biological cells but the value of the fitness function. The roulette wheel method is used to select objects for the next generation.

Crossover rule: for any two objects $o_1$ and $o_2$, the crossover rule produces a new object $o'$:
$$o' = r \, o_1 + (1 - r) \, o_2,$$
where $r$ is a random number in $(0, 1)$.

Mutation rule: during evolution, according to a certain mutation probability, the worst objects are replaced with randomly generated objects. The mutation rule is described as follows:
$$[\,o_{\text{worst}}\,]_i \rightarrow [\,o_{\text{rand}}\,]_i,$$
where $[\;]_i$ denotes membrane $i$, $o_{\text{worst}}$ are the objects with the worst fitness in membrane $i$, and $o_{\text{rand}}$ are randomly generated objects.

Communication rule: each membrane transports its best objects out of the membrane, while the best objects from outside the membrane are brought in [48]. This rule can be expressed as follows:
$$[\,o_{\text{best}}\,]_i \rightarrow o_{\text{best}} \, [\;]_i, \qquad o'_{\text{best}} \, [\;]_i \rightarrow [\,o'_{\text{best}}\,]_i,$$
where $[\;]_i$ denotes membrane $i$, $o_{\text{best}}$ are the best objects in membrane $i$, and $o'_{\text{best}}$ are the best objects outside membrane $i$.
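The crossover, mutation, and communication rules can be sketched as follows. This is a toy illustration; the concrete replacement policies (e.g., broadcasting the single global best during communication, replacing exactly one worst object per mutation) are our simplifying assumptions, not prescribed by the paper:

```python
import random

def crossover(o1, o2):
    """Arithmetic crossover: o' = r*o1 + (1 - r)*o2 with r ~ U(0, 1),
    so each coordinate of the child stays between the parents' values."""
    r = random.random()
    return [r * a + (1 - r) * b for a, b in zip(o1, o2)]

def mutate(membrane, fits, bounds):
    """Replace the worst object (largest fitness, since fitness is an
    error to be minimized) with a randomly generated object."""
    worst = max(range(len(membrane)), key=fits.__getitem__)
    membrane[worst] = [random.uniform(lo, hi) for lo, hi in bounds]

def communicate(membranes, fits):
    """Each membrane exports its best object; here the overall best is
    broadcast back, replacing the worst object of every membrane."""
    best = min((f[i], m[i]) for m, f in zip(membranes, fits)
               for i in range(len(m)))[1]
    for m, f in zip(membranes, fits):
        m[max(range(len(m)), key=f.__getitem__)] = list(best)

# Crossover keeps each coordinate between the parents' values
child = crossover([0.0, 10.0], [1.0, 20.0])

# Communication copies the globally best object over each membrane's worst
mems = [[[1.0], [5.0]], [[3.0], [2.0]]]
fits = [[1.0, 5.0], [3.0, 2.0]]
communicate(mems, fits)
```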

3.4. Parameters Joint Optimization Algorithm Specific Steps in P-LSSVM Model

First, generate the initial objects as the initial parameters of the prediction model; then apply the evolution rules until the stopping conditions are met, with all membranes operating in parallel. Finally, the skin membrane outputs the best object, that is, the optimal solution. Specifically, consider the following.

Step 1. Initialize parameters and build the cellular membrane structure.
Initialization: set the number of elementary membranes, the number of objects in each membrane, the largest number of iterations, the crossover probability, the mutation probability, the current iteration number, and so forth.
Create the membrane structure as shown in Figure 4, randomly generating objects in each membrane; each object represents one combination of parameters, expressed in real-number coding.

Step 2. Optimize each membrane in turn.
Treat every object in the membrane as a set of parameters ($\tau$, $m$, $\gamma$, and $\sigma$) of the P-LSSVM model; calculate the fitness of each object on the training data and save the optimal object and its fitness.
Use the selection, crossover, and mutation rules to evolve the objects.

Step 3. Apply the communication rule: each membrane transports its best objects out, while the best objects from outside the membrane are brought in.

Step 4. Check the termination condition, that is, whether the maximum number of iterations has been reached; if not, return to Step 2 and continue iterating; otherwise, stop.

Step 5. The optimal object is output from the skin membrane.
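Steps 1–5 can be condensed into a runnable sketch. This is a simplified serial simulation of the parallel membranes; the toy quadratic fitness stands in for the P-LSSVM training RMSE, and the parameter ranges and rule probabilities are illustrative, not the paper's settings:

```python
import random

random.seed(1)

# Illustrative search ranges for (tau, m, gamma, sigma)
BOUNDS = [(1, 20), (2, 10), (0.1, 1000.0), (0.01, 100.0)]

def random_object():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def evolve(fitness, n_membranes=3, n_objects=8, max_iter=30, pc=0.8, pm=0.2):
    """Steps 1-5 in miniature: membranes evolve their objects by
    selection/crossover/mutation, exchange bests each generation, and
    the skin membrane finally outputs the overall best object."""
    membranes = [[random_object() for _ in range(n_objects)]
                 for _ in range(n_membranes)]                 # Step 1
    for _ in range(max_iter):
        for mem in membranes:                                 # Step 2
            mem.sort(key=fitness)          # selection: rank by fitness
            if random.random() < pc:       # crossover of the two best
                r = random.random()
                mem[-1] = [r * a + (1 - r) * b
                           for a, b in zip(mem[0], mem[1])]
            if random.random() < pm:       # mutation replaces the worst
                mem[-1] = random_object()
        best = min((min(mem, key=fitness) for mem in membranes),
                   key=fitness)
        for mem in membranes:                                 # Step 3
            mem[mem.index(max(mem, key=fitness))] = list(best)
    return min((min(mem, key=fitness) for mem in membranes),
               key=fitness)                                   # Step 5

# Toy fitness standing in for the P-LSSVM training RMSE
target = [5.0, 4.0, 100.0, 1.0]
best = evolve(lambda obj: sum((a - b) ** 2 for a, b in zip(obj, target)))
```

In the real model the lambda would train an LS-SVM with the object's $(\tau, m, \gamma, \sigma)$ and return its training-set RMSE (Step 4, the iteration cap, is the `for` loop bound).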

4. Electromagnetic Environment Parameters Predictions Based on P-LSSVM Model

The electromagnetic spectrum is a fundamental strategic resource supporting the national economy and national defense; with the rapid development of information technology, it is widely used in fields such as economic development, national defense construction, and social life [49]. Its strategic value and foundational role are increasingly prominent, while frequency contradictions among countries, departments, and military and civil businesses grow sharper [50]. Comprehensively grasping the trend of the electromagnetic-environment parameters of a country or region is an important basis for spectrum management [51], and mastering frequency information underpins frequency planning, frequency allocation and sharing, and frequency recovery work. The situation of the electromagnetic environment is reflected by indicator parameters, which mainly include the band occupancy rate, channel occupancy rate, large-signal ratio, frequency offset, and field strength. A large number of experiments have shown that time series data in the electromagnetic environment are chaotic. Hence, we used the proposed prediction model to predict the indicator parameters of the electromagnetic environment; the experimental results show that the model is reasonable and effective.

Here, we chose the band occupancy rate for the test. The band occupancy rate is calculated as follows: extract all the signal points in the spectrum data, merge signal points whose distance is less than the bandwidth, and compute
$$\text{OccupyFreband} = \frac{nB}{f_e - f_s},$$
where $n$ denotes the total number of signals judged, $B$ is the necessary bandwidth in this band for the specified type of business, $f_s$ is the start frequency point, and $f_e$ is the cutoff frequency point.
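One plausible implementation of this calculation (the merging policy and the example frequencies, in MHz, are our assumptions for illustration):

```python
def band_occupancy(signal_points, bandwidth, f_start, f_stop):
    """Count distinct signals after merging detected points that lie
    closer together than the necessary bandwidth B, then return
    OccupyFreband = n * B / (f_e - f_s)."""
    points = sorted(signal_points)
    n_signals = 0
    prev = None
    for f in points:
        if prev is None or f - prev >= bandwidth:
            n_signals += 1        # a new distinct signal
        prev = f                  # else merged with the previous point
    return n_signals * bandwidth / (f_stop - f_start)

# Hypothetical FM-band snapshot: the two points 50 kHz apart merge,
# so n = 2 signals of B = 0.2 MHz over the 87.5-108 MHz band
occ = band_occupancy([90.0, 90.05, 98.0], bandwidth=0.2,
                     f_start=87.5, f_stop=108.0)
```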

4.1. Experimental Data Sources

In this paper, we use an EM100 digital receiver from the German company Rohde & Schwarz and the fixed radio monitoring station of Xihua University to collect the experimental data, covering the frequency modulation (FM) broadcasting band and the interphone band. In Figures 5 and 6, the vertical axis denotes the band occupancy rate and the horizontal axis the collection time. Figure 5 shows the band occupancy rate of the FM broadcasting band, collected for 680 hours, that is, 680 data points. Figure 6 shows the band occupancy rate of the interphone band, collected continuously for 187 hours, that is, 187 data points. For ease of narration, the FM broadcasting band data and the interphone band data are denoted "data set 1" and "data set 2," respectively. The small-data-sets method is used to calculate the largest Lyapunov exponent of each data set; both exponents are positive, which shows that the time series are chaotic.

Figure 5: Band occupancy rate data of FM radio band.
Figure 6: Band occupancy rate data of interphone band.
4.2. Data Preprocessing

This paper mainly uses the Grubbs criterion to deal with abnormal data. The method is as follows: let $\{x_{d,t}\}$ be the sequence of collected data, with a time interval of one hour between two collections, where $t = 1, 2, \ldots, 24$ denotes the hour of the day, $d = 1, 2, \ldots, D$ is the date code over the $D$ days of data collection, and $x_{d,t}$ denotes the datum collected at hour $t$ of day $d$. For each time point $t$, group the samples collected at that hour into the sequence $\{x_{1,t}, \ldots, x_{D,t}\}$; its expectation and variance are
$$\bar{x}_t = \frac{1}{D} \sum_{d=1}^{D} x_{d,t}, \qquad s_t^2 = \frac{1}{D} \sum_{d=1}^{D} (x_{d,t} - \bar{x}_t)^2,$$
where $D$ denotes the length of one such unit.

According to the above two formulas, combined with the Grubbs criterion, a sample point is removed if it satisfies
$$|x_{d,t} - \bar{x}_t| > G(D, \alpha) \, s_t,$$
where $G(D, \alpha)$ is the critical value of the Grubbs criterion, obtained by looking it up in the Grubbs table, and $\alpha$ denotes the chosen significance level.
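A sketch of the per-slot Grubbs filtering described above (the critical value 1.7 is illustrative only; in practice $G$ is looked up in a Grubbs table for the slot size and significance level):

```python
import math

def grubbs_filter(data, period=24, g_crit=1.7):
    """Per-slot outlier removal: samples taken at the same hour-of-day
    form one slot; a sample is dropped when |x - mean| > G * std
    within its slot."""
    keep = []
    for t in range(period):
        slot = data[t::period]                # all samples at hour t
        mean = sum(slot) / len(slot)
        std = math.sqrt(sum((x - mean) ** 2 for x in slot) / len(slot))
        keep.extend(x for x in slot
                    if std == 0 or abs(x - mean) <= g_crit * std)
    return keep

# A flat series with one spike: the spike's slot flags it as an outlier
series = [10.0] * 20
series[6] = 500.0
clean = grubbs_filter(series, period=4)
```

Note the filtered samples are regrouped by slot, so `clean` is ordered slot by slot rather than chronologically; a production version would track indices instead.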

The Grubbs criterion is applied to "data set 1" and "data set 2" separately. After processing, "data set 1" retains 653 data points; the first 600 are used as training data to determine the best parameter combination, and the remaining 53 as test data to assess the prediction accuracy of the model. After processing, "data set 2" retains 180 data points; the first 150 are used as training data and the remaining 30 as test data.

4.3. Reference Model and Evaluation Criteria

To verify the validity of the model, the proposed prediction model (P-LSSVM) is compared with conventional similar prediction models. The first reference model performs parameters joint optimization based on a genetic algorithm for chaotic time series prediction (GA-LSSVM) [34]. The second uses the mutual information method and the Cao method to obtain the best delay time $\tau$ and embedding dimension $m$, respectively, and then a grid search to obtain the LS-SVM parameters $\gamma$ and $\sigma$ (denoted M-C-LSSVM) [19]. The third uses the mutual information method and the false nearest neighbors method to calculate the optimal delay time $\tau$ and embedding dimension $m$, respectively, and then a genetic algorithm to obtain the optimal LS-SVM parameters $\gamma$ and $\sigma$ (denoted M-F-LSSVM) [27]. The fourth uses the C-C method to seek the best delay time $\tau$ and embedding dimension $m$ simultaneously, and then a genetic algorithm for the optimal LS-SVM parameters $\gamma$ and $\sigma$ (denoted C-C-LSSVM) [27, 52].

Meanwhile, this paper uses three evaluation criteria: normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE), defined respectively as
$$\text{NMSE} = \frac{\sum_{i=1}^{l} (x_i - \hat{x}_i)^2}{\sum_{i=1}^{l} (x_i - \bar{x})^2}, \qquad
\text{RMSE} = \sqrt{\frac{1}{l} \sum_{i=1}^{l} (x_i - \hat{x}_i)^2}, \qquad
\text{MAPE} = \frac{1}{l} \sum_{i=1}^{l} \left| \frac{x_i - \hat{x}_i}{x_i} \right| \times 100\%,$$
where $l$ is the number of prediction points, $\bar{x}$ is the average of the real values, and $x_i$ and $\hat{x}_i$ denote the real and predicted values of the $i$th point, respectively.
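The three criteria are straightforward to implement (function names and example values are ours):

```python
import numpy as np

def nmse(y, yhat):
    """Normalized MSE: squared error normalized by the variance of y."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))

def rmse(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mape(y, yhat):
    """Mean absolute percentage error (real values must be nonzero)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)

y = [10.0, 20.0, 30.0, 40.0]
p = [12.0, 18.0, 33.0, 36.0]
scores = (nmse(y, p), rmse(y, p), mape(y, p))
```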

4.4. Experimental Results

In this paper, search ranges are set for each of the parameters $\tau$, $m$, $\gamma$, and $\sigma$. In the evolution process, the other parameters are set as follows: the number of elementary membranes, the number of objects in each membrane, the number of generations, the crossover probability, and the mutation probability. The optimal parameter combinations of each model are shown in Tables 1 and 2.

Table 1: The optimal parameters combination of five models for FM broadcasting band.
Table 2: The optimal parameters combination of five models for interphone band.
4.4.1. Single-Step Prediction

In single-step prediction, the first phase point is selected as input to obtain the first predicted value; the real value of that point is then added to the historical data to predict the next point, and so on until the predicted values of all points are obtained. The prediction results of the five models are shown in Tables 3, 4, 5, 6, 7, and 8 and Figures 7, 8, 9, and 10.
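The rolling scheme can be sketched generically; the `predict_one` callback stands in for the trained LS-SVM model, and the "persistence" predictor in the example is only a toy:

```python
def single_step_forecast(history, test, predict_one):
    """Rolling one-step-ahead forecast: predict the next value, then
    append the REAL observed value to the history before predicting
    the following point."""
    history = list(history)
    preds = []
    for real in test:
        preds.append(predict_one(history))
        history.append(real)        # feed back the true value
    return preds

# With a persistence predictor, each forecast equals the last real value
preds = single_step_forecast([1, 2, 3], [4, 5, 6], lambda h: h[-1])
```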

Table 3: Five models predicted error based on RMSE for FM broadcasting band.
Table 4: Five models predicted error based on NMSE for FM broadcasting band.
Table 5: Five models predicted error based on MAPE for FM broadcasting band.
Table 6: Five models predicted error based on RMSE for interphone band.
Table 7: Five models predicted error based on NMSE for interphone band.
Table 8: Five models predicted error based on MAPE for interphone band.
Figure 7: Five models predicted diagram for FM broadcasting band.
Figure 8: Five models predicted error diagram for FM broadcasting band.
Figure 9: Five models predicted diagram for interphone band.
Figure 10: Five models predicted error diagram for interphone band.
4.4.2. Multistep Forecast

In multistep prediction, a phase point is selected as input to obtain a predicted value; the predicted value of that point is then added to the historical data to predict the next point, and so on until the predicted values of all points are obtained. The predicted results of the five models are shown in Tables 9, 10, 11, 12, 13, and 14 and Figures 11, 12, 13, and 14.
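The iterated scheme differs from the single-step one only in what is fed back; again, `predict_one` stands in for the trained LS-SVM, and the linear-extrapolation predictor is a toy:

```python
def multistep_forecast(history, n_steps, predict_one):
    """Iterated multistep forecast: each PREDICTED value is fed back
    into the history and used as input for the next step, so per-step
    errors accumulate over the horizon."""
    history = list(history)
    preds = []
    for _ in range(n_steps):
        p = predict_one(history)
        preds.append(p)
        history.append(p)           # feed back the predicted value
    return preds

# A linear-extrapolation predictor keeps extending the trend
preds = multistep_forecast([1.0, 2.0, 3.0], 3, lambda h: 2 * h[-1] - h[-2])
```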

Table 9: Five models predicted error based on RMSE for FM broadcasting band.
Table 10: Five models predicted error based on NMSE for FM broadcasting band.
Table 11: Five models predicted error based on MAPE for FM broadcasting band.
Table 12: Five models predicted error based on RMSE for interphone band.
Table 13: Five models predicted error based on NMSE for interphone band.
Table 14: Five models predicted error based on MAPE for interphone band.
Figure 11: Five models predicted diagram for FM broadcasting band.
Figure 12: Five models predicted error diagram for FM broadcasting band.
Figure 13: Five models predicted diagram for interphone band.
Figure 14: Five models predicted error diagram for interphone band.
4.5. Analysis of Experimental Results

The optimal parameter combinations of the five models for the FM broadcasting band and the interphone band are shown in Tables 1 and 2, respectively. From the experimental results, we find that the prediction accuracy is very sensitive to the parameters $\tau$, $m$, $\gamma$, and $\sigma$; the optimal combinations found by the P-LSSVM model for the two bands are listed in those tables. As seen from the prediction diagrams (Figures 7 to 14), all five models obtain good results for both single-step and multistep prediction; however, the curve predicted by the P-LSSVM model fits the real data best, while the other curves deviate relatively far from them. Each of the five prediction models was run 10 times, and the maximum, minimum, mean, and variance of the errors were computed. As can be seen from the results in Tables 3 to 14, under all three evaluation criteria (RMSE, NMSE, and MAPE) the model proposed in this paper attains the minimum error. This shows that the P-LSSVM model is not only reasonable and correct but also improves prediction accuracy.

Comparing single-step prediction with multistep prediction, the error of multistep prediction is larger, indicating that single-step prediction performs better. The reason is that an error is made at every step, and the accumulation of these errors leads to a decline in overall prediction accuracy.

5. Conclusion

Modeling and prediction of chaotic time series has become a hot spot in chaotic signal processing research. This paper considered two defects of LS-SVM-based models for chaotic time series prediction: on the one hand, some models ignore the overall correlation among the parameters of the prediction model; on the other hand, models that do account for the coupling between parameters often rely on optimization methods with their own limitations. For example, using a genetic algorithm to find the optimal parameters of the prediction model suffers from drawbacks such as falling into local optima and a complicated iterative process. This paper therefore put forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model optimizes the parameters of phase space reconstruction and LS-SVM jointly by using the membrane computing optimization algorithm. We then used the model to forecast the band occupancy rate of the FM broadcasting band and the interphone band. To show its applicability and superiority, the proposed model was compared with traditional similar forecast models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best under three error measures: normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). To address the remaining deficiency in multistep prediction, in the next stage we will further improve the prediction model or study other prediction models.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (61372187), the Sichuan Key Technology Research and Development Program (2012GZ0019, 2013GXZ0155), and the Graduate Innovation Foundation of Xihua University (ycjj2014038).

References

  1. R. Ren, X. J. Wang, and S.-H. Zhu, “Prediction of chaotic time sequence using least squares support vector domain,” Acta Physica Sinica, vol. 55, no. 2, pp. 555–563, 2006.
  2. M. Ardalani-Farsa and S. Zolfaghari, “Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks,” Neurocomputing, vol. 73, no. 13–15, pp. 2540–2553, 2010.
  3. H.-Y. Xing and T.-L. Jin, “Weak signal estimation in chaotic clutter using wavelet analysis and symmetric LS-SVM regression,” Acta Physica Sinica, vol. 59, no. 1, pp. 140–146, 2010.
  4. A. Kazem, E. Sharifi, F. K. Hussain, M. Saberi, and O. K. Hussain, “Support vector regression with chaos-based firefly algorithm for stock market price forecasting,” Applied Soft Computing Journal, vol. 13, no. 2, pp. 947–958, 2013.
  5. W.-Z. Cui, C.-C. Zhu, W.-X. Bao, and J.-H. Liu, “Prediction of the chaotic time series using support vector machines for fuzzy rule-based modeling,” Acta Physica Sinica, vol. 54, no. 7, pp. 3009–3018, 2005.
  6. P. Melin, J. Soto, O. Castillo, and J. Soria, “A new approach for time series prediction using ensembles of ANFIS models,” Expert Systems with Applications, vol. 39, no. 3, pp. 3494–3506, 2012.
  7. P. Samui and D. P. Kothari, “Utilization of a least square support vector machine (LSSVM) for slope stability analysis,” Scientia Iranica, vol. 18, no. 1, pp. 53–58, 2011.
  8. Y. B. Sun, V. Babovic, and E. S. Chan, “Multi-step-ahead model error prediction using time-delay neural networks combined with chaos theory,” Journal of Hydrology, vol. 395, no. 1-2, pp. 109–116, 2010.
  9. V. Babovic, S. A. Sannasiraj, and E. S. Chan, “Error correction of a predictive ocean wave model using local model approximation,” Journal of Marine Systems, vol. 53, no. 1–4, pp. 1–17, 2005.
  10. F.-J. Chang, Y.-M. Chiang, and L.-C. Chang, “Multi-step-ahead neural networks for flood forecasting,” Hydrological Sciences Journal, vol. 52, no. 1, pp. 114–130, 2007.
  11. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
  12. V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
  13. L. J. Cao and F. E. H. Tay, “Support vector machine with adaptive parameters in financial time series forecasting,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1506–1518, 2003.
  14. K. Chen and L. Liu, “Privacy preserving data classification with rotation perturbation,” in Proceedings of the 5th IEEE International Conference on Data Mining (ICDM '05), pp. 589–592, November 2005.
  15. J. W. Cai, S. S. Hu, and H. F. Tao, “Prediction of chaotic time series based on selective support vector machine ensemble,” Acta Physica Sinica, vol. 56, no. 12, pp. 6820–6827, 2007. View at Google Scholar · View at Scopus
  16. J. A. K. Suykens, “Nonlinear modelling and support vector machines,” in Proceedings of the IEEE Instrumentation and Measurement Technology Conference, vol. 1, pp. 287–294, May 2001. View at Scopus
  17. B. Jiang, H. Q. Wang, Y. Fu, X. Li, and G. Guo, “Based on the LS-SVM chaotic prediction of sea clutter,” Progress in Natural Science, vol. 17, no. 3, pp. 415–421, 2007. View at Google Scholar
  18. M. Shen, W.-N. Chen, J. Zhang, H. S.-H. Chung, and O. Kaynak, “Optimal selection of parameters for nonuniform embedding of chaotic time series using ant colony optimization,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 790–802, 2013. View at Publisher · View at Google Scholar · View at Scopus
  19. Y. Q. Luo, J. B. Xia, and H. B. Wang, “Application of chaos-support vector machine regression in traffic prediction,” Computer Science, vol. 36, no. 7, pp. 244–246, 2009. View at Google Scholar
  20. Y. Benkler and H. Nissenbaum, “Commons-based peer production and virtue,” The Journal of Political Philosophy, vol. 14, no. 4, pp. 394–419, 2006. View at Publisher · View at Google Scholar · View at Scopus
  21. X. Y. Wang and M. Han, “Multivariate chaotic time series prediction based on hierarchic reservoirs,” in IEEE International Conference on Systems, Man, and Cybernetics, pp. 14–17, October 2012.
  22. J. A. K. Suykens, J. Vandewalle, and B. de Moor, “Optimal control by least squares support vector machines,” Neural Networks, vol. 14, no. 1, pp. 23–35, 2001. View at Publisher · View at Google Scholar · View at Scopus
  23. H. S. He, J. Lu, L. F. Wu, and X. J. Qiu, “Time delay estimation via non-mutual information among multiple microphones,” Applied Acoustics, vol. 74, no. 8, pp. 1033–1036, 2013. View at Publisher · View at Google Scholar · View at Scopus
  24. H. Ma, X. Li, G. Wang, C. Han, J. Xu, and X. Zhu, “Selection of embedding dimension and delay time in phase space reconstruction,” Journal of Xi'an Jiaotong University, vol. 38, no. 4, pp. 335–338, 2004. View at Google Scholar · View at Scopus
  25. R. Nath, “Modified generalized autocorrelation based estimator for time delays in multipath environment-A tradeoff in estimator performance and number of multipath,” Computers and Electrical Engineering, vol. 37, no. 3, pp. 241–252, 2011. View at Publisher · View at Google Scholar · View at Scopus
  26. Z. Q. Li, H. Zheng, and C. M. Pei, “A modified cao method with delay embedded,” in Proceedings of the 2nd International Conference on Signal Processing Systems (ICSPS '10), vol. 3, pp. V3458–V3460, July 2010. View at Publisher · View at Google Scholar · View at Scopus
  27. S. H. Sun, H. Li, and Z. F. Zhang, “Oil price predicting based on unified solving by phase space reconstruction and parameters,” Computer Engineering and Applications, vol. 49, no. 23, pp. 247–251, 2013. View at Google Scholar
  28. I. M. Carrión, E. A. Arias Antúnez, M. M. A. Castillo, and J. J. M. Canals, “Parallel implementations of the false nearest neighbors method to study the behavior of dynamical models,” Mathematical and Computer Modelling, vol. 52, no. 7-8, pp. 1237–1242, 2010. View at Publisher · View at Google Scholar · View at Scopus
  29. C. L. Wu and K. W. Chau, “Data-driven models for monthly streamflow time series prediction,” Engineering Applications of Artificial Intelligence, vol. 23, no. 8, pp. 1350–1367, 2010. View at Publisher · View at Google Scholar · View at Scopus
  30. C. Wei, M. Chen, C. S. Hui, and Y. S. Chang, “Study of basic method about WSD based on chaos,” in Proceedings of the International Conference on Measuring Technology and Mechatronics Automation (ICMTMA '09), vol. 3, pp. 883–886, April 2009. View at Publisher · View at Google Scholar · View at Scopus
  31. C. P. Liu, M. Y. Fan, G. W. Wang, and S. L. Ma, “Optimizing parameters of support vector machine based on gradient algorithm,” Control and Decision, vol. 23, no. 11, pp. 1291–1296, 2008. View at Google Scholar · View at Scopus
  32. W. Wei, T. Hui, and X.-P. Ma, “Rock burst chaotic prediction on multivariate time series and LSSVR,” in Proceedings of the 25th Chinese Control and Decision Conference (CCDC '13), pp. 1376–1381, May 2013. View at Publisher · View at Google Scholar · View at Scopus
  33. Z. Bo and A. Shi, “LSSVM and hybrid particle swarm optimization for ship motion prediction,” in Proceedings of the International Conference on Intelligent Control and Information Processing (ICICIP '10), pp. 183–186, August 2010.
  34. C. Xiang, Z. Zhou, X. Yu, and L.-F. Zhang, “Study on chaotic time series prediction based on genetic algorithm,” Application Research of Computers, vol. 28, no. 8, 2011. View at Google Scholar
  35. G. Paun, G. Rozenberg, and A. Salomaa, Handbook of Membrane Computing, Oxford University Press, Oxford, UK, 2009.
  36. G. Păun, Membrane Computing: Main Ideas, Basic Results Application, Idea Group Publishing, London, UK, 2004.
  37. R. C. Muniyandi, A. M. Zin, and J. W. Sanders, “Converting differential-equation models of biological systems to membrane computing,” BioSystems, vol. 114, no. 3, pp. 219–226, 2013. View at Publisher · View at Google Scholar · View at Scopus
  38. J. Zhao and N. Wang, “A bio-inspired algorithm based on membrane computing and its application to gasoline blending scheduling,” Computers & Chemical Engineering, vol. 35, no. 2, pp. 272–283, 2011. View at Publisher · View at Google Scholar · View at Scopus
  39. G.-X. Zhang, C.-X. Liu, and H.-N. Rong, “Analyzing radar emitter signals with membrane algorithms,” Mathematical and Computer Modelling, vol. 52, no. 11-12, pp. 1997–2010, 2010. View at Publisher · View at Google Scholar · View at Scopus
  40. D. P. Daniel, P. C. Francisco, and M. A. Gutiérrez-Naranjo, “A parallel algorithm for skeletonizing images by using spiking neural P systems,” Neurocomputing, vol. 115, pp. 81–91, 2013. View at Publisher · View at Google Scholar
  41. N. H. Packard, J. P. Crutchfield, J. D. Farmer, and R. S. Shaw, “Geometry from a time series,” Physical Review Letters, vol. 45, no. 9, pp. 712–716, 1980. View at Publisher · View at Google Scholar · View at Scopus
  42. F. Takens, “Detecting strange attractors in turbulence,” Lecture Notes in Mathematics, vol. 898, pp. 361–381, 1981. View at Google Scholar
  43. J. A. K. Suykens, J. de Brabanter, L. Lukas, and J. Vandewalle, “Weighted least squares support vector machines: robustness and sparce approximation,” Neurocomputing, vol. 48, no. 3, pp. 85–105, 2002. View at Publisher · View at Google Scholar · View at Scopus
  44. A. Riscos-Núñez, “A Framework for Complexity Classes in Membrane Computing,” Electronic Notes in Theoretical Computer Science, vol. 225, pp. 319–328, 2009. View at Publisher · View at Google Scholar · View at Scopus
  45. O. H. Ibarra, “On membrane hierarchy in P systems,” Theoretical Computer Science, vol. 334, no. 1–3, pp. 115–129, 2005. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  46. C. Teuscher, “From membranes to systems: self-configuration and self-replication in membrane systems,” BioSystems, vol. 87, no. 2-3, pp. 101–110, 2007. View at Publisher · View at Google Scholar · View at Scopus
  47. L. Huang, I. H. Suh, and A. Abraham, “Dynamic multi-objective optimization based on membrane computing for control of time-varying unstable plants,” Information Sciences, vol. 181, no. 11, pp. 2370–2391, 2011. View at Publisher · View at Google Scholar · View at Scopus
  48. T. M. Taher, R. B. Bacchus, K. J. Zdunek, and D. A. Roberson, “Long-term spectral occupancy findings in Chicago,” in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN '11), pp. 100–107, May 2011. View at Publisher · View at Google Scholar · View at Scopus
  49. H. Zhou, “Analysis and resolved strategy of complicated electromagnetic environment in battlefield,” Journal of the Academy of Equipment Command & Technology, vol. 18, no. 16, 2007. View at Google Scholar
  50. T. Shao, H. U. Yihua, S. H. Liang et al., “Methods for quantitative evaluation of battlefield electromagnetic environment complexity,” Electronics Optics & Control, vol. 17, no. 1, 2010. View at Google Scholar
  51. Z. Wang and S. Salous, Spectrum occupancy analysis for cognitive radio [Ph.D. dissertation], Durham University, Durham, UK, 2009.
  52. H. Yin and H. Wu, “Study on time series forecasting based on least squares support vector machine,” Computer Simulation, vol. 2, p. 28, 2011. View at Google Scholar