Research Article  Open Access
Wenhua Cui, Jiesheng Wang, Shuxia Li, "KPCA-ESN Soft-Sensor Model of Polymerization Process Optimized by Biogeography-Based Optimization Algorithm", Mathematical Problems in Engineering, vol. 2015, Article ID 493248, 10 pages, 2015. https://doi.org/10.1155/2015/493248
KPCA-ESN Soft-Sensor Model of Polymerization Process Optimized by Biogeography-Based Optimization Algorithm
Abstract
For solving the problem that the conversion rate of vinyl chloride monomer (VCM) is hard to measure online in real time in the polyvinyl chloride (PVC) polymerization production process, a soft-sensor modeling method based on the echo state network (ESN) is put forward. By analyzing the PVC polymerization process, ten secondary variables are selected as input variables of the soft-sensor model, and the kernel principal component analysis (KPCA) method is applied to preprocess the input variables, which reduces the dimensionality of the high-dimensional data. The K-means clustering method is used to divide the data samples into several clusters that serve as inputs of the corresponding submodels. Then, for each submodel, the biogeography-based optimization algorithm (BBOA) is used to optimize the structure parameters of the ESN so as to realize the nonlinear mapping between the input and output variables of the soft-sensor model. Finally, the weighted summation of the outputs of the submodels is taken as the final output. The simulation results show that the proposed soft-sensor model can significantly improve the prediction precision of the conversion rate and conversion velocity in the PVC polymerization process and can satisfy its real-time control requirements.
1. Introduction
Polyvinyl chloride (PVC) is one of the most widely produced plastics in the world. Because PVC has high strength and good stability, it has become a widely used synthetic material. According to the intended purpose, by adding different additives or plasticizers, all kinds of PVC plastic products can be produced, such as plates, door and window profiles, pipe fittings, foam materials, and electrical parts. These products have widespread applications in industry, agriculture, health care, daily necessities, and other fields. As an embodiment of these advantages, PVC production and the improvement of its technology receive more and more attention.
PVC is mainly produced from vinyl chloride monomer (VCM), so the quality of the VCM directly affects the quality of the PVC resin, the production costs, and the economic benefits [1]. With the trend toward large-scale polymerization kettles and the in-depth research on vinyl chloride polymerization, further improving the conversion rate of the polymerization kettle is of great significance for improving productivity and reducing the production cost. The VCM conversion has a definite impact on the molecular weight of the PVC resin, its thermal stability, porosity, residual VCM, plasticizer absorptivity, and processing fluidity. Because of immature detection devices, the complexity of the suspension polymerization reaction, and its strong nonlinearity and strong coupling, the vinyl chloride conversion rate and conversion velocity are hard to acquire in real time in the actual production process, so it is difficult to achieve direct closed-loop control [2].
The echo state network (ESN) is a new type of recurrent neural network [3, 4]. In terms of network structure and learning mechanism, it differs from previous recurrent networks. The ESN has a better nonlinear approximation capability, which gives it good performance in nonlinear prediction tasks. A wavelet decomposition echo state network predictive model was proposed, which adopts the wavelet decomposition method to match a different ESN model to each scale according to its properties; the least squares method then realizes the optimal integration of the predicted components through weight coefficients so as to reach an accurate prediction at each scale and after integration [5]. An ESN was adopted to predict the conditions of a flue gas turbine, and the singular value decomposition method was used to modify the linear regression training algorithm of the ESN [6]. The autocorrelation function method was used to construct the input sequence of an ESN for a time series forecasting method [7], which was applied to mobile communication traffic prediction. An ESN prediction model based on the principal component analysis method was established in order to reduce the training time and improve the forecasting speed [8]. However, calculating the ESN output weights with the standard linear regression algorithm may easily lead to pathological solutions in practical problems; moreover, the resulting output weights often have large amplitudes. In order to overcome the ill-conditioned matrix problems of the traditional echo state network model, evolutionary algorithms, such as genetic algorithms (GAs) [9, 10], the particle swarm optimization (PSO) algorithm [11, 12], memetic algorithms (MAs) [13–15], and ant colony algorithms (ACAs) [16, 17], have been applied to train the output weights of the ESN.
In this paper, an echo state network (ESN) soft-sensor model for the VCM conversion rate and conversion velocity in the PVC production process based on the biogeography-based optimization algorithm is put forward, and the simulation results verify the effectiveness of the proposed method. The paper is organized as follows. In Section 2, the technique flowchart of the PVC polymerization process is introduced. The data dimension reduction based on the KPCA method is presented in Section 3. In Section 4, the ESN soft-sensor model based on the BBO algorithm is introduced. The simulation experiments and the analysis of their results are described in detail in Section 5. Finally, the conclusions are given in the last part.
2. PVC Polymerization Process
In the PVC polymerization process, various raw materials and additives are added to the reaction kettle, where they are fully and evenly dispersed under the mixing action. Then suitable amounts of initiators are added to the kettle and the reaction starts. Cooling water is constantly poured into the jacket and baffles of the reaction kettle to remove the reaction heat. The reaction is terminated and the final products are obtained when the conversion ratio of the vinyl chloride monomer (VCM) reaches a certain value and a proper pressure drop appears. Finally, after the reaction is completed and the VCM contained in the slurry is separated by the stripping technique, the remaining slurry is fed into the drying process for dewatering and drying. A typical PVC polymerization kettle technological process is shown in Figure 1 [2].
According to the characteristics of the polymerization process, 10 process variables related to the VCM conversion rate and conversion velocity are selected as the secondary variables of the soft-sensor model. They are, respectively, the kettle temperature (TICP101), kettle pressure (PICP102), baffle water flow (FICP101), jacket water flow (FICP102), injection water flow (FICP104), seal water flow (FICP105), inlet temperature of the cooling water (shared by the jacket water and baffle water, TIP107), outlet temperature of the jacket water (TIP109), outlet temperature of the baffle water (TIP110), and inlet temperature of the injection water and seal water (namely, the outlet temperature of the cold water tank, TICWA01).
3. Data Dimension Reduction Based on KPCA Method
According to the above process characteristics, 10 process variables are determined as auxiliary variables. But for a neural network soft-sensor model, an input vector of too many dimensions causes the curse of dimensionality, which makes the network topology tedious and the training complicated. In this paper, the kernel principal component analysis (KPCA) method is adopted to reduce the dimensionality of the input vector. KPCA, which introduces the concept of the kernel function into principal component analysis (PCA), is a nonlinear extension of PCA and can handle nonlinear problems better than PCA [18, 19]. The data set composed of the model's input variables is subjected to kernel principal component analysis, and the analysis results are shown in Table 1. The variance contribution rate of the first five principal components reaches more than 95%. The original primary variables processed by KPCA are selected as inputs of the neural network model, which not only retains the characteristic information of the original variables but also simplifies the structure of the neural network.
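For illustration, the sketch below shows a minimal pure-Python version of this preprocessing step. It is not the authors' implementation: the RBF kernel width, the random sample data, and the use of power iteration for the leading eigenvalue are all assumptions made for the example. It builds an RBF kernel matrix over 10-variable samples, centers it in feature space, and estimates the variance contribution rate of the first kernel principal component.

```python
import math
import random

def rbf_kernel(X, gamma=0.5):
    # K[i][j] = exp(-gamma * ||x_i - x_j||^2)
    n = len(X)
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
             for j in range(n)] for i in range(n)]

def center_kernel(K):
    # feature-space centering: Kc = K - 1K - K1 + 1K1
    n = len(K)
    row = [sum(r) / n for r in K]
    total = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + total for j in range(n)]
            for i in range(n)]

def leading_eigenvalue(K, iters=200, seed=1):
    # power iteration on the symmetric positive semidefinite centered kernel
    rng = random.Random(seed)
    n = len(K)
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

random.seed(0)
X = [[random.random() for _ in range(10)] for _ in range(30)]  # 30 samples, 10 variables
Kc = center_kernel(rbf_kernel(X))
trace = sum(Kc[i][i] for i in range(len(Kc)))   # trace = sum of all eigenvalues
contribution = leading_eigenvalue(Kc) / trace   # variance contribution rate of PC1
```

In practice one would keep as many components as needed to reach the chosen cumulative contribution threshold (95% in this paper).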

4. ESN Soft-Sensor Model Based on BBO Algorithm
4.1. Structure of Soft-Sensor Model
Soft-sensor technology is mainly composed of four parts: data acquisition and processing, choice of auxiliary variables, soft-sensor modeling, and online correction. The framework of the proposed clustering-based multiple-model soft-sensor modeling is shown in Figure 2.
The dimension of the process parameter space is reduced by the KPCA method, and five input variables (kettle pressure, baffle water flow, jacket water flow, outlet temperature of baffle water, and outlet temperature of jacket water) are selected. The K-means clustering method is adopted to divide the sample data into several classes, and each class serves as the input of a sub-soft-sensor model; thus, the multiple-model soft-sensor modeling method based on K-means clustering is established. The conversion rate and conversion velocity of the VCM are the output variables. The BBO algorithm is used to optimize the ESN parameters to realize the nonlinear relationship between the inputs and the outputs; therefore, the soft-sensor model of the VCM conversion rate is set up. Experiments prove that this method can effectively improve the prediction accuracy of the models.
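The final combination step is a weighted summation of the sub-model outputs. A minimal sketch follows; the weight values are purely illustrative, since the text does not specify how the combination weights are derived.

```python
def combine_submodels(predictions, weights):
    """Weighted summation of sub-model outputs giving the final soft-sensor output."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights should sum to 1")
    return sum(w * p for w, p in zip(weights, predictions))

# e.g., three sub-models predicting the VCM conversion rate (illustrative values)
final = combine_submodels([0.82, 0.79, 0.85], [0.5, 0.3, 0.2])
```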
4.2. K-Means Clustering Method
The K-means clustering method is a widely used clustering algorithm. Its main idea is that the data set is divided into K different clusters through an iterative process, making the criterion function that evaluates the clustering performance reach its optimum. Objects within a cluster have a high similarity, and objects in different clusters have a low similarity. The general steps of the algorithm are described as follows.
Step 1. For the sample data set, randomly select K data samples as the initial cluster centers.
Step 2. Calculate the distance between each remaining sample and each cluster center and assign the sample to the nearest cluster. The distance between two samples expresses their similarity: the smaller the distance, the more similar the samples and the smaller their degree of difference. The Euclidean distance is used:

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{m} (x_{ik} - x_{jk})^2}$$

where $x_i$ and $x_j$ are two samples and $m$ is the sample dimension.
Step 3. Through Step 2, update and obtain the new clusters. Calculate the square error criterion until the overall average error function is satisfied:

$$E = \sum_{i=1}^{K} \sum_{x \in C_i} \| x - c_i \|^2$$

where $c_i$ is the clustering center of cluster $C_i$ and $x$ is a sample assigned to it.
Step 4. The average value of all objects in each cluster is chosen as the new cluster center. Repeat Steps 2 and 3 and stop the iteration when the centers no longer change.
According to the chosen number of clusters and the cluster centers, the data objects are divided into different clusters. For the different clusters of data, submodels are established, respectively, as shown in Figure 3. Through the weighted summation of the predicted values of the submodels, the final forecast output is obtained.
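The steps above can be sketched in a few dozen lines of pure Python. This is a generic K-means implementation under the stated stopping rule (centers no longer change), not the authors' code; the toy 2-D data set is an assumption for demonstration.

```python
import math
import random

def euclidean(a, b):
    # Step 2's distance measure
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def kmeans(data, k, max_iters=100, seed=0):
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(data, k)]    # Step 1: random initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(max_iters):
        clusters = [[] for _ in range(k)]
        for x in data:                                   # Step 2: assign to nearest center
            j = min(range(k), key=lambda c: euclidean(x, centers[c]))
            clusters[j].append(x)
        new_centers = [                                  # Step 4: means become new centers
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:                       # stop when centers are unchanged
            break
        centers = new_centers
    return centers, clusters

# two well-separated groups of 2-D samples
data = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.1, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(data, 2)
```

Each returned cluster would then feed one sub-soft-sensor model, as in Figure 3.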
4.3. Echo State Network
In recent years, neural networks have been widely applied to nonlinear systems. The most common types are the feedforward neural network and the recurrent neural network. Most feedforward neural networks are static and lack the ability to process information dynamically. The recurrent neural network adds a dynamic information-processing mechanism on top of the feedforward structure, which gives the whole network dynamic characteristics and lets it approximate the target values better. Common recurrent neural networks are the Elman network and the Hopfield network. The echo state network (ESN) is a new type of recurrent neural network, whose typical structure is shown in Figure 4.
It can be seen from Figure 4 that the structure of the ESN is similar to that of most neural networks: it is composed of three parts, the input layer, the hidden layer, and the output layer. Unlike other neural networks, the hidden layer of the ESN is a large dynamic reservoir (DR), and the number of its neurons is much larger than in other neural networks. The DR can continuously store a large number of teacher signals and has a short-term memory ability. Even when there is no input signal after training, the ESN can still predict for a short period of time; this ability allows the network to approximate the learned system [20].
Suppose the input layer contains $K$ neurons, the DR contains $N$ neurons, and the output layer contains $L$ neurons. The input sample of the network is a $K$-dimensional vector $u(n)$, the state vector of the DR is the $N$-dimensional vector $x(n)$, and the output sample is the $L$-dimensional vector $y(n)$. Between the input neurons and the DR there is a link weight matrix $W_{in}$, whose dimension is $N \times K$. The neurons of the DR are connected in a sparse network (including self-connected neurons); the matrix $W$ expresses the link weights between DR neurons, and it usually keeps a sparse connectivity of 1%~5%, so $W$ is an $N \times N$ sparse matrix whose element $w_{ij}$ expresses the link weight between the $i$th and the $j$th neuron in the DR. Between the DR and the output layer there is an output weight matrix $W_{out}$, whose dimension is $L \times (K + N)$. In addition, between the output layer and the DR there is a feedback connection weight matrix $W_{back}$, whose dimension is $N \times L$.
Suppose now we have samples $(u(n), y(n))$ $(n = 1, \ldots, T)$, where $u(n)$ and $y(n)$ are, respectively, the input and output sample, with dimensions $K$ and $L$. The basic equations of the echo state network can be represented as follows:

$$x(n+1) = f\big(W_{in}\,u(n+1) + W\,x(n) + W_{back}\,y(n)\big)$$
$$y(n+1) = f_{out}\big(W_{out}\,[u(n+1); x(n+1)]\big)$$

where $f$ is the activation function of the reservoir neurons, generally a sigmoid-type function, and $f_{out}$ is the output activation function. It can be seen from the state equation that $x(n+1)$ is associated with $u(n+1)$, $x(n)$, and $y(n)$. When $n = 0$, there is no definition for $y(0)$, so $y(0) = 0$ is used as the initial output sample. In the training process, $W_{in}$, $W$, and $W_{back}$ always remain the same, and $W_{out}$ is not involved in the network training process, so its value is calculated at the end of the network training. The smaller the error between the network forecast output and the actual output of the test samples, the better the prediction performance of the network.
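The state update can be sketched directly from these equations. The code below is a simplified illustration, not the authors' implementation: the weight ranges, the Gershgorin-style scaling used to keep the reservoir contractive, the toy input sequence, and the reservoir size are all assumptions, and the feedback signal is held at zero for brevity.

```python
import math
import random

def make_esn(K, N, L, spectral_scale=0.9, sparsity=0.05, seed=1):
    """Randomly initialize W_in (N x K), sparse W (N x N), and W_back (N x L)."""
    rng = random.Random(seed)
    W_in = [[rng.uniform(-0.5, 0.5) for _ in range(K)] for _ in range(N)]
    W = [[rng.uniform(-0.5, 0.5) if rng.random() < sparsity else 0.0
          for _ in range(N)] for _ in range(N)]
    # crude echo-state scaling: bound the spectral radius via Gershgorin row sums
    bound = max(sum(abs(w) for w in row) for row in W) or 1.0
    W = [[w * spectral_scale / bound for w in row] for row in W]
    W_back = [[rng.uniform(-0.5, 0.5) for _ in range(L)] for _ in range(N)]
    return W_in, W, W_back

def update_state(W_in, W, W_back, u, x, y):
    """x(n+1) = tanh(W_in u(n+1) + W x(n) + W_back y(n))."""
    return [math.tanh(sum(wi * ui for wi, ui in zip(W_in[i], u))
                      + sum(wj * xj for wj, xj in zip(W[i], x))
                      + sum(wb * yb for wb, yb in zip(W_back[i], y)))
            for i in range(len(W))]

W_in, W, W_back = make_esn(K=5, N=20, L=2)
x = [0.0] * 20
y = [0.0, 0.0]                       # y(0) = 0: no output defined before training
for n in range(50):                  # drive the reservoir with a toy input sequence
    u = [math.sin(0.1 * n * (k + 1)) for k in range(5)]
    x = update_state(W_in, W, W_back, u, x, y)
```

Only the readout $W_{out}$ is trained afterwards, which is what makes the ESN cheap to fit and what the BBO algorithm optimizes in Section 4.5.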
4.4. BBO Algorithm
In recent years, inspired by nature, many scholars have proposed optimization algorithms based on swarm intelligence to solve complex optimization problems, such as the genetic algorithm, the simulated annealing algorithm, and the artificial fish swarm algorithm. Although these intelligent algorithms are relatively recent, their good ability to solve complex optimization problems has made them widely applied in many actual production processes. The biogeography-based optimization algorithm was put forward by the American scholar Simon, inspired by biogeography [21].
Biogeography is a natural science that studies species distribution, migration, extinction, and so forth. In nature, the distribution of biological populations differs from place to place. The place where a population lives is called a "habitat," and every habitat's living environment differs in features such as rainfall, temperature, humidity, and geology; these features are the suitability index variables (SIVs). The habitat suitability index (HSI) is adopted to describe whether a habitat's living environment is good or bad: a habitat with a high HSI is suitable for the survival of species, while one with a low HSI is not. The corresponding relation between the biogeography mathematical model and the BBO algorithm variables is listed in Table 2.

Every habitat's space is limited, so the number of species it can accommodate is also limited. When a high-HSI habitat accommodates more than its largest number of species and its resources are no longer sufficient, competition between species intensifies, lowering the HSI. At this time, some species will choose to leave the habitat and migrate to a place whose resources are relatively rich, thus improving lower-HSI habitats. Because the species community of a high-HSI habitat is diverse and relatively stable, such a habitat has a low immigration rate and a high emigration rate; conversely, low-HSI habitats have a high immigration rate and a low emigration rate.
4.4.1. Migration Operation
The BBO algorithm adopts the migration operation to share information between habitats, which speeds up reaching the global optimum. High-HSI habitats, because of their species diversity and large species numbers, have higher emigration rates, so the characteristic information of better habitats is shared with lower-HSI habitats. This does not mean that the HSI of the better habitat decreases; rather, its features are copied to the habitats with lower HSI. A species migration model of a single island is shown in Figure 5; this model is adopted to describe the basic principle of the BBO algorithm.
In Figure 5, $S_{max}$ is the largest number of species. For a habitat with $k$ species, the immigration rate is $\lambda_k$ and the emigration rate is $\mu_k$; $I$ is the biggest immigration rate and $E$ is the biggest emigration rate. When the number of species of an island is zero ($k = 0$), $\lambda_k = I$. As the number of species in the island increases, the habitat gradually tends toward its saturation value; therefore, the immigration rate decreases linearly. When the species number reaches the maximum $S_{max}$, $\lambda_k = 0$; that is to say, no species move in. For the emigration rate, when $k = 0$, no species move out; namely, $\mu_k = 0$. With the increase of the species number, in order to find more suitable habitats, the emigration rate increases; when the species number reaches the maximum, $\mu_k = E$. When $\lambda_k = \mu_k$, the species number of the habitat reaches an equilibrium state. According to Figure 5, the immigration rate and the emigration rate can be calculated as follows:

$$\lambda_k = I\left(1 - \frac{k}{S_{max}}\right), \qquad \mu_k = E\,\frac{k}{S_{max}}$$
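These linear migration rates can be computed directly; a small sketch (variable names chosen to mirror the standard BBO notation):

```python
def migration_rates(k, s_max, I=1.0, E=1.0):
    """Linear BBO migration model for a habitat with k species:
    immigration falls from I to 0, emigration rises from 0 to E."""
    lam = I * (1.0 - k / s_max)   # immigration rate
    mu = E * k / s_max            # emigration rate
    return lam, mu
```

With I = E, the two lines cross at k = s_max / 2, the equilibrium species count.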
4.4.2. Mutation Operation
As species migrate between habitats, the number of species in each island is constantly changing. Suppose the probability that a habitat contains exactly $s$ species is $P_s$; the change of this probability from time $t$ to $t + \Delta t$ [21] is described as follows:

$$P_s(t + \Delta t) = P_s(t)\,(1 - \lambda_s \Delta t - \mu_s \Delta t) + P_{s-1}(t)\,\lambda_{s-1} \Delta t + P_{s+1}(t)\,\mu_{s+1} \Delta t$$

where $\lambda_s$ and $\mu_s$ express the immigration rate and emigration rate when the habitat contains $s$ species.
Assuming that $\Delta t$ is small enough, the rate of change of the probability $P_s$ is given by the following formula:

$$\dot{P}_s = \begin{cases} -(\lambda_s + \mu_s)P_s + \mu_{s+1}P_{s+1}, & s = 0 \\ -(\lambda_s + \mu_s)P_s + \lambda_{s-1}P_{s-1} + \mu_{s+1}P_{s+1}, & 1 \le s \le S_{max} - 1 \\ -(\lambda_s + \mu_s)P_s + \lambda_{s-1}P_{s-1}, & s = S_{max} \end{cases}$$
The above formula can be abbreviated to $\dot{P} = AP$. When the largest immigration rate equals the largest emigration rate, the steady-state probability function is symmetric about the equilibrium point. Habitats with very large or very small species counts both have a low steady-state probability, while species counts near the equilibrium point are relatively stable and have a higher probability of existence. On this basis, the mutation rate is designed as the following equation:

$$m(s) = m_{max}\left(1 - \frac{P_s}{P_{max}}\right)$$

where $m_{max}$ is the biggest mutation rate, $P_s$ is the probability of the habitat accommodating $s$ species, and $P_{max}$ is the largest of these probabilities.
For the mutation operation, whether a habitat needs to mutate must first be determined: if a random number is less than the mutation probability $m(s)$, the habitat mutates, and a randomly generated vector replaces the original SIV vector. In nature, incidents such as volcanic eruptions, tsunamis, and disease are often unavoidable; their occurrence affects the number of species, makes the ecological environment unstable, and reduces the habitat suitability index. If islands with a low suitability index are mutated, the chance of obtaining a better solution increases; if islands with a higher suitability index are mutated, a better solution may not be obtained. Therefore, islands with a high suitability index are retained, and the mutation is applied to islands with a low suitability index.
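The mutation rule described above can be sketched as follows. The replacement range for the random SIV vector is an assumption for the example; in practice it would match the search bounds of the problem.

```python
import random

def mutation_rate(P_s, P_max, m_max=0.05):
    # m(s) = m_max * (1 - P_s / P_max): improbable species counts mutate more
    return m_max * (1.0 - P_s / P_max)

def maybe_mutate(siv, P_s, P_max, rng, low=-1.0, high=1.0, m_max=0.05):
    # replace the habitat's SIV vector with a random one when a uniform
    # draw falls below the mutation probability m(s)
    if rng.random() < mutation_rate(P_s, P_max, m_max):
        return [rng.uniform(low, high) for _ in siv]
    return siv
```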
4.5. Algorithm Procedure
The main parameters of the ESN are the input weight matrix, the reservoir weight matrix, the output feedback weight matrix, and the output weight matrix, so optimizing the ESN is equivalent to optimizing these four matrices. In the network learning process, the input, reservoir, and feedback weights always remain the same; the output weight matrix is not involved in the training dynamics, and its value is calculated after the training ends. Therefore, in this paper, a habitat of the biogeography-based optimization algorithm corresponds to the output connection weights of the ESN. Through the biogeography-based optimization algorithm, the output weights of the ESN are optimized to achieve the desired predicted values. The flowchart of the ESN prediction model optimized by the BBO algorithm is shown in Figure 6.
Step 1 (initialize parameters). Initialize the following algorithm parameters: the largest species number per island, the number of islands, the largest emigration and immigration rates, the maximum mutation rate, and the number of iterations. Initialize a group of islands; each island, namely, each habitat, represents an individual, which is a candidate solution of the problem.
Step 2 (calculate the fitness value). Use the suitability index HSI of an island as its fitness value and calculate the fitness value of each island. Judge whether the termination condition is met. If it is satisfied, output the optimal solution; otherwise, continue to Step 3.
Step 3. According to the fitness values, arrange the individuals in descending order and store the individual with the highest HSI. Calculate the species number corresponding to each island, the emigration rate, the immigration rate, and the mutation probability. Record the optimal individual found so far.
Step 4 (migration operation). According to the emigration rate and the immigration rate, judge whether each island needs to perform the migration operation. If so, perform the migration, generate new individuals, and recalculate the fitness value of each island. Compare the result with the stored optimal individual and retain the better one.
Step 5 (mutation operation). Perform a mutation operation on the islands with lower HSI and recalculate the fitness value of each island. Record the best individual now, compare it with the stored optimal individual, and keep the better one.
Step 6. Check for identical individuals and replace duplicates with random vectors. Reorder all individuals and keep the individual with the highest HSI. Record the best individual at this time. If the terminating condition is satisfied, exit the iteration; otherwise, repeat from Step 3.
Step 7 (End). Return the vector corresponding to the highest HSI individual.
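Steps 1 to 7 can be condensed into a compact loop in which each habitat is a candidate output-weight vector and the prediction error plays the role of an inverse HSI. The sketch below is a simplified reading of the procedure, not the authors' implementation: the rank-based migration rates, the mutation of only the worse half of the population, and the sphere toy cost (standing in for the ESN training error) are all assumptions for illustration.

```python
import random

def bbo_minimize(cost, dim, n_islands=20, iters=100, m_max=0.05, seed=0):
    """Minimize `cost` over R^dim with BBO-style migration, mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_islands)]
    best = min(pop, key=cost)[:]
    for _ in range(iters):
        pop.sort(key=cost)                        # rank islands: low cost = high HSI
        lam = [(i + 1) / n_islands for i in range(n_islands)]  # worse rank immigrates more
        mu = [1.0 - l for l in lam]                            # better rank emigrates more
        new_pop = []
        for i, habitat in enumerate(pop):
            h = habitat[:]
            for d in range(dim):
                if rng.random() < lam[i]:         # immigrate this SIV
                    j = rng.choices(range(n_islands), weights=mu)[0]
                    h[d] = pop[j][d]              # copy the feature from an emigrating island
            if i >= n_islands // 2 and rng.random() < m_max:
                h = [rng.uniform(-1.0, 1.0) for _ in range(dim)]  # mutate low-HSI islands
            new_pop.append(h)
        pop = new_pop
        cand = min(pop, key=cost)
        if cost(cand) < cost(best):
            best = cand[:]
        pop[-1] = best[:]                         # elitism: keep the best individual
    return best

# toy stand-in for the ESN output-weight fitness (a sum of squared errors)
sphere = lambda w: sum(x * x for x in w)
w_best = bbo_minimize(sphere, dim=3)
```

In the actual model, `cost` would drive the reservoir with the training inputs, apply the candidate output weights, and return the prediction error on the training set.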
5. Simulation Results
In this paper, the polymerization process of a chemical factory with a 40000 tons/year polyvinyl chloride (PVC) production device is taken as the background. Its technology was introduced from the American B·F·G company; it takes vinyl chloride monomer (VCM) as the raw material and uses suspension polymerization to produce PVC resin. A soft-sensor model of the VCM conversion rate and velocity in the PVC production process based on the BBOA-ESN algorithm is put forward. After KPCA dimension reduction, the number of input variables is 5 and the output dimension is 2. In addition, the reservoir size is set to 100, the sparse connection rate of the reservoir weight matrix is 5%, and the output unit adopts a linear activation function; the reservoir activation function and the initial values of the remaining ESN parameters are fixed before training.
The initialized parameters of the adopted BBO algorithm are as follows: the habitat size, the largest species number, the largest immigration rate, the largest emigration rate, and the largest mutation probability are set before the run, and the maximum number of iterations is 200.
Before setting up the soft-sensor model of the VCM conversion rate and velocity in the PVC polymerization process, several performance indicators used to measure the prediction models are defined in Table 3 in terms of the predicted values and the actual values.
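Table 3's exact definitions are not reproduced in this text; the functions below show common choices for such indicators (an assumption, not necessarily the paper's exact formulas), each comparing predicted values against actual values.

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```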

The production historical data of the PVC polymerization process were collected, and 1600 sets of uniform and representative historical data from 2 kettles were chosen. After data preprocessing, the data are divided into two parts: the first 1350 samples are the training data, and the remaining 250 samples are used to validate the performance of the soft-sensor models. The simulation results are shown in Figures 7–10. Figure 7 shows the comparison curves of the VCM conversion velocity predicted, respectively, by the ESN, BBO-ESN, and BBO-MESN soft-sensor models. Figure 8 shows the prediction error curves of the VCM conversion velocity. Figure 9 shows the comparison curves of the VCM conversion rate predicted, respectively, by the ESN, BBO-ESN, and BBO-MESN soft-sensor models. Figure 10 shows the prediction error curves of the VCM conversion rate.
The comparison of model performances is shown in Table 4. From the simulation results, the prediction accuracy of the BBO-MESN soft-sensor model proposed in this paper is higher than that of the ESN and BBO-ESN soft-sensor models. Its application to predicting the VCM conversion rate and velocity in the PVC polymerization process has great significance for improving the capacity of the equipment and reducing the production cost.

6. Conclusions
The ESN has a good capability of nonlinear approximation, and the biogeography-based optimization algorithm (BBOA) is simple and easy to implement, can obtain the global extremum, and avoids falling into local extrema. On this basis, a BBO-MESN soft-sensor model is proposed to predict the VCM conversion rate and conversion velocity, in which the BBOA is used to optimize the output weights of the ESN. The simulation results show that the proposed BBO-MESN soft-sensor model has higher prediction accuracy.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the Program for China Postdoctoral Science Foundation (Grant no. 20110491510), the Program for Liaoning Excellent Talents in University (Grant no. LR2014008), the Project by Liaoning Provincial Natural Science Foundation of China (Grant no. 2014020177), and the Program for Research Special Foundation of University of Science and Technology of Liaoning (Grant no. 2011ZX10).
References
S. Zhou, G. Ji, Z. Yang, and W. Chen, "Hybrid intelligent control scheme of a polymerization kettle for ACR production," Knowledge-Based Systems, vol. 24, no. 7, pp. 1037–1047, 2011.
S.-Z. Gao, J.-S. Wang, and N. Zhao, "Fault diagnosis method of polymerization kettle equipment based on rough sets and BP neural network," Mathematical Problems in Engineering, vol. 2013, Article ID 768018, 12 pages, 2013.
H. Jaeger, "Adaptive nonlinear system identification with echo state networks," in Advances in Neural Information Processing Systems, S. Thrun and K. Obermayer, Eds., vol. 15, pp. 593–600, MIT Press, Cambridge, Mass, USA, 2002.
H. Jaeger, "Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the 'echo state network' approach," GMD Report 159, German National Research Center for Information Technology, 2002.
Z. Q. Wang and Z. G. Sun, "Method for prediction of multi-scale time series with WDESN," Journal of Electronic Measurement and Instrument, vol. 24, no. 10, pp. 947–952, 2010.
S. H. Wang, T. Chen, X. L. Xu et al., "Condition trend prediction based on improved echo state network for flue gas turbine," Journal of Beijing Information Science and Technology University, vol. 4, pp. 18–20, 2010.
Y. Peng, J.-M. Wang, and X.-Y. Peng, "Researches on time series prediction with echo state networks," Acta Electronica Sinica, vol. 38, no. 2, pp. 148–154, 2010.
Y. Guo, J. Sun, L. Fu, and Z. Zhai, "A new and better prediction model for chaotic time series based on ESN and PCA," Journal of Northwestern Polytechnical University, vol. 28, no. 6, pp. 946–951, 2010.
D. M. Xu, J. Lan, and J. C. Principe, "Direct adaptive control: an echo state network and genetic algorithm approach," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 3, pp. 1483–1486, August 2005.
G. Acampora, V. Loia, S. Salerno, and A. Vitiello, "A hybrid evolutionary approach for solving the ontology alignment problem," International Journal of Intelligent Systems, vol. 27, no. 3, pp. 189–216, 2012.
Q. Ge and C. J. Wei, "Approach for optimizing echo state network training based on PSO," Computer Engineering and Design, vol. 8, pp. 1947–1949, 2009.
Q. Song and Z. Feng, "Stable trajectory generator—echo state network trained by particle swarm optimization," in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '09), pp. 21–26, December 2009.
G. Acampora, J. M. Cadenas, V. Loia, and E. M. Ballester, "Achieving memetic adaptability by means of agent-based machine learning," IEEE Transactions on Industrial Informatics, vol. 7, no. 4, pp. 557–569, 2011.
G. Acampora, J. M. Cadenas, V. Loia, and E. Muñoz Ballester, "A multi-agent memetic system for human-based knowledge selection," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 41, no. 5, pp. 946–960, 2011.
Z. Zhou, Y. S. Ong, M. H. Lim, and B. S. Lee, "Memetic algorithm using multi-surrogates for computationally expensive optimization problems," Soft Computing, vol. 11, no. 10, pp. 957–971, 2007.
K. K. Lim, Y.-S. Ong, M. H. Lim, X. Chen, and A. Agarwal, "Hybrid ant colony algorithms for path planning in sparse graphs," Soft Computing, vol. 12, no. 10, pp. 981–994, 2008.
Y.-S. Ong, M.-H. Lim, F. Neri, and H. Ishibuchi, "Special issue on emerging trends in soft computing: memetic algorithms," Soft Computing, vol. 13, no. 8-9, pp. 739–740, 2009.
L. J. Cao, K. S. Chua, W. K. Chong, H. P. Lee, and Q. M. Gu, "A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine," Neurocomputing, vol. 55, no. 1-2, pp. 321–336, 2003.
J.-M. Lee, C. Yoo, S. W. Choi, P. A. Vanrolleghem, and I.-B. Lee, "Nonlinear process monitoring using kernel principal component analysis," Chemical Engineering Science, vol. 59, no. 1, pp. 223–234, 2004.
X. Lin, Z. Yang, and Y. Song, "Short-term stock price prediction based on echo state networks," Expert Systems with Applications, vol. 36, no. 3, pp. 7313–7317, 2009.
C. Gallicchio and A. Micheli, "Architectural and Markovian factors of echo state networks," Neural Networks, vol. 24, no. 5, pp. 440–456, 2011.
Copyright
Copyright © 2015 Wenhua Cui et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.