Research Article  Open Access
Forecasting Uranium Resource Price Prediction by Extreme Learning Machine with Empirical Mode Decomposition and Phase Space Reconstruction
Abstract
A hybrid forecasting approach combining empirical mode decomposition (EMD), phase space reconstruction (PSR), and extreme learning machine (ELM) for international uranium resource prices is proposed. In the first stage, the original uranium resource price series is decomposed into a finite number of independent intrinsic mode functions (IMFs) with different frequencies. In the second stage, the IMFs are composed into three subseries based on the fine-to-coarse reconstruction rule. In the third stage, based on phase space reconstruction, different ELM models are used to model and forecast the three subseries, respectively, according to their intrinsic characteristic time scales. Finally, in the fourth stage, these forecasting results are combined to produce the ultimate forecasting result. Experimental results from real uranium resource price data demonstrate that the proposed hybrid forecasting method outperforms the RBF neural network (RBFNN) and the single ELM in terms of RMSE, MAE, and DS.
1. Introduction
Uranium resource products are widely used in the economy, the military, social life, and other areas, and have had a revolutionary effect on the real world. Uranium is both the material basis for the development of nuclear energy and a strategic resource. Therefore, more accurate forecasts of international uranium resource prices play an increasingly important role in the planning, development, and utilization of nuclear energy.
Forecasting uranium resource prices is one of the most important and challenging tasks due to their inherent nonlinearity and nonstationarity. In the past decades, price prediction has attracted increasing attention from academic researchers. The forecasting approaches used in the literature can be classified into two categories: statistical models and artificial intelligence models [1, 2]. However, statistical models cannot effectively capture nonlinear patterns hidden in price time series, because they are developed under the assumption that the series being forecasted is linear and stationary [3]. To overcome this limitation, a good deal of nonlinear models have been proposed, among which the artificial neural network (ANN) has attracted growing interest from researchers due to its excellent nonlinear modeling capability [4–8]. Many studies conclude that the ANN model outperforms various conventional statistical models. However, ANN suffers from local minimum traps and the difficulty of determining the hidden layer size and learning rate [9]. A new learning algorithm for the single-hidden-layer feedforward neural network (SLFN), called the extreme learning machine (ELM), has been proposed recently and overcomes these disadvantages [10, 11]. In the learning process of ELM, the input weights and hidden biases are randomly chosen, and the output weights are analytically determined by using the Moore-Penrose generalized inverse. ELM can learn much faster, and with higher generalization performance, than traditional gradient-based learning algorithms, and it avoids the problems of stopping criteria, learning rate, learning epochs, and local minima [10–13]. In recent years, ELM has attracted a lot of attention and has become an important method in nonlinear modeling [11–13].
When intelligent prediction models are built directly on the original values, it is difficult to obtain satisfactory forecasts due to the high-frequency, nonstationary, and chaotic properties of uranium resource price data. Hence, to further improve prediction performance, recent research on modeling time series with complex nonlinearity, dynamic variation, and high irregularity first applies information extraction techniques to extract the features hidden in the data and then uses these extracted characteristics to construct the forecasting model [14–18]. That is to say, by means of suitable feature extraction or signal processing methods, useful information that cannot be observed directly in the original data can be revealed in the extracted features, and thereby a forecasting model with better prediction precision can be developed.
Empirical mode decomposition (EMD), based on the Hilbert-Huang transform (HHT), is very suitable for decomposing nonlinear and nonstationary time series, as it adaptively represents the local characteristics of the given signal [19, 20]. By using EMD, any complicated signal can be decomposed into a finite, and often small, number of intrinsic mode functions (IMFs), which have simpler frequency components and stronger correlations and are thus easier and more accurate to forecast [8]. Recently, EMD has been widely used in many fields, such as the analysis of atmospheric time series [21], river water turbidity forecasting [22], crude oil price prediction [23], short-term wind power prediction, and so forth [8, 13, 14, 16, 24].
In this study, we propose a hybrid uranium resource price forecasting model integrating EMD, phase space reconstruction (PSR), and extreme learning machine (ELM). Firstly, the original uranium resource price series is decomposed into a finite number of independent intrinsic mode functions (IMFs) with different frequencies. Secondly, the IMFs are composed into three subseries based on the fine-to-coarse reconstruction rule. Then, based on phase space reconstruction, different ELM models are used to model and forecast the three subseries, respectively, according to their intrinsic characteristic time scales. Finally, these forecasting results are combined to produce the ultimate forecasting result. Experimental results from real uranium resource price data demonstrate that the proposed hybrid forecasting method outperforms the RBF neural network (RBFNN) and the single ELM in terms of RMSE, MAE, and DS.
The rest of this paper is organized as follows. Section 2 gives brief overviews of EMD, PSR, and ELM. The proposed model is described in Section 3. Section 4 compares the experimental results obtained by the proposed hybrid approach and those by RBF neural network and single ELM, and this paper is concluded in Section 5.
2. Methodology
2.1. Empirical Mode Decomposition (EMD)
EMD is a relatively new signal processing technique. Unlike wavelet decomposition, EMD does not require a basis function to be determined before decomposition. The main idea of EMD is to decompose the original time series into a sum of oscillatory functions, namely, intrinsic mode functions (IMFs). In EMD, each IMF must satisfy two conditions: the number of extrema (the sum of maxima and minima) and the number of zero crossings must be equal or differ at most by one, and, at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero.
The essence of EMD is the sifting process which extracts IMFs from the original data. The algorithm of EMD is described as follows [19, 20, 25].
Step 1. Identify all the local extrema, including the minimum and maximum values, in the time series data x(t).

Step 2. Generate the upper envelope u(t) and the lower envelope l(t) by cubic spline interpolation.

Step 3. Calculate the mean value of the upper and lower envelopes and then generate the mean envelope as

m(t) = (u(t) + l(t)) / 2.

Step 4. Calculate the difference between the time series data x(t) and the mean value m(t). The first difference is designated as a proto-intrinsic mode function:

h(t) = x(t) − m(t).

Step 5. Check whether the proto-intrinsic mode function h(t) satisfies the properties of an IMF or not. Ideally, h(t) should be an IMF. However, it may generate new extrema and shift or exaggerate the existing extrema in the sifting process.

If h(t) satisfies all the requirements of an IMF, it is denoted as the ith IMF c_i(t), and the residue r(t) = x(t) − c_i(t) substitutes for the original time series data x(t).

Otherwise, h(t) is not an IMF, and it substitutes for the original time series x(t).

Step 6. Repeat Steps 1 to 5. The sifting process stops when the residue satisfies one of the termination criteria. First, the residue r_n(t) or the nth component c_n(t) becomes smaller than a predetermined threshold, or r_n(t) becomes a monotonic function such that no more IMFs can be extracted. Second, the number of zero crossings and extrema is the same as that of the preceding sifting step.

By using the above algorithm, the original time series data x(t) can be decomposed into n modes and a residue as follows:

x(t) = Σ_{i=1}^{n} c_i(t) + r_n(t),

where n is the number of IMFs, c_i(t) denotes the ith IMF, the IMFs being nearly orthogonal to each other and nearly periodic, and r_n(t) is the final residue, which is a constant or a trend. Through the sifting process, each IMF is independent and specific in expressing the local characteristics of the original time series. The set of IMFs is derived from high frequency to low frequency, while r_n(t) represents the central tendency of the data series x(t). In addition, EMD can also be regarded as a high-pass, band-pass, or low-pass filter.
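To make the sifting procedure concrete, the following is a minimal didactic sketch in Python (NumPy/SciPy). The simple extrema detection, the fixed sifting count, and the clamped endpoint handling of the splines are simplifying assumptions for illustration, not the paper's implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def envelope_mean(x):
    """Steps 1-3: mean of the upper/lower cubic-spline envelopes, or None
    if there are too few extrema (i.e. x is essentially monotonic)."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None
    # include the endpoints so the splines span the whole record
    upper = CubicSpline(np.r_[0, maxima, len(x) - 1], np.r_[x[0], x[maxima], x[-1]])(t)
    lower = CubicSpline(np.r_[0, minima, len(x) - 1], np.r_[x[0], x[minima], x[-1]])(t)
    return (upper + lower) / 2.0

def emd(x, max_imfs=10, n_sift=10):
    """Steps 4-6: repeated sifting; returns (imfs, residue) with
    x == sum(imfs) + residue by construction."""
    residue = np.asarray(x, dtype=float).copy()
    imfs = []
    while len(imfs) < max_imfs:
        h = residue.copy()
        for _ in range(n_sift):       # a fixed number of sifting iterations
            m = envelope_mean(h)
            if m is None:
                break
            h = h - m                 # Step 4: subtract the mean envelope
        if envelope_mean(residue) is None:
            break                     # residue is monotonic: stop (Step 6)
        imfs.append(h)
        residue = residue - h
    return imfs, residue

# toy signal: a fast oscillation, a slow oscillation, and a trend
t = np.linspace(0.0, 1.0, 500)
x = np.sin(40 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t) + t
imfs, residue = emd(x)
assert np.allclose(sum(imfs) + residue, x)   # exact additive decomposition
```

The final assertion holds by construction: whatever the quality of the extracted modes, the IMFs and the residue always sum back to the original series.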
2.2. Phase Space Reconstruction (PSR)
Takens' embedding theorem [26] provides the theoretical foundation for the analysis of time series generated by nonlinear dynamical systems. Later, Sauer et al. [27] showed that a phase space can be reconstructed from a univariate chaotic time series. Let {x_i}, i = 1, 2, ..., N, where N is the length of the time series, be a univariate time series generated by a d-dimensional chaotic attractor; then a phase space of the attractor can be reconstructed by using the delay coordinates defined as

X_i = (x_i, x_{i+τ}, ..., x_{i+(m−1)τ}),

where m is known as the embedding dimension of the reconstructed phase space and τ is the delay constant.

The selection of the embedding dimension m and the delay constant τ is very important for prediction modeling [3]. Therefore, for a given delay time τ, a time series is represented in the so-called "phase space" by a set of delay vectors (DVs) X_i of a given embedding dimension m.
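As a concrete illustration, the delay-coordinate construction can be written in a few lines of Python (NumPy); the function name is illustrative:

```python
import numpy as np

def delay_vectors(x, m, tau):
    """Build the delay-coordinate matrix: row i is
    (x[i], x[i+tau], ..., x[i+(m-1)*tau])."""
    x = np.asarray(x)
    n_vectors = len(x) - (m - 1) * tau
    return np.array([x[i : i + m * tau : tau] for i in range(n_vectors)])

x = np.arange(10)                      # 0..9
dv = delay_vectors(x, m=3, tau=2)
# first delay vector is (x[0], x[2], x[4]) = (0, 2, 4)
assert dv.shape == (6, 3)
assert (dv[0] == [0, 2, 4]).all()
```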
(1) Determination of the Delay Time τ and the Minimum Embedding Dimension m. The entropy ratio (ER) method [28] is a novel method for determining the set of parameters for a phase space representation of a time series. Based upon the differential entropy, the optimal embedding dimension m_opt and time lag τ_opt are determined simultaneously.

Based upon the probability density function p(x) of the data, the differential entropy is defined as

H(x) = −∫ p(x) ln p(x) dx.

Particularly convenient is the Kozachenko-Leonenko (KL) estimate of the differential entropy,

H(x, m, τ) = (1/N) Σ_{j=1}^{N} ln(N ρ_j) + ln 2 + C_E,

where N is the number of samples in the dataset, ρ_j is the Euclidean distance of the jth delay vector to its nearest neighbour, and C_E ≈ 0.5772 is the Euler constant. For a given embedding dimension m and time lag τ, H(x, m, τ) denotes the differential entropy estimated for the time delay embedded version of the time series x.

The KL estimates for the time delay embedded versions of the original time series x and of a set of surrogates x_{s,i} are computed using the above estimate for increasing m and τ (the index i refers to the ith surrogate). To determine the optimal embedding parameters, the ratio

R_ent(m, τ) = H(x, m, τ) / ⟨H(x_{s,i}, m, τ)⟩_i

needs to be minimized, where ⟨·⟩_i denotes the average over the surrogates. To penalize higher embedding dimensions, the minimum description length (MDL) method is superimposed, yielding the "entropy ratio" (ER)

ER(m, τ) = R_ent(m, τ) (1 + (m ln N) / N),

where N is the number of delay vectors, which is kept constant for all values of m and τ under consideration. The minimum of the plot of the entropy ratio yields the optimal set of embedding parameters (m_opt, τ_opt).
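A sketch of the KL entropy estimate for a delay-embedded series (Python/NumPy). The averaged 1/N form of the estimator is used here, and the O(N²) brute-force nearest-neighbour search is a simplification for clarity:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler constant C_E

def delay_vectors(x, m, tau):
    """Rows are the delay vectors (x[i], x[i+tau], ..., x[i+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i : i + m * tau : tau] for i in range(n)])

def kl_entropy(x, m, tau):
    """Kozachenko-Leonenko estimate for the (m, tau)-embedded series:
    (1/N) * sum_j ln(N * rho_j) + ln 2 + C_E, where rho_j is the distance
    from delay vector j to its nearest neighbour."""
    dv = delay_vectors(np.asarray(x, dtype=float), m, tau)
    N = len(dv)
    # pairwise Euclidean distances, diagonal masked out
    d = np.linalg.norm(dv[:, None, :] - dv[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    rho = d.min(axis=1)                   # nearest-neighbour distances
    return np.mean(np.log(N * rho)) + np.log(2) + EULER_GAMMA

rng = np.random.default_rng(0)
x = rng.normal(size=300)
h1 = kl_entropy(x, m=1, tau=1)
h2 = kl_entropy(2.0 * x, m=1, tau=1)
# sanity check: scaling a 1-D signal by a shifts differential entropy by ln(a)
assert np.isclose(h2 - h1, np.log(2.0))
```

The closing check exploits an exact property of this estimator: scaling the data scales every nearest-neighbour distance by the same factor, so the estimate shifts by exactly ln(a).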
(2) Identification of Nonlinearity. The delay vector variance (DVV) method [29] is a novel analysis of a time series which examines a signal's unpredictability by observing the variability of the targets belonging to sets of similar delay vectors (DVs). For a given embedding dimension m, the DVV method can be summarized as follows.

(i) The mean μ_d and the standard deviation σ_d are computed over all pairwise Euclidean distances between DVs.

(ii) The sets Ω_k(r_d) are generated, each consisting of all DVs that lie closer to X_k than a certain distance r_d. The distances r_d are taken from the interval [μ_d − n_d σ_d, μ_d + n_d σ_d], for example uniformly spaced, where n_d is a parameter controlling the span over which to perform the DVV analysis.

(iii) For every set Ω_k(r_d), the variance of the corresponding targets, σ_k²(r_d), is computed. The average over all sets, divided by the variance σ_x² of the time series, yields the measure of unpredictability, the target variance:

σ*²(r_d) = [(1/N) Σ_{k=1}^{N} σ_k²(r_d)] / σ_x².
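Steps (i)-(iii) can be sketched as follows (Python/NumPy). The uniform spacing of r_d, the minimum set size, and the choice of targets as the samples immediately following each delay vector are illustrative assumptions:

```python
import numpy as np

def dvv(x, m=2, tau=1, n_d=3.0, n_r=25, min_set=5):
    """Delay vector variance: target variance sigma*^2 as a function of r_d."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m * tau
    dv = np.array([x[i : i + m * tau : tau] for i in range(n)])  # delay vectors
    targets = x[m * tau : m * tau + n]            # sample following each DV
    # (i) mean and std over all pairwise DV distances
    d = np.linalg.norm(dv[:, None, :] - dv[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    mu, sd = d[iu].mean(), d[iu].std()
    var_x = np.var(targets)
    out = []
    for r_d in np.linspace(max(mu - n_d * sd, 1e-12), mu + n_d * sd, n_r):
        # (ii) sets of DVs closer than r_d to each DV X_k
        # (iii) variance of the corresponding targets, averaged and normalized
        set_vars = [np.var(targets[d[k] <= r_d])
                    for k in range(n) if (d[k] <= r_d).sum() >= min_set]
        out.append((r_d, np.mean(set_vars) / var_x if set_vars else np.nan))
    return np.array(out)

rng = np.random.default_rng(1)
x = rng.normal(size=400)                 # white noise: maximally unpredictable
curve = dvv(x)
# for large r_d every set contains (almost) all targets, so sigma*^2 -> 1
assert 0.9 < curve[-1, 1] < 1.1
```

For unpredictable (noise-like) data the curve stays near 1 over the whole span, while deterministic data produce small target variances at small r_d; comparing the curve of the original series with those of its surrogates is what the scatter diagram in Section 4 visualizes.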
In the following study, the linear or nonlinear nature of the time series is examined by performing DVV analyses on both the original and a number of surrogate time series, using the optimal embedding dimension of the original time series.
(3) Identification of Chaotic Characteristic. When the largest Lyapunov exponent λ_1 of the system is larger than zero, there is a chaotic attractor, and λ_1 can be used to measure the degree of chaos [30]. The largest Lyapunov exponent is computed by the Wolf method [31].
2.3. Extreme Learning Machine (ELM)
Suppose there are N distinct samples (x_j, t_j), where x_j ∈ R^n and t_j ∈ R^m. An SLFN with Ñ hidden neurons can be described as

Σ_{i=1}^{Ñ} β_i g(w_i · x_j + b_i) = o_j,  j = 1, 2, ..., N,

where w_i is the weight vector connecting the ith hidden neuron and the input neurons, β_i is the weight vector connecting the ith hidden neuron and the output neurons, o_j is the actual output vector, and b_i is the threshold of the ith hidden neuron. g(·) represents the activation function of the hidden neurons, and w_i · x_j denotes the inner product of w_i and x_j.

If the SLFN can approximate the N samples with zero error, then we have Σ_{j=1}^{N} ||o_j − t_j|| = 0.

Thus, there also exist parameters β_i, w_i, and b_i such that

Σ_{i=1}^{Ñ} β_i g(w_i · x_j + b_i) = t_j,  j = 1, 2, ..., N.

The above equations can be compactly written as

Hβ = T,

where H is the N × Ñ hidden-layer output matrix with entries H_{ji} = g(w_i · x_j + b_i), β = [β_1, ..., β_Ñ]^T, and T = [t_1, ..., t_N]^T.

Unlike the traditional function approximation methods, which require the adjustment of the input weights and hidden-layer biases, in ELM the input weights and hidden biases are randomly generated. Thus, training an SLFN is simply equivalent to finding a least-squares solution of the linear system Hβ = T. The smallest-norm least-squares solution of this linear system is

β̂ = H† T,

where H† is the Moore-Penrose generalized inverse of the matrix H. Owing to the Moore-Penrose generalized inverse, the learning speed is dramatically increased for the single-hidden-layer feedforward neural network [12].
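The whole ELM training procedure therefore reduces to one random draw and one pseudoinverse, as in this sketch (Python/NumPy; the tanh activation, node count, weight ranges, and seed are arbitrary illustrative choices):

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """ELM training: draw input weights/biases at random, then solve for the
    output weights analytically with the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-4, 4, size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.uniform(-1, 1, size=n_hidden)                # fixed random hidden biases
    H = np.tanh(X @ W + b)                               # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                         # beta = pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# sketch: fit one period of a sine with a 50-node ELM
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
T = np.sin(2 * np.pi * X)
W, b, beta = elm_train(X, T)
train_rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
assert train_rmse < 0.05   # the analytic solution fits the training data closely
```

No iterative weight updates are needed: the only learned quantities are the output weights beta, obtained in closed form, which is the source of ELM's speed advantage over gradient-based training.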
3. Uranium Resource Price Forecasting Method Based on EMD-PSR-ELM
The proposed hybrid approach for international uranium resource price forecasting, namely EMD-PSR-ELM, combines EMD, PSR, and ELM and is composed of four main stages, described as follows.
Stage 1 (EMD decomposition). The original time series x(t), t = 1, 2, ..., N, is decomposed into n IMF components c_i(t), i = 1, 2, ..., n, and one residual component r_n(t) by using EMD.
Stage 2 (combination of the decomposition components). In this stage, since each IMF has a different time scale, high-frequency and low-frequency components can be obtained by combining the IMFs in order of frequency from high to low, while the residue is treated separately. The process is performed as follows.
Step 1. The mean of each IMF is evaluated in order.

Step 2. Determine the first IMF whose mean deviates significantly from zero; denote its index by k.

Step 3. The first k − 1 IMFs are summed to reconstruct the high-frequency component, namely, HF(t) = Σ_{i=1}^{k−1} c_i(t), and the accumulation of the remaining IMFs is reconstructed into the low-frequency component, namely, LF(t) = Σ_{i=k}^{n} c_i(t).
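Assuming the IMFs are available as rows of an array (ordered fine to coarse), Steps 1-3 can be sketched as below; the one-sample t-test as the criterion for "mean significantly deviating from zero" is an assumption, since the paper does not specify the test:

```python
import numpy as np
from scipy.stats import ttest_1samp

def fine_to_coarse(imfs, alpha=0.05):
    """Split the IMFs (rows, ordered fine to coarse) into high- and
    low-frequency components at the first IMF whose mean deviates
    significantly from zero (one-sample t-test; an assumed criterion)."""
    k = len(imfs)                      # if no IMF is flagged, all are "high"
    for i, imf in enumerate(imfs):
        if ttest_1samp(imf, 0.0).pvalue < alpha:
            k = i                      # index of the first significant IMF
            break
    high = np.sum(imfs[:k], axis=0)    # IMFs before the significant one
    low = np.sum(imfs[k:], axis=0)     # the significant IMF and all coarser ones
    return high, low

# toy "IMFs": two zero-mean oscillations and one component with a clear offset
t = np.linspace(0.0, 1.0, 300)
imfs = np.array([np.sin(60 * np.pi * t),
                 np.sin(10 * np.pi * t),
                 np.sin(2 * np.pi * t) + 2.0])
high, low = fine_to_coarse(imfs)
assert np.allclose(high + low, imfs.sum(axis=0))  # the split loses nothing
assert low.mean() > 1.5                           # the offset ends up in the low part
```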
Stage 3 (ELM modeling). This stage can be subdivided into three steps as follows.
Step 1 (phase space reconstruction). Firstly, the parameters, namely the delay time τ and the embedding dimension m, should be determined. Then, each one-dimensional time series obtained by the combination in Stage 2 undergoes phase space reconstruction, so that the high-dimensional delay vectors

X_i = (x_i, x_{i+τ}, ..., x_{i+(m−1)τ}),  i = 1, 2, ..., N − (m−1)τ,

can be constructed. Accordingly, the input and output samples can be represented by the matrices X and Y, respectively, where each row of X is a delay vector X_i and the corresponding entry of Y is the one-step-ahead target value following X_i.
Step 2 (data normalization). The data need to be represented on a normalized scale for ELM training and prediction. Thus, in this study, the datasets of the three phase space domains are linearly scaled into the range [0, 1] using the expression

x̄_i^j = (x_i^j − x_min^j) / (x_max^j − x_min^j),

where x̄_i^j represents the jth dimension of the ith sample after normalization, x_i^j is the jth dimension of the ith sample before normalization, and x_min^j and x_max^j represent the minimum and maximum values of the jth dimension, respectively. The data are then divided into two sets, a training set and a testing set.
Step 3 (ELM prediction). Regression forecasting models are set up for the high-frequency component, the low-frequency component, and the residue by using ELM, respectively.
Stage 4 (result composition). The final prediction results are obtained by combining the prediction values of the three models after denormalization.
The proposed EMD-PSR-ELM method is schematically depicted in Figure 1.
4. Experimental Results
4.1. Data Description and Evaluation Criteria
In this paper, to evaluate the performance of the proposed EMD-PSR-ELM prediction model, a real time series of international uranium resource prices was chosen as the experimental sample. The data used in this study are monthly data, freely available from the IndexMundi website (http://www.indexmundi.com), covering the period from October 1982 to September 2012, for a total of 360 values. Firstly, the time series is analyzed by chaos theory. The delay time τ and the embedding dimension m can be determined simultaneously (Figure 2). Secondly, nonlinearity is analyzed by using the DVV method. Due to the standardization of the distance axis, the DVV plots can be conveniently combined in a scatter diagram, where the horizontal axis corresponds to the DVV plot of the original time series and the vertical axis to that of the surrogate time series. If the surrogate time series yield DVV plots similar to that of the original, the "DVV scatter diagram" coincides with the bisector line, and the original time series is probably linear. The deviation from the bisector line is thus a measure of nonlinearity, as can be seen from Figures 3 and 4. Additionally, the method of Wolf et al. [31] is employed to compute the largest Lyapunov exponent λ_1. Since λ_1 > 0, we can conclude that the international uranium resource price time series has a chaotic characteristic. Once these parameters are obtained, the phase space can be reconstructed; that is to say, the optimal embedding dimensions and delay times are used to construct the input matrix. There are in total 351 data points in the phase space. The data were divided into two sets: the first 326 data points are used as the training samples, while the remaining 25 data points are used as the testing samples.
The prediction performance is evaluated using the mean absolute error (MAE), the root mean squared error (RMSE), and the directional prediction statistic (DS) [32]. These measures are defined as

RMSE = sqrt( (1/n) Σ_{t=1}^{n} (y_t − ŷ_t)² ),

MAE = (1/n) Σ_{t=1}^{n} |y_t − ŷ_t|,

DS = (100/(n−1)) Σ_{t=2}^{n} d_t,

where y_t and ŷ_t are the actual and predicted values, respectively, n is the sample size, and d_t = 1 if (y_t − y_{t−1})(ŷ_t − y_{t−1}) ≥ 0, and d_t = 0 otherwise. Obviously, smaller RMSE and MAE values and a larger DS value mean better performance.
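These three criteria translate directly into NumPy; in this sketch DS follows the formulation of [32], comparing the actual change with the change predicted from the previous actual value:

```python
import numpy as np

def rmse(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.mean(np.abs(y - yhat))

def ds(y, yhat):
    """Directional statistic: percentage of time steps where the predicted
    change (from the previous actual value) has the same sign as the
    actual change."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    hits = (y[1:] - y[:-1]) * (yhat[1:] - y[:-1]) >= 0
    return 100.0 * hits.mean()

y    = [10.0, 11.0, 10.5, 12.0, 12.5]   # actual prices
yhat = [10.2, 10.8, 11.2, 11.8, 12.6]   # predictions
assert np.isclose(rmse(y, yhat), np.sqrt(0.124))
assert np.isclose(mae(y, yhat), 0.28)
# actual changes: (+, -, +, +); predicted-from-previous: (+, +, +, +) -> 3 of 4 agree
assert np.isclose(ds(y, yhat), 75.0)
```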
4.2. Forecasting Results
Following the steps described in Section 3, we carried out the prediction experiments. First, using the EMD technique, the international uranium resource price series is decomposed into seven independent IMFs and one residue. Figure 5 shows the decomposition results for the uranium resource price series using EMD.
Figure 5 panels: (a) IMF1; (b) IMF2; (c) IMF3; (d) IMF4; (e) IMF5; (f) IMF6; (g) IMF7; (h) Residue.
Through the analysis of the mean value of each IMF after the decomposition of the international uranium resource price series by EMD, it can be seen from Figure 6 that the first IMF whose mean deviates significantly from zero is IMF3. Therefore, the partial reconstruction with IMF1 and IMF2 represents the high-frequency component of the series, with characteristically small amplitudes, which contains the effects of the market's short-term fluctuations; the partial reconstruction with IMF3, IMF4, IMF5, IMF6, and IMF7 represents the low-frequency component, which should be representative of the effect of significant events [33]. The residue is treated separately, as the long-term trend in the evolution of uranium resource prices. Figure 7 shows the three components of the monthly uranium resource price series from Oct. 1982 to Sept. 2012.
After reconstructing the phase space, the high-frequency component, the low-frequency component, and the residue are individually used to build ELM prediction models, and the predicted values of uranium resource prices for the next 25 months are then obtained from the different models. In building EMD-PSR-ELM, the number of hidden nodes is set to 9 for ELM. The error results of the ELM models for the three components are shown in Table 1. As can be seen in Table 1, the ELM prediction models perform well for all three components.

The forecasting results of the proposed EMD-PSR-ELM model are compared with those of the PSR-ELM and PSR-RBFNN models, which use non-EMD forecasting variables. The PSR-RBFNN forecasting model, with three input nodes and one output node, is built by using the same phase space reconstruction method as above. The neural network toolbox of MATLAB is adopted in this study. The mean squared error goal is 0.1, the spread of the radial basis functions is 1.6, and the default settings of the neural network toolbox are used for the remaining parameters. In PSR-ELM, the number of hidden nodes is set the same as above.
Figure 8 depicts the actual prices and the values predicted by the EMD-PSR-ELM, PSR-RBFNN, and PSR-ELM models. From this figure, it can be observed that the deviation between the actual and predicted values is smaller for the proposed EMD-PSR-ELM model.
Table 2 compares the prediction results obtained with the EMD-PSR-ELM, PSR-RBFNN, and PSR-ELM models for the uranium resource price over the next 25 months.

It can be observed from Table 2 that the proposed EMD-PSR-ELM model provides better forecasting results than the PSR-RBFNN and PSR-ELM models in terms of RMSE, MAE, and DS.
5. Conclusions
This study has presented a forecasting model for uranium resource prices by integrating EMD, PSR, and ELM. In terms of the experimental results presented in this study, we can draw the following conclusions.

(1) EMD can fully capture the local fluctuations of the data and can be used as a preprocessor to decompose the complicated raw data into a finite set of IMFs and a residue, which have simpler frequency components and high correlations.

(2) On the one hand, although empirical mode decomposition is an important tool for multiscale modeling, it suffers from mode mixing and mode splitting. To rectify this issue, the ensemble EMD [34] can be used in the future. It is also worth trying to employ multivariate EMD, and in particular the noise-assisted MEMD, in order to compute EMD free of such artifacts [35–37]. On the other hand, other multiscale models, such as the synchrosqueezed transform, could also be considered for the forecasting task [38].

(3) The network topology of the model has an important influence on the prediction performance of ELM and RBF neural networks. It is more objective to identify the chaotic characteristic of the uranium resource price series and to determine the embedding dimension of the reconstructed phase space by quantitative calculation; the determined embedding dimension can then serve as the number of nodes in the input layer of the single-hidden-layer feedforward neural network and the RBF network.

(4) RMSE, MAE, and DS are used to measure the forecasting accuracy of the models. The experimental results reveal that the proposed hybrid EMD-PSR-ELM approach outperforms the other models, namely PSR-RBFNN and PSR-ELM. Therefore, the proposed method is very suitable for prediction with nonlinear, nonstationary, and highly complex data and is an efficient method for uranium resource price prediction.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is in part supported by the Natural Science Foundation of Jiangxi, China (no. 20114BAB201022) and Humanities and Social Science Research Fund from the Education Bureau of Jiangxi, China (no. GL1202). The authors would also like to thank the editor and the reviewers for their constructive comments that improved the paper.
References
 J.-J. Wang, J.-Z. Wang, Z.-G. Zhang, and S.-P. Guo, "Stock index forecasting based on a hybrid model," Omega, vol. 40, no. 6, pp. 758–766, 2012.
 B. Z. Zhu and Y. M. Wei, "Carbon price forecasting with a novel hybrid ARIMA and least squares support vector machines methodology," Omega, vol. 41, pp. 517–524, 2013.
 S.-C. Huang, P.-J. Chuang, C.-F. Wu, and H.-J. Lai, "Chaos-based support vector regressions for exchange rate forecasting," Expert Systems with Applications, vol. 37, no. 12, pp. 8590–8598, 2010.
 A.-S. Chen, M. T. Leung, and H. Daouk, "Application of neural networks to an emerging financial market: forecasting and trading the Taiwan Stock Index," Computers & Operations Research, vol. 30, no. 6, pp. 901–923, 2003.
 Y. Zhang and L. Wu, "Stock market prediction of S&P 500 via combination of improved BCO approach and BP neural network," Expert Systems with Applications, vol. 36, no. 5, pp. 8849–8854, 2009.
 C.-F. Chen, M.-C. Lai, and C.-C. Yeh, "Forecasting tourism demand based on empirical mode decomposition and neural network," Knowledge-Based Systems, vol. 26, pp. 281–287, 2012.
 D. P. Mandic and J. Chambers, Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability, John Wiley & Sons, 2001.
 H. Jaeger and H. Haas, "Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication," Science, vol. 304, no. 5667, pp. 78–80, 2004.
 A. Kazem, E. Sharifi, F. K. Hussain, M. Saberi, and O. K. Hussain, "Support vector regression with chaos-based firefly algorithm for stock market price forecasting," Applied Soft Computing, vol. 13, pp. 947–958, 2013.
 G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
 F. L. Chen and T. Y. Ou, "Sales forecasting system based on Gray extreme learning machine with Taguchi method in retail industry," Expert Systems with Applications, vol. 38, no. 3, pp. 1336–1345, 2011.
 M. Xia, Y. C. Zhang, L. G. Weng, and X. L. Ye, "Fashion retailing forecasting based on extreme learning machine with adaptive metrics of inputs," Knowledge-Based Systems, vol. 36, pp. 253–259, 2012.
 C. J. Lu and Y. E. Shao, "Forecasting computer products sales by integrating ensemble empirical mode decomposition and extreme learning machine," Mathematical Problems in Engineering, vol. 2012, Article ID 831201, 15 pages, 2012.
 Y. K. Bao, T. Xiong, and Z. Y. Hu, "Forecasting air passenger traffic by support vector machines with ensemble empirical mode decomposition and slope-based method," Discrete Dynamics in Nature and Society, vol. 2012, Article ID 431512, 12 pages, 2012.
 C.-J. Lu, T.-S. Lee, and C.-C. Chiu, "Financial time series forecasting using independent component analysis and support vector regression," Decision Support Systems, vol. 47, no. 2, pp. 115–125, 2009.
 K. L. Chen, C. C. Yeh, and T. L. Lu, "A hybrid demand forecasting model based on empirical mode decomposition and neural network in TFT-LCD industry," Cybernetics and Systems, vol. 43, no. 5, pp. 426–441, 2012.
 H. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 2011.
 C.-J. Lu, "Integrating independent component analysis-based denoising scheme with neural network for stock price prediction," Expert Systems with Applications, vol. 37, no. 10, pp. 7056–7064, 2010.
 N. E. Huang, Z. Shen, S. R. Long et al., "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society A, vol. 454, no. 1971, pp. 903–995, 1998.
 N. E. Huang, M.-L. C. Wu, S. R. Long et al., "A confidence limit for the empirical mode decomposition and Hilbert spectral analysis," Proceedings of the Royal Society A, vol. 459, no. 2037, pp. 2317–2345, 2003.
 Z.-Y. Xuan and G.-X. Yang, "Application of EMD in the atmosphere time series prediction," Acta Automatica Sinica, vol. 34, no. 1, pp. 97–101, 2008.
 J.-D. Wang and W.-G. Qi, "Prediction of river water turbidity based on EMD-SVM," Acta Electronica Sinica, vol. 37, no. 10, pp. 2130–2133, 2009.
 Y. F. Yang, Y. K. Bao, Z. Y. Hu, and R. Zhang, "Crude oil price prediction based on empirical mode decomposition and support vector machines," Chinese Journal of Management, vol. 7, no. 12, pp. 1884–1889, 2010.
 L. Ye and P. Liu, "Combined model based on EMD-SVM for short-term wind power prediction," Proceedings of the Chinese Society of Electrical Engineering, vol. 31, no. 31, pp. 102–108, 2011.
 Y. Wei and M.-C. Chen, "Forecasting the short-term metro passenger flow with empirical mode decomposition and neural networks," Transportation Research Part C, vol. 21, no. 1, pp. 148–162, 2012.
 F. Takens, "Detecting strange attractors in turbulence," in Dynamical Systems and Turbulence, Warwick 1980, vol. 898 of Lecture Notes in Mathematics, pp. 366–381, Springer, Berlin, Germany, 1981.
 T. Sauer, J. A. Yorke, and M. Casdagli, "Embedology," Journal of Statistical Physics, vol. 65, no. 3-4, pp. 579–616, 1991.
 T. Gautama, D. P. Mandic, and M. M. Van Hulle, "A differential entropy based method for determining the optimal embedding parameters of a signal," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, pp. 29–32, April 2003.
 T. Gautama, D. P. Mandic, and M. M. Van Hulle, "The delay vector variance method for detecting determinism and nonlinearity in time series," Physica D, vol. 190, no. 3-4, pp. 167–176, 2004.
 L. Li and L. Chong-Xin, "Application of chaos and neural network in power load forecasting," Discrete Dynamics in Nature and Society, vol. 2011, Article ID 597634, 12 pages, 2011.
 A. Wolf, J. B. Swift, H. L. Swinney, and J. A. Vastano, "Determining Lyapunov exponents from a time series," Physica D, vol. 16, no. 3, pp. 285–317, 1985.
 L. Yu, S. Wang, and K. K. Lai, "A novel nonlinear ensemble forecasting model incorporating GLAR and ANN for foreign exchange rates," Computers & Operations Research, vol. 32, no. 10, pp. 2523–2541, 2005.
 X. Zhang, K. K. Lai, and S.-Y. Wang, "A new approach for crude oil price analysis based on empirical mode decomposition," Energy Economics, vol. 30, no. 3, pp. 905–918, 2008.
 Z. Wu and N. E. Huang, "Ensemble empirical mode decomposition: a noise-assisted data analysis method," Advances in Adaptive Data Analysis, vol. 1, no. 1, pp. 1–41, 2009.
 N. Rehman and D. P. Mandic, "Filter bank property of multivariate empirical mode decomposition," IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2421–2426, 2011.
 N. Rehman and D. P. Mandic, "Multivariate empirical mode decomposition," Proceedings of the Royal Society A, vol. 466, no. 2117, pp. 1291–1302, 2010.
 N. ur Rehman, C. Park, N. E. Huang, and D. P. Mandic, "EMD via MEMD: multivariate noise-aided computation of standard EMD," Advances in Adaptive Data Analysis, vol. 5, no. 2, Article ID 1350007, 25 pages, 2013.
 A. Ahrabian, C. C. Took, and D. P. Mandic, "Algorithmic trading using phase synchronization," IEEE Journal of Selected Topics in Signal Processing, vol. 6, pp. 399–404, 2012.
Copyright
Copyright © 2014 Qisheng Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.