Mathematical Problems in Engineering
Volume 2010 (2010), Article ID 373648, 20 pages
http://dx.doi.org/10.1155/2010/373648
Research Article

Stacked Heterogeneous Neural Networks for Time Series Forecasting

Florin Leon and Mihai Horia Zaharia

Faculty of Automatic Control and Computer Engineering, Technical University “Gheorghe Asachi” of Iaşi, Boulevard Mangeron 53A, 700050 Iaşi, Romania

Received 31 January 2010; Accepted 21 February 2010

Academic Editor: Cristian Toma

Copyright © 2010 Florin Leon and Mihai Horia Zaharia. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A hybrid model for time series forecasting is proposed. It is a stacked neural network containing two multilayer perceptrons: a standard one with bipolar sigmoid activation functions, and another with an exponential activation function in the output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.

1. Introduction

Many processes found in the real world are nonlinear. Therefore, there is a need for accurate, effective tools to forecast their behavior. Current solutions include general methods such as multiple linear regression, nonlinear regression, and artificial neural networks, as well as specialized ones such as SETAR (Self-Exciting Threshold Auto-Regression) [1], MES (Multivariate Exponential Smoothing) [2], and FCAR (Functional Coefficient Auto-Regressive) models [3]. The DAN2 model [4] is a dynamic architecture for artificial neural networks (ANNs) for solving nonlinear forecasting and pattern recognition problems, based on the principle of learning and accumulating knowledge at each layer and propagating and adjusting this knowledge forward to the next layer.

Recently, more hybrid forecasting models have been developed, integrating neural network techniques with conventional models to improve their accuracy. A well-known conventional model is ARIMA, the Auto-Regressive Integrated Moving Average [5]. ARFIMA, the Auto-Regressive Fractionally Integrated Moving Average, is a time series model that generalizes ARIMA by allowing non-integer values of the differencing parameter, which makes it suitable for modeling events with long memory [6]. The SARIMABP model [7] combines SARIMA (Seasonal ARIMA) with the back-propagation training algorithm to predict seasonal time series for machinery production and soft drinks. Another hybrid model, KARIMA [8], combines Kohonen’s self-organizing map and ARIMA to predict short-term traffic flow.

Other techniques include a combination of radial basis function (RBF) neural networks and the Univariate Box-Jenkins (UBJ) model [9]. Another model, which combines RBF neural networks with a nonlinear time-varying evolution particle swarm optimization (PSO) algorithm, was developed in [10]. It provides dynamically adaptive optimization of the inertia and acceleration coefficients, accelerating convergence, and shows good performance on the time series prediction of a power system.

The study of nonlinear phenomena is especially important in the field of system dynamics. Research in this area proved that nonlinear differential equations are capable of generating continuous mathematical functions similar to pulse sequences [11]. Also, an extension to the Fourier/Laplace transform needed for the analysis of signals represented by traveling wave equations was proposed [12], along with a mathematical technique for the simulation of the behavior of large systems of optical oscillators.

Artificial neural networks are among the most accurate and widely used forecasting models, with applications in social, economic, engineering, foreign exchange, and stock market problems, among others. Neural networks (NNs) have several characteristics that make them particularly valuable for forecasting [13]:

(i) Unlike traditional model-based methods, NNs are data-driven and self-adaptive, needing little a priori information about the studied problems.
(ii) NNs are known to have good generalization capabilities. After learning the data presented to them, they can correctly predict unseen data, even in the presence of noise.
(iii) NNs are universal approximators [14]; that is, they can approximate any continuous real function to any degree of accuracy.
(iv) NNs are nonlinear models, and are therefore better suited to capture the true nature of many natural processes.

2. The Proposed Neural Network Architecture

The proposed model is composed of two neural networks, each with one hidden layer, as shown in Figure 1.

Figure 1: The architecture of the stacked neural network.

The inputs of the model are the recent values of the time series, depending on the size of the sliding window $s$. Basically, the stack model predicts the value of the time series at moment $t$ from the latest $s$ values:

$$\hat{x}_t = f(x_{t-1}, x_{t-2}, \ldots, x_{t-s}).$$

The neurons in the hidden layers of both networks that compose the stack are normal multilayer perceptron (MLP) neurons, with bipolar sigmoid (hyperbolic tangent) activation functions:

$$f(x) = \frac{1 - e^{-x}}{1 + e^{-x}} = \tanh\left(\frac{x}{2}\right).$$

The difference between the two neural networks lies in their output layers. The “normal” network, represented at the top of Figure 1, uses the same activation function, the bipolar sigmoid, in its output layer. The “exponential” network has a simple exponential function instead:

$$f(x) = e^{x}.$$

Each network respects the basic information flow of an MLP, in which each neuron $j$ computes its output from the weighted sum of its inputs and its threshold $\theta_j$:

$$y_j = f\!\left(\sum_i w_{ij}\, x_i + \theta_j\right).$$

Finally, the stack output is computed as a weighted average of the two network outputs, $y_1$ and $y_2$:

$$y = w_1\, y_1 + w_2\, y_2,$$

with $w_1 + w_2 = 1$ and $w_1, w_2 \geq 0$.
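
To make the architecture concrete, the following minimal sketch (an illustration written for this article, not the authors' code) implements the forward pass of the stack in Python with NumPy; the layer sizes, initialization, and names are assumptions.

import numpy as np

def bipolar_sigmoid(x):
    # Bipolar sigmoid: (1 - e^{-x}) / (1 + e^{-x}), equivalent to tanh(x / 2).
    return np.tanh(x / 2.0)

class OneHiddenLayerMLP:
    # One-hidden-layer perceptron with a configurable output activation.
    def __init__(self, n_inputs, n_hidden, output_activation, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)          # hidden-layer thresholds
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
        self.b2 = np.zeros(1)                 # output-layer threshold
        self.output_activation = output_activation

    def predict(self, x):
        h = bipolar_sigmoid(x @ self.W1 + self.b1)
        return float(self.output_activation(h @ self.W2 + self.b2)[0])

def stack_predict(normal_net, exp_net, window, w1):
    # Weighted average of the two component outputs, with w2 = 1 - w1.
    return w1 * normal_net.predict(window) + (1.0 - w1) * exp_net.predict(window)

# Example with a sliding window of s = 5 values (untrained weights, for illustration only).
s = 5
normal_net = OneHiddenLayerMLP(s, 8, bipolar_sigmoid)   # "normal" network
exp_net = OneHiddenLayerMLP(s, 8, np.exp)               # "exponential" output network
print(stack_predict(normal_net, exp_net, np.zeros(s), w1=0.8))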

The training of the networks is performed with the classical back-propagation algorithm [15], with a momentum term [16] and an adaptive learning rate [17]. Besides training the two components of the stack, the goal of the model is to find the optimal weights $w_1$ and $w_2$ such that the error of the whole stack is minimized.
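
The weight selection step can be illustrated with a simple sweep over $w_1$ (again a sketch under the assumption that the two components have already been trained; the function name and the 1% step are our own choices):

import numpy as np

def select_stack_weights(y_normal, y_exp, y_true, steps=101):
    # Sweep w1 from 0 to 1 (here in 1% steps) and keep the value that
    # minimizes the mean square error of the stacked prediction.
    best_w1, best_mse = 0.0, np.inf
    for w1 in np.linspace(0.0, 1.0, steps):
        y_stack = w1 * y_normal + (1.0 - w1) * y_exp
        mse = float(np.mean((y_stack - y_true) ** 2))
        if mse < best_mse:
            best_w1, best_mse = w1, mse
    return best_w1, 1.0 - best_w1, best_mse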

3. Case Studies

3.1. Test Methodology

In the following sections, we consider four classical benchmark problems and one original, super-exponential growth problem, on which we test the performance of our model. In each case, we divide the available data into 90% for training and 10% for testing. We separately consider sliding window sizes of 5 and 10 points.
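
A sketch of this data preparation (our own illustration; the variable names and the placeholder series are assumptions) is given below:

import numpy as np

def make_windows(series, s):
    # Build (window, target) pairs: the s most recent values predict the next one.
    X = np.array([series[i:i + s] for i in range(len(series) - s)])
    y = np.asarray(series[s:])
    return X, y

def split_90_10(X, y):
    # Chronological split: first 90% for training, last 10% for testing.
    cut = int(0.9 * len(X))
    return (X[:cut], y[:cut]), (X[cut:], y[cut:])

series = np.sin(np.linspace(0.0, 25.0, 288))          # placeholder series
X, y = make_windows(series, s=5)
(train_X, train_y), (test_X, test_y) = split_90_10(X, y)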

3.2. The Sunspot Series

Wolfer’s sunspot time series records the yearly number of spots visible on the surface of the sun. It contains data from 1700 to 1987, for a total of 288 observations. This series is considered nonlinear and non-Gaussian and is often used to evaluate the effectiveness of nonlinear models [13, 18].

With a window size of 5 points and considering 28 points ahead, the performance of the model on the training set is displayed in Figure 2.

Figure 2: The proposed model performance on the sunspot training data (window size 5).

The forecasting capabilities of the model are displayed in Figure 3.

Figure 3: The proposed model predictions for the sunspot data (window size 5).

In Figure 4, the effect of the weights on the mean square error of the stack on the testing data is displayed. One can see that the optimal weights are $w_1 = 100\%$ and $w_2 = 0\%$, where $w_1$ is the weight of the neural network with sigmoid activation functions and $w_2 = 1 - w_1$ is the weight of the second neural network, whose output neuron has an exponential activation function.

Figure 4: The evolution of MSE when $w_1$ varies (sunspots, window size 5).

Table 1 shows the errors both for training and for testing. It separately presents the mean square errors for the first, “normal” network, for the second, “exponential” network, and for their stack. Since the ranges of the datasets are very different, we also display the MSE on data normalized between 0 and 1, in order to better compare the performance of our model on data with different shapes.
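
The normalized error we report can be computed as in the short sketch below (an assumption about the exact normalization; the essential point is that targets and predictions are rescaled to [0, 1] with the same min-max transform before the MSE is taken):

import numpy as np

def normalized_mse(y_true, y_pred):
    # Rescale both series with the min-max transform of the target values,
    # then compute the usual mean square error on the [0, 1] scale.
    lo, hi = float(np.min(y_true)), float(np.max(y_true))
    scale = hi - lo
    return float(np.mean((((y_true - lo) / scale) - ((y_pred - lo) / scale)) ** 2))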

Table 1: The errors of the model for the sunspot data (window size 5).

Next, we increase the size of the sliding window to 10 points. The corresponding performance of the model on the training set is displayed in Figure 5.

Figure 5: The proposed model performance on the sunspot training data (window size 10).

The prediction capabilities of the model are displayed in Figure 6.

Figure 6: The proposed model predictions for the sunspot data (window size 10).

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal network weight, $w_1$, in Figure 7. In this case, as in the previous one, the optimal weights are $w_1 = 100\%$ and $w_2 = 0\%$.

Figure 7: The evolution of MSE when $w_1$ varies (sunspots, window size 10).

Table 2 shows the mean square errors obtained for the increased window size.

Table 2: The errors of the model for the sunspot data (window size 10).

In both cases, we see that the normal neural network can approximate the time series better than the exponential network. When the window size is 10, the model seems to slightly overfit the data compared to the case when the window size is 5, yielding lower errors on the training set but slightly higher errors on the testing set.

3.3. The Canadian Lynx Series

This series contains the yearly number of lynx trapped in the Mackenzie River district of Northern Canada [19]. The data set has 114 observations, corresponding to the period of 1821–1934.

With a window size of 5 points and considering 11 points ahead, the performance of the model on the training set is displayed in Figure 8.

Figure 8: The proposed model performance on the lynx training data (window size 5).

The forecasting capabilities of the model are displayed in Figure 9.

Figure 9: The proposed model predictions for the lynx data (window size 5).

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal network weight, $w_1$, in Figure 10. Here, the optimal weights are $w_1 = 23\%$ and $w_2 = 77\%$.

Figure 10: The evolution of MSE when $w_1$ varies (lynx, window size 5).

Table 3 shows the mean square errors obtained for a window size of 5.

Table 3: The errors of the model for the lynx data (window size 5).

When the size of the window is increased to 10 points, the performance of the model on the training set is that shown in Figure 11.

Figure 11: The proposed model performance on the lynx training data (window size 10).

The testing performance of the model is displayed in Figure 12.

Figure 12: The proposed model predictions for the lynx data (window size 10).

The optimal weights for this stack are $w_1 = 99\%$ and $w_2 = 1\%$, as can be observed in Figure 13.

Figure 13: The evolution of MSE when $w_1$ varies (lynx, window size 10).

Table 4 shows the mean square errors obtained for a window size of 10.

Table 4: The errors of the model for the lynx data (window size 10).

For this dataset, the exponential network contributes to the stack result. When the window size is 5, its weight even dominates the stack, while its contribution decreases for a larger window size. It is possible that this phenomenon appears because, for a smaller window size, the model may look exponential when learning the high peaks. For a larger window, the model has a wider perspective, which includes the peaks, and the problem may appear more linear. The errors for a window size of 10 are also much smaller for the training set, and larger for the testing set, compared to the errors found for a window size of 5.

3.4. Ozone

This dataset contains monthly ozone concentrations, in parts per million, measured in downtown Los Angeles from January 1955 to December 1972 [20].

We consider a window size of 5 points and 24 points ahead for prediction. The performance of the model on the training set is displayed in Figure 14.

Figure 14: The proposed model performance on the ozone training data (window size 5).

The prediction performance of the model is shown in Figure 15.

Figure 15: The proposed model predictions for the ozone data (window size 5).

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal network weight, $w_1$, in Figure 16. Here, the optimal weights are $w_1 = 14\%$ and $w_2 = 86\%$.

Figure 16: The evolution of MSE when $w_1$ varies (ozone, window size 5).

Table 5 shows the mean square errors obtained for this time series.

Table 5: The errors of the model for the ozone data (window size 5).

When the size of the window is increased to 10 points, the performance of the model on the training set is that shown in Figure 17.

Figure 17: The proposed model performance on the ozone training data (window size 10).

The forecasting capabilities of the model are displayed in Figure 18.

Figure 18: The proposed model predictions for the ozone data (window size 10).

The optimal weights are $w_1 = 96\%$ and $w_2 = 4\%$, as can be seen in Figure 19.

Figure 19: The evolution of MSE when $w_1$ varies (ozone, window size 10).

Table 6 shows the mean square errors obtained for a window size of 10.

Table 6: The errors of the model for the ozone data (window size 10).

The behavior of the model on this time series is very similar to its behavior on the lynx time series, regarding both the change in the weights and the comparison of training and testing errors.

3.5. UK Industrial Production

This data series contains the index of industrial production in the United Kingdom, from 1700 to 1912 [20].

We first consider the performance of the model on the training set with a window size of 5 points and 21 points ahead, as displayed in Figure 20.

Figure 20: The proposed model performance on the UK industrial production training data (window size 5).

The forecasting capabilities of the model are shown in Figure 21.

Figure 21: The proposed model predictions for the UK industrial production data (window size 5).

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal network weight, $w_1$, in Figure 22. The optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$.

Figure 22: The evolution of MSE when $w_1$ varies (UK industrial production, window size 5).

Table 7 shows the mean square errors obtained for a window size of 5.

Table 7: The errors of the model for the UK industrial production data (window size 5).

When the size of the window is increased to 10 points, the performance of the model on the training set is the one shown in Figure 23.

Figure 23: The proposed model performance on the UK industrial production training data (window size 10).

The prediction capabilities of the model are displayed in Figure 24.

Figure 24: The proposed model predictions for the UK industrial production data (window size 10).

As in the previous case, the optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$, as one can see in Figure 25.

Figure 25: The evolution of MSE when $w_1$ varies (UK industrial production, window size 10).

Table 8 shows the mean square errors obtained for a window size of 10.

Table 8: The errors of the model for the UK industrial production data (window size 10).

Unlike the previous problems, the exponential nature of this time series makes it difficult to model with a normal neural network. Therefore, the exponential network dominates the stack, independent of the window size. One can notice that, although the normal network approximates the training set fairly well, with errors comparable to those of the exponential network, there is a clear difference in performance in the prediction phase, where only the exponential network can find a good trend for the time series.

3.6. Super-Exponential Growth

In order to test the limits of our model, we devised a test function whose first term is chosen in such a way that both its base and its exponent increase with x, thus producing super-exponential growth. The second term is a sinusoid that accounts for fluctuations around this regular growth. The coefficients are chosen in such a way that, for low values of x, the oscillatory behavior is more important, and then, as x increases, the super-exponential term begins to dominate the value of the function.
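
The exact equation of the test function is not reproduced in this version of the text. Purely as a hypothetical illustration of the structure described above (the coefficients below are invented and are not the paper's), such a function could look like:

import numpy as np

def super_exponential_example(x, a=50.0, b=2.0):
    # Hypothetical function with the described structure, NOT the paper's equation:
    # a term whose base and exponent both grow with x (super-exponential growth),
    # plus a sinusoidal term that dominates for small x.
    growth = (1.0 + 0.1 * x) ** (0.5 * x)
    oscillation = a * np.sin(b * x)
    return growth + oscillation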

Since the function is super-exponential, and the activation function of the second neural network is only exponential, it is expected that our stack model will learn the training data well but will fail to extrapolate to the prediction set. This drawback can be compensated for by allowing different activation functions, such as a double-exponential function $f(x) = e^{e^{x}}$, in the output layer of the neural network.
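
A double-exponential output activation of the kind mentioned above could be sketched as follows (a simple illustration, not part of the reported experiments):

import numpy as np

def exponential(x):
    return np.exp(x)

def double_exponential(x):
    # f(x) = exp(exp(x)): grows faster than any single exponential.
    return np.exp(np.exp(x))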

With a window size of 5 points and considering 21 points ahead, the performance of the model on the training set is displayed in Figure 26.

Figure 26: The proposed model performance on the super-exponential growth training data (window size 5).

The forecasting capabilities of the model are displayed in Figure 27.

Figure 27: The proposed model predictions for the super-exponential growth data (window size 5).

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal network weight, $w_1$, in Figure 28. Here, the optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$.

Figure 28: The evolution of MSE when $w_1$ varies (super-exponential growth, window size 5).

Table 9 shows the mean square errors obtained for a window size of 5.

Table 9: The errors of the model for the super-exponential growth data (window size 5).

When the size of the window is increased to 10 points, the performance of the model on the training set is that shown in Figure 29.

Figure 29: The proposed model performance on the super-exponential growth training data (window size 10).

The forecasting capabilities of the model are displayed in Figure 30.

Figure 30: The proposed model predictions for the super-exponential growth data (window size 10).

As in the case with a window size of 5, the optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$, as can be seen in Figure 31.

Figure 31: The evolution of MSE when $w_1$ varies (super-exponential growth, window size 10).

Table 10 shows the mean square errors obtained for a window size of 10.

Table 10: The errors of the model for the super-exponential growth data (window size 10).

This time series poses problems similar to those of the previous one. The only difference is that the super-exponential nature of the proposed function exceeds the prediction capabilities of the exponential network. For this kind of problem, other types of activation functions can be used. The stacked model proposed here is flexible enough to accommodate different types of neural networks, with different activation functions.

3.7. Comparison with Other Forecasting Models

We compare the model-fitting performance of our stacked neural network with that of several other models, using the implementations in the Statistical Analysis System (SAS) 9.0 software package [21]. We compare the mean square error of our model, for the two window sizes, with the error of the best model in SAS. Table 11 presents this comparison.

Table 11: Comparative performance of the proposed model with other forecasting models.

The best error for each problem is shown in bold. It can be seen that our model outperforms the models implemented in SAS on all but the last benchmark problem. The reason for the greater error on the super-exponential growth problem is the inherent limitation of choosing an exponential instead of a super-exponential activation function.

4. Conclusions

Despite its simplicity, the stacked hybrid neural model seems to perform well on a variety of benchmark time series problems. It can be expected to give good results for other important problems with dynamical and predictive aspects. The model can easily be extended to incorporate other activation functions suitable for a particular problem, such as a double-exponential function $f(x) = e^{e^{x}}$. It is also possible to include nondifferentiable functions in the model, if one adopts an evolutionary algorithm for training the neural networks instead of the classical back-propagation algorithm.

Acknowledgment

This work was supported in part by CNCSIS grant code 316/2008, Behavioral Patterns Library for Intelligent Agents Used in Engineering and Management.

References

  1. H. Tong, Threshold Models in Nonlinear Time Series Analysis, vol. 21 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1983.
  2. D. Pfeffermann and J. Allon, “Multivariate exponential smoothing: method and practice,” International Journal of Forecasting, vol. 5, no. 1, pp. 83–98, 1989.
  3. J. L. Harvill and B. K. Ray, “A note on multi-step forecasting with functional coefficient autoregressive models,” International Journal of Forecasting, vol. 21, no. 4, pp. 717–727, 2005.
  4. M. Ghiassi and S. Nangoy, “A dynamic artificial neural network model for forecasting nonlinear processes,” Computers and Industrial Engineering, vol. 57, no. 1, pp. 287–297, 2009.
  5. P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, Springer Series in Statistics, Springer, New York, NY, USA, 2nd edition, 1991.
  6. N. Ravishanker and B. K. Ray, “Bayesian prediction for vector ARFIMA processes,” International Journal of Forecasting, vol. 18, no. 2, pp. 207–214, 2002.
  7. F. M. Tseng, H. C. Yu, and G. H. Tzeng, “Combining neural network model with seasonal time series ARIMA model,” Technological Forecasting and Social Change, vol. 69, no. 1, pp. 71–87, 2002.
  8. M. V. D. Voort, M. Dougherty, and S. Watson, “Combining Kohonen maps with ARIMA time series models to forecast traffic flow,” Transportation Research Part C, vol. 4, no. 5, pp. 307–318, 1996.
  9. D. K. Wedding II and K. J. Cios, “Time series forecasting by combining RBF networks, certainty factors, and the Box-Jenkins model,” Neurocomputing, vol. 10, no. 2, pp. 149–168, 1996.
  10. C.-M. Lee and C.-N. Ko, “Time series prediction using RBF neural networks with a nonlinear time-varying evolution PSO algorithm,” Neurocomputing, vol. 73, no. 1–3, pp. 449–460, 2009.
  11. G. Toma, “Specific differential equations for generating pulse sequences,” Mathematical Problems in Engineering, vol. 2010, Article ID 324818, 11 pages, 2010.
  12. E. G. Bakhoum and C. Toma, “Mathematical transform of traveling-wave equations and phase aspects of quantum interaction,” Mathematical Problems in Engineering, vol. 2010, Article ID 695208, 15 pages, 2010.
  13. M. Khasei and M. Bijari, “An artificial neural network (p,d,q) model for timeseries forecasting,” Expert Systems with Applications, vol. 37, no. 1, pp. 479–489, 2010.
  14. K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
  15. A. E. Bryson and Y. C. Ho, Applied Optimal Control, Blaisdell, New York, NY, USA, 1969.
  16. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, D. E. Rumelhart and J. L. McClelland, Eds., vol. 1, pp. 318–362, MIT Press, Cambridge, Mass, USA, 1986.
  17. F. M. Silva and L. B. Almeida, “Acceleration techniques for the backpropagation algorithm,” in Neural Networks, L. B. Almeida and C. J. Wellekens, Eds., pp. 110–119, Springer, Berlin, Germany, 1990.
  18. M. Ghiassi and H. Saidane, “A dynamic architecture for artificial neural networks,” Neurocomputing, vol. 63, pp. 397–413, 2005.
  19. L. Stone and D. He, “Chaotic oscillations and cycles in multi-trophic ecological systems,” Journal of Theoretical Biology, vol. 248, no. 2, pp. 382–390, 2007.
  20. G. Janacek, Practical Time Series, Oxford University Press, Oxford, UK, 2001.
  21. SAS Institute, “SAS (Statistical Analysis System),” http://www.sas.com/.