Abstract

A hybrid model for time series forecasting is proposed. It is a stack of two neural networks: a standard multilayer perceptron with bipolar sigmoid activation functions, and a second network with an exponential activation function in its output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.

1. Introduction

Many processes found in the real world are nonlinear. Therefore, there is a need for accurate, effective tools to forecast their behavior. Current solutions include general methods such as multiple linear regression, nonlinear regression, and artificial neural networks, as well as specialized ones, such as SETAR, Self-Exciting Threshold Auto-Regression [1], MES, multivariate exponential smoothing [2], and FCAR, Functional Coefficient Auto-Regressive models [3]. The DAN2 model [4] is a dynamic architecture for artificial neural networks (ANNs) designed for nonlinear forecasting and pattern recognition problems; it is based on the principle of learning and accumulating knowledge at each layer, then propagating and adjusting this knowledge forward to the next layer.

Recently, several hybrid forecasting models have been developed, integrating neural network techniques with conventional models to improve their accuracy. A well-known conventional model is ARIMA, the Auto-Regressive Integrated Moving Average [5]. ARFIMA, Auto-Regressive Fractionally Integrated Moving Average, is a time series model that generalizes ARIMA by allowing non-integer (fractional) values of the differencing parameter, which is useful for modeling events with long memory [6]. The SARIMABP model [7] combines SARIMA, Seasonal ARIMA, and the back-propagation training algorithm to predict seasonal time series for machinery production and soft drinks. Another hybrid model, KARIMA [8], combines Kohonen’s self-organizing map and ARIMA to predict short-term traffic flow.

Other techniques include a combination of radial basis function (RBF) neural networks and the Univariate Box-Jenkins (UBJ) model [9]. Another model, combining RBF neural networks with a nonlinear time-varying evolution particle swarm optimization (PSO) algorithm, was developed by other authors [10]. It dynamically adapts the inertia and acceleration coefficients of the PSO in order to accelerate convergence, and it shows good performance on the time series prediction of a power system.

The study of nonlinear phenomena is especially important in the field of system dynamics. Research in this area proved that nonlinear differential equations are capable of generating continuous mathematical functions similar to pulse sequences [11]. Also, an extension to the Fourier/Laplace transform needed for the analysis of signals represented by traveling wave equations was proposed [12], along with a mathematical technique for the simulation of the behavior of large systems of optical oscillators.

Artificial neural networks are among the most accurate and widely used forecasting models, with applications in social, economic, engineering, foreign exchange, and stock market problems, among others. Neural networks (NNs) have several characteristics that make them particularly valuable for forecasting [13]:
(i) unlike traditional model-based methods, NNs are data-driven and self-adaptive, needing little a priori information about the studied problems;
(ii) NNs are known to have good generalization capabilities: after learning the data presented to them, they can correctly predict unseen data, even in the presence of noise;
(iii) NNs are universal approximators [14]; that is, they can approximate any continuous real function to any degree of accuracy;
(iv) NNs are nonlinear models, and are therefore better suited to capture the true nature of many natural processes.

2. The Proposed Neural Network Architecture

The proposed model is composed of two neural networks, each with one hidden layer, as shown in Figure 1.

The inputs of the model are the recent values of the time series, depending on the size of the sliding window s. Basically, the stack model predicts the value of the time series at moment t from the latest s values:

$x_t = F(x_{t-1}, x_{t-2}, \ldots, x_{t-s})$.

The neurons in the hidden layers of both networks that compose the stack are normal multilayer perceptron (MLP) neurons, with bipolar sigmoid (hyperbolic tangent) activation functions:

$f(x) = \dfrac{1 - e^{-x}}{1 + e^{-x}}$.

The difference between the two neural networks lies in their output layers. The “normal” network, represented at the top of Figure 1, uses the same activation function, the bipolar sigmoid, in its output layer. The “exponential” network uses a simple exponential function instead:

$g(x) = e^{x}$.

Each network respects the basic information flow of an MLP, where $\theta_j$ and $\theta$ represent the thresholds of the neurons:

$y = g\Big( \sum_{j} v_{j} \, f\Big( \sum_{i} u_{ij} x_i - \theta_j \Big) - \theta \Big)$.

Finally, the stack output is computed as a weighted average of the two outputs, $y_1$ and $y_2$:

$y = w_1 y_1 + w_2 y_2$, with $w_1, w_2 \in [0, 1]$ and $w_1 + w_2 = 1$.
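To make this information flow concrete, the following minimal sketch (in Python/NumPy, with hypothetical layer sizes and untrained random weights, not the implementation used in the experiments) shows the forward pass of the two component networks and of the stack:

    import numpy as np

    def bipolar_sigmoid(x):
        # Bipolar sigmoid activation: f(x) = (1 - e^-x) / (1 + e^-x)
        return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

    class SimpleMLP:
        # One-hidden-layer MLP with bipolar sigmoid hidden neurons and a
        # configurable output activation (bipolar sigmoid or exponential).
        def __init__(self, n_inputs, n_hidden, output_activation, seed=0):
            rng = np.random.default_rng(seed)
            self.U = rng.normal(scale=0.5, size=(n_hidden, n_inputs))  # input-to-hidden weights
            self.theta_h = np.zeros(n_hidden)                          # hidden thresholds
            self.v = rng.normal(scale=0.5, size=n_hidden)              # hidden-to-output weights
            self.theta_o = 0.0                                         # output threshold
            self.output_activation = output_activation

        def forward(self, x):
            h = bipolar_sigmoid(self.U @ x - self.theta_h)             # hidden layer
            return self.output_activation(self.v @ h - self.theta_o)  # output layer

    # The two stack components: a "normal" network and an "exponential" one.
    s = 5                                        # sliding window size
    normal_net = SimpleMLP(s, 8, bipolar_sigmoid, seed=1)
    exp_net = SimpleMLP(s, 8, np.exp, seed=2)

    def stack_output(x, w1):
        # Weighted average of the two component outputs, with w2 = 1 - w1
        return w1 * normal_net.forward(x) + (1.0 - w1) * exp_net.forward(x)

    print(stack_output(np.array([0.1, 0.2, 0.3, 0.4, 0.5]), w1=0.7))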

The training of the networks is performed with the classical back-propagation algorithm [15], with a momentum term [16] and an adaptive learning rate [17]. Besides training the two components of the stack, the goal of the model is to find the optimal weights $w_1$ and $w_2$, such that the error of the whole stack is minimized.
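Assuming the two component networks have already been trained and their outputs $y_1$ and $y_2$ on an evaluation set are available, one simple way to select the stack weights, consistent with the weight sweeps reported in the figures below, is an exhaustive search over $w_1$; a sketch:

    import numpy as np

    def best_stack_weight(y1, y2, targets, steps=101):
        # Exhaustive search for the stack weight w1 (with w2 = 1 - w1) that
        # minimizes the mean square error of the combined output.
        # y1, y2: outputs of the "normal" and "exponential" networks (arrays);
        # targets: true values of the time series for the same points.
        best_w1, best_mse = 0.0, np.inf
        for w1 in np.linspace(0.0, 1.0, steps):      # 0%, 1%, ..., 100%
            mse = np.mean((w1 * y1 + (1.0 - w1) * y2 - targets) ** 2)
            if mse < best_mse:
                best_w1, best_mse = w1, mse
        return best_w1, best_mse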

3. Case Studies

3.1. Test Methodology

In the following sections, we consider four classical benchmark problems and one original, super-exponential growth problem, on which we test the performance of our model. In each case, we divide the available data into 90% for training and 10% for testing. We separately consider sliding window sizes of 5 and 10.
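For concreteness, the sliding-window construction and the 90%/10% split can be sketched as follows (a generic Python/NumPy helper with placeholder data, not the authors' exact preprocessing code):

    import numpy as np

    def make_windows(series, s):
        # Turn a 1-D time series into (inputs, targets) pairs:
        # each input holds the latest s values, the target is the next value.
        X = np.array([series[i:i + s] for i in range(len(series) - s)])
        y = np.array(series[s:])
        return X, y

    def train_test_split_90_10(X, y):
        # First 90% of the windows for training, the remaining 10% for testing
        n_train = int(0.9 * len(X))
        return X[:n_train], y[:n_train], X[n_train:], y[n_train:]

    # Example with a window size of 5 (the same procedure is repeated with s = 10)
    series = np.sin(np.arange(300) / 10.0)   # placeholder data
    X, y = make_windows(series, s=5)
    X_train, y_train, X_test, y_test = train_test_split_90_10(X, y)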

3.2. The Sunspot Series

Wolfer’s sunspot time series records the yearly number of spots visible on the surface of the sun. It contains the data from 1700 to 1987, for a total of 288 observations. This series is considered nonlinear and non-Gaussian and is often used to evaluate the effectiveness of nonlinear models [13, 18].

With a window size of 5 points and considering 28 points ahead, the performance of the model on the training set is displayed in Figure 2.

The forecasting capabilities of the model are displayed in Figure 3.

In Figure 4, the effect of the weights on the mean square error of the stack on the testing data is displayed. One can see that the optimal weights are $w_1 = 100\%$ and $w_2 = 0\%$, where $w_1$ is the weight of the neural network with sigmoid activation functions and $w_2 = 1 - w_1$ is the weight of the second neural network, whose output neuron has an exponential activation function.

Table 1 shows the errors for both training and testing. It presents the mean square errors separately for the first, “normal” network, for the second, “exponential” network, and for their stack. Since the ranges of the datasets are very different, we also display the MSE computed on data normalized between 0 and 1, in order to better compare the performance of our model on data with different shapes.
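A sketch of one such normalized MSE computation is given below; the exact scaling constants used by the authors are not stated, so this version assumes min-max scaling based on the range of the true data:

    import numpy as np

    def normalized_mse(y_true, y_pred):
        # MSE after min-max scaling both series to [0, 1] using the range of
        # the true data, so that series with very different ranges become comparable.
        lo, hi = np.min(y_true), np.max(y_true)
        y_true_n = (y_true - lo) / (hi - lo)
        y_pred_n = (y_pred - lo) / (hi - lo)
        return np.mean((y_true_n - y_pred_n) ** 2)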

Next, we increase the size of the sliding window to 10 points. The corresponding performance of the model on the training set is displayed in Figure 5.

The prediction capabilities of the model are displayed in Figure 6.

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal NN weight, $w_1$, in Figure 7. In this case, as in the previous one, the optimal weights are $w_1 = 100\%$ and $w_2 = 0\%$.

Table 2 shows the mean square errors obtained for the increased window size.

In both cases, we see that the normal neural network approximates the time series better than the exponential network. When the window size is 10, the model seems to slightly overfit the data compared to the case when the window size is 5, yielding lower errors on the training set but slightly higher errors on the testing set.

3.3. The Canadian Lynx Series

This series contains the yearly number of lynx trapped in the Mackenzie River district of Northern Canada [19]. The data set has 114 observations, corresponding to the period of 1821–1934.

With a window size of 5 points and considering 11 points ahead, the performance of the model on the training set is displayed in Figure 8.

The forecasting capabilities of the model are displayed in Figure 9.

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal NN weight, $w_1$, in Figure 10. Here, the optimal weights are $w_1 = 23\%$ and $w_2 = 77\%$.

Table 3 shows the mean square errors obtained for a window size of 5.

When the size of the window is increased to 10 points, the performance of the model on the training set is that shown in Figure 11.

The testing performance of the model is displayed in Figure 12.

The optimal weights for this stack are $w_1 = 99\%$ and $w_2 = 1\%$, as can be observed in Figure 13.

Table 4 shows the mean square errors obtained for a window size of 10.

For this dataset, the exponential network contributes to the stack result. When the window size is 5, its weight even dominates the stack, while its contribution decreases for the larger window size. It is possible that this phenomenon appears because, for a smaller window, the series may look exponential around the high peaks. For a larger window, the model has a wider perspective, which includes the peaks, and the problem may seem to become more linear. The errors for a window size of 10 are also much smaller for the training set, and larger for the testing set, compared to the errors found for a window size of 5.

3.4. Ozone

This series contains monthly ozone concentration measurements, in parts per million, made in downtown Los Angeles from January 1955 to December 1972 [20].

We consider a window size of 5 points and 24 points ahead for prediction. The performance of the model on the training set is displayed in Figure 14.

The prediction performance of the model is shown in Figure 15.

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal NN weight, $w_1$, in Figure 16. Here, the optimal weights are $w_1 = 14\%$ and $w_2 = 86\%$.

Table 5 shows the mean square errors obtained for this time series.

When the size of the window is increased to 10 points, the performance of the model on the training set is that shown in Figure 17.

The forecasting capabilities of the model are displayed in Figure 18.

The optimal weights are $w_1 = 96\%$ and $w_2 = 4\%$, as can be seen in Figure 19.

Table 6 shows the mean square errors obtained for a window size of 10.

The behavior of the model on this time series is very similar to that of the lynx time series, regarding both the change in the weights and the comparison of training and testing errors.

3.5. UK Industrial Production

This data series contains the index of industrial production in the United Kingdom, from 1700 to 1912 [20].

We first consider the performance of the model on the training set with a window size of 5 points and 21 points ahead, as displayed in Figure 20.

The forecasting capabilities of the model are shown in Figure 21.

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal NN weight, $w_1$, in Figure 22. The optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$.

Table 7 shows the mean square errors obtained for a window size of 5.

When the size of the window is increased to 10 points, the performance of the model on the training set is the one shown in Figure 23.

The prediction capabilities of the model are displayed in Figure 24.

Just as in the previous case, the optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$, as one can see in Figure 25.

Table 8 shows the mean square errors obtained for a window size of 10.

Unlike the previous problems, the exponential nature of this time series makes it difficult for a normal neural network. Therefore, the exponential network dominates the stack, independent of the size of the window. One can notice that although the normal network can approximate the training set fairly well, with errors comparable to those of the exponential network, there is a clear difference in performance for the prediction phase, where only the exponential network can find a good trend for the time series.

3.6. Super-Exponential Growth

In order to test the limits of our model, we devised a test function composed of two terms. The first term is chosen in such a way that both the base and the exponent increase with x, thus producing super-exponential growth. The second term is a sinusoid that accounts for fluctuations around the regular growth. The coefficients are chosen in such a way that, at first, for low values of x, the oscillatory behavior is more important, and then, as x increases, the super-exponential term begins to dominate the value of the function.

Since the function is super-exponential, and the activation function of the second neural network is only exponential, it is expected that our stack model will learn the training data well but will fail to extrapolate to the prediction set. This drawback can be compensated for by allowing different activation functions in the output layer of the neural network, such as a double-exponential function $e^{e^{x}}$.
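A minimal sketch of such a double-exponential output activation, together with its derivative (needed if back-propagation is kept as the training algorithm), is given below; it illustrates the suggested extension rather than a component of the evaluated model:

    import numpy as np

    def double_exp(x):
        # Double-exponential activation: f(x) = e^(e^x)
        return np.exp(np.exp(x))

    def double_exp_derivative(x):
        # f'(x) = e^x * e^(e^x), used in the back-propagation weight updates
        return np.exp(x) * np.exp(np.exp(x))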

With a window size of 5 points and considering 21 points ahead, the performance of the model on the training set is displayed in Figure 26.

The forecasting capabilities of the model are displayed in Figure 27.

The evolution of the mean square error of the stack on the testing data is displayed as a function of the normal NN weight, $w_1$, in Figure 28. Here, the optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$.

Table 9 shows the mean square errors obtained for a window size of 5.

When the size of the window is increased to 10 points, the performance of the model on the training set is that shown in Figure 29.

The forecasting capabilities of the model are displayed in Figure 30.

As in the case with a window size of 5, the optimal weights are $w_1 = 0\%$ and $w_2 = 100\%$, as can be seen in Figure 31.

Table 10 shows the mean square errors obtained for a window size of 10.

This time series poses problems similar to those of the previous one. The only difference is that the super-exponential nature of the proposed function exceeds the prediction capabilities of the exponential network. For this kind of problem, other types of activation functions can be used. The stacked model proposed here is flexible enough to accommodate different types of neural networks, with different activation functions.

3.7. Comparison with Other Forecasting Models

We compare the model fitting performance of our stacked neural network with that of several other models, using the implementations in the Statistical Analysis System (SAS) 9.0 software package [21]. We compare the mean square error of our model for the two different window sizes with the error of the best model in SAS. Table 11 presents this comparison.

The best error for each problem is shown in bold. It can be seen that our model outperforms the models implemented in SAS on all but the last benchmark problem. The reason for the larger error on the super-exponential growth problem is the inherent limitation of choosing an exponential instead of a super-exponential activation function.

4. Conclusions

Despite its simplicity, the stacked hybrid neural model performs well on a variety of benchmark time series problems. It is expected to give good results on other important problems with dynamical and predictive aspects as well. The model can be easily extended to incorporate other activation functions suitable for a particular problem, such as a double-exponential function $e^{e^{x}}$. It is also possible to include nondifferentiable functions in the model, if one adopts an evolutionary algorithm for training the neural networks instead of the classical back-propagation algorithm.

Acknowledgment

This work was supported in part by CNCSIS grant code 316/2008, Behavioral Patterns Library for Intelligent Agents Used in Engineering and Management.