Computational Intelligence and Neuroscience

Volume 2016, Article ID 4742515, 14 pages

http://dx.doi.org/10.1155/2016/4742515

## Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

^{1}School of Science, Beijing Jiaotong University, Beijing 100044, China

^{2}School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China

Received 9 June 2015; Accepted 30 August 2015

Academic Editor: Sandhya Samarasinghe

Copyright © 2016 Jie Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture that combines an Elman recurrent neural network with a stochastic time effective function. We analyze the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and compare it with other models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN). The empirical results show that the proposed neural network displays the best performance among these networks in financial time series forecasting. Furthermore, the predictive performance of the established model is tested empirically on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach performs well in predicting the values of stock market indices.

#### 1. Introduction

Predicting stock price indices is difficult because of the uncertainties involved. Over the past decades, stock market prediction has played a vital role for investment brokers and individual investors, and researchers are in constant search of reliable methods for predicting stock market trends. In recent years, artificial neural networks (ANNs) have been applied to many areas of statistics, one of which is time series forecasting. References [1–3] present different approaches to time series forecasting with ANNs. ANNs have also been employed independently or as an auxiliary tool to predict time series. ANNs are nonlinear methods that mimic the nervous system; they are self-organizing, data-driven, self-learning, and self-adaptive, and they provide associative memory. ANNs can learn from patterns and capture hidden functional relationships in given data even if those relationships are unknown or difficult to identify. A number of researchers have used ANNs to predict financial time series, including backpropagation neural networks, radial basis function neural networks, generalized regression neural networks, wavelet neural networks, and dynamic artificial neural networks [4–9]. Statistical theories and methods play an important role in financial time series analysis because both financial theory and its empirical time series contain an element of uncertainty. Several statistical properties of stock market fluctuations have been uncovered in the literature, such as the power-law behavior of logarithmic returns and volumes, heavy-tailed distributions of price changes, volatility clustering, and long-range memory of volatility [1, 10–12].

The backpropagation neural network (BPNN) is a neural network trained by the backpropagation algorithm and is widely used for financial forecasting because of its powerful problem-solving ability. The multilayer perceptron (MLP) is one of the most prevalent neural networks; its capacity for complex mappings between inputs and outputs makes it possible to approximate nonlinear functions. Reference [13] employs an MLP in trading with hybrid time-varying leverage effects, and [14] uses one for time series forecasting. Both architectures have at least three layers. The first layer is the input layer (the number of its nodes corresponds to the number of explanatory variables). The last layer is the output layer (the number of its nodes corresponds to the number of response variables). An intermediary layer of nodes, the hidden layer, separates the input layer from the output layer; its number of nodes determines the amount of complexity the model is capable of fitting. In previous studies, feedforward networks have frequently been used for financial time series prediction. Unlike feedforward networks, recurrent neural networks use feedback connections to model spatial as well as temporal dependencies between input and output series, so that the initial states and past states of the neurons can take part in the processing. References [15–17] show applications of recurrent neural networks in different areas. This ability makes them applicable to time series prediction with satisfactory results [18]. As a special recurrent neural network, the Elman recurrent neural network (ERNN) is used for prediction in the present paper. The ERNN is a time-varying predictive control system developed with the ability to keep a memory of recent events in order to predict future output.
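The three-layer structure described above can be sketched as a minimal NumPy forward pass. The layer sizes, weight initialization, and activation choices below are illustrative assumptions, not the configuration used in this paper:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One forward pass of a three-layer MLP: input -> hidden (sigmoid) -> output (linear)."""
    hidden = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden layer with sigmoid activation
    return W2 @ hidden + b2                        # linear output layer

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 1        # illustrative sizes: 4 explanatory variables, 1 response
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

x = rng.normal(size=n_in)              # one input vector
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)                         # one prediction per response variable
```

In a feedforward network of this kind, information flows strictly from input to output; the recurrent architectures discussed next add feedback connections on top of this structure.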

The nonlinear and nonstationary characteristics of the stock market make reliable forecasting of stock indices difficult and challenging. In particular, in current stock markets, rapid changes in trading rules and management systems make it difficult to reflect the markets' development using only early data. However, if only recent data are selected, much useful information held by the early data is lost. In this research, a stochastic time effective neural network (STNN) and the corresponding learning algorithm are presented. References [19–22] introduce the corresponding stochastic time effective models and use them to predict financial time series. In particular, [23] presents a random data-time effective radial basis function neural network, which is also applied to the prediction of financial price series. The present paper optimizes the ERNN model, which differs from the above models; moreover, in the first step of the procedure we employ input variables different from those in [23]. In the last section of this paper, two new error measures are introduced to show that the proposed model predicts better than other traditional models. In this improved network model, each historical data point is given a weight depending on the time at which it occurs. The degree of impact of historical data on the market is expressed by a stochastic process: a drift function and Brownian motion are introduced into the time strength function, so that the model exhibits random movement while maintaining the original trend. In the present work, we combine the MLP with the ERNN and a stochastic time effective function to develop a stock price forecasting model, called ST-ERNN.
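The time-weighting idea can be illustrated with a short sketch: each historical sample receives a weight driven by a drift term plus a Brownian motion term, so more recent data tend to weigh more while random fluctuation is preserved. The constant drift `mu` and volatility `sigma` below are illustrative assumptions; the paper's actual drift function and time strength function are defined later in the text:

```python
import numpy as np

def time_effective_weights(n, mu=0.05, sigma=0.2, seed=0):
    """Illustrative stochastic time-effective weights for n historical samples.

    Sample t receives weight proportional to exp(mu * t + sigma * B(t)),
    where B(t) is a Brownian motion path simulated by cumulative Gaussian
    increments. With mu > 0, later (more recent) samples tend to receive
    larger weights, while the Brownian term adds random movement.
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    t = np.arange(1, n + 1) * dt                # normalized sample times in (0, 1]
    weights = np.exp(mu * t + sigma * np.cumsum(dB))
    return weights / weights.sum()              # normalize so weights sum to one

w = time_effective_weights(250)                 # e.g. one trading year of daily data
print(len(w), round(float(w.sum()), 6))
```

Weights of this kind can then scale each sample's contribution to the training error, so early data are retained but contribute less than recent data on average.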

To demonstrate that the ST-ERNN provides higher accuracy in financial time series forecasting, we compare its forecasting performance with that of the BPNN, STNN, and ERNN models on different global stock indices. The Shanghai Stock Exchange (SSE) Composite Index, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TWSE), the Korean Stock Price Index (KOSPI), and the Nikkei 225 Index (Nikkei225) are used in this work to compare the forecasting models.

#### 2. Proposed Approach

##### 2.1. Elman Recurrent Neural Network (ERNN)

The Elman recurrent neural network, a simple recurrent neural network, was introduced by Elman in 1990 [24]. As is well known, a recurrent network has several advantages, such as time series and nonlinear prediction capabilities, faster convergence, and more accurate mapping ability. References [25, 26] combine the Elman neural network with other techniques for different application areas. In this network, the outputs of the hidden layer are allowed to feed back onto themselves through a buffer layer, called the recurrent layer. This feedback allows the ERNN to learn, recognize, and generate temporal patterns as well as spatial patterns. Every hidden neuron is connected to exactly one recurrent layer neuron through a constant weight of value one, so the recurrent layer virtually constitutes a copy of the state of the hidden layer one instant before; the number of recurrent neurons is consequently the same as the number of hidden neurons. To sum up, the ERNN is composed of an input layer, a recurrent layer which provides state information, a hidden layer, and an output layer. Each layer contains one or more neurons which propagate information from one layer to another by computing a nonlinear function of the weighted sum of their inputs.

In Figure 1, a multi-input ERNN model is exhibited, with $n$ neurons in the input layer, $m$ neurons in the hidden layer, and one output unit. Let $x_i(t)$ ($i = 1, \ldots, n$) denote the set of input vectors of neurons at time $t$, $y(t)$ the output of the network at time $t$, $h_j(t)$ ($j = 1, \ldots, m$) the outputs of the hidden layer neurons at time $t$, and $r_l(t)$ ($l = 1, \ldots, m$) the recurrent layer neurons, where $r_l(t) = h_l(t-1)$. $v_{ij}$ is the weight that connects node $i$ in the input layer to node $j$ in the hidden layer; $u_{jl}$ and $w_j$ are the weights that connect node $j$ in the hidden layer to the recurrent layer and the output, respectively. The hidden layer stage is as follows: the inputs of all neurons in the hidden layer are given by
$$\mathrm{net}_j(t) = \sum_{i=1}^{n} v_{ij}\, x_i(t) + \sum_{l=1}^{m} u_{jl}\, r_l(t), \quad j = 1, \ldots, m.$$
The outputs of the hidden neurons are given by
$$h_j(t) = f\big(\mathrm{net}_j(t)\big),$$
where the sigmoid function $f(x) = 1/(1 + e^{-x})$ is selected as the activation function in the hidden layer. The output of the network is then given by
$$y(t) = g\left(\sum_{j=1}^{m} w_j\, h_j(t)\right),$$
where $g$ is an identity map as the activation function.
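The forward pass of the network described above can be sketched in NumPy. Layer sizes and weight initialization here are illustrative assumptions; the recurrent layer is realized by carrying the previous hidden state forward with weight one, and the output activation is the identity map, as in the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elman_forward(xs, V, U, w):
    """Run an Elman recurrent network over a sequence of input vectors.

    xs : (T, n) sequence of inputs;  V : (m, n) input-to-hidden weights;
    U : (m, m) recurrent-to-hidden weights;  w : (m,) hidden-to-output weights.
    The recurrent (context) state r(t) is a unit-weight copy of h(t-1).
    """
    m = V.shape[0]
    r = np.zeros(m)                  # recurrent state, initially zero
    outputs = []
    for x in xs:
        net = V @ x + U @ r          # inputs of the hidden neurons
        h = sigmoid(net)             # sigmoid hidden activations
        outputs.append(w @ h)        # identity-activation output unit
        r = h                        # copy hidden state with weight one
    return np.array(outputs)

rng = np.random.default_rng(1)
n, m, T = 4, 6, 10                   # illustrative sizes: 4 inputs, 6 hidden units, 10 steps
xs = rng.normal(size=(T, n))
V = rng.normal(scale=0.5, size=(m, n))
U = rng.normal(scale=0.5, size=(m, m))
w = rng.normal(scale=0.5, size=m)
y = elman_forward(xs, V, U, w)
print(y.shape)                       # one scalar output per time step
```

Because the context state `r` depends on the whole input history, the output at each step reflects past as well as current inputs, which is what distinguishes the ERNN from the feedforward MLP.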