The Scientific World Journal
Volume 2014, Article ID 124523, 9 pages
http://dx.doi.org/10.1155/2014/124523
Research Article

Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

1School of Information Management and Engineering, Shanghai University of Finance and Economics, 777 Guoding Road, Shanghai 200433, China
2Shanghai Financial Information Technology Key Research Laboratory, 777 Guoding Road, Shanghai 200433, China
3School of Management, Fudan University, 220 Handan Road, Shanghai 200433, China

Received 30 August 2013; Accepted 10 March 2014; Published 23 March 2014

Academic Editors: J. Shu and F. Yu

Copyright © 2014 Yonghui Dai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The stock index reflects the fluctuation of the stock market. For a long time, a great deal of research has been devoted to stock index forecasting. However, traditional methods are limited in achieving ideal precision in the dynamic market because of the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted renewed attention from researchers. This paper presents a new forecasting method based on the combination of an improved back-propagation (BP) neural network and a Markov chain, as well as its modeling and computing technology. The method includes initial forecasting by the improved BP neural network, division of the Markov state regions, computation of the state transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction and can provide a good reference for investment in the stock market.

1. Introduction

The stock market is characterized by the coexistence of high risk and high yield. As a barometer of the stock market, the stock index is an important reference for investors when making investment strategies. However, the stock price index is influenced by many factors such as the economic situation, policy changes, and emergency events. Despite these complicated challenges, the forecasting of stock indexes has continued to attract the attention of many industrial experts and scholars. Lendasse et al. used a nonlinear time series model to forecast the tendency of the Bel 20 stock market index [1]. Lee et al. forecasted the Korean Stock Price Index (KOSPI) with three forecasting models, including the back-propagation neural network model (BPNN), the Bayesian Chiao’s model (BC), and the seasonal autoregressive integrated moving average model (SARIMA) [2]. Fan and Gao proposed the “Grey Neural Network model (GNNM(1, N))” and argued that the combined model could improve the prediction accuracy and reduce the computation [3].

Up to now, stock prediction has remained a hot topic. In this field, many methods have been proposed, such as artificial neural networks [4, 5], time series models [6, 7], decision trees [8], Bayesian belief networks [9], evolutionary algorithms [10], fuzzy sets [11], and Markov models [12–14]. However, a single method is usually unable to achieve ideal precision in the dynamic market because of complicated influencing factors. In recent years, some new hybrid models have shown potential superiority [15–17]. In particular, approaches based on adaptive modeling and conditional probability transfer may be well suited to the characteristics of this problem.

In order to explore a new solution for improving forecast precision, this paper presents a new method based on a BP neural network and a Markov chain, studies its modeling and computing technology with data from the Chinese Growth Enterprise Market, and then conducts an empirical analysis of the prediction results. The paper is arranged in five sections: Section 1 introduces the research background and the most related literature; Section 2 expounds the methodology and technology as well as the combined model based on the BP neural network and the Markov chain; Section 3 discusses the modeling and computing technology of the presented method; Section 4 gives the empirical analysis of the prediction results; and the conclusion and discussion are finally given in Section 5.

2. Methodology and Technology

2.1. BP Neural Network (BPNN)

BPNN is a multilayer feed-forward network trained with the one-way-propagation BP algorithm. It is based on the gradient descent method, which minimizes the sum of the squared errors between the actual and the desired output values. A three-layer BPNN consists of an input layer, a hidden layer, and an output layer. The BP learning algorithm for three layers can be described as follows [18, 19].

Step 1. Initialize all the values of $w_{ij}(n)$, $v_{jk}(n)$, $\theta_j$, and $\gamma_k$ to small random values, where $w_{ij}(n)$ denotes the connection weight between neuron $i$ in the input layer and neuron $j$ in the hidden layer during the $n$th learning step, $v_{jk}(n)$ denotes the connection weight between neuron $j$ in the hidden layer and neuron $k$ in the output layer during the $n$th learning step, $\theta_j$ denotes the threshold of hidden neuron $j$, and $\gamma_k$ denotes the threshold of output neuron $k$.

Step 2. Select sample data and then apply the input vector $X = (x_1, x_2, \ldots, x_m)$ and the desired output vector $D = (d_1, d_2, \ldots, d_q)$.

Step 3. Compute the output $y_j$ of every hidden-layer neuron and then the output $o_k$ of every output-layer neuron; here the sigmoid $f(x) = 1/(1 + e^{-x})$ or the hyperbolic tangent $f(x) = \tanh(x)$ is adopted as the activation function: $y_j = f(\sum_{i=1}^{m} w_{ij} x_i - \theta_j)$ and $o_k = f(\sum_{j=1}^{p} v_{jk} y_j - \gamma_k)$, where $p$ is the number of hidden neurons.

Step 4. Calculate the error terms for the output nodes: $\delta_k = o_k (1 - o_k)(d_k - o_k)$, where $d_k$ represents the desired output.

Step 5. Calculate the error terms for the hidden nodes: $e_j = y_j (1 - y_j) \sum_{k=1}^{q} \delta_k v_{jk}$.

Step 6. Update the weights on the output layer: $v_{jk}(n+1) = v_{jk}(n) + \Delta v_{jk}$, where $\Delta v_{jk} = \eta \delta_k y_j$ and $\eta$ is the learning rate.

Step 7. Update the weights on the hidden layer: $w_{ij}(n+1) = w_{ij}(n) + \Delta w_{ij}$, where $\Delta w_{ij} = \eta e_j x_i$.

Step 8. Calculate the error $E = \frac{1}{2}\sum_{k=1}^{q}(d_k - o_k)^2$, where $q$ denotes the number of output nodes; repeat Steps 2–8 until the error falls below a predefined threshold. (A compact MATLAB sketch of Steps 1–8 is given below.)
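To make the above procedure concrete, the following is a minimal plain-MATLAB sketch of the three-layer BP learning loop under the assumptions stated above (sigmoid activations on both layers and the standard delta-rule formulas); the function and variable names (bp_train, W1, W2, and so on) are illustrative and are not taken from the paper.

function [W1, b1, W2, b2] = bp_train(X, D, p, eta, epochs, goal)
% X: m-by-N input samples (one column per sample); D: q-by-N desired outputs in (0, 1)
[m, N] = size(X);  q = size(D, 1);
W1 = 0.1 * randn(p, m);  b1 = zeros(p, 1);    % input-to-hidden weights and thresholds (Step 1)
W2 = 0.1 * randn(q, p);  b2 = zeros(q, 1);    % hidden-to-output weights and thresholds (Step 1)
f = @(x) 1 ./ (1 + exp(-x));                  % sigmoid activation (Step 3)
for n = 1:epochs
    E = 0;
    for t = 1:N                               % Step 2: present each sample
        x = X(:, t);  d = D(:, t);
        y = f(W1 * x - b1);                   % hidden-layer outputs (Step 3)
        o = f(W2 * y - b2);                   % output-layer outputs (Step 3)
        delta = o .* (1 - o) .* (d - o);      % output error terms (Step 4)
        e = y .* (1 - y) .* (W2' * delta);    % hidden error terms (Step 5)
        W2 = W2 + eta * delta * y';           % update output-layer weights (Step 6)
        W1 = W1 + eta * e * x';               % update hidden-layer weights (Step 7)
        E = E + 0.5 * sum((d - o).^2);        % accumulate squared error (Step 8)
    end
    if E < goal, break, end                   % stop when the error is small enough
end
end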

Although the BP algorithm is successful, it has some disadvantages, such as a low convergence speed and a tendency to get trapped in local minima. Therefore, an improved BP algorithm was applied in our study. Our improved method combines additional momentum with an adaptive learning rate. The weight-adjusting formula with the momentum factor is as follows: $\Delta w(n+1) = \alpha \Delta w(n) - \eta \frac{\partial E}{\partial w(n)}$, where $w$ represents a network weight, $n$ is the number of the training epoch, $\eta$ is the learning rate, $\alpha$ is the momentum coefficient with $0 < \alpha < 1$, and $E$ is the error function.

In addition, the adaptive learning rate method can be described as follows: $\eta(n+1) = k_1 \eta(n)$ if $E(n+1) < E(n)$ and $\eta(n+1) = k_2 \eta(n)$ otherwise, where $\eta$ is the learning rate, $n$ is the number of the training epoch, $E(n) = \sum_{k}(d_k - o_k)^2$ is the error function, $o_k$ is the actual output value, and $d_k$ is the anticipated output value; usually $k_1$ is slightly greater than 1 and $k_2$ is slightly smaller than 1 [20].
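The sketch below combines the two improvements in a single MATLAB weight-update routine; it is only an illustration of the formulas above, and the concrete values of alpha, k1, and k2 are assumptions (the empirical study later uses a momentum factor of 0.9 and an initial learning rate of 0.1).

function [w, dw, eta] = improved_bp_update(w, dw_prev, grad, eta, E_new, E_old)
% One weight update of the improved BP algorithm: additional momentum
% plus adaptive learning rate. "grad" is the gradient dE/dw from the
% usual backward pass; alpha, k1, and k2 are illustrative constants.
alpha = 0.9;           % momentum coefficient (assumed value)
k1 = 1.05;  k2 = 0.7;  % learning-rate increase/decrease factors (assumed values)
if E_new < E_old
    eta = k1 * eta;    % error decreased: enlarge the learning rate
else
    eta = k2 * eta;    % error did not decrease: shrink the learning rate
end
dw = alpha * dw_prev - eta * grad;   % momentum term plus gradient step
w = w + dw;                          % update the network weight
end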

2.2. Markov Chain

A discrete-time Markov chain can be described as a sequence of random variables $\{X_n,\ n = 0, 1, 2, \ldots\}$, where each $X_n$ takes values in a countable state space $S$. For any time $n$ and any states $i_0, i_1, \ldots, i_{n-1}, i, j \in S$, the sequence has the following property: $P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = p_{ij}$. We call such a stochastic sequence a Markov chain, where $p_{ij}$ is the transition probability from state $i$ to state $j$; more generally, for any positive integer step $k$, the $k$-step transition probability is $p_{ij}^{(k)} = P(X_{n+k} = j \mid X_n = i)$. These transition probabilities satisfy $p_{ij} \ge 0$ and $\sum_{j} p_{ij} = 1$, and the matrix $P = (p_{ij})$ is the transition matrix of the chain. If the transition probabilities do not depend on the time parameter $n$, the chain is called a time-homogeneous Markov chain.

Since the state space is countable, we can label the states by integers, such as $S = \{1, 2, \ldots, N\}$. Under this labeling, the transition matrix can be described as follows: $P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1N} \\ p_{21} & p_{22} & \cdots & p_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ p_{N1} & p_{N2} & \cdots & p_{NN} \end{pmatrix}$.
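In practice the entries $p_{ij}$ are estimated from an observed state sequence by counting one-step transitions, as in the following MATLAB sketch (the function name and arguments are illustrative); this is also the computation used later in Section 3.2.4.

function P = estimate_transition_matrix(states, N)
% states: vector of observed state labels taking values in 1..N
% P:      N-by-N matrix with p_ij = (transitions i->j) / (transitions out of i)
counts = zeros(N, N);
for t = 1:numel(states) - 1
    counts(states(t), states(t+1)) = counts(states(t), states(t+1)) + 1;
end
rowSums = sum(counts, 2);
rowSums(rowSums == 0) = 1;     % avoid division by zero for states that are never left
P = counts ./ rowSums;         % each row of P sums to one
end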

2.3. Modeling of Forecast Based on Improved BPNN and Markov Chain

The modeling process can be described as follows.

Step 1. Construct improved BPNN model.

Step 2. Produce the initial forecast by using the model of Step 1.

Step 3. Normalize the error of prediction. The normalization formula maps a value $x$ into a target interval $[a, b]$: $x^{*} = a + \frac{(b - a)(x - x_{\min})}{x_{\max} - x_{\min}}$, where $x_{\min}$ and $x_{\max}$ are the minimum and maximum of the error series.

Step 4. Set the Markov state zones according to the upper and lower thresholds of the normalized values.

Step 5. Divide the Markov state region by using the sample average-mean square deviation method. Five ranges are divided as follows [21]: $(-\infty, \bar{x} - a\sigma]$, $(\bar{x} - a\sigma, \bar{x} - b\sigma]$, $(\bar{x} - b\sigma, \bar{x} + b\sigma]$, $(\bar{x} + b\sigma, \bar{x} + a\sigma]$, and $(\bar{x} + a\sigma, +\infty)$, where $\bar{x}$ is the sample average and $\sigma$ is the sample standard deviation; usually the outer coefficient $a$ lies in the range [1.0, 1.5] and the inner coefficient $b$ lies in a correspondingly smaller range.

Step 6. Define the initial state and calculate the state transition probability matrix.

Step 7. Markov chain test: use a chi-square statistical test to check the Markov property.

Step 8. Forecast. Obtain the $k$-step state vector from the transition matrix described in Section 2.2 and make the forecast based on this model.

3. Modeling and Computing

3.1. Sample Data

In this paper, we select the Chinese Growth Enterprise Market Index (GEMI, 399006.SZ) as the data set for the empirical study, and we carry out short-term prediction of the Chinese GEM index price on this data set. The data set consists of 58 trading days, from 2013-5-24 to 2013-8-16. The data are divided into an in-sample part and an out-of-sample part: the first 41 days are used as in-sample training data, and the data from day 42 onward are out-of-sample and used for prediction. Because the closing index price is the most important indicator for investment reference, our study focuses on forecasting the closing index price. The daily trading data, including the opening price, highest price, lowest price, closing price, and trading volume, are used for modeling. The sample data of the Chinese GEM index are shown in Table 1.

Table 1: Sample data.

3.2. Modeling
3.2.1. Construct BP Neural Network Model

(i) Definition of Layer Number. According to the Kolmogorov theorem, a three-layer network can approximate any continuous function. Therefore, an input layer, a hidden layer, and an output layer are selected in this model.

(ii) Activation Function and Training Target. Here, the activation function of the hidden-layer neurons is tansig, and the transfer function of the output-layer neurons is purelin.

The training function is traingdx.

The training stops when the mean squared error reaches the accuracy goal of 0.005.

The maximum number of training epochs is 10000.

In this model, the initial learning rate is 0.1.

The initial momentum factor value is 0.9.

(iii) Number of Neural Nodes. The input layer has five nodes, namely, the opening price, highest price, lowest price, closing price, and trading volume. The data of day 1 are regarded as the first input vector of the input layer.

In this model, the number of output-layer nodes is set to one; accordingly, the closing price of day 2 is regarded as the first output value of the output layer.

The number of hidden-layer nodes depends on experience and repeated training: networks with different numbers of hidden nodes are trained, and the number corresponding to the minimum network error in training is chosen as the number of hidden-layer nodes.

The network errors corresponding to different numbers of hidden neurons are shown in Table 2. It can be seen that the neural network reaches its minimum network error of 0.2689 when the number of hidden neurons is eleven. Therefore, we select eleven as the number of hidden-layer nodes. The data in Table 2 also indicate that the network error cannot be reduced further even if we continue to increase the number of hidden-layer nodes. A sketch of this selection procedure is given after Table 2.

Table 2: Error of repeated training.
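The repeated-training search could be organized as in the following rough sketch, which uses MATLAB's neural network toolbox; Pn and Tn are assumed to be the normalized 5-row input matrix and 1-row target vector from Section 3.2.2, and the candidate range 5:15 is only illustrative.

% Try several hidden-layer sizes and keep the one with the smallest training MSE.
candidates = 5:15;                           % candidate numbers of hidden neurons (illustrative)
errors = zeros(size(candidates));
for i = 1:numel(candidates)
    net = feedforwardnet(candidates(i), 'traingdx');
    net.trainParam.showWindow = false;       % train silently inside the loop
    net.trainParam.epochs = 1000;
    net = train(net, Pn, Tn);
    errors(i) = perform(net, Tn, net(Pn));   % MSE on the training data
end
[bestErr, idx] = min(errors);
bestHidden = candidates(idx);                % eleven in the paper's experiments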
3.2.2. Training

Training of the network is carried out in MATLAB. First, the training sample data are selected; then, the data are normalized. Normalization means limiting the data to a certain interval; here, a normalization function is called to map the training data into a fixed interval. After normalization, the network is trained with the training set of 41 samples; the learning rate is 0.1 and the momentum factor is 0.9. The network is trained until the mean squared error (MSE) is less than 0.005. Finally, we obtain the desired model after training the neural network. The dependence of the MSE on the training epochs is shown in Figure 1. A sketch of this training set-up is given below.
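The following MATLAB sketch reflects the settings listed in Sections 3.2.1 and 3.2.2; since the paper does not reproduce its full script, the variable sample (a 41-by-6 matrix whose columns are opening price, highest price, lowest price, closing price, volume, and next-day closing price) and the use of mapminmax as the normalization function are assumptions made for illustration.

P = sample(:, 1:5)';                      % 5-by-41 input matrix (one column per day)
T = sample(:, 6)';                        % 1-by-41 target vector (next-day closing price)
[Pn, ps] = mapminmax(P);                  % normalize inputs to [-1, 1]
[Tn, ts] = mapminmax(T);                  % normalize targets to [-1, 1]

net = feedforwardnet(11, 'traingdx');     % 11 hidden neurons, traingdx training function
net.layers{1}.transferFcn = 'tansig';     % hidden-layer activation function
net.layers{2}.transferFcn = 'purelin';    % output-layer transfer function
net.divideFcn = 'dividetrain';            % use all 41 samples for training
net.trainParam.lr = 0.1;                  % initial learning rate
net.trainParam.mc = 0.9;                  % momentum factor
net.trainParam.goal = 0.005;              % target MSE
net.trainParam.epochs = 10000;            % maximum number of epochs
[net, tr] = train(net, Pn, Tn);           % train the network

Yn = net(Pn);                             % simulated (normalized) outputs
Y = mapminmax('reverse', Yn, ts);         % map back to index-price units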

Figure 1: The dependence of MSE on epochs.

It can be seen from Figure 1 that the network reaches the expected MSE after 8078 training steps, at which point the training MSE is less than 0.005.

3.2.3. Forecast

(i) Initial Forecasting Based on Improved BPNN. According to the trained network and the sample data, the rolling forecasting method is used to predict the closing index price. Part of the MATLAB code is shown in Algorithm 1; a rough sketch of the same procedure follows.

Algorithm 1: Part of the code of MATLAB.
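Since Algorithm 1 itself is not reproduced here, the following is only a minimal sketch of what a rolling one-step-ahead forecast with the trained network could look like; testData (a 16-by-5 matrix of out-of-sample daily features), ps, and ts are assumed to come from the set-up sketched in Section 3.2.2.

nTest = size(testData, 1);
pred = zeros(nTest, 1);
for t = 1:nTest
    x = mapminmax('apply', testData(t, :)', ps);   % normalize with the training settings
    yn = net(x);                                   % one-step-ahead forecast (normalized)
    pred(t) = mapminmax('reverse', yn, ts);        % map back to index-price units
end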

The simulation of the Chinese GEM index daily closing price is shown in Figure 2. Both the actual values and the predicted values are shown for trading days 42 to 56.

Figure 2: Simulation of actual value and predicted value.

(ii) Computing of Normalization

Step 1. Calculate the absolute residual rate of the prediction days. The calculation formula is $e_t = |y_t - \hat{y}_t| / y_t$, where $y_t$ is the actual value of the closing index price, $\hat{y}_t$ is the predicted value of the closing index price, and $e_t$ is the absolute residual rate of day $t$.

Step 2. Normalize the data set of absolute residual rates in MATLAB by calling a normalization function. The absolute residual rates and the normalized results are shown in Table 3; a computational sketch follows Table 3.

Table 3: Normalization of absolute residual rate.
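A small sketch of Steps 1 and 2, assuming that actual and pred are column vectors of actual and predicted closing prices over the forecast days and that mapminmax is used for the normalization (the paper does not name the function explicitly):

e = abs(actual - pred) ./ actual;   % absolute residual rate for each forecast day (Step 1)
[eNorm, es] = mapminmax(e');        % normalize the residual rates to [-1, 1] (Step 2)
eNorm = eNorm';                     % back to a column vector for later processing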
3.2.4. Empirical Markov Model

(i) State Definition. According to the normalized values in Table 3, the sample average-mean square deviation method is used for the state classification. Usually five intervals are divided: $(-\infty, \bar{x} - a\sigma]$, $(\bar{x} - a\sigma, \bar{x} - b\sigma]$, $(\bar{x} - b\sigma, \bar{x} + b\sigma]$, $(\bar{x} + b\sigma, \bar{x} + a\sigma]$, and $(\bar{x} + a\sigma, +\infty)$, where $\bar{x}$ is the average, $\sigma$ is the sample standard deviation, the outer coefficient $a$ belongs to the range [1.0, 1.5], and the inner coefficient $b$ belongs to a correspondingly smaller range.

Taking into account the fact that the amount of data is not large, the Markov states were divided into four ranges whose boundaries are determined by $\bar{x}$ and $\sigma$, giving the state ranges (1)–(4). Then, the Markov state transitions were built as shown in Table 4; a sketch of the classification procedure is given after Table 4.

Table 4: Markov state transition.
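The classification can be carried out as in the following sketch; the boundaries at the mean and at the mean plus or minus half a standard deviation are assumptions chosen only to illustrate a four-range division, not the exact coefficients used by the authors.

m = mean(eNorm);  s = std(eNorm);                 % statistics of the normalized residual rates
edges = [-inf, m - 0.5*s, m, m + 0.5*s, inf];     % four ranges (illustrative boundaries)
state = discretize(eNorm, edges);                 % state label 1..4 for each forecast day
% The resulting state sequence can be fed to estimate_transition_matrix(state, 4)
% from Section 2.2 to obtain the 4-by-4 transition probability matrix.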

(ii) Computing of State Transition Probability Matrix. It can be seen from Table 4 that the transition from state (1) to state (1) occurs 2 times, from state (1) to state (2) 3 times, from state (1) to state (3) 0 times, and from state (1) to state (4) 0 times; the state transition probabilities of the first row can therefore be calculated as $p_{11} = 2/5 = 0.4$, $p_{12} = 3/5 = 0.6$, $p_{13} = 0$, and $p_{14} = 0$. The probabilities of the other rows are calculated similarly.

Thus, the state transition probability matrix $P$ is obtained by assembling these row-wise probabilities; its first row is $(0.4, 0.6, 0, 0)$.

A chi-square statistical test confirms that the state sequence has the Markov property.

(iii) The $k$-Step State Vector of Prediction. According to the state transition probability matrix and the Markov forecast model, the $k$-step state vector of prediction is calculated as $\pi(k) = \pi(0)P^{k}$, where $\pi(0)$ is the initial state vector. Thus, the $k$-step state vectors of prediction are as shown in Table 5; a computational sketch follows Table 5.

Table 5: Probability of the $k$-step state vector.
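As an illustration of how the entries of Table 5 can be computed, the sketch below propagates an initial state vector through powers of the transition matrix; P is the estimated 4-by-4 transition matrix and pi0 is assumed to be a 1-by-4 indicator of the current state (for example, [0 1 0 0]).

K = 5;                                  % number of prediction steps (illustrative)
piK = zeros(K, size(P, 2));
for k = 1:K
    piK(k, :) = pi0 * P^k;              % probability of each state after k steps
end
[~, likelyState] = max(piK, [], 2);     % most probable state for each step ahead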

4. Empirical Analysis

According to the $k$-step state vector of prediction of the Markov model, the prediction results from 2013-7-25 to 2013-8-15 are shown in Table 6. Among them, the adjustment value is the BPNN prediction corrected according to the state with the maximum probability for that day (the fifth column) and the midpoint of the corresponding state interval (the fourth column).

Table 6: Prediction result.

It can be seen from the column “error of absolute residual rate” in Table 6 that, during the sixteen trading days, most of the prediction results of this model are better than those of the single improved neural network prediction, except for the first day and the fifth day.

5. Conclusion and Discussion

Because of the complicated influencing factors in the dynamic stock market, a comprehensive method with hybrid models shows more advantages than a single method in stock index forecasting. This paper presented a new method based on the combination of an improved back-propagation (BP) neural network and a Markov chain, which takes advantage of both the neural network and the Markov model, and it obtained better results than the single improved BPNN method. This method can provide a good reference for investment in the stock market.

The stock market is an open complex adaptive system that is constantly affected by all kinds of emergency events and by people’s psychological and behavioral effects. Although many scholars, including famous financial experts, have pointed out that the changes of the stock market cannot be predicted, we have to break through those traditional ideas that rely only on financial theory models and explore new combined methods, such as the TDF (Theory-Data-Feedback) modeling and analysis framework [22] and the spread model of emotions and behaviors caused by emergency events [23]. We believe that the change of the stock market also has its own characteristics and inherent rules, and forecasting is possible at least for short-term prediction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by National Natural Science Foundation of China (no. 41174007), Graduate Innovation Fund Program (no. CXJJ-2013-445) of Shanghai University of Finance and Economics, and High Level Academic Research Program (no. 2012FDFRCGD02) of Financial Research Center, Fudan University, China.

References

  1. A. Lendasse, E. de Bodt, V. Wertz, and M. Verleysen, “Non-linear financial time series forecasting-application to the Bel 20 stock market index,” European Journal of Economic and Social Systems, vol. 14, no. 1, pp. 81–91, 2000.
  2. K. J. Lee, A. Y. Chi, S. Yoo, and J. J. Jin, “Forecasting Korean stock price index (KOSPI) using back propagation neural network model, Bayesian Chiao's model, and SARIMA model,” Academy of Information and Management Sciences Journal, vol. 11, no. 2, pp. 53–62, 2008.
  3. Y. Fan and F. Gao, “A stock index forecasting model based on grey relation theory and GNNM (1, N),” in Proceedings of the International Conference in Electrics, Communication and Automatic Control, pp. 1625–1633, 2011.
  4. C. W. Li and J. Zhang, “ANN-based mid-term stock forecasting,” Computer Engineering & Science, vol. 28, no. 5, pp. 115–117, 2006.
  5. M. Hanias, P. Curtis, and E. Thalassinos, “Time series prediction with neural networks for the Athens stock exchange indicator,” European Research Studies Journal, vol. 15, no. 2, pp. 23–32, 2012.
  6. W. Liu and B. Morley, “Volatility forecasting in the Hang Seng index using the GARCH approach,” Asia-Pacific Financial Markets, vol. 16, no. 1, pp. 51–63, 2009.
  7. P. Srinivasan, “Modeling and forecasting the stock market volatility of S&P 500 index using GARCH models,” The IUP Journal of Behavioral Finance, vol. 8, no. 1, pp. 51–69, 2011.
  8. B. B. Nair, V. P. Mohandas, and N. R. Sakthivel, “A decision tree—rough set hybrid system for stock market trend prediction,” International Journal of Computer Applications, vol. 6, no. 9, pp. 1–6, 2010.
  9. J. Ying, L. Kuo, and G. S. Seow, “Forecasting stock prices using a hierarchical Bayesian approach,” Journal of Forecasting, vol. 24, no. 1, pp. 39–59, 2005.
  10. M. A. Kaboudan, “Genetic programming prediction of stock prices,” Computational Economics, vol. 16, no. 3, pp. 207–236, 2000.
  11. H. Hwang and J. Oh, “Fuzzy models for predicting time series stock price index,” International Journal of Control, Automation and Systems, vol. 8, no. 3, pp. 702–706, 2010.
  12. A. Alizadeh and N. Nomikos, “A Markov regime switching approach for hedging stock indices,” Journal of Futures Markets, vol. 24, no. 7, pp. 649–674, 2004.
  13. S. Li and X. Hui, “A new stock index fuzzy stochastic prediction model developed by introducing a Markov chain,” Journal of Harbin Engineering University, vol. 32, no. 8, pp. 1086–1090, 2011.
  14. J. Gong and C. H. Ma, “A hidden Markov chain modeling of Shanghai stock index,” Finance, vol. 2, pp. 45–49, 2012.
  15. M. J. Kim, I. Han, and K. C. Lee, “Hybrid knowledge integration using the fuzzy genetic algorithm: prediction of the Korea stock price index,” Intelligent Systems in Accounting, Finance and Management, vol. 12, no. 1, pp. 43–60, 2004.
  16. P.-F. Pai and C.-S. Lin, “A hybrid ARIMA and support vector machines model in stock price forecasting,” Omega, vol. 33, no. 6, pp. 497–505, 2005.
  17. M. R. Hassan, B. Nath, and M. Kirley, “A fusion model of HMM, ANN and GA for stock market forecasting,” Expert Systems with Applications, vol. 33, no. 1, pp. 171–180, 2007.
  18. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
  19. J. P. Yang, The Research of Improved BP Algorithm Based on Self-Adaptive Learning Rate, Tianjin University, Tianjin, China, 2008.
  20. G. L. Su and F. P. Deng, “On the improving back propagation algorithms of the neural networks based on MATLAB language: a review,” Bulletin of Science and Technology, vol. 19, no. 2, pp. 130–135, 2003.
  21. M. M. He, The Application on Some Economic Prediction with Markov Chain Model, Harbin Institute of Technology, Harbin, China, 2008.
  22. W. H. Dai, “The public cognitive mechanism of emotion on city emergency events and coping strategy,” Urban Management, no. 1, pp. 34–37, 2014.
  23. W. H. Dai, X. Q. Wan, and X. Y. Liu, “Emergency event: internet spread, psychological impacts and emergency management,” Journal of Computers, vol. 6, no. 8, pp. 1748–1755, 2011.