Applied Neural Intelligence to Modeling, Control, and Management of Human Systems and Environments
Research Article | Open Access
Shipra Banik, Mohammed Anwer, A. F. M. Khodadad Khan, "Modeling Chaotic Behavior of Chittagong Stock Indices", Applied Computational Intelligence and Soft Computing, vol. 2012, Article ID 410832, 7 pages, 2012. https://doi.org/10.1155/2012/410832
Modeling Chaotic Behavior of Chittagong Stock Indices
Abstract
Stock market prediction is an important area of financial forecasting and is of great interest to stock buyers and sellers, stock investors, policy makers, applied researchers, and many others involved in the capital market. In this paper, a comparative study is conducted to predict stock index values using soft computing models and a time series model. Paying attention to applied econometric noises, because our considered series are time series, we predict Chittagong stock indices for the period from January 1, 2005 to May 5, 2011. We have used well-known soft computing forecasting models, namely, the genetic algorithm (GA) model and the adaptive network fuzzy integrated system (ANFIS) model. As the time series model, we consider the generalized autoregressive conditional heteroscedastic (GARCH) model, which is very widely used in applied time series econometrics. Our findings reveal that the soft computing models are more successful than the considered time series model.
1. Introduction
The stock index values play an important role in controlling the dynamics of the capital market. As a result, appropriate prediction of stock index values is a crucial factor for domestic and foreign stock investors, buyers and/or sellers, fund managers, policy makers, applied researchers (who want to improve the model specifications of this index), and many others. Many researchers, for example, [1–4], have found that the empirical distribution of stock prices is significantly nonnormal and nonlinear. Stock market data are also observed in practice to be chaotic and volatile by nature (e.g., see [5–8]). That is why stock values are hard to predict. Traditionally, the fundamental Box-Jenkins analysis has been the mainstream methodology used to predict stock values in the applied literature. Owing to continual studies by stock market experts, the use of soft computing models (such as artificial neural networks, fuzzy sets, evolutionary algorithms, and rough set theory) has become widely established for stock market forecasting. Evidence [9, 10] suggests that the Box-Jenkins approach often fails to predict time series when the behavior of the series is chaotic and nonlinear. Thus, soft computing systems have emerged to increase the accuracy of chaotic time series predictions, because these systems have the potential to provide a viable solution through a versatile approach to self-organization. Accordingly, the forecasting literature [11–14] has found that soft computing systems yield better results than statistical time series approaches when the series is chaotic. This paper compares stock price forecasts from soft computing forecasting models and the model introduced by [15]. Our motivation for this comparison lies in the recent increasing interest in the use of soft computing models for forecasting economic and financial variables. Thus, soft computing models are used to learn the nonlinear and chaotic patterns in the stock system.
Several studies [7, 11] have compared soft computing models with the traditional Box-Jenkins model. However, to our knowledge, there are only a few comparative analyses between soft computing models and standard statistical time series models [13] for Bangladeshi stock indices. In this paper, we examine the performance of these models on the daily Chittagong stock market indices. See [13] for prediction of the daily Dhaka stock market index values. We hope that the findings of the study will be of interest to fund managers, business investors, policy makers, academics, and others involved in this volatile market. The structure of the paper is as follows: the data and forecasting model are given in the next section. Statistical properties and various econometric noises are discussed in Section 3. A brief description of the considered forecasting models is given in Section 4. Performances under the different evaluation criteria are explained in Section 5. Finally, concluding remarks with some proposed future research are given in the final section.
2. Data and Forecasting Model
2.1. Data
The indices that the Chittagong Stock Exchange has maintained since October 10, 1995 are the stock prices for all companies (cseall) and for 30 selected companies (cse30). We considered the daily cseall and cse30 prices [data source: http://www.cse.com.bd/] for the available period (January 1, 2005 to May 5, 2011). For a description of the indices, see the above website.
People tend to invest in the stock market because it offers high returns over time. Stock markets are generally affected by economic, social, political, and even psychological factors, and these factors interact with each other in a very complicated manner. That is why stock data are observed to be chaotic and volatile by nature. It is well known that a chart is the best way to visualize trend and chaotic behavior, if present, in any price series. Thus, to understand the behaviors of the considered indices, cseall and cse30 are plotted against time in Figure 1. It is very clear that there is a decreasing trend with respect to time; see http://www.cse.com.bd/ for some reasons why such a trend exists. It is also observable from this plot that the behaviors of these price series are not linear, meaning the series can appear volatile with moves that look chaotic, and some sort of nonlinearity may also be present in the selected series.
2.2. Forecasting Models
Since our series are time series, we have selected the most commonly used time series model, namely, the autoregressive (AR) model of order p. The model is defined for each of the considered series as

y_t = α + βt + φ_1 y_{t−1} + φ_2 y_{t−2} + ⋯ + φ_p y_{t−p} + ε_t,

where α is an intercept, βt is the deterministic trend with time variable t, φ_1, …, φ_p are the coefficients on the lags of the AR(p) model, and ε_t ~ N(0, σ²). The appropriate lags of the series are selected by the Bayesian Information Criterion (BIC). Other information criteria, for example, the Akaike Information Criterion (AIC), the Schwarz Information Criterion (SIC), and others, can also be used to select the lag order of the AR components of our selected models. See Table 1 for the data size and the proposed AR(p) model.
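As an illustration of this model-selection step, the following is a minimal NumPy sketch (not the authors' MATLAB code) that fits an AR(p) model with an intercept by ordinary least squares and computes its BIC; the deterministic trend term βt is omitted for brevity. The lag order minimizing the BIC is selected.

```python
import numpy as np

def ar_bic(y, p):
    """Fit an AR(p) model with intercept by ordinary least squares
    and return the Bayesian Information Criterion (BIC)."""
    y = np.asarray(y, dtype=float)
    n = len(y) - p                                  # usable observations
    # Design matrix: constant plus p lagged values of the series
    X = np.column_stack(
        [np.ones(n)] + [y[p - i - 1:len(y) - i - 1] for i in range(p)]
    )
    target = y[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    sigma2 = np.mean((target - X @ beta) ** 2)      # residual variance
    k = p + 1                                       # estimated coefficients
    return n * np.log(sigma2) + k * np.log(n)

# Usage: pick the lag order with the smallest BIC
# best_p = min(range(1, 5), key=lambda p: ar_bic(series, p))
```

The BIC's log(n) penalty grows faster than the AIC's constant penalty of 2 per parameter, which is why BIC tends to select more parsimonious lag orders.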

3. Statistical Properties of Data
3.1. Numerical Summary
To understand the characteristics of the selected indices, summary statistics are tabulated in Table 2. It is clear from the table that the average cseall and cse30 indices are 8691 and 7259.9, respectively. The standard deviation measures confirm that individual prices are not equal to these averages; the expected ranges of the cseall and cse30 prices can be estimated from the means plus or minus the standard deviations. The skewness measures indicate that the stock market indices display right-skewed distributions, which means that most prices are below the average prices. The kurtosis measures also indicate that the price indices are not normal.

3.2. Time Series Properties
It is now a well-established stylized fact that most time series are nonstationary and contain a unit root (e.g., see [14]). The conventional approach to time series is based on the implicit assumption that the underlying data series is stationary. This assumption was rarely questioned until the early 1970s, and numerical analysis proceeded as if all time series were stationary. Numerous studies (e.g., [14–16] and many others) have suggested that most time series are nonstationary and that, therefore, the assumption of stationarity is unrealistic. Thus, prior to model specification and estimation, the stationarity property of the data series is routinely tested; otherwise, the study can yield unrealistic results. That is why, to select an appropriate forecasting model for our study, we first tested the stationarity property of the considered series.
3.2.1. Stationarity Tests
There are many stationarity tests available in the time series literature; for details, see [17–19] and others. To test the nonstationarity behavior of our considered models (1)-(2), we have used the most commonly applied unit root tests, namely, the Augmented Dickey-Fuller (ADF) test proposed by Said and Dickey [20] and the test proposed by Phillips and Perron (PP) [21]. For the test procedures, see [17]. The MATLAB commands adftest and pptest are used to compute the ADF and PP statistics, and the results are reported in Table 3. Note that under the null hypothesis of the ADF and PP tests the series is assumed nonstationary, and under the alternative hypothesis the series is stationary. The results show that all series are nonstationary; thus, the null hypothesis of the tests is not rejected. We then took the first difference of the series to remove the nonstationarity and applied the ADF and PP tests again. These test results show that in first differences the considered series are stationary. (The results are not reported to save space but are available on request.) Accordingly, when our forecasting models are used, they are applied to the series in first differences.
 
“p” and “L” indicate the lag order used to remove serial correlation. Decision rule: if the p value is greater than the level of significance (α), the null hypothesis cannot be rejected.
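For intuition about what the MATLAB adftest routine computes, here is a minimal NumPy sketch of the ADF regression Δy_t = α + ρ·y_{t−1} + Σ γ_i·Δy_{t−i} + ε_t and the t-statistic on ρ (null hypothesis ρ = 0, i.e., a unit root). The value −2.86 used below is the approximate 5% Dickey-Fuller critical value for the constant-only case; proper inference uses Dickey-Fuller tables, not the ordinary t-distribution.

```python
import numpy as np

def adf_tstat(y, lags=1):
    """Dickey-Fuller t-statistic on y_{t-1} in the ADF regression.
    A value below the (negative) critical value rejects the unit-root null."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    n = len(dy) - lags
    cols = [np.ones(n), y[lags:-1]]               # constant and y_{t-1}
    for i in range(1, lags + 1):                  # lagged differences
        cols.append(dy[lags - i:len(dy) - i])
    X = np.column_stack(cols)
    target = dy[lags:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    s2 = resid @ resid / (n - X.shape[1])         # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)             # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])           # t-stat on rho
```

A strongly negative statistic (e.g., below −2.86) indicates a stationary series; a statistic near zero is consistent with a unit root, which is what the paper reports for the level series.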
3.3. Linearity Test
There are many statistical techniques available in the literature to test whether a series is linear or nonlinear. To select an appropriate forecasting method, we have also tested the linearity of the considered models (1) and (2). These tests are based on the ordinary least squares residuals. The statistical test proposed by Engle [5] is used to test for the presence of nonlinear dependence; for details of the test procedure, see [22]. The linearity test results are tabulated in Table 4. Note that under the null hypothesis the series is considered linear, and under the alternative hypothesis it is considered nonlinear. The results show that the p value is less than the 5% level of significance (α = 0.05), which indicates rejection of the null hypothesis. So, the Table 4 results confirm that our considered series are nonlinear at the 5% level.
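Engle's test is the ARCH-LM test: regress the squared residuals on their own first q lags and form LM = n·R², which is asymptotically chi-square with q degrees of freedom under the null of no (nonlinear) ARCH dependence. A minimal NumPy sketch, not the authors' exact implementation:

```python
import numpy as np

def arch_lm(resid, q=4):
    """Engle's ARCH-LM statistic: regress squared residuals on their
    first q lags; LM = n * R^2 ~ chi-square(q) under the null."""
    u2 = np.asarray(resid, dtype=float) ** 2
    n = len(u2) - q
    X = np.column_stack(
        [np.ones(n)] + [u2[q - i - 1:len(u2) - i - 1] for i in range(q)]
    )
    target = u2[q:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    ss_res = np.sum((target - X @ beta) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return n * r2
```

With q = 4 lags, the 5% chi-square critical value is about 9.49; an LM statistic above it rejects linearity, mirroring the conclusion in Table 4.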

4. Models Used for Prediction
The statistical test results show that our selected series are nonstationary, nonlinear, and chaotic (Figure 1). To remove the nonstationarity, we used the series in first differences. We then selected nonlinear forecasting models, which also have the ability to capture chaotic behavior, to forecast the Chittagong stock indices. A brief description of the chosen forecasting models is given below.
4.1. Soft Computing Model
We have chosen two very popular and widely used models, namely, the genetic algorithm (GA) model and the neuro-fuzzy model.
4.1.1. GA Model
Holland [23] introduced this technique. It is based on Darwin's principle of natural selection and is used to solve optimization problems; the basic idea is to select the best and discard the rest. This approach has been used efficiently in the forecasting literature to handle the complex multidimensional behaviors of a system (e.g., [23–26] and others). See Figure 2 for a flowchart that illustrates the basic steps in a GA, and Table 5 for the standard GA options. A brief explanation of each step is as follows.

Step 1. Create an initial population consisting of random chromosomes. To understand the GA process for, say, an AR(2) model, consider the following random population of 6 chromosomes with 4 parameters each: (0.13, 0.01, 0.84, 0.68), (0.20, 0.74, 0.52, 0.37), (0.19, 0.44, 0.20, 0.83), (0.60, 0.93, 0.67, 0.50), (0.27, 0.46, 0.83, 0.70), and (0.19, 0.41, 0.01, 0.42). The population size is usually chosen between 100 and 500; a larger population may produce a more robust solution.
Step 2. Fitness scaling is used to provide a measure of how the selected chromosomes perform in the problem domain. The AR(2) model fitness is evaluated through a criterion such as RMSE (CC or MAE can also be used). For the AR(2) model, we get RMSE: 26.19, 32.09, 53.75, 20.18, 18.67, and 66.64. Using the linear-ranking process (for details, see Pohlheim [27]), Fit(RMSE): 1.2, 0.8, 0.4, 1.6, 2.0, 0.
Step 3. Based on the Step 2 results, choose parents for the next generation. To understand this, consider the distribution found in Tables 5 and 6.
It is observed that chromosome 5 is the fittest chromosome because it occupies the largest interval, whereas chromosome 3 is the second least fit chromosome, occupying the smallest interval. Chromosome 6 is the least fit chromosome: it has a fitness value of 0 and gets no chance of reproduction. Using, for example, the roulette wheel method (whose purpose is to eliminate the worst chromosomes and regenerate better substitutes), the 4 selected parents are: (0.20, 0.74, 0.52, 0.37), (0.13, 0.01, 0.84, 0.68), (0.27, 0.46, 0.83, 0.70), and (0.13, 0.01, 0.84, 0.68).
The next step is to produce offspring from the selected parents by combining entries of a pair of parents (known as crossover) and also by making random changes to a single parent (known as mutation).

Step 4 (GA operator 1). The basic operator for producing new (improved) chromosomes is known as crossover (a version of artificial mating). It produces offspring that contain parts of both parents' genetic material. Offspring are produced using the intermediate crossover method, because this recombination method is proposed for parents with real-valued chromosomes (for details, see Pohlheim [27]). Thus, the crossover offspring are (0.16, 0.16, 0.85, 0.57), (0.13, 0.22, 0.76, 0.43), (0.13, 0.15, 0.83, 0.69), and (0.26, 0.45, 0.84, 0.68).
Step 5 (GA operator 2). Offspring are mutated after the crossover offspring are produced, and this GA operator increases the chance that the algorithm will generate better (fitter) RMSE values than Step 4. The GA creates 3 types of offspring: elite offspring (the chromosomes with the best RMSE values in the current generation, which are guaranteed to survive to the next generation), crossover offspring, and mutation offspring. As an example, suppose that the population size is 20 and the elite count is 2. If the crossover fraction is 0.8, then the distribution of offspring is 2 elites, 14 (≈18 × 0.8) crossover offspring, and the remaining 4 mutation offspring. Note that a crossover fraction of 1 means all offspring other than the elites are crossover offspring, while a crossover fraction of 0 means that all offspring are mutation offspring. For how offspring are produced under the mutation process, see Pohlheim [27]. The mutation offspring found are (0.16, 0.17, 0.85, 0.56), (0.13, 0.22, 0.76, 0.43), (0.13, 0.14, 0.83, 0.69), and (0.26, 0.45, 0.84, 0.68).
Step 6. Once offspring have been produced using Steps 4-5, the offspring fitness (i.e., the RMSE values) must be determined (by a procedure similar to Step 2). We get improved RMSE: 23.37, 28.13, 24.11, and 18.62.
If fewer offspring are produced than the original population size, then, to maintain the size, offspring have to be reinserted into the old population. This step determines which chromosomes will be replaced by offspring. Using, for example, the fitness-based reinsertion method, the following RMSE values are found: 20.18, 18.67, 18.62, 23.37, 24.11, and 28.13.
If the termination criterion is not yet satisfied, the GA returns to Step 3 and continues through Step 6. The criterion is satisfied either when the maximum number of generations is reached or when all chromosomes in the population are identical (i.e., the population has converged). The user sets the maximum number of generations before running the GA, which ensures that the GA does not continue indefinitely.
4.1.2. Adaptive Network Fuzzy Integrated System (ANFIS) Model
The second widely used soft computing model we have selected is the ANFIS model, proposed by Jang [8]. This model is a combination of two intelligent systems: a neural network (NN) and a fuzzy inference system (FIS). It is also known as an NN-fuzzy integrated system, where the NN learning algorithm is used to determine the parameters of the FIS. NNs are nonlinear statistical data modeling tools and can capture and model any input-output relationship. FIS is the process of formulating the mapping from a given input to an output using fuzzy logic; this mapping provides a basis from which decisions can be made or patterns can be discerned. The FIS process involves membership functions (mfs), fuzzy logic operators, and if-then rules. The structure of ANFIS (see Figure 3 for its architecture) has 5 layers: (a) 1 input layer, (b) 3 hidden layers that represent mfs and fuzzy rules, and (c) 1 output layer. ANFIS uses the Sugeno fuzzy inference model as the learning algorithm. As an example, the fuzzy if-then rules for a first-order Sugeno fuzzy model can be expressed as follows.
Rule 1. If x (input 1) is A1 and y (input 2) is B1, then (output) f1 = p1 x + q1 y + r1.
Rule 2. If x (input 1) is A2 and y (input 2) is B2, then (output) f2 = p2 x + q2 y + r2.
The learning algorithm of ANFIS is a hybrid algorithm, which combines the gradient descent (GD) method and least squares estimation (LSE) for an effective search of the parameters. ANFIS uses a two-pass learning algorithm to reduce error: a forward pass and a backward pass. The hidden-layer (premise) parameters are updated by the GD method through the feedback structure, and the final output (consequent) parameters are estimated by the LSE method (for details, see [8]).
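The Sugeno inference that ANFIS learns can be illustrated with a self-contained two-rule example. The Gaussian membership functions, rule centers, and consequent coefficients below are hypothetical values chosen purely for illustration; a trained ANFIS would fit all of them from data.

```python
import math

def gauss_mf(x, center, sigma):
    """Gaussian membership function, a common choice of mf."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sugeno_two_rules(x1, x2):
    """First-order Sugeno inference with two rules:
    Rule 1: if x1 is LOW  and x2 is LOW  then f1 = 0.2*x1 + 0.1*x2 + 1.0
    Rule 2: if x1 is HIGH and x2 is HIGH then f2 = 0.6*x1 + 0.4*x2 + 0.5
    The output is the firing-strength-weighted average of f1 and f2."""
    w1 = gauss_mf(x1, 0.0, 1.0) * gauss_mf(x2, 0.0, 1.0)   # AND via product
    w2 = gauss_mf(x1, 2.0, 1.0) * gauss_mf(x2, 2.0, 1.0)
    f1 = 0.2 * x1 + 0.1 * x2 + 1.0
    f2 = 0.6 * x1 + 0.4 * x2 + 0.5
    return (w1 * f1 + w2 * f2) / (w1 + w2)
```

Near (0, 0) the first rule fires strongly and the output approaches f1; near (2, 2) the second rule dominates and the output approaches f2, showing how the rule consequents are blended smoothly.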
4.2. Time Series Model: GARCH Model
Since our considered series are time series, to compare against the performances of the soft computing models, a very popular time series model, namely, the generalized autoregressive conditional heteroscedastic (GARCH) model, is selected from the time series econometrics literature. A brief description of this model is given below.
In 1986, Bollerslev [6] introduced the GARCH model. To understand it, consider an AR(1) model y_t = φ_0 + φ_1 y_{t−1} + ε_t, where y_t is the observed cseall or cse30 price. Suppose ε_t = z_t √h_t, where z_t ~ iid N(0, 1) and

h_t = ω + α_1 ε²_{t−1} + ⋯ + α_q ε²_{t−q} + β_1 h_{t−1} + ⋯ + β_p h_{t−p},

with ω > 0, α_i ≥ 0, and β_j ≥ 0, known as a GARCH(p, q) process. Here the current volatility h_t depends not only on past squared errors but also on past volatilities. This model is widely used to capture the volatility that generally exists in time series data. For details, see [6].
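The GARCH recursion above can be made concrete by simulating a GARCH(1,1) process; the parameter values ω = 0.1, α = 0.1, β = 0.8 below are illustrative, chosen so the unconditional variance ω/(1 − α − β) equals 1.

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate e_t = z_t * sqrt(h_t) with
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    e = np.empty(n)
    h = np.empty(n)
    h[0] = omega / (1 - alpha - beta)      # start at unconditional variance
    e[0] = z[0] * np.sqrt(h[0])
    for t in range(1, n):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = z[t] * np.sqrt(h[t])
    return e
```

A large shock raises h_t and therefore the scale of the next shocks, producing the volatility clustering (positively autocorrelated squared returns) that makes GARCH a natural benchmark for the volatile indices studied here.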
5. Discussion of Results
Forecasting with 100% accuracy may be impossible, but we can do our best to reduce forecasting errors. Thus, to quantify the error level between the observed and predicted stock series, the forecasting performances of the considered models are evaluated against the following widely used statistical measures: the root mean square error (RMSE), the correlation coefficient (CC), and the coefficient of determination (R²).
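For reference, the three evaluation measures can be sketched in a few lines of pure Python (a standard formulation, not the authors' MATLAB code):

```python
import math

def rmse(actual, pred):
    """Root mean square error between actual and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def corr(actual, pred):
    """Pearson correlation coefficient (CC)."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(pred) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, pred))
    va = sum((a - ma) ** 2 for a in actual)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov / math.sqrt(va * vp)

def r_squared(actual, pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    ma = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - ma) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```

A perfect forecast gives RMSE = 0 and CC = R² = 1, which is why the comparison in Tables 8 and 9 looks for the smallest RMSE and the largest CC and R².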
Note that a smaller RMSE value indicates higher forecasting accuracy, while higher CC and R² values indicate better prediction. All computational work was carried out using MATLAB (version 7.0) programming code. We selected January 1, 2005 to January 1, 2009 as the training period and the rest of the sample as the testing period. See Table 7 for the computational parameters of all selected forecasting models. Tables 8 and 9 summarize the performances of the considered forecasting models on the training and testing data, respectively, using the error measures RMSE, CC, and R². In terms of all measures, our training results show that for the cseall price series the GA forecasting model performed best (smallest RMSE value, highest CC and R² values), followed by the ANFIS forecasting model and the GARCH forecasting model. For the cse30 series, we found that the ANFIS forecasting model performed better than the other forecasting models. After the models were built using the training data, the considered series were forecasted over the testing data, and the performances are reported in Table 9. The testing results again show that the forecasting ability of the GA model for the daily cseall price series is higher than that of the other forecasting models, while for the cse30 price index the ANFIS forecasting model performed best (lowest RMSE value, highest CC and R² values).

 
Note: We considered January 1, 2005 to January 1, 2009 as a training period. 
 
Note: We considered February 1, 2005 to May 5, 2011 as a testing period. 
6. Conclusion and Future Works
It is well known that soft computing models pay particular attention to nonlinearities, which in turn helps to improve predictions for complex data. In this paper, we forecasted the Chittagong stock price index for all companies and the stock price index for 30 selected companies for the period from January 1, 2005 to May 5, 2011. The recent time series literature suggests that most stock price series are nonstationary and contain a unit root. For this reason, to make appropriate predictions, we first tested the nonstationarity properties using unit root tests, because our considered series are time series. Our test results suggested that the series are nonstationary; to remove this noise, we used the series in first differences. We then tested the linearity of the series using Engle's statistical test, whose results showed that the two considered series are nonlinear. Thus, we selected two very well-known soft computing models, namely, the GA forecasting model and the ANFIS forecasting model, and, to compare their performances, we also selected a very popular nonlinear time series forecasting model, the GARCH model. According to our findings, we conclude that applied workers should select the GA forecasting model to forecast the future daily stock price index for all companies. For the daily stock price index for the 30 selected companies, the ANFIS forecasting model is more successful than the other considered forecasting models. We believe our findings will be helpful for researchers who are planning to make appropriate decisions with this complex variable. Our next step is to improve and compare the predictions with other recently proposed models, for example, rough set theory. This is left for future research.
Acknowledgments
The authors are grateful to the participants of the 17th International Mathematics Conference, held on 22–24 December 2011 and organized by the Bangladesh Mathematical Society and the Department of Mathematics, Jahangirnagar University, Dhaka, Bangladesh. They gratefully acknowledge the comments and very useful suggestions of the anonymous referees, which greatly improved the presentation of the paper. They are also very thankful to the Editor Yi-Chi Wang and editorial staff member Badiaa Sayed for their valuable cooperation.
References
B. Mandelbrot, “The variation of certain speculative prices,” Journal of Business, vol. 36, pp. 394–419, 1963.
E. F. Fama, “The behavior of stock market prices,” Journal of Business, vol. 38, pp. 34–105, 1965.
D. A. Hsu, R. B. Miller, and D. W. Wichern, “On the stable paretian behavior of stock market prices,” Journal of the American Statistical Association, vol. 69, pp. 108–113, 1974.
D. Kim and S. J. Kon, “Alternative models for the conditional heteroskedasticity of stock returns,” Journal of Business, vol. 67, pp. 563–598, 1994.
R. F. Engle, “Autoregressive conditional heteroscedasticity with estimates of the variance of UK inflation,” Econometrica, vol. 50, pp. 987–1008, 1982.
T. Bollerslev, “Generalized autoregressive conditional heteroskedasticity,” Journal of Econometrics, vol. 31, no. 3, pp. 307–327, 1986.
S. Banik, M. Anwer, K. Khan, R. A. Rouf, and F. H. Chanchary, “Neural network and genetic algorithm approaches for forecasting Bangladeshi monsoon rainfall,” in Proceedings of the 11th International Conference on Computer and Information Technology (ICCIT '08), December 2008.
J. S. R. Jang, “ANFIS: adaptive-network-based fuzzy inference system,” IEEE Transactions on Systems, Man and Cybernetics, vol. 23, no. 3, pp. 665–685, 1993.
D. F. Cook and M. L. Wolfe, “A backpropagation neural network to predict average air temperatures,” AI Applications in Natural Resource Management, vol. 5, no. 1, pp. 40–46, 1991.
A. Abraham, N. S. Philip, and P. Saratchandran, “Modeling chaotic behavior of stock indices using intelligent paradigms,” International Journal of Neural, Parallel and Scientific Computations, vol. 11, no. 1-2, pp. 143–160, 2003.
J. Kamruzzaman and R. A. Sarker, “Comparing ANN based models with ARIMA for prediction of forex rates,” ASOR Bulletin, vol. 22, no. 2, pp. 2–11, 2003.
G. E. P. Box and G. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, Calif, USA, 1970.
S. Banik, F. H. Chanchary, R. A. Rouf, and K. Khan, “Modeling chaotic behavior of Dhaka Stock Market Index values using the neuro-fuzzy model,” in Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT '07), pp. 80–85, December 2007.
C. R. Nelson and C. R. Plosser, “Trends and random walks in macroeconomic time series: some evidence and implications,” Journal of Monetary Economics, vol. 10, no. 2, pp. 139–162, 1982.
W. F. Mitchell, “Testing for unit roots and persistence in OECD unemployment rates,” Applied Economics, vol. 25, no. 12, pp. 1489–1501, 1993.
R. S. McDougall, “The seasonal unit root structure in New Zealand macroeconomic variables,” Applied Economics, vol. 27, pp. 817–827, 1995.
W. H. Greene, Econometric Analysis, Prentice Hall, Upper Saddle River, NJ, USA, 7th edition, 2008.
S. Banik, Testing for Stationarity, Seasonality and Long Memory in Economic and Financial Time Series, Ph.D. thesis, School of Business, La Trobe University, Bundoora, Australia, 1999, unpublished.
S. Banik and P. Silvapulle, “Testing for seasonal stability in unemployment series: international evidence,” Empirica, vol. 26, no. 2, pp. 123–139, 1999.
S. E. Said and D. A. Dickey, “Testing for unit roots in autoregressive-moving average models of unknown order,” Biometrika, vol. 71, no. 3, pp. 599–607, 1984.
P. C. B. Phillips and P. Perron, “Testing for a unit root in time series regression,” Biometrika, vol. 75, no. 2, pp. 335–346, 1988.
R. L. Thomas, Modern Econometrics: An Introduction, Addison-Wesley, New York, NY, USA, 1997.
J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
Y. H. Lee, S. K. Park, and D. E. Chang, “Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast,” Annales Geophysicae, vol. 24, no. 12, pp. 3185–3189, 2006.
J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, Mass, USA, 1992.
W. Zhang, Z. Wu, and G. Yang, “Genetic programming-based chaotic time series modeling,” Journal of Zhejiang University Science, vol. 5, no. 11, pp. 1432–1439, 2004.
H. Pohlheim, Documentation for the Genetic and Evolutionary Algorithm Toolbox for Use with MATLAB, 2005.
Copyright
Copyright © 2012 Shipra Banik et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.