Abstract

After reviewing the vast body of literature on using fuzzy time series (FTS) in stock market forecasting, certain deficiencies are evident in the hybridization of findings. In addition, there is a notable lack of a constructive systematic framework that could indicate the direction of growth for complete FTS forecasting systems. In this study, we propose a multilayer model for stock market forecasting comprising five logically significant layers. Each layer has its own detailed concern and assists forecast development by resolving certain problems exclusively. To verify the model, large datasets covering the Taiwan Stock Index (TAIEX), the National Association of Securities Dealers Automated Quotations (NASDAQ) index, the Dow Jones Industrial Average (DJI), and the S&P 500 were chosen as experimental datasets. The results indicate that the proposed methodology has the potential to be accepted as a framework for model development in stock market forecasting using FTS.

1. Introduction

The statistical investigation of financial market indexes and returns as complex systems seeks to understand and model the distribution of financial price fluctuations, which has long been an effort of economic study. As stock markets become deregulated globally, modeling and forecasting stock market systems are becoming more complex for risk management and derivatives pricing. The development of novel statistical methods for analyzing stock returns has delivered numerous empirical challenges to the old random-walk hypothesis, demanding new financial models to describe price movements in the market [1]. One of the key aspects of complex statistical modeling of the stock market is accurate forecasting, which can yield significant profits and decrease investment risks [2–4]. For stock prediction, the most frequently used forecasting methods are nonlinear models, for example, neural networks [5–7], genetic algorithms [8, 9], hybrid models [10–12], fuzzy logic [13], and support vector machines [14]. The fuzzy time-series (FTS) method has been developed as one of the novel forecasting methods in this area, and various FTS models have been applied successfully to stock index forecasting [15–23]. Since this study focuses on applying FTS to stock data prediction, the following paragraphs provide a brief review of FTS models.

Song and Chissom [24, 25] first applied a FTS model by using fuzzy relation equations and approximate reasoning. There are two classes of FTS: time-variant and time-invariant. Chen [26] presented a method to forecast student enrolment at the University of Alabama that takes less time computing max-min composition operations than Song and Chissom’s model [24, 25].

The length of the intervals influences forecast accuracy in FTS; consequently, determining the optimal interval length is a central issue in FTS studies. Along these lines, Huarng [27] proposed distribution-based and average-based lengths to determine the effective length of intervals in FTS. In addition, Sheng and Yeh [28], in their work, presented a novel approach to finding the effective length by applying the natural partitioning technique, which can recursively partition the universe of discourse level by level in a natural way; they indicated that the model could also handle high-order FTS. Experimental results on the enrolment data of the University of Alabama showed that the model could forecast the data effectively and efficiently. Yu [29] proposed a refined fuzzy time-series model to further refine the lengths of intervals; the model improves the interval lengths during the formulation of fuzzy relationships and hence establishes the fuzzy relationships more appropriately. Using genetic algorithms, Chen and Chung [30] presented a method that modifies the length of each interval in the universe of discourse to deal with forecasting complications based on high-order fuzzy time series, and they used historical enrolments of the University of Alabama to illustrate the forecasting process of their proposed method. Cheng et al. [16, 31] proposed two approaches for overcoming the problems of determining the universe of discourse, the length of intervals, and the membership functions of FTS. Huarng and Yu [32] proposed ratio-based lengths of intervals to improve FTS forecasting; in their research, algebraic growth data, such as enrolments and the stock index, and exponential growth data, such as inventory demand, were selected as forecasting targets, and the empirical examination indicated that ratio-based lengths of intervals could also improve FTS forecasting.
Li and Cheng [33] proposed a deterministic forecasting model to address the issues of controlling uncertainty in forecasting, partitioning the intervals effectively, and consistently achieving forecasting accuracy with different interval lengths. In their work, an important parameter, the maximum length of a subsequence in a FTS resulting in a certain state, was deterministically quantified, and their model followed the consistency principle that a shorter interval length leads to more accurate results. Later, Li et al. [34] proposed a novel forecasting model to enhance forecasting functionality and to allow processing of two-factor forecasting problems; that model applied fuzzy c-means (FCM) clustering to interval partitioning, which considers the nature of the data and forms unequal-sized intervals. In a recent study, Wang and Chen [35] presented a method to forecast the temperature and the Taiwan Futures Exchange (TAIFEX) based on automatic clustering techniques and a two-factor, high-order FTS. Aladag et al. [36] proposed another approach, which uses single-variable constrained optimization to determine the ratio for the length of intervals; their method was successfully applied to two case studies, the enrolment data of the University of Alabama and the inventory demand data. Su et al. [37] proposed a forecasting model that combines two granulating methods (the minimize entropy principle approach and the cumulative probability distribution approach) and a rough set algorithm; their model surpassed conventional fuzzy time-series models and a multiple linear regression (MLR) model in forecast accuracy. In a different study, Egrioglu et al. [38] proposed a new method that uses a MATLAB function employing a golden-section-search algorithm to optimize a single-variable constrained function for finding the effective length of intervals in high-order FTS; forecasting the number of enrolments at the University of Alabama showed a great improvement in accuracy with this method.

A remarkable development in stock market forecasting has been achieved by using adaptation models, although only a few studies have dealt with this issue. Cheng et al. [39], for instance, introduced a fuzzy time-series model that combines the adaptive expectation model into the forecasting process to adjust forecasting errors. Liu et al. [20] presented a multiple-attribute FTS method that integrates a clustering method and the adaptive expectation model. Teoh et al. [21] proposed a hybrid model based on multiorder FTS, which employs rough sets theory to mine fuzzy logical relationships from time series and the adaptive expectation model to modify forecasting results in order to increase forecasting accuracy. Chen et al. [15] proposed a model that adjusts the forecasting results toward minimal error on the training dataset.

Based on the above information, most forecasting studies to date have focused on the development of specific algorithms. In addition, some of these studies have disregarded the effect of data preprocessing; that is, researchers directly utilized unprocessed stock data for forecasting purposes, which gives the impression that they were unwilling to spend time on data preprocessing. However, as will be noted in the following sections, forecast accuracy is improved by appropriate data preprocessing.

The second issue is the shortage of forecast modification based on recent observations. As stated above, forecast adaptation has a major impact on forecast accuracy; nonetheless, it has received attention in only a few studies.

Still another vague issue in previous studies is determining the universe of discourse and establishing the linguistic variables. In order to forecast the stock market using FTS, we need to determine the length of each interval to establish the linguistic variables. Even though researchers have proposed many approaches to reconcile this problem, almost all of them neglected to show how the universe of discourse must be exactly defined. Besides determining the length of the intervals, another issue that must be taken into account is determining the starting point of the universe of discourse. If the role of starting points is neglected in FTS algorithms, it is difficult to judge whether a particular FTS model actually produces robust forecasts. To address the importance of the starting point as a notable gap in previous studies, we perform certain experiments. The results demonstrate that, while the lengths of the intervals are identical, the degree of accuracy differs considerably with different starting points (notice Table 1); for instance, notice the difference in the years 1991, 1992, and 1998. The starting point corresponding to each year is listed in the table.

Last but not least, there is a lack of studies on combining different algorithms that play a positive role in stock market prediction. In other words, the hybridization of constructive findings from earlier studies appears to receive little attention. Based on their interests, researchers focus on certain subjects, such as developing FTS algorithms or algorithms for finding the effective length of intervals; however, there is no systematic model to motivate them to combine or ensemble several positive features to advance forecasts in this field of study.

In this study, our approach differs from those reviewed in the literature. Earlier studies were tied to a particular algorithm. In contrast, the aim here is not to propose a new algorithm; instead, we propose a systematic, descriptive, and well-structured framework model, constructed from several meaningful layers that play independent roles throughout the forecast process. Each layer is responsible for resolving specific problems, such as those mentioned above. The proposed methodology is model-based rather than algorithm-based.

The rest of the paper proceeds as follows. The next section presents work related to FTS models. In Section 3, the framework of the proposed multilayer stock forecasting model is documented. Section 4 provides information about the stock databases used in this study. In Section 5, the empirical experiments are presented. Section 6 provides our discussion and findings. The final section presents conclusions and future work.

2. Fuzzy Time Series

This section provides definitions of fuzzy time series. Furthermore, the weighted fuzzy time-series algorithm is explained.

2.1. Fuzzy Time-Series Definitions and Algorithms

Song and Chissom first presented the concepts of fuzzy time series [24, 25, 41], where the values in a fuzzy time series are represented by fuzzy sets [42]. Let $U$ be the universe of discourse, where $U = \{u_1, u_2, \ldots, u_n\}$. A fuzzy set $A$ of $U$ is defined as follows:
$$A = \mu_A(u_1)/u_1 + \mu_A(u_2)/u_2 + \cdots + \mu_A(u_n)/u_n,$$
where $\mu_A$ is the membership function of the fuzzy set $A$, $\mu_A\colon U \to [0,1]$; $u_i$ is a generic element of fuzzy set $A$; $\mu_A(u_i)$ is the degree of belongingness of $u_i$ to $A$; and $\mu_A(u_i) \in [0,1]$, $1 \le i \le n$.

Definition 1 (see [25]). Let $Y(t)$ $(t = \ldots, 0, 1, 2, \ldots)$, a subset of real numbers $\mathbb{R}$, be the universe of discourse on which fuzzy sets $f_i(t)$ are defined. If $F(t)$ is a collection of $f_1(t), f_2(t), \ldots$, then $F(t)$ is called a fuzzy time series defined on $Y(t)$.

Definition 2 (see [25]). If there exists a fuzzy relationship $R(t-1, t)$ such that $F(t) = F(t-1) \circ R(t-1, t)$, where "$\circ$" is an arithmetic operator, then $F(t)$ is said to be caused by $F(t-1)$. The relationship between $F(t-1)$ and $F(t)$ can be denoted by $F(t-1) \rightarrow F(t)$.

Definition 3 (see [25]). Suppose that $F(t)$ is calculated only by $F(t-1)$, and $F(t) = F(t-1) \circ R(t-1, t)$. For any $t$, if $R(t-1, t)$ is independent of $t$, then $F(t)$ is considered a time-invariant fuzzy time series; otherwise, $F(t)$ is time-variant. Assuming $F(t-1) = A_i$ and $F(t) = A_j$, a fuzzy logical relationship (FLR) can be defined as $A_i \rightarrow A_j$, where $A_i$ and $A_j$ are called the left-hand side (LHS) and right-hand side (RHS) of the fuzzy logical relationship, respectively.

2.2. The Algorithm of Yu’s Model

Since in this study we employ the weighted FTS model proposed by Yu [22], this section states it in detail as follows.

Step 1. Defining the universe of discourse and intervals for observations: according to the problem domain, the universe of discourse for observations is defined as $U = [\mathrm{low}, \mathrm{up}]$. The length of the intervals, $l$, is then determined, so that $U$ can be partitioned into equal-length intervals $u_1, u_2, \ldots, u_m$. Each interval can be considered to be $u_i = [\mathrm{low} + (i-1) \cdot l,\ \mathrm{low} + i \cdot l]$, with matching midpoint $m_i = \mathrm{low} + (i - \tfrac{1}{2}) \cdot l$, where $1 \le i \le m$.

Step 2. Defining fuzzy sets for observations: each linguistic observation $A_i$ can be defined on the intervals $u_1, u_2, \ldots, u_m$ as $A_i = f_{A_i}(u_1)/u_1 + f_{A_i}(u_2)/u_2 + \cdots + f_{A_i}(u_m)/u_m$, where $f_{A_i}(u_j) \in [0,1]$. Following the usual convention, each $A_i$ can be denoted as $A_i = 0.5/u_{i-1} + 1/u_i + 0.5/u_{i+1}$, with out-of-range terms dropped at the boundaries.

Step 3. Fuzzifying each observation in the training dataset.

Step 4. Establishing FLRs: two successive fuzzy sets, $A_i$ (at $t-1$) and $A_j$ (at $t$), can be used to create the FLR $A_i \rightarrow A_j$.

Step 5. Establishing fuzzy relationship groups: the FLRs with the same LHS are collected into FLR groups (FLRGs).

Step 6. Forecasting: supposing the FLRG $A_i \rightarrow A_{j_1}, A_{j_2}, \ldots, A_{j_k}$, if $F(t-1) = A_i$, then the forecast of $F(t)$ is $A_{j_1}, A_{j_2}, \ldots, A_{j_k}$.

Step 7. Defuzzifying: assume that the forecast of $F(t)$ is $A_{j_1}, A_{j_2}, \ldots, A_{j_k}$. The defuzzified matrix is equivalent to a matrix of midpoints, $M(t) = [m_{j_1}, m_{j_2}, \ldots, m_{j_k}]$, where $M(t)$ denotes the defuzzified forecast of $F(t)$.

Step 8. Assigning weights: assume the forecast of $F(t)$ is $A_{j_1}, A_{j_2}, \ldots, A_{j_k}$, and let $w_1, w_2, \ldots, w_k$ be the corresponding weights. Before forming the weight matrix, the weights must satisfy $\sum_{i=1}^{k} w_i' = 1$; hence, they are standardized to obtain the weight matrix
$$W(t) = [w_1', w_2', \ldots, w_k'] = \left[\frac{w_1}{\sum_{h=1}^{k} w_h}, \frac{w_2}{\sum_{h=1}^{k} w_h}, \ldots, \frac{w_k}{\sum_{h=1}^{k} w_h}\right],$$
where $w_i'$ is the standardized weight for $A_{j_i}$. Furthermore, the weight matrix is monotonic; therefore, it also satisfies $w_1 \le w_2 \le \cdots \le w_k$.
One intuitive weight scheme, based on Yu's study, assigns $w_i = i$ so that more recent FLRs receive larger weights:
$$W(t) = \left[\frac{1}{\sum_{h=1}^{k} h}, \frac{2}{\sum_{h=1}^{k} h}, \ldots, \frac{k}{\sum_{h=1}^{k} h}\right].$$
Hence, the $i$th item in $W(t)$ can be denoted as $w_i' = i / \sum_{h=1}^{k} h$.

Step 9. Calculating results: in the weighted model, the final forecast is equal to the product of the defuzzified matrix and the transpose of the weight matrix: $\hat{F}(t) = M(t) \times W(t)^{T}$, where "$\times$" is the matrix product operator, $M(t)$ is a $1 \times k$ matrix, and $W(t)^{T}$ is a $k \times 1$ matrix.
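Steps 1–9 above can be sketched in code as follows. This is a minimal in-sample illustration, not the paper's tuned implementation: the interval count, bounds, and forecasting loop are illustrative choices, and the function name is ours.

```python
from collections import defaultdict

def yu_weighted_fts(train, n_intervals=7, low=None, up=None):
    """Sketch of Yu's weighted first-order FTS (Steps 1-9).

    Returns one-step-ahead fitted forecasts for train[1:].
    """
    low = min(train) if low is None else low
    up = max(train) if up is None else up
    l = (up - low) / n_intervals                        # Step 1: equal-length intervals
    mids = [low + (i + 0.5) * l for i in range(n_intervals)]

    def fuzzify(x):                                     # Steps 2-3: crisp value -> A_i
        i = int((x - low) / l)
        return min(max(i, 0), n_intervals - 1)

    states = [fuzzify(x) for x in train]

    flrg = defaultdict(list)                            # Steps 4-5: FLRs grouped by LHS,
    for lhs, rhs in zip(states, states[1:]):            # kept in chronological order so
        flrg[lhs].append(rhs)                           # recurrence raises the weight

    forecasts = []
    for t in range(1, len(train)):
        rhs_list = flrg[states[t - 1]]                  # Step 6: matching FLRG
        k = len(rhs_list)
        denom = k * (k + 1) / 2                         # sum of 1..k
        weights = [(i + 1) / denom for i in range(k)]   # Step 8: w'_i = i / sum(h)
        # Steps 7 and 9: weighted sum of defuzzified midpoints
        forecasts.append(sum(w * mids[j] for w, j in zip(weights, rhs_list)))
    return forecasts
```

Note that recurrent FLRs are deliberately kept (not deduplicated), which is the point of Yu's weighting scheme: later occurrences of the same transition get larger standardized weights.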

3. The Framework of the Proposed Multilayer Stock Forecasting Model

Having discussed the key points in the introduction, we propose a multilayer model that can benefit stock market forecasting using FTS methods. The proposed model contains five logically meaningful layers, as displayed in Figure 1.

Each layer has its specific task to assist forecast process by reconciling certain problems. The details about these layers are as follows.

Layer 1. Data preprocessing layer: in this layer, the aim is to transform the original data into a new domain with less fluctuation or volatility. For instance, detrending the data helps increase forecastability [43, 44]. This layer is meant to stabilize the variance and mean of the data, which have a major impact on forecasting. Likewise, detecting and handling outliers, filtering inconsistent data, and reducing noise are performed in this layer.

Layer 2. Universe of discourse and partitioning: in this layer, the universe of discourse is recognized. In addition, the number of linguistic variables and the number of intervals, or the length of each interval, used in FTS must be exactly determined. There are advanced research works in this area of study, for example, [27, 32, 37, 38, 45, 46]. Previous studies imply that functional development in this layer has a positive influence on the forecast.

Layer 3. FTS: this layer concerns deciding on the proper FTS method for stock data prediction. So far, different FTS algorithms have been adopted to forecast stock data, for example, [15–23, 37]. The more appropriately the FTS method is selected or developed, the more the whole model is enhanced.

Layer 4. Initial forecasting: in this layer, the initial forecast is calculated. Initial forecasts can be improved by using novel defuzzification methods [45], employing additional information inside training datasets [23], applying expert knowledge [47], or assigning appropriate weights in the forecast process [22].

Layer 5. Adaptation: in stock markets, investors usually make their investment decisions based on recent stock evidence, for example, late market news, stock technical indicators, or price fluctuations. Thus, it is logical that investors will modify their forecasts in light of the latest prediction errors [39]. The aim of this layer is to use recent forecasting errors to modify the forecast of the future stock index by employing adaptive expectation models.

In theory, improvement of the entire forecast procedure is promised by concentrating on, and then advancing, each layer separately. The ideal circumstance occurs when every single layer can stand categorically by itself; in other words, development in each layer directly improves forecast accuracy. In practice, interdependencies among layers cannot be removed completely, since the layers interact with each other. Nonetheless, the proposed multilayer model attempts to approximate such an ideal model as closely as possible. Under this hypothesis, researchers are able, based on their interests, to focus exclusively on improving the performance of individual layers. For instance, imagine that layers $L_1, L_2, L_3, L_4, L_5$ were already proposed, corresponding to the five layers of the ideal model, respectively. Suppose that a new FTS algorithm $L_3'$ is developed for the FTS layer; then, in theory, applying the new sequence of layers $L_1, L_2, L_3', L_4, L_5$ can lead to further improvement in forecasting compared with the previous sequence. Guided by this hypothesis, we perform extensive experiments to check the reliability and predictive strength of the proposed model.

4. Data

To illustrate the proposed method, ten years of closing price data of the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and NASDAQ (National Association of Securities Dealers Automated Quotations) from 1990 to 1999, and of the DJI (Dow Jones Industrial Average) and S&P 500 from 2000 to 2009, were chosen as experimental datasets. The first ten months (January–October) of each year were used as training datasets and the remaining two months (November and December) as testing datasets.

5. Empirical Works

As noted above, one of the key assumptions for model development is improving each particular layer separately. In this section, we use our experience, knowledge, and previous findings to propose suggestions that gradually improve the layers. The experiment process involves the following stages.

(1) Data Preprocessing. As stated in the introduction, there are several motives for performing data preprocessing. This part uses the Return on Investment (ROI) concept for data preprocessing. ROI measures the efficiency of an investment and is used to compare the efficiency of a number of different investments:
$$\mathrm{ROI} = \frac{\text{gain from investment} - \text{cost of investment}}{\text{cost of investment}}.$$

Consider a time series $P_t^{(j)}$, which denotes the stock price of a particular item $j$; then we define the daily ROI for this item as follows:
$$R_t^{(j)} = \frac{P_t^{(j)} - P_{t-1}^{(j)}}{P_{t-1}^{(j)}}. \quad (7)$$

Equation (7) provides a strong criterion for investors to decide whether investment in item $j$ is gainful or not. In all stock markets the situation is similar; for example, the TAIEX, NASDAQ, DJI, and S&P 500 indexes can be considered reflections of overall market movement, because these indexes represent the average movement of many individual stocks such as item $j$. With this brief introduction, we start our proposed data preprocessing. Assume $P_t$ is the time series of interest in the stock databases, namely, TAIEX, NASDAQ, DJI, or S&P 500; then we define
$$R_t = \frac{P_t - P_{t-1}}{P_{t-1}}. \quad (8)$$

Hence, our proposed data preprocessing gives us a new time series, $R_t$, in a new domain with less volatility and weaker noise effects; notice and compare Figures 2 and 3.

Thus, for one-step-ahead forecasting, for example at time $t+1$, we employ and compare both the original series $P_t$ and the preprocessed series $R_t$ to emphasize the positive influence of the proposed data preprocessing on forecasting.

(2) Universe of Discourse and Partitioning. To demonstrate the stepwise improvement of the forecast process, we initially employ the effective length of intervals based on Huarng's [27] findings; this study utilizes the average-based length (notice Tables 2 and 3). Then, to refine accuracy further by minimizing the error in fitted values, we search for an optimum length around the average-based length for each year and use it for forecasting; the optimum lengths are also displayed in the tables. After performing data preprocessing based on (8), we use Sturges's [48] formula to calculate the effective lengths (see Tables 2 and 3); it gives the number of intervals simply as $k = 1 + 3.322 \log_{10} N$, where $N$ is the number of members.

Finally, as in the process above, we find the optimum lengths for the preprocessed data and use them to improve the forecasts.
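The two interval-length rules can be sketched as follows. The rounding step in the first function approximates Huarng's magnitude-base lookup table (0.1, 1, 10, 100, ...), and neither function reproduces the per-year optimum search described above:

```python
import math

def average_based_length(series):
    """Approximation of Huarng's average-based length: half the mean
    absolute first difference, rounded to its magnitude base."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    half = (sum(diffs) / len(diffs)) / 2.0
    base = 10.0 ** math.floor(math.log10(half))   # 0.1, 1, 10, 100, ...
    return round(half / base) * base

def sturges_length(series):
    """Effective length from Sturges' rule: the number of classes is
    k = 1 + 3.322 * log10(N); the length is the data range over k."""
    n = len(series)
    k = 1 + 3.322 * math.log10(n)
    return (max(series) - min(series)) / k
```

For a series stepping by 10 (e.g. 0, 10, ..., 100), the average-based length is 5, while Sturges' rule with N = 11 gives about 4.46 classes and hence a length of roughly 22.4.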

(3) FTS. For model illustration, and due to space limitations, we choose only Yu's (2005) first-order FTS algorithm, which is frequently used in stock market forecasting research [15, 21, 22, 49].

(4) Initial Forecasting. In our experiments, if data preprocessing is performed, then the initial forecast is retrieved by the converting operation as follows:
$$\hat{P}_t = P_{t-1} \times (1 + \hat{R}_t).$$

Otherwise, initial forecast is calculated as usual.

In the above equation, $\hat{P}_t$ is the initial forecast at time $t$ and $\hat{R}_t$ is the forecasted value of $R_t$ at time $t$.

(5) Adaptation. In the following experiments, two types of forecast modification are utilized; these adjustments adapt the initial forecast to promote better forecasts. Type I adaptation is adopted from Cheng et al.'s [39] study and follows the adaptive expectation model,
$$\hat{P}_t' = P_{t-1} + \alpha \, (\hat{P}_t - P_{t-1}),$$
and type II is retrieved from Chen et al.'s [15] study, which corrects the initial forecast with the most recent forecast error,
$$\hat{P}_t' = \hat{P}_t + \alpha \, (P_{t-1} - \hat{P}_{t-1}),$$
where $0 \le \alpha \le 1$, $\hat{P}_t$ is the forecasted value, and $P_t$ is the actual value at time $t$.
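Since the exact adjustment equations are not legible in this copy, the sketch below uses the standard adaptive-expectation form for type I and a latest-error correction in the spirit of [15] for type II; the smoothing parameters `alpha` and `beta` are hypothetical defaults and would be tuned on the training set:

```python
def adapt_type1(prev_actual, initial_forecast, alpha=0.5):
    """Type I (after Cheng et al. [39]): adaptive expectation, i.e.
    blend the last actual value with the initial forecast."""
    return prev_actual + alpha * (initial_forecast - prev_actual)

def adapt_type2(initial_forecast, prev_actual, prev_forecast, beta=0.5):
    """Type II (in the spirit of Chen et al. [15]): shift the initial
    forecast by a fraction of the most recent observed error."""
    return initial_forecast + beta * (prev_actual - prev_forecast)
```

With `alpha = 0`, type I reduces to a naive last-value forecast, and with `alpha = 1` it returns the initial forecast unchanged, which is why the parameter is selected to minimize training error.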

5.1. Illustrative Experiments

To examine how the proposed model improves the forecast procedure stepwise, and to show to what extent the existence of each layer is meaningful, we design certain experiments in chronological order. The main idea in these experiments is to restrict attention to the role of three layers, namely data preprocessing, universe of discourse, and adaptation, in functionally improving forecasts within the proposed multilayer model. The influence of the remaining layers on developing the model is of secondary concern.

In the empirical work, we first used unprocessed data without adaptation, and then we employed both the processed data and the adaptation layer. To simplify the demonstration, all experiments are categorized into groups (a) and (b) as follows.

(a) Databases: TAIEX, NASDAQ, DJI, and S&P 500; data preprocessing: none; universe of discourse: for TAIEX and DJI, D1 = 10, and for NASDAQ and S&P 500, D1 = 2; length of interval: both the average-based and the optimum lengths are employed for comparison; adaptation: none, type I, and type II. The results are collected in Table 4.

(b) Databases: TAIEX, NASDAQ, DJI, and S&P 500; data preprocessing: according to (8); universe of discourse: defined over the preprocessed series; length of interval: both the Sturges-based and the optimum lengths are employed for comparison; adaptation: none, type I, and type II. The results are collected in Table 5.

6. Remarks, Findings, and Discussions of the Proposed Model

The root mean square error (RMSE) is a frequently used comparison criterion, which we also utilize for our proposed model. It is defined as follows:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{t=1}^{n} \left(P_t - \hat{P}_t\right)^2}{n}},$$
where $P_t$ and $\hat{P}_t$ are the actual and forecasted values at time $t$ and $n$ is the number of forecasts.
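The RMSE criterion above can be computed as:

```python
import math

def rmse(actuals, forecasts):
    """Root mean square error over the testing period."""
    n = len(actuals)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / n)
```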

In this section, six key results of this empirical study are presented.

(1) The proposed model shifts stock market forecasting using FTS from an algorithm-oriented to a model-oriented perspective, which is a more advanced view.

(2) The results show that the proposed data preprocessing, together with the proposed effective length of intervals, has a major positive effect on forecast accuracy in comparison with using unprocessed (original) data together with the average-based length. Consequently, as expected, the presence of this layer in the proposed model is justified. For instance, while the average of the RMSEs for ten years of TAIEX data was 119.7 (Table 4, first row) using unprocessed data, it dropped by 26.3, from 119.7 to 93.4 (Table 5, first row), when the processed data and the effective length were applied. The corresponding drops for the remaining cases were 15.6, 15.7, 25.1, 16.4, 16.5, 1.6, 0.7, 0.7, 1.8, 1, 1.2, 47, 21.6, 21.3, 47, 25.3, 24.8, 2.6, 1.1, 1.2, 3.5, 2.1, and 2.2. Overall, in 80% of all experiments, the proposed data preprocessing together with the proposed length produced better forecasts.

(3) Between the two adaptation types used in this research (types I and II), type I provided superior RMSE performance: in 50% of all experiments, type I promoted better forecasts than type II. In 28% of all experiments, the two methods produced similar results, and only in 22% of all experiments did type II perform better.

(4) When preprocessed data were applied, the results presented in Tables 4 and 5 reveal that employing Sturges's formula for calculating the effective length of intervals improves forecast accuracy.

(5) The RMSEs in Tables 4 and 5 emphasize that the adaptation layer in the proposed multilayer model has a positive influence on forecast accuracy; therefore, the presence of this layer in the proposed model is reasonable.

(6) Employing the optimum lengths only slightly reduces the RMSEs: in just 56% of the related experiments did using the optimum lengths improve forecasts. Since finding these values is time-consuming, it is up to users whether the extra cost of obtaining them is worthwhile or whether to use the average-based or Sturges-based lengths directly.

7. Conclusions and Future Works

In this study, a five-layer model was proposed for stock market forecasting. The model was established on the assumption that thinking about, improving, and advancing each layer separately will guarantee development of the whole model. To check whether the proposed model is reliable and can promote enhancement in stock market forecasting, we designed experiments and enhanced each layer gradually; the goal was to highlight the roles of the data preprocessing layer, the proposed effective length of intervals, and the adaptation layer. After comparing 480 different results, the multilayer model was shown to be suitable for model development and can therefore be used for stock market forecasting purposes. In short, although presenting a new model is never a definitive proposition, because not everyone will agree on the principles followed, the results show that the proposed model can be considered a standard systematic model whereby stock predictions using FTS can be developed.

For future research, considering the behavior of each layer as discretely as possible will speed up the development of the layers, because their roles might be captured by specifications of the externally observable subsystems. Many questions remain to be answered and many problems remain to be researched to develop an improved version of the proposed model. Critical questions for further studies include the following: in which order should the development of the layers be carried out; which layers contribute most to enhancing the whole model; how can more significant components be added to this model; and where should research efforts be directed to develop it further. In short, based on their proficiencies and interests, researchers can develop the performance of specific layers.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.