Abstract

This work shows that the fuzzy-EGARCH-ANN (fuzzy-exponential generalized autoregressive conditional heteroscedastic-artificial neural network) model does not require continuous model calibration when the corresponding differential evolution (DE) algorithm is used appropriately, whereas models such as GARCH, EGARCH, and EGARCH-ANN need continuous calibration and validation to fit the data to the desired accuracy. The work also provides a robust analysis of volatility forecasting of daily S&P 500 data collected from Yahoo Finance for the period 1/3/2006 to 20/2/2020. To our knowledge, this is the first study that focuses on the daily S&P 500 data using high-frequency data and the fuzzy-EGARCH-ANN econometric model. Finally, the research finds that the best-performing model in terms of one-step-ahead forecasts, based on realized volatility computed from the underlying daily data series, is the fuzzy-EGARCH-ANN (1,1,2,1) model with Student’s t-distribution.

1. Introduction

Mathematical models can provide a cost-effective, objective, and flexible approach for assessing management decisions, mainly when these decisions are strategic alternatives. In some instances, a mathematical model is the only means available for evaluating and testing alternatives. However, for this potential to be realized, models must be valid for the application and must provide credible and reliable results. The process of ensuring validity, credibility, and reliability typically consists of model validation and calibration.

Model calibration is the process of determining to what extent the model user can, or is required to, modify the default input parameter values that describe the underlying mechanics in order to obtain values that better represent the problem at hand. Model users often cannot determine the impact that input parameter values have on the selected model. This may arise from several causes, including a lack of understanding of principles, a lack of understanding of the model, poor model documentation, or a combination of these [1]. In other words, given observed data, calibration adjusts the model parameters until the model reproduces the observations “closely enough.” In fact, perfect accuracy is not possible, since some errors cannot be reduced in the model, e.g., measurement errors inherent in well test results (see Hill and Tiedeman for more on this topic [2]).

Model validation is the process of determining the degree to which a theory, an approach, or a model is a “good enough” representation of reality from the viewpoint of the intended uses of the theory, the approach, or the model [3–5]. It is concerned with quantifying the accuracy of the model by comparing model outputs to the observed data [6]. It is a complex process because the concept of “good enough” involves subjective judgments of what constitutes a reasonable degree of “good enough,” and these judgments differ from one individual to another. Such subjective judgments prevent the construction of a general validation approach for theories, approaches, or models. Therefore, absolute validation is philosophically impossible because it would require an infinite number of tests [3].

A theory, a methodology, an approach, or a mathematical model can never be proven to be valid; instead, we can say that there is not enough evidence to reject them. Therefore, as long as a methodology or an approach does not have sufficient evidence to reject it, we can accept it. Validation relative to a specific series of tests may be legitimate as a basis for making decisions [7]. In these situations, relative validation can be possible when validation involves a comparison of observed events with those predicted by models [8, 9].

Finally, the main objective of this paper is to deal with model calibration and validation of the fuzzy-EGARCH-ANN model. Many scholars have tried to develop hybrid models that fit given real-world data very well. Even though most of these models outperform the base models, they still need continuous model calibration and validation; for such cases, see the references [10–13]. Therefore, we examine whether the fuzzy-EGARCH-ANN model needs continuous model calibration and validation or not.

This paper is organized as follows. The introduction is given in Section 1. The research methodology is presented in Section 2. In Section 3, the results and discussion are given. Section 4 concludes the study.

2. Research Methodology

2.1. Data Source

To achieve the objectives of the study, daily S&P 500 data were collected from Yahoo Finance. As explained in the literature, the study split the dataset into an in-sample and an out-of-sample period. The in-sample dataset was used for initial parameter estimation and model selection, while the out-of-sample dataset was used to evaluate forecast performance. In this study, the in-sample period ranges from 1/3/2006 to 3/5/2019 (800 observations), whereas the out-of-sample period ranges from 4/5/2019 to 20/2/2020 (201 observations).

2.2. Model Specification and Econometric Tests
2.2.1. Unit Root Test of the Variables

For determining whether a time series is stationary or nonstationary, many tests are available. This study used the augmented Dickey–Fuller (ADF) test [14] and the Phillips–Perron (PP) test [15]. Ljung–Box statistics and Lagrange multiplier (LM) tests are the appropriate ARCH effect tests and were used in this study. The Ljung–Box test statistic was used to assess independence among the residuals [16]. When dealing with GARCH-type models, we first examine the characteristics of the unconditional distribution of the stock price. This enables us to explore and explain some stylized facts that exist in financial time series. In statistics, the Jarque–Bera test is a test of departure from normality based on the sample skewness and kurtosis [17].
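To illustrate the last of these tests, the Jarque–Bera statistic can be computed directly from sample moments. The following is a minimal pure-Python sketch (the function name is ours, not from the study); under the null of normality, the statistic is asymptotically chi-squared with 2 degrees of freedom.

```python
import math

def jarque_bera(x):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is the sample skewness and K is the sample kurtosis."""
    n = len(x)
    mean = sum(x) / n
    # central moments of order 2, 3, 4
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

A large statistic (very small p value) signals departure from normality, which is what motivates the Student's t assumption later in the study.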

2.2.2. Fuzzy-EGARCH-ANN Model

As proposed by the authors, the fuzzy-EGARCH-ANN model is described by a collection of fuzzy rules in the form of If-Then statements: the fuzzy rules capture the stock fluctuations with volatility clustering overlooked by the EGARCH-ANN model, while an EGARCH-ANN model captures the asymmetric responses of volatility to positive and negative shocks [18].

The rule of the fuzzy system for the EGARCH-ANN model is described by the following.

Rule j (j = 1, …, R): if the first premise variable belongs to its fuzzy set and the second premise variable belongs to its fuzzy set, then the rule output is given by the corresponding EGARCH-ANN volatility equation with rule-specific parameters.

As given by the authors, the input vector at each instance collects the premise variables, the fuzzy sets describe the stock market return and volatility for each of the rules, and R is the number of rules. The authors assumed that the residual series distribution follows either the Gaussian normal or Student’s t-distribution; if it does not follow the Gaussian normal distribution, it is taken to follow Student’s t-distribution.

The authors described the output of this fuzzy-EGARCH-ANN model as the weighted average of the individual rules, obtained using the fundamental FIS (fuzzy inference system) steps as in [18, 19], as follows.

Step 1 (fuzzification layer). Using the Gaussian membership function, find the grade of membership of each input in its fuzzy set, exp(−(z − c)²/(2s²)), where c is the center and s is the spread of the rule membership function corresponding to the premise variable.

Step 2 (firing strength layer). Find the firing strength of each rule by taking the product T-norm of the antecedent fuzzy set membership grades.

Step 3 (normalization layer). Find the normalized firing strengths, i.e., the ratio of each rule’s firing strength to the sum of all rules’ firing strengths.

Step 4 (consequent and defuzzification layer). Combine the normalized firing strengths with the corresponding rule consequents to produce the model output as the weighted average of the individual rule outputs, where each rule output is given by its EGARCH-ANN consequent. The collection of the R rules assembles the model as a combination of local models. As a result, the exponential of this weighted average gives the predicted stock market return volatility under the fuzzy-EGARCH-ANN model [18].
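The four layers above can be sketched as a generic Takagi–Sugeno forward pass. This is an illustrative sketch only: the centers, spreads, and rule outputs below are placeholders, and the actual rule consequents in the study are EGARCH-ANN equations rather than constants.

```python
import math

def ts_fuzzy_output(z, centers, spreads, rule_outputs):
    """Generic Takagi-Sugeno forward pass:
    1) fuzzification: Gaussian membership grades,
    2) firing strength: product T-norm over the antecedents,
    3) normalization: firing strengths rescaled to sum to one,
    4) defuzzification: weighted average of the rule outputs."""
    R = len(rule_outputs)
    # Steps 1 and 2: Gaussian memberships combined by the product T-norm
    w = []
    for j in range(R):
        strength = 1.0
        for i, z_i in enumerate(z):
            mu = math.exp(-((z_i - centers[j][i]) ** 2) / (2 * spreads[j][i] ** 2))
            strength *= mu
        w.append(strength)
    # Step 3: normalized firing strengths
    total = sum(w)
    w_bar = [wj / total for wj in w]
    # Step 4: weighted average of the rule consequents
    return sum(wb * y for wb, y in zip(w_bar, rule_outputs))
```

With two rules (R = 2, as obtained later via subtractive clustering), the output interpolates smoothly between the two local models depending on how strongly each rule fires.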

2.3. Choosing the Optimal Lag Length and Model Selection Criterion

The optimal lag length for the ARIMA and EGARCH-ANN models was chosen using the ACF plot, the PACF plot, and information criteria. Among the most popular information criteria are the AIC and BIC, which are used to select appropriate models. The PACF of a time series is a function of its ACF and is a useful tool for determining the order p of an AR model, because the PACF cuts off at lag p for an AR(p) process. For MA models, the ACF is useful in specifying the order, because the ACF cuts off at lag q for an MA(q) process [20].
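The sample ACF used for this kind of order identification can be sketched as follows (a textbook definition in pure Python; the function name is ours):

```python
def acf(x, max_lag):
    """Sample autocorrelation function: for each lag k, the lag-k
    autocovariance divided by the lag-0 autocovariance (variance)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n
        out.append(ck / c0)
    return out
```

Reading where these values fall inside the confidence band on the ACF/PACF plots is what suggests candidate orders, which are then compared with AIC and BIC.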

2.4. Parameter Estimation

As explained by Zhang and Sanderson, the ordinary least squares (OLS) method works well (assuming some initial conditions are met), but one assumption that must hold for OLS is that the disturbance term is homoscedastic [21]. However, this is not always a realistic assumption, since in practice the variance is not always constant. Under the presence of ARCH effects, OLS estimation is not efficient. Therefore, the study employed maximum likelihood estimation (MLE) for estimating the unknown parameters of the EGARCH-ANN models.

For the estimation of the fuzzy-EGARCH-ANN model parameters without suffering from local optima, the differential evolution (DE) algorithm with Archive (JADE with Archive) is employed [21]. Furthermore, a step-by-step, self-explanatory summary of the algorithm employed in this study is given in Table 1. All computations in this study were carried out in MATLAB R2018a.
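The full JADE-with-Archive algorithm adapts its control parameters and keeps an archive of discarded solutions; the basic DE/rand/1/bin loop it builds on can be sketched as below. This is a toy sketch with fixed F and CR on a generic objective, not the study's MATLAB implementation.

```python
import random

def de_minimize(f, bounds, pop_size=30, iters=200, F=0.8, CR=0.9, seed=0):
    """Basic DE/rand/1/bin: mutation v = a + F*(b - c), binomial
    crossover, greedy selection. JADE extends this scheme with
    adaptive F/CR and an external archive of inferior solutions."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals different from the target
            a, b, c = rng.sample([x for k, x in enumerate(pop) if k != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = a[j] + F * (b[j] - c[j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Because the population explores the whole bounded parameter domain rather than following a single gradient path, such a scheme is far less prone to the local optima that plague gradient-based MLE.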

In the JADE parameter adaptation, the Lehmer mean of the set S of successful mutation factors is given by mean_L(S) = (Σ_{F ∈ S} F²) / (Σ_{F ∈ S} F).
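As a one-line sketch of this update (generalized to an arbitrary order p; JADE uses p = 2 when adapting the mutation-factor location parameter):

```python
def lehmer_mean(values, p=2):
    """Generalized Lehmer mean L_p = sum(x^p) / sum(x^(p-1)).
    For p = 2 it always dominates the arithmetic mean, biasing the
    adapted mutation factor upward and discouraging premature convergence."""
    return sum(v ** p for v in values) / sum(v ** (p - 1) for v in values)
```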

3. Results and Discussion

3.1. Calibration and Validation of the Fuzzy-EGARCH-ANN Model

This work investigates a global optimization algorithm for the calibration of stochastic volatility models. It is shown that commonly used gradient-based optimization procedures may not lead to a good solution and often converge to a local optimum. Many evolutionary algorithms have been introduced to explore the whole parameter domain thoroughly. The number of iterations, the population size, and other user-defined inputs influence solution quality and computation time. Practical evidence from the literature shows that the evolutionary algorithm approach outperforms standard gradient-based methods.

When using optimization techniques to find model parameters that minimize the error between the stock’s model prices and market prices, the calibration’s success can substantially depend on the optimization algorithm. The calibration performed with an unsuitable algorithm can lead to parameter instability. As will be demonstrated, the optimization method used to calibrate a model can become as crucial as the model itself. A loss function describes the difference between empirical and model prices. For the same reason mentioned by the authors [18], this study selected the most commonly used loss function, the so-called mean squared forecast error (MSFE), and used it for the fuzzy-EGARCH-ANN model calibration and validation.

Generally, numerical models and field data are imperfect representations of the real world. The goal is to obtain a reasonably accurate representation that is at least internally consistent. This consistency is achieved by calibrating or adjusting the model until errors are minimized, as measured by qualitative or quantitative means. Figure 1 describes the general model calibration and validation of the GARCH-type models.

3.1.1. Simulation Model Setup

This step consists of the definition of study scope and purpose, determination of the performance measure, field data collection and reduction, and network coding [1, 9].

3.1.2. Initial Evaluation

This step tests whether the default calibration parameters are acceptable by comparing the distribution of a selected performance measure [1, 9].

Since the parameter estimation technique for both the EGARCH and EGARCH-ANN models is MLE, which is a gradient-based optimization method [18], it might not give a globally optimal solution to these volatility models’ parameter estimation problems. This is why the study performs model calibration using evolutionary algorithms.

In the estimation of the EGARCH-ANN model parameters, as explained by the authors, the study is only interested in the best orders of the EGARCH-ANN model, not in the parameter values themselves [18]. As a result, at the stage of EGARCH-ANN parameter estimation, we are not required to perform model calibration: calibration adjusts parameters toward values that reflect reality after the model orders have been identified, and performing calibration for the EGARCH-ANN model cannot change the fitted model’s orders. Therefore, in our case, there is no need to perform model calibration during EGARCH-ANN model fitting.

As explained by the authors, the parameter estimation method for the fuzzy-EGARCH-ANN model is one of the evolutionary algorithms, the so-called differential evolution (DE) algorithm, which has advantages over other evolutionary algorithms [18, 21]. By itself, it gives a globally optimal solution to the optimization problem of fuzzy-EGARCH-ANN parameter estimation [18]. Its accuracy depends on user-provided inputs such as the number of iterations and the population size. The recommended number of iterations for JADE is 500, and the recommended population size is between 5D and 10D, where D is the dimension of the parameter space of the fuzzy-EGARCH-ANN model [18, 21, 22]. With the recommended number of iterations and population size, the JADE with Archive algorithm can provide globally optimal parameters of the fuzzy-EGARCH-ANN model. Therefore, beyond using the recommended number of iterations and population size, there is no need to perform continuous model calibration for the fuzzy-EGARCH-ANN volatility model. To justify this, we apply the model to one of the well-known financial time series datasets.

3.2. The Fuzzy-EGARCH-ANN Model Applications to the S&P 500 Data

We applied the fuzzy-EGARCH-ANN model to the S&P 500 time series data to show that the fuzzy-EGARCH-ANN model, with its parameter estimation method, does not need continuous model calibration and validation.

The plot of the daily S&P 500 data for the period 1/3/2006 to 20/2/2020 is shown in Figure 2. From this figure, it is evident that the unconditional mean and variance change over time and that the series has an increasing trend. The changing mean and variance of the daily S&P 500 data over time indicate the nonstationarity of the level series, which makes it challenging to fit ARIMA and ARCH/GARCH-type models directly. Therefore, to achieve stationarity, a logarithmic transformation was applied.
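The transformation used to move from nonstationary level prices toward a stationary series is the log return; a minimal sketch (the prices in the test are illustrative, not actual S&P 500 values):

```python
import math

def log_returns(prices):
    """Log return r_t = ln(P_t / P_{t-1}); differencing the log levels
    removes the trend and stabilizes the variance of the level series."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
```

The resulting return series is what the ADF/PP stationarity tests and the subsequent ARCH-effect tests are applied to.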

Table 2 presents the findings of the ADF and PP tests and formally confirms that the natural-logarithm-transformed series is stationary.

Table 2 also portrays the ARCH effect tests on the residuals of the ARIMA (1, 0, 0) model. These results show the presence of ARCH effects in the daily S&P 500 data, suggesting that ARCH/GARCH-type modeling is appropriate for the dataset. The basic statistical properties of these data are likewise given in Table 2.

Note the following. A mean below the median usually flags left skewness. Negative skewness means left-skewed data. Kurtosis above that of the Gaussian means greater peakedness. An ARCH test result with a very small p value implies strong heteroscedasticity. A Jarque–Bera test result with a very small p value implies that the residual series strongly deviates from normality, which motivates the Student’s t-distribution.

Furthermore, Figures 2–5 show the daily S&P 500 data series, the log return series, the residual series, and the histogram of the residual series, respectively.

Now, using the first 800 data points (that is, T = 800), the study obtained the following results. Since the residual series was identified as following Student’s t-distribution, the best-fitted model to these selected data was obtained under this distribution. Its fitness summary is presented in Table 3.

Additional information: log-likelihood = 2956; Akaike information criterion (AIC) = −5900; Bayesian information criterion (BIC) = −5872; Hannan–Quinn information criterion (HQIC) = −5889.

Now, to estimate the fuzzy-EGARCH-ANN model parameters, the number of rules (R) and the realized volatility must be determined. In our case, as mentioned earlier, R is determined by the subtractive clustering algorithm (SCA), and the realized volatility at time t is taken as the square root of the squared residual at time t, i.e., the absolute value of the residual. R = 2 is obtained, and the graph of the realized volatility is given in Figure 6.

Therefore, using JADE with Archive, the estimated parameter vector is given in Table 4, together with the corresponding mean squared forecast error (MSFE) and MAE values.

Figure 7 presents a summary and analysis of the MSFE obtained by JADE with Archive during the parameter estimation of the proposed model.

Finally, Figures 8 and 9 present a summary of estimated volatility by the proposed model for T = 800 and validation for T = 1000, respectively. Here, the validation dataset refers to the last 200 data points.

Note. The mean squared forecast error (MSFE) for this validation is 2.6045e−05, which is very close to the MSFE obtained during parameter estimation. Its corresponding MAE value is 0.0036. Hence, the model fits the data very well. As a result, the forecasted values for different periods shown in Table 5 were obtained.
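The two validation criteria used above can be computed as follows (generic definitions of MSFE and MAE; the function and variable names are ours):

```python
def msfe(actual, forecast):
    """Mean squared forecast error over paired actual/forecast values."""
    n = len(actual)
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n

def mae(actual, forecast):
    """Mean absolute error over paired actual/forecast values."""
    n = len(actual)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / n
```

Comparing the out-of-sample MSFE against the in-sample MSFE, as done above, is the validation check: a close match indicates the fitted parameters generalize beyond the estimation window.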

4. Conclusions

This study provides a robust analysis of volatility forecasting of daily S&P 500 data spanning 1/3/2006 to 20/2/2020. The forecasting performance of the semiparametric nonlinear fuzzy-EGARCH-ANN model is investigated based on forecasting performance criteria such as MSE- and MAE-based tests and alternative measures of realized volatility. The fuzzy-EGARCH-ANN (1,1,2,1) model with Student’s t-distribution is found to perform best in terms of one-step-ahead forecasting. To our knowledge, this is the first study that focuses on daily S&P 500 data using high-frequency data and the fuzzy-EGARCH-ANN econometric model. From the results and discussion section and the concepts of calibration stated in this study, it is concluded that there is no need for continuous model calibration of the proposed model as long as the practitioner uses the corresponding DE algorithm suitably. Finally, apart from its computational time, the model is novel.

Data Availability

The daily S&P 500 data used to support the findings of this study were taken from Yahoo Finance.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Geleta T. Mohammed was responsible for methodology. Jane A. Aduda and Ananda O. Kube were responsible for conceptualization and supervision. All authors have read and approved the final manuscript.