Abstract

The objective of this paper is to test for nonlinear dependence in the GARCH residuals of a number of asset classes using nonlinear dynamic tools. The equity and bond market samples appear to be independent once GARCH has been applied, but evidence of nonlinear dependence in the CDS GARCH residuals is found. The sensitivity of this result is analysed by changing the specifications of the GARCH model, and the robustness of the result is verified by applying additional tests of nonlinearity. Evidence of nonlinear dependence in the GARCH residuals of CDS contracts has implications for the accurate modeling of the marginal distribution of the CDS market, for pricing of CDS contracts, for estimating risk neutral default probabilities in the bond market, as well as for bond market hedging strategies.

1. Introduction

The objective of this paper is to test for nonlinear dependence in the GARCH residuals of a number of financial time series including credit default swaps. It has been suggested that the residuals of the GARCH model will follow a defined stochastic data generating process [1]. As a result of this, GARCH has been applied to financial time series before the application of quantitative risk estimation techniques such as value at risk [2]. The focus of this paper is particularly on credit default swaps (CDS), as they have been highlighted as a potential source of systemic risk [3], and as such, the marginal distribution of the credit default swap market merits further analysis. The lack of univariate analysis of the CDS market is noted, as is the lack of explicit testing of the GARCH residuals of financial data, particularly in the CDS market, for nonlinear dependence. For an example of testing for nonlinear dependence in the exchange rate market, see Kyrtsou and Serletis [4] or Serletis et al. [5], or in the oil futures markets, see Moshiri [6]. The results are mixed, with some evidence of nonlinear dependencies remaining in the data. For an example of the analysis of the CDS GARCH residuals, see Li and Mizrach [7]. Li and Mizrach [7] apply a univariate model to CDS data using the GARCH model but do not test the residuals for iid characteristics.

In general, the existing literature often places the CDS in a dependency model and analyses inter-asset or cross-asset dependencies such as correlations [8–12]. The copula function is one such model; this model assumes that after a transformation, the variables under analysis follow a multivariate probability distribution, most commonly the Gaussian [13], the Student t, the Gumbel [14, 15], or the exponential probability distribution [7]. Should the marginal distribution of the CDS data be shown to be significantly nonlinear and non-iid, even after the application of a GARCH model, we can no longer assume that the data can be described by a probability distribution function without further modeling.

It is commonplace in industry to use risk neutral default probabilities from bond spreads to price CDS contracts and, in recent times, to use CDS prices to imply risk neutral default probabilities in the bond market [16]. CDS contracts are also commonly used to hedge bond positions. This paper analyses the distribution of both bond and CDS contracts and shows significant differences in the dependency structures of each. This result questions the efficacy of current methods of modeling and pricing CDS contracts, of estimating implied default probabilities, and of developing bond market hedging strategies. Further research in this field is required.

The objective of the paper is to test for nonlinear dependence in the GARCH residuals of a number of asset classes, using nonlinear dynamic tools. In Section 2, we outline the general principles for applying the BDS test to the GARCH residuals of the three asset classes, equities, bonds, and CDS. In Section 3, we describe the data samples to be used. In Section 4, we apply the BDS test to the residuals of the GARCH (1,1) model. In Section 5, we vary the specifications of the GARCH model to assess the consistency of our results. In Section 6, we conduct some additional tests of nonlinearity to ensure that the BDS result is robust. We use time-delay embedding or phase portraits and the correlation dimension test to verify whether the nonlinear process is stochastic or deterministic. Finally, in Section 7, we conclude the paper with a brief discussion of the implications of our findings.

2. Test for Nonlinearity in the GARCH Residuals

The BDS test is the first test of nonlinearity we will apply to the data. Following Brock et al. [17], the BDS test can be applied to any time series with n observations. The data is initially transformed into the first difference of the natural logarithm. The null hypothesis of the BDS test is that the data is drawn from an iid process. The test uses the concept of the correlation integral and is a nonlinear test of independence. For a selected value of m, the time series is embedded into m-dimensional vectors; thus, the series of scalars is converted to a series of vectors with overlapping entries. A correlation integral is calculated, as below:

C_{m,n}(ε) = (2 / (n(n−1))) Σ_{t,s: s<t} I_[0,ε](‖X_t^m − X_s^m‖),  (1)

such that I(·) denotes the indicator function, which takes either the value of 0 or 1 according to

I_[0,ε](s) = 1 if s ∈ [0,ε], and 0 if s ∉ [0,ε],  (2)

and ‖·‖ denotes the supremum norm given by

‖u‖ = sup_{i=1,…,m} |u_i|.  (3)

This integral estimates the spatial correlation for a particular embedded dimension m. It estimates the probability that, for the dimension m, the vector distance ‖x − y‖ is less than or equal to ε, a predetermined distance. Note that

lim_{n→∞} C_{m,n}(ε) = C_m(ε).  (4)

If the data is independent, then this implies that

C_m(ε) = [C_1(ε)]^m.  (5)

This follows from considering

C_m(ε) = P[‖X − Y‖ ≤ ε] = P[|X_1 − Y_1| ≤ ε, …, |X_m − Y_m| ≤ ε] = P[|X_1 − Y_1| ≤ ε] × ⋯ × P[|X_m − Y_m| ≤ ε] = [C_1(ε)]^m.  (6)

The above proof from Diks [18] assumes that the marginal probabilities are independent; if this is the case, the correlation integral should be scalable, and thus C_m(ε) = [C_1(ε)]^m. The standardized BDS test statistic is

T = √n (C_m(ε) − [C_1(ε)]^m) / σ_m(ε).  (7)

Brock et al. [17] show that the statistic is asymptotically normally distributed, N(0,1).
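The correlation integral (1) and the iid scaling property (5) can be sketched in a few lines of code. This is a simplified illustration, not a full BDS implementation, which would also require the variance estimator σ_m(ε) appearing in (7):

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Estimate C_{m,n}(eps): the fraction of pairs of m-histories whose
    supremum-norm distance (equation (3)) is at most eps (equation (1))."""
    n = len(x) - m + 1
    # Embed the scalar series into overlapping m-dimensional vectors.
    emb = np.column_stack([x[i:i + n] for i in range(m)])
    count = 0
    for t in range(1, n):
        # Supremum-norm distance between X_t^m and all earlier X_s^m.
        dist = np.max(np.abs(emb[t] - emb[:t]), axis=1)
        count += np.sum(dist <= eps)
    return 2.0 * count / (n * (n - 1))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)        # iid sample: the BDS null hypothesis holds
eps = 0.7 * x.std()                  # the default eps/sigma = 0.7 used later
c1 = correlation_integral(x, 1, eps)
c3 = correlation_integral(x, 3, eps)
# Under iid, C_m(eps) ~ [C_1(eps)]^m (equation (5)); the BDS statistic (7)
# scales this difference by sqrt(n)/sigma_m(eps).
print(abs(c3 - c1 ** 3))             # should be close to zero
```

For dependent data the embedded vectors cluster, so C_m(ε) exceeds [C_1(ε)]^m and the statistic grows.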

For a nonrandom process, two initially close time paths are likely to remain close; thus, the process will have a larger conditional probability than an iid process and, hence, a larger correlation integral. It is claimed that applying a GARCH-type model to a time series removes the dependence in the data, so that the resulting residuals are iid. However, this claim has not been proven conclusively for all the asset classes in the financial markets; hence, it is our objective to verify it in this paper. To that end, as a first step, we apply the BDS test to the residuals of the GARCH model. If the null is accepted, we can conclude that the data is iid and, thus, that there is no nonlinear dependence in the residuals. This would allow the application of a probability distribution function (pdf). If the null is rejected, the residuals cannot be said to be iid, and the conditions required to fit a pdf to the data do not apply.

When applying the BDS test, general principles in the choice of the value of the embedded dimension 𝑚, the value for the length 𝜖 and the sample size 𝑛 must be agreed. In general, for a small sample, if 𝑚 is too large, there will be insufficient nonoverlapping independent points [19]. If 𝑚 is too small, the analysis may not pick up the higher dimensional dependencies. Thus, it is customary to apply a range of values for 𝑚, usually two to five dimensions to take account for the bias-variance tradeoff as 𝑚 increases [20].

In general, ε should be kept small for time series data, as too large an ε will lead to a high value for the correlation integral, and this will reduce the power of the test. This is because the correlation integral measures the relative frequency with which two data points are within distance ε of each other. If there is volatility clustering, then large shocks are likely to be followed by relatively large shocks. The probability that the distance |X_t − X_s| is large while |X_{t+1} − X_{s+1}| is small will be lower than expected under the null. If, in this case, ε is large, these low probability regions will be included in the estimates of the correlation integral, as a result of which the power of the BDS test decreases. Diks [18] shows that when applying the BDS test to a time series of daily returns on IBM equity (n = 1013), the most significant results are found when ε is set equal to one standard deviation. Brock and Sayers [20] use Monte Carlo simulations to develop general principles, and they conclude that ε/σ should range from 0.5 to 2, depending on the value of the embedded dimension m, and that in general, if n/m > 200, the BDS statistic will follow a standard normal distribution.

We will start with a value of ε/σ = 0.7, the default value, and allow the embedded dimension to vary from two to five. If in our samples n/m > 200, we can assume that the BDS statistic will follow a standard normal distribution. If n/m < 200, then Brock and Sayers [20] provide several tables of quantiles of the BDS statistic for these smaller samples. As our smaller data samples (i.e., the CDS samples) have fewer than 400 observations, we use the closest available tables, those for n = 250, with values of ε/σ = 0.5 and ε/σ = 1. We note that in the tables, the BDS test statistic values change as m increases and are asymmetric.

As our initial objective is to use the BDS test to detect nonlinear dependence in the GARCH residuals of financial time series, we must first apply the GARCH model to remove linear dependence and then apply the BDS test statistic to the GARCH residuals. Thus, the model applied in this paper is similar to that in Bollerslev [1], as below:

r_t = μ_t + ε_t,  (8)
ε_t = σ_t z_t,  (9)

such that

z_t ∼ N(0,1),  (10)
σ_t² = α_0 + α_1 ε_{t−1}² + β σ_{t−1}².  (11)

In the mean equation (8), we take r_t to be the logged difference in the daily closing prices of the financial instrument. The returns are modelled to fluctuate around an expected value μ_t. The standardized residuals of the model follow a Gaussian distribution and thus must be iid. The GARCH model itself is also characterized by dynamic variance. As we see in (11), the variance is an autoregressive function of the previous time lag's variance, the previous squared residual, and a constant. Serletis et al. [5] raise the issue of the effect of imposing the GARCH model on the sample prior to applying the BDS test. They discover evidence of episodic nonlinearity in a group of exchange rate series and question whether this episodic nonlinearity is caused by inappropriate volatility modeling rather than being a characteristic of the raw data. Thus, to reduce this risk, there are a number of conditions which must be met before applying the BDS test to the GARCH residuals.
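The recursion (8)–(11) can be sketched as a filter that produces the standardized residuals z_t from a return series. The parameter values below are illustrative stand-ins, not estimates from our data; in practice the coefficients are obtained by maximum likelihood:

```python
import numpy as np

def garch_filter(r, mu, alpha0, alpha1, beta):
    """Run the GARCH(1,1) recursion (8)-(11) over a return series and
    return the standardized residuals z_t = eps_t / sigma_t."""
    eps = r - mu                       # residuals from the mean equation (8)
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()              # initialise at the sample variance
    for t in range(1, len(eps)):
        # Variance recursion (11): constant + ARCH term + GARCH term.
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return eps / np.sqrt(sigma2)

rng = np.random.default_rng(1)
r = 0.0005 + 0.01 * rng.standard_normal(1000)   # toy return series
z = garch_filter(r, mu=0.0005, alpha0=5e-6, alpha1=0.05, beta=0.90)
```

If the model is adequate, z should be approximately iid N(0,1), which is exactly what the BDS test then examines.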

Brock et al. [17] advise the use of the BDS test as a diagnostic tool for ARCH or GARCH as long as the mean term is small. In general, when analysing financial data, we can accept that this condition holds. This is shown in Table 1 in the results section below.

Caporale et al. [21] consider the “nuisance parameter free property” of the BDS test statistic, that is, “its asymptotic distribution does not change when it is applied to the estimates of the residuals from a model, rather than the raw material itself” [21, page 2]. De Lima [22] shows that the BDS test statistic has this invariance property when applied to linear additive models or models that can be cast in this format. Looking at (8) and (9) above, we can see that the residuals of the GARCH model are not additive. Caporale et al. [21] suggest that transforming the residuals of the GARCH model to log-squared standardized residuals, before applying the BDS test, will ensure that the errors are additive, that is,

v_t = ln z_t² = ln ε_t² − ln σ_t².  (12)

Also, Caporale et al. [21] carry out Monte Carlo simulations of the standardized residuals of a GARCH (1,1) model and show that the bias of the BDS test statistic reduces as the sample size increases and that the statistic becomes consistent for sample sizes over 200. They also show that the results are not generally affected by the moment properties of the innovations. Thus, when applying the BDS test to GARCH residuals, we need to transform the residuals and ensure a sample size greater than 200 in order to assume the invariance property of the test. As the test is not sensitive to the moment properties of the innovations, we will assume that the standardized residuals z_t follow a Gaussian distribution.
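The transformation (12) is a one-liner. In this sketch, z is a stand-in for the standardized residuals of a fitted GARCH model:

```python
import numpy as np

# Caporale et al.'s transformation (12): the log-squared standardized
# residuals are additive, so the BDS test retains its nuisance-parameter-
# free property when applied to them.
rng = np.random.default_rng(2)
z = rng.standard_normal(500)      # stand-in for GARCH standardized residuals
v = np.log(z ** 2)                # v_t = ln z_t^2 = ln eps_t^2 - ln sigma_t^2
```

The BDS test is then applied to v rather than to z directly.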

3. Data

In this paper, we compare eight data samples from the international equity, bond, and CDS markets. All samples chosen represent diversified portfolios of one of the three asset classes being analysed. This is to ensure that the results reflect the systematic characteristics of each asset class rather than the idiosyncratic characteristics of an individual equity, bond, or CDS. The four international equity samples are all market indices, the FTSE 100, the DAX, the Nikkei 225, and the S&P 500. They represent four geographical regions, the UK, Germany, Japan, and the USA, respectively. The S&P 500 Index sample comprises 14,821 observations from 3rd January 1950 to 26th November 2008; the FTSE 100 Index sample, 6,228 observations from 2nd April 1984 to 25th November 2008; the DAX Index sample, 4,544 observations from 26th November 1990 to 27th November 2008; and finally the Nikkei 225 Index sample, 6,127 observations from 4th January 1984 to 27th November 2008. The bond fund chosen is the Barclays Capital Pan-European Aggregate Bond Index, with a sample size of 2,512 observations running from 30th March 2000 to 16th November 2009. This bond index is chosen as it represents a broad spread of European corporate and government bonds and should give a general reflection of the characteristics of bond market returns. For all of these samples, n/m > 200, and thus, the BDS statistic follows a standard normal distribution.

Credit default swaps offer the buyer protection against the credit default of an underlying entity for a fixed annual premium known as the spread [23]. It is the spread we are analysing in this paper. The three samples are taken from the family of iTraxx European indices. These indices are chosen due to their diversified nature and liquidity, which should improve the robustness of the findings for the CDS market in general. The index is made up of the credit default swaps of 125 investment grade corporate European entities. The entities are chosen for their liquidity and from a number of underlying industries as follows:

(i) 30 Autos & Industrials,
(ii) 30 Consumers,
(iii) 20 Energy,
(iv) 20 TMT,
(v) 25 Financials.

Each entity is equally weighted within the index. The index has been subdivided into a number of tranches to reflect total recovery rate as well as maturity. In the case of a credit event, the protection seller pays (1 − recovery rate) of the defaulted issue to the protection buyer [24]. CDS sample 1 is taken from 3rd August 2007 to 18th February 2009 and represents the iTraxx 5-year index. CDS sample 2 is taken from 20th March 2006 to 29th June 2007 and represents the iTraxx 12–22% recovery rate 5-year index. CDS sample 3 is taken from 20th March 2006 to 22nd June 2007 and represents the iTraxx 22–100% recovery rate 5-year index.

As the CDS indices are reviewed every 18 months, the number of observations in each sample is significantly less than for the equity and bond indices; there are fewer than 400 observations in each data sample. Thus, with these samples, n/m < 200, and we will apply the BDS statistic tables from Brock and Sayers [20]. As the sample sizes are small, we will also apply bootstrapping techniques to the probability estimates.

4. Results

As is common in applications of the GARCH model, we apply the parsimonious GARCH (1,1) model in all cases; see Table 1. We note that the estimates of the parameters α_1 and β sum to close to 1 for the equity and bond market samples; this implies persistence in the volatility. Referring to the Bollerslev constraints, the residual process ε_t is covariance stationary if and only if the sum of the estimates of the parameters is less than 1 [25]. It is noted that the coefficients for CDS sample 1 sum to a smaller number and that the β coefficient is insignificant. For CDS sample 2, the sum of the coefficients is slightly more than 1, which would indicate a nonfinite variance [26]. CDS sample 3 behaves similarly to the equity and bond samples. Thus, there is some evidence that the CDS data may be behaving differently. We note the relatively low values of the log likelihood for the CDS data samples. The ARCH LM test was applied to all the GARCH residuals. The hypothesis of no serial correlation in the GARCH residuals is accepted if the nR² statistic is less than the critical value of 6.63 (for one lag). Acceptance of the null hypothesis indicates that linear dependence has been removed through the application of a GARCH (1,1) model. The null was accepted in all cases except for the DAX market. For the DAX, a GARCH (1,2) model was fitted; the sum of the coefficients was 0.9866. In this case, the ARCH LM test was accepted (nR² = 0.0248, p = 0.8749). Thus, the residuals of the GARCH (1,2) process were used for the DAX sample when applying the BDS test below.
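The ARCH LM test used above can be sketched as an auxiliary regression of squared residuals on their own lags, with nR² compared against a χ² critical value (6.63 at the 1% level for one lag, as in the text). This is a minimal illustration, not the implementation used to produce Table 1:

```python
import numpy as np

def arch_lm(resid, lags=1):
    """ARCH LM test: regress squared residuals on their own lags and
    return the n*R^2 statistic (chi-squared with `lags` df under the null
    of no ARCH effects)."""
    e2 = resid ** 2
    y = e2[lags:]
    # Design matrix: a constant plus `lags` lagged squared residuals.
    X = np.column_stack([np.ones(len(y))] +
                        [e2[lags - k - 1:len(e2) - k - 1] for k in range(lags)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_reg = y - X @ coef
    r2 = 1.0 - resid_reg.var() / y.var()
    return len(y) * r2

# Squared residuals that trend deterministically are perfectly predictable
# from their own lag, so n*R^2 is large and the no-ARCH null is rejected.
print(arch_lm(np.sqrt(np.linspace(1.0, 2.0, 100))) > 6.63)   # True
```

For iid residuals, by contrast, R² is close to zero and the statistic falls well below the critical value.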

Before applying the test, the GARCH residuals are first transformed to log-squared standardized residuals [21] to ensure additivity. The test is applied to the eight data samples. Brock et al. [17] state that the null hypothesis of iid residuals is accepted if the z statistic is less than the critical value of 1.96 in absolute value, for a two-tailed 5% significance level, for samples where n/m > 200. ε was set equal to 0.7σ, and the embedded dimension m was allowed to vary from 2 to 5. The null hypothesis was accepted for all the equity samples and the bond sample; see Table 2.

For the CDS sample residuals, as n/m < 200, more care was needed. Caporale et al. [21] conclude that when the sample size is within the 250–500 range, the estimates are still efficient. But as Brock and Sayers [20] highlight, the BDS statistic no longer follows the standard normal distribution, and the BDS tables must be used. Tables 3 and 5 are taken from Lin [19] and are for 250 observations, the sample size closest to our data samples. The BDS test statistic has been calculated for a number of values of ε/σ; Table 3 gives the critical values for ε/σ = 0.5. The statistic is asymmetric, and thus, when we apply the 5% two-tailed test, we use the critical values for the 97.5% quantile and 2.5% quantile, respectively.

As the critical values are calculated for m ranging from 2 to 5, we will use this range when assessing the CDS data. To double check the results and to improve the robustness of the test statistic, bootstrapped p values are also calculated; see Table 4. For CDS 1, the null is rejected for the third and fourth embedded dimensions. For the other two CDS samples, the null hypothesis is rejected in all cases. These results are supported by the bootstrapped p values (* denotes rejection of the null at the 5% significance level).
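The logic of a bootstrapped p value can be sketched as follows. Since the paper does not detail its bootstrap scheme, this is an assumed (and common) design: resampling with replacement imposes the iid null, and the observed statistic is compared against the resampled distribution. The lag-1 autocorrelation is used here as a cheap stand-in statistic in place of the more expensive BDS statistic:

```python
import numpy as np

def bootstrap_pvalue(x, statistic, n_boot=500, seed=0):
    """Two-sided bootstrap p value: resample the series with replacement
    (destroying temporal dependence, i.e. imposing the iid null) and
    compare the observed statistic with the resampled distribution."""
    rng = np.random.default_rng(seed)
    obs = statistic(x)
    null = np.array([statistic(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return np.mean(np.abs(null) >= np.abs(obs))

def lag1_autocorr(x):
    # Stand-in statistic: first-order sample autocorrelation.
    xc = x - x.mean()
    return np.sum(xc[1:] * xc[:-1]) / np.sum(xc ** 2)

rng = np.random.default_rng(5)
ar = np.empty(300)
ar[0] = rng.standard_normal()
for t in range(1, 300):                  # strongly dependent AR(1) series
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()
print(bootstrap_pvalue(ar, lag1_autocorr) < 0.05)   # True: dependence detected
```

The same resampling loop applied to the BDS statistic yields the bootstrapped p values reported in Table 4.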

Due to the inconclusive results for CDS 1, the test was applied again at 𝜖/𝜎=1; see Table 5.

At 𝜖/𝜎=1, the null hypothesis is rejected in all CDS samples, see Table 6. These results are supported by the bootstrapped 𝑝 values (the authors carried out similar tests on single entity CDS samples with larger sample sizes >1000 and found similar results. The results of these tests are available on request from the corresponding author).

5. Sensitivity Analysis

Considering the results of Section 4, in this section we aim to test the sensitivity of the results to the ARCH type model chosen. As there are two parts to an ARCH type model, that is, the mean equation and the variance equation, we will vary the mean equation specifications and the variance equation specifications in order to test the robustness of the results. Four alternative ARCH type models are applied to the data before carrying out the BDS test on the respective residuals. As above, the BDS test is applied to the log-squared standardized residuals of each of the ARCH type models. For the CDS samples, as n/m < 200, bootstrapping techniques are applied to the probability estimates.

The ARCH type models selected include the AR GARCH model with mean equation

r_t = μ_t + θ r_{t−1} + ε_t.  (13)

This allows for autocorrelation in returns. The variance equation is as above (11). As evidence of autocorrelation in returns is found in some of the samples, that is, the S&P 500, the bond fund, and the three CDS indices, all further models below include the AR(1) component.

Secondly, the GARCH-in-mean or GARCH-M model is applied with mean equation

r_t = μ_t + θ r_{t−1} + λ σ_t + ε_t.  (14)

As well as allowing for autoregressive dependence, the GARCH-M model also introduces the standard deviation as an independent variable in the mean equation. The estimated coefficient λ indicates the risk-return tradeoff on the asset. The variance equation remains the same as (11).

Thirdly, the threshold GARCH or TGARCH model is applied with a mean equation as above (13) and with the variance equation

σ_t² = α_0 + α_1 ε_{t−1}² + β σ_{t−1}² + γ ε_{t−1}² I_{t−1},  (15)

where I_{t−1} = 1 if ε_{t−1} < 0 and 0 otherwise. This model allows good news and bad news to have an asymmetric effect on the conditional variance.

The fourth model is the exponential GARCH or EGARCH model, with a mean equation as above (13) and with the variance equation

log σ_t² = α_0 + α_1 (|ε_{t−1}/σ_{t−1}| − E|ε_{t−1}/σ_{t−1}|) + β log σ_{t−1}² + γ (ε_{t−1}/σ_{t−1}).  (16)

This model allows the asymmetric effect to be exponential. Detailed discussions of all these models can be found in Jondeau et al. [25]. (As in Section 4, the GARCH (1,2) model was used for the DAX index in all four models in this section, and GARCH (1,2) was also used for the S&P 500 in the TGARCH and EGARCH models, as these were better fits to the data.)
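The two asymmetric variance recursions (15) and (16) can be illustrated with single-step updates. The parameter values are hypothetical, chosen only to show the leverage effect:

```python
import numpy as np

def tgarch_step(eps_prev, sigma2_prev, a0, a1, beta, gamma):
    """One step of the TGARCH variance recursion (15): the indicator adds
    gamma*eps^2 only after bad news (eps_{t-1} < 0)."""
    ind = 1.0 if eps_prev < 0 else 0.0
    return a0 + a1 * eps_prev ** 2 + beta * sigma2_prev + gamma * eps_prev ** 2 * ind

def egarch_step(eps_prev, sigma2_prev, a0, a1, beta, gamma):
    """One step of the EGARCH recursion (16) for log sigma_t^2.
    E|z| = sqrt(2/pi) for a standard normal innovation."""
    z = eps_prev / np.sqrt(sigma2_prev)
    log_s2 = (a0 + a1 * (abs(z) - np.sqrt(2.0 / np.pi))
              + beta * np.log(sigma2_prev) + gamma * z)
    return np.exp(log_s2)

# Illustrative parameters (not the paper's estimates): a negative shock
# raises next-period variance more than a positive shock of the same size.
hi = tgarch_step(-0.02, 1e-4, 1e-6, 0.05, 0.90, 0.10)
lo = tgarch_step(+0.02, 1e-4, 1e-6, 0.05, 0.90, 0.10)
print(hi > lo)   # True: the asymmetric (leverage) effect
```

In EGARCH the same asymmetry is produced by the γ z term, with γ < 0 amplifying the response to negative shocks.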

Table 7 gives the BDS test statistics and the probability estimates for the two GARCH models with alternative mean equations, that is, AR GARCH and GARCH-M. Table 8 gives the BDS test statistics and the probability estimates for the two GARCH models with alternative variance equations, that is TGARCH and EGARCH. As before, * indicates rejection of the null hypothesis at a 5% level (the estimates of the coefficients of each of the models, the log likelihoods, the ARCH LM test results, and so forth are available on request from the corresponding author).

In general, the results are robust for the equity and bond samples and also for CDS samples 2 and 3, with both samples indicating nonlinear dependencies in the log-squared standardized residuals of each of the four models. For CDS 1, the alternative GARCH models appear to remove the nonlinear dependence in the data; this is similar to the result obtained for CDS 1 in Section 4 for the GARCH (1,1) model at ε/σ = 0.5.

Thus, the sensitivity analysis using various GARCH specifications does not significantly alter our conclusions; there continues to be evidence that the GARCH model is not sufficient to incorporate the nonlinear dependencies in all of the CDS samples, although it is sufficient to incorporate the dependencies in the equity and bond market data. A weakness of the BDS test is that it does not identify the structure of the nonlinearity in the data; for example, it does not distinguish between nonlinear stochastic and nonlinear deterministic processes. An understanding of the structure of the data would facilitate improved modeling. This is the aim of Section 6.

6. Tests for a Nonlinear Deterministic Process

The objective of this section is to apply additional tests to develop an improved understanding of the structure of the data. We will apply two additional tests for nonlinearity. Firstly, we will use time-delay embedding to examine the phase portraits of the three CDS samples and the S&P 500. The motivation for this analysis is that it can often reveal deterministic structure which is not evident in a plot of 𝑥𝑡 versus 𝑡 (time) [27]. Secondly, we will apply the correlation dimension test [27]. By estimating the correlation dimension of the time series, we can infer whether or not the samples appear to follow a stochastic or a deterministic structure.

6.1. Phase Portraits

We use time-delay embedding to plot the solution paths in phase space (x_t against x_{t−1}) for the three CDS samples and also (as a comparison) the S&P 500 sample. As this is a univariate time series analysis, we assume a one-dimensional process and use a time lag of 1. (Alternatively, the time lag could be chosen based on the first zero crossing of the autocorrelation function [6]; in this case, as the CDS data and the S&P 500 have been shown to follow an AR(1) process, we would use a time lag of 2. These phase portraits (x_t against x_{t−2}) were also collated, and as they show similar patterns to the time lag 1 phase portraits, they are not included here. They are available on request from the corresponding author.)

For the purpose of a comparative analysis, we generate a time series of iid random variables that exhibit similar GARCH properties to the real data. According to Kantz and Schreiber [28], if the sample of data is properly described by the linear process of the model, we should not find any significant differences between the phase portrait of the real data and the simulated series. The procedure is to generate random numbers and then to transform these random numbers using the dynamic GARCH variance and the estimated coefficients of the real data, to create a simulated series of x_t for each sample of real data. As the simulated series have been transformed using the GARCH properties of the real data, the comparison of the two portraits (real data versus simulated series) will indicate if there are further nonlinear dependencies in the real data which are not being represented in the GARCH model. (As all four samples discussed in this section showed evidence of autoregressive dependence in the mean equation, we will use the AR GARCH model as in Section 5 to transform the simulated series.)
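The simulation and embedding steps above can be sketched as follows, with illustrative GARCH coefficients standing in for the estimated ones (and the AR term omitted for brevity):

```python
import numpy as np

def simulate_garch(n, mu, a0, a1, beta, seed=0):
    """Simulate a GARCH(1,1) series via equations (8)-(11), as the
    comparison series for the phase portraits."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    sigma2 = a0 / (1.0 - a1 - beta)    # start at the unconditional variance
    eps_prev = 0.0
    for t in range(n):
        sigma2 = a0 + a1 * eps_prev ** 2 + beta * sigma2
        eps_prev = np.sqrt(sigma2) * rng.standard_normal()
        x[t] = mu + eps_prev
    return x

x = simulate_garch(400, mu=0.0, a0=5e-6, a1=0.05, beta=0.90)
# Time-delay embedding with lag 1: the phase portrait plots these pairs.
pairs = np.column_stack([x[1:], x[:-1]])   # (x_t, x_{t-1})
```

Plotting `pairs` for the simulated series against the same embedding of the real data gives the side-by-side portraits discussed below.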

We will look at four of the eight samples, the S&P 500 and the three CDS samples. Figures 1 and 2 below represent the S&P 500 and CDS 1. The real data and the simulated series appear to be stochastic and symmetric around the regression lines. Histograms have been included to examine whether the simulated series is in general a good overall description of the real data; this appears to be the case for the S&P 500 and CDS 1. Figures 3 and 4 represent the phase portraits for CDS 2 and 3. We can see certain patterns in the real data which are notably different from the patterns in the simulated series. The real data appears to have more points lying on the x and y axes. This would imply that there are nonlinear characteristics in the real data which are not being described by the GARCH model. Secondly, we note that the points in the simulated series phase portraits are more spread out across the phase space than the real data. This is because the GARCH model assumes that the underlying dynamic variance is at all times different from zero. On average, the GARCH model simulation overestimates the variance in the real data. In the real data, there are times when the points are close together and other times when they are far apart; therefore, extreme values occur less often, but are sometimes larger, in the real data than in the simulated series. This characteristic is further illustrated by comparing the histograms of the real data in Figures 3 and 4 to the simulations. The real data appears to be more leptokurtic than the simulated series. It is also clear that the real data does not follow a Gaussian distribution and that the GARCH model does not fully describe the non-Gaussian characteristics of the real data. These results support the results of the BDS test above; there appear to be remaining nonlinearities in the CDS real data, and the real data is notably non-Gaussian.

In conclusion, if the GARCH model fully explained the dependencies in the samples, then the phase portraits of the real data would be similar to those of the simulated series. For the CDS data, particularly CDS 2 and 3, we do not see this property. We see certain other patterns (as discussed in the previous paragraph) which lead us to conclude that the traditional method of linear ARCH type modeling may not fully reveal the nonlinear nature of the CDS data. In response to the concerns of Serletis et al. [5], mentioned above, this comparison of phase portraits also allows us to review the implications of imposing the GARCH model onto the sample prior to applying the BDS test. The GARCH simulations appear to be stochastic and symmetric around the regression line, which would imply that the model itself may not be adding nonlinearities to the data. Finally, the phase portraits indicate no clear evidence of points or cycles towards which the process settles down (i.e., no definitive evidence of attractors). The process appears to be nonlinear and stochastic rather than nonlinear and deterministic. To further test this, we will apply the correlation dimension test.

6.2. The Correlation Dimension Test

In Section 4, we concluded that nonlinear dependencies appear to remain in the GARCH residuals of the CDS samples, particularly for CDS samples 2 and 3. One of the weaknesses of the BDS test is that the rejection of the null does not distinguish between a nonlinear stochastic and a nonlinear deterministic process. In this section, we aim to test the samples further to see if they come from a stochastic or a deterministic process. To this aim, the correlation dimension test is applied. The correlation dimension was first suggested by Grassberger and Procaccia [29] and uses the same probability estimate, that is, the correlation integral (1), as the BDS test. Grassberger and Procaccia [29] show that the analysis of the correlation integral C_{m,n}(ε) can be used to analyse the dynamics of the time series in question. As in Section 2, C_{m,n}(ε) in (1) measures the probability that the distance between any two m-histories is less than or equal to ε. If C_{m,n}(ε) is large, the data is said to be well correlated, and if the value of the correlation integral is small, the data is said to be relatively uncorrelated [27]. As ε increases, one would expect C_{m,n}(ε) to increase, as a larger ε allows more vector distances to be included in the sum. Grassberger and Procaccia [29] have shown that for small values of ε, C_{m,n}(ε) grows as a power of ε, with an exponent known as the correlation dimension D_c. Thus, the definition of D_c is as follows:

D_c = lim_{ε→0} d log C_{m,n}(ε) / d log ε,  (17)

that is, the correlation dimension is equal to the slope of the regression of log C_{m,n}(ε) on log ε [27]. As the embedded dimension m increases, the correlation dimension of a stochastic process will continue to increase, whereas for a deterministic process, there will be a finite saturation point, which should occur at some relatively small value of m [27]. Thus, the implementation of the correlation dimension test is relatively straightforward.
Firstly, for a fixed embedded dimension m, we estimate the correlation integral for a range of values of ε. Then, D_c is estimated as the slope of the regression of log C_{m,n}(ε) on log ε. This method is repeated for higher values of the embedded dimension m. One limitation of the correlation dimension test is that it can only distinguish a low-dimensional deterministic series. Ruelle [30] shows that if the correlation dimension reaches a finite saturation point, then the data generating process is believed to be deterministic only if that point is well below 2 log₁₀ n, where n represents the number of observations. We show that in all cases (see Table 9), the estimated correlation dimension increases beyond 2 log₁₀ n for all the samples. Thus, we can conclude that our samples are not from a low-dimensional deterministic process but are from a stochastic process (or possibly from a high-dimensional deterministic process). We apply the correlation dimension test to the three CDS samples and, for comparison, to the S&P 500 sample. Table 9 gives the estimates of the correlation dimension for varying levels of m and the probability estimates from the t tests (which were estimated from the regressions). We can see that in all cases the correlation dimension continues to rise as m increases, well beyond the limit suggested by Ruelle [30]. We also see that the estimates of the correlation dimension are significant at a 5% level in all cases. Figure 5 plots the correlation dimension against m for all four samples. It is clear that the samples come from a stochastic process (or possibly a high-dimensional deterministic process). It is worth noting that a high-dimensional deterministic process is in general difficult to distinguish from a stochastic process [28].
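The estimation procedure just described can be sketched as follows. The white-noise input stands in for our samples, and the ε grid is illustrative:

```python
import numpy as np

def correlation_integral(x, m, eps):
    """C_{m,n}(eps): fraction of pairs of m-histories within distance eps
    in the supremum norm (equation (1))."""
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])
    count = sum(np.sum(np.max(np.abs(emb[t] - emb[:t]), axis=1) <= eps)
                for t in range(1, n))
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(x, m, eps_grid):
    """Estimate D_c as the slope of the regression of log C_{m,n}(eps)
    on log eps (equation (17))."""
    logC = np.log([correlation_integral(x, m, e) for e in eps_grid])
    slope, _ = np.polyfit(np.log(eps_grid), logC, 1)
    return slope

rng = np.random.default_rng(4)
x = rng.standard_normal(1000)
eps_grid = x.std() * np.array([0.25, 0.5, 1.0])
# For a stochastic series the estimate keeps growing with m rather than
# saturating at a low value: the diagnostic used in Table 9.
d2 = correlation_dimension(x, 2, eps_grid)
d4 = correlation_dimension(x, 4, eps_grid)
print(d4 > d2)   # True for white noise
```

For a low-dimensional deterministic series such as a noise-free chaotic map, the same procedure would instead level off at the attractor's dimension as m grows.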

7. Concluding Remarks

It is widely accepted that applying GARCH to the logged returns of a financial time series removes linear dependence and leaves iid residuals. Our results support this claim in the case of the equity and bond data, and it may be possible to fit this data with a probability distribution function, such as the Gaussian or the Student t. The results of the BDS test indicate that there is evidence of nonlinear dependence in the CDS data. These results were shown to be consistent across a number of ARCH type models.

It is noted that a weakness of the rejection of the null in the BDS test is that it is unable to distinguish between nonlinear deterministic and nonlinear stochastic processes. It does not guide us as to the specifics of the dependence in the data. Thus, further tests of nonlinearity were applied to the data. The analysis of the phase portraits and the application of the correlation-dimension test indicate that the CDS data appears to be a nonlinear, non-Gaussian stochastic (or possibly a nonlinear high-dimensional deterministic) process.

In conclusion, as 85% of the underlying assets in CDS contracts are 5-year bonds [7], it is surprising to note that the GARCH residuals for the bond sample appear to be iid, while the CDS sample GARCH residuals indicate evidence of nonlinear dependence. This may indicate that the source of the nonlinear dependence is particular to the CDS contract and does not relate to the assets underlying the contract. The linear GARCH model may not be sufficient to incorporate the dependencies in the CDS data. The results also question the efficacy of using bond market default probability estimates to price CDS contracts (or vice versa) and of using a CDS contract to hedge a bond position. Thus, this paper has a number of implications for further research to better model the CDS in terms of its nonlinear, non-Gaussian stochastic (or possibly high-dimensional deterministic) structure.

Acknowledgment

The authors would like to thank Dr. Petri Piiroinen and Dr. Joanna Mason, School of Mathematics, Statistics, and Applied Mathematics, National University of Ireland, Galway, for their constructive comments. The authors would also like to thank the anonymous reviewer for their constructive comments and suggestions.