#### Abstract

We investigate the Capital Asset Pricing Model (CAPM) with a time dimension. Using time series analysis, we discuss estimation of the CAPM when the market portfolio and the error process are long-memory processes correlated with each other. We give a sufficient condition for the asset returns in the CAPM to be short-memory. In this setting, we propose a two-stage least squares estimator for the regression coefficient and derive its asymptotic distribution. Some numerical studies are also given; they reveal an interesting feature of this model.

#### 1. Introduction

The CAPM is one of the standard models for the price of risky assets in an equilibrium market and has been used for pricing individual stocks and portfolios. Markowitz [1] did the groundwork of this model; in his research, he cast the investor's portfolio selection problem in terms of expected return and variance. Sharpe [2] and Lintner [3] developed Markowitz's idea into its economic implications. Black [4] derived a more general version of the CAPM; in his version, the CAPM is built on the excess of the return of each asset over the zero-beta return, where the zero-beta portfolio is the portfolio uncorrelated with the market portfolio. Campbell et al. [5] discussed estimation of the CAPM, but in their work they did not discuss the time dimension. In econometric analysis, however, it is necessary to investigate this model with the time dimension; that is, the returns are indexed by time $t$. Recent empirical analysis shows that the return of an individual asset follows a short-memory process. On the other hand, Granger [6] showed that the aggregation of short-memory processes yields long-memory dependence, and it is known that the return of the market portfolio follows a long-memory process. From this point of view, we first suppose that the return of the market portfolio and the error process are long-memory dependent and correlated with each other.

For the regression model, the most fundamental estimator is the ordinary least squares (OLS) estimator. However, correlation between the error process and the explanatory process makes this estimator inconsistent. To overcome this difficulty, the instrumental variable method was proposed; it uses instrumental variables which are uncorrelated with the error process but correlated with the explanatory variables. This method was first used by Wright [7], and many researchers developed it further (see Reiersøl [8], Geary [9], etc.); a comprehensive review is given in White [10]. However, the instrumental variable method has so far been discussed only in the case where the error process is not a long-memory process, and long memory makes the estimation difficult.

For the analysis of long-memory processes, Robinson and Hidalgo [11] considered a stochastic regression model with unknown coefficient parameters in which the regressor and the error process are long-memory dependent. Furthermore, Choy and Taniguchi [12] considered a stochastic regression model with stationary regressor and error processes and introduced a ratio estimator, the least squares estimator, and the best linear unbiased estimator for the coefficient. However, both Robinson and Hidalgo [11] and Choy and Taniguchi [12] assume that the explanatory process and the error process are independent.

In this paper, using the instrumental variable method, we propose a two-stage least squares (2SLS) estimator for the CAPM in which the return of the market portfolio and the error process are long-memory dependent and mutually correlated. We then prove its consistency and asymptotic normality under certain conditions. Some numerical studies are also provided.

This paper is organized as follows. Section 2 gives our definition of the CAPM, and we give a sufficient condition under which short-memory asset returns are generated by returns of the market portfolio and an error process which are long-memory dependent and mutually correlated. In Section 3 we propose the 2SLS estimator for this model and show its consistency and asymptotic normality. Section 4 provides some numerical studies, which show interesting features of our estimator. The proof of the theorem is relegated to Section 5.

#### 2. CAPM (Capital Asset Pricing Model)

In the Sharpe-Lintner version of the CAPM (see Sharpe [2] and Lintner [3]), the expected return of asset $i$ is given by
$$E[R_i] = R_f + \beta_{im}\,\bigl(E[R_m] - R_f\bigr), \qquad \beta_{im} = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}, \tag{2.1}$$
where $R_m$ is the return of the market portfolio and $R_f$ is the return of the risk-free asset. Another form of the Sharpe-Lintner CAPM (see Sharpe [2] and Lintner [3]) is written for the excess returns $Z_i = R_i - R_f$ and $Z_m = R_m - R_f$ as
$$E[Z_i] = \beta_{im} E[Z_m], \qquad \beta_{im} = \frac{\operatorname{Cov}(Z_i, Z_m)}{\operatorname{Var}(Z_m)}.$$
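As a minimal numerical sketch of the beta in (2.1), the following computes the sample analogue $\hat\beta_{im} = \widehat{\operatorname{Cov}}(R_i, R_m)/\widehat{\operatorname{Var}}(R_m)$ from simulated return series (the numbers here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical daily return samples for one asset and the market portfolio.
rng = np.random.default_rng(4)
r_m = 0.01 + 0.05 * rng.standard_normal(250)                # market returns
r_i = 0.005 + 1.2 * r_m + 0.02 * rng.standard_normal(250)   # asset returns

# Sample version of beta_im = Cov(R_i, R_m) / Var(R_m), the slope in (2.1).
beta_im = np.cov(r_i, r_m)[0, 1] / np.var(r_m, ddof=1)
print(beta_im)  # close to the true slope 1.2 used in the simulation
```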

Black [4] derived a more general version of the CAPM, which is written as
$$E[R_i] = E[R_{0m}] + \beta_{im}\,\bigl(E[R_m] - E[R_{0m}]\bigr), \tag{2.5}$$
where $\beta_{im} = \operatorname{Cov}(R_i, R_m)/\operatorname{Var}(R_m)$ and $R_{0m}$ is the return on the *zero-beta portfolio*.

Since the CAPM is a single-period model, (2.1) and (2.5) do not have a time dimension. However, for econometric analysis of the model, it is necessary to add assumptions concerning the time dimension. Hence it is natural to consider the model
$$X_i(t) = \alpha_i + \beta_i Z(t) + \epsilon_i(t),$$
where $i$ denotes the asset, $t$ denotes the period, and $X_i(t)$ and $Z(t)$ are, respectively, the returns of asset $i$ and of the market portfolio at time $t$.

Empirical features of the realized returns for assets and market portfolios are well known.

We plot the autocorrelation function $\mathrm{ACF}(h)$ ($h$: time lag) of the returns of IBM stock and of SP500 (squared transformed) in Figures 1 and 2, respectively.

From Figures 1 and 2, we observe that the return of an individual stock (i.e., IBM) shows short-memory dependence and that a market index (i.e., SP500) shows long-memory dependence.
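Since the figures themselves are not reproduced here, the following sketch shows how such autocorrelation functions can be computed. The IBM and SP500 series are replaced by simulated stand-ins (an assumption for illustration only): white noise for the short-memory return, and truncated fractional noise with memory parameter $d = 0.4$ for the long-memory series.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function rho(0), ..., rho(max_lag)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - h], x[h:]) / denom
                     for h in range(max_lag + 1)])

rng = np.random.default_rng(0)
n = 5000

# Short-memory stand-in: white noise (ACF is negligible at all positive lags).
short = rng.standard_normal(n)

# Long-memory stand-in: fractional noise (1 - B)^{-d} e_t with d = 0.4,
# built from the first n MA(infinity) coefficients psi_j.
d = 0.4
j = np.arange(1, n)
psi = np.concatenate(([1.0], np.cumprod((j - 1 + d) / j)))
e = rng.standard_normal(n + len(psi))
long_mem = np.convolve(e, psi, mode="full")[len(psi) - 1 : len(psi) - 1 + n]

print(acf(short, 50)[50], acf(long_mem, 50)[50])
```

The ACF of the simulated long-memory series stays well above zero even at lag 50, mimicking the SP500 pattern, while the white-noise ACF is close to zero, as for the IBM returns.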

Suppose that an $m$-dimensional process $\{X(t)\}$ is generated by
$$X(t) = \alpha + \beta Z(t) + \epsilon(t), \tag{2.7}$$
where $\alpha$ and $\beta$ are an unknown $m$-vector and an unknown $m \times q$ matrix, respectively, $\{Z(t)\}$ is a $q$-dimensional explanatory stochastic regressor process, and $\{\epsilon(t)\}$ is an $m$-dimensional disturbance process. The $i$th component is written as
$$X_i(t) = \alpha_i + \beta_i' Z(t) + \epsilon_i(t),$$
where $\beta_i'$ is the $i$th row of $\beta$.

In the CAPM, $X(t)$ is the vector of asset returns and $Z(t)$ is the vector of market portfolio returns. As we saw, empirical studies suggest that $\{X(t)\}$ is short-memory dependent and that $\{Z(t)\}$ is long-memory dependent. On this ground, we investigate conditions under which the CAPM (2.7) is well defined. It is seen that, if the model (2.7) is to be valid, we have to assume that $\{\epsilon(t)\}$ is also long-memory dependent and correlated with $\{Z(t)\}$.

Hence, we suppose that $\{Z(t)\}$ and $\{\epsilon(t)\}$ are defined by the linear representations (2.9), driven by zero-mean uncorrelated processes that are mutually independent. The coefficient matrices of the short-memory parts are absolutely summable ($\ell_1$-summable for short), while those of the long-memory parts are only square-summable ($\ell_2$-summable for short). From (2.9) it follows that

In general, the sum of such components remains long-memory dependent; however, if the long-memory parts of the regressor term $\beta Z(t)$ and of the disturbance $\epsilon(t)$ cancel each other, then only short-memory components remain, which leads to the following.

Proposition 2.1. *If the long-memory components of $\beta Z(t)$ and $\epsilon(t)$ in (2.9) cancel each other, then the process $\{X(t)\}$ is short-memory dependent.*

Proposition 2.1 provides an important view of the CAPM; that is, if we impose natural conditions on (2.7) based on the empirical studies, then these conditions force a sort of “curved structure” on the regressor and the disturbance. A further important observation is that the statement implies the process is fractionally cointegrated; the corresponding vector and residual process are called the cointegrating vector and cointegrating error, respectively (see Robinson and Yajima [13]).
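The cancellation behind Proposition 2.1 can be checked numerically. The construction below is a hypothetical scalar instance (not the paper's exact condition): the regressor and the disturbance share a common fractional-noise component $w_t$ with opposite loadings, $x_t = w_t + u_t$ and $\epsilon_t = -\beta w_t + v_t$, so that $y_t = \beta x_t + \epsilon_t = \beta u_t + v_t$ is short-memory even though $x_t$ and $\epsilon_t$ are long-memory and correlated.

```python
import numpy as np

def frac_noise(rng, n, d):
    """Truncated MA(infinity) fractional noise (1 - B)^{-d} e_t."""
    j = np.arange(1, n)
    psi = np.concatenate(([1.0], np.cumprod((j - 1 + d) / j)))
    e = rng.standard_normal(n + len(psi))
    return np.convolve(e, psi, mode="full")[len(psi) - 1 : len(psi) - 1 + n]

def acf(x, h):
    """Sample autocorrelation at lag h."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[: len(x) - h], x[h:]) / np.dot(x, x)

rng = np.random.default_rng(1)
n, d, beta = 5000, 0.4, 1.5
w = frac_noise(rng, n, d)        # shared long-memory component
u = rng.standard_normal(n)       # short-memory parts
v = rng.standard_normal(n)
x = w + u                        # long-memory regressor
eps = -beta * w + v              # long-memory disturbance, correlated with x
y = beta * x + eps               # = beta * u + v: short-memory

print(acf(x, 50), acf(eps, 50), acf(y, 50))
```

At lag 50, the sample ACFs of `x` and `eps` remain clearly positive while that of `y` is close to zero, illustrating the fractional cointegration of the pair.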

#### 3. Two-Stage Least Squares Estimation

This section discusses estimation of (2.7) under the setting of Proposition 2.1. Since the disturbance is correlated with the regressor, the least squares estimator for $\beta$ is known to be inconsistent. In what follows we assume that $\alpha = 0$ in (2.7), because $\alpha$ can be estimated consistently by the sample mean. By use of econometric theory, however, it is often possible to find other variables that are uncorrelated with the errors $\epsilon(t)$ but correlated with the regressor, which we call instrumental variables, and thereby to overcome this difficulty. Without instrumental variables, correlation between the observables and unobservables persistently contaminates the estimator of $\beta$. Hence, instrumental variables are useful in allowing us to estimate $\beta$ consistently.

Let the instrumental variables form a vector process uncorrelated with the disturbance, and consider the regression of the regressor $Z(t)$ on the instruments. If $Z(t)$ can be represented as in (3.1), where the coefficient matrix is unknown and the remainder process is independent of the instruments, then the coefficient matrix can be estimated by the OLS estimator (3.2). From (2.7) with $\alpha = 0$ and (3.1), $X(t)$ takes the form (3.3), in which the fitted part is uncorrelated with $\epsilon(t)$; hence $\beta$ can be estimated by the OLS estimator (3.4):

Using (3.2) and (3.4), we can propose the 2SLS estimator:
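The two stages can be sketched in code as follows. The variable names `y`, `x`, `z` are hypothetical stand-ins for the asset returns, the regressor, and the instruments, and the series are assumed demeaned, matching the convention $\alpha = 0$: stage one regresses `x` on `z` by OLS and forms the fitted values, and stage two regresses `y` on those fitted values.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """2SLS for y = x @ beta + eps with instruments z.

    y: (n,) or (n, q) responses; x: (n, p) regressors; z: (n, r) instruments.
    """
    # Stage 1: OLS of x on z, then fitted values x_hat = z @ g_hat.
    g_hat, *_ = np.linalg.lstsq(z, x, rcond=None)
    x_hat = z @ g_hat
    # Stage 2: OLS of y on the fitted values, which are uncorrelated
    # with the disturbance because the instruments are.
    beta_hat, *_ = np.linalg.lstsq(x_hat, y, rcond=None)
    return beta_hat

# Toy check with an endogenous regressor: x and eps share the shock w,
# while the instrument z is independent of eps.
rng = np.random.default_rng(2)
n, beta = 20000, 2.0
z = rng.standard_normal((n, 1))
w = rng.standard_normal(n)
x = z[:, 0] + w + rng.standard_normal(n)
eps = w + rng.standard_normal(n)
y = beta * x + eps

b_2sls = two_stage_least_squares(y, x.reshape(-1, 1), z)[0]
b_ols = np.dot(x, y) / np.dot(x, x)
print(b_2sls, b_ols)  # OLS is biased upward; 2SLS is close to 2
```

Here OLS is contaminated by the shared shock `w`, while the instrument restores consistency.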

Now we aim to prove the consistency and asymptotic normality of the 2SLS estimator. For this we assume that the regressor and the disturbance jointly constitute a linear process, driven by an uncorrelated vector innovation process with square-summable coefficient matrices. The joint process then has the corresponding spectral density matrix. Further, we assume that the innovation variance matrix is nonsingular, so that the process is nondeterministic. For the asymptotics of the 2SLS estimator, following page 108, line 1 to page 109, line 7 of Hosoya [14], we impose the following assumption.

*Assumption 3.1.* (i) There exists such that, for any and for each,
and also
uniformly in , where is the σ-field generated by .

(ii) For any and for any integer , there exists such that
uniformly in , , where
and is the indicator, which is equal to 1 if and equal to 0 otherwise.

(iii) Each is square-integrable.

Under the above assumptions, we can establish the following theorem.

Theorem 3.2. *Under Assumption 3.1, it holds that* (i) *the 2SLS estimator is consistent, and* (ii) *suitably normalized, it converges in distribution,*
*where the limit is expressed through a random matrix whose elements follow normal distributions with mean 0 and covariances determined by the spectral density of the joint process.*

The next example derives the asymptotic variance formula of the 2SLS estimator in order to investigate its features in the simulation study.

*Example 3.3. *Let the regressor and the disturbance be scalar long-memory processes with long-memory parameters $d_1, d_2 \in (0, 1/2)$, spectral densities $f_1(\lambda)$ and $f_2(\lambda)$, respectively, and cross spectral density $f_{12}(\lambda)$. Then

Suppose that the innovation is a scalar uncorrelated process. Assuming Gaussianity, it is seen that the right-hand side of (3.17) is
which entails
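The spectral density of such a linear process can be evaluated numerically. The helper below (its name is hypothetical) implements the standard formula $f(\lambda) = \frac{1}{2\pi}\, k(\lambda)\, K\, k(\lambda)^*$ with $k(\lambda) = \sum_j A_j e^{ij\lambda}$ for coefficient matrices $A_j$ and innovation covariance $K$, and is checked against the closed form for a scalar MA(1):

```python
import numpy as np

def spectral_density(A, K, lam):
    """f(lam) = (1/2pi) k(lam) K k(lam)^*, with k(lam) = sum_j A[j] e^{i j lam}."""
    A = [np.atleast_2d(Aj).astype(complex) for Aj in A]
    K = np.atleast_2d(K).astype(complex)
    k = sum(Aj * np.exp(1j * j * lam) for j, Aj in enumerate(A))
    return (k @ K @ k.conj().T) / (2 * np.pi)

# Scalar MA(1): x_t = e_t + 0.5 e_{t-1}, Var(e_t) = 1.
# Known spectral density: |1 + 0.5 e^{i lam}|^2 / (2 pi), so f(0) = 2.25 / (2 pi).
f0 = spectral_density([1.0, 0.5], 1.0, 0.0).real.item()
print(f0)
```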

#### 4. Numerical Studies

In this section, we numerically evaluate the behaviour of the 2SLS estimator for the model (2.7).

*Example 4.1. *Under the setting of Example 3.3, we investigate the behaviour of the asymptotic variance of the 2SLS estimator by simulation. Figure 3 plots it as a function of the long-memory parameters $d_1$ and $d_2$.

From Figure 3, we observe that the asymptotic variance becomes large when one long-memory parameter is small and the other is large, and is small otherwise. This implies that the asymptotic variance is large only in the case where the long-memory behaviour of the regressor is weak and that of the disturbance is strong. Note that long-memory behaviour of the regressor makes the asymptotic variance of the 2SLS estimator small, whereas that of the disturbance makes it large.

*Example 4.2. *In this example, we consider the following model:
where the driving processes are scalar long-memory FARIMA processes, the third following FARIMA(0, 0.1, 0). Note that the regressor is correlated with the disturbance and with the instrument, but the instrument and the disturbance are independent. Under this model we compare the 2SLS estimator with the ordinary least squares estimator for the coefficient, which is defined as
The length of each simulated series is set to 100, and based on 5000 replications we report the mean squared errors (MSE) of the 2SLS and OLS estimators for the values of $d$ shown in Table 1.
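The Monte Carlo comparison can be sketched as follows. To keep the sketch short, the FARIMA drivers are replaced by white-noise shocks (an assumption for illustration; the endogeneity structure, with the regressor correlated with the disturbance and the instrument independent of it, mirrors the example):

```python
import numpy as np

def one_replication(rng, n=100, beta=2.0):
    """Return (2SLS error, OLS error) for one simulated sample."""
    z = rng.standard_normal(n)             # instrument
    w = rng.standard_normal(n)             # shock shared by x and eps
    x = z + w + rng.standard_normal(n)     # endogenous regressor
    eps = w + rng.standard_normal(n)       # disturbance, correlated with x
    y = beta * x + eps
    # 2SLS: regress x on z, then y on the fitted values.
    x_hat = z * (np.dot(z, x) / np.dot(z, z))
    b_2sls = np.dot(x_hat, y) / np.dot(x_hat, x_hat)
    b_ols = np.dot(x, y) / np.dot(x, x)
    return b_2sls - beta, b_ols - beta

rng = np.random.default_rng(3)
errs = np.array([one_replication(rng) for _ in range(5000)])
mse_2sls, mse_ols = np.mean(errs**2, axis=0)
print(mse_2sls, mse_ols)
```

In this design the OLS error carries a persistent endogeneity bias, so its MSE exceeds that of the 2SLS estimator, in line with the pattern reported in Table 1.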

In most cases in Table 1, the MSE of the 2SLS estimator is smaller than that of the OLS estimator; hence, from this example we see that our estimator is better than OLS in the sense of MSE. Furthermore, from Table 1, the MSEs of both estimators increase as $d$ becomes large; that is, stronger long-memory dependence makes the variances of both estimators large.

*Example 4.3. *In this example, we calculate the 2SLS estimates based on actual financial data. We choose SP500 (squared transformed) as the market regressor and the Nikkei stock average as the instrumental variable. Assuming that the asset vector consists of the returns of IBM, Nike, Amazon, American Express, and Ford, the 2SLS estimates for the coefficients are recorded in Table 2. We chose the Nikkei stock average as the instrumental variable because of the following correlation analysis between the residual processes of the returns and the Nikkei return:

- Correlation of IBM’s residual and Nikkei’s return: −0.000311
- Correlation of Nike’s residual and Nikkei’s return: −0.00015
- Correlation of Amazon’s residual and Nikkei’s return: −0.000622
- Correlation of American Express’s residual and Nikkei’s return: 0.000147
- Correlation of Ford’s residual and Nikkei’s return: −0.000536

These near-zero correlations support the assumption that the instrument is uncorrelated with the errors.

From Table 2, we observe that the return of the finance stock (American Express) is strongly correlated with that of SP500, while the return of the auto industry stock (Ford) is negatively correlated with that of SP500.

#### 5. Proof of Theorem

This section provides the proof of Theorem 3.2. First, for convenience, we introduce some notation. Let the residual from the OLS estimation of (3.1) be

The least squares fit makes this residual orthogonal to the columns of the instrument matrix, which implies that the residual is orthogonal to each column of the fitted regressor matrix. Hence the corresponding sample cross-products vanish for all indices. This means that the $i$th column vector of the 2SLS estimator (3.5) can be expressed in terms of sample cross-moments, which leads to a decomposition of the estimation error. Hence, we can see that

Note that, by the ergodic theorem (e.g., Stout [15], pp. 179–181),

Furthermore, the second term on the right-hand side of (5.8) can be rewritten as follows, and by the ergodic theorem (e.g., Stout [15], pp. 179–181), we can see that

*Proof of (i). *From the above,

In view of Theorem 1.2(i) of Hosoya [14], the right-hand side of (5.12) converges in probability to the required limit.

*Proof of (ii). *From Theorem 3.2 of Hosoya [14], if Assumption 3.1 holds, the asserted asymptotic normality follows. Hence, Theorem 3.2 is proved.

#### Acknowledgments

The author would like to thank the Editor and the referees for their comments, which improved the original version of this paper.