Abstract

Missing-data problems are extremely common in practice. To achieve reliable inferential results, this feature of the data must be taken into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set—whose underlying stochastic process is to some extent interdependent with the former—to improve the efficiency of the estimators of the relevant parameters of the model. The Vector AutoRegressive (VAR) Model has proved to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) Model based on a monotone missing data pattern, and we also derive the estimators' precision. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average, ARMA(2,1), Model. We also analyse the behaviour of the AutoRegressive Model of order one, AR(1), owing to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) Model is preferable to those derived in the univariate context.

1. Introduction

Statistical analyses of data sets with missing observations have long been addressed in the literature. For instance, Morrison [1] deduced the maximum likelihood estimators of the multinormal mean vector and covariance matrix for the monotone pattern with only a single incomplete variate; the exact expectations and variances of the estimators were also deduced. Dahiya and Korwar [2] obtained the maximum likelihood estimators for a bivariate normal distribution with missing data, focusing on the estimation of the correlation coefficient as well as the difference of the two means. Following this line of research, and bearing in mind that most empirical studies are characterised by temporal dependence between observations, we generalise the previous work by introducing a bivariate time series model to describe the relationship between the processes under consideration.

The literature on missing data has expanded in recent decades, focusing mainly on univariate time series models [3–7], but there is still a lack of developments in the vectorial context.

This paper aims at analysing the main properties of the estimators from data generated by one of the most influential models in empirical studies, namely the first-order Vector AutoRegressive (VAR(1)) Model, when the data set from the main stochastic process has missing observations. We therefore assume that a suitable auxiliary stochastic process, to some extent interdependent with the main one, is also available, and that the data set obtained from this auxiliary process is complete. In this context, a natural question arises: is it possible to exchange information between the two data sets to increase knowledge about the process whose data set has missing observations, or should we analyse the univariate stochastic process by itself? The goal of this paper is to answer this question.

Throughout this paper, we assume that the incomplete data set has a monotone missing data pattern. We follow a likelihood-based approach to estimate the parameters of the model. It is worth pointing out that likelihood-based estimation is widely used in the literature to handle the problem of missing data [3, 8, 9]. The precision of the maximum likelihood estimators is also derived.

In order to answer the question raised above, we must verify whether the introduction of an auxiliary variable for estimating the parameters of the model increases the accuracy of the estimators. To accomplish this goal, we compare the precision of the estimators just cited with the precision of those obtained by modelling the dynamics of the univariate stochastic process with an AutoRegressive Moving Average, ARMA(2,1), Model, which corresponds to the marginal model of the bivariate VAR Model [10, 11]. The behaviour of the AutoRegressive Model of order one, AR(1), is also analysed, owing to its practical importance in time series modelling. Simulation studies allow us to assess the relative efficiency of the different approaches. Special attention is paid to the estimator for the mean value of the stochastic process about which the available information is scarce. This is a reasonable choice, given the importance of the mean function of a stochastic process in understanding the behaviour of the time series under consideration.

The paper is organised as follows. In Section 2, we review the VAR(1) Model and highlight a few statistical properties that will be used in the remaining sections. In Section 3, we establish the monotone pattern of missing data and factorise the likelihood function of the VAR(1) Model. The maximum likelihood estimators of the parameters are obtained in Section 4, where their precision is also deduced. Section 5 reports the simulation studies that evaluate different approaches to estimating the mean value of the main stochastic process. The main conclusions are summarised in Section 6.

2. Brief Description of the VAR(1) Model

In this section, a few properties of the Vector AutoRegressive Model of order one are analysed. These features will play an important role in determining the estimators of the parameters when there are missing observations, as we will see in Section 4.

Hereafter, the stochastic process underlying the complete data set is regarded as the auxiliary process, while the other one is the main process. The VAR(1) Model under consideration is driven by two Gaussian white noise processes with zero means and constant variances. The correlation between the error terms is different from zero only at the same date; at all other lags the errors are uncorrelated. Exchanging information between both time series might introduce some noise into the overall process; therefore, transfer of information from the smallest series to the largest one is not allowed here.
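For concreteness, consider the following standard specification consistent with the description above (the symbols $X_t$ for the auxiliary, complete process, $Y_t$ for the main, incomplete process, and the quantities $c_i$, $\phi_1$, $\phi_{21}$, $\phi_{22}$, $\sigma_i^2$, $\rho$ are illustrative notation, not necessarily the original's; the absence of lagged $Y$'s in the first equation reflects the no-feedback restriction just stated):

$$X_t = c_1 + \phi_1 X_{t-1} + \varepsilon_{1,t},$$
$$Y_t = c_2 + \phi_{21} X_{t-1} + \phi_{22} Y_{t-1} + \varepsilon_{2,t},$$

with $\operatorname{E}[\varepsilon_{i,t}] = 0$, $\operatorname{Var}(\varepsilon_{i,t}) = \sigma_i^2$, and $\operatorname{Corr}(\varepsilon_{1,t}, \varepsilon_{2,s}) = \rho$ for $t = s$ and $0$ for $t \neq s$. Collect the intercepts in $\mathbf{c} = (c_1, c_2)^\top$, the autoregressive coefficients in $\Phi = \begin{pmatrix} \phi_1 & 0 \\ \phi_{21} & \phi_{22} \end{pmatrix}$, and let $\Sigma$ denote the error covariance matrix with diagonal entries $\sigma_1^2, \sigma_2^2$ and off-diagonal entry $\rho\,\sigma_1\sigma_2$.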

We have to introduce restrictions on the two autoregressive coefficients. They ensure not only that the underlying processes are ergodic for their respective means but also that the stochastic processes are covariance stationary (see Nunes [12, ch.3]). Hereafter, we assume that these restrictions are satisfied.
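In the sketch notation, since $\Phi$ is triangular, its eigenvalues are the diagonal coefficients, and covariance stationarity holds under

$$|\phi_1| < 1 \quad \text{and} \quad |\phi_{22}| < 1,$$

that is, when the roots of $\det(I_2 - \Phi z) = 0$ lie outside the unit circle.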

Next, we overview some relevant properties of the VAR Model (1). Theoretical details can be found in Nunes [12, ch.3].

The mean values of the two processes, the autocovariance structure of each process (at lag zero and at positive lags), and the cross-covariance structure between the two processes all follow from the model.
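In the illustrative notation, writing $\mathbf{Z}_t = (X_t, Y_t)^\top$, these moments obey the standard VAR(1) relations

$$\mu_X = \frac{c_1}{1-\phi_1}, \qquad \mu_Y = \frac{c_2 + \phi_{21}\,\mu_X}{1-\phi_{22}}, \qquad \gamma_X(h) = \phi_1^{\,h}\,\frac{\sigma_1^2}{1-\phi_1^2} \quad (h \ge 0),$$

while the remaining auto- and cross-covariances follow from

$$\operatorname{vec}\Gamma(0) = (I_4 - \Phi\otimes\Phi)^{-1}\operatorname{vec}\Sigma, \qquad \Gamma(h) = \Phi^{h}\,\Gamma(0), \quad h \ge 1,$$

with $\Gamma(h) = \operatorname{Cov}(\mathbf{Z}_t, \mathbf{Z}_{t-h})$.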

By writing out the stochastic system (1) in matrix notation, the bivariate stochastic process can be expressed compactly, with a two-dimensional Gaussian white noise random vector as the innovation term.
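In the sketch notation,

$$\mathbf{Z}_t = \mathbf{c} + \Phi\,\mathbf{Z}_{t-1} + \boldsymbol{\varepsilon}_t, \qquad \boldsymbol{\varepsilon}_t \sim N_2(\mathbf{0}, \Sigma) \ \text{i.i.d.}$$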

Hence, at each date, the process conditional on its value at the previous date follows a bivariate Gaussian distribution, whose two-dimensional conditional mean vector is an affine function of the previous value and whose conditional variance-covariance matrix is the error covariance matrix.
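In the sketch notation, at each date $t$,

$$\mathbf{Z}_t \mid \mathbf{Z}_{t-1} \sim N_2\big(\mathbf{c} + \Phi\,\mathbf{Z}_{t-1},\ \Sigma\big).$$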

Straightforward computations lead to a factoring of the conditional probability density function. The joint distribution of the current pair, conditional on the values of the process at the previous date, can be decomposed into the product of the marginal distribution of the complete variable and the conditional distribution of the incomplete variable given the complete one. Both densities follow univariate Gaussian probability laws. The slope parameter of the conditional law describes a weighted correlation between the two error terms, the weight corresponding to the ratio of their standard deviations.
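In the sketch notation, the factorisation and the conditional law read (a standard property of the bivariate Gaussian distribution):

$$f\big(x_t, y_t \mid \mathbf{Z}_{t-1}\big) = f\big(x_t \mid \mathbf{Z}_{t-1}\big)\, f\big(y_t \mid x_t, \mathbf{Z}_{t-1}\big),$$

$$Y_t \mid X_t = x_t,\ \mathbf{Z}_{t-1} \sim N\Big(c_2 + \phi_{21}x_{t-1} + \phi_{22}y_{t-1} + \beta\big(x_t - c_1 - \phi_1 x_{t-1}\big),\ \sigma_2^2\big(1-\rho^2\big)\Big), \qquad \beta = \rho\,\frac{\sigma_2}{\sigma_1}.$$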

In the illustrative notation, the conditional variance has the structure $\sigma_2^2(1-\rho^2)$; in particular, it does not depend on the conditioning values.

The conditional mean can be interpreted as a straight-line relationship between the incomplete variable, the complete variable, and the past of the process. Additionally, it is worth mentioning that if the contemporaneous correlation equals $1$ or $-1$, the above conditional distribution degenerates into its mean value. Henceforth, we discard these particular cases, which means that $\rho \in (-1, 1)$.

3. Factoring the Likelihood Based on Monotone Missing Data Pattern

We focus here on the theoretical background for factoring the likelihood function from the VAR(1) Model when there are missing values in the data. Suppose that we have the following monotone pattern of missing data.
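In illustrative symbols, writing $n$ for the length of the complete (auxiliary) series and $m < n$ for the number of recorded values of the main series, the pattern can be pictured as

$$\begin{array}{ccccccc} x_1 & x_2 & \cdots & x_m & x_{m+1} & \cdots & x_n \\ y_1 & y_2 & \cdots & y_m & \ast & \cdots & \ast \end{array}$$

where $\ast$ marks a missing value.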

That is, there are $n$ observations available from the auxiliary stochastic process, whereas, due to some uncontrolled factors, it was only possible to record the first $m$ observations of the main stochastic process. In other words, the last $n - m$ observations of the main process are missing.

Let the observed bivariate sample of size $n$ with missing values denote a realisation of the random process, which follows a vector autoregressive model of order one. The likelihood function is the joint density of the observed data, viewed as a function of the vector of population parameters. To lighten notation, we leave implicit the conditioning of the above probability density functions on the values of the processes at the initial date. The likelihood function then factorises over the observed dates.
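A sketch of this likelihood under the monotone pattern, in the illustrative notation of Section 2 and conditioning on the first pair of observations (the step formalised below), is

$$L(\boldsymbol{\theta}) = \prod_{t=2}^{n} f\big(x_t \mid x_{t-1}; \boldsymbol{\theta}\big)\; \prod_{t=2}^{m} f\big(y_t \mid x_t, x_{t-1}, y_{t-1}; \boldsymbol{\theta}\big),$$

where $\boldsymbol{\theta}$ is the vector of population parameters; the absence of lagged $y$'s in the first factor reflects the no-feedback restriction of Section 2.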

Two points must be emphasised. First, the maximum likelihood estimators (m.l.e.) for the unknown vector of parameters are obtained by maximising the natural logarithm of the above likelihood function. Second, a worthwhile simplification is to determine the conditional maximum likelihood estimators, regarding the first pair of random variables as deterministic and maximising the loglikelihood function conditioned on their observed values. The loss of efficiency of the estimators obtained from such a procedure is negligible when compared with the exact maximum likelihood estimators computed by iterative techniques. Even for moderate sample sizes, the first pair of observations makes a negligible contribution to the total likelihood. Hence, the exact m.l.e. and the conditional m.l.e. turn out to have the same large-sample properties (Hamilton [13]). Hereafter, we restrict the study to the conditional loglikelihood function.

Despite the above solutions for reducing the complexity of the problem, some difficulties still remain: the loglikelihood equations are intractable. To overcome this problem, we factorise the conditional likelihood function obtained from (17).

So as to work out the analytical expressions for the unknown parameters under study, we have to decompose the entire likelihood function (18) into easily manipulated components.

For Gaussian VAR processes, the conditional maximum likelihood estimators coincide with the least squares estimators [13]. Therefore, we may find a solution to the problem just raised in a geometrical context. The identification of the required components relies on two fundamental theorems of Euclidean space: the Orthogonal Decomposition Theorem and the Approximation Theorem [14, Volume I, pages 572–575]. Based on these tools, it is straightforward to establish that the estimation subspaces associated with the marginal and the conditional distributions are, by construction, orthogonal to each other. This means that each element belonging to one of those subspaces is uncorrelated with every element of its orthogonal complement. Hence, events that happen in one subspace provide no information about events in the other subspace.

The aforementioned arguments guarantee that the decomposition of the joint likelihood into two components can be carried out with no loss of information for the whole estimation procedure. From (18) we can thus decompose the conditional loglikelihood function into two additive components.
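In the sketch notation, this decomposition reads

$$\ell(\boldsymbol{\theta}) = \ell_1(\boldsymbol{\theta}_1) + \ell_2(\boldsymbol{\theta}_2), \qquad \ell_1 = \sum_{t=2}^{n}\log f\big(x_t \mid x_{t-1}\big), \qquad \ell_2 = \sum_{t=2}^{m}\log f\big(y_t \mid x_t, x_{t-1}, y_{t-1}\big),$$

with $\boldsymbol{\theta}_1$ collecting the marginal parameters and $\boldsymbol{\theta}_2$ the conditional-regression parameters.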

Henceforth, the first component denotes the loglikelihood from the marginal distribution of the complete process, based on the whole sampled data of dimension $n$. The second component represents the loglikelihood from the conditional density of the incomplete process given the complete one, computed from the bivariate sample of size $m$.

The two components of (19) will be maximised separately in Section 4.1.

4. Maximum Likelihood Estimators for the Parameters

In Section 4.1 the m.l.e. of the parameters from the fragmentary VAR Model are deduced. The precision of the estimators is examined in Section 4.2.

4.1. Analytical Expressions

Theoretical developments carried out in this section rely on solving the loglikelihood equations obtained from the factored loglikelihood given by (19). Before proceeding with theoretical matters, we introduce some relevant notation in the ensuing paragraphs.

Each sample mean is indexed by the number of observations taking part in its computation and by the lag, in time units, at which those observations are taken; a similar notation is used for the sample means of the second process. According to this definition, the sample variance of each univariate random variable, based on a given number of observations and a given lag, is defined in the usual way.

We also need the sample autocovariance coefficient at lag one of each stochastic process, based on the available observations, the corresponding sample autocorrelation coefficient at lag one, and the empirical covariance between the two processes lagged one time unit.
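One concrete rendering of this notation (an assumption on our part, chosen to match the regressions below): for a sample of $k$ usable dates and lag $j \in \{0, 1\}$,

$$\bar{x}_{k,j} = \frac{1}{k-1}\sum_{t=2}^{k} x_{t-j}, \qquad s^2_{x;k,j} = \frac{1}{k-1}\sum_{t=2}^{k}\big(x_{t-j} - \bar{x}_{k,j}\big)^2,$$

with analogous quantities for the $y$'s, and

$$\hat\gamma_{x;k}(1) = \frac{1}{k-1}\sum_{t=2}^{k}\big(x_t - \bar{x}_{k,0}\big)\big(x_{t-1} - \bar{x}_{k,1}\big)$$

for the lag-one sample autocovariance; cross-moments between the two processes are defined in the same fashion.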

The sample covariance of the two processes, computed at a given lag for each series, is defined analogously.

(i) Maximising the first loglikelihood component: using results (11) and (19), we readily find the m.l.e. of the marginal parameters, together with the respective residual sum of squares.

(ii) Maximising the second loglikelihood component: based on (12) and (13), we obtain the conditional loglikelihood function.

We readily find that the m.l.e. for the parameters under study are the least squares solutions of the corresponding regression, together with the corresponding residual sum of squares.
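A sketch of both maximisations in the illustrative notation: the marginal component is the Gaussian AR(1) regression of $x_t$ on $x_{t-1}$ over $t = 2, \dots, n$,

$$\hat\phi_1 = \frac{\hat\gamma_{x;n}(1)}{s^2_{x;n,1}}, \qquad \hat{c}_1 = \bar{x}_{n,0} - \hat\phi_1\,\bar{x}_{n,1}, \qquad \hat\sigma_1^2 = \frac{\mathrm{RSS}_1}{n-1},$$

while the conditional component is the least squares regression of $y_t$ on $(1, x_t, x_{t-1}, y_{t-1})$ over $t = 2, \dots, m$, with residual variance $\mathrm{RSS}_2/(m-1)$.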

Using the results from Section 2, these estimates are mapped back into estimators for the original parameters. Thus, the analytical expressions for the estimators of the mean values, variances, and covariances of the VAR(1) Model, collected in (28), follow directly.

These estimators will play a central role in the following sections.

4.2. Precision of the Estimators

In this section, the precision of the maximum likelihood estimators underlying equations (28) is derived. The whole analysis is separated into three stages. First, we study the statistical properties of the vector of regression parameters obtained from the factored loglikelihood; for notational consistency, the same unknown parameter may appear under two equivalent labels, one for each component of the factorisation. Secondly, we derive the precision of the m.l.e. of the original parameters of the VAR Model (see (1)). Finally, we focus our attention on the estimators for the mean vector and the variance-covariance matrix at lag zero of the VAR Model with a monotone pattern of missingness.

There are a few points worth mentioning. From Section 3 we know that there is no loss of information in maximising the two loglikelihood components of (19) separately. As a consequence, the variance-covariance matrix associated with the whole set of estimated parameters is a block diagonal matrix. For a sufficiently large sample size, the distribution of the maximum likelihood estimator is accurately approximated by a multivariate Gaussian distribution whose covariance is built from the Fisher information matrices of the two components of the loglikelihood function (see (19) and (29)). There is an asymptotic equivalence between the Fisher information matrix and the Hessian matrix (see [8, ch.2]); moreover, there is also an asymptotic equivalence between the Hessian matrix evaluated at the true parameter values and at the estimates. Henceforth, the Fisher information matrices in (29) are estimated accordingly.
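In symbols, with $\hat{\boldsymbol{\theta}}_1$ and $\hat{\boldsymbol{\theta}}_2$ denoting the two blocks of estimators (illustrative notation),

$$\begin{pmatrix}\hat{\boldsymbol{\theta}}_1 \\ \hat{\boldsymbol{\theta}}_2\end{pmatrix} \stackrel{a}{\sim} N\!\left(\begin{pmatrix}\boldsymbol{\theta}_1 \\ \boldsymbol{\theta}_2\end{pmatrix},\ \begin{pmatrix}I_1(\boldsymbol{\theta}_1)^{-1} & \mathbf{0} \\ \mathbf{0} & I_2(\boldsymbol{\theta}_2)^{-1}\end{pmatrix}\right).$$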

To lighten notation, from now on we suppress the “hat” from the consistent estimators of the information matrices.

The variance-covariance matrix of the estimators from the marginal component is the inverse of the corresponding estimated information matrix.

We stress that there is orthogonality between the error subspace and the estimation subspace underlying the first loglikelihood component.

Calculating the second derivatives of the conditional loglikelihood component results in the corresponding approximate information matrix.

Once again, we mention that there is orthogonality between the error subspace and the estimation subspace underlying this loglikelihood component. The resulting information matrix can be written in compact form, partitioned into a square submatrix and a scalar, both defined in terms of the sample moments introduced above.

Using the above partition, it is rather simple to compute the inverse matrix via the standard identity for partitioned matrices.
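The partitioned-inverse identity used here reads, for a symmetric matrix with square block $A$, vector $\mathbf{b}$, and scalar $c$,

$$\begin{pmatrix} A & \mathbf{b} \\ \mathbf{b}^\top & c \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + k^{-1} A^{-1}\mathbf{b}\,\mathbf{b}^\top A^{-1} & -k^{-1} A^{-1}\mathbf{b} \\ -k^{-1}\,\mathbf{b}^\top A^{-1} & k^{-1} \end{pmatrix}, \qquad k = c - \mathbf{b}^\top A^{-1}\mathbf{b}.$$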

Unfortunately, there is no explicit expression for the inverse of the remaining information matrix. As a result, there are no explicit expressions for the approximate variances and covariances of the m.l.e. of the corresponding vector of unknown parameters.

Now, we have to analyse the precision of the m.l.e. of the original parameters of the VAR Model.

Recall from Section 2 the one-to-one monotone functions that relate the regression parametrisation to the vector of original parameters.

Some parameters remain unchanged under this mapping. A key assumption in the following developments is that neither the estimates of the unknown parameters nor their true values fall on the boundary of the allowable parameter space.

The variance-covariance matrix of the m.l.e. for the vector of original parameters is obtained by a first-order Taylor expansion at the true parameter value. We also use the chain rule for derivatives of vector fields (for details, see [14, Volume II, pages 269–275]).
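In symbols, if $\hat{\boldsymbol{\eta}} = \mathbf{g}(\hat{\boldsymbol{\theta}})$ denotes the vector of original-parameter estimators (illustrative notation), this delta-method step gives

$$\operatorname{Var}\big(\hat{\boldsymbol{\eta}}\big) \approx J\,\operatorname{Var}\big(\hat{\boldsymbol{\theta}}\big)\,J^\top, \qquad J = \frac{\partial\,\mathbf{g}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^\top}\bigg|_{\boldsymbol{\theta} = \hat{\boldsymbol{\theta}}}.$$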

Writing the vector of original parameters as a function of the regression parametrisation, the respective first-order partial derivatives can be collected in a partitioned matrix. The first submatrix corresponds to the first-order partial derivatives of the unchanged parameters with respect to themselves and is therefore nothing but an identity matrix. This also means that the derivatives of those parameters with respect to the remaining blocks are zero, so the corresponding submatrix is a null block.

The remaining submatrices are composed of the first-order partial derivatives of each component of the vector of original parameters with respect to each block of the regression parametrisation.

To find the approximate variance-covariance matrix of the maximum likelihood estimators for the vector of original parameters, it is only necessary to pre- and post-multiply the variance-covariance matrix arising from expressions (29), (31), and (36) by this Jacobian matrix and its transpose, respectively. A more detailed analysis of the resulting variance-covariance matrix (41) can be found in Nunes [12, ch.3, pp. 91-92].

We can now deduce the approximate variance-covariance matrix of the maximum likelihood estimators for the mean vector and the variance-covariance matrix at lag zero of the VAR Model with a monotone pattern of missingness. The first-order partial derivatives of this vector of target parameters with respect to the vector of original parameters are again collected in a Jacobian matrix.

According to the partition of the previous matrix into four blocks (expression (38)), we partition this Jacobian into corresponding blocks: one block is an identity matrix (the derivatives of parameters with respect to themselves), one block is null, and the remaining blocks gather the nontrivial first-order partial derivatives of the target parameters with respect to the original ones.

The last square sub-matrix corresponds to the partial derivatives of the second-order target parameters with respect to the remaining original parameters, with its nonnull elements taking analytical expressions of the same type.

Straightforward calculations then yield the desired partitioned variance-covariance matrix, with its submatrices defined blockwise.

The two Jacobian matrices just defined correspond to the first-order partial derivatives of the composite functions that relate the mean vector and the second-order moments of the model to the vector of original parameters and to the vector of regression parameters, respectively.

One square sub-matrix corresponds to the approximate covariance structure between the m.l.e. of the mean parameters. A rectangular sub-matrix is composed of the approximate covariances between the m.l.e. just cited and the m.l.e. of the second-order moments, with its transpose occupying the symmetric position; this is why each off-diagonal block results from the product of a variance-covariance matrix and a Jacobian block. The remaining square sub-matrix is formed by the covariances between the m.l.e. of the second-order moments.

The main point of this section is to study the variances and covariances that form the last sub-matrix. Thus, it is of interest to explore its analytical expression further. The matrix takes a cumbersome form; the most efficient way to deal with it is to consider its partition rather than the whole matrix at once.

This partition comprises a square sub-matrix, a column vector, a row vector, and a scalar, whose entries are expressed in terms of the quantities defined by (45).

On the other hand, we can also partition the corresponding Jacobian matrix into blocks of first-order partial derivatives, separating the derivatives taken with respect to the vector of regression coefficients from those taken with respect to the remaining scalar parameter.

The desired variance-covariance matrix can therefore be written in partitioned form, with blocks expressed in terms of the matrix defined by (35).

In short, the matrix defined by (54) corresponds to the approximate variance-covariance matrix of the m.l.e. for the mean vector and the variance-covariance matrix at lag zero of the VAR Model with missing data. We cannot write down explicit expressions for those variances and covariances; the limitation arises from the inability to invert the matrix product in analytical terms (see (36)). Hence, the inversion can only be accomplished by numerical techniques using the observed sample data. This point will be pursued further in Section 5.
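A minimal numerical sketch of this step in R (all object names are hypothetical: info1 and info2 stand for the two estimated information matrices and jac for the Jacobian of the reparametrisation):

    # assemble the block-diagonal covariance of the m.l.e. and apply the delta method
    var_theta <- rbind(
      cbind(solve(info1), matrix(0, nrow(info1), ncol(info2))),
      cbind(matrix(0, nrow(info2), ncol(info1)), solve(info2)))
    var_eta <- jac %*% var_theta %*% t(jac)   # delta-method covariance, cf. (41)
    std_err <- sqrt(diag(var_eta))            # approximate standard errors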

Despite the above restrictions, several investigations can be carried out regarding the amount of additional information obtained by making full use of the fragmentary data available. The strength of the correlation between the two stochastic processes plays a crucial role here. These ideas are developed in Section 5.

5. Simulation Studies

In this section, we analyse the effects of using different strategies to estimate the mean value of the main stochastic process. More precisely, the bivariate modelling scheme and its univariate counterparts are compared. Simulation studies are carried out to evaluate the relative efficiency of the estimators of interest.

The m.l.e. of the mean value of the main stochastic process based on the VAR(1) Model is given by the second equation of the system (28). We need to compare this estimator with those obtained by considering the univariate stochastic process by itself. More precisely, bearing in mind that we are handling a bivariate VAR(1) Model, the corresponding marginal model is the ARMA(2,1) [10, 11]. On the other hand, the AR(1) Model is one of the most popular models owing to its practical importance in time series modelling; therefore, its behaviour will also be evaluated. In short, we will compare the performance of the VAR(1) Model with both the ARMA(2,1) and the AR(1) Models.

To avoid any confusion between the parameters coming from the bivariate and the univariate modelling strategies, from now on we distinguish the mean-value estimator according to the model it derives from: the VAR(1), the ARMA(2,1), or the AR(1) Model.

The bivariate VAR(1) Model is described by the system (1). Thus, the univariate main stochastic process follows an ARMA(2,1) Model, and the corresponding m.l.e. of the mean value is given by (57).

On the other hand, if we assumed that the main process followed an AR(1) Model, the m.l.e. of the mean value would be given by (58).
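In one standard parametrisation (an assumption on our part, with the $\hat{c}$'s denoting fitted intercepts and the $\hat\varphi$'s fitted autoregressive coefficients based on the $m$ observed values), these estimators take the familiar forms

$$\hat\mu^{\mathrm{ARMA}} = \frac{\hat{c}^{\mathrm{ARMA}}}{1 - \hat\varphi_1 - \hat\varphi_2}, \qquad \hat\mu^{\mathrm{AR}} = \frac{\hat{c}^{\mathrm{AR}}}{1 - \hat\varphi},$$

since the moving average part does not affect the mean.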

Next, we compare the performance of the estimators (57) and (58) with the m.l.e. based on the VAR(1) Model (second equation of the system (28)). It is important to stress that the strategy behind the AR(1) Model does not take into account the relationship between the two stochastic processes. This feature will certainly introduce additional noise into the overall estimation procedure.

Following the techniques used in Section 4.2 to determine the precision of the estimators under consideration, here we also use a first-order Taylor expansion at the mean value to compute the estimate of the variance of each univariate estimator.

Considering the ARMA(2,1) Model, the expansion is taken with respect to the vector of its unknown parameters, yielding the approximate variance in (59).

In regard to the AR(1) Model, the estimator is given by (58), and the corresponding approximate variance is (60).

The improvements achieved by choosing the more sophisticated m.l.e. based on the VAR(1) Model, rather than its univariate counterparts, are discussed next. Simulation studies are carried out to evaluate the relative efficiency of the estimators under consideration.

The data were generated by the VAR(1) Model (system (1)). In order to make comparisons on the same basis, a few assumptions on the parameters of the VAR(1) Model are made. We set the mean values of both processes to zero; these restrictions have no influence on the results because they are equivalent to setting the constant terms of the VAR(1) Model equal to zero (system (1)). Additionally, a further normalising restriction is imposed on the remaining parameters.

Since the contemporaneous correlation coefficient regulates the supply of information between the two stochastic processes, particular emphasis is given to this parameter: the Gain index is computed over a grid of its admissible values. We stress that a correlation of one in absolute value is not allowable in this context (see Section 2 for the details).

We analyse the performance of the estimators for several sample sizes. The simulations reported next are based on different percentages of missing observations, expressed relative to the dimension of the sampled data from the auxiliary random process. The simulation runs for each combination of the parameters are based on a large number of replicates; a sketch of one run is given below.
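A sketch of one simulation run in R, under the illustrative notation of Section 2 (the parameter values are arbitrary examples, not those of the study):

    set.seed(1)
    n <- 300; m <- 240                       # 20% of the y-values missing at the end
    phi1 <- 0.5; phi21 <- 0.3; phi22 <- 0.4  # assumed autoregressive coefficients
    rho <- 0.8                               # contemporaneous error correlation
    # correlated Gaussian errors via the Cholesky factor of the correlation matrix
    e <- matrix(rnorm(2 * n), n, 2) %*% chol(matrix(c(1, rho, rho, 1), 2, 2))
    x <- y <- numeric(n)
    for (t in 2:n) {
      x[t] <- phi1 * x[t - 1] + e[t, 1]                        # complete series
      y[t] <- phi21 * x[t - 1] + phi22 * y[t - 1] + e[t, 2]    # incomplete series
    }
    y[(m + 1):n] <- NA                       # impose the monotone missing pattern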

It is worth emphasising that the estimates of the covariance terms entering the variances given by (59) and (60) were computed with the R package tseries [15].
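A hedged sketch of how the univariate competitors can be fitted with tseries (y_obs, the vector of the m observed values of the incomplete series, is an illustrative name):

    library(tseries)
    fit_arma21 <- arma(y_obs, order = c(2, 1))   # marginal ARMA(2,1) fit
    fit_ar1    <- arma(y_obs, order = c(1, 0))   # AR(1) fit
    summary(fit_arma21)   # coefficient estimates with standard errors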

The simulation goes as follows: after each run, the relative efficiency of the VAR-based estimator with respect to each univariate estimator is quantified by the corresponding Gain index, expressed as a percentage.
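A Gain index consistent with the interpretation given below can be written as (an illustrative form, writing $\widehat{\operatorname{Var}}$ for the estimated variances)

$$G = 100 \times \frac{\widehat{\operatorname{Var}}\big(\hat\mu^{\mathrm{uni}}\big) - \widehat{\operatorname{Var}}\big(\hat\mu^{\mathrm{VAR}}\big)}{\widehat{\operatorname{Var}}\big(\hat\mu^{\mathrm{uni}}\big)},$$

where $\hat\mu^{\mathrm{uni}}$ stands for either the ARMA(2,1)-based or the AR(1)-based estimator; $G > 0$ indicates that the VAR-based estimator is the more precise one.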

A word on notation: the above quantities were computed from the estimates of the corresponding variances. To lighten the notation, we omit the conventional symbols used to represent estimates.

If the Gain index is positive, the VAR-based estimator is more precise than its ARMA(2,1) competitor; otherwise, it loses precision, and the ARMA(2,1)-based estimator becomes a better estimator for the mean value of the main process. A similar reasoning applies to the comparison with the AR(1)-based estimator.

Figures 1 and 2 display the main results from the simulation studies. The VAR-based and ARMA(2,1)-based estimators are compared in Figure 1, whereas Figure 2 exhibits the comparison with the AR(1)-based estimator. For each combination of the parameters of the model, we plot the gain indexes as functions of the percentage of missing data in the sample from the main stochastic process.

Both Figures 1 and 2 show that the plot of the gain index against the percentage of missing data behaves roughly as a linear function, regardless of the combination of the parameters. In outline, the higher the percentage of missing values in the sampled data, the more precise the VAR-based estimator is relative to its univariate counterparts (see Figures 1 and 2).

Further, the gain in precision obtained by using the more sophisticated estimator increases as the strength of the linear relationship between the two processes (described by the correlation coefficient) rises towards its upper admissible values. This statement holds for both the ARMA(2,1) and the AR(1) modelling schemes (see Figures 1 and 2).

A final point to highlight from the comparison between Figures 1 and 2 is that the increase in precision obtained by the VAR-based estimator of the mean value is higher against the AR(1) Model than against the ARMA(2,1) Model. This reinforces the point already raised that the ARMA(2,1) Model describes the dynamics of the main stochastic process more accurately than the AR(1) Model does. In short, the AR(1) Model does not seem to be a good approach in this context, because it incorporates a noise term, related to the simulation scheme, that we cannot control.

Summing up, the VAR-based estimator is preferable to those explored in the univariate context, that is, to both the ARMA(2,1)-based and the AR(1)-based estimators.

6. Conclusions

This article deals with the problem of missing data in a univariate sample. We have considered an auxiliary complete data set whose underlying stochastic process is interdependent with the main one through a VAR(1) structure. We have proposed maximum likelihood estimators for the relevant parameters of the model based on a monotone missing data pattern. The precision of the estimators has also been derived. Special attention has been given to the estimator for the mean value of the stochastic process whose sampled data have missing values.

We have compared the performance of the VAR-based estimator of that mean value, under a monotone pattern of missing data, with the estimators obtained from both the ARMA(2,1) Model and the AR(1) Model. Through simulation studies, we have shown that the estimator derived in this article from the VAR(1) Model performs better than those derived in the univariate context. It is essential to emphasise that, even numerically, it was quite difficult to compute the precision of the latter estimators, as shown in Section 4.2.

A compelling question remains unresolved. From an applied point of view, it would be extremely useful to develop estimators for the dynamics of the stochastic processes. More precisely, we would like to obtain estimators for the correlation and cross-correlation matrices, as well as their precision, when there are missing observations in one of the data sets. It was not possible to achieve this goal based on maximum likelihood principles; as shown in Section 4.2, we have only developed estimators for the covariance matrix at lag zero. In future research, we will try to solve this problem in the framework of the Kalman filter.

Acknowledgments

This work was financed by the Portuguese Foundation for Science and Technology (FCT), Projecto Estratégico PEst-OE/MAT/UI0209/2011. The authors are also thankful for the comments of the two anonymous referees.