Abstract

Let  be the observations from a chirp-type statistical model , , where  is a stationary noise. We consider a method of estimating the parameters , , , , and  (where  is the variance of the 's), which is essentially an approximate least-squares method. The main advantage of the proposed approach is that no assumptions are required. We make use of three theorems, established previously in connection with the kernel, and use them to prove, under certain conditions, the consistency of the estimators.

1. Introduction

In 1973, Walker [1] considered the problem of estimating the parameters of a sine wave, where the 's are the observations and the 's are independent, identically distributed random variables with mean zero and finite but unknown variance . The parameters , , , and  are assumed unknown and are to be estimated. He showed that, as , the estimators , , , and  converge in probability to the actual values , , , and , respectively; that is, he showed that the estimators are consistent. He then showed that the differences between the estimators and the actual values they estimate have a joint normal distribution.

Suppose now that the frequency of the above sine wave changes with time. If  is the initial frequency, the frequency at time  may be written as . The simplest case is one in which  is linear; that is, . Then . This leads to the model . The parameters for this model are , , , , and . This is sometimes called the “chirp” model (see [2, 3]). Several authors have considered parameter estimation for the chirp model; see [4–7]. Different approaches to the estimation of chirp parameters in similar kinds of models are found in [8–12].
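To make the setting concrete, the following short simulation uses one standard parametrization of the chirp model, , namely y_t = A cos(ωt + βt²) + B sin(ωt + βt²) + e_t with i.i.d. Gaussian noise. This parametrization and all numeric values are illustrative assumptions, not formulas taken verbatim from the paper.

```python
import numpy as np

# Illustrative simulation of a chirp signal of the assumed form
#   y_t = A*cos(omega*t + beta*t^2) + B*sin(omega*t + beta*t^2) + e_t.
# The instantaneous frequency omega + 2*beta*t grows linearly in t,
# which is the "frequency changes linearly with time" situation
# described in the text.
rng = np.random.default_rng(0)

n = 500                     # number of observations
A, B = 2.0, 1.0             # amplitude parameters (illustrative)
omega, beta = 0.5, 0.001    # initial frequency and chirp rate (illustrative)
sigma = 0.5                 # noise standard deviation (illustrative)

t = np.arange(1, n + 1)
phase = omega * t + beta * t**2
y = A * np.cos(phase) + B * np.sin(phase) + sigma * rng.standard_normal(n)
print(y[:3])
```

Note that the average power of such a series is roughly (A² + B²)/2 + σ², which is the quantity exploited later when the variance is estimated.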

Our approach, however, is entirely different. We do not make any assumptions. The method used to estimate the parameters and to prove the consistency of the estimators is similar to that of Walker in [13]. We consider not only estimates of the parameters , , , and  but also an estimate of . Although this leads to an interesting problem in estimation, the model is somewhat unrealistic. Over the course of  observations, the frequency changes from  to . As  goes to , the sine wave oscillates faster and faster (unless ). Its frequency becomes infinite, and its period approaches . We therefore change the model by assuming that the change in frequency over the course of the observations is a number independent of . We also assume, as in (3), that the change in frequency is linear. This leads to the model , or, more precisely, the sequence of models . We assume, as Walker does in his 1971 paper, that the 's are independent and identically distributed with mean zero and variance . Thus the parameters are , , , , and , and the estimators of the parameters are , , , , and . Our objective is to show that these estimators are consistent.

We make use of the following three theorems which we have established.

Theorem 1 (see [14]). Let , where is a nonnegative real number. Then , for not a rational multiple of , uniformly in .
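Theorem 1 can be checked numerically under one plausible reading of the (not reproduced) kernel: we assume here, purely for illustration, that the kernel is the normalized exponential sum (1/n) Σ_{t=1}^{n} exp(i(ωt + θt²)), whose supremum over ω should shrink as n grows when θ is not a rational multiple of π.

```python
import numpy as np

# Hypothetical numerical check of Theorem 1. We ASSUME the kernel is the
# normalized quadratic exponential sum
#   (1/n) * sum_{t=1..n} exp(i*(omega*t + theta*t^2)),
# and observe its supremum over a grid of omega values decaying with n
# when theta is not a rational multiple of pi.
def sup_kernel(n, theta, omegas):
    t = np.arange(1, n + 1)
    return max(abs(np.exp(1j * (w * t + theta * t * t)).sum()) / n
               for w in omegas)

theta = np.sqrt(2.0)                 # not a rational multiple of pi
omegas = np.linspace(0.0, np.pi, 50)
vals = {n: sup_kernel(n, theta, omegas) for n in (100, 1000, 10000)}
print(vals)
```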

Theorem 2 (see [15]). Let be a sequence of independent random variables such that , , and . Then

Theorem 3 (see [16]). For any sufficiently small and

2. Estimation of the Parameters

If the 's are normally distributed, the likelihood function of the observations is . Then the log-likelihood function of the observations is , where .

Now consider :

By using the identities , , and , we obtain . From Theorem 1, the last two terms in (11) are of order . Therefore, if we let

then  is an approximation of . More precisely, . Thus  is an approximation of the log-likelihood function . Now fix  in (13). Then we maximize  to obtain the estimates for , , , and . That is, we minimize  over the region . Now fix  and . If  or , then . Since  is continuous in , the minimum is achieved at a point . The partial derivatives  and  must vanish at . Thus the estimators of  and  are solutions of the equations . The solution to these equations is given by . To obtain estimates for  and , we substitute  and  for  and , respectively, in (12) and minimize as a function of  and . Since this last expression is equal to , minimizing with respect to  and  is the same as maximizing . Since  varies over a compact set, there is a point  which maximizes this expression. We then estimate  and  by . The minimum value of  is attained at .
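The two-step structure of this procedure (closed-form amplitudes for fixed frequency parameters, then maximization over a compact set) can be sketched computationally. The closed forms used below, Â(ω, β) = (2/n) Σ y_t cos(ωt + βt²) and B̂(ω, β) = (2/n) Σ y_t sin(ωt + βt²), together with the periodogram-type criterion Â² + B̂², are standard approximate least-squares forms assumed for illustration; they may differ in detail from the paper's displayed equations, and a grid search stands in for the maximization over the compact set.

```python
import numpy as np

# Sketch of the approximate least-squares procedure (assumed standard forms):
# for fixed (omega, beta) the amplitude estimates are the closed-form
# projections of y onto cos/sin of the chirp phase; (omega, beta) is then
# chosen to maximize the resulting criterion A_hat^2 + B_hat^2 on a grid.
def chirp_alse(y, omega_grid, beta_grid):
    n = len(y)
    t = np.arange(1, n + 1)
    best = (-np.inf, None)
    for omega in omega_grid:
        for beta in beta_grid:
            phase = omega * t + beta * t**2
            a = 2.0 / n * np.dot(y, np.cos(phase))
            b = 2.0 / n * np.dot(y, np.sin(phase))
            crit = a * a + b * b
            if crit > best[0]:
                best = (crit, (a, b, omega, beta))
    return best[1]

# Illustrative usage on simulated data (all parameter values assumed).
rng = np.random.default_rng(1)
n = 400
t = np.arange(1, n + 1)
omega0, beta0, A0, B0 = 0.7, 0.002, 1.5, 0.8
signal = A0 * np.cos(omega0 * t + beta0 * t**2) \
       + B0 * np.sin(omega0 * t + beta0 * t**2)
y = signal + 0.3 * rng.standard_normal(n)

A_hat, B_hat, w_hat, b_hat = chirp_alse(
    y, np.linspace(0.6, 0.8, 81), np.linspace(0.0015, 0.0025, 41))
print(A_hat, B_hat, w_hat, b_hat)
```

In practice the grid for β must be much finer than the grid for ω, since a small error in β is amplified by t² over the course of the sample.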

Now we obtain an estimator of . If  is zero then, since  and , almost surely . Thus there is no randomness in the model. We assume, then, that . To obtain , we substitute , , , and  in (13) and maximize . From (13), . We will show that , as a function of , achieves its positive maximum provided , which we show is positive with high probability for large values of . From (4) and (9) we have . Using (11) and (12) we have . Using the mean value theorem, , where  is a point on the line segment connecting  and .

Thus . In Section 3 we prove that  and . Therefore we have . Also in Section 3 we show that . Thus (25) implies that . Similarly . In Section 3 we also show that  and . Thus (22) implies that . Using (21) in the above equation we obtain . By the weak law of large numbers, . Using this in (30), we obtain that  with probability arbitrarily close to 1 for sufficiently large values of . Suppose, then, that . Since  and ,  as  or . Thus, since  is a continuous function in , the maximum is achieved when . The partial derivative  vanishes at . Therefore  is the solution to the equation . So

Note that  exists only with high probability for large . Using (12) and substituting  for  and , we obtain . Now, using (18) in the above equation, we obtain . Therefore, substituting in (34), we have
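The variance estimator that results from this substitution can be sketched numerically. The closed form used below, σ̂² = (1/n) Σ y_t² − (Â² + B̂²)/2, is an assumed reconstruction of the average-squared-residual form, consistent with the approximate criterion but possibly differing in lower-order terms from the paper's displayed equation.

```python
import numpy as np

# Hedged sketch of the variance estimator: the average squared residual
# under the approximate criterion reduces (up to lower-order terms) to
#   sigma2_hat = (1/n) * sum y_t^2 - (A_hat^2 + B_hat^2) / 2.
# This closed form is an assumption for illustration.
def sigma2_hat(y, A_hat, B_hat):
    return float(np.mean(y**2) - (A_hat**2 + B_hat**2) / 2.0)

# Quick check on simulated data, with the true parameters standing in
# for the estimates (all numeric values illustrative).
rng = np.random.default_rng(3)
n = 2000
t = np.arange(1, n + 1)
A0, B0, omega0, beta0, sigma0 = 1.5, 0.8, 0.7, 0.002, 0.5
phase = omega0 * t + beta0 * t**2
y = A0 * np.cos(phase) + B0 * np.sin(phase) + sigma0 * rng.standard_normal(n)
print(sigma2_hat(y, A0, B0))   # should be near sigma0**2 = 0.25
```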

3. The Consistency of the Estimators

Now we will establish, under certain conditions, the consistency of the estimators , , , , and .

Theorem 4. Let  be a sequence of independent random variables with , , and . Let  be any real numbers and . Let  and let . For each , let . The estimators , , , , and  of , , , , and  given in (12), (13), (15), and (17), respectively, are consistent estimators of , , , , and , respectively. Furthermore,  and .

Proof. From (15), . But . Thus . If we use complex exponentials instead of sines and cosines, we have , where  and , and then . Let . Then . Expanding the third term on the right, we get

Lemma 5.

Proof. From (46) we have . By virtue of Theorem 2, . Hence . Thus we obtain the following estimates for the first and second terms in (48): . Consider the term  in (48). Using (44) we have . Now we can write , where .
Also . It follows by virtue of the lemma in [16] that there is  for which . Thus we have . Let . Then we obtain . In the proof of Theorem 3 [16, p. 65, eq. 1.9], we showed that if at least one of  or  tends to , then . Therefore, since , we have . This together with (57) implies that . Thus, using this equation, (51), and (60) in (48), we obtain . Thus . This completes the proof of Lemma 5.

Lemma 6. Let be defined by where and . Then

Proof. Throughout the proof it is assumed that  and . From (46) we have . From Theorem 2, . This implies that . Thus we obtain the following estimates for the first and second terms in (65): . Consider the term . Using (44) we have . We can write , where .
Then . It follows by virtue of the lemma in [9, p. 1] that there is  for which . Similarly we can show that . Thus we have . Let  in (74). Then we have . Let . We will show that . Suppose . Then there is a sequence  and  for which  for each . Then we can find  for which  for each . Let , . Since , , ; hence . It follows from equation 1.9 [16, p. 65] in the proof of Theorem 3 that . Thus we have a contradiction. Therefore . Hence by (76), . This implies that . Substituting (68) and (84) in (65), we obtain . Thus . This completes the proof of Lemma 6.

Now combining Lemmas 5 and 6 we obtain . Thus , where  and .

From Theorem 3 we have . Substituting  and  in the above inequality, we obtain . Therefore, since  and , we have . Therefore .

Let . Since , there exists  such that . Since  (< 0), there exists  such that . Now let . If , then . Hence . Now, from (88), . Therefore . Since  is arbitrary, . In other words, , where . In the last statement the maximum is taken over the region shown in Figure 1.

Suppose  belongs to the shaded region of the figure. Then  does not exceed . If , this implies that  is not the point which maximizes , which in turn implies that  lies in the unshaded rectangle, namely, the region . Thus (100) implies that, for large , the probability is high that the maximum value lies in that region; that is, . In other words, . Since  and  are arbitrary positive numbers, we have . Next we prove the consistency of  and . That is, we show that . Recall from (15) and (16) that . Thus . But , so . Letting , where  and , we have . Since , we obtain, using (44), . Using the triangle inequality we get . Now we estimate each of the terms on the right-hand side of the above inequality.

Let .

By the mean value theorem, , where  is a point on the open line segment connecting  and .

Now , and similarly . Thus . Using the triangle inequality and the fact that , we obtain . Substituting  and  for  and , respectively, we get , so that . It follows from (104) that . Similarly . Thus . Now consider the first term on the right in inequality (113). Again by the mean value theorem, , where  is a point on the open line segment connecting  and . Using the above expressions for  and , we have, as before, . Thus . Therefore . Substituting  and  for  and , respectively, we get , so that . Using (60) and (104) we obtain . Similarly . Thus . Now consider the last term in inequality (113). As a consequence of Theorem 2, we have . By using (123), (132), and (133) in inequality (113), we obtain . Therefore . That is, . Now we prove the consistency of . That is, we show that .

Recall from (37) that . From the definition of , we have . But from (15) and (16) we have . Hence . Therefore . Since , we have . Using the identities , , and , we obtain . From (60) we have . Thus . Therefore . Now we estimate each term on the right in (147). First consider the term . The 's are independent and identically distributed with . Therefore, by the weak law of large numbers, . Next consider the second and third terms on the right-hand side of (147).

Using Theorem 2 we have . Now consider the last term in (147). From (132) we have . Thus . Therefore . By using (148), (149), and (152) in (147), we obtain . This completes the proof of the theorem.
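As an empirical complement to the theorem, a small Monte Carlo experiment (with an assumed model parametrization and illustrative values) shows the amplitude estimates tightening around the true values as n grows. Here the true (ω, β) are plugged into the closed-form amplitude expressions, which the theorem justifies asymptotically since ω̂ and β̂ are consistent.

```python
import numpy as np

# Illustrative Monte Carlo check of consistency (not from the paper):
# with the true (omega, beta) plugged in, the approximate least-squares
# amplitude estimates (2/n)*sum y_t*cos(.) and (2/n)*sum y_t*sin(.)
# should approach (A, B) as n grows.
rng = np.random.default_rng(2)
A0, B0, omega0, beta0, sigma0 = 1.5, 0.8, 0.7, 0.002, 0.5

def amp_error(n, reps=200):
    t = np.arange(1, n + 1)
    phase = omega0 * t + beta0 * t**2
    c, s = np.cos(phase), np.sin(phase)
    errs = []
    for _ in range(reps):
        y = A0 * c + B0 * s + sigma0 * rng.standard_normal(n)
        a = 2.0 / n * np.dot(y, c)
        b = 2.0 / n * np.dot(y, s)
        errs.append(abs(a - A0) + abs(b - B0))
    return float(np.mean(errs))

e_small = amp_error(100)
e_large = amp_error(1600)
print(e_small, e_large)
```

The average absolute error at n = 1600 comes out noticeably smaller than at n = 100, in line with the consistency statement.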

Competing Interests

The author declares that there is no conflict of interests regarding the publication of this paper.