Abstract

This paper provides asymptotic estimates for the expected number of real zeros of two different forms of random trigonometric polynomial, where the coefficients of the polynomials are normally distributed random variables with different means and variances. For both forms of polynomial we give a closed form for this expected value. Under some mild assumptions we allow the means and variances of the coefficients to differ from each other. A case of reciprocal random polynomials is also studied for both forms.

1. Introduction

There are mainly two different forms of random trigonometric polynomial studied previously, namely $T(x)=\sum_{j=1}^{n}a_{j}\cos jx$ and $P(x)=\sum_{j=1}^{n}\bigl(a_{j}\cos jx+b_{j}\sin jx\bigr)$.

Dunnage [1] first studied the classical random trigonometric polynomial $T(x)=\sum_{j=1}^{n}a_{j}\cos jx$. He showed that when the coefficients are identically and normally distributed with mean zero and variance one, the number of real zeros in the interval $(0,2\pi)$, outside of an exceptional set of measure zero, is $2n/\sqrt{3}+O\{n^{11/13}(\log n)^{3/13}\}$ when $n$ is large. Subsequent papers mostly assumed an identical distribution for the coefficients and obtained $2n/\sqrt{3}$ as the asymptotic formula for the expected number of real zeros. In [2–4] it is shown that this asymptotic formula remains valid when the expected number of real zeros of the equation $T(x)=K$, known as $K$-level crossings, is considered. The works of Sambandham and Renganathan [5] and Farahmand [6], among others, obtained this result under different assumptions on the distribution of the coefficients. Earlier works on random polynomials are reviewed in Bharucha-Reid and Sambandham [7], which includes a comprehensive reference list.
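The classical $2n/\sqrt{3}$ asymptotics can be checked numerically. The following sketch (the parameter choices and function names are ours, assuming i.i.d. standard normal coefficients) counts sign changes of $T(x)=\sum_{j=1}^{n}a_{j}\cos jx$ on a fine grid over $(0,2\pi)$ and averages over independent samples:

```python
import math
import random

def count_sign_changes(n, grid_size=2000, rng=None):
    """Sample a_1..a_n ~ N(0, 1) and count sign changes of
    T(x) = sum_{j=1}^{n} a_j cos(j x) on a uniform grid over (0, 2*pi)."""
    rng = rng if rng is not None else random
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    xs = [2.0 * math.pi * k / grid_size for k in range(grid_size + 1)]
    vals = [sum(a[j - 1] * math.cos(j * x) for j in range(1, n + 1)) for x in xs]
    # A sign change between consecutive grid points marks (at least) one real zero.
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

def average_zero_count(n, trials=30, seed=1):
    """Monte Carlo estimate of the expected number of real zeros in (0, 2*pi)."""
    rng = random.Random(seed)
    return sum(count_sign_changes(n, rng=rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    n = 20
    print(f"average zero count: {average_zero_count(n):.2f}")
    print(f"predicted 2n/sqrt(3): {2 * n / math.sqrt(3):.2f}")
```

For $n=20$ the prediction is $2n/\sqrt{3}\approx 23.09$; the Monte Carlo average should land nearby, up to grid resolution and sampling noise.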

Later Farahmand and Sambandham [8] studied a case of coefficients with different means and variances, which yields an interesting result for the expected number of level crossings in the interval $(0,2\pi)$. Motivated by this work, we study the following two cases in order to better understand how the behavior of random trigonometric polynomials is affected by different distributional assumptions on the coefficients, for both forms defined above.

To this end we allow all the coefficients to have different means and variances. Also, motivated by recent developments on random reciprocal polynomials, we assume that the coefficients $a_j$ and $a_{n-j}$ have the same distribution; in [9] the analogous assumption is made for the case of random algebraic polynomials. Further, in order to keep the analysis tractable, we make the following assumptions on the means and variances. Let and also . For the means, we assume and . We also need , where and is chosen such that, for any positive constant , and as . Then for finite, we have the following theorem.

Theorem 1.1. If the coefficients , of are normally distributed with mean and variance , where , then the mathematical expectation of the number of real zeros of the polynomial satisfies

We study the case of the second form in Theorem 3.1 below. First we give some necessary identities.

2. Preliminary Analysis

In order to prove the theorem, we need to establish some auxiliary results. Let

Then from Farahmand [10, page 43], we have the extension of the Kac-Rice formula for our case as

where

As usual, $\operatorname{erf}$ is the error function, defined as $\operatorname{erf}(x)=\int_{0}^{x}e^{-t^{2}}\,dt$.
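For orientation, one commonly quoted form of this extension of the Kac–Rice formula for a normal process with nonzero means is the following sketch. The notation is our assumption: $A^{2}$, $B^{2}$, $C$ denote the variances and covariance of the polynomial and its derivative, $m_{1}$, $m_{2}$ their means, and $\Delta^{2}=A^{2}B^{2}-C^{2}$; the exact normalization should be checked against [10].

```latex
\mathbb{E}\,N(\alpha,\beta)
=\int_{\alpha}^{\beta}\frac{\Delta}{\pi A^{2}}
\exp\!\Bigl(-\frac{B^{2}m_{1}^{2}-2Cm_{1}m_{2}+A^{2}m_{2}^{2}}{2\Delta^{2}}\Bigr)\,dx
+\int_{\alpha}^{\beta}\frac{\sqrt{2}\,\bigl|A^{2}m_{2}-Cm_{1}\bigr|}{\sqrt{\pi}\,A^{3}}
\exp\!\Bigl(-\frac{m_{1}^{2}}{2A^{2}}\Bigr)
\operatorname{erf}\!\Bigl(\frac{\bigl|A^{2}m_{2}-Cm_{1}\bigr|}{\sqrt{2}\,A\,\Delta}\Bigr)\,dx .
```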

Now we define the following functions to carry out the estimates. At first, we define and to be continuous at ; see also [10, page 74]. Let be any positive value, arbitrary at this point and to be chosen later. Since for we have , we can obtain

Furthermore,

Now using the above identities and by expanding , we can show
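The expansions here presumably rely on the standard closed forms for cosine sums; as a sketch, for any $\theta$ with $\sin(\theta/2)\neq 0$:

```latex
\sum_{j=1}^{n}\cos j\theta
=\frac{\sin(n\theta/2)\,\cos\bigl((n+1)\theta/2\bigr)}{\sin(\theta/2)},
\qquad
\sum_{j=1}^{n}\cos^{2} j\theta
=\frac{n}{2}+\frac{\sin n\theta\,\cos\bigl((n+1)\theta\bigr)}{2\sin\theta}.
```

The second identity follows from the first by writing $\cos^{2}j\theta=\tfrac{1}{2}(1+\cos 2j\theta)$.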

In a similar way to [10], we define . Then for , since , we have . Hence, we can obtain

Furthermore, we have . Now, using these identities for , , and , and expanding , we obtain the following results:

Now we are in a position to prove Theorem 1.1 for the intervals and . To avoid duplication, the remaining intervals for both cases are treated together later.

3. The Proof

Case 1. Here we study the random trigonometric polynomial in the classical form assumed in Theorem 1.1 and prove the theorem. To this end we have to obtain all the terms appearing in the Kac-Rice formula. Using the property , together with the results (2.7) and (2.10) obtained in Section 2, we have all the terms needed to evaluate formula (2.2).

First, we compute the variance of the polynomial, that is,

Next, we calculate the variance of its derivative with respect to :

Finally, we compute the covariance between the polynomial and its derivative:

Then, from (3.1), (3.2), and (3.3), we obtain

It is also easy to obtain the means of the polynomial and its derivative:

From (2.3) and the results (3.1)–(3.5), we therefore have

Now, before considering the zeros in the small interval of length , we consider the second polynomial.
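As a sketch of the structure of these quantities, assume $T(x)=\sum_{j=1}^{n}a_{j}\cos jx$ with independent coefficients and $\operatorname{Var}(a_{j})=\sigma_{j}^{2}$ (the symbols $\sigma_{j}^{2}$ are our notation, written before the paper's specific assumptions on the variances are inserted). Then

```latex
A^{2}=\operatorname{Var} T(x)=\sum_{j=1}^{n}\sigma_{j}^{2}\cos^{2} jx,
\qquad
B^{2}=\operatorname{Var} T'(x)=\sum_{j=1}^{n}j^{2}\sigma_{j}^{2}\sin^{2} jx,
```

```latex
C=\operatorname{Cov}\bigl(T(x),T'(x)\bigr)=-\sum_{j=1}^{n}j\,\sigma_{j}^{2}\cos jx\,\sin jx,
```

and the standard cosine-sum identities reduce each sum to closed form under the stated assumptions on the $\sigma_{j}^{2}$.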

Case 2. For the second form the assumptions have to be slightly different. In this case let and . For the means, we assume and , and and .

Theorem 3.1. Consider the polynomial , where the coefficients are independent, normally distributed random variables, divided into groups, each with its own mean and variance , . The expected number of real zeros of satisfies

Similarly, using the results obtained in Section 2, we can derive the required terms. First, we compute the means of the polynomial and its derivative separately. Then we obtain the variance of the polynomial:

Next, we calculate the variance of its derivative with respect to :

Finally, we compute the covariance between the polynomial and its derivative:
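As a sketch for the second form, assume $P(x)=\sum_{j=1}^{n}\bigl(a_{j}\cos jx+b_{j}\sin jx\bigr)$ with independent coefficients, $\operatorname{Var}(a_{j})=\sigma_{j}^{2}$ and $\operatorname{Var}(b_{j})=\tau_{j}^{2}$ (our notation). Then

```latex
A^{2}=\sum_{j=1}^{n}\bigl(\sigma_{j}^{2}\cos^{2} jx+\tau_{j}^{2}\sin^{2} jx\bigr),
\qquad
B^{2}=\sum_{j=1}^{n}j^{2}\bigl(\sigma_{j}^{2}\sin^{2} jx+\tau_{j}^{2}\cos^{2} jx\bigr),
```

```latex
C=\sum_{j=1}^{n}j\,\bigl(\tau_{j}^{2}-\sigma_{j}^{2}\bigr)\sin jx\,\cos jx .
```

In particular, when $\sigma_{j}^{2}=\tau_{j}^{2}$ the covariance vanishes and $A^{2}$ is independent of $x$, which is what makes this form easier to handle.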

Then, from (3.9), (3.10), and (3.11), we can get

From (2.3) and (3.8)–(3.12), we therefore have

This is the main contribution to the number of real zeros. In the following we show that there is a negligible number of zeros in the remaining intervals of length . For the number of real roots in these remaining intervals we use Jensen's theorem [11, page 300]. The method used here is applicable to both of the cases discussed above; we take the first case as an example to show that the number of roots in these intervals is negligible. Let and . Since is normally distributed with mean and variance , for any constant
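The consequence of Jensen's theorem typically used for such estimates is the following standard bound, quoted here as background: for $f$ analytic with $f(0)\neq 0$, the number $N(r)$ of zeros of $f$ in $|z|\le r$ satisfies

```latex
N(r)\,\log\frac{R}{r}\;\le\;\log\frac{\max_{|z|=R}|f(z)|}{|f(0)|},
\qquad 0<r<R .
```

Combined with the probability bounds that follow, this shows that the expected number of zeros in the short remaining intervals is negligible.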

Also, since , we have

Now by Chebyshev's inequality, for any , we can find a positive constant such that for ,
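The inequality being invoked is the standard Chebyshev bound: for a random variable $X$ with mean $\mu$ and variance $\sigma^{2}$,

```latex
\Pr\bigl(|X-\mu|\ge t\bigr)\;\le\;\frac{\sigma^{2}}{t^{2}},\qquad t>0 .
```

Applied here, it bounds the probability that the polynomial is small at a fixed point.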

since . Therefore, except for sample functions in an -set of measure not exceeding ,

So we obtain

except for the sample functions in an -set of measure not exceeding .

This implies that we can find an absolute constant such that

Let be the greatest integer less than or equal to . Then, since the number of real zeros of is at most , we have

where we choose

and is any positive number. With this choice the error terms become negligible, and the theorem follows.