Journal of Numbers
Volume 2015 (2015), Article ID 892324, 14 pages
http://dx.doi.org/10.1155/2015/892324
Research Article

Hybrid Moments of the Riemann Zeta-Function

Aleksandar Ivić

Katedra Matematike RGF-a, Univerzitet u Beogradu, Djušina 7, 11000 Belgrade, Serbia

Received 15 July 2014; Accepted 27 November 2014

Academic Editor: Yong-Gao Chen

Copyright © 2015 Aleksandar Ivić. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The “hybrid” moments of the Riemann zeta-function on the critical line are studied. The expected upper bound for the above expression is . This is shown to be true for certain specific values of and an explicitly determined range of . An application to a mean square bound for the Mellin transform function of is given.

1. Introduction

Power moments represent one of the most important parts of the theory of the Riemann zeta-function $\zeta(s)$, defined as $\zeta(s)=\sum_{n=1}^{\infty}n^{-s}$ for $\Re s>1$ and otherwise by analytic continuation. Of particular significance are the moments on the “critical line” $\Re s = 1/2$, and a large literature exists on this subject (see, e.g., the monographs [1–5]). Let us define $I_k(T)=\int_0^T|\zeta(\tfrac12+it)|^{2k}\,dt$, where $k$ is a fixed, positive number. Naturally one would want to find an asymptotic formula for $I_k(T)$ for a given $k$, but this is an extremely difficult problem. Conditional bounds, under the Riemann Hypothesis (that all complex zeros of $\zeta(s)$ have real parts 1/2; see Riemann [6]), have been obtained by Soundararajan [7] and Radziwiłł and Soundararajan [8].
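The moments under discussion can be explored numerically. The following sketch is an illustration only, not part of the paper's argument; it assumes the standard definition $I_1(T)=\int_0^T|\zeta(\tfrac12+it)|^2\,dt$ and uses the mpmath library to compare the second moment with its classical main term $T\log(T/2\pi)+(2\gamma-1)T$ from Atkinson's formula.

```python
# Numerical illustration (assumption: I_1(T) = ∫_0^T |ζ(1/2+it)|^2 dt,
# the standard definition). The main term below is the classical
# T log(T/2π) + (2γ - 1)T; the error E(T) is of much smaller order.
from mpmath import mp, mpf, zeta, log, euler, pi

mp.dps = 15  # working precision in decimal digits

def second_moment(T, steps=1000):
    """Midpoint-rule approximation of ∫_0^T |ζ(1/2+it)|^2 dt."""
    h = mpf(T) / steps
    total = mpf(0)
    for j in range(steps):
        t = (j + 0.5) * h
        total += abs(zeta(mpf('0.5') + 1j * t)) ** 2 * h
    return total

def main_term(T):
    """Main term T log(T/2π) + (2γ - 1)T of the second moment."""
    T = mpf(T)
    return T * log(T / (2 * pi)) + (2 * euler - 1) * T

T = 100
ratio = second_moment(T) / main_term(T)
print(float(ratio))  # typically close to 1 already at T = 100
```

Even at the modest height $T=100$ the ratio is near 1, reflecting that the error term in the second-moment formula is of much lower order than the main term.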

Except when and , no asymptotic formula for is known yet, although there are plausible conjectures for such formulas (see, e.g., the work of Conrey et al. [9–11]). In the absence of asymptotic formulas for , one would like then to obtain good upper bounds for . A simple bound for is (see, e.g., [4]) where is fixed. The use of (3) allows one to replace a power of by its integral over a suitable (short) interval. In employing this procedure one obviously loses something, but on the other hand one gains flexibility from the fact that explicit upper bounds for are known only in the case when (see Lemma 5) and (see [3]). In this way bounds for are reduced to the so-called “hybrid” moments of the type where , , and are assumed to be fixed and . The expected bound for the expression in (4) (this is consistent with the hitherto unproved Lindelöf hypothesis that $\zeta(\tfrac12+it)\ll_\varepsilon|t|^{\varepsilon}$) is clearly . Here and later $\varepsilon$ (>0) denotes arbitrarily small constants, not necessarily the same ones at each occurrence, and $\ll_\varepsilon$ (same as $O_\varepsilon$) means that the implied constant depends only on $\varepsilon$. The problem is to find, for given , , and , the range of for which the integral (4) is bounded by (5), and naturally one would like the lower bound for to be as small as possible. Note that from general results (e.g., see Ramachandra’s monograph [4]) one obtains that the expression in (4) is, for , . This shows that, up to “$\varepsilon$,” the bound in (5) is indeed best possible. The (less difficult) case in (4) was investigated by the author in [12] () and [13] (). In particular, the former work contains a proof of the bound (8) for if , for if , and for if , where . The bound (8) in the above range was obtained in [12, 13] by employing Motohashi’s explicit formula (e.g., see [3, 14]) for , which contains quantities from the spectral theory of the non-Euclidean Laplacian.
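The device of replacing a pointwise power of $|\zeta|$ by a short local integral is standard; under the assumption that (3) has its usual form (a hedged reconstruction, see, e.g., Ramachandra [4]), it reads, for fixed $m>0$ and $T\le t\le 2T$:

```latex
\[
  \bigl|\zeta(\tfrac12+it)\bigr|^{m}
  \;\ll\; \log T \,\Bigl( 1 + \int_{t-1}^{t+1}
      \bigl|\zeta(\tfrac12+ix)\bigr|^{m}\,dx \Bigr).
\]
```

Integrating such a pointwise bound over $t$ is what converts estimates for the hybrid integrals (4) into estimates for the ordinary power moments.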

As for the applications of bounds for (4), note that the case (this is , ) of the hybrid integral appeared in [15] in connection with mean square bounds for the Mellin transform function $\mathcal{Z}_2(s)$, defined initially by $\mathcal{Z}_2(s)=\int_1^{\infty}|\zeta(\tfrac12+ix)|^{4}x^{-s}\,dx$ ($\Re s>1$) and otherwise by analytic continuation. The function $\mathcal{Z}_2(s)$ was introduced in [16] by Motohashi. The functions $\mathcal{Z}_k(s)$ (in the general case $|\zeta(\tfrac12+ix)|^{4}$ is replaced by $|\zeta(\tfrac12+ix)|^{2k}$ for fixed $k$ (>1) with suitable ) are of great importance in the theory of power moments of $\zeta(\tfrac12+it)$ (see, e.g., [15]). It was shown by the author in [17] that , which is the sharpest bound for the range in question.

We will obtain results on the integral in (4) when the exponents equal 2 or 4, which is logical, since it is in these cases that we have good information on . Namely, let, for fixed, where for some suitable coefficients one has and is to be considered as the error term in (13). An extensive literature exists on , especially on (see Atkinson’s classical paper [18]), and the reader is referred to [1] for a comprehensive account. It is known that ($\gamma$ is Euler’s constant) and is a quartic polynomial in whose leading coefficient equals $1/(2\pi^2)$. This was obtained in Ingham’s classical work [19]. For an explicit evaluation of all the coefficients of see, for example, Conrey’s work [20]. For the (conjectural) general form of see [9, 10]. One hopes that will hold for each fixed integer , which implies the Lindelöf hypothesis that $\zeta(\tfrac12+it)\ll_\varepsilon|t|^{\varepsilon}$. So far (16) is known to be true only in the cases $k=1$ and $k=2$, when is a true error term in the asymptotic formula (13). In particular we have (see [1, 3, 4, 21]) for some satisfying , and . We also have and the bounds (see [1, 3, 4, 21]). As usual, for a given (>0 for ) means that .
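For orientation, the two cases of (13) in which the error term is known to be genuine can be written out explicitly; this is a reconstruction from the classical literature (Atkinson [18], Ingham [19], Heath-Brown), and the paper's own displays may differ slightly in notation. Here $\gamma$ is Euler's constant.

```latex
\[
  \int_0^T \bigl|\zeta(\tfrac12+it)\bigr|^{2}\,dt
    = T\log\frac{T}{2\pi} + (2\gamma-1)T + E(T),
\]
\[
  \int_0^T \bigl|\zeta(\tfrac12+it)\bigr|^{4}\,dt
    = T P_4(\log T) + E_2(T),
  \qquad P_4(x) = \frac{x^4}{2\pi^2} + \cdots,
\]
% Heath-Brown's classical mean-square asymptotic for E(T):
\[
  \int_0^T E^2(t)\,dt
    = \frac{2\,\zeta^4(3/2)}{3\sqrt{2\pi}\,\zeta(3)}\,T^{3/2}
      + O\bigl(T^{5/4}\log^2 T\bigr).
\]
```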

2. Statement of Results

Before we state explicitly our results note that we have the bounds This easily follows from estimates on and mentioned at the end of the last section. It means that we can restrict ourselves to the range when in (4), and to the range when . This will be implicitly assumed in the proofs of our results, which are contained in the following.

Theorem 1. One has and for one has

Theorem 2. One has, for and some ,

To assess the strength of our results note, for example, that (3) and (25) give . The bound in (28), which follows easily by the Cauchy-Schwarz inequality for integrals from estimates of the fourth and twelfth moment of $\zeta(\tfrac12+it)$ (see [22] and [1, Chapter 8]), is the strongest known bound for the eighth moment of $\zeta(\tfrac12+it)$.
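With the standard values of the fourth moment ($\ll T\log^4 T$) and Heath-Brown's twelfth moment ($\ll T^2\log^{17}T$), the Cauchy-Schwarz step mentioned here runs as follows (a sketch with the classical exponents, which may differ from the display (28) at most in the power of the logarithm):

```latex
\[
  \int_0^T \bigl|\zeta(\tfrac12+it)\bigr|^{8}\,dt
  \le \Bigl(\int_0^T\bigl|\zeta(\tfrac12+it)\bigr|^{4}\,dt\Bigr)^{1/2}
      \Bigl(\int_0^T\bigl|\zeta(\tfrac12+it)\bigr|^{12}\,dt\Bigr)^{1/2}
  \ll \bigl(T\log^{4}T\bigr)^{1/2}\bigl(T^{2}\log^{17}T\bigr)^{1/2}
  = T^{3/2}\log^{21/2}T.
\]
```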

Our last result concerns an improvement of (12). Let be such a constant for which holds. At present we have . The lower bound follows from general principles (see [1, Chapter 9]). The upper bound is a consequence of (28), and its improvements would be very significant. We will prove, using (21), the following.

Theorem 3. If is defined by (11) and is defined by (29), then

Corollary 4. One has

Note that (12) gives , while (31) improves both of these bounds, since and .

3. The Necessary Lemmas

In this section we will state some lemmas that are necessary for the proofs of our theorems. The first is an explicit formula for an integral involving .

Lemma 5. For , one has where $d(n)$ is the number of divisors of $n$, , and for , where are suitable constants.

Proof of Lemma 5. The proof of (33) (see also [12]) is based on Motohashi’s exact formula [14] and in particular [3, Theorem 4.1]. It states that , where is the cosine Fourier transform of . One requires the function in (35) to be real-valued for and that there exists a large constant such that is regular and for . The choice is permissible, and then the integral on the left-hand side of (35) becomes (see (9)) . The first integral on the right-hand side of (35) is , and the second one is evaluated by the saddle-point method (see, e.g., [1, Chapter 2]). A convenient result to use is [1, Theorem 2.2 and Lemma 15.1], due originally to Atkinson [18] for the evaluation of exponential integrals . In the latter, only the exponential factor is missing. In the notation of [1, 18] we have that the saddle point (root of ) satisfies , and the presence of the above exponential factor makes it possible to truncate the series in (35) at with a negligible error. Furthermore, in the remaining range for we have (in the notation of [18]) , which makes a total contribution of , as does the error-term integral in Theorem 2.2 of [1]. The error terms with , vanish for , , and (33) follows. Finally note that by using Taylor’s formula it is seen that the error made by replacing with in (33) is for .

We remark that the series in (33) can be truncated at . Namely, the contribution for is, by trivial estimation, , and this is absorbed by the $O$-term in (33).

Lemma 6. Let denote the number of solutions in integers , , and of the inequality with , , , and . Then,

Lemma 7. Let be a fixed integer and let be given. Then, the number of integers such that and is, for any given ,

Lemma 6 was proved by Sargos and the author [23], while Lemma 7 is due to Robert and Sargos [24]. They represent powerful arithmetic tools which are essential in the analysis when the cube or biquadrate of exponential sums involving appears.
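Counts of this kind can be checked by brute force on small ranges. The sketch below is purely illustrative: the displayed statement of Lemma 7 did not survive above, so the shape of the inequality — quadruples $(n_1,n_2,n_3,n_4)\in(N,2N]^4$ with $|\sqrt{n_1}+\sqrt{n_2}-\sqrt{n_3}-\sqrt{n_4}|<\delta\sqrt{N}$, the case relevant to square-root exponentials of Atkinson type — is an assumption here, as is the function name.

```python
# Illustrative brute force only; the shape of the inequality is an
# assumption (square roots, i.e. the k = 2 case of a Robert-Sargos-type
# count), not a transcription of Lemma 7.
from math import sqrt

def count_quadruples(N, delta):
    """Count (n1,n2,n3,n4) in (N,2N]^4 with
    |sqrt(n1)+sqrt(n2)-sqrt(n3)-sqrt(n4)| < delta*sqrt(N)."""
    rng = range(N + 1, 2 * N + 1)
    roots = {n: sqrt(n) for n in rng}
    bound = delta * sqrt(N)
    count = 0
    for n1 in rng:
        for n2 in rng:
            s = roots[n1] + roots[n2]
            for n3 in rng:
                for n4 in rng:
                    if abs(s - roots[n3] - roots[n4]) < bound:
                        count += 1
    return count

N = 12
tiny = count_quadruples(N, 1e-9)  # only "diagonal" solutions {n1,n2}={n3,n4}
big = count_quadruples(N, 0.5)    # for large delta the count blows up
print(tiny, big)
```

For very small $\delta$ only the $2N^2-N$ diagonal solutions survive (sums of square roots of distinct squarefree kernels are linearly independent over the rationals), which is the source of the "diagonal" main term in counts of this type; as $\delta$ grows, the off-diagonal term takes over.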

Lemma 8. For and , one has

This result was proved by Jutila [25]. The analogous formula also holds with replaced by the error term in the classical Dirichlet divisor problem. From (47), Jutila deduced (49). The author sharpened (49) to an asymptotic formula. Namely, it was proved in [26] that, with suitable constants () and , .
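Since Lemma 8 and its corollaries involve the error term in the classical Dirichlet divisor problem, a self-contained numerical sketch may be helpful. It assumes the usual normalization $\Delta(x)=\sum_{n\le x}d(n)-x(\log x+2\gamma-1)-\tfrac14$; the function names `divisor_counts` and `delta` are ours, not the paper's.

```python
# Error term in the Dirichlet divisor problem, usual normalization:
# Delta(x) = sum_{n<=x} d(n) - x(log x + 2*gamma - 1) - 1/4.
# Conjecturally Delta(x) << x^{1/4+eps}; numerically |Delta| stays small.
from math import log

GAMMA = 0.5772156649015329  # Euler's constant

def divisor_counts(limit):
    """d(n) for 0 <= n <= limit via a divisor sieve."""
    d = [0] * (limit + 1)
    for i in range(1, limit + 1):
        for m in range(i, limit + 1, i):
            d[m] += 1
    return d

def delta(x, partial_sums):
    """Delta(x), given partial_sums[n] = sum_{j<=n} d(j)."""
    n = int(x)
    return partial_sums[n] - x * (log(x) + 2 * GAMMA - 1) - 0.25

LIMIT = 1000
d = divisor_counts(LIMIT)
D = [0] * (LIMIT + 1)
for n in range(1, LIMIT + 1):
    D[n] = D[n - 1] + d[n]

# Sample Delta at half-integers to avoid the jumps of the step function.
vals = [delta(n + 0.5, D) for n in range(10, LIMIT)]
print(max(abs(v) for v in vals))
```

The maximal value printed is of rough size $x^{1/4}$ over this range, far smaller than the trivial bound, in line with the conjectural exponent $1/4$.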

4. The Proof of Theorem 1

We begin with the bound in (20). The left-hand side equals, by the defining relation of ((13) with ), for , as asserted. Here we used the Cauchy-Schwarz inequality for integrals and (49). Note that the upper bound in (20) is the best possible, as it coincides with the lower bound in (7). An interesting but difficult problem would be to obtain an asymptotic formula for the integral in (20).

To discuss (21), we first exchange the order of integration in the relevant integrals. It follows that the left-hand side of (21) does not exceed , which immediately gives the bound , which is somewhat weaker than the one in (21), since . Here we used the sharpest known bound of Watt [27], while the related sharpest bound $\zeta(\tfrac12+it)\ll_\varepsilon t^{32/205+\varepsilon}$ is due to Huxley [28]. These results have been obtained by a variation of the so-called Bombieri-Iwaniec method for the estimation of exponential sums (see [29, 30]).

To obtain the sharper bound asserted by (21) we will use results on the moments of (see Section 5), and hence the proof of the bound in question will be completed there.

For the proof of (22) we start from (3), which gives, for , and we use the trivial inequality in the notation of (9), where , , and is fixed. This gives , following the proof of (20), where (0) is a smooth function supported in , such that for and for and any .

For we use Lemma 5, writing and integrating by parts . In this way it is seen that . Note that, for , , we have and that (here ). Integrating by parts, we have . Observe that the integrals in (60) can be truncated at with a negligible error. Therefore, after an integration by parts, we get an integral with the same type of exponential factor (i.e., in the exponential), but there will be in the integrand a smooth factor of the order . Hence, after a large number of integrations by parts it follows that the contribution of satisfying will be negligible (i.e., less than for any given and ). This truncation of the series over is the crucial point in the proof, as the ensuing expression will be quite similar to the expressions for , only in the exponential factor in (57) we will have .

Thus, the proof reduces to the estimation of , where and is the same expression with in the exponential factor. The other two terms, which arise after the squaring of the right-hand side of (33), are clearly less difficult to deal with. Note that , and write the sines as exponentials. For , we use Taylor’s formula to remove the terms from all functions coming from Lemma 5. Namely, we can truncate the tails of series after sufficiently many terms to obtain a negligible error term. There remain only finitely many terms, but the exponentials are identical, so it suffices to treat the first terms only. Then, we integrate by parts many times, as was done in the previous part of the proof. Thus, we are left with sums containing the exponential , say, where we set . Namely, the terms with are clearly negligible by sufficiently many integrations by parts. Thus, only the combination of signs as in (65) is relevant.
Here, we suppose that We suppose , since the case is analogous and the case is easy. By the first derivative test ([1, Lemma 2.1]) it is seen that the contribution is () if or with suitable positive constants , (when or ). Thus, we may assume that . Consider now the contribution from the triplets () satisfying . By using the bound (44) of Lemma 6 (with ), it is seen that the corresponding portion of the integral in (61) is for , as asserted.
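For convenience, the two standard exponential-integral estimates invoked throughout this proof ([1, Lemmas 2.1 and 2.2]) are, for real-valued $F$ (with $F'$ monotone on $[a,b]$ in the first case):

```latex
\[
  \int_a^b e^{iF(x)}\,dx \;\ll\; \frac{1}{m}
  \qquad\text{if } |F'(x)|\ge m>0 \text{ on } [a,b]
  \quad\text{(first derivative test),}
\]
\[
  \int_a^b e^{iF(x)}\,dx \;\ll\; \frac{1}{\sqrt{\lambda}}
  \qquad\text{if } |F''(x)|\ge \lambda>0 \text{ on } [a,b]
  \quad\text{(second derivative test).}
\]
```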

Now we proceed analogously to what was done in the author’s work [31]. Suppose . We may assume that , since the other case is analogous. Let , where , which will be determined later, does not depend on , , and . Further suppose that . If or with suitable -constants, then in either or dominates in size. Hence, we can use the method of [31]. If we have with a sufficiently large , then in . Also, if and with a sufficiently small , then . In both cases we estimate the integral of by the first derivative test, and then the sum over , , and by Lemma 6.

If and , then there may exist a saddle point (root of ) in if . Hence, by the saddle-point method (see [1, Chapter 2]) or by the use of the second derivative test, making first the change of variable , we obtain . Hence, by the second derivative test (see Lemma 2.2 of [1]) the corresponding portion of the integral in (61) is for or . But since trivially and , it follows that , which is needed.

Finally, if and , then . The first derivative test shows that the contribution is, since , for , provided that . If, however, , then , and this implies , and this case has already been dealt with. This completes the proof of (22).

The proof of (23) is similar to the proof of (22). The major difference is that, instead of (61), we now have to bound . We then use Hölder’s inequality to deduce that the integral in (78) does not exceed . Both integrals in (79) are estimated similarly. Here we have . Therefore, by using Taylor’s theorem, instead of the exponential we will have the simpler function , say, where now , have a different meaning than in the proof of (22), namely,

First note that the sum over in is split into subsums where , with . Instead of Lemma 6 we use (46) of Lemma 7 (with ), supposing first that and that . Afterwards the integral is estimated trivially. The contribution to the relevant integral in (79) will be for , as asserted.

Now suppose . If , we may also assume that , for otherwise all derivatives of have the same sign. Let for some suitable . Then , and supposing that we obtain by the first derivative test that the contribution is . A similar argument applies if . There remains the case when , in which case (after the substitution ) it is seen that . Let, as in the proof of (22), . For the contribution is for , when . But if , then, since , successive integrations by parts of show that the contribution is negligible.

Let now . As in the preceding case, the contribution will be for . If (92) does not hold, then we have , since . Again we integrate by parts sufficiently many times. Each time we get a factor in the integrand which is , so that the contribution will be negligible. This proves (23).

To prove (24), note that the left-hand side equals (see (13) with ) for , as asserted. Here we used the bound, which follows from (17); namely, with . This completes the proof of Theorem 1.

5. The Proof of Theorem 2

We have first, similarly to (94), , where we used the first bound in (17). This implies that the left-hand side of (96) is in the whole range . The bound in question was actually proved by Ivić et al. [15] in connection with mean square estimates for . Hence, the main task is to prove the other bound in (25), for which we need the second bound in (17). Note that the left-hand side of (25) is majorized by a multiple () of . Here we used the fact that and integrated by parts. We used a similar procedure later, namely ( (see (13)-(14)) is a suitable polynomial of degree four, and is henceforth a generic positive constant): . We also used the bound, which follows by the Cauchy-Schwarz inequality for integrals from the mean square bound in (17); namely, . To complete the proof it remains to note that . When we take the square root in (101) and insert the resulting bound in (97) we are left with the bound for the left-hand side of (25). But as for and for , this means that in the bound above the term may be omitted, and (25) follows. We point out yet another estimate, namely (131), for the integral in (25). This was derived for the proof of Theorem 3 and does not contain the (expected) term , but terms which are reasonably small when is “about” .

The proof of (26) is based on the use of (98) and the fourth moment of the function , defined by , where (see (48)) and is the error term in the Dirichlet divisor problem. The function was investigated by several authors, including Jutila [32, 33], who introduced it, and the author [12, 17, 31, 34–37]. Among other things, the author ([34–37, Part II] and [31]) proved that