#### Abstract

This paper is concerned with rectangular summation of multiple Fourier series in matrix weighted $L^p$-spaces. We introduce a product Muckenhoupt condition for matrix weights and prove that the rectangular Fourier partial sums converge in the corresponding matrix weighted space $L^p(W)$, $1<p<\infty$, if and only if the weight $W$ satisfies the product Muckenhoupt condition. The same result is shown to hold true for other summation methods, such as Cesàro summation and summation with the Jackson kernel.

#### 1. Introduction

Let $\mathcal{M}_N$ be the family of nonnegative-definite complex-valued $N\times N$ matrices. A (periodic) matrix weight is by definition an integrable map $W:\mathbb{T}^d\to\mathcal{M}_N$, where $\mathbb{T}:=[-\pi,\pi)$. For a measurable vector-valued function $f:\mathbb{T}^d\to\mathbb{C}^N$, let

$$\|f\|_{L^p(W)}:=\left(\int_{\mathbb{T}^d}\big\|W^{1/p}(x)f(x)\big\|_2^p\,dx\right)^{1/p},$$

where $\|\cdot\|_2$ denotes the usual norm on $\mathbb{C}^N$. We let $L^p(W)$ denote the family $\{f:\|f\|_{L^p(W)}<\infty\}$, and $L^p(W)$ becomes a Banach space when we factorize over $\{f:\|f\|_{L^p(W)}=0\}$.

In this paper we are interested in convergence properties of multiple trigonometric series in $L^p(W)$ and in how specific convergence properties of trigonometric series can be related to properties of the weight $W$. To be more specific, let $D_n$ denote the univariate Dirichlet kernel, and for $n=(n_1,\dots,n_d)\in\mathbb{N}_0^d$ we define the *rectangular* kernel $D_n^{\mathcal{R}}(x):=D_{n_1}(x_1)\cdots D_{n_d}(x_d)$. Then

$$S_nf:=D_n^{\mathcal{R}}*f$$
defines the rectangular partial sum operator for the trigonometric system. We define the action of $S_n$ on vector-valued functions $f=(f_1,\dots,f_N)^T$ by letting it act separately on each coordinate function; that is,

$$S_nf:=(S_nf_1,\dots,S_nf_N)^T.$$
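In the scalar case the action of the rectangular partial sum operator is easy to sketch numerically with the FFT. The following is an illustration of ours, not from the paper; the grid size, the test function, and the helper name `rectangular_partial_sum` are hypothetical choices:

```python
import numpy as np

def rectangular_partial_sum(f_samples, m):
    """Apply the rectangular partial sum S_m to a scalar function given by
    its samples on a uniform grid over [0, 2*pi)^d: zero out every Fourier
    coefficient k with |k_j| > m_j for some coordinate j."""
    F = np.fft.fftn(f_samples)
    for axis, mj in enumerate(m):
        size = f_samples.shape[axis]
        k = np.fft.fftfreq(size) * size            # integer frequencies
        shape = [1] * f_samples.ndim
        shape[axis] = -1
        F = F * (np.abs(k) <= mj).reshape(shape)   # keep |k_axis| <= m_axis
    return np.fft.ifftn(F)

# A 2D trigonometric polynomial of rectangular degree (2, 3).
N = 32
x = 2 * np.pi * np.arange(N) / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
f = np.cos(2 * X1) * np.sin(3 * X2) + 0.5 * np.cos(X2)
# S_(2,3) reproduces f exactly; S_(1,3) removes the cos(2 x1) sin(3 x2) term.
```

Since trigonometric polynomials are exactly reproduced once the rectangular degree is covered, the operator acts as a sharp frequency cutoff in each coordinate separately.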
It is well known (see, e.g., [1]) that $\|S_nf-f\|_{L^p(\mathbb{T}^d)}\to0$ as $\min_j n_j\to\infty$, for $f\in L^p(\mathbb{T}^d)$, $1<p<\infty$. An immediate corollary is that we have convergence of the partial sums $S_nf\to f$ in $L^p(\mathbb{T}^d;\mathbb{C}^N)$ in the vector-valued case. However, it is not obvious what can be said about convergence of $S_nf$ in $L^p(W)$ for a general matrix weight $W$. The main result of the present paper completely characterizes the class of weights that allow convergence: $S_nf\to f$ in $L^p(W)$ for every $f\in L^p(W)$ if and only if the weight satisfies a certain matrix Muckenhoupt product condition. Moreover, the characterization relies solely on certain localization properties of the Dirichlet kernels that are shared by many other summation kernels. So, in addition, we prove that the rectangular Cesàro means and the approximations using the Jackson kernels converge in $L^p(W)$ if and only if the weight satisfies the same matrix Muckenhoupt product condition.

At first glance, the study of vector-valued operators like $S_n$ on $L^p(W)$ may seem artificial, but let us mention one important application to finitely generated shift invariant spaces where such mappings appear naturally. For a finite set of functions $\Phi=\{\phi_1,\dots,\phi_N\}$ in $L^2(\mathbb{R}^d)$, the associated shift invariant space is given by

$$S(\Phi):=\overline{\operatorname{span}}\{\phi_k(\cdot-j):j\in\mathbb{Z}^d,\ 1\le k\le N\}.$$
A natural question to pose is whether $\{\phi_k(\cdot-j)\}$ forms some sort of “stable” generating system for $S(\Phi)$. Here stable can mean a Schauder basis, or even some weaker notion such as a block Schauder basis. Consider the vector-valued system $\{e^{ij\cdot\xi}e_k\}_{j\in\mathbb{Z}^d,\,1\le k\le N}$, where $\{e_k\}$ is the standard basis for $\mathbb{C}^N$, and let the Gram matrix $G$ be given by $G_{k,l}(\xi)=\sum_{j\in\mathbb{Z}^d}\hat\phi_k(\xi+2\pi j)\overline{\hat\phi_l(\xi+2\pi j)}$, where the Fourier transform is given by $\hat\phi(\xi)=\int_{\mathbb{R}^d}\phi(x)e^{-ix\cdot\xi}\,dx$. One can show (see [2]) that the map $U$, given by

$$U\Big(\sum_{j,k}c_{j,k}\,\phi_k(\cdot-j)\Big):=\sum_{j,k}c_{j,k}\,e^{ij\cdot\xi}e_k,$$

is an isometric isomorphism between $S(\Phi)$ and $L^2(G)$, satisfying $U(\phi_k(\cdot-j))=e^{ij\cdot\xi}e_k$. Hence, the metric properties of $\{\phi_k(\cdot-j)\}$ in $S(\Phi)$ are equivalent to the properties of $\{e^{ij\cdot\xi}e_k\}$ in $L^2(G)$. For example, $U$ will map rectangular partial sums of the expansion of $f\in S(\Phi)$ to the corresponding rectangular partial sum operators $S_n$ for $\{e^{ij\cdot\xi}e_k\}$ on $L^2(G)$. The same correspondence holds true for the other summation methods related to $S_n$, such as the rectangular Cesàro means and the approximation using the Jackson kernels, that will be discussed below.

It was proved by the present author [2] that the rectangular partial sums converge in $L^2(G)$ precisely when the operators $S_n$ are uniformly bounded on $L^2(G)$, which happens exactly when $G$ is a Muckenhoupt product matrix weight.

The structure of this paper is as follows. In Section 2 we introduce a product Muckenhoupt condition for matrix weights. Then necessary and sufficient conditions for a convolution operator of product type (such as $S_n$) to be bounded on $L^p(W)$ are given. Section 3 contains applications of the results in Section 2 to convolution operators induced by rectangular Dirichlet, Fejér, and Jackson kernels.

#### 2. The Muckenhoupt Condition and Operators on $L^p(W)$

In this section we introduce a matrix Muckenhoupt product condition suitable for dealing with convolution operators of product type such as the rectangular partial sum operator $S_n$. A sufficient condition for convolution operators of product type to be bounded on $L^p(W)$ is given in Proposition 5, while a converse type result is considered in Proposition 7. We prove that convolution operators with “nicely localized” kernels can only be uniformly bounded on $L^p(W)$ when $W$ satisfies the product condition.

The scalar $A_p$ condition was introduced by Muckenhoupt [3], and it was proved by Hunt et al. in their seminal paper [4] that the $A_p$ condition on a weight $w$ is necessary and sufficient for the Hilbert transform to be bounded on the weighted space $L^p(w)$.

More recently, Hunt–Muckenhoupt–Wheeden type results for matrix weights have been considered. The matrix $A_p$ condition was introduced by Nazarov et al. [5–7], and they showed that it is the right condition for “standard” singular integral operators to be bounded on $L^p(W)$. The $A_p$ condition for matrix weights was originally stated in terms of dual matrices and averages, but it was shown by Roudenko [8] to be equivalent to

$$\sup_{B}\frac{1}{|B|}\int_B\left(\frac{1}{|B|}\int_B\big\|W^{1/p}(x)W^{-1/p}(y)\big\|^{p'}\,dy\right)^{p/p'}dx<\infty,\tag{7}$$

where the sup is taken over all open balls $B$ in $\mathbb{R}^d$ and $p'=p/(p-1)$ is the conjugate exponent.

Since our goal is to study operators of product type related to rectangular trigonometric partial sums, the condition given by (7) is not the appropriate one. The periodic weights satisfying (7) are well behaved when it comes to the study of square or spherical partial sum operators for trigonometric series. Let us therefore introduce a new and slightly modified Muckenhoupt condition. Inspired by (7), we let $\mathcal{R}$ denote the family of all rectangles in $\mathbb{R}^d$ of the form $R=I_1\times\cdots\times I_d$, with each $I_j$ being a bounded open interval in $\mathbb{R}$. Then we consider the following more restrictive subclass of matrix weights.

*Definition 1. * Let $W$ be a periodic matrix weight. For $1<p<\infty$, let $p'=p/(p-1)$ denote the conjugate exponent to $p$. We say that $W$ belongs to the matrix Muckenhoupt (product) class $A_p(\mathcal{R})$ provided there exists a uniform constant $C$ such that

$$\frac{1}{|R|}\int_R\left(\frac{1}{|R|}\int_R\big\|W^{1/p}(x)W^{-1/p}(y)\big\|^{p'}\,dy\right)^{p/p'}dx\le C\tag{8}$$

for any $R\in\mathcal{R}$.
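As a scalar ($N=1$, $p=2$) sanity check of Definition 1, one can approximate the quantity in (8) for the product power weight $w(x_1,x_2)=|x_1|^{1/2}|x_2|^{1/2}$, for which the average factorizes into one-dimensional factors; over squares $(0,h)\times(0,h)$ the exact one-dimensional factor is $\frac{2}{3}h^{1/2}\cdot2h^{-1/2}=\frac43$ independently of $h$. The weight and the midpoint-rule discretization below are illustrative choices of ours, not from the paper:

```python
import numpy as np

def a2_average_1d(alpha, h, n=200_000):
    """Midpoint-rule approximation of (avg of x^alpha) * (avg of x^(-alpha))
    over the interval (0, h) -- the scalar A_2 expression for w(x) = x^alpha."""
    x = (np.arange(n) + 0.5) * (h / n)
    return np.mean(x**alpha) * np.mean(x**(-alpha))

# For w(x1, x2) = |x1|^(1/2) |x2|^(1/2) and squares (0,h) x (0,h), the
# A_2(R)-average is the square of the 1D value and should stay near
# (4/3)^2 = 16/9 for every h, i.e. the sup in (8) is finite.
vals = [a2_average_1d(0.5, h) ** 2 for h in (1.0, 1e-2, 1e-4)]
```

The scale invariance of the computed value reflects the fact that power weights with exponents in the admissible range satisfy the Muckenhoupt condition uniformly over all rectangles touching the singularity.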

*Remark 2. *For $d=1$, Definition 1 reduces to the standard matrix $A_p$ condition on $\mathbb{T}$, which we denote by $W\in A_p$. In the scalar case (i.e., $N=1$), Definition 1 reduces to the known product $A_p$ condition for scalar weights, which has a long history; see [9] and references therein.

The similarity of conditions (8) and (7) implies that many results for matrix $A_p$ weights have straightforward analogs in the product case; the proofs can often be “translated verbatim.” Let us state the following lemma, which will be needed below.

Lemma 3. * Let $W$ be a matrix weight and let $1<p<\infty$. Then $W\in A_p(\mathcal{R})$ if and only if $W^{-p'/p}\in A_{p'}(\mathcal{R})$. *

We refer the reader to Roudenko [8] for the proof of Lemma 3 in the nonproduct case.

The following lemma reveals why one can expect $A_p(\mathcal{R})$ to be useful for product operators: a weight in $A_p(\mathcal{R})$ is uniformly in $A_p$ in each of its variables.

Lemma 4. * Let $W$ be a matrix weight, and let $1<p<\infty$. Then the following holds.*(a)*For any rectangles $R\subseteq R'$ in $\mathcal{R}$,

$$\frac{1}{|R|}\int_R\left(\frac{1}{|R|}\int_R\big\|W^{1/p}(x)W^{-1/p}(y)\big\|^{p'}dy\right)^{p/p'}dx\le\left(\frac{|R'|}{|R|}\right)^{p}\frac{1}{|R'|}\int_{R'}\left(\frac{1}{|R'|}\int_{R'}\big\|W^{1/p}(x)W^{-1/p}(y)\big\|^{p'}dy\right)^{p/p'}dx.$$

*(b)*Suppose $W\in A_p(\mathcal{R})$; then the univariate weight $W(\cdot,x')$, obtained by fixing the variables $x'=(x_2,\dots,x_d)$, is uniformly in $A_p$ for a.e. $x'\in\mathbb{T}^{d-1}$, and similarly for each of the other coordinates. *

*Proof. * For (a), we notice that whenever $R\subseteq R'$,

$$\frac{1}{|R|}\int_R\left(\frac{1}{|R|}\int_R\big\|W^{1/p}(x)W^{-1/p}(y)\big\|^{p'}dy\right)^{p/p'}dx\le\frac{|R'|}{|R|}\left(\frac{|R'|}{|R|}\right)^{p/p'}\frac{1}{|R'|}\int_{R'}\left(\frac{1}{|R'|}\int_{R'}\big\|W^{1/p}(x)W^{-1/p}(y)\big\|^{p'}dy\right)^{p/p'}dx,$$

and $(|R'|/|R|)^{1+p/p'}=(|R'|/|R|)^{p}$ since $1+p/p'=p$.
Now we turn to the proof of (b). It suffices to consider the first variable, with $x'=(x_2,\dots,x_d)$ fixed. Given a bounded interval $I$, we form $R_\delta:=I\times J_\delta$, where $J_\delta$ is a cube of side length $\delta$ centered at $x'$. First suppose $p\ge2$. Since $W\in A_p(\mathcal{R})$, there exists a constant $C$ independent of $I$ and $\delta$ such that

$$\frac{1}{|I|}\int_I\frac{1}{|J_\delta|}\int_{J_\delta}\left(\frac{1}{|I|}\int_I\frac{1}{|J_\delta|}\int_{J_\delta}\big\|W^{1/p}(t,u)W^{-1/p}(s,v)\big\|^{p'}\,dv\,ds\right)^{p/p'}du\,dt\le C,$$

where we have used the continuous embedding $L^{p/p'}\hookrightarrow L^1$ on probability spaces (note that $p/p'\ge1$ precisely when $p\ge2$). Hence, by Lebesgue's differentiation theorem, letting $\delta\to0$, for almost every $x'$,

$$\frac{1}{|I|}\int_I\left(\frac{1}{|I|}\int_I\big\|W^{1/p}(t,x')W^{-1/p}(s,x')\big\|^{p'}\,ds\right)^{p/p'}dt\le C',$$

where the constant $C'$ is independent of $I$ and $x'$. It follows that $W(\cdot,x')$ is uniformly in $A_p$ for a.e. $x'$. In the case $1<p<2$, we use Lemma 3 to conclude that $W^{-p'/p}\in A_{p'}(\mathcal{R})$, which implies the following estimate:

$$\frac{1}{|I|}\int_I\frac{1}{|J_\delta|}\int_{J_\delta}\left(\frac{1}{|I|}\int_I\frac{1}{|J_\delta|}\int_{J_\delta}\big\|W^{-1/p}(t,u)W^{1/p}(s,v)\big\|^{p}\,dv\,ds\right)^{p'/p}du\,dt\le C.$$

By repeating the argument from the case $p\ge2$ (now $p'/p\ge1$), we conclude that $W^{-p'/p}(\cdot,x')$ is uniformly in $A_{p'}$, which again by Lemma 3 implies that $W(\cdot,x')$ is uniformly in $A_p$.

We can now prove the following result that explains how to get from a family of convolution operators that is uniformly bounded on the univariate spaces $L^p(w)$, $w\in A_p$, to a uniformly bounded family of convolution operators on $L^p(W)$, $W\in A_p(\mathcal{R})$, simply by forming the natural product kernels.

Proposition 5. * Suppose that $\{k_n\}_{n\in\mathbb{N}}$ is a sequence of convolution kernels defined on $\mathbb{T}$ for which the corresponding operators

$$K_nf:=k_n*f$$

are uniformly bounded on $L^p(w)$ whenever $w\in A_p$. Then the associated product convolution kernels

$$k_n^{\mathcal{R}}(x):=k_{n_1}(x_1)\cdots k_{n_d}(x_d),\qquad n=(n_1,\dots,n_d)\in\mathbb{N}^d,$$

induce a uniformly bounded family of operators $K_n^{\mathcal{R}}f:=k_n^{\mathcal{R}}*f$ on $L^p(W)$ for $W\in A_p(\mathcal{R})$. *

*Proof. *Suppose that $W\in A_p(\mathcal{R})$. In the case $d=1$, there is nothing to prove. We focus on the case $d=2$; the reader can easily verify that the argument below generalizes to any $d\ge2$.

According to Lemma 4(b), $W(\cdot,x_2)$ and $W(x_1,\cdot)$ satisfy uniform Muckenhoupt $A_p$-conditions a.e. on $\mathbb{T}$. Pick any $f\in L^p(W)$. By Fubini's theorem, $f(\cdot,x_2)\in L^p(W(\cdot,x_2))$ and $f(x_1,\cdot)\in L^p(W(x_1,\cdot))$ for a.e. $x_2$ and $x_1$, respectively.

We define

$$g_n(x_1,x_2):=\frac{1}{2\pi}\int_{\mathbb{T}}k_{n_1}(x_1-t)\,f(t,x_2)\,dt,\qquad n=(n_1,n_2)\in\mathbb{N}^2.$$

Notice that $K_n^{\mathcal{R}}f(x_1,x_2)=\frac{1}{2\pi}\int_{\mathbb{T}}k_{n_2}(x_2-s)\,g_n(x_1,s)\,ds$. By assumption, for a.e. $x_2$,

$$\int_{\mathbb{T}}\big\|W^{1/p}(x_1,x_2)\,g_n(x_1,x_2)\big\|^p\,dx_1\le C\int_{\mathbb{T}}\big\|W^{1/p}(t,x_2)\,f(t,x_2)\big\|^p\,dt.$$

An integration in $x_2$ yields

$$\|g_n\|_{L^p(W)}\le C^{1/p}\|f\|_{L^p(W)}.$$

Similarly, using the uniform $A_p$-condition in the second variable,

$$\big\|K_n^{\mathcal{R}}f\big\|_{L^p(W)}\le C'^{1/p}\|g_n\|_{L^p(W)}.$$

It follows that the family $\{K_n^{\mathcal{R}}\}_{n\in\mathbb{N}^2}$ is uniformly bounded on $L^p(W)$.

We now turn to a converse type result to Proposition 5. Proposition 7 will show that well-localized trigonometric convolution kernels of product type can only induce a uniformly bounded family of operators on $L^p(W)$ when $W\in A_p(\mathcal{R})$.

We need the following lemma, which gives an estimate of the norm of integral operators on $L^p(W)$ with nice compactly supported kernels.

Lemma 6. * Suppose $K$ is an integral operator with a scalar kernel $k$ that satisfies $|k(x,y)|\le\epsilon\,\chi_R(x)\chi_R(y)$ for some $\epsilon>0$ and some bounded rectangle $R\in\mathcal{R}$. For $W\in A_p(\mathcal{R})$, $1<p<\infty$, there exists a constant $c\ge1$ independent of the particular choice of $\epsilon$ and $R$ such that the norm of $K$ on $L^p(W)$ is at most $c\,\epsilon\,|R|\,A_W(R)^{1/p}$, with $A_W(R)$ given by the left-hand side of (8). Moreover, the kernel $\epsilon\,\chi_R(x)\chi_R(y)$ induces an operator with norm at least $c^{-1}\epsilon\,|R|\,A_W(R)^{1/p}$ on $L^p(W)$. *

The proof of Lemma 6 for nonproduct matrix $A_p$-weights can be found in Goldberg [10]. We leave the straightforward adaptation of the proof in [10] to the product case to the reader.

We can now give a precise statement of Proposition 7. For a trigonometric polynomial $q(t)=\sum_k c_ke^{ikt}$, we let $\deg(q):=\max\{|k|:c_k\ne0\}$.

Proposition 7. * Let $W$ be a periodic matrix weight, and let $\{k_n\}_{n\in\mathbb{N}}$ be a sequence of real-valued trigonometric convolution kernels defined on $\mathbb{T}$. Assume there exist constants $c_1,c_2>0$ such that $\deg(k_n)\le c_1n$, with $k_n(0)=\|k_n\|_{L^\infty(\mathbb{T})}\ge c_2n$ for $n\in\mathbb{N}$. Suppose that the corresponding product kernels

$$k_n^{\mathcal{R}}(x):=k_{n_1}(x_1)\cdots k_{n_d}(x_d),\qquad n=(n_1,\dots,n_d)\in\mathbb{N}^d,$$

induce a uniformly bounded family of convolution operators on $L^p(W)$. Then $W\in A_p(\mathcal{R})$. *

*Proof. *We have to estimate the left-hand side of (8) for an arbitrary rectangle $R\in\mathcal{R}$. The idea is to form a suitable product kernel that is “large” on $R$ in the sense that the corresponding operator can be well approximated by an integral operator of the type considered in Lemma 6.

By assumption, the kernel $k_n$ is real with $\deg(k_n)\le c_1n$ and $\|k_n\|_{L^\infty(\mathbb{T})}=k_n(0)$, so by Bernstein's inequality, $\|k_n'\|_{L^\infty(\mathbb{T})}\le c_1n\,k_n(0)$. We can thus find an integer $A$ (independent of $n$) such that for $|t|\le\pi/(An)$ we have $|k_n(t)-k_n(0)|\le(4c^2d)^{-1}k_n(0)$, where $c$ is the constant from Lemma 6.

Let a rectangle $R=I_1\times\cdots\times I_d\in\mathcal{R}$ be given, with $|I_j|$ denoting the length of $I_j$; since all the objects involved are $2\pi$-periodic, we may assume that $|I_j|\le\pi/A$ for each $j$. For each $j$, we choose the largest integer $n_j\ge1$ such that

$$|I_j|\le\frac{\pi}{An_j},$$

and we replace $I_j$ with the interval $I_j'\supseteq I_j$ of length $\pi/(An_j)$ having the same center, obtaining a possibly larger rectangle $R'\supseteq R$ with $|R'|\le2^d|R|$. By Lemma 4(a), there exists a universal constant $C_0$ such that $A_W(R)\le C_0\,A_W(R')$, where $A_W(R)$ denotes the left-hand side of (8). Notice that for $x,y\in R'$, we have $|x_j-y_j|\le\pi/(An_j)$ for each $j$, so

$$\Big|k_n^{\mathcal{R}}(x-y)-\lambda\Big|\le\frac{\lambda}{2c^2},\qquad\lambda:=\prod_{j=1}^{d}k_{n_j}(0),$$

by the estimate from the previous paragraph.
For notational convenience we put $\nu:=\prod_{j=1}^{d}n_j$ and form the product kernel

$$k_n^{\mathcal{R}}(x)=k_{n_1}(x_1)\cdots k_{n_d}(x_d),\qquad n=(n_1,\dots,n_d).$$

Recall that $\lambda=\prod_jk_{n_j}(0)\ge c_2^d\,\nu$, while $|R'|=(\pi/A)^d/\nu$, so that $\lambda\,|R'|\ge(c_2\pi/A)^d$.
The plan of attack is to use the simple fact that $\|K_n^{\mathcal{R}}\|_{L^p(W)\to L^p(W)}$ is uniformly bounded in both $n$ and $R$, where $K_n^{\mathcal{R}}f:=k_n^{\mathcal{R}}*f$. We notice that the restricted operator $f\mapsto\chi_{R'}\,K_n^{\mathcal{R}}(\chi_{R'}f)$ has integral kernel

$$h(x,y)=k_n^{\mathcal{R}}(x-y)\,\chi_{R'}(x)\,\chi_{R'}(y)$$

(we suppress the normalization factor of the convolution, which only affects the constants).
We wish to estimate the operator norm of $f\mapsto\chi_{R'}\,K_n^{\mathcal{R}}(\chi_{R'}f)$ from below. For that purpose we first consider the operator with kernel

$$h_e(x,y):=\big[k_n^{\mathcal{R}}(x-y)-\lambda\big]\,\chi_{R'}(x)\,\chi_{R'}(y),$$

so that the kernel of the restricted operator splits as $h_e(x,y)+\lambda\,\chi_{R'}(x)\chi_{R'}(y)$.
Notice that the estimate on $|k_n^{\mathcal{R}}(x-y)-\lambda|$ for $x,y\in R'$ obtained above implies the following size estimate:

$$|h_e(x,y)|\le\frac{\lambda}{2c^2}\,\chi_{R'}(x)\,\chi_{R'}(y).$$
According to Lemma 6, the kernel $h_e$ induces an operator of norm at most $\frac{\lambda}{2c}|R'|\,A_W(R')^{1/p}$ on $L^p(W)$. At the same time, Lemma 6 shows that the operator with kernel $\lambda\,\chi_{R'}(x)\chi_{R'}(y)$ has norm at least $\frac{\lambda}{c}|R'|\,A_W(R')^{1/p}$ on $L^p(W)$. The triangle inequality for operator norms now implies that

$$\big\|\chi_{R'}\,K_n^{\mathcal{R}}(\chi_{R'}\,\cdot\,)\big\|_{L^p(W)\to L^p(W)}\ge\frac{\lambda}{c}|R'|\,A_W(R')^{1/p}-\frac{\lambda}{2c}|R'|\,A_W(R')^{1/p}=\frac{\lambda}{2c}|R'|\,A_W(R')^{1/p},$$
so $\lambda\,|R'|\,A_W(R')^{1/p}\le2c\sup_m\|K_m^{\mathcal{R}}\|_{L^p(W)\to L^p(W)}<\infty$, using that multiplication by $\chi_{R'}$ has norm at most one on $L^p(W)$. Moreover, by the choice of the intervals $I_j'$, we see that $\lambda\,|R'|\ge(c_2\pi/A)^d$, so we may conclude that

$$A_W(R)\le C_0\,A_W(R')\le C_0\left(\frac{2c}{(c_2\pi/A)^d}\sup_m\big\|K_m^{\mathcal{R}}\big\|_{L^p(W)\to L^p(W)}\right)^{p},$$

with constant independent of $R$. We may finally conclude that $W\in A_p(\mathcal{R})$.

#### 3. Summation of Multiple Trigonometric Series

This section contains applications of the results of Section 2 to convolution operators induced by rectangular Dirichlet, Fejér, and Jackson kernels. The Dirichlet kernels correspond to standard rectangular trigonometric summation while the Fejér kernels generate the corresponding Cesàro means. The Jackson kernels are (normalized) squares of the Fejér kernels, and they induce the well-known Jackson approximation by trigonometric polynomials.

We begin by studying the univariate Dirichlet kernel. The Hilbert transform is defined on $L^p(\mathbb{T})$, $1\le p<\infty$, by

$$Hf(x):=\mathrm{p.v.}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)\cot\Big(\frac{x-t}{2}\Big)\,dt,$$

so that $H(e^{ikt})=-i\,\mathrm{sign}(k)\,e^{ikt}$.
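Numerically, the periodic Hilbert transform is most conveniently realized through its Fourier multiplier $-i\,\mathrm{sign}(k)$. The following sketch (our illustration, scalar case, one common sign convention) checks the textbook identities $H\cos=\sin$ and $H\sin=-\cos$:

```python
import numpy as np

def periodic_hilbert(f_samples):
    """Conjugate function (periodic Hilbert transform) via the Fourier
    multiplier -i*sign(k), for samples on a uniform grid of [0, 2*pi)."""
    n = len(f_samples)
    k = np.fft.fftfreq(n) * n                     # integer frequencies
    return np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f_samples)).real

x = 2 * np.pi * np.arange(64) / 64
# periodic_hilbert(np.cos(x)) recovers sin(x); constants are annihilated
# since sign(0) = 0.
```

The multiplier form makes the boundedness questions of this section transparent: $H$ is a combination of the two frequency half-plane projections, which is exactly how it enters the proof of Lemma 9 below.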
We lift $H$ to a linear operator on $L^p(W)$, for any matrix weight $W$, by letting it act coordinatewise.

Treil and Volberg completely characterized when the Hilbert transform is bounded in the matrix case on $L^2(W)$; see [11]. Later, Nazarov and Treil introduced a new “Bellman function” method [6] to extend the theory to $L^p(W)$, $1<p<\infty$. Volberg presented a different solution to the matrix weighted boundedness of the Hilbert transform via Littlewood–Paley theory [7]. The fundamental result is the following.

Theorem 8 (see [6, 7, 11]). * Let $W$ be a matrix weight on $\mathbb{T}$. Suppose $1<p<\infty$. Then the Hilbert transform is bounded on $L^p(W)$ if and only if $W\in A_p$. *

We recall that the univariate Dirichlet kernel is given by

$$D_n(t):=\sum_{k=-n}^{n}e^{ikt}=\frac{\sin\big((n+\tfrac12)t\big)}{\sin(t/2)},$$
and for $n\in\mathbb{N}_0$ we define the associated partial sum operators,

$$S_nf(x):=\frac{1}{2\pi}\int_{-\pi}^{\pi}D_n(x-t)f(t)\,dt.$$
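The closed form and the peak behavior of $D_n$ are easy to confirm numerically. The following sketch (grids and the threshold $|t|\le1/(2n+1)$ are illustrative choices of ours) checks the closed form, the value $D_n(0)=\|D_n\|_{L^\infty}=2n+1$, and the kind of localization near $t=0$ that Proposition 7 exploits:

```python
import numpy as np

def dirichlet(n, t):
    """D_n(t) = 1 + 2*sum_{k=1}^n cos(kt); this form is safe at t = 0."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return 1 + 2 * np.cos(np.outer(t, np.arange(1, n + 1))).sum(axis=1)

n = 7
t = np.linspace(0.05, 2 * np.pi - 0.05, 400)        # stay away from t = 0
closed_form = np.sin((n + 0.5) * t) / np.sin(t / 2)
# Near the origin D_n stays above half its peak value 2n+1:
near_zero = np.linspace(-1 / (2 * n + 1), 1 / (2 * n + 1), 101)
```

The lower bound on the interval of width $\sim1/n$ is precisely the localization property that the Dirichlet, Fejér, and Jackson kernels all share.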
We have the following lemma which follows easily from Theorem 8.

Lemma 9. * Let $W$ be a matrix weight in $A_p$, $1<p<\infty$. Then the partial sum operators $\{S_n\}_{n\in\mathbb{N}_0}$ are uniformly bounded on $L^p(W)$. *

* Proof. * We let $P_+f:=\tfrac12(f+iHf+S_0f)$ denote the Riesz projection onto the nonnegative frequencies for $f\in L^p(W)$, where $S_0$ is the $0$th-order partial sum operator. It follows that $P_+$ is bounded on $L^p(W)$ since $H$ is bounded according to Theorem 8, and $S_0$ is bounded according to [12, Lemma 1.5]. Notice that the modulation $M_mf(t):=e^{imt}f(t)$, $m\in\mathbb{Z}$, is a norm preserving operator on $L^p(W)$, just as in the scalar case. Then we observe that

$$S_n=M_{-n}P_+M_n-M_{n+1}P_+M_{-(n+1)},$$

and the result follows.
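The modulation identity above can be checked discretely in the scalar case. In the sketch below (an illustration of ours), the input is bandlimited so that FFT wrap-around, an artifact of the discretization rather than of the identity, cannot occur:

```python
import numpy as np

# Check S_n = M_{-n} P_+ M_n - M_{n+1} P_+ M_{-(n+1)}, where P_+ keeps
# frequencies k >= 0 and M_m is modulation by e^{imt}.
N = 64
t = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N) * N

def P_plus(f):
    """Riesz projection: keep Fourier coefficients with k >= 0."""
    return np.fft.ifft(np.where(k >= 0, np.fft.fft(f), 0))

def M(m, f):
    """Modulation by e^{imt}."""
    return np.exp(1j * m * t) * f

def S(n, f):
    """Partial sum: keep coefficients with |k| <= n."""
    return np.fft.ifft(np.where(np.abs(k) <= n, np.fft.fft(f), 0))

f = np.cos(3 * t) + 2 * np.sin(7 * t) + 0.5     # frequencies 0, ±3, ±7
n = 5
lhs = S(n, f)
rhs = M(-n, P_plus(M(n, f))) - M(n + 1, P_plus(M(-(n + 1), f)))
```

The two modulated Riesz projections carve out exactly the frequency band $-n\le k\le n$, which is why uniform bounds for $P_+$ transfer to uniform bounds for $\{S_n\}$.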

For $n=(n_1,\dots,n_d)\in\mathbb{N}_0^d$, we form the product kernel $D_n^{\mathcal{R}}(x):=D_{n_1}(x_1)\cdots D_{n_d}(x_d)$. One has

$$S_nf(x)=\frac{1}{(2\pi)^d}\int_{\mathbb{T}^d}D_n^{\mathcal{R}}(x-t)f(t)\,dt.$$

We notice that $\deg(D_n)=n$ and $D_n(0)=\|D_n\|_{L^\infty(\mathbb{T})}=2n+1$, so the following corollary follows directly from Propositions 5 and 7 and Lemma 9.

Corollary 10. * Let $W$ be a matrix weight. For $1<p<\infty$, the operators $\{S_n\}_{n\in\mathbb{N}_0^d}$ are uniformly bounded on $L^p(W)$ if and only if $W\in A_p(\mathcal{R})$. *

*Remark 11. *It is easy to verify that vectors of trigonometric polynomials are dense in $L^p(W)$, $1<p<\infty$, whenever $W$ is a matrix weight (since $W\in L^1(\mathbb{T}^d)$, each entry in $W^{1/p}$ is in $L^p(\mathbb{T}^d)$). It therefore follows by standard techniques that the family $\{S_n\}$ is uniformly bounded on $L^p(W)$ if and only if $S_nf\to f$ in $L^p(W)$, as $\min_jn_j\to\infty$, for all $f\in L^p(W)$.

Corollary 10 relies on basic localization properties of the Dirichlet kernel. However, many well-known summation kernels share the necessary properties needed to apply Propositions 5 and 7. Let us illustrate this fact by considering two specific examples.

The rectangular Cesàro summation is given by

$$C_nf:=F_n^{\mathcal{R}}*f,\qquad n\in\mathbb{N}_0^d,$$

with the product Fejér kernel given by $F_n^{\mathcal{R}}(x):=F_{n_1}(x_1)\cdots F_{n_d}(x_d)$, where the scalar Fejér kernel is defined by

$$F_n(t):=\sum_{k=-n}^{n}\Big(1-\frac{|k|}{n+1}\Big)e^{ikt}=\frac{1}{n+1}\left(\frac{\sin\big(\tfrac{(n+1)t}{2}\big)}{\sin(t/2)}\right)^{2}.$$
Notice that $\deg(F_n)=n$ and $F_n(0)=\|F_n\|_{L^\infty(\mathbb{T})}=n+1$. The scalar Jackson kernel is the normalized square of the Fejér kernel, given by

$$J_n(t):=c_nF_n(t)^2,\qquad c_n:=\Big(\frac{1}{2\pi}\int_{-\pi}^{\pi}F_n(t)^2\,dt\Big)^{-1},$$

so that $\frac{1}{2\pi}\int_{-\pi}^{\pi}J_n(t)\,dt=1$.
The corresponding product kernel is $J_n^{\mathcal{R}}(x):=J_{n_1}(x_1)\cdots J_{n_d}(x_d)$, $n\in\mathbb{N}^d$, and the rectangular Jackson summation operator is given by $\mathcal{J}_nf:=J_n^{\mathcal{R}}*f$. Notice that $\deg(J_n)=2n$ and $J_n(0)=\|J_n\|_{L^\infty(\mathbb{T})}\ge n$.
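The listed properties of the Fejér and Jackson kernels can be confirmed numerically. In the sketch below (helper names and grid sizes are our illustrative choices; the Jackson kernel is normalized by a grid mean, the discrete analogue of the $L^1$-normalization above):

```python
import numpy as np

def fejer(n, t):
    """F_n(t) = sum_{|k|<=n} (1 - |k|/(n+1)) e^{ikt}, written via cosines."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    k = np.arange(1, n + 1)
    return 1 + 2 * ((1 - k / (n + 1)) * np.cos(np.outer(t, k))).sum(axis=1)

n, N = 8, 256                    # N > 4n, so the grid means below are exact
t = 2 * np.pi * np.arange(N) / N
F = fejer(n, t)
# Closed form (1/(n+1)) * (sin((n+1)t/2)/sin(t/2))^2, away from t = 0:
closed_form = (np.sin((n + 1) * t[1:] / 2) / np.sin(t[1:] / 2)) ** 2 / (n + 1)
J = F**2 / np.mean(F**2)         # Jackson kernel: normalized square of Fejer
```

Both kernels are nonnegative with unit mean and peak at $t=0$ with height $\gtrsim n$, so they satisfy the hypotheses of Proposition 7 just as the Dirichlet kernel does.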

We now conclude by stating the main result, which summarizes the results obtained in the present paper. The theorem shows that uniform boundedness of the rectangular operators $S_n$, $C_n$, and $\mathcal{J}_n$ on $L^p(W)$ is equivalent to the condition $W\in A_p(\mathcal{R})$.

Theorem 12. *Let $W$ be a periodic matrix weight. For $1<p<\infty$, the following conditions are equivalent: *(i)*$W\in A_p(\mathcal{R})$; *(ii)*the operators $\{S_n\}$ are uniformly bounded on $L^p(W)$; *(iii)*the operators $\{C_n\}$ are uniformly bounded on $L^p(W)$; *(iv)*the operators $\{\mathcal{J}_n\}$ are uniformly bounded on $L^p(W)$. *

* Proof. * We first notice that each of the univariate kernels $D_n$, $F_n$, and $J_n$ satisfies the hypothesis of Proposition 7, so (ii), (iii), and (iv) each implies that $W\in A_p(\mathcal{R})$. Now, suppose that $W\in A_p(\mathcal{R})$. Then (ii) holds by Corollary 10. To conclude, we just need to recall that the Cesàro and Jackson summations are both regular summation methods, so (ii) implies both (iii) and (iv).