Abstract

The aim of this paper is to connect the zeros of polynomials in two variables with the eigenvalues of a self-adjoint operator. This is done by means of a functional-analytic method. The polynomials in two variables are assumed to satisfy a five-term recurrence relation, similar to the three-term recurrence relation satisfied by the classical orthogonal polynomials.

1. Introduction

Orthogonal polynomials in two or more variables (also called multivariate orthogonal polynomials) constitute a very old subject and have been investigated by many authors using various approaches. The usefulness and applications of classical orthogonal polynomials (COP) in one variable are very well known, and this alone is strong motivation for generalizing several results of COP to multivariate polynomials. Moreover, the potential application of multivariate orthogonal polynomials in approximation techniques and numerical methods is another strong motivation.

For example, in numerical integration, the Gauss quadrature formula
$$\int_a^b f(x)\, w(x)\, dx \simeq \sum_{k=1}^{n} \lambda_{n,k}\, f(x_{n,k}), \tag{1}$$
where $x_{n,k}$, $k = 1, \dots, n$, are the zeros of the polynomials $p_n(x)$, which are orthogonal in $[a, b]$ with respect to $w(x)$, gives an approximation of the integral on the left-hand side of (1). It could be of interest to generalize (1) to two dimensions, having, instead of $p_n(x)$, the two-variable polynomials $P_{n,m}(x, y)$. Also, COP are used in the approximation of functions of one variable by uniquely determined series of the form
$$f(x) \simeq \sum_{n=0}^{\infty} a_n\, p_n(x), \tag{2}$$
where $\{p_n(x)\}$ is a sequence of COP. In a similar way, functions of two variables could be approximated by analogous double series involving orthogonal bivariate polynomials $P_{n,m}(x, y)$. Moreover, an approximation of form (2) is at the "heart" of pseudospectral numerical techniques, such as the Chebyshev pseudospectral method, where the $p_n(x)$ are the well-known Chebyshev polynomials $T_n(x)$. Such techniques are used for the numerical solution of one-dimensional boundary value problems, and the corresponding solution is computed by evaluating the truncated right-hand side of (2) at specific grid points, namely, the Gauss-Lobatto grid points. (See, e.g., [1].) It would be interesting to extend all of this to bivariate orthogonal polynomials satisfying a recurrence relation similar to the one satisfied by the COP, for the straightforward numerical investigation of two-dimensional problems.
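As a minimal numerical sketch of (1) and (2) (not part of the original exposition; the integrand, the number of nodes, and the truncation order below are arbitrary illustrative choices), one can compare a Gauss-Legendre rule against a closed-form integral and fit a truncated Chebyshev series on a Gauss-Lobatto grid:

```python
import numpy as np
from numpy.polynomial import legendre, chebyshev

# Test function (an arbitrary illustrative choice) and its antiderivative.
f = lambda x: np.exp(x) * np.cos(x)
F = lambda x: np.exp(x) * (np.cos(x) + np.sin(x)) / 2

# (1): 8-point Gauss-Legendre rule on [-1, 1] with w(x) = 1; the nodes
# x_{8,k} are the zeros of the degree-8 Legendre polynomial.
nodes, weights = legendre.leggauss(8)
approx = np.sum(weights * f(nodes))
print(abs(approx - (F(1.0) - F(-1.0))))   # ~1e-15 for this smooth integrand

# (2): truncated Chebyshev series evaluated on the Gauss-Lobatto grid
# x_j = cos(pi j / 32), as in the Chebyshev pseudospectral method.
grid = np.cos(np.pi * np.arange(33) / 32)
coeffs = chebyshev.chebfit(grid, f(grid), 32)
print(np.max(np.abs(chebyshev.chebval(grid, coeffs) - f(grid))))  # ~1e-15
```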

There are various extensions in the literature of the COP to polynomials of several variables or to polynomials of complex variables. For example, in [2], a system of monomials was orthogonalized with respect to some region of the $(x, y)$-plane. In [3, 4], polynomials orthogonal with respect to a positive linear functional were considered. In [5], two-variable analogues of classical orthogonal polynomials were constructed and studied.

The techniques used for the study of multivariate polynomials, orthogonal or not, also constitute a wide variety. In many cases the polynomials studied are constructed so as to be, or eventually turn out to be, eigenfunctions of certain partial differential operators, and the properties of these polynomials or of these operators are then investigated. This is done, for example, in [6-22]. In other studies, as, for example, in [23], weight functions are constructed for certain orthogonal polynomials. Moreover, there exist several results which can be considered as generalizations of the results for COP. Such an example is [24], where a Rodrigues formula was introduced for classical orthogonal polynomials of two variables.

The topic of multivariate polynomials is quite vast, and unfortunately very few books are devoted to it. One such book is [25], which collects the majority of the results concerning orthogonal polynomials in two variables obtained up to its publication, together with a concise history of the development of the theory until then. Another book on the same subject is [26], where the theory of orthogonal polynomials of several variables is developed and presented. In particular, Chapter 3 of [26] gives the general properties of orthogonal polynomials in several variables, including (a) a three-term relation in matrix form that they satisfy, (b) a theorem analogous to Favard's theorem, (c) results on their zeros, (d) their connection to moment problems, and (e) their connection with block Jacobi matrices.

As mentioned in [26], "the three term relation in several variables is not as strong as that in one variable and as a consequence the analogous to Favard's theorem is not as strong." Also, as mentioned in [27], "the classical orthogonal polynomials have several extensions to polynomials of two variables, depending on the geometric region of the support set of the measure."

An especially interesting subject is the zeros of multivariate polynomials. However, the zero set of, for example, a polynomial in two variables consists in general of isolated points or of curves in the plane. Thus, results for such zeros could not have much in common with the results on the zeros of COP. If, however, common zeros are taken into consideration, then there are many similarities with the one-dimensional case. Results of this kind can be found in [26, 28], and they are analogous to the results stated and proved in Sections 3 and 4 of the present paper. The locations of common zeros of orthogonal polynomials in two variables were also studied in [29].

In this paper, a family of polynomials in two variables (2D-polynomials) $P_{n,m}(x, y)$, of degrees $n$ and $m$ with respect to $x$ and $y$, respectively, $n, m = 0, 1, 2, \dots$, is considered satisfying the recurrence relation
$$(x + y)\, P_{n,m}(x, y) = a_{n+1} P_{n+1,m}(x, y) + b_n P_{n,m}(x, y) + a_n P_{n-1,m}(x, y) + c_{m+1} P_{n,m+1}(x, y) + d_m P_{n,m}(x, y) + c_m P_{n,m-1}(x, y) \tag{3}$$
for $n = 0, 1, 2, \dots$, $m = 0, 1, 2, \dots$, with
$$P_{-1,m}(x, y) = 0, \qquad P_{0,m}(x, y) = q_m(y) \tag{4}$$
for $m = 0, 1, 2, \dots$, where $q_m(y)$ are known polynomials of $y$ of degree $m$, with $q_0(y)$ a nonzero constant and $q_{-1}(y) = 0$.

Instead of (4) one may also use
$$P_{n,-1}(x, y) = 0, \qquad P_{n,0}(x, y) = p_n(x) \tag{5}$$
for $n = 0, 1, 2, \dots$, where $p_n(x)$ are known polynomials of $x$ of degree $n$, with $p_0(x)$ a nonzero constant and $p_{-1}(x) = 0$. Obviously, relations (4) and (5) are analogous.

If one makes the conventions
(C1) $m = 0$, $y = 0$, $c_0 = 0$, $c_1 = 0$, $d_0 = 0$, $q_0(y) = 1$, $a_n = \alpha_n$, $b_n = \beta_n$, and $P_{n,0}(x, y) = p_n(x)$, or
(C2) $n = 0$, $x = 0$, $a_0 = 0$, $a_1 = 0$, $b_0 = 0$, $p_0(x) = 1$, $c_m = \alpha_m$, $d_m = \beta_m$, and $P_{0,m}(x, y) = q_m(y)$,
then (3), (4) or (3), (5) reduce to the well-known recurrence relation for the COP of one variable:
$$x\, p_n(x) = \alpha_{n+1}\, p_{n+1}(x) + \beta_n\, p_n(x) + \alpha_n\, p_{n-1}(x), \quad n = 0, 1, 2, \dots, \tag{6}$$
with $p_{-1}(x) = 0$ and $p_0(x) = 1$. Thus, in the rest of the paper it will be assumed that
(A1) $a_n$, $b_n$, $c_m$, $d_m$ are real, with $a_n \neq 0$ and $c_m \neq 0$, for all $n$, $m$ not simultaneously zero.
By making convention (C1) or (C2), the results of the present paper are reduced to the corresponding results for the COP. Also, it should be noted that, although the terms $b_n P_{n,m}(x, y)$ and $d_m P_{n,m}(x, y)$ can be combined into one, it is better to keep them apart for reasons that will be made clear in Sections 3 and 4 and that have to do with the similarity of (3) to (6). This is one reason for studying (3).

Another motivation for considering (3) comes from the notion of an orthogonal basis. It is well known (see, e.g., [30, p. 32]) that "if $\{\phi_n(x)\}$ is an orthonormal basis for $L^2(\mathbb{R})$, then $\{\phi_n(x)\, \phi_m(y)\}$ forms an orthonormal basis for $L^2(\mathbb{R}^2)$." It is straightforward to prove that "if $\{p_n(x)\}$ is an orthonormal basis for $L^2(a, b)$ with respect to the weight function $w_1(x)$ and $\{q_m(y)\}$ is an orthonormal basis for $L^2(c, d)$ with respect to the weight function $w_2(y)$, then $\{p_n(x)\, q_m(y)\}$ is an orthonormal basis for $L^2((a, b) \times (c, d))$ with respect to $w_1(x)\, w_2(y)$." Examples of such orthogonal bases are the COP satisfying (6).
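A small numerical check of the quoted tensor-product statement, in the hypothetical case of orthonormal Legendre polynomials on $[-1, 1]$ (so that $w_1 = w_2 = 1$; the degrees used are arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre

# Orthonormal Legendre basis on [-1, 1] (weight w = 1):
# phi_n(x) = sqrt((2n + 1) / 2) * P_n(x).
def phi(n, x):
    return np.sqrt((2 * n + 1) / 2) * legendre.legval(x, np.eye(n + 1)[n])

# A Gauss rule exact for all products of the degrees used below.
x, w = legendre.leggauss(12)

# The 2D inner product of phi_n(x) phi_m(y) and phi_j(x) phi_k(y) over
# the square [-1, 1]^2 factorizes into a product of two 1D integrals.
def inner2d(n, m, j, k):
    return np.sum(w * phi(n, x) * phi(j, x)) * np.sum(w * phi(m, x) * phi(k, x))

print(inner2d(2, 3, 2, 3))   # ~1.0  (same basis element)
print(inner2d(2, 3, 1, 3))   # ~0.0  (orthogonal elements)
```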

Suppose that two families of COP, $\{p_n(x)\}$ and $\{q_m(y)\}$, satisfy (6) with coefficients $\alpha_n$, $\beta_n$ and $\gamma_m$, $\delta_m$, respectively. Then the product polynomials $P_{n,m}(x, y) = p_n(x)\, q_m(y)$ are also orthogonal and satisfy the recurrence relation
$$(x + y)\, P_{n,m}(x, y) = \alpha_{n+1} P_{n+1,m}(x, y) + \beta_n P_{n,m}(x, y) + \alpha_n P_{n-1,m}(x, y) + \gamma_{m+1} P_{n,m+1}(x, y) + \delta_m P_{n,m}(x, y) + \gamma_m P_{n,m-1}(x, y),$$
which is a special case of (3).
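A concrete instance of this construction, assuming the Chebyshev case (where $x\, T_n(x) = [T_{n+1}(x) + T_{n-1}(x)]/2$ for $n \geq 1$, a standard identity), can be verified numerically; the degrees and sample points below are arbitrary:

```python
import numpy as np
from numpy.polynomial import chebyshev

# For Chebyshev polynomials, x T_n(x) = [T_{n+1}(x) + T_{n-1}(x)] / 2
# (n >= 1), so the products P_{n,m}(x, y) = T_n(x) T_m(y) satisfy
# (x + y) P_{n,m} = 1/2 P_{n+1,m} + 1/2 P_{n-1,m}
#                 + 1/2 P_{n,m+1} + 1/2 P_{n,m-1},
# a relation of the five-term form (3) (with b_n = d_m = 0).
def T(n, x):
    return chebyshev.chebval(x, np.eye(n + 1)[n])

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 7)
y = rng.uniform(-1, 1, 7)
n, m = 3, 4
lhs = (x + y) * T(n, x) * T(m, y)
rhs = 0.5 * (T(n + 1, x) + T(n - 1, x)) * T(m, y) \
    + 0.5 * T(n, x) * (T(m + 1, y) + T(m - 1, y))
print(np.max(np.abs(lhs - rhs)))   # ~1e-16
```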

For the study of the zeros of the polynomials satisfying (3), a functional-analytic approach will be used. This approach is a generalization of the method used for COP by Ifantis, Siafarikas, and their collaborators, including the authors. Ifantis and Siafarikas introduced this technique for the study of COP in [31] and used it in a series of papers. The main idea of this method is the connection of the zeros of the polynomials $p_n(x)$ satisfying (6) with the eigenvalues of a self-adjoint tridiagonal operator in an abstract finite dimensional Hilbert space. The main advantage of this approach is that, in order to study the zeros of $p_n(x)$, one need not take into consideration the orthogonality of the polynomials or the ordinary differential equation that they satisfy, but only the corresponding recurrence relation.

Following this philosophy, the zeros of the polynomials $P_{n,m}(x, y)$ satisfying (3) will be connected with the eigenvalues of a self-adjoint operator which is the sum of two tridiagonal operators. This is also the reason for keeping the terms $b_n P_{n,m}(x, y)$ and $d_m P_{n,m}(x, y)$ apart and not combining them into one. An immediate consequence of this connection is a rough estimate of the region in which the zeros lie (see Corollary 4).

The authors believe that many interesting questions for the polynomials satisfying (3) arise which could be the aim of future study. For example, could they be orthogonal? Do they satisfy a specific type of partial differential equation?

The rest of the paper is organized as follows: In Section 2 the method of [31] is briefly presented and generalized to two dimensions. In Section 3, the common zeros of the polynomials $P_{N+1,m}(x, y)$, $m = 0, 1, \dots, M$, and $P_{n,M+1}(x, y)$, $n = 0, 1, \dots, N$, are connected with the eigenvalues of a self-adjoint operator in an abstract finite dimensional Hilbert space. As a consequence, several basic results regarding the eigenvalues of this operator are "translated" into results on the previously mentioned common zeros. These results are summarized in Corollary 4. Finally, in Section 4, the same common zeros are treated again, but now they are connected with the eigenvalues of a block matrix. Several remarks in this section correlate the results of Sections 3 and 4.

2. The Method

Consider a finite dimensional Hilbert space $H_N$ over the real field with orthonormal basis $\{e_n\}_{n=0}^{N}$. Let $V$ be the truncated shift operator
$$V e_n = e_{n+1}, \quad n = 0, 1, \dots, N - 1, \qquad V e_N = 0.$$
Then its adjoint is proved to be the operator $V^*$ defined by
$$V^* e_n = e_{n-1}, \quad n = 1, 2, \dots, N, \qquad V^* e_0 = 0.$$

It is well known that the zeros of the polynomials defined by (6) are the same as the eigenvalues of a symmetric, tridiagonal, Jacobi matrix (see, e.g., [32]). In [31], a somewhat different decomposition of this Jacobi matrix was used, namely, the tridiagonal operator
$$T_1 = A V + V^* A + B,$$
where $A$, $B$ are the diagonal operators defined by
$$A e_n = \alpha_n e_n, \qquad B e_n = \beta_n e_n, \quad n = 0, 1, \dots, N.$$
More precisely, this classical result was formulated as follows.

Theorem 1. The zeros $x$ of the polynomial $p_{N+1}(x)$ defined by (6), in the case where $\alpha_n$, $\beta_n$ are real with $\alpha_n \neq 0$, are the eigenvalues of the operator $T_1$ with corresponding eigenvector $\sum_{n=0}^{N} p_n(x)\, e_n$, and vice versa.
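A quick numerical illustration of Theorem 1, assuming the Legendre special case of (6) (the coefficients $\alpha_n = n/\sqrt{4n^2 - 1}$, $\beta_n = 0$ are the standard orthonormal Legendre ones, not taken from the paper):

```python
import numpy as np
from numpy.polynomial import legendre

# Theorem 1 in the (orthonormal) Legendre case, where (6) holds with
# alpha_n = n / sqrt(4 n^2 - 1) and beta_n = 0. On the basis e_0, ..., e_N,
# the matrix of T1 = A V + V* A + B is the (N+1) x (N+1) Jacobi matrix
# with off-diagonal entries alpha_1, ..., alpha_N and zero diagonal.
N = 5
alpha = np.array([n / np.sqrt(4.0 * n**2 - 1.0) for n in range(1, N + 1)])
T1 = np.diag(alpha, 1) + np.diag(alpha, -1)     # B = 0 for Legendre

eigenvalues = np.sort(np.linalg.eigvalsh(T1))
zeros = np.sort(legendre.legroots(np.eye(N + 2)[N + 1]))  # zeros of p_{N+1}
print(np.max(np.abs(eigenvalues - zeros)))      # ~1e-15
```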

In order to generalize this to two dimensions, consider the finite dimensional Hilbert space $H_{N,M}$ over the real field with orthonormal basis $\{e_{n,m}\}$, $n = 0, 1, \dots, N$, $m = 0, 1, \dots, M$, with inner product and norm induced by this inner product denoted as usual by $(\cdot, \cdot)$ and $\|\cdot\|$, respectively. Then, any element $f \in H_{N,M}$ can be represented by $f = \sum_{n=0}^{N} \sum_{m=0}^{M} f_{n,m}\, e_{n,m}$.

In this space, analogously to $V$, the truncated linear shift operators $V_1$ and $V_2$ can be defined by
$$V_1 e_{n,m} = e_{n+1,m}, \quad n = 0, \dots, N - 1, \qquad V_1 e_{N,m} = 0,$$
$$V_2 e_{n,m} = e_{n,m+1}, \quad m = 0, \dots, M - 1, \qquad V_2 e_{n,M} = 0.$$
It can be easily proved that the adjoint operators of $V_1$ and $V_2$ are the linear operators $V_1^*$ and $V_2^*$ defined by
$$V_1^* e_{n,m} = e_{n-1,m}, \quad V_1^* e_{0,m} = 0, \qquad V_2^* e_{n,m} = e_{n,m-1}, \quad V_2^* e_{n,0} = 0,$$
and that their norms are equal to $1$.

Analogously to the operator $T_1$, the operator
$$T = A_1 V_1 + V_1^* A_1 + B_1 + A_2 V_2 + V_2^* A_2 + B_2,$$
where $A_1$, $B_1$, $A_2$, and $B_2$ are the diagonal operators defined by
$$A_1 e_{n,m} = a_n e_{n,m}, \quad B_1 e_{n,m} = b_n e_{n,m}, \quad A_2 e_{n,m} = c_m e_{n,m}, \quad B_2 e_{n,m} = d_m e_{n,m},$$
with $n = 0, 1, \dots, N$, $m = 0, 1, \dots, M$, plays a central role in our approach.

Obviously, $T$ can be considered as the sum of the two tridiagonal operators $T_1 = A_1 V_1 + V_1^* A_1 + B_1$ and $T_2 = A_2 V_2 + V_2^* A_2 + B_2$. Moreover, when $a_n$, $b_n$, $c_m$, and $d_m$ are real, as assumed in the present paper, $T$ is self-adjoint. In addition, since $T$ is a linear operator defined on a finite dimensional Hilbert space, it is also bounded and compact. As a consequence, several results are known for the spectrum of $T$, the most characteristic of which are as follows: (i) the spectrum of $T$ is finite and coincides with its point spectrum; (ii) the eigenvalues of $T$ are real; (iii) $T$ has a complete system of eigenvectors.
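Since $A_1$, $B_1$ act only on the first index and $A_2$, $B_2$ only on the second, the matrix of $T$ is, in matrix terms, the Kronecker sum of two one-dimensional Jacobi matrices, so every eigenvalue of $T$ is a sum of one eigenvalue of each. A sketch with hypothetical coefficients (the random values below are purely illustrative):

```python
import numpy as np

# T as the Kronecker sum J1 (x) I + I (x) J2 of two Jacobi matrices:
# every eigenvalue of T is a sum of one eigenvalue of J1 and one of J2.
rng = np.random.default_rng(2)
N, M = 4, 3                       # basis indices n = 0..N, m = 0..M
a, b = rng.uniform(0.2, 1.0, N), rng.uniform(-0.5, 0.5, N + 1)
c, d = rng.uniform(0.2, 1.0, M), rng.uniform(-0.5, 0.5, M + 1)

J1 = np.diag(a, 1) + np.diag(a, -1) + np.diag(b)     # (N+1) x (N+1)
J2 = np.diag(c, 1) + np.diag(c, -1) + np.diag(d)     # (M+1) x (M+1)
T = np.kron(J1, np.eye(M + 1)) + np.kron(np.eye(N + 1), J2)

sums = np.add.outer(np.linalg.eigvalsh(J1), np.linalg.eigvalsh(J2)).ravel()
print(np.max(np.abs(np.sort(np.linalg.eigvalsh(T)) - np.sort(sums))))  # ~1e-15
```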

3. Operator Approach: Connection of Zeros of 2D-Polynomials with Eigenvalues of an Operator

In this section, the zeros of the polynomials $P_{N+1,m}(x, y)$, $m = 0, 1, \dots, M$, and $P_{n,M+1}(x, y)$, $n = 0, 1, \dots, N$, will be connected with the eigenvalues of the operator $T$. More precisely, the following holds.

Theorem 2. Consider the 2D-polynomials $P_{n,m}(x, y)$ defined by (3) and (4) or (5) under assumption (A1). (a) If the pair $(x, y)$ satisfies the system
$$P_{N+1,m}(x, y) = 0, \quad m = 0, 1, \dots, M, \qquad P_{n,M+1}(x, y) = 0, \quad n = 0, 1, \dots, N, \tag{15}$$
then $x + y$ is an eigenvalue of the operator
$$T = A_1 V_1 + V_1^* A_1 + B_1 + A_2 V_2 + V_2^* A_2 + B_2 \tag{16}$$
with corresponding eigenvector
$$f(x, y) = \sum_{n=0}^{N} \sum_{m=0}^{M} P_{n,m}(x, y)\, e_{n,m}; \tag{17}$$
that is,
$$T f(x, y) = (x + y)\, f(x, y). \tag{18}$$
(b) Conversely, if $x + y$ is an eigenvalue of the operator $T$, then the corresponding eigenvector is the element $f(x, y)$ defined by (17) and the pair $(x, y)$ satisfies system (15), provided that
(A2) the two associated sequences of polynomials appearing in the proof of part (b) are linearly independent for all $n$, $m$.

Remark 3. By repeating the proof of Theorem 2 under convention (C1) in $H_N$, it follows that if $x$ satisfies $p_{N+1}(x) = 0$, then $x$ is an eigenvalue of $T_1$ with corresponding eigenvector $\sum_{n=0}^{N} p_n(x)\, e_n$, and vice versa. Obviously, a similar result holds under convention (C2). This result is a classical result regarding the zeros of the COP and can be found, in this operator approach, for example, in [31] (see also the references therein).

Corollary 4. Assume that (A1) and (A2) hold. Then (i) there exist common zeros $(x, y)$ of the polynomials $P_{N+1,m}(x, y)$, $m = 0, 1, \dots, M$, and $P_{n,M+1}(x, y)$, $n = 0, 1, \dots, N$; (ii) for any such common zero, $x + y$ is real; (iii) the following inequality holds:
$$|x + y| \leq \|T\| \leq 2 \max_{1 \leq n \leq N} |a_n| + \max_{0 \leq n \leq N} |b_n| + 2 \max_{1 \leq m \leq M} |c_m| + \max_{0 \leq m \leq M} |d_m|,$$
where $(x, y)$ is a common zero of the above polynomials.

Proof. (i) It follows from the fact that such pairs correspond, by Theorem 2, to eigenvalues of the self-adjoint, bounded operator $T$, which certainly exist. (ii) It follows from the fact that $T$ is self-adjoint. (iii) It follows from (18) and Schwarz's inequality.
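The rough localization of part (iii) can be checked numerically; the sketch below reuses the Kronecker-sum representation of $T$ with hypothetical coefficients and verifies the triangle-inequality bound (the shifts have norm $1$):

```python
import numpy as np

# Corollary 4(iii)-style bound: ||T|| <= 2 max|a_n| + max|b_n|
#                                      + 2 max|c_m| + max|d_m|,
# so every eigenvalue x + y of T satisfies |x + y| <= this bound.
rng = np.random.default_rng(3)
N, M = 5, 4
a, b = rng.uniform(0.2, 1.0, N), rng.uniform(-0.5, 0.5, N + 1)
c, d = rng.uniform(0.2, 1.0, M), rng.uniform(-0.5, 0.5, M + 1)

J1 = np.diag(a, 1) + np.diag(a, -1) + np.diag(b)
J2 = np.diag(c, 1) + np.diag(c, -1) + np.diag(d)
T = np.kron(J1, np.eye(M + 1)) + np.kron(np.eye(N + 1), J2)

bound = (2 * np.abs(a).max() + np.abs(b).max()
         + 2 * np.abs(c).max() + np.abs(d).max())
print(np.abs(np.linalg.eigvalsh(T)).max() <= bound)   # True
```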

Proof of Theorem 2. (a) First of all, $f(x, y) \neq 0$, since, for example, $(f(x, y), e_{0,0}) = P_{0,0}(x, y) \neq 0$. In order for $f(x, y)$ to be an eigenvector of $T$ with corresponding eigenvalue $x + y$, it suffices to show that (18) holds.
Indeed, applying $T$ to (17) by means of the definitions of $V_1$, $V_1^*$, $V_2$, $V_2^*$, $A_1$, $B_1$, $A_2$, $B_2$ and taking into consideration (3), relation (18) follows if (15) holds, since, by (A1), $a_{N+1} \neq 0$ and $c_{M+1} \neq 0$.
(b) Suppose that $x + y$ is an eigenvalue of $T$. Then, there exists an element $f = \sum_{n=0}^{N} \sum_{m=0}^{M} f_{n,m}\, e_{n,m}$ of $H_{N,M}$, not identically equal to $0$, such that
$$T f = (x + y)\, f. \tag{22}$$
Suppose at first that $f_{0,m} = 0$ for all $m = 0, 1, \dots, M$. Then, by taking the inner product of both sides of (22) with $e_{0,m}$, it follows that $f_{1,m} = 0$ for all $m = 0, 1, \dots, M$. By induction, it can be easily proved that $f_{n,m} = 0$ for all $n = 0, 1, \dots, N$ and all $m = 0, 1, \dots, M$. As a consequence $f = 0$, which is a contradiction, since $f$ is an eigenvector of $T$. Thus, there exists a coefficient in the first row that does not vanish; without loss of generality it can be assumed that $f_{0,0} \neq 0$. For the rest of the coefficients $f_{0,m}$, it is assumed that they constitute a sequence depending, generally speaking, on $x$, $y$.
In order to compute all the coefficients $f_{n,m}$, the inner products of both sides of (22) with the elements $e_{n,m}$ will be considered, as well as recurrence relations (3)-(4). To begin with, taking the inner product of both sides of (22) with $e_{0,0}$ gives a first relation among the coefficients. Then, taking successively the inner products of both sides of (22) with the remaining elements $e_{n,m}$ and using (3)-(4) together with the coefficients already computed, the coefficients are identified one by one; at each step the identification is possible because, by (A2), the two sequences involved are linearly independent. By induction, it can be proved that $f_{n,m} = P_{n,m}(x, y)\, f_{0,0}$ for all $n = 0, 1, \dots, N$, $m = 0, 1, \dots, M$. Moreover, proceeding as before and taking the inner product of both sides of (22) with $e_{N,m}$ and then with $e_{n,M}$, it follows that the pair $(x, y)$ satisfies system (15). Analogously, one may start instead from the first column of coefficients $f_{n,0}$ and a known sequence depending, generally speaking, on $x$, $y$; in this case the coefficients $f_{n,m}$ will again be calculated by taking, as before, the inner products of both sides of (22) with the elements $e_{n,m}$ for all $n$, $m$ and considering also recurrence relations (3), (5).

4. Matrix Approach: Connection of Zeros of 2D-Polynomials with Eigenvalues of a Block Matrix

In this section, the zeros of the polynomials $P_{N+1,m}(x, y)$, $m = 0, 1, \dots, M$, and $P_{n,M+1}(x, y)$, $n = 0, 1, \dots, N$, will be connected with the eigenvalues of a block matrix. More precisely, for $n = 0, 1, \dots, N$, let $\mathbf{P}_n(x, y)$ denote the column vector $(P_{n,0}(x, y), P_{n,1}(x, y), \dots, P_{n,M}(x, y))^T$. Then recurrence relations (3), (4) can be rewritten in matrix form, for the values $n = 0, 1, \dots, N$, as relations connecting $\mathbf{P}_{n-1}$, $\mathbf{P}_n$, and $\mathbf{P}_{n+1}$, in which the coefficient matrices are the diagonal matrices $a_n I$, $b_n I$ and the tridiagonal matrix $J_2$ with (i) diagonal elements $d_m$, (ii) elements above the diagonal $c_{m+1}$, and (iii) elements below the diagonal $c_m$. All the above relations can be rewritten in block-matrix form as
$$\mathcal{L}\, \mathbf{P}(x, y) + \mathbf{R}(x, y) = (x + y)\, \mathbf{P}(x, y), \tag{39}$$
where $\mathbf{P}(x, y) = (\mathbf{P}_0^T, \mathbf{P}_1^T, \dots, \mathbf{P}_N^T)^T$, $\mathcal{L}$ is the block tridiagonal matrix assembled from the above matrices, with all remaining blocks equal to the zero matrix $O$, and $\mathbf{R}(x, y)$ collects the boundary terms involving $P_{N+1,m}(x, y)$ and $P_{n,M+1}(x, y)$.
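The following sketch illustrates, for hypothetical coefficients, the block structure just described and the fact (see Remark 6(b) below) that the block matrix represents the operator $T$ with respect to the basis $\{e_{n,m}\}$:

```python
import numpy as np

# Block structure: diagonal blocks b_n I + J2 and off-diagonal blocks
# a_n I, where J2 is the tridiagonal matrix built from c_m, d_m.
# Compared against the Kronecker form of T (hypothetical random data).
rng = np.random.default_rng(4)
N, M = 3, 2
a, b = rng.uniform(0.2, 1.0, N), rng.uniform(-0.5, 0.5, N + 1)
c, d = rng.uniform(0.2, 1.0, M), rng.uniform(-0.5, 0.5, M + 1)

J1 = np.diag(a, 1) + np.diag(a, -1) + np.diag(b)
J2 = np.diag(c, 1) + np.diag(c, -1) + np.diag(d)
I = np.eye(M + 1)

blocks = [[J1[i, j] * I + (J2 if i == j else 0.0 * I)
           for j in range(N + 1)] for i in range(N + 1)]
L = np.block(blocks)
T = np.kron(J1, np.eye(M + 1)) + np.kron(np.eye(N + 1), J2)
print(np.allclose(L, T))   # True
```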

From (39), the following theorem is obvious.

Theorem 5. Consider the 2D-polynomials $P_{n,m}(x, y)$ defined by (3) and (4) or (5) under assumption (A1). (a) If the pair $(x, y)$ satisfies system (15), then $x + y$ is an "eigenvalue" of the block matrix $\mathcal{L}$ with corresponding "eigenvector" $\mathbf{P}(x, y)$, in the sense that
$$\mathcal{L}\, \mathbf{P}(x, y) = (x + y)\, \mathbf{P}(x, y). \tag{40}$$
(b) Conversely, if (40) is satisfied, then the pair $(x, y)$ satisfies system (15), provided that the corresponding pair of polynomial sequences is linearly independent (see Remark 7).

Remark 6. (a) Notice that, in the case where the pair $(x, y)$ satisfies (15), $x + y$ is an eigenvalue of the block matrix $\mathcal{L}$ with corresponding eigenvector $\mathbf{P}(x, y)$, in the usual sense, since $\mathbf{P}(x, y) \neq 0$. (b) The block matrix $\mathcal{L}$ is the matrix representation of the operator $T$ defined by (16) with respect to the basis $\{e_{n,m}\}$ of the abstract Hilbert space $H_{N,M}$. (c) The analogies with the corresponding well-known results in the case of the COP are obvious.

Remark 7. By comparing Theorems 2 and 5, one immediately notices the following: (i) part (a) of Theorem 2 is analogous to part (a) of Theorem 5; (ii) for part (b) of Theorem 2 it is necessary to assume that the associated sequences of polynomials are linearly independent for all $n$, $m$, whereas for part (b) of Theorem 5 the linear independence of only one pair of sequences is needed. However, this is a natural consequence of the different but similar approaches used. More precisely, in the matrix approach, it is obtained immediately from recurrence relations (3) and (4) that the components of the "eigenvector" $\mathbf{P}(x, y)$ are the polynomials $P_{n,m}(x, y)$, $n = 0, 1, \dots, N$, $m = 0, 1, \dots, M$. However, in the operator approach, the components of the eigenvector are not immediately deduced but are proved to be the polynomials $P_{n,m}(x, y)$, $n = 0, 1, \dots, N$, $m = 0, 1, \dots, M$, and in the proof it is necessary to assume the linear independence for all $n$, $m$.

Competing Interests

The authors declare that there are no competing interests.