Research Article | Open Access

Oksana Bihun, Clark Mourning, "Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials", *Advances in Mathematical Physics*, vol. 2018, Article ID 4710754, 10 pages, 2018. https://doi.org/10.1155/2018/4710754

# Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials

**Academic Editor:** Andrei D. Mironov

#### Abstract

Via a generalization of the pseudospectral method for numerical solution of differential equations, a family of nonlinear algebraic identities satisfied by the zeros of a wide class of orthogonal polynomials is derived. The generalization is based on a modification of pseudospectral matrix representations of linear differential operators proposed in the paper, which allows these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for every polynomial family orthogonal with respect to a measure supported on the real line that satisfies some standard assumptions, as long as the polynomials in the family satisfy differential equations $A\,p_n(z) = q_n(z)\,p_n(z)$, where $A$ is a linear differential operator and each $q_n(z)$ is a polynomial of degree at most $\nu$; $\nu$ does not depend on $n$. The proposed identities generalize known identities for classical and Krall orthogonal polynomials, to the case of the nonclassical orthogonal polynomials that belong to the class described above. The generalized pseudospectral representations of the differential operator $A$ for the case of the Sonin-Markov orthogonal polynomials, also known as generalized Hermite polynomials, are presented. The general result is illustrated by new algebraic relations satisfied by the zeros of the Sonin-Markov polynomials.

#### 1. Introduction and Main Results

##### 1.1. Summary of Results

In this paper, we identify a class of algebraic relations satisfied by the zeros of a wide class of orthogonal polynomials. To prove the identities, we generalize the notion of pseudospectral matrix representations of linear differential operators, by allowing these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for all polynomials orthogonal with respect to a measure satisfying some standard assumptions, as long as they satisfy the differential equations
$$A\,p_n(z) = q_n(z)\,p_n(z), \quad n = 0, 1, 2, \ldots, \tag{1}$$
where $A$ is a linear differential operator and each $q_n(z)$ is a polynomial of degree at most $\nu$; $\nu$ does not depend on $n$. This includes classical orthogonal polynomials [1, 2], polynomials in the Askey scheme [3] and Krall polynomials [4], and additional classes of nonclassical orthogonal polynomials. If applied to classical or Krall orthogonal polynomials, the proposed result reduces to known identities [5–7]; see Section 1.4.

The motivation for this study stems from the understanding that zeros of orthogonal polynomials play an important role in mathematical physics, numerical analysis, and related areas. For example, zeros of some orthogonal polynomials are equilibria of important $N$-body problems [8–11]. They transpire as building blocks of remarkable isospectral matrices [12–15] and play an important role in the construction of highly accurate approximation schemes for numerical integration [16–19].

To prove certain algebraic identities satisfied by the zeros of the polynomials $p_n$, we generalize and relate the notions of spectral and pseudospectral matrix representations of linear differential operators used in the corresponding numerical methods for solving differential equations. The standard pseudospectral matrix representations of linear differential operators are based on Lagrange collocation on the real line. These representations were proposed by Calogero in the context of the numerical solution of eigenvalue and boundary value problems for linear ODEs [20, 21]; see also [8, 22–25]. Mitropolsky et al. set up a general algebraic-projection framework based on Calogero's method and furthered its applications to the solution of evolution equations in mathematical physics [26]. The convergence of Calogero's method was studied in [27].

The standard pseudospectral method was utilized in [7] to prove new properties of the zeros of Krall polynomials. While Krall polynomials are eigenfunctions of linear differential operators, the polynomial families considered in this paper satisfy differential equations (1) with the $q_n$ being *polynomials (as opposed to eigenvalues)* of degree at most $\nu$, where $\nu$ does not depend on $n$. To prove new properties of the zeros of the latter polynomial families, we propose a *generalization* of the standard pseudospectral method.

Because, in general, the differential operator $A$ in (1) raises the degree of polynomials by $\nu$, the standard pseudospectral method does not allow an *exact* discretization of differential equations (1). The main idea of the proposed generalization is to construct Lagrange collocation type matrices that *exactly* represent linear differential operators acting between spaces of polynomials of different degrees. In this paper, this goal is achieved by allowing the matrix representations to depend on two, rather than one, sets of interpolation nodes. We thus find *exact* discretizations of the differential equations (1) satisfied by the polynomials $p_n$; the discretizations are constructed using the zeros of these polynomials as the nodes. By comparing the generalized pseudospectral and the generalized spectral matrix representations of the differential operator $A$, we derive a family of algebraic identities satisfied by the zeros of the polynomials $p_n$.

The proposed generalization of the pseudospectral matrix representations of linear differential operators has applications beyond those outlined in this paper; one such application simplifies the incorporation of initial or boundary conditions into linear systems that discretize certain ODEs; see the discussion in Section 4.

We illustrate the general result of the main Theorem 1 by applying it to the case of the Sonin-Markov polynomials, also known as generalized Hermite polynomials; see [16, 28, 29] and references therein. The zeros of these polynomials play an important role in the computation of integrals of singular or oscillatory functions [17, 18] as well as in extended Lagrange interpolation on the real line [19].

To prove the identities satisfied by the zeros of the Sonin-Markov polynomials stated in Theorem 7, we compute the generalized pseudospectral matrix representations of the differential operators $\frac{d}{dz}$ and $\frac{d^2}{dz^2}$ as well as of the differential operator $A$ associated with the Sonin-Markov family, see (26) and Section 2.2, assuming that the interpolation nodes are zeros of the Sonin-Markov polynomials. The formulas for these matrix representations as well as the identities of Theorem 7 have been verified using the MATLAB programming environment, for several particular values of the relevant parameters (the degree $N$ and the parameter $\mu$; see Theorem 7 and definitions (21) and (22)).

##### 1.2. The Orthogonal Polynomial Family

Let $\{p_n(z)\}_{n=0}^{\infty}$ be a sequence of polynomials orthogonal with respect to a measure $d\rho(z)$ and the corresponding inner product $\langle f, g \rangle = \int_{\mathbb{R}} f(z)\, g(z)\, d\rho(z)$. We denote the norm associated with this inner product by $\|\cdot\|$; that is, $\|f\| = \sqrt{\langle f, f \rangle}$. Assume that $\rho$ is a Borel measure with support on the real line satisfying the following three conditions: (a) $\rho$ is positive; (b) all its moments exist and are finite; (c) $\rho$ has infinitely many points in its support.

Under the above assumptions on the measure $\rho$, the zeros of each polynomial $p_n$, $n \geq 1$, are real and simple and belong to the convex hull of the support of $\rho$; see, for example, [16].
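These properties of the zeros can be checked numerically for any concrete family. The following minimal sketch (ours, not part of the article) uses SciPy's `roots_genlaguerre`, taking the generalized Laguerre family, orthogonal with respect to $t^{1/2}e^{-t}\,dt$ on $(0,\infty)$, as an example of a measure satisfying the conditions above:

```python
import numpy as np
from scipy.special import roots_genlaguerre

# Zeros of the generalized Laguerre polynomial L_8^(0.5), which is
# orthogonal w.r.t. the measure t^0.5 * exp(-t) dt supported on (0, inf).
z, _ = roots_genlaguerre(8, 0.5)

assert np.all(z > 0)                        # zeros lie inside the support (0, inf)
assert np.all(np.diff(np.sort(z)) > 1e-12)  # zeros are simple (pairwise distinct)
```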

*Notation 1.* Here and throughout the rest of the paper, $N$ denotes a fixed integer strictly larger than 1, while $\nu$ is a fixed nonnegative integer. The small Greek letter $\alpha$ denotes an index that may take the values $N$ or $N+\nu$. The small Latin letters $j$, $k$, $m$, $s$, and so forth denote integer indices that usually run from $1$ to $N$ or from $1$ to $N+\nu$, or from $0$ to $N-1$ or from $0$ to $N+\nu-1$, see (2) and (1), and thus we indicate the range of the indices each time they are used. We reserve the letter $\ell$ to denote polynomials in Lagrange interpolation bases.

Let $\mathcal{P}_n$ denote the space of all algebraic polynomials with real coefficients of degree at most $n$. Assume that, for each $n$, the polynomials $p_0, p_1, \ldots, p_n$ form a basis of $\mathcal{P}_n$. Let $A$ be a linear differential operator acting on functions of one variable. Assume that $A$ has the following property:
$$A[\mathcal{P}_n] \subseteq \mathcal{P}_{n+\nu} \quad \text{for all } n = 0, 1, 2, \ldots. \tag{2}$$
Recall that $\nu$ is a fixed nonnegative integer; it does not depend on $n$. For example, a differential operator $\sum_{k} c_k(z)\, \frac{d^k}{dz^k}$ with polynomial coefficients satisfying $\deg c_k \leq k$ for all $k$ has property (2) with $\nu = 0$, while the operator $z^{2}\,\frac{d}{dz}$ has property (2) with $\nu = 1$.

Suppose that the orthogonal polynomials $p_n$ satisfy differential equations (1).

##### 1.3. Generalized Pseudospectral and Spectral Matrix Representations of Linear Differential Operators

In this subsection, we introduce the notions of generalized pseudospectral and generalized spectral matrix representations of the linear differential operator $A$ introduced in the previous subsection. Note that, in general, the definitions of these generalized matrix representations hold for any linear differential operator $A$, which may or may not satisfy property (2), and for $\nu$ being a (positive or negative) integer such that $N + \nu \geq 1$.

The definition of the standard pseudospectral matrix representation of $A$, as shown in [7, 25, 30, 31], is motivated by a search for an exact discretization of a differential equation
$$(A\,u)(z) = g(z) \tag{3}$$
under the assumption that it possesses a polynomial solution $u \in \mathcal{P}_{N-1}$. More precisely, choose a vector $x = (x_1, \ldots, x_N)$ of distinct real nodes and define the isomorphism $\Omega: \mathcal{P}_{N-1} \to \mathbb{R}^{N}$ by
$$\Omega[u] = \big(u(x_1), \ldots, u(x_N)\big)^{T}. \tag{4}$$
The inverse of $\Omega$ is of course given in terms of the standard Lagrange interpolation basis $\{\ell_j\}_{j=1}^{N}$ of $\mathcal{P}_{N-1}$ constructed using the nodes $x$: for every vector $v = (v_1, \ldots, v_N)^{T}$,
$$\Omega^{-1}[v](z) = \sum_{j=1}^{N} v_j\, \ell_j(z). \tag{5}$$
The standard pseudospectral matrix representation of the differential operator $A$ (assumed here to map $\mathcal{P}_{N-1}$ to itself) is defined as the unique $N \times N$ matrix $L^{c}$ that satisfies the following condition:
$$\Omega[A\,u] = L^{c}\,\Omega[u] \tag{6}$$
for all $u \in \mathcal{P}_{N-1}$. Note that the superscript "$c$" in the notation of the matrix $L^{c}$ stands for "collocation" in the spectral collocation method for the numerical solution of differential equations, also known as the pseudospectral method [25]. It is not difficult to conclude that the matrix $L^{c}$ is given componentwise by
$$L^{c}_{mj} = (A\,\ell_j)(x_m), \tag{7}$$
where $m, j = 1, \ldots, N$; see [7, 20, 21, 26, 27, 30, 31]. The last definition implies that a vector $v \in \mathbb{R}^{N}$ solves the linear system
$$L^{c}\, v = \Omega[g] \tag{8}$$
if and only if the polynomial $\Omega^{-1}[v]$ solves ODE (3). Of course, if ODE (3) does not possess a polynomial solution $u \in \mathcal{P}_{N-1}$, linear system (8) *approximates* ODE (3), in the sense that a solution $v$ of system (8), if it exists, allows one to construct an *approximate* solution of ODE (3) given by $\Omega^{-1}[v]$.
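As an illustration (ours, not from the paper), consider the simplest operator $A = \frac{d}{dz}$, which maps $\mathcal{P}_{N-1}$ to itself. Its standard pseudospectral matrix has entries $\ell_j'(x_m)$ and reproduces differentiation exactly on polynomials of degree at most $N-1$; the barycentric formulas used below are standard:

```python
import numpy as np

def diff_matrix(x):
    """Standard pseudospectral differentiation matrix:
    D[m, j] = ell_j'(x[m]), where {ell_j} is the Lagrange basis
    for the distinct nodes x (computed in barycentric form)."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)        # barycentric weights 1/prod_{k!=j}(x_j - x_k)
    D = np.zeros((n, n))
    for m in range(n):
        for j in range(n):
            if m != j:
                D[m, j] = (w[j] / w[m]) / (x[m] - x[j])
        D[m, m] = -D[m].sum()          # rows sum to zero (derivative of constants)
    return D

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # any distinct real nodes
D = diff_matrix(x)
p  = x**3 - 2 * x                           # values of a degree-3 polynomial
dp = 3 * x**2 - 2                           # values of its exact derivative
assert np.allclose(D @ p, dp)               # exact (up to roundoff) on P_{N-1}
```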

Let us now generalize this notion of pseudospectral matrix representation of the differential operator to take advantage of its property (2).

In addition to the interpolation nodes $x = (x_1, \ldots, x_N)$, consider another vector of distinct real nodes $y = (y_1, \ldots, y_{N+\nu})$. In short, we will work with two vectors of nodes $x^{(\alpha)} = \big(x_1^{(\alpha)}, \ldots, x_\alpha^{(\alpha)}\big)$, where $\alpha = N$ or $\alpha = N+\nu$ (so that $x^{(N)} = x$ and $x^{(N+\nu)} = y$), see Notation 1, and the respective Lagrange interpolation bases $\big\{\ell_j^{(\alpha)}\big\}_{j=1}^{\alpha}$. Recall that, for each $j$,
$$\ell_j^{(\alpha)}(z) = \frac{\psi^{(\alpha)}(z)}{\big(z - x_j^{(\alpha)}\big)\, \dfrac{d\psi^{(\alpha)}}{dz}\big(x_j^{(\alpha)}\big)}, \quad j = 1, \ldots, \alpha, \tag{9}$$
where $\psi^{(\alpha)}(z) = \prod_{i=1}^{\alpha} \big(z - x_i^{(\alpha)}\big)$ is the node polynomial.

We define the generalized pseudospectral matrix representation $L^{c}$ of the linear differential operator $A$ componentwise by
$$L^{c}_{mj} = \big(A\,\ell_j^{(N)}\big)(y_m), \quad m = 1, \ldots, N+\nu, \; j = 1, \ldots, N, \tag{10}$$
where, as before, the superscript "$c$" stands for "collocation." This definition is motivated by the following relation:
$$\Omega_{N+\nu}[A\,u] = L^{c}\,\Omega_{N}[u] \tag{11}$$
for all $u \in \mathcal{P}_{N-1}$, where the isomorphisms $\Omega_\alpha: \mathcal{P}_{\alpha-1} \to \mathbb{R}^{\alpha}$ are defined by (4) with $x$ being replaced with $x^{(\alpha)}$, $\alpha = N$ or $\alpha = N+\nu$. In other words, if ODE (3) has a polynomial solution $u \in \mathcal{P}_{N-1}$, then the vector $v \in \mathbb{R}^{N}$ solves the system of linear equations
$$L^{c}\, v = \Omega_{N+\nu}[g] \tag{12}$$
if and only if the polynomial $\Omega_N^{-1}[v]$ solves ODE (3).
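A small numerical sketch of this rectangular, two-node-set construction (ours; the operator $(Au)(z) = z^2 u'(z)$, which raises degree by $\nu = 1$, and the node choices are illustrative assumptions), built with SciPy's `lagrange` interpolator:

```python
import numpy as np
from scipy.interpolate import lagrange

def generalized_pseudospectral(A_apply, x, y):
    """B[m, j] = (A ell_j)(y[m]) for the Lagrange basis {ell_j} on nodes x.
    A_apply(p) must return the operator A applied to a numpy poly1d."""
    n = len(x)
    B = np.zeros((len(y), n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        ell_j = lagrange(x, e)               # ell_j(x_k) = delta_{jk}
        B[:, j] = A_apply(ell_j)(y)
    return B

# Hypothetical operator (A p)(z) = z^2 p'(z): maps P_{n} into P_{n+1}.
A_apply = lambda p: np.poly1d([1, 0, 0]) * p.deriv()

x = np.array([-1.5, -0.5, 0.5, 1.5])        # N nodes (u of degree <= N - 1)
y = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # N + nu nodes
B = generalized_pseudospectral(A_apply, x, y)

p = np.poly1d([1.0, -2.0, 0.0, 1.0])        # a degree-3 polynomial
assert np.allclose(B @ p(x), (y**2) * p.deriv()(y))  # exact discretization
```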

Using analogous motivation, we define the generalized spectral matrix representation $L^{\tau}$ of the linear differential operator $A$ componentwise by
$$L^{\tau}_{sk} = \frac{\langle A\,p_k,\; p_s \rangle}{\|p_s\|^{2}}, \quad s = 0, \ldots, N+\nu-1, \; k = 0, \ldots, N-1 \tag{13}$$
(equivalently, $A\,p_k = \sum_{s=0}^{N+\nu-1} L^{\tau}_{sk}\, p_s$). Here, the superscript "$\tau$" indicates that the $\tau$-variant of the spectral method is used [25].

We prove that the matrices $L^{c}$ and $L^{\tau}$ satisfy the following property:
$$L^{c} = \big(W^{(N+\nu)}\big)^{-1}\, L^{\tau}\, W^{(N)}, \tag{14}$$
where each of the two matrices $W^{(\alpha)}$ with $\alpha = N$ or $\alpha = N+\nu$ is the transition matrix from the orthogonal polynomial basis $\{p_0, \ldots, p_{\alpha-1}\}$ to the Lagrange interpolation basis $\big\{\ell_1^{(\alpha)}, \ldots, \ell_\alpha^{(\alpha)}\big\}$; see Theorem 8 in Section 3.

##### 1.4. Main Result: Algebraic Identities Satisfied by the Zeros of the Polynomials

Let us now assume that $x = x^{(N)} = (x_1, \ldots, x_N)$ are the zeros of the polynomial $p_N$ from the orthogonal family $\{p_n\}_{n=0}^{\infty}$, while $y = x^{(N+\nu)} = (y_1, \ldots, y_{N+\nu})$ are the zeros of the polynomial $p_{N+\nu}$. We therefore use these two sets of zeros as the nodes in the definition of the pseudospectral matrix representation $L^{c}$ of the differential operator $A$; see (10) and (9). In this case, each of the two matrices $W^{(\alpha)}$ in relation (14), where $\alpha = N$ or $\alpha = N+\nu$, can be expressed in terms of the values $p_k\big(x_j^{(\alpha)}\big)$ and the Christoffel numbers $\lambda_j^{(\alpha)}$:
$$W^{(\alpha)}_{kj} = \frac{\lambda_j^{(\alpha)}\, p_k\big(x_j^{(\alpha)}\big)}{\|p_k\|^{2}}, \quad k = 0, \ldots, \alpha-1, \; j = 1, \ldots, \alpha, \tag{15}$$
where the Christoffel numbers are defined by
$$\lambda_j^{(\alpha)} = \Biggl[\, \sum_{k=0}^{\alpha-1} \frac{\Big(p_k\big(x_j^{(\alpha)}\big)\Big)^{2}}{\|p_k\|^{2}} \Biggr]^{-1}, \quad j = 1, \ldots, \alpha; \tag{16}$$
see Theorem 8.

Recall that Christoffel numbers arise in the Gaussian quadrature numerical integration formulas; they are always positive [16]. Christoffel numbers play an important role in the proof of the main identity (18) presented in this paper, although they are eliminated from that identity in the process of inversion of the matrices $W^{(\alpha)}$: the inverses of $W^{(\alpha)}$ are given componentwise by
$$\Big(\big(W^{(\alpha)}\big)^{-1}\Big)_{jk} = p_k\big(x_j^{(\alpha)}\big), \quad j = 1, \ldots, \alpha, \; k = 0, \ldots, \alpha-1; \tag{17}$$
see the proof of Theorem 8.
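For the Hermite weight $e^{-z^2}$, the Christoffel numbers are the Gauss–Hermite weights, and their positivity together with the exactness degree $2N-1$ of the associated Gaussian rule can be verified directly (a sketch of ours using SciPy):

```python
import numpy as np
from math import gamma
from scipy.special import roots_hermite

# Gauss-Hermite rule: the nodes are the zeros of the Hermite polynomial H_N,
# and the weights are the Christoffel numbers for the weight exp(-x^2).
N = 6
nodes, lam = roots_hermite(N)

assert np.all(lam > 0)                 # Christoffel numbers are always positive

# The rule is exact for polynomials of degree <= 2N - 1:
# integral of x^k exp(-x^2) over R is 0 for odd k and Gamma((k+1)/2) for even k.
for k in range(2 * N):
    exact = 0.0 if k % 2 else gamma((k + 1) / 2)
    assert np.isclose(lam @ nodes**k, exact)
```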

Using property (14) of the matrix representations $L^{c}$ and $L^{\tau}$, together with the neat formulas (17) for the matrices $\big(W^{(\alpha)}\big)^{-1}$ in the case where the nodes $x^{(\alpha)}$ are the zeros of $p_\alpha$, $\alpha = N$ or $\alpha = N+\nu$, we prove the following algebraic identities satisfied by the zeros of the polynomials in the family $\{p_n\}_{n=0}^{\infty}$.

Theorem 1. *The zeros $x_1, \ldots, x_N$ of the polynomial $p_N$ and the zeros $y_1, \ldots, y_{N+\nu}$ of the polynomial $p_{N+\nu}$ in the orthogonal polynomial family $\{p_n\}_{n=0}^{\infty}$ of generalized eigenfunctions of the linear differential operator $A$ (see (1)) satisfy the following algebraic relations for all integers $m, k$ such that $1 \leq m \leq N+\nu$, $0 \leq k \leq N-1$:*
$$\sum_{j=1}^{N} L^{c}_{mj}\, p_k(x_j) = \sum_{s=k-\nu}^{k+\nu} L^{\tau}_{sk}\, p_s(y_m), \tag{18}$$
*where the terms with $s < 0$ are omitted and the pseudospectral and spectral matrix representations $L^{c}$ and $L^{\tau}$, respectively, are defined by (10) and (13).*

*Remark 2.* For every pair of integers $(m, k)$ such that $1 \leq m \leq N+\nu$ and $0 \leq k \leq N-1$, identity (18) relates the zeros of the polynomials $p_N$ and $p_{N+\nu}$ with the zeros of all the polynomials $p_s$ such that the index $s$ satisfies $k - \nu \leq s \leq k + \nu$.

*Remark 3.* The main identity of Theorem 1 may be recast as follows:
$$\mathbf{e}_m^{T}\, L^{c}\, \mathbf{p}_k = \sum_{s=k-\nu}^{k+\nu} L^{\tau}_{sk}\; \mathbf{e}_m^{T}\, \mathbf{q}_s \tag{19}$$
for all $1 \leq m \leq N+\nu$ and $0 \leq k \leq N-1$, where each $\mathbf{p}_k$ is a column-vector with the components $p_k(x_j)$, $j = 1, \ldots, N$, each $\mathbf{q}_s$ is a column-vector with the components $p_s(y_m)$, $m = 1, \ldots, N+\nu$, and the column-vectors $\mathbf{e}_m$ form the standard basis of $\mathbb{R}^{N+\nu}$.

Theorem 1 is proved in Section 3.

Identities (18) or, equivalently, (19) are remarkable in the sense that they reveal a deeper structure relating the linear operators $L^{\tau}$ and $L^{c}$, the generalized spectral and pseudospectral representations, respectively, of the differential operator $A$, for the case where these representations are constructed using two sets of zeros of orthogonal polynomials as the nodes.

Let us compare the main result of Theorem 1 with other results of this kind. By setting $\nu = 0$ in identities (18), (19), we obtain that
$$\sum_{j=1}^{N} L^{c}_{mj}\, p_k(x_j) = L^{\tau}_{kk}\, p_k(x_m) \tag{20}$$
for all $m, k$ such that $1 \leq m \leq N$, $0 \leq k \leq N-1$. In this case, of course, the $q_n$ are constants, see (1), and the $p_n$ are either classical or Krall orthogonal polynomials. In this special case where $\nu = 0$, the identities of Theorem 1 reduce to those reported in [7]. It was shown in [7] that, if applied to the classical Jacobi, Hermite, or Laguerre polynomials, identities (20) reduce to known identities for the zeros of these polynomials reported in [5, 6].

Therefore, identities (18), (19) generalize similar results for classical orthogonal polynomials proved in [5, 6] and for Krall polynomials proved in [7]. These identities may be considered as analogues of the properties of the zeros of the Askey scheme and generalized hypergeometric polynomials proved in [12–15], for the case of the polynomial families considered in this paper. An application of the identities proved in this paper is related to the study of the asymptotic behavior of algebraic expressions involving the zeros of orthogonal polynomials of degree $N$ as $N \to \infty$; see [32, 33].

In Section 2, we apply Theorem 1 to prove new identities satisfied by the zeros of the nonclassical Sonin-Markov orthogonal polynomials. In Section 3, “Proofs,” we elaborate on the proofs of most of the theorems of this paper, except for those that are straightforward consequences of another theorem. In Section 4, titled “Discussion and Outlook,” we summarize the results proposed in this paper and discuss their importance, possible applications, and further developments.

#### 2. Application: Properties of the Zeros of the Sonin-Markov Orthogonal Polynomials

In this section, we illustrate Theorem 1 by applying it to the case of the Sonin-Markov orthogonal polynomials, which are generalized eigenfunctions of a certain linear differential operator $A$; see (25) and (26). This application requires a computation of the generalized spectral and the generalized pseudospectral matrix representations of the differential operator $A$ associated with the Sonin-Markov polynomials. We thus proceed as follows. In Section 2.1, we define the Sonin-Markov polynomials and state their basic properties. In Sections 2.2 and 2.3, we compute the generalized pseudospectral and the generalized spectral representations, respectively, of the differential operator $A$. Finally, in Section 2.4, we provide a family of new algebraic properties satisfied by the zeros of the Sonin-Markov polynomials.

##### 2.1. Definition and Basic Properties of the Sonin-Markov Polynomials

The Sonin-Markov polynomials $\{S_n(z;\mu)\}_{n=0}^{\infty}$, which are also known as generalized Hermite polynomials, are orthogonal on the real line with respect to the weight $w(z) = |z|^{2\mu}\, e^{-z^{2}}$, $\mu > -\tfrac{1}{2}$ (see [16, 28] and references therein). These polynomials can be expressed in terms of the generalized Laguerre polynomials $L_m^{(\alpha)}$ as follows:
$$S_{2m}(z;\mu) = c_{2m}\, L_m^{(\mu - 1/2)}\big(z^{2}\big), \qquad S_{2m+1}(z;\mu) = c_{2m+1}\, z\, L_m^{(\mu + 1/2)}\big(z^{2}\big), \quad m = 0, 1, 2, \ldots, \tag{21}$$
where the coefficients $c_n$ are chosen to make the polynomials orthonormal and the leading coefficients positive. Note that the leading coefficient of $S_n(z;\mu)$ equals $(-1)^{\lfloor n/2 \rfloor}\, c_n / \lfloor n/2 \rfloor!$, where $\lfloor \cdot \rfloor$ denotes the integer part. The zeros of the Sonin-Markov polynomials are distinct, real, and symmetric with respect to the origin.
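The orthogonality of the Sonin-Markov family with respect to $|z|^{2\mu}e^{-z^2}$ can be checked numerically from the Laguerre representation above. The sketch below (ours; the value $\mu = 0.7$ and the unnormalized scaling are illustrative assumptions) uses SciPy:

```python
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

MU = 0.7  # parameter mu > -1/2 of the weight |x|^(2*mu) * exp(-x^2)

def sonin_markov(n, x):
    """Unnormalized Sonin-Markov (generalized Hermite) polynomial:
    S_{2m}(x) ~ L_m^(mu-1/2)(x^2),  S_{2m+1}(x) ~ x * L_m^(mu+1/2)(x^2)."""
    m, r = divmod(n, 2)
    val = eval_genlaguerre(m, MU - 0.5 + r, x**2)
    return x * val if r else val

def inner(m, n):
    """Inner product w.r.t. the weight |x|^(2*mu) exp(-x^2) on the real line."""
    f = lambda x: sonin_markov(m, x) * sonin_markov(n, x) \
        * abs(x)**(2 * MU) * np.exp(-x**2)
    return quad(f, -np.inf, 0.0)[0] + quad(f, 0.0, np.inf)[0]

assert abs(inner(2, 3)) < 1e-6   # opposite parity: orthogonal
assert abs(inner(2, 4)) < 1e-6   # same parity: orthogonal via Laguerre family
assert inner(3, 3) > 0           # positive squared norm
```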

The Sonin-Markov polynomials satisfy the following differential equations:
$$A\, S_n(z;\mu) = q_n(z)\, S_n(z;\mu), \qquad q_n(z) = -2n\,z^{2} + \theta_n, \quad n = 0, 1, 2, \ldots, \tag{25}$$
where $\theta_n = 0$ if $n$ is even, $\theta_n = 2\mu$ if $n$ is odd, and $A$ is the linear differential operator given by
$$A = z^{2}\, \frac{d^{2}}{dz^{2}} + 2z\,\big(\mu - z^{2}\big)\, \frac{d}{dz}. \tag{26}$$
These differential equations can be derived from the corresponding differential equations satisfied by the generalized Laguerre polynomials stated in [3]; see also [1] and formula (3.5) in [34]. Clearly, $\deg q_n \leq 2$, so we can apply Theorem 1 with $\nu = 2$ to these polynomials.

##### 2.2. The Generalized Pseudospectral Matrix Representation of the Sonin-Markov Differential Operator (26)

Let $x = (x_1, \ldots, x_N)$ be the zeros of the Sonin-Markov polynomial $S_N(z;\mu)$ defined by (21) and let $y = (y_1, \ldots, y_{N+2})$ be the zeros of the Sonin-Markov polynomial $S_{N+2}(z;\mu)$. In this subsection, we find the generalized pseudospectral matrix representation $L^{c}$ of the Sonin-Markov differential operator (26) with respect to the two vectors of nodes $x$ and $y$. To compute this representation, we use definition (10) with $\nu = 2$. In the following, we use the notation of Section 2.1.

Let $\{\ell_j\}_{j=1}^{N}$ be the Lagrange interpolation basis with respect to the nodes $x$, defined by (9) with $\alpha = N$ and $x^{(\alpha)}$ replaced with $x$. Let $D^{(1)}$ and $D^{(2)}$ be the generalized pseudospectral matrix representations of the differential operators $\frac{d}{dz}$ and $\frac{d^{2}}{dz^{2}}$, respectively, with respect to the two vectors of nodes $x$ and $y$. By definition (10), their components are given by
$$D^{(1)}_{mj} = \frac{d\ell_j}{dz}(y_m), \qquad D^{(2)}_{mj} = \frac{d^{2}\ell_j}{dz^{2}}(y_m), \quad m = 1, \ldots, N+2, \; j = 1, \ldots, N. \tag{28}$$

The generalized pseudospectral matrix representation $L^{c}$ of the differential operator $A$ is given componentwise by
$$L^{c}_{mj} = y_m^{2}\, D^{(2)}_{mj} + 2 y_m \big(\mu - y_m^{2}\big)\, D^{(1)}_{mj}, \quad m = 1, \ldots, N+2, \; j = 1, \ldots, N; \tag{29}$$
see definition (10).

By using the fact that the Sonin-Markov polynomial $S_N$ satisfies differential equations (25) and (26) and the differential equation obtained by differentiation of (25) and (26) with respect to $z$ (with $n = N$), we simplify the formulas for $D^{(1)}_{mj}$ and $D^{(2)}_{mj}$, where $m = 1, \ldots, N+2$ and $j = 1, \ldots, N$, to the closed forms (30) and (31), written in terms of the quantities defined by (32).

By using the last two expressions for the components of the matrices $D^{(1)}$ and $D^{(2)}$ in (29), we obtain the formulas (33a) and (33b) for the components of the generalized pseudospectral matrix representation $L^{c}$, where, again, the quantities involved are given by definition (32).

*Remark 4.* If $N$ is even, the roots of the Sonin-Markov polynomial $S_N$ are all distinct from zero because the roots of every generalized Laguerre polynomial are all distinct from zero; see definition (21). Therefore, case (33b) does not apply to even values of $N$.

*Remark 5.* It is known that two generalized Laguerre polynomials $L_m^{(\alpha)}$ and $L_{m+1}^{(\alpha)}$ do not have common roots [2, 16]. But then, by definition (21), two Sonin-Markov polynomials $S_N$ and $S_{N+2}$ have a common root if and only if $N$ is odd, and in that case the only common root is $z = 0$. This is why formulas (30) and (31) include the case where $y_m = x_j = 0$ (which is possible for odd $N$ but not for even $N$), but do not include the case where $y_m = x_j \neq 0$ (the latter case is not possible).

##### 2.3. The Generalized Spectral Matrix Representation of the Sonin-Markov Differential Operator (26)

The generalized spectral matrix representation $L^{\tau}$ of the operator $A$ is defined by formula (13). To find an explicit formula for the components of $L^{\tau}$, we employ recurrence relations satisfied by the Sonin-Markov polynomials.

Recall that the generalized Laguerre polynomials satisfy certain three-term recurrence relations [2, 16], which imply
$$z^{2}\, S_n(z;\mu) = a_n\, S_{n+2}(z;\mu) + b_n\, S_n(z;\mu) + d_n\, S_{n-2}(z;\mu), \tag{34}$$
with the coefficients $a_n$, $b_n$, and $d_n$ given by (35), where $S_{n-2}$ is assumed to equal zero if $n - 2 < 0$. Using recurrence relations (34) together with differential equations (25), we obtain
$$A\, S_n = -2n\,a_n\, S_{n+2} + \big(\theta_n - 2n\,b_n\big)\, S_n - 2n\,d_n\, S_{n-2}, \tag{36}$$
so that the only nonzero components of $L^{\tau}$ in column $n$ are $L^{\tau}_{n+2,n} = -2n\,a_n$, $L^{\tau}_{nn} = \theta_n - 2n\,b_n$, and $L^{\tau}_{n-2,n} = -2n\,d_n$, where $n = 0, \ldots, N-1$, $\theta_n = 0$ for even $n$ and $\theta_n = 2\mu$ for odd $n$ (see (25)), and the coefficients are given by (27a), (27b), and (35).

*Remark 6.* The coefficients $a_n$ and $b_n$ defined in (35) must not be confused with the parameters entering definitions (21) and (22) of the Sonin-Markov polynomials.

For completeness, let us mention that, as a consequence of their orthogonality, the Sonin-Markov polynomials satisfy the standard three-term recurrence relation:
$$z\, S_n(z;\mu) = u_n\, S_{n+1}(z;\mu) + u_{n-1}\, S_{n-1}(z;\mu), \tag{37}$$
where $u_n = \langle z\, S_n, S_{n+1} \rangle$ and $S_{n-1} \equiv 0$ if $n - 1 < 0$. Of course, the three-term recurrence relation (34) is a consequence of the recurrence relation (37).
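The absence of a diagonal term in this recurrence (a consequence of the symmetry of the weight) can be verified numerically: $z\,S_n$ must lie in the span of $S_{n+1}$ and $S_{n-1}$, whatever normalization is used. A sketch of ours, with the recurrence coefficients recovered by least squares rather than taken from closed-form expressions:

```python
import numpy as np
from scipy.special import eval_genlaguerre

MU = 0.7  # illustrative value of the weight parameter mu

def S(n, x):
    """Unnormalized Sonin-Markov polynomial via generalized Laguerre."""
    m, r = divmod(n, 2)
    val = eval_genlaguerre(m, MU - 0.5 + r, x**2)
    return x * val if r else val

# For the even weight |x|^(2*mu) exp(-x^2), the three-term recurrence has
# no diagonal term: x * S_n(x) = c1 * S_{n+1}(x) + c2 * S_{n-1}(x).
n = 4
x = np.linspace(0.3, 2.7, 9)                 # more sample points than unknowns
M = np.column_stack([S(n + 1, x), S(n - 1, x)])
coef, *_ = np.linalg.lstsq(M, x * S(n, x), rcond=None)
assert np.allclose(M @ coef, x * S(n, x))    # x*S_n in span{S_{n+1}, S_{n-1}}
```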

##### 2.4. Algebraic Properties of the Zeros of the Sonin-Markov Polynomials

Having found the generalized pseudospectral and the generalized spectral matrix representations $L^{c}$ and $L^{\tau}$, respectively, of the differential operator $A$, we apply Theorem 1 to the case of the Sonin-Markov polynomials. We thus obtain the following algebraic identities satisfied by their zeros.

Theorem 7. *For every pair of integers $(m, k)$ such that $1 \leq m \leq N+2$ and $0 \leq k \leq N-1$, the zeros $x_1, \ldots, x_N$ of the Sonin-Markov polynomial $S_N(z;\mu)$ and the zeros $y_1, \ldots, y_{N+2}$ of the Sonin-Markov polynomial $S_{N+2}(z;\mu)$ satisfy the following relations. If $N$ is even, then (38) holds, where $S_s \equiv 0$ if $s < 0$. If $N$ is odd, then (39) holds, where the quantities involved are defined by (32) and, as before, $S_s \equiv 0$ if $s < 0$.*

#### 3. Proofs

The proof of Theorem 1 is based on the following result.

Theorem 8. *Let $A$ be a linear differential operator that satisfies condition (2). Let $\big\{\ell_j^{(N)}\big\}_{j=1}^{N}$ be the Lagrange interpolation basis of $\mathcal{P}_{N-1}$ constructed using the distinct real nodes $x = (x_1, \ldots, x_N)$ and let $\big\{\ell_m^{(N+\nu)}\big\}_{m=1}^{N+\nu}$ be the Lagrange interpolation basis of $\mathcal{P}_{N+\nu-1}$ constructed using the distinct real nodes $y = (y_1, \ldots, y_{N+\nu})$; see (9). If the matrices $L^{c}$, $L^{\tau}$ are defined by (10) and (13), respectively, while the two matrices $W^{(\alpha)}$ with $\alpha = N$ or $\alpha = N+\nu$ are defined componentwise by*
$$W^{(\alpha)}_{kj} = \frac{\big\langle \ell_j^{(\alpha)},\; p_k \big\rangle}{\|p_k\|^{2}}, \quad k = 0, \ldots, \alpha-1, \; j = 1, \ldots, \alpha, \tag{41}$$
*then*
$$L^{c} = \big(W^{(N+\nu)}\big)^{-1}\, L^{\tau}\, W^{(N)}. \tag{42}$$
*Moreover, if the interpolation nodes $x^{(\alpha)}$ are the distinct real zeros of the polynomial $p_\alpha$ from the orthogonal family introduced in Section 1, then the transition matrix $W^{(\alpha)}$ and its inverse are given by (15) and (17), respectively, where $\alpha = N$ or $\alpha = N+\nu$.*

*Remark 9.* Let us note that $W^{(N)}$ is the transition matrix from the polynomial basis $\{p_0, \ldots, p_{N-1}\}$ to the basis $\big\{\ell_1^{(N)}, \ldots, \ell_N^{(N)}\big\}$ of $\mathcal{P}_{N-1}$, while $W^{(N+\nu)}$ is the transition matrix from the polynomial basis $\{p_0, \ldots, p_{N+\nu-1}\}$ to the basis $\big\{\ell_1^{(N+\nu)}, \ldots, \ell_{N+\nu}^{(N+\nu)}\big\}$ of $\mathcal{P}_{N+\nu-1}$.

The proof of this theorem is similar to the proof of the analogous theorem in [7]. It is provided below for the convenience of the reader.

*Proof of Theorem 8.* First, let us prove property (42). Let $u \in \mathcal{P}_{N-1}$. Then
$$u = \sum_{j=1}^{N} u(x_j)\, \ell_j^{(N)} = \sum_{k=0}^{N-1} a_k\, p_k$$
and, on the other hand,
$$A\,u = \sum_{m=1}^{N+\nu} (A\,u)(y_m)\, \ell_m^{(N+\nu)} = \sum_{s=0}^{N+\nu-1} b_s\, p_s,$$
where the coefficients $a_k$ and $b_s$, respectively, are the components of the column-vectors $a = (a_0, \ldots, a_{N-1})^{T}$ and $b = (b_0, \ldots, b_{N+\nu-1})^{T}$. To prove relation (42), we will show that $a = W^{(N)}\, \Omega_N[u]$ and $b = W^{(N+\nu)}\, \Omega_{N+\nu}[A\,u]$.

Let us expand
$$\ell_j^{(N)} = \sum_{k=0}^{N-1} W^{(N)}_{kj}\, p_k,$$
where the coefficients $W^{(N)}_{kj}$ are given by (41). Upon a substitution of this expansion into the first display above, we obtain
$$u = \sum_{k=0}^{N-1} \Biggl( \sum_{j=1}^{N} W^{(N)}_{kj}\, u(x_j) \Biggr) p_k,$$
hence $a = W^{(N)}\, \Omega_N[u]$. The same argument applied to $A\,u \in \mathcal{P}_{N+\nu-1}$ and the basis $\big\{\ell_m^{(N+\nu)}\big\}$ yields $b = W^{(N+\nu)}\, \Omega_{N+\nu}[A\,u]$. On the other hand, because $A\,p_k = \sum_{s=0}^{N+\nu-1} L^{\tau}_{sk}\, p_s$ by definition (13), we have $b = L^{\tau} a$, so that $W^{(N+\nu)}\, \Omega_{N+\nu}[A\,u] = L^{\tau}\, W^{(N)}\, \Omega_N[u]$. Because $\Omega_{N+\nu}[A\,u] = L^{c}\, \Omega_N[u]$ by (11) and $u \in \mathcal{P}_{N-1}$ is arbitrary, we conclude that $W^{(N+\nu)}\, L^{c} = L^{\tau}\, W^{(N)}$, which finishes the proof of relation (42).

Second, let us assume that $x_1^{(\alpha)}, \ldots, x_\alpha^{(\alpha)}$ are the zeros of the polynomial $p_\alpha$, where $\alpha = N$ or $\alpha = N+\nu$. Let us prove that the transition matrix $W^{(\alpha)}$ is given componentwise by (15). The Gaussian rule for approximate integration with respect to the measure $d\rho$ based on these nodes has degree of exactness $2\alpha - 1$; see, for example, the corresponding theorem in [16]. Therefore, for the polynomial $\ell_j^{(\alpha)}\, p_k$ of degree at most $2\alpha - 2$ we have
$$\big\langle \ell_j^{(\alpha)},\; p_k \big\rangle = \sum_{i=1}^{\alpha} \lambda_i^{(\alpha)}\, \ell_j^{(\alpha)}\big(x_i^{(\alpha)}\big)\, p_k\big(x_i^{(\alpha)}\big) = \lambda_j^{(\alpha)}\, p_k\big(x_j^{(\alpha)}\big),$$
which implies (15). By applying the Gaussian rule to the polynomial $p_k\, p_{k'}$, where $0 \leq k, k' \leq \alpha - 1$, we obtain
$$\sum_{j=1}^{\alpha} \lambda_j^{(\alpha)}\, p_k\big(x_j^{(\alpha)}\big)\, p_{k'}\big(x_j^{(\alpha)}\big) = \langle p_k, p_{k'} \rangle = \delta_{kk'}\, \|p_k\|^{2},$$
which implies (17).

*Proof of Theorem 1.* Theorem 1 is a straightforward consequence of identity (42) rewritten as $L^{c}\,\big(W^{(N)}\big)^{-1} = \big(W^{(N+\nu)}\big)^{-1}\, L^{\tau}$, where $x = x^{(N)}$ are the zeros of $p_N$ and $y = x^{(N+\nu)}$ are the zeros of $p_{N+\nu}$, and formula (17), both proved in Theorem 8.

We note that the index $s$ in the sum on the right-hand side of identity (18) ranges from $k - \nu$ to $k + \nu$ rather than from $0$ to $N + \nu - 1$. This is due to the following consideration. Because the polynomial family $\{p_n\}_{n=0}^{\infty}$ is orthogonal and has the property that $\{p_0, \ldots, p_n\}$ is a basis of $\mathcal{P}_n$ for each $n$, this family must satisfy a three-term recurrence relation
$$z\, p_n(z) = A_n\, p_{n+1}(z) + B_n\, p_n(z) + C_n\, p_{n-1}(z),$$
where $A_n$, $B_n$, $C_n$ are constants and $p_{n-1} \equiv 0$ if $n - 1 < 0$ [16]. From the last recurrence relation we derive
$$z^{r}\, p_k(z) = \sum_{s=k-r}^{k+r} c^{(r)}_{ks}\, p_s(z),$$
where the $c^{(r)}_{ks}$ are constants that equal zero if $s < 0$. Thus, because $A\,p_k = q_k\, p_k$ with $\deg q_k \leq \nu$, we have $L^{\tau}_{sk} = \langle q_k\, p_k,\, p_s \rangle / \|p_s\|^{2} = 0$ for all integers $s, k$ such that $0 \leq k \leq N-1$ and $0 \leq s \leq N+\nu-1$, if the index $s$ is outside of the range $k - \nu \leq s \leq k + \nu$.

#### 4. Discussion and Outlook

The identities of the main Theorem 1 are derived using a very special *exact* discretization of the differential equation
$$(A\,u)(z) = q_N(z)\, u(z); \tag{56}$$
compare with (1). This discretization is constructed using the fact that ODE (56) has a *polynomial* solution $u = p_N$, where $p_N$ is a member of the nonclassical orthogonal polynomial family $\{p_n\}_{n=0}^{\infty}$. Because the differential operator $A$ maps $\mathcal{P}_n$ to $\mathcal{P}_{n+\nu}$ (see property (2)), we choose to represent the operator $A$ by a matrix defined by (10) in terms of *two* vectors of interpolation nodes, $x$ and $y$. We thus *generalize* the notion of pseudospectral matrix representations of a linear differential operator.

This generalization can be utilized to resolve the issues related to the incorporation of initial or boundary conditions into pseudospectral methods for solving a linear ODE $\widehat{A}\,u = g$, where $\widehat{A}$ is a linear differential operator; see [30, 31]. For example, if the differential operator $\widehat{A}$ has the property $\widehat{A}[\mathcal{P}_n] \subseteq \mathcal{P}_{n-\hat{\nu}}$, where $\hat{\nu}$ is a positive integer, it is beneficial to use the generalized pseudospectral matrix representation $\widehat{L}^{c}$ of the differential operator $\widehat{A}$ with respect to two vectors of nodes $x = (x_1, \ldots, x_N)$ and $y = (y_1, \ldots, y_{N-\hat{\nu}})$. Of course, $\widehat{L}^{c}$ is defined componentwise by $\widehat{L}^{c}_{mj} = \big(\widehat{A}\,\ell_j\big)(y_m)$, $m = 1, \ldots, N-\hat{\nu}$, $j = 1, \ldots, N$; compare with (10), where $\{\ell_j\}_{j=1}^{N}$ is the standard Lagrange interpolation basis with respect to the nodes $x$.

We may then approximate the ODE $\widehat{A}\,u = g$ by the system of linear equations $\widehat{L}^{c}\, v = \big(g(y_1), \ldots, g(y_{N-\hat{\nu}})\big)^{T}$, where $v \in \mathbb{R}^{N}$; compare with (4). The linear system for the $N$ components of $v$ consists of $N - \hat{\nu}$ linear equations, which allows incorporating $\hat{\nu}$ boundary or initial conditions into the system without making the system overdetermined. Once the solution $v$ of this augmented linear system is found, an approximate solution of the ODE is given by $u \approx \sum_{j=1}^{N} v_j\, \ell_j$, where $\{\ell_j\}_{j=1}^{N}$ is the standard Lagrange interpolation basis with respect to the nodes $x$.
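A minimal sketch of this procedure (ours; the operator $\widehat{A} = d/dz$, the right-hand side, and the node choices are illustrative assumptions): the rectangular collocation matrix contributes one equation per $y$-node, and one appended row enforces the initial condition, yielding a square, uniquely solvable system:

```python
import numpy as np
from scipy.interpolate import lagrange

# Solve u'(z) = 3 z^2 with u(0) = 1; exact solution u(z) = z^3 + 1.
x = np.array([-1.0, -0.5, 0.5, 1.0])   # N = 4 nodes (u of degree <= 3)
y = np.array([-0.8, 0.0, 0.8])         # N - 1 = 3 nodes (u' of degree <= 2)

n = len(x)
B = np.zeros((len(y) + 1, n))          # collocation rows + one condition row
for j in range(n):
    e = np.zeros(n); e[j] = 1.0
    ell = lagrange(x, e)               # Lagrange basis polynomial ell_j
    B[:-1, j] = ell.deriv()(y)         # (d/dz ell_j)(y_m): rectangular block
    B[-1, j] = ell(0.0)                # appended row enforcing u(0) = 1

rhs = np.append(3 * y**2, 1.0)
u = np.linalg.solve(B, rhs)            # square (N x N) augmented system
assert np.allclose(u, x**3 + 1)        # recovers the exact solution at the nodes
```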

The ODE (56) may also be discretized using only one vector of nodes and the standard pseudospectral