Discrete Dynamics in Nature and Society

Volume 2011, Article ID 503561, 22 pages

http://dx.doi.org/10.1155/2011/503561

## Mean Convergence Rate of Derivatives by Lagrange Interpolation on Chebyshev Grids

Wang Xiulian and Ning Jingrui

Department of Mathematics, Tianjin Normal University, Tianjin 300387, China

Received 23 May 2011; Revised 30 August 2011; Accepted 19 September 2011

Academic Editor: Carlo Piccardi

Copyright © 2011 Wang Xiulian and Ning Jingrui. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider the rate of mean convergence of derivatives by Lagrange interpolation operators based on the Chebyshev nodes. Some estimates of error of the derivatives approximation in terms of the error of best approximation by polynomials are derived. Our results are sharp.

#### 1. Introduction and Main Results

Mean convergence of Lagrange interpolation based on the zeros of orthogonal polynomials (and possibly some additional points) has been studied for at least 70 years. There is a vast literature on this topic. The authors of [1–3] considered the simultaneous approximation by the Hermite interpolation operators, and we will consider the simultaneous approximation by Lagrange interpolation operators based on the zeros of Chebyshev polynomials. The relevant results can be found in [4–6]. We introduce these results below.

Let be a so-called generalized Jacobi weight (), and let be the zeros of the th orthogonal polynomial associated with the weight function . Let denote the Lagrange interpolating polynomial which interpolates at the zeros of . Using Markov–Bernstein type inequalities in the metric, J. Szabados and A. K. Varma [5] reduced the weighted mean convergence of derivatives to the weighted mean convergence of and obtained the following. If denotes the function space equipped with the norm, and , then, for , we have

Here and in the following, the constant (which may differ even within the same expression) is independent of and but depends on , and denotes the error of best approximation by polynomials of degree of the corresponding function in the metric.

Mastroianni and Nevai [4] obtained sharper estimates in terms of the modulus of continuity instead of the best approximation, improving several earlier results. Their proof, however, also requires a weighted Markov–Bernstein type inequality in the metric and the idea of additional points. Weight functions that do not satisfy (*) cannot be treated by their method. To deal with these cases, Du and Xu [7] considered the most important special case . Let be the zeros of , the th degree Chebyshev polynomial of the first kind. If , then the well-known Lagrange interpolation polynomial of based on is given by (see [8]) where
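As a concrete companion to this setup, the sketch below (in Python with NumPy; the function names are ours, not from [7] or [8]) builds the standard zeros x_k = cos((2k − 1)π/(2n)) of the first-kind Chebyshev polynomial T_n and evaluates the Lagrange interpolant through them via the product form of the fundamental polynomials:

```python
import numpy as np

def cheb1_nodes(n):
    """Zeros of the first-kind Chebyshev polynomial T_n:
    x_k = cos((2k - 1) * pi / (2n)), k = 1, ..., n."""
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def lagrange_interp(nodes, fvals, x):
    """Evaluate the Lagrange interpolant through (nodes, fvals) at x,
    using the product form of the fundamental polynomials l_k."""
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        lk = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)   # l_k(x_j) = delta_{kj}
        result += fvals[k] * lk
    return result

# L_n(f) has degree at most n - 1, so it reproduces such polynomials exactly.
n = 8
nodes = cheb1_nodes(n)
f = lambda x: 3 * x**5 - x**2 + 1            # degree 5 < n
x = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(lagrange_interp(nodes, f(nodes), x) - f(x)))
print(err)
```

Since the interpolant has degree at most n − 1, the reported error for a degree-5 test polynomial is at the level of machine precision.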

Du and Xu [7] obtained the following.

Theorem A. *Let be defined as above. Then, for , we have
**
and the estimation for is sharp.*

We notice that although the sharp estimate is obtained, the upper bound is not for . We now introduce a Lagrange interpolation process that improves their results.

Let be the zeros of , the th degree Chebyshev polynomial of the second kind. If , then the well-known Lagrange interpolation polynomial of based on is given by (see [9]) where
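The second-kind nodes can be written down in the same way; the following short check (our own sketch, assuming the standard formula x_k = cos(kπ/(n + 1)) for the zeros of U_n) verifies numerically that U_n, in its trigonometric form U_n(cos t) = sin((n + 1)t)/sin t, vanishes at these points:

```python
import numpy as np

def cheb2_nodes(n):
    """Zeros of the second-kind Chebyshev polynomial U_n:
    x_k = cos(k * pi / (n + 1)), k = 1, ..., n."""
    k = np.arange(1, n + 1)
    return np.cos(k * np.pi / (n + 1))

# U_n(cos t) = sin((n + 1) t) / sin t, so U_n vanishes at these nodes.
n = 7
t = np.arccos(cheb2_nodes(n))
vals = np.sin((n + 1) * t) / np.sin(t)
print(np.max(np.abs(vals)))
```

The printed maximum is zero up to rounding, confirming that all n nodes are indeed zeros of U_n.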

Firstly, we obtain the following.

Theorem 1.1. *Let be defined as above, . Then, for , we have
*

By Theorem A and Theorem 1.1, we know that has a better convergence rate than in the case . For continuous function approximation, however, we note that has the same approximation order as ; that is, if , then, for , the Hölder inequality [8, 9] gives
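The claim that both operators give the same order for continuous functions can be probed numerically. The sketch below (our own experiment, assuming the standard node formulas for T_n and U_n and using a polynomial fit to realize the interpolant) compares the sup-norm interpolation errors at the two node families for a continuous function of limited smoothness:

```python
import numpy as np

def cheb1_nodes(n):
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))   # zeros of T_n

def cheb2_nodes(n):
    k = np.arange(1, n + 1)
    return np.cos(k * np.pi / (n + 1))             # zeros of U_n

def sup_interp_error(nodes, f, m=2001):
    """Max |f - L_n f| on a uniform grid, with L_n f the interpolant
    through the given nodes (realized via a polynomial fit)."""
    coeffs = np.polyfit(nodes, f(nodes), len(nodes) - 1)
    x = np.linspace(-1.0, 1.0, m)
    return np.max(np.abs(np.polyval(coeffs, x) - f(x)))

f = lambda x: np.abs(x) ** 3      # continuous, limited smoothness at 0
n = 16
e1 = sup_interp_error(cheb1_nodes(n), f)
e2 = sup_interp_error(cheb2_nodes(n), f)
print(e1, e2)
```

Both errors come out small and of comparable size, consistent with the two operators sharing the same approximation order for continuous functions; this is an illustration, not a proof.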

What happens for higher-order derivatives? Secondly, we consider the approximation of the second derivative by and and obtain the following.

Theorem 1.2. *Let and be defined as above. Then, for , we have
**
and the estimation for or () is sharp.*

From Theorem 1.2, we know that, for the second derivative approximation, has better approximation orders than in the case .

By the same method as in the proof of Theorem 1.2, one can treat the approximation of the th order derivatives for , but the computation is more complicated, and we omit the details.

#### 2. Some Lemmas

We introduce some lemmas which are the main tools in our proof.

Lemma 2.1 (see [10, p. 519]). * If , then there exists an algebraic polynomial of degree at most such that
*

Previously, such error estimates depended on Markov–Bernstein type inequalities in the metric. In this paper, we will instead use an inequality in the metric.
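For intuition on the inequalities involved, recall the classical Markov inequality: for an algebraic polynomial p of degree n, the sup norm of p′ on [−1, 1] is at most n² times that of p, with equality for T_n at the endpoints. A small numerical check (our own illustration, not part of the proof):

```python
import numpy as np

# Markov's inequality: ||p'||_inf <= n^2 ||p||_inf on [-1, 1] for any
# algebraic polynomial p of degree n, with equality for T_n at x = +/- 1.
n = 6
x = np.linspace(-1.0, 1.0, 200001)
Tn = np.cos(n * np.arccos(x))        # T_n(x) = cos(n arccos x) on [-1, 1]
Tn_prime = np.gradient(Tn, x)        # numerical derivative of T_n
sup_T = np.max(np.abs(Tn))
sup_dT = np.max(np.abs(Tn_prime))
print(sup_T, sup_dT)                 # sup_dT is close to n**2 = 36
```

The numerically observed maximum of |T_n′| sits at the endpoints and approaches n² = 36, while ||T_n||∞ = 1, so the Markov bound is attained.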

Lemma 2.2 (see [7, p. 50]). * Let be as defined by (1.10), . Then, for any fixed ,
*

To prove our results, we need to establish another polynomial integral inequality in the metric. For its proof, we introduce two lemmas.

Lemma 2.3 (see [8, p. 914]). *Let be distinct integers between 1 and . Then, we have
**
and it is well known that
*

Let be independent variables, let be positive integers, and define

By mathematical induction we can obtain the following.

Lemma 2.4. * If is a positive integer, , then the homogeneous symmetric polynomial of degree :
**
can be represented as a homogeneous polynomial of degree about :
*

Now we give the inequality in metric which plays a key role in our paper.

Lemma 2.5. *Let be as defined by (2.1), . Then, for any fixed ,
*

*Proof. *Firstly, we will consider the special case by induction on . For , by (2.3) and (2.4), we obtain

Suppose that for , we have

For , if , then, (2.4) gives
If , then by Lemma 2.4, we know
where
From (2.3), it follows that
From (2.4), we know that, for ,
By virtue of (2.12) and (2.15), we have
From (2.11), (2.12), (2.14), and (2.16), it follows that

Now we consider the general case. For arbitrary and , it is easy to see that we can choose a positive integer satisfying and . By the Hölder inequality and (2.17), we can obtain

*Remark 2.6. *
P. Erdős and E. Feldheim [8] gave a proof for and . We give a proof by mathematical induction for completeness.
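A numerical illustration of the phenomenon behind Lemma 2.5 (our own experiment, for the first-kind nodes and exponent 4; it does not reproduce the lemma's exact statement): the fundamental polynomials form a partition of unity, and the Chebyshev-weighted mean of a power of the Lebesgue function Σ_k |l_k| stays of moderate size.

```python
import numpy as np

def cheb1_nodes(n):
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))   # zeros of T_n

def lebesgue_function(nodes, x):
    """Return (sum_k |l_k(x)|, sum_k l_k(x)) for the fundamental
    polynomials l_k of Lagrange interpolation at `nodes`."""
    total = np.zeros_like(x)
    unity = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        lk = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += np.abs(lk)
        unity += lk
    return total, unity

n = 16
t = np.linspace(0.0, np.pi, 4001)
leb, unity = lebesgue_function(cheb1_nodes(n), np.cos(t))
# sum_k l_k(x) = 1 identically (interpolation reproduces constants)
dev = np.max(np.abs(unity - 1.0))
# Chebyshev-weighted L^4 mean of sum_k |l_k|; x = cos t absorbs the weight
mean4 = np.mean(leb ** 4) ** 0.25
print(dev, mean4)
```

The substitution x = cos t turns the Chebyshev weight into the uniform measure on [0, π], so a plain grid mean approximates the weighted integral.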

#### 3. Proof of Theorem 1.1

We will consider instead of for simplicity. For , let be the polynomial of degree at most satisfying (2.1). It is easily checked that, for ,

From (3.1), we can conclude that

From (2.1), we can derive

It is easy to see that is a polynomial of degree at most . Hence,

By a direct computation, we know

Combining (3.4) and (3.5), we derive

We consider first. For an arbitrary ,

Similar to [9, p. 71], we have

By [9, p. 71], we know

Let ; then by (3.8), (3.9), and (3.10), we obtain

From (3.7), (3.11), and (3.12), we obtain that, for an arbitrary ,

From (2.8) and (3.13), we can obtain

Now we consider . Exchanging the summation order, we have

It is easy to see that

Let ; then we have

By (3.16), (3.17), and the identity

we conclude that

From (3.15) and (3.19), it follows that

For an arbitrary , by (3.20), (2.1), , and a simple computation, we can obtain

For , by (2.1) and a simple computation, we obtain

From , we derive

Hence,

Similarly,

The fact that is an algebraic polynomial of degree at most implies

Let . By (3.24) and a simple computation similar to [11, p. 204], we obtain that, for and ,

Similarly,

By virtue of (2.2) and (3.21), we have

From (3.26), (3.27), (3.28), and (3.29), it follows that

By (3.2), (3.3), (3.6), (3.14), and (3.30), we obtain the upper estimate.
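As a rough numerical companion to Theorem 1.1 (our own experiment, assuming the second-kind nodes cos(kπ/(n + 1)) and realizing the interpolant by a polynomial fit), one can watch the mean error of the differentiated interpolant shrink as n grows; this checks only the qualitative behavior, not the precise rate in the theorem.

```python
import numpy as np

def cheb2_nodes(n):
    k = np.arange(1, n + 1)
    return np.cos(k * np.pi / (n + 1))   # zeros of U_n

def deriv_mean_error(f, df, n, m=4001):
    """Approximate (1/2) * integral of |f' - (L_n f)'| over [-1, 1],
    with L_n f the interpolant at the second-kind Chebyshev nodes."""
    nodes = cheb2_nodes(n)
    coeffs = np.polyfit(nodes, f(nodes), n - 1)   # interpolating polynomial
    x = np.linspace(-1.0, 1.0, m)
    return np.mean(np.abs(np.polyval(np.polyder(coeffs), x) - df(x)))

f = lambda x: np.abs(x) ** 3      # f' = 3 x |x| has limited smoothness
df = lambda x: 3.0 * x * np.abs(x)
e8 = deriv_mean_error(f, df, 8)
e16 = deriv_mean_error(f, df, 16)
print(e8, e16)   # the mean error of the derivative shrinks as n grows
```

A test function of limited smoothness is chosen deliberately: for very smooth functions the interpolation error falls to machine precision so quickly that the trend is invisible.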

#### 4. Proof of Theorem 1.2

We consider first. For simplicity, we will consider instead of . For , let be the polynomial of degree at most satisfying (2.1). From (3.1), it follows that

From (2.1), we can derive

Similar to (3.4),

By a direct computation, we get

Equations (4.3) and (4.4) yield

We consider now. For an arbitrary , from (2.1), (3.12), and (see [9]), it follows that

From (2.8) and (4.6), we can obtain

Now we consider . From , it follows that . By (2.1) and (3.12), we have that, for an arbitrary ,

From (2.8) and (4.8), we can obtain

For the , similar to (3.15), we have

For an arbitrary , by (2.1) and a simple computation, we can obtain

For , (2.1) leads to

Similarly,

Similar to (3.30), from (4.10), (4.11), (4.12), and (4.13), it follows that

For the , similar to (3.15), we have

It is easy to verify that

For , it is easy to verify that

From (4.17), (3.16), and

we obtain

From (4.16), (4.19), (61), (3.9), and a direct computation, we get

From (4.15) and (4.20), we obtain

For , from (2.1), we can obtain

By (3.9), the Markov inequality, and , we obtain

So, for an arbitrary ,

Let . From , we can choose such that and . Then, by (4.24) and , we can obtain

From , it follows that

From (4.22), (4.25), and (4.26), it follows that

For , from (2.1) and a simple computation, we can obtain that, for ,

Let . Then, from (3.10) and

we obtain

From (2.1), it follows that

Similarly,

Similar to (3.30), from (4.28), (4.31), and (4.32), we can obtain

From (4.1), (4.2), (4.5), (4.7), (4.9), (4.14), (4.21), (4.27), and (4.33), we obtain the upper estimate.
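The second-derivative analogue of the earlier experiment (again our own sketch, assuming the second-kind nodes and a polynomial-fit interpolant; it illustrates only that the mean second-derivative error decreases with n, not the sharp orders of Theorem 1.2):

```python
import numpy as np

def cheb2_nodes(n):
    k = np.arange(1, n + 1)
    return np.cos(k * np.pi / (n + 1))   # zeros of U_n

def second_deriv_mean_error(f, d2f, n, m=4001):
    """Approximate (1/2) * integral of |f'' - (L_n f)''| over [-1, 1],
    with L_n f the interpolant at the second-kind Chebyshev nodes."""
    nodes = cheb2_nodes(n)
    coeffs = np.polyfit(nodes, f(nodes), n - 1)
    x = np.linspace(-1.0, 1.0, m)
    d2 = np.polyval(np.polyder(coeffs, 2), x)   # second derivative of L_n f
    return np.mean(np.abs(d2 - d2f(x)))

f = lambda x: np.abs(x) ** 5        # f'' = 20 |x|^3, limited smoothness
d2f = lambda x: 20.0 * np.abs(x) ** 3
e8 = second_deriv_mean_error(f, d2f, 8)
e16 = second_deriv_mean_error(f, d2f, 16)
print(e8, e16)
```

Here the test function needs two extra orders of smoothness compared with the first-derivative experiment, since differentiating the interpolant twice amplifies the error.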

On the other hand, for , let . Then,

here, is a polynomial of degree at most . Hence,

It is easy to verify that

Let ; then implies that . Therefore,

We consider in the following. For , let be the polynomial of degree at most satisfying (2.1). Then,

From (2.1), we can derive

If , then the well-known Lagrange interpolation polynomial of based on is given by

where

Similar to (3.4), we have

By a direct computation, we obtain

From (4.42) and (4.43), it follows that

Exchanging the summation order, we have

For an arbitrary ,

It is well known that

Let . Then, from (4.47) and (3.9), it follows that

Combining (4.47), (4.48), and (3.9), we obtain

From (4.45) and (4.49), it follows that