#### Abstract

We determine the weak asymptotic orders of the average errors of the Grünwald interpolation sequences based on the Tchebycheff nodes in the Wiener space. These results show that, for the -norm approximation, the -average error of some Grünwald interpolation sequences is weakly equivalent to the -average error of the best polynomial approximation sequence.

#### 1. Introduction and Main Results

Let be a real separable Banach space equipped with a probability measure on the Borel sets of . Let be another normed space such that is continuously embedded in . By we denote the norm in . Any such that is a measurable mapping is called an approximation operator (or just approximation). The -average error of is defined as

Since in practice the underlying function is usually given via its (exact or noisy) values at finitely many points, the approximation operator is often assumed to depend only on some function values of . Many papers, such as [1–4], studied the complexity of computing an -approximation in the average case setting. Since polynomial interpolation operators are important approximation tools in the space of continuous functions, and they depend only on some function values of , we want to know the average errors of some polynomial interpolation operators with respect to the Wiener measure. We now describe the contents in detail.

Let be the space of continuous functions defined on such that . The space is equipped with the sup norm. The Wiener measure is uniquely determined by the following property: for every , where denotes the class of all Borel measurable subsets of , and with . Its mean is zero, and its correlation operator is given by for , that is,
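These properties of the Wiener measure can be illustrated numerically: a discretized Brownian path starts at zero and has independent Gaussian increments, and its empirical covariance at two time points should match the correlation min(s, t). A minimal sketch, assuming NumPy; the function name `sample_wiener_paths` is ours, not from the paper.

```python
import numpy as np

def sample_wiener_paths(n_steps=200, n_paths=20000, seed=0):
    """Sample standard Wiener (Brownian motion) paths on [0, 1].

    Under the Wiener measure, f(0) = 0, the mean is zero, and the
    correlation (covariance) satisfies E[f(s) f(t)] = min(s, t).
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    # Independent Gaussian increments with variance dt, cumulated to a path.
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1
    )
    t = np.linspace(0.0, 1.0, n_steps + 1)
    return t, paths

t, paths = sample_wiener_paths()
# Empirical covariance at s = 0.3, t = 0.7 should be close to min(s, t) = 0.3.
i, j = np.searchsorted(t, 0.3), np.searchsorted(t, 0.7)
cov = np.mean(paths[:, i] * paths[:, j])
print(abs(cov - 0.3) < 0.02)
```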

In this paper, we specify , and for every measurable subset , we define

For , denote by the normed linear space of -integrable functions on with the following finite norm:
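The -average error of an approximation operator with respect to this -norm can be estimated by Monte Carlo over sampled Wiener paths. The sketch below (NumPy assumed) uses piecewise-linear interpolation at equidistant knots as a simple stand-in operator, not the Grünwald operator studied in this paper, and checks that finer information yields a smaller average error.

```python
import numpy as np

def p_average_error(n_knots=8, p=2, q=2, n_paths=4000, n_grid=512, seed=1):
    """Monte Carlo estimate of the p-average L_q error
        e_p(A) = ( E_w ||f - A f||_q^p )^(1/p)
    over sampled Wiener paths, where A is piecewise-linear interpolation
    of f at n_knots + 1 equidistant knots (an illustrative stand-in)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_grid + 1)
    incr = rng.normal(0.0, np.sqrt(1.0 / n_grid), size=(n_paths, n_grid))
    paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1
    )
    knots = np.linspace(0.0, 1.0, n_knots + 1)
    errs = np.empty(n_paths)
    for i in range(n_paths):
        # A f: read off f at the knots, then interpolate linearly back to t.
        approx = np.interp(t, knots, np.interp(knots, t, paths[i]))
        # ||f - A f||_q via a uniform-grid quadrature of |f - A f|^q.
        lq = np.mean(np.abs(paths[i] - approx) ** q) ** (1.0 / q)
        errs[i] = lq ** p
    return np.mean(errs) ** (1.0 / p)

e8, e32 = p_average_error(8), p_average_error(32)
print(e32 < e8)  # more function values -> smaller average error
```

For this stand-in operator the average error decays like n^{-1/2}, so quadrupling the number of knots roughly halves the estimate.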

Let be the zeros of the th degree Tchebycheff polynomial of the first kind. The well-known Grünwald interpolation polynomial of based on is given by (see [5]) where
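The classical Grünwald construction sums the squared fundamental Lagrange polynomials at the Tchebycheff zeros. A minimal sketch, assuming the standard definition G_n(f, x) = Σ_k f(x_k) ℓ_k(x)² and NumPy; the helper names are ours.

```python
import numpy as np

def chebyshev_nodes(n):
    """Zeros of the n-th Tchebycheff polynomial of the first kind T_n:
    x_k = cos((2k - 1) * pi / (2n)), k = 1, ..., n."""
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def grunwald(f_vals, nodes, x):
    """Grünwald interpolation G_n(f, x) = sum_k f(x_k) * l_k(x)^2,
    where l_k is the k-th fundamental Lagrange polynomial for `nodes`,
    evaluated here directly via its product form."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        others = np.delete(nodes, k)
        lk = np.prod((x[:, None] - others) / (xk - others), axis=1)
        out += f_vals[k] * lk ** 2
    return out

n = 16
nodes = chebyshev_nodes(n)
f = lambda x: np.abs(x)  # a merely continuous test function
approx = grunwald(f(nodes), nodes, nodes)
print(np.allclose(approx, f(nodes)))
```

Since ℓ_k(x_j) = δ_kj, the squared basis still interpolates exactly at the nodes, which the final check confirms; unlike Lagrange interpolation, the operator is positive, which is what drives its good average-case behavior.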

Theorem 1.1. *Let be defined as above. Then
where in the following means that there exists independent of such that , and the constant may differ at each occurrence, even within the same expression.*

By Hölder's inequality, combining Theorem 1.1 with [2], we know that for ,

*Remark 1.2.* Denote by the set of algebraic polynomials of degree . For , let denote the best -approximation polynomial of from . Then the -average error of the best -approximation of continuous functions by polynomials from over the Wiener space is given by
By Theorem 1.1 and [6], for and , we have

*Remark 1.3.* Let us recall some fundamental notions of information-based complexity in the average case setting. Let be a set with a probability measure , and let be a normed linear space with norm . Let be a measurable mapping from into , called a solution operator. Let be a measurable mapping from into , and let be a measurable mapping from into , called an information operator and an algorithm, respectively. For , the -average error of the approximation with respect to the measure is defined by
and the -average radius of information with respect to is defined by
where ranges over the set of all algorithms. Furthermore, let denote a class of permissible information functionals and denote the set of nonadaptive information operators from of cardinality , that is,
Let
denote the th minimal -average radius of nonadaptive information in the class .

For example, if and are defined as above, is the identity mapping , and consists of function evaluations at fixed points, then by [2] we know that for the -norm approximation, if , we have

It is easy to understand that can be viewed as a composition of a nonadaptive information operator from and a linear algorithm, and for ,

In comparison with the result of Theorem 1.1, we consider the following Grünwald interpolation. Let be the zeros of , the th Tchebycheff polynomial of the second kind. The Grünwald interpolation polynomial of based on is given by where
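The zeros of the Tchebycheff polynomial of the second kind have the standard closed form x_k = cos(kπ/(n + 1)), k = 1, …, n. A short numerical check via the three-term recurrence, assuming NumPy; the helper names are ours.

```python
import numpy as np

def chebyshev_U_zeros(n):
    """Zeros of U_n, the n-th Tchebycheff polynomial of the second kind:
    x_k = cos(k * pi / (n + 1)), k = 1, ..., n."""
    k = np.arange(1, n + 1)
    return np.cos(k * np.pi / (n + 1))

def U(n, x):
    """Evaluate U_n via the recurrence U_{j+1} = 2x U_j - U_{j-1},
    with U_0 = 1 and U_1 = 2x."""
    x = np.asarray(x, dtype=float)
    u_prev, u = np.ones_like(x), 2 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u

x = chebyshev_U_zeros(7)
print(np.allclose(U(7, x), 0.0, atol=1e-9))
```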

Theorem 1.4. *Let be defined as above. Then*

#### 2. The Proof of Theorem 1.1

We consider the upper estimate first. From [7, page 107, ] we obtain
where is the th absolute moment of the standard normal distribution. It is easy to verify
From (2.2) and Hölder's inequality we can obtain
By (1.3) we obtain
Let ; then it is easy to verify
By (2.4), (2.5), and a simple computation we can obtain
By (1.3), it is easy to verify that for ,
Let . From (2.7) and a simple computation we know that for , ,
From [8] we know , hence
From (1.8) it follows that
From (2.7) and (2.10) it follows that
From (2.10) it follows that
Let ; then we have
By we know that for , thus
It is easy to see that
By , (2.16), (2.17), and (2.18) we can obtain
From (2.12) and (2.16) we can obtain
Similarly,
From (2.3), (2.8), (2.11), (2.17), and (2.18) we can obtain
By (2.1), (2.3), (2.6), and (2.19) we can obtain the upper estimate.
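The first step above rests on the standard fact that, under the Wiener measure, an increment f(t) − f(s) is Gaussian with variance |t − s|, so its th absolute moment equals M_p |t − s|^{p/2}, where M_p is the th absolute moment of the standard normal distribution. A numerical sanity check, assuming NumPy; `abs_moment` is our helper, using the closed form M_p = 2^{p/2} Γ((p+1)/2) / √π.

```python
import math
import numpy as np

def abs_moment(p):
    """M_p = E|Z|^p for Z ~ N(0, 1):  2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

# Under the Wiener measure, f(t) - f(s) ~ N(0, |t - s|), hence
#   E |f(t) - f(s)|^p = M_p * |t - s|^(p / 2).
rng = np.random.default_rng(2)
s, t, p = 0.2, 0.9, 3
z = rng.normal(0.0, math.sqrt(t - s), size=200000)
emp = np.mean(np.abs(z) ** p)
exact = abs_moment(p) * (t - s) ** (p / 2)
print(abs(emp / exact - 1) < 0.05)
```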

Now we consider the lower estimate. For , we can obtain it directly from [2]. For , from (2.4) we know that
Let ; then from (2.5) we know that
Hence we can verify that for ,
From (2.20), (2.22), and a simple computation we can obtain
From (2.2), (2.3), and (2.19) it follows that
From (2.1), (2.2), (2.23), and (2.24) we can obtain the lower estimate for .

#### 3. The Proof of Theorem 1.4

Let be the quasi-Hermite–Fejér interpolation polynomial of degree based on the extended Tchebycheff nodes of the second kind (see [8]); then by a simple computation we obtain
Denote
By (3.2) and the uniqueness of the Hermite interpolation polynomial satisfying the given interpolation conditions, we obtain
By (3.5) and we know that
From [8] we know that for every ,
where is the modulus of continuity of on , defined for every , and is independent of and . By (3.7) and [6] we can obtain
By using and we obtain
By (1.3) we obtain
From (3.9) and (3.10) we obtain
Similarly to (3.11), we have
By (3.3) and we obtain
By [8] we know that
By (1.3), (3.13), (3.14), and , we obtain
By (3.6), (3.8), (3.11), (3.12), and (3.15) we can obtain the upper estimate of Theorem 1.4. On the other hand, by (3.5) we can verify that
From (3.16), (3.8), (3.11), (3.12), and (3.15) we can obtain the lower estimate of Theorem 1.4.