Research Article | Open Access

# Quantitative Estimates for Positive Linear Operators in terms of the Usual Second Modulus

**Academic Editor:** Milan Pokorny

#### Abstract

We give accurate estimates of the constants appearing in direct inequalities of the form , , , and where is a positive linear operator reproducing linear functions and acting on real functions defined on the interval , is a certain subset of such functions, is the usual second modulus of , and is an appropriate weight function. We show that the size of the constants mainly depends on the degree of smoothness of the functions in the set and on the distance from the point to the boundary of . We give a closed-form expression for the best constant when is a certain set of continuous piecewise linear functions. As illustrative examples, the Szász-Mirakyan operators and the Bernstein polynomials are discussed.

#### 1. Introduction

Let be a closed real interval with nonempty interior set . The usual second modulus of smoothness of a function is defined as where . Denote by the set of measurable functions such that , . Many sequences of positive linear operators acting on allow for a probabilistic representation of the form (cf. [1]) where stands for mathematical expectation and is an -valued random variable whose mean and standard deviation are given, respectively, by for some nonnegative function . The condition is equivalent to saying that reproduces linear functions.
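The usual second modulus referred to above is the classical quantity ω₂(f, δ) = sup_{0 < h ≤ δ} sup_{x−h, x+h ∈ I} |f(x+h) − 2f(x) + f(x−h)|. As a purely illustrative aid (our own sketch, not part of the paper; the function name and grid resolution are arbitrary choices), it can be approximated numerically on a finite grid:

```python
def second_modulus(f, a, b, delta, grid=200):
    """Grid approximation of the usual second modulus of smoothness
    omega_2(f, delta) = sup_{0 < h <= delta} sup_{x-h, x+h in [a, b]}
                        |f(x+h) - 2*f(x) + f(x-h)|."""
    best = 0.0
    for i in range(1, grid + 1):
        h = delta * i / grid                 # step 0 < h <= delta
        if 2 * h > b - a:                    # both x - h and x + h must lie in [a, b]
            break
        for j in range(grid + 1):
            x = (a + h) + j * (b - a - 2 * h) / grid
            best = max(best, abs(f(x + h) - 2 * f(x) + f(x - h)))
    return best
```

For f(x) = x² the second central difference equals 2h², so the approximation returns 2δ² up to rounding; for an affine f it returns 0, reflecting that ω₂ vanishes on linear functions.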

It is well known (see, for instance, [2–6] and the references therein) that such operators satisfy pointwise inequalities of the form which measure the rate of convergence of to according to the degree of smoothness of . In (5), is a positive constant depending only upon and . It is also interesting to consider in (5) the uniform constant

Several authors have obtained estimates of this uniform constant. For instance, Adell and Sangüesa [7] gave for the Weierstrass operator. Păltănea [5, Corollary 4.1.2, pp. 93-94] obtained for the Bernstein polynomials, and Gonska and Păltănea [8] showed that for a certain class of Bernstein-Durrmeyer operators. More generally, in Păltănea’s book [5, Corollary 2.2.1, p. 31] it is shown that for a large class of positive linear operators reproducing linear functions.

The aim of this paper is to give a general method to provide accurate estimates of the constants satisfying the inequalities where is a certain subset of . Such a problem is meaningful, because in specific examples the estimates of the constants in (6) and (7) may be quite different, mainly depending on two facts: the degree of smoothness of the functions in the set and the distance from the point to the boundary of . In this way, we complete the general results shown by Păltănea [5].

The method is based on the approximation of any function by a quasi-interpolating piecewise linear function having an appropriate set of nodes. In doing this, special attention must be paid to the nodes near the endpoints of , if any. The main results are Theorems 6 and 7, stated in Section 3. In particular, Theorem 6 provides inequalities of the form (7), where the upper bound consists of various terms involving evaluated at different lengths. Theorem 7 gives a closed-form expression for the best constant in (7) when is a certain set of continuous piecewise linear functions.

As illustrative examples, we consider the Szász-Mirakyan operator (Section 4) and the Bernstein polynomials (Section 5). Although the kind of estimates is similar in both examples, the results take on a simpler form in the first case, because the interval of definition has only one endpoint. In any case, both examples show that the size of the constants in front of heavily depends on the set of functions under consideration and on the distance from the point to the boundary of .

We believe that the methods proposed in this paper could be applied to a wide class of positive linear operators, such as Baskakov operators, Stancu operators, and their -analogues, among others (see [9, 10] and the references therein). To obtain accurate estimates of the constants involved in each case, we essentially need to compute second moments (see Theorem 8 in Section 3) and tail probabilities of the underlying random variables defining the operators under consideration (see Lemmas 9 and 11 in Sections 4 and 5, resp.).

#### 2. Continuous Piecewise Linear Functions

Throughout this paper, is a closed real interval of positive length and is the interior set of . If , we denote by a finite ordered set of nodes , for some . If is an infinite interval, could also be infinite. In such a case, the finite endpoint of , if any, is always in . We denote by the set of continuous piecewise linear functions whose set of nodes is . Unless otherwise specified, we assume from now on that . Given a sequence , we denote by , . We set , and denote by the indicator function of the set .
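Since the rest of the paper manipulates continuous piecewise linear functions through their node sets, a minimal computational sketch may help fix ideas. The representation below is our own illustration (the names `nodes` and `values`, and the constant extension outside the node range, are our choices): it evaluates the unique continuous piecewise linear function interpolating prescribed values at an ordered node set.

```python
import bisect

def piecewise_linear(nodes, values):
    """Continuous piecewise linear function with ordered nodes t_0 < ... < t_m
    and prescribed node values; extended by its boundary values outside."""
    def g(x):
        if x <= nodes[0]:
            return values[0]
        if x >= nodes[-1]:
            return values[-1]
        k = bisect.bisect_right(nodes, x) - 1           # t_k <= x < t_{k+1}
        w = (x - nodes[k]) / (nodes[k + 1] - nodes[k])  # barycentric weight in [0, 1)
        return (1 - w) * values[k] + w * values[k + 1]
    return g
```

On each interval between consecutive nodes the function is the linear interpolant of its endpoint values, which matches the class of continuous piecewise linear functions discussed in this section.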

Lemma 1. *For any , one has the representations where*

*Proof. *The first equality in (8) follows from the fact that the two functions involved have the same Radon-Nikodym derivative in , , given by the constant defined in (9). The second equality in (8) follows from the first one and the equalities. The proof is complete.

The following auxiliary result is taken from [5, Lemma 2.5.7] (see also [11]). We give a simple proof of it for the sake of completeness.

Lemma 2. *Let be a function such that , for some with . Then, *

*Proof. *Assume that , the case being similar. Set . Then, The proof is complete.

For any , denote by the set of functions in whose set of nodes satisfies

Lemma 3. *Let , for some . Then, *

*Proof. *Let and . Denote by . We claim that . Formula (15) is obvious if ; suppose that . If , then whereas if , then thus showing claim (15). By virtue of (10), formula (15) is also true if we replace by any one of the functions or , . We therefore have from (8) and (15). Since for , as follows from (13), we have from (18). Similarly, and, for thus showing that . By assumption (13), . We thus have from (18). This shows the converse inequality to (22) and completes the proof.

*Remark 4. *If assumption (13) is dropped, Lemma 3 is no longer true. To see this, consider the function , , where . Then, . Actually, let . If , we have from (15) whereas if , we have thus showing (24).

We close this section with the following auxiliary result concerning the symmetric functions . For any , let and be the floor and the ceiling of , respectively; that is,

Lemma 5. *Let and be as in (27). Then, *

*Proof. *Let . Then, . Thanks to (30), the second inequality in Lemma 5 is equivalent to (31). It is easily checked that . These equalities imply (31), since is convex and is linear in each interval , . The proof is complete.

#### 3. Main Results

Denote by the set of convex functions in . Given and , we consider the set . If , the preceding set should be defined as , and analogously if or . Observe that and . We define the function , where . Note that and its set of nodes is . If , then and therefore . The same is true if . Since , we see that and therefore has at least three nodes. From (35), (36), and Lemma 3, we have

Finally, let be a random variable taking values in such that . Since , we have from (10)

With these notations, we state our first main result.

Theorem 6. *Let and . Then one has the following. *(a)*If , then *(b)*If , then *

*Proof. *Fix and . Let be the function having representation (8), whose set of nodes is , as defined in (37), and satisfying the following properties: (a) , , ; (b) if , then is linear in and ; if , then is linear in (in such a case, it could happen that ); (c) if , then is linear in and ; if , then is linear in (in such a case, it could happen that ).

These properties, together with (8) and (39), allow us to write where is defined in (36) and . We therefore have from (35) and (43). On the other hand, applying Lemma 2 to the function , we have . If , we set and obtain, thanks to (46), . In the same way, . Thus, we have from (46)–(48). This, together with (45), shows part (a).

Suppose that . By subtracting an affine function, if necessary, we can assume without loss of generality that , . The convexity of implies that . We therefore have from (45), (47), and (48). The proof is complete.

Let and . Denote by the set of functions having a node at and not being linear in (in other words, , ). It turns out that the function defined in (35) is a maximal function in , as shown in the following result.

Theorem 7. *Let and . Then, *

*Proof. *Let with representation (8) and set of nodes , for some . From (39) and Lemma 3, we have . Let be as in (35) with set of nodes as defined in (37). By assumption, , . Therefore, . This implies, by virtue of (35) and (53), that . This, in conjunction with (38), completes the proof.

In order to apply Theorems 6 and 7 to concrete examples, we need to estimate the expectation and the tail probabilities of the random variable under consideration. With regard to the first question, we give the following.

Theorem 8. *Let . Then one has the following. *(a)*If , then *(b)*If , then *

*Proof. *Suppose that . Using definitions (33)–(36) and Lemma 5, we have Part (a) follows by replacing by in the preceding inequality and then taking expectations. Part (b) follows in a similar manner, by noting that if , then as follows from Lemma 5. This completes the proof.

Theorem 8 gives an upper bound for in terms of the variance of the random variable , which is easy to compute in many usual examples. Such an upper bound also suggests the choice

#### 4. Example 1: The Szász Operator

Let be the standard Poisson process, that is, a stochastic process starting at the origin, having independent stationary increments such that . Let and . Thanks to (61), the classical Szász-Mirakyan operator can be written as where . It is well known that . Concerning the tail probabilities of the standard Poisson process, we give the following lemma.
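To make the probabilistic representation concrete: in its classical form the Szász-Mirakyan operator is S_t f(x) = e^{−tx} Σ_{k≥0} f(k/t)(tx)^k/k!, that is, E[f(N_{tx}/t)] for a standard Poisson process N. The sketch below is our own illustration (the truncation length `terms` is an arbitrary numerical choice) and evaluates a truncated version of this series.

```python
import math

def szasz(f, t, x, terms=300):
    """Truncated Szász-Mirakyan operator
        S_t f(x) = exp(-t*x) * sum_{k>=0} f(k/t) * (t*x)**k / k!,
    i.e. E[f(N_{t x}/t)] with N_{t x} Poisson of mean t*x."""
    lam = t * x
    total, weight = 0.0, math.exp(-lam)   # weight = P(N = 0)
    for k in range(terms):
        total += f(k / t) * weight
        weight *= lam / (k + 1)           # recursion P(N = k+1) = P(N = k) * lam / (k+1)
    return total
```

Since the operator reproduces linear functions, `szasz(lambda u: 2*u + 3, 10, 1.5)` returns (up to truncation error) 2·1.5 + 3 = 6, while for f(u) = u² one gets x² + x/t, the usual Poisson second-moment identity.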

Lemma 9. *Let be as in (61). Then, *(a)*one has *(b)*for any , one has , with being strictly increasing in .*

*Proof. *(a) Suppose that . Denote by the solution to the equation . If , we obviously have from (61) whereas if , we have . Suppose that . We have from (63) and Markov’s inequality. (b) Let . Again by Markov’s inequality, we have . It suffices to choose in the preceding inequality. The proof is complete.
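The exact bounds of Lemma 9 are not reproduced here, but the Markov-inequality technique used in its proof can be illustrated with the standard Chernoff bound for the Poisson law: for a > t, P(N_t ≥ a) ≤ e^{a−t}(t/a)^a, obtained by minimizing e^{−θa} E[e^{θN_t}] over θ > 0. The sketch below (our illustration, not the lemma's own bound) compares this with the exact tail.

```python
import math

def poisson_tail(t, a):
    """Exact P(N_t >= a) for N_t Poisson of mean t (a a positive integer)."""
    prob, cdf = math.exp(-t), 0.0
    for k in range(a):                 # accumulate P(N_t < a) = sum_{k < a} P(N_t = k)
        cdf += prob
        prob *= t / (k + 1)
    return 1.0 - cdf

def chernoff_bound(t, a):
    """Chernoff/Markov bound P(N_t >= a) <= exp(a - t) * (t/a)**a, valid for a > t."""
    return math.exp(a - t) * (t / a) ** a
```

For t = 5 and a = 10 the exact tail is about 0.032 while the bound gives about 0.145; the bound is crude pointwise but decays at the right exponential rate as a grows, which is the behavior exploited in Section 4.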

Theorem 10. *Let be as in (62), , and let . Then, * (a) *if , then * (b) *if , then * (c) *if , then * *where is defined in (65).*

*Proof. *For any and , denote by and . In view of (62), we will apply Theorems 6 and 8 with .

(a) If , then and , as follows from (33). Thus, we have from Theorem 8(b) and (63), as well as . Therefore, the conclusion follows from Theorem 6(a).

(b) If , we see that . By Theorem 8(a) and (63), we have . If , then , as follows from (33). Thus, . If , then , again by (33). We therefore have from Lemma 9(a) . Similarly, if , then . Again by Lemma 9(a), we have . In any of the previous cases, we always have and therefore . In view of the preceding discussion, part (b) follows from Theorem 6(a).

(c) If , we have, as in (75), . As in part (b), and inequality (79) holds. Also, we have from Lemma 9(b) . By Theorem 6(a), this shows part (c) and completes the proof.

Theorem 10 could also be stated for functions using Theorem 6(b) instead of Theorem 6(a). In such a case, we obtain better estimates. For instance, if , we get . Observe that, for fixed , the constant decreases exponentially to zero as , as follows from (65).

#### 5. Example 2: Bernstein Polynomials

Let and let be a sequence of independent identically distributed random variables having the uniform distribution on . We consider the (uniform) empirical process defined as . Observe that the random variable has the binomial law with parameters and ; that is, . Also observe that the paths of the empirical process are nondecreasing, since we have from (83) . It is well known that . For any function , the Bernstein polynomials of can be written as . In view of (86), we define . The following auxiliary result will be needed.
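The probabilistic reading of the Bernstein polynomials is the classical identity B_n f(x) = Σ_{k=0}^n f(k/n) C(n,k) x^k (1−x)^{n−k} = E[f(S_n/n)], with S_n binomial with parameters n and x. A direct sketch (our own illustration; `math.comb` requires Python 3.8+):

```python
import math

def bernstein(f, n, x):
    """n-th Bernstein polynomial of f at x in [0, 1]:
        B_n f(x) = sum_{k=0}^n f(k/n) * C(n, k) * x**k * (1-x)**(n-k),
    i.e. E[f(S_n/n)] with S_n binomial(n, x)."""
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))
```

Because B_n reproduces linear functions, `bernstein(lambda u: 4*u - 1, 20, 0.3)` equals 4·0.3 − 1 = 0.2 up to rounding, while for f(u) = u² one gets x² + x(1−x)/n, the binomial second-moment identity underlying the variance computations of this section.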

Lemma 11. *Let and . Let and be as in (65) and (88), respectively. Then,*

*Proof. *Let . By Markov’s inequality and (84), we have where we have used the inequality . Inequality (89) follows by choosing in (91). On the other hand, the random variables and have the same law, as follows from (84). We therefore have from (88) and (89) . Again by Markov’s inequality, (86), and (88), we get . This, together with (93), shows (90) and completes the proof.

Denote by . Numerical computations show that , for .

Theorem 12. *Let and . Let , , and be as in (65), (90), and (95), respectively. For any , one has the following. *(a)*If , then *(b)*If , then *(c)*If , then *

*Proof. *In view of (87), we will apply Theorems 6 and 8 with and , as defined in (88). In the first place, we have from (90)