Abstract

Let $H$ be a real Hilbert space, let $S$, $T$ be two nonexpansive mappings such that $F(S)\cap F(T)\neq\emptyset$, let $f$ be a contractive mapping, and let $A$ be a strongly positive linear bounded operator on $H$. In this paper, we suggest and analyze the strong convergence of a new two-step iterative algorithm for approximating common fixed points of the two nonexpansive mappings: $x_{n+1}=\beta_n x_n+(1-\beta_n)Sy_n$, $y_n=\alpha_n\gamma f(x_n)+(I-\alpha_n A)Tx_n$, $n\ge 0$, where $\gamma>0$ is a real number and $\{\alpha_n\}$, $\{\beta_n\}$ are two sequences in $(0,1)$ satisfying the control conditions (C1) $\lim_{n\to\infty}\alpha_n=0$ and (C3) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$; under these conditions, $\|x_{n+1}-x_n\|\to 0$. We also discuss several special cases of this iterative algorithm.

1. Introduction

Let $H$ be a real Hilbert space. Recall that a mapping $f:H\to H$ is a contractive mapping on $H$ if there exists a constant $\alpha\in(0,1)$ such that

$$\|f(x)-f(y)\|\le\alpha\|x-y\|,\qquad\forall x,y\in H.\qquad(1.1)$$
We denote by $\Pi$ the collection of all contractive mappings on $H$, that is,

$$\Pi=\{f:H\to H \mid f \text{ is a contractive mapping}\}.\qquad(1.2)$$

Let $T:H\to H$ be a nonexpansive mapping, namely,

$$\|Tx-Ty\|\le\|x-y\|,\qquad\forall x,y\in H.\qquad(1.3)$$
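For intuition, the metric projection onto a closed ball is a standard example of a nonexpansive mapping, while $x\mapsto\tfrac12 x+b$ is a contraction with constant $\tfrac12$. The following minimal NumPy sketch (an illustration only; the radius, the shift $b$, and the sampled points are arbitrary choices, not taken from this paper) checks the defining inequalities (1.1) and (1.3) on random points:

    import numpy as np

    rng = np.random.default_rng(0)
    b = np.array([1.0, -2.0])

    def proj_ball(x, radius=1.0):
        # Metric projection onto the closed ball of the given radius (nonexpansive).
        nx = np.linalg.norm(x)
        return x if nx <= radius else radius * x / nx

    def f(x):
        # A contraction with constant alpha = 1/2.
        return 0.5 * x + b

    for _ in range(100):
        x, y = rng.normal(size=2), rng.normal(size=2)
        assert np.linalg.norm(proj_ball(x) - proj_ball(y)) <= np.linalg.norm(x - y) + 1e-12
        assert np.linalg.norm(f(x) - f(y)) <= 0.5 * np.linalg.norm(x - y) + 1e-12
    print("inequalities (1.1) and (1.3) verified on the sampled points")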

Iterative algorithms for nonexpansive mappings have recently been applied to solve convex minimization problems (see [1–4] and the references therein).

A typical problem is to minimize a quadratic function over the closed convex set of fixed points of a nonexpansive mapping $T$ on a real Hilbert space $H$:

$$\min_{x\in C}\ \frac12\langle Ax,x\rangle-\langle x,b\rangle,\qquad(1.4)$$
where $C$ is the closed convex set of fixed points of a nonexpansive mapping $T$ on $H$, $b$ is a given point in $H$, and $A$ is a linear, symmetric, and positive operator.

In [5] (see also [6]), the author proved that the sequence $\{x_n\}$ defined by the iterative method below, with the initial point $x_0\in H$ chosen arbitrarily,

$$x_{n+1}=(I-\alpha_n A)Tx_n+\alpha_n b,\qquad n\ge 0,\qquad(1.5)$$
converges strongly to the unique solution of the minimization problem (1.4), provided the sequence $\{\alpha_n\}$ satisfies certain control conditions.
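To make (1.4)–(1.5) concrete, the sketch below runs iteration (1.5) on a small example (all data are toy choices for illustration and are not taken from [5] or [6]): $A$ is a $2\times 2$ symmetric positive definite matrix, $T$ is the projection onto the closed unit ball, so that $C=F(T)$ is that ball, and $\alpha_n=1/(n+4)$. The limit is checked against the first-order optimality condition of (1.4), namely $\langle A\bar{x}-b,\,c-\bar{x}\rangle\ge 0$ for all $c\in C$:

    import numpy as np

    rng = np.random.default_rng(1)
    A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric, positive definite (toy choice)
    b = np.array([3.0, -1.0])

    def T(x):
        # Projection onto the closed unit ball; C = F(T) is the ball.
        nx = np.linalg.norm(x)
        return x if nx <= 1.0 else x / nx

    x = np.zeros(2)
    for n in range(200000):
        alpha = 1.0 / (n + 4)                           # alpha_n -> 0, sum of alpha_n diverges
        x = (np.eye(2) - alpha * A) @ T(x) + alpha * b  # iteration (1.5)

    # Check the optimality condition of (1.4) on random points of C.
    worst = min(np.dot(A @ x - b, T(rng.normal(size=2)) - x) for _ in range(1000))
    print(x, worst)   # `worst` should be close to zero or positive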

On the other hand, Moudafi [3] introduced the viscosity approximation method for nonexpansive mappings (see also [7] for further developments in both Hilbert and Banach spaces). Let $f$ be a contractive mapping on $H$. Starting with an arbitrary initial point $x_0\in H$, define a sequence $\{x_n\}$ in $H$ recursively by

$$x_{n+1}=(1-\alpha_n)Tx_n+\alpha_n f(x_n),\qquad n\ge 0,\qquad(1.6)$$
where $\{\alpha_n\}$ is a sequence in $(0,1)$ satisfying some suitable control conditions.

Recently, Marino and Xu [8] combined the iterative algorithm (1.5) with the viscosity approximation algorithm (1.6), considering the following general iterative algorithm:

$$x_{n+1}=(I-\alpha_n A)Tx_n+\alpha_n\gamma f(x_n),\qquad n\ge 0,\qquad(1.7)$$
where $0<\gamma<\bar{\gamma}/\alpha$; here $\bar{\gamma}>0$ denotes the coefficient of the strongly positive operator $A$ and $\alpha$ the contraction constant of $f$.
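It may help to note how (1.7) contains the two earlier schemes: taking $A=I$ and $\gamma=1$ in (1.7) gives $x_{n+1}=(1-\alpha_n)Tx_n+\alpha_n f(x_n)$, which is the viscosity scheme (1.6), while taking $\gamma=1$ and $f\equiv b$ (a constant mapping, which satisfies (1.1) for every $\alpha\in(0,1)$) gives $x_{n+1}=(I-\alpha_n A)Tx_n+\alpha_n b$, which is scheme (1.5).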

In this paper, we suggest a new iterative method for finding common fixed points of a pair of nonexpansive mappings. As applications and special cases, we also obtain some new iterative algorithms, which can be viewed as improvements of the algorithms of Xu [7] and Marino and Xu [8]. Moreover, we show that the convergence of the proposed algorithms can be proved under weaker conditions on the parameter sequence $\{\alpha_n\}$. In this respect, our results can be considered as an improvement of many known results.

2. Preliminaries

In the sequel, we will make use of the following lemmas for our main results.

Lemma 2.1 (see [4]). Let $\{s_n\}$ be a sequence of nonnegative numbers satisfying the condition
$$s_{n+1}\le(1-\alpha_n)s_n+\alpha_n\beta_n,\qquad n\ge 0,\qquad(2.1)$$
where $\{\alpha_n\}$, $\{\beta_n\}$ are sequences of real numbers such that (i) $\{\alpha_n\}\subset[0,1]$ and $\sum_{n=0}^{\infty}\alpha_n=\infty$; (ii) $\limsup_{n\to\infty}\beta_n\le 0$ or $\sum_{n=0}^{\infty}\alpha_n\beta_n$ is convergent. Then $\lim_{n\to\infty}s_n=0$.
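As a quick numerical sanity check of Lemma 2.1 (a toy simulation; the choices $\alpha_n=\beta_n=1/n$ are arbitrary but satisfy (i) and (ii)), the recursion (2.1), taken with equality, indeed drives $s_n$ to $0$:

    s = 1.0
    for n in range(1, 200000):
        alpha, beta = 1.0 / n, 1.0 / n       # sum of alpha_n diverges, beta_n -> 0
        s = (1 - alpha) * s + alpha * beta   # recursion (2.1), taken with equality
    print(s)   # on the order of 1e-4, consistent with s_n -> 0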

Lemma 2.2 (see [9, 10]). Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with
$$0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1.\qquad(2.2)$$
Suppose that $x_{n+1}=(1-\beta_n)y_n+\beta_n x_n$ for all $n\ge 0$ and $\limsup_{n\to\infty}(\|y_{n+1}-y_n\|-\|x_{n+1}-x_n\|)\le 0$. Then $\lim_{n\to\infty}\|y_n-x_n\|=0$.

Lemma 2.3 (demiclosedness principle, see [2]). Assume that $T$ is a nonexpansive self-mapping of a closed convex subset $C$ of a Hilbert space $H$. If $T$ has a fixed point, then $I-T$ is demiclosed; that is, whenever $\{x_n\}$ is a sequence in $C$ converging weakly to some $x\in C$ and the sequence $\{(I-T)x_n\}$ converges strongly to some $y$, it follows that $(I-T)x=y$, where $I$ is the identity operator of $H$.

Lemma 2.4 (see [8]). Let $\{x_t\}$ be generated by the implicit algorithm $x_t=t\gamma f(x_t)+(I-tA)Tx_t$. Then $\{x_t\}$ converges strongly as $t\to 0$ to a fixed point $\tilde{x}$ of $T$ which solves the variational inequality
$$\langle(A-\gamma f)\tilde{x},\tilde{x}-x\rangle\le 0,\qquad\forall x\in F(T).\qquad(2.3)$$
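The net $\{x_t\}$ in Lemma 2.4 is defined implicitly, but for each fixed small $t$ the point $x_t$ can be computed by an inner fixed-point iteration, since the map $x\mapsto t\gamma f(x)+(I-tA)Tx$ has Lipschitz constant at most $1-t(\bar{\gamma}-\gamma\alpha)<1$ whenever $0<\gamma<\bar{\gamma}/\alpha$ and $t\|A\|\le 1$. The sketch below (toy data chosen only for illustration: a $2\times 2$ strongly positive $A$, a contraction $f$ with constant $\tfrac12$, $T$ the projection onto the unit ball, and $\gamma=1$) computes $x_t$ for a few values of $t$; the printed points stabilize as $t\to 0$, as the lemma predicts:

    import numpy as np

    A = np.array([[2.0, 0.5], [0.5, 1.0]])       # strongly positive (toy choice)
    gamma = 1.0

    def f(x):
        return 0.5 * x + np.array([1.0, -2.0])   # contraction with constant 1/2

    def T(x):
        nx = np.linalg.norm(x)
        return x if nx <= 1.0 else x / nx        # projection onto the closed unit ball

    def x_t(t, iters=5000):
        # Solve x = t*gamma*f(x) + (I - t*A) T x by simple fixed-point iteration.
        x = np.zeros(2)
        for _ in range(iters):
            x = t * gamma * f(x) + (np.eye(2) - t * A) @ T(x)
        return x

    for t in [0.1, 0.01, 0.001]:
        print(t, x_t(t))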

Lemma 2.5 (see [8]). Assume that $A$ is a strongly positive linear bounded operator on a Hilbert space $H$ with coefficient $\bar{\gamma}>0$ and $0<\rho\le\|A\|^{-1}$. Then $\|I-\rho A\|\le 1-\rho\bar{\gamma}$.

3. Main Results

Let $H$ be a real Hilbert space, let $A$ be a bounded linear operator on $H$, and let $S$, $T$ be two nonexpansive mappings on $H$ such that $F(S)\cap F(T)\neq\emptyset$. Throughout the rest of this paper, we always assume that $A$ is strongly positive.

Now, let $f\in\Pi$ with contraction coefficient $0<\alpha<1$ and let $A$ be a strongly positive linear bounded operator with coefficient $\bar{\gamma}>0$ satisfying $0<\gamma<\bar{\gamma}/\alpha$. We consider the following modified iterative algorithm:

$$x_{n+1}=\beta_n x_n+(1-\beta_n)Sy_n,\qquad y_n=\alpha_n\gamma f(x_n)+(I-\alpha_n A)Tx_n,\qquad n\ge 0,\qquad(3.1)$$
where $\gamma>0$ is a real number and $\{\alpha_n\}$, $\{\beta_n\}$ are two sequences in $(0,1)$.
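As a numerical illustration of (3.1), consider the following toy instance (none of these choices come from the paper; they merely satisfy the standing assumptions): $S$ and $T$ are metric projections onto two intersecting closed balls, so $F(S)\cap F(T)\neq\emptyset$; $f(x)=\tfrac12 x$ is a contraction with $\alpha=\tfrac12$; $A=I$, so $\bar{\gamma}=1$ and $\gamma=1<\bar{\gamma}/\alpha$; and $\alpha_n=1/(n+2)$, $\beta_n=\tfrac12$, so (C1) and (C3) below hold. The sketch reports the final iterate and the last step length; the step length is small, in line with Lemma 3.1 below, and in this example the iterates also settle near $F(S)\cap F(T)$:

    import numpy as np

    def proj_ball(center, radius):
        def P(x):
            d = x - center
            nd = np.linalg.norm(d)
            return x if nd <= radius else center + radius * d / nd
        return P

    S = proj_ball(np.array([0.0, 0.0]), 1.5)    # F(S): ball of radius 1.5 about the origin
    T = proj_ball(np.array([2.0, 0.0]), 1.0)    # F(T): ball of radius 1 about (2, 0)
    f = lambda x: 0.5 * x                        # contraction with alpha = 1/2
    gamma, A = 1.0, np.eye(2)                    # A = I, so gamma_bar = 1 and gamma < gamma_bar/alpha

    x = np.array([5.0, 4.0])
    for n in range(5000):
        alpha_n, beta_n = 1.0 / (n + 2), 0.5     # (C1) and (C3) hold
        y = alpha_n * gamma * f(x) + (np.eye(2) - alpha_n * A) @ T(x)   # second equation of (3.1)
        x_next = beta_n * x + (1 - beta_n) * S(y)                        # first equation of (3.1)
        step, x = np.linalg.norm(x_next - x), x_next

    print(x, step)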

First, we prove a useful result concerning the iterative algorithm (3.1).

Lemma 3.1. Let $\{x_n\}$ be a sequence in $H$ generated by the algorithm (3.1) with the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfying the following control conditions: (C1) $\lim_{n\to\infty}\alpha_n=0$; (C3) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Then $\|x_{n+1}-x_n\|\to 0$.

Proof. From the control condition (C1), without loss of generality, we may assume that $\alpha_n\le\|A\|^{-1}$ for all $n\ge 0$. First observe that $\|I-\alpha_n A\|\le 1-\alpha_n\bar{\gamma}$ by Lemma 2.5.
Now we show that $\{x_n\}$ is bounded. Indeed, for any $p\in F(S)\cap F(T)$,
$$\begin{aligned}
\|y_n-p\| &= \|\alpha_n(\gamma f(x_n)-Ap)+(I-\alpha_n A)(Tx_n-p)\| \\
&\le \alpha_n\|\gamma f(x_n)-\gamma f(p)\|+\alpha_n\|\gamma f(p)-Ap\|+(1-\alpha_n\bar{\gamma})\|Tx_n-p\| \\
&\le \alpha_n\gamma\alpha\|x_n-p\|+\alpha_n\|\gamma f(p)-Ap\|+(1-\alpha_n\bar{\gamma})\|x_n-p\| \\
&= \bigl(1-(\bar{\gamma}-\gamma\alpha)\alpha_n\bigr)\|x_n-p\|+\alpha_n\|\gamma f(p)-Ap\|.
\end{aligned}\qquad(3.2)$$
At the same time,
$$\begin{aligned}
\|x_{n+1}-p\| &= \|\beta_n(x_n-p)+(1-\beta_n)(Sy_n-p)\| \\
&\le \beta_n\|x_n-p\|+(1-\beta_n)\|Sy_n-p\| \\
&\le \beta_n\|x_n-p\|+(1-\beta_n)\|y_n-p\|.
\end{aligned}\qquad(3.3)$$
It follows from (3.2) and (3.3) that
$$\begin{aligned}
\|x_{n+1}-p\| &\le \beta_n\|x_n-p\|+(1-\beta_n)\Bigl[\bigl(1-(\bar{\gamma}-\gamma\alpha)\alpha_n\bigr)\|x_n-p\|+\alpha_n\|\gamma f(p)-Ap\|\Bigr] \\
&= \bigl(1-(\bar{\gamma}-\gamma\alpha)\alpha_n(1-\beta_n)\bigr)\|x_n-p\|+(\bar{\gamma}-\gamma\alpha)\alpha_n(1-\beta_n)\frac{\|\gamma f(p)-Ap\|}{\bar{\gamma}-\gamma\alpha},
\end{aligned}\qquad(3.4)$$
which implies that
$$\|x_n-p\|\le\max\Bigl\{\|x_0-p\|,\ \frac{\|\gamma f(p)-Ap\|}{\bar{\gamma}-\gamma\alpha}\Bigr\},\qquad\forall n\ge 0.\qquad(3.5)$$
Hence $\{x_n\}$ is bounded, and so are $\{ATx_n\}$ and $\{f(x_n)\}$.
From (3.1), we observe that
$$\begin{aligned}
y_{n+1}-y_n &= \alpha_{n+1}\gamma f(x_{n+1})+(I-\alpha_{n+1}A)Tx_{n+1}-\alpha_n\gamma f(x_n)-(I-\alpha_n A)Tx_n \\
&= \alpha_{n+1}\gamma\bigl(f(x_{n+1})-f(x_n)\bigr)+(\alpha_{n+1}-\alpha_n)\gamma f(x_n) \\
&\quad+(I-\alpha_{n+1}A)(Tx_{n+1}-Tx_n)+(\alpha_n-\alpha_{n+1})ATx_n,
\end{aligned}$$
so that
$$\begin{aligned}
\|y_{n+1}-y_n\| &\le \alpha_{n+1}\gamma\|f(x_{n+1})-f(x_n)\|+(1-\alpha_{n+1}\bar{\gamma})\|Tx_{n+1}-Tx_n\| \\
&\quad+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|ATx_n\|\bigr) \\
&\le \alpha_{n+1}\gamma\alpha\|x_{n+1}-x_n\|+(1-\alpha_{n+1}\bar{\gamma})\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|ATx_n\|\bigr) \\
&= \bigl(1-(\bar{\gamma}-\gamma\alpha)\alpha_{n+1}\bigr)\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|ATx_n\|\bigr).
\end{aligned}\qquad(3.6)$$
It follows that
$$\begin{aligned}
\|Sy_{n+1}-Sy_n\|-\|x_{n+1}-x_n\| &\le \|y_{n+1}-y_n\|-\|x_{n+1}-x_n\| \\
&\le -(\bar{\gamma}-\gamma\alpha)\alpha_{n+1}\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|ATx_n\|\bigr),
\end{aligned}\qquad(3.7)$$
which implies, from (C1) and the boundedness of $\{x_n\}$, $\{f(x_n)\}$, and $\{ATx_n\}$, that
$$\limsup_{n\to\infty}\bigl(\|Sy_{n+1}-Sy_n\|-\|x_{n+1}-x_n\|\bigr)\le 0.\qquad(3.8)$$
Hence, by Lemma 2.2, we have
$$\|Sy_n-x_n\|\to 0\quad\text{as } n\to\infty.\qquad(3.9)$$
Consequently, it follows from (3.1) that
$$\lim_{n\to\infty}\|x_{n+1}-x_n\|=\lim_{n\to\infty}(1-\beta_n)\|Sy_n-x_n\|=0.\qquad(3.10)$$
This completes the proof.

Remark 3.2. The conclusion $\|x_{n+1}-x_n\|\to 0$ plays an important role in proving the strong convergence of iterative algorithms of this type, which have been studied extensively by many authors; see, for example, [3, 6, 7].

If we take $S=I$ in (3.1), we have the following iterative algorithm:

$$x_{n+1}=\beta_n x_n+(1-\beta_n)y_n,\qquad y_n=\alpha_n\gamma f(x_n)+(I-\alpha_n A)Tx_n,\qquad n\ge 0.\qquad(3.11)$$
Now we state and prove the strong convergence of the iterative scheme (3.11).

Theorem 3.3. Let $\{x_n\}$ be a sequence in $H$ generated by the algorithm (3.11) with the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfying the following control conditions: (C1) $\lim_{n\to\infty}\alpha_n=0$; (C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$; (C3) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Then $\{x_n\}$ converges strongly to a fixed point $\tilde{x}$ of $T$ which solves the variational inequality
$$\langle(A-\gamma f)\tilde{x},\tilde{x}-x\rangle\le 0,\qquad\forall x\in F(T).\qquad(3.12)$$

Proof. From Lemma 3.1, we have
$$\|x_{n+1}-x_n\|\to 0.\qquad(3.13)$$
On the other hand, we have
$$\begin{aligned}
\|x_n-Tx_n\| &\le \|x_{n+1}-x_n\|+\|x_{n+1}-Tx_n\| \\
&= \|x_{n+1}-x_n\|+\|\beta_n(x_n-Tx_n)+(1-\beta_n)(y_n-Tx_n)\| \\
&\le \|x_{n+1}-x_n\|+\beta_n\|x_n-Tx_n\|+(1-\beta_n)\|y_n-Tx_n\| \\
&\le \|x_{n+1}-x_n\|+\beta_n\|x_n-Tx_n\|+(1-\beta_n)\alpha_n\bigl(\gamma\|f(x_n)\|+\|ATx_n\|\bigr),
\end{aligned}\qquad(3.14)$$
that is,
$$\|x_n-Tx_n\|\le\frac{1}{1-\beta_n}\|x_{n+1}-x_n\|+\alpha_n\bigl(\gamma\|f(x_n)\|+\|ATx_n\|\bigr).\qquad(3.15)$$
This, together with (C1), (C3), and (3.13), gives
$$\lim_{n\to\infty}\|x_n-Tx_n\|=0.\qquad(3.16)$$
Next, we show that, for the fixed point $\tilde{x}$ of $T$ obtained in Lemma 2.4,
$$\limsup_{n\to\infty}\langle y_n-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle\le 0.\qquad(3.17)$$
In fact, we take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\langle x_n-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle=\lim_{k\to\infty}\langle x_{n_k}-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle.\qquad(3.18)$$
Since $\{x_n\}$ is bounded, we may assume that $x_{n_k}\rightharpoonup z$, where "$\rightharpoonup$" denotes weak convergence. Note that $z\in F(T)$ by virtue of Lemma 2.3 and (3.16). It follows from the variational inequality (2.3) in Lemma 2.4 that
$$\limsup_{n\to\infty}\langle x_n-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle=\langle z-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle\le 0.\qquad(3.19)$$
By Lemma 3.1 (noting that $S=I$ here, so (3.9) reads $\|y_n-x_n\|\to 0$), we have
$$\|y_n-x_n\|\to 0.\qquad(3.20)$$
Hence, we get
$$\limsup_{n\to\infty}\langle y_n-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle\le 0.\qquad(3.21)$$
Finally, we prove that $\{x_n\}$ converges to the point $\tilde{x}$. In fact, from (3.2) we have
$$\|y_n-\tilde{x}\|\le\|x_n-\tilde{x}\|+\alpha_n\|\gamma f(\tilde{x})-A\tilde{x}\|.\qquad(3.22)$$
Therefore, using (3.22), we have
$$\begin{aligned}
\|x_{n+1}-\tilde{x}\|^2 &= \|\beta_n(x_n-\tilde{x})+(1-\beta_n)(y_n-\tilde{x})\|^2 \\
&\le \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\|y_n-\tilde{x}\|^2 \\
&= \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\|\alpha_n(\gamma f(x_n)-A\tilde{x})+(I-\alpha_n A)(Tx_n-\tilde{x})\|^2 \\
&\le \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\Bigl[(1-\alpha_n\bar{\gamma})^2\|x_n-\tilde{x}\|^2+2\alpha_n\langle\gamma f(x_n)-A\tilde{x},y_n-\tilde{x}\rangle\Bigr] \\
&= \bigl(1-2(1-\beta_n)\alpha_n\bar{\gamma}+(1-\beta_n)\alpha_n^2\bar{\gamma}^2\bigr)\|x_n-\tilde{x}\|^2 \\
&\quad+2(1-\beta_n)\alpha_n\langle\gamma f(x_n)-\gamma f(\tilde{x}),y_n-\tilde{x}\rangle+2(1-\beta_n)\alpha_n\langle\gamma f(\tilde{x})-A\tilde{x},y_n-\tilde{x}\rangle \\
&\le \bigl(1-2(1-\beta_n)\alpha_n\bar{\gamma}+(1-\beta_n)\alpha_n^2\bar{\gamma}^2\bigr)\|x_n-\tilde{x}\|^2 \\
&\quad+2(1-\beta_n)\alpha_n\gamma\alpha\|x_n-\tilde{x}\|\,\|y_n-\tilde{x}\|+2(1-\beta_n)\alpha_n\langle\gamma f(\tilde{x})-A\tilde{x},y_n-\tilde{x}\rangle \\
&\le \bigl(1-2(1-\beta_n)(\bar{\gamma}-\gamma\alpha)\alpha_n\bigr)\|x_n-\tilde{x}\|^2+(1-\beta_n)\alpha_n^2\bar{\gamma}^2\|x_n-\tilde{x}\|^2 \\
&\quad+2(1-\beta_n)\alpha_n^2\gamma\alpha\|x_n-\tilde{x}\|\,\|\gamma f(\tilde{x})-A\tilde{x}\|+2(1-\beta_n)\alpha_n\langle\gamma f(\tilde{x})-A\tilde{x},y_n-\tilde{x}\rangle.
\end{aligned}\qquad(3.23)$$
Since $\{x_n\}$ is bounded, we can choose a constant $M>0$ such that
$$\frac{1}{2(\bar{\gamma}-\gamma\alpha)}\Bigl(\bar{\gamma}^2\|x_n-\tilde{x}\|^2+2\gamma\alpha\|x_n-\tilde{x}\|\,\|\gamma f(\tilde{x})-A\tilde{x}\|\Bigr)\le M,\qquad\forall n\ge 0.\qquad(3.24)$$
It then follows from (3.23) that
$$\|x_{n+1}-\tilde{x}\|^2\le\bigl(1-2(\bar{\gamma}-\gamma\alpha)(1-\beta_n)\alpha_n\bigr)\|x_n-\tilde{x}\|^2+2(\bar{\gamma}-\gamma\alpha)(1-\beta_n)\alpha_n\delta_n,\qquad(3.25)$$
where
$$\delta_n=\alpha_n M+\frac{1}{\bar{\gamma}-\gamma\alpha}\langle\gamma f(\tilde{x})-A\tilde{x},y_n-\tilde{x}\rangle.\qquad(3.26)$$
By (C1) and (3.17), we get
$$\limsup_{n\to\infty}\delta_n\le 0.\qquad(3.27)$$
Noting that, by (C2) and (C3), $\sum_{n=0}^{\infty}2(\bar{\gamma}-\gamma\alpha)(1-\beta_n)\alpha_n=\infty$, we can apply Lemma 2.1 to (3.25) and conclude that $x_n\to\tilde{x}$. This completes the proof.

Taking $T=I$ in (3.1), we have the following iterative algorithm:

$$x_{n+1}=\beta_n x_n+(1-\beta_n)Sy_n,\qquad y_n=\alpha_n\gamma f(x_n)+(I-\alpha_n A)x_n,\qquad n\ge 0.\qquad(3.28)$$

Now we state and prove the strong convergence of the iterative scheme (3.28).

Theorem 3.4. Let $\{x_n\}$ be a sequence in $H$ generated by the algorithm (3.28) with the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfying the following control conditions: (C1) $\lim_{n\to\infty}\alpha_n=0$; (C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$; (C3) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Then $\{x_n\}$ converges strongly to a fixed point $\tilde{x}$ of $S$ which solves the variational inequality
$$\langle(A-\gamma f)\tilde{x},\tilde{x}-x\rangle\le 0,\qquad\forall x\in F(S).\qquad(3.29)$$

Proof. From Lemma 3.1 (applied with $T=I$; see (3.9)), we have
$$\|x_n-Sy_n\|\to 0.\qquad(3.30)$$
Thus, we have
$$\begin{aligned}
\|x_n-Sx_n\| &\le \|x_n-Sy_n\|+\|Sy_n-Sx_n\| \\
&\le \|x_n-Sy_n\|+\|y_n-x_n\| \\
&\le \|x_n-Sy_n\|+\alpha_n\bigl(\gamma\|f(x_n)\|+\|Ax_n\|\bigr)\to 0.
\end{aligned}\qquad(3.31)$$
By an argument similar to that for (3.17), we can also prove that
$$\limsup_{n\to\infty}\langle y_n-\tilde{x},\gamma f(\tilde{x})-A\tilde{x}\rangle\le 0.\qquad(3.32)$$
From (3.28), we obtain
$$\begin{aligned}
\|x_{n+1}-\tilde{x}\|^2 &= \|\beta_n(x_n-\tilde{x})+(1-\beta_n)(Sy_n-\tilde{x})\|^2 \\
&\le \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\|Sy_n-\tilde{x}\|^2 \\
&\le \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\|y_n-\tilde{x}\|^2 \\
&= \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\|\alpha_n(\gamma f(x_n)-A\tilde{x})+(I-\alpha_n A)(x_n-\tilde{x})\|^2 \\
&\le \beta_n\|x_n-\tilde{x}\|^2+(1-\beta_n)\Bigl[(1-\alpha_n\bar{\gamma})^2\|x_n-\tilde{x}\|^2+2\alpha_n\langle\gamma f(x_n)-A\tilde{x},y_n-\tilde{x}\rangle\Bigr].
\end{aligned}\qquad(3.33)$$
The remainder of the proof follows by the same argument as in Theorem 3.3. This completes the proof.

From the above results, we have the following corollaries.

Corollary 3.5. Let $\{x_n\}$ be a sequence in $H$ generated by the algorithm
$$x_{n+1}=\beta_n x_n+(1-\beta_n)y_n,\qquad y_n=\alpha_n f(x_n)+(1-\alpha_n)Tx_n,\qquad n\ge 0,\qquad(3.34)$$
where the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the following control conditions: (C1) $\lim_{n\to\infty}\alpha_n=0$; (C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$; (C3) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Then $\{x_n\}$ converges strongly to a fixed point $\tilde{x}$ of $T$ which solves the variational inequality
$$\langle(I-f)\tilde{x},\tilde{x}-x\rangle\le 0,\qquad\forall x\in F(T).\qquad(3.35)$$

Corollary 3.6. Let $\{x_n\}$ be a sequence in $H$ generated by the algorithm
$$x_{n+1}=\beta_n x_n+(1-\beta_n)Sy_n,\qquad y_n=\alpha_n f(x_n)+(1-\alpha_n)x_n,\qquad n\ge 0,\qquad(3.36)$$
where the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the following control conditions: (C1) $\lim_{n\to\infty}\alpha_n=0$; (C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$; (C3) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Then $\{x_n\}$ converges strongly to a fixed point $\tilde{x}$ of $S$ which solves the variational inequality
$$\langle(I-f)\tilde{x},\tilde{x}-x\rangle\le 0,\qquad\forall x\in F(S).\qquad(3.37)$$

Remark 3.7. Theorems 3.3 and 3.4 provide strong convergence results for the algorithms (3.11) and (3.28) using only the control conditions (C1) and (C2) on $\{\alpha_n\}$, which are weaker than the conditions previously employed in the literature. In this respect, our results can be considered as an improvement of many known results.