Abstract

The fine spectra of upper and lower triangular banded matrices were examined by several authors. Here we determine the fine spectra of tridiagonal symmetric infinite matrices and also give the explicit form of the resolvent operator for the sequence spaces $c_0$, $c$, $\ell_1$, and $\ell_\infty$.

1. Introduction

The spectrum of an operator is a generalization of the notion of eigenvalues for matrices. The spectrum over a Banach space is partitioned into three disjoint parts: the point spectrum, the continuous spectrum, and the residual spectrum. The determination of these three parts of the spectrum of an operator is called computing the fine spectrum of the operator.

The spectrum and fine spectrum of linear operators defined by particular limitation matrices over various sequence spaces have been studied by several authors. We summarize the existing literature concerning the spectrum and the fine spectrum. Wenger [1] examined the fine spectrum of the integer power of the Cesàro operator over $c$, and Rhoades [2] generalized this result to the weighted mean methods. Reade [3] worked on the spectrum of the Cesàro operator over the sequence space $c_0$. Gonzáles [4] studied the fine spectrum of the Cesàro operator over the sequence space $\ell_p$. Okutoyi [5] computed the spectrum of the Cesàro operator over the sequence space $bv$. Recently, Rhoades and Yildirim [6] examined the fine spectrum of factorable matrices over $c_0$ and $c$. Coşkun [7] studied the spectrum and fine spectrum of the $p$-Cesàro operator acting on the space $c_0$. Akhmedov and Başar [8, 9] have determined the fine spectrum of the Cesàro operator over the sequence spaces $c_0$, $\ell_\infty$, and $\ell_p$. In a recent paper, Furkan et al. [10] determined the fine spectrum of $B(r,s,t)$ over the sequence spaces $c_0$ and $c$, where $B(r,s,t)$ is a lower triangular triple-band matrix. Later, Altun and Karakaya [11] computed the fine spectra of lacunary matrices over $c_0$ and $c$.

In this work, our purpose is to determine the fine spectra of the operator whose corresponding matrix is a tridiagonal symmetric matrix, over the sequence spaces $c_0$, $c$, $\ell_1$, and $\ell_\infty$. We will also give the explicit form of the resolvent for this operator and compute the norm of the resolvent operator when it exists and is continuous.

Let $X$ and $Y$ be Banach spaces and $T : X \to Y$ be a bounded linear operator. By $R(T)$, we denote the range of $T$, that is,
\[ R(T) = \{ y \in Y : y = Tx,\ x \in X \}. \tag{1.1} \]
By $B(X)$, we denote the set of all bounded linear operators from $X$ into itself. If $X$ is a Banach space and $T \in B(X)$, then the adjoint $T^*$ of $T$ is a bounded linear operator on the dual $X^*$ of $X$ defined by $(T^*\phi)(x) = \phi(Tx)$ for all $\phi \in X^*$ and $x \in X$. Let $X \neq \{\theta\}$ be a complex normed space and $T : \mathcal{D}(T) \to X$ be a linear operator with domain $\mathcal{D}(T) \subseteq X$. With $T$, we associate the operator
\[ T_\lambda = T - \lambda I, \tag{1.2} \]
where $\lambda$ is a complex number and $I$ is the identity operator on $\mathcal{D}(T)$. If $T_\lambda$ has an inverse, which is linear, we denote it by $T_\lambda^{-1}$, that is,
\[ T_\lambda^{-1} = (T - \lambda I)^{-1}, \tag{1.3} \]
and call it the resolvent operator of $T_\lambda$. If $\lambda = 0$, we will simply write $T^{-1}$. Many properties of $T_\lambda$ and $T_\lambda^{-1}$ depend on $\lambda$, and spectral theory is concerned with those properties. For instance, we will be interested in the set of all $\lambda$ in the complex plane such that $T_\lambda^{-1}$ exists. Boundedness of $T_\lambda^{-1}$ is another property that will be essential. We shall also ask for which $\lambda$ the domain of $T_\lambda^{-1}$ is dense in $X$. For our investigation of $T$, $T_\lambda$, and $T_\lambda^{-1}$, we need some basic concepts in spectral theory, which are given as follows (see [12, pages 370-371]).

Let $X \neq \{\theta\}$ be a complex normed space, and let $T : \mathcal{D}(T) \to X$ be a linear operator with domain $\mathcal{D}(T) \subseteq X$. A regular value $\lambda$ of $T$ is a complex number such that
(R1) $T_\lambda^{-1}$ exists,
(R2) $T_\lambda^{-1}$ is bounded, and
(R3) $T_\lambda^{-1}$ is defined on a set which is dense in $X$.

The resolvent set $\rho(T)$ of $T$ is the set of all regular values $\lambda$ of $T$. Its complement $\sigma(T) = \mathbb{C} \setminus \rho(T)$ in the complex plane $\mathbb{C}$ is called the spectrum of $T$. Furthermore, the spectrum $\sigma(T)$ is partitioned into three disjoint sets as follows: the point spectrum $\sigma_p(T)$ is the set of $\lambda$ such that $T_\lambda^{-1}$ does not exist; a $\lambda \in \sigma_p(T)$ is called an eigenvalue of $T$. The continuous spectrum $\sigma_c(T)$ is the set of $\lambda$ such that $T_\lambda^{-1}$ exists and satisfies (R3) but not (R2). The residual spectrum $\sigma_r(T)$ is the set of $\lambda$ such that $T_\lambda^{-1}$ exists but does not satisfy (R3).

A triangle is a lower triangular matrix with all of the principal diagonal elements nonzero. We shall write $\ell_\infty$, $c$, and $c_0$ for the spaces of all bounded, convergent, and null sequences, respectively. And by $\ell_p$, we denote the space of all $p$-absolutely summable sequences, where $1 \le p < \infty$. Let $\mu$ and $\gamma$ be two sequence spaces and $A = (a_{nk})$ be an infinite matrix of real or complex numbers $a_{nk}$, where $n, k \in \mathbb{N}$. Then, we say that $A$ defines a matrix mapping from $\mu$ into $\gamma$, and we denote it by writing $A : \mu \to \gamma$, if for every sequence $x = (x_k) \in \mu$ the sequence $Ax = \{(Ax)_n\}$, the $A$-transform of $x$, is in $\gamma$, where
\[ (Ax)_n = \sum_k a_{nk} x_k \quad (n \in \mathbb{N}). \tag{1.4} \]
By $(\mu : \gamma)$, we denote the class of all matrices $A$ such that $A : \mu \to \gamma$. Thus, $A \in (\mu : \gamma)$ if and only if the series on the right side of (1.4) converges for each $n$ and every $x \in \mu$, and we have $Ax = \{(Ax)_n\}_{n \in \mathbb{N}} \in \gamma$ for all $x \in \mu$.

A tridiagonal symmetric infinite matrix is of the form
\[ S = S(q,r) = \begin{pmatrix} q & r & 0 & 0 & \cdots \\ r & q & r & 0 & \cdots \\ 0 & r & q & r & \cdots \\ 0 & 0 & r & q & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \tag{1.5} \]
where $q, r \in \mathbb{C}$. The spectral results are clear when $r = 0$, so in the sequel we assume $r \neq 0$.

Theorem 1.1 (cf. [13]). Let $T$ be an operator with the associated matrix $A = (a_{nk})$.
(i) $T \in B(c)$ if and only if
\[ \|A\| = \sup_n \sum_{k=1}^{\infty} |a_{nk}| < \infty, \tag{1.6} \]
\[ a_k = \lim_n a_{nk} \ \text{exists for each } k, \tag{1.7} \]
\[ a = \lim_n \sum_{k=1}^{\infty} a_{nk} \ \text{exists}. \tag{1.8} \]
(ii) $T \in B(c_0)$ if and only if (1.6) and (1.7) hold with $a_k = 0$ for each $k$.
(iii) $T \in B(\ell_\infty)$ if and only if (1.6) holds. In these cases, the operator norm of $T$ is
\[ \|T\|_{(\ell_\infty : \ell_\infty)} = \|T\|_{(c : c)} = \|T\|_{(c_0 : c_0)} = \|A\|. \tag{1.9} \]
(iv) $T \in B(\ell_1)$ if and only if
\[ \|A\|_t = \sup_k \sum_{n=1}^{\infty} |a_{nk}| < \infty. \tag{1.10} \]
In this case, the operator norm of $T$ is $\|T\|_{(\ell_1 : \ell_1)} = \|A\|_t$.

Corollary 1.2. Let $\mu \in \{c_0, c, \ell_1, \ell_\infty\}$. Then $S(q,r) : \mu \to \mu$ is a bounded linear operator and $\|S(q,r)\|_{(\mu : \mu)} = |q| + 2|r|$.
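Corollary 1.2 can be illustrated numerically on a finite truncation of $S(q,r)$: every interior row of the truncation has $\ell_1$-norm $|q| + 2|r|$, which by Theorem 1.1 is the operator norm. The following Python sketch is not part of the paper; the values $q = 2$, $r = -3$ are arbitrary samples.

```python
# Numerical illustration of Corollary 1.2 on a finite truncation of
# S(q, r); q = 2, r = -3 are arbitrary sample parameters.

def truncate_S(q, r, n):
    """n x n upper-left corner of the tridiagonal symmetric matrix S(q, r)."""
    return [[q if i == j else r if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

q, r, n = 2.0, -3.0, 8
S = truncate_S(q, r, n)
row_norms = [sum(abs(x) for x in row) for row in S]
print(max(row_norms))          # sup of row l1-norms = 8.0
print(abs(q) + 2 * abs(r))     # |q| + 2|r| from Corollary 1.2 = 8.0
```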

2. The Spectra and Point Spectra

Theorem 2.1. $\sigma_p(S, \mu) = \emptyset$ for $\mu \in \{\ell_1, c_0, c\}$.

Proof. Since $\ell_1 \subset c_0 \subset c$, it is enough to show that $\sigma_p(S, c) = \emptyset$. Let $\lambda$ be an eigenvalue of the operator $S$. An eigenvector $x = (x_0, x_1, \ldots) \in c$ corresponding to this eigenvalue satisfies the linear system of equations
\[ q x_0 + r x_1 = \lambda x_0, \quad r x_0 + q x_1 + r x_2 = \lambda x_1, \quad r x_1 + q x_2 + r x_3 = \lambda x_2, \quad \ldots \tag{2.1} \]
If $x_0 = 0$, then $x_k = 0$ for all $k$; hence $x_0 \neq 0$. Without loss of generality we can suppose $x_0 = 1$. Then $x_1 = (\lambda - q)/r$, and the system of equations turns into the linear homogeneous recurrence relation
\[ x_n + p x_{n-1} + x_{n-2} = 0 \quad \text{for } n \ge 2, \tag{2.2} \]
where $p = (q - \lambda)/r$. The characteristic polynomial of the recurrence relation is
\[ x^2 + p x + 1 = 0. \tag{2.3} \]
There are three cases here.

Case 1 ($p = -2$). Then the characteristic polynomial has only one root, $\alpha = 1$. Hence, the solution of the recurrence relation is of the form
\[ x_n = (A + Bn)\alpha^n = A + Bn, \tag{2.4} \]
where $A$ and $B$ are constants which can be determined by the first two terms $x_0$ and $x_1$. From $1 = x_0 = A + B \cdot 0$ we have $A = 1$, and from $-p = x_1 = A + B \cdot 1$ we have $B = 1$. Then $x_n = n + 1$. This means $(x_n) \notin c$. So, we conclude that there is no eigenvalue in this case.

Case 2 ($p = 2$). Then the characteristic polynomial has only one root, $\alpha = -1$. The solution of the recurrence relation, found as in Case 1, is $x_n = (n+1)(-1)^n$. So, there is no eigenvalue in this case.

Case 3 ($p \neq \pm 2$). Then the characteristic polynomial has two distinct roots $\alpha_1 \neq \pm 1$ and $\alpha_2 \neq \pm 1$ with $\alpha_1 \alpha_2 = 1$. Let $|\alpha_1| \ge 1 \ge |\alpha_2|$. The solution of the recurrence relation is of the form
\[ x_n = A\alpha_1^n + B\alpha_2^n. \tag{2.5} \]
Using the first two terms and the fact that $p = -(\alpha_1 + \alpha_2)$, we get $A = \alpha_1/(\alpha_1 - \alpha_2)$ and $B = \alpha_2/(\alpha_2 - \alpha_1)$. So we have
\[ x_n = \frac{\alpha_1^{n+1} - \alpha_2^{n+1}}{\alpha_1 - \alpha_2}. \tag{2.6} \]
If $|\alpha_1| > 1 > |\alpha_2|$, then
\[ |x_n| \ge \frac{1}{|\alpha_1 - \alpha_2|}\left(|\alpha_1|^{n+1} - |\alpha_2|^{n+1}\right). \tag{2.7} \]
So $\lim_n |x_n| = \infty$, which means $(x_n) \notin c$. Now, if $|\alpha_1| = |\alpha_2| = 1$, then there exists $\theta \in (0, \pi)$ such that $\alpha_1 = e^{i\theta}$ and $\alpha_2 = e^{-i\theta}$. So $x_n = [\sin((n+1)\theta)]/\sin\theta$. Again we have $(x_n) \notin c$. Hence there is no eigenvalue in this case either.
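The Case 3 closed form (2.6) can be sanity-checked numerically: iterating the recurrence (2.2) directly and evaluating (2.6) produce the same sequence. The Python sketch below is not part of the proof; $q$, $r$, $\lambda$ are arbitrary sample values with $p \neq \pm 2$.

```python
# Check that x_n = (a1**(n+1) - a2**(n+1)) / (a1 - a2) solves the
# recurrence x_n + p*x_{n-1} + x_{n-2} = 0 with x_0 = 1, x_1 = -p.
# q, r, lam are arbitrary sample values.
import cmath

q, r, lam = 1.0, 2.0, 5.0 + 1.0j
p = (q - lam) / r
disc = cmath.sqrt(p * p - 4)          # roots of x^2 + p*x + 1 = 0
a1, a2 = (-p + disc) / 2, (-p - disc) / 2

x = [1.0, -p]                          # x_0 = 1, x_1 = (lam - q)/r = -p
for n in range(2, 10):
    x.append(-p * x[n - 1] - x[n - 2])

closed = [(a1 ** (n + 1) - a2 ** (n + 1)) / (a1 - a2) for n in range(10)]
print(max(abs(u - v) for u, v in zip(x, closed)))  # ~ 0
```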

Repeating all the steps of the proof of this theorem for $\ell_\infty$, we get the following.

Theorem 2.2. $\sigma_p(S, \ell_\infty) = (q - 2r, q + 2r)$, the open line segment with endpoints $q - 2r$ and $q + 2r$.
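For real $q$ and $r$, the eigenvectors behind Theorem 2.2 can be exhibited concretely: for $\lambda = q + 2r\cos\theta$ with $\theta \in (0, \pi)$, the bounded sequence $x_n = \sin((n+1)\theta)/\sin\theta$ appearing in the proof of Theorem 2.1 satisfies $Sx = \lambda x$. A short numerical check, with arbitrary sample values:

```python
# For lam = q + 2r*cos(theta), theta in (0, pi), the bounded sequence
# x_n = sin((n+1)*theta)/sin(theta) satisfies S x = lam x.
# q, r, theta are arbitrary sample values.
import math

q, r, theta = 1.5, 0.75, 1.1
lam = q + 2 * r * math.cos(theta)
x = [math.sin((n + 1) * theta) / math.sin(theta) for n in range(50)]

residuals = [abs(q * x[0] + r * x[1] - lam * x[0])]      # first row of S
residuals += [abs(r * x[n - 1] + q * x[n] + r * x[n + 1] - lam * x[n])
              for n in range(1, 49)]                     # interior rows
print(max(residuals))                                    # ~ 0
print(max(abs(v) for v in x))                            # bounded by 1/|sin(theta)|
```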

Theorem 2.3. Let $p = (q - \lambda)/r$, and let $\alpha_1$ and $\alpha_2$ be the roots of the polynomial $P(x) = x^2 + px + 1$ with $|\alpha_2| > 1 > |\alpha_1|$. Then the resolvent operator over $c_0$ is $S_\lambda^{-1} = (s_{nk})$, where
\[ s_{nk} = \frac{1}{r(\alpha_1^2 - 1)} \begin{cases} \alpha_1^{n-k+1} - \alpha_1^{n+k+3} & \text{if } n \ge k, \\ \alpha_1^{k-n+1} - \alpha_1^{n+k+3} & \text{if } n < k. \end{cases} \tag{2.8} \]
Moreover, this operator is continuous, and the domain of the operator is the whole space $c_0$.

Proof. Let $\alpha_1$ and $\alpha_2$ be as stated in the theorem. From $(1/r) S_\lambda x = y$ we get the system of equations
\[ p x_0 + x_1 = y_0, \quad x_0 + p x_1 + x_2 = y_1, \quad x_1 + p x_2 + x_3 = y_2, \quad \ldots \tag{2.9} \]
This is a nonhomogeneous linear recurrence relation. Using the fact that $(x_n), (y_n) \in c_0$, we can reach a solution of (2.9) with generating functions. This solution can be given by
\[ x_n = \frac{1}{\alpha_1^2 - 1} \sum_{k=0}^{\infty} t_{nk} y_k, \tag{2.10} \]
where
\[ t_{nk} = \begin{cases} \alpha_1^{n-k+1} - \alpha_1^{n+k+3} & \text{if } n \ge k, \\ \alpha_1^{k-n+1} - \alpha_1^{n+k+3} & \text{if } n < k. \end{cases} \tag{2.11} \]
Let $T = (t_{nk})$. By using Theorem 1.1, we can see that $T \in B(c_0)$. So $(1/(\alpha_1^2 - 1))\, T$ is the resolvent operator of $(1/r) S_\lambda$ and is continuous.
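The entries (2.8) can be verified numerically: multiplying a finite truncation of $S_\lambda$ by the closed-form matrix $(s_{nk})$ reproduces the identity in every row unaffected by the truncation. The sketch below uses the sample values $q = 1$, $r = 2$, $\lambda = 6$, for which $|\alpha_1| < 1 < |\alpha_2|$; it is an informal check, not part of the proof.

```python
# Multiply a truncation of S_lambda by the closed-form resolvent entries
# of Theorem 2.3 and check the identity on the non-truncated rows.
# q = 1, r = 2, lam = 6 are sample values with |a1| < 1 < |a2|.
import cmath

q, r, lam = 1.0, 2.0, 6.0
p = (q - lam) / r
disc = cmath.sqrt(p * p - 4)
a1, a2 = sorted([(-p + disc) / 2, (-p - disc) / 2], key=abs)
assert abs(a1) < 1 < abs(a2)

def s_entry(n, k):
    hi, lo = max(n, k), min(n, k)
    return (a1 ** (hi - lo + 1) - a1 ** (n + k + 3)) / (r * (a1 ** 2 - 1))

N = 30
S_lam = [[(q - lam) if i == j else r if abs(i - j) == 1 else 0
          for j in range(N)] for i in range(N)]
Sinv = [[s_entry(n, k) for k in range(N)] for n in range(N)]

prod = [[sum(S_lam[i][m] * Sinv[m][j] for m in range(N)) for j in range(N)]
        for i in range(N)]
err = max(abs(prod[i][j] - (i == j))
          for i in range(N - 1) for j in range(N))  # skip last (truncated) row
print(err)  # ~ 0
```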

If $T : \mu \to \mu$ ($\mu$ being $\ell_1$ or $c_0$) is a bounded linear operator represented by the matrix $A$, then it is known that the adjoint operator $T^* : \mu^* \to \mu^*$ is represented by the transpose $A^t$ of the matrix $A$. It should be noted that the dual space $c_0^*$ of $c_0$ is isometrically isomorphic to the Banach space $\ell_1$, and the dual space $\ell_1^*$ of $\ell_1$ is isometrically isomorphic to the Banach space $\ell_\infty$.

Corollary 2.4. $\sigma(S, \mu) \subseteq [q - 2r, q + 2r]$ for $\mu \in \{\ell_1, c_0, c, \ell_\infty\}$.

Proof. Since $S^t = S$, we have $\sigma(S, c_0) = \sigma(S^*, c_0^*) = \sigma(S, \ell_1) = \sigma(S^*, \ell_1^*) = \sigma(S, \ell_\infty)$. And by Cartlidge [14], if a matrix operator $A$ is bounded on $c$, then $\sigma(A, c) = \sigma(A, \ell_\infty)$. Hence we have $\sigma(S, c_0) = \sigma(S, \ell_1) = \sigma(S, \ell_\infty) = \sigma(S, c)$. What remains is to show that $\sigma(S, c_0) \subseteq [q - 2r, q + 2r]$. By Theorem 2.3, there exists a resolvent operator of $S_\lambda$ which is continuous and whose domain is the whole space $c_0$ whenever the roots of the polynomial $P(x) = x^2 + px + 1$ satisfy
\[ |\alpha_2| > 1 > |\alpha_1|. \tag{2.12} \]
So, if $\lambda \in \sigma(S, c_0)$, then (2.12) is not satisfied. Since $\alpha_1 \alpha_2 = 1$, the failure of (2.12) means that the roots can only be of the form
\[ \alpha_1 = \frac{1}{\alpha_2} = e^{i\theta} \tag{2.13} \]
for some $\theta \in [0, 2\pi)$. Then $(q - \lambda)/r = p = -(\alpha_1 + \alpha_2) = -(e^{i\theta} + e^{-i\theta}) = -2\cos\theta$. Hence $\lambda = q + 2r\cos\theta$, which means $\lambda$ can lie only on the line segment $[q - 2r, q + 2r]$.

Theorem 2.5. $\sigma(S, \mu) = [q - 2r, q + 2r]$ for $\mu \in \{\ell_1, c_0, c, \ell_\infty\}$.

Proof. By Theorem 2.2 and Corollary 2.4, $(q - 2r, q + 2r) \subseteq \sigma(S, \ell_\infty) \subseteq [q - 2r, q + 2r]$. Since the spectrum of a bounded linear operator over a complex Banach space is closed, we have $\sigma(S, \ell_\infty) = [q - 2r, q + 2r]$. And from the proof of Corollary 2.4 we have $\sigma(S, \ell_1) = \sigma(S, c_0) = \sigma(S, c) = \sigma(S, \ell_\infty)$.
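For real $q$ and $r$, Theorem 2.5 is consistent with the classical fact that the $N \times N$ truncation of $S(q,r)$ has eigenvalues $q + 2r\cos(k\pi/(N+1))$, $k = 1, \ldots, N$, which fill the segment $[q - 2r, q + 2r]$ as $N \to \infty$. The sketch below verifies one such eigenpair directly, with arbitrary sample values:

```python
# Verify one eigenpair of the N x N truncation of S(q, r):
# eigenvalue q + 2r*cos(k*pi/(N+1)) with eigenvector sin((n+1)*k*pi/(N+1)).
# q, r, N, k are sample values.
import math

q, r, N, k = 1.0, 2.0, 40, 7
lam = q + 2 * r * math.cos(k * math.pi / (N + 1))
v = [math.sin((n + 1) * k * math.pi / (N + 1)) for n in range(N)]

def Sv(n):
    left = v[n - 1] if n > 0 else 0.0       # boundary of the truncation
    right = v[n + 1] if n < N - 1 else 0.0
    return r * left + q * v[n] + r * right

err = max(abs(Sv(n) - lam * v[n]) for n in range(N))
print(err)                                  # ~ 0
print(q - 2 * r, "<=", lam, "<=", q + 2 * r)
```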

3. The Continuous Spectra and Residual Spectra

Lemma 3.1 (see [15, page 59]). $T$ has a dense range if and only if $T^*$ is one to one.

Corollary 3.2. If $T \in (\mu : \mu)$, then $\sigma_r(T, \mu) = \sigma_p(T^*, \mu^*) \setminus \sigma_p(T, \mu)$.

Theorem 3.3. $\sigma_r(S, c_0) = \emptyset$.

Proof. $\sigma_p(S, \ell_1) = \emptyset$ by Theorem 2.1. Now using Corollary 3.2, we have $\sigma_r(S, c_0) = \sigma_p(S^*, c_0^*) \setminus \sigma_p(S, c_0) = \sigma_p(S, \ell_1) \setminus \sigma_p(S, c_0) = \emptyset$.

Theorem 3.4. $\sigma_r(S, \ell_1) = (q - 2r, q + 2r)$.

Proof. As in the proof of the previous theorem, we have $\sigma_r(S, \ell_1) = \sigma_p(S^*, \ell_1^*) \setminus \sigma_p(S, \ell_1) = \sigma_p(S, \ell_\infty) \setminus \sigma_p(S, \ell_1) = (q - 2r, q + 2r)$.

If $T : c \to c$ is a bounded matrix operator represented by the matrix $A$, then $T^* : c^* \to c^*$ acting on $\mathbb{C} \oplus \ell_1$ has a matrix representation of the form
\[ \begin{pmatrix} \chi & 0 \\ b & A^t \end{pmatrix}, \tag{3.1} \]
where $\chi$ is the limit of the sequence of row sums of $A$ minus the sum of the limits of the columns of $A$, and $b$ is the column vector whose $k$th entry is the limit of the $k$th column of $A$ for each $k$. For $S_\lambda : c \to c$, the matrix $S_\lambda^*$ is of the form
\[ \begin{pmatrix} 2r + q - \lambda & 0 \\ 0 & S_\lambda \end{pmatrix}. \tag{3.2} \]

Theorem 3.5. $\sigma_r(S, c) = \{q + 2r\}$.

Proof. Let $x = (x_0, x_1, \ldots) \in \mathbb{C} \oplus \ell_1$ be an eigenvector of $S^*$ corresponding to the eigenvalue $\lambda$. Then we have $(2r + q) x_0 = \lambda x_0$ and $S x' = \lambda x'$, where $x' = (x_1, x_2, \ldots)$. By Theorem 2.1, $x' = (0, 0, \ldots)$. Then $x_0 \neq 0$, and $\lambda = 2r + q$ is the only value that satisfies $(2r + q) x_0 = \lambda x_0$. Hence $\sigma_p(S^*, c^*) = \{2r + q\}$. Then $\sigma_r(S, c) = \sigma_p(S^*, c^*) \setminus \sigma_p(S, c) = \{2r + q\}$.

Now, since the spectrum $\sigma$ is the disjoint union of $\sigma_p$, $\sigma_r$, and $\sigma_c$, we can find $\sigma_c$ over the spaces $\ell_1$, $c_0$, and $c$. So we have the following.

Theorem 3.6. For the operator $S$, one has the following:
\[ \sigma_c(S, \ell_1) = \{q - 2r,\ q + 2r\}, \quad \sigma_c(S, c_0) = [q - 2r, q + 2r], \quad \sigma_c(S, c) = [q - 2r, q + 2r). \tag{3.3} \]

4. The Resolvent Operator

The following theorem is a generalization of Theorem 2.3.

Theorem 4.1. Let $\mu \in \{c_0, c, \ell_1, \ell_\infty\}$. The resolvent operator $S^{-1}$ over $\mu$ exists and is continuous, with domain the whole space $\mu$, if and only if $0 \notin [q - 2r, q + 2r]$. In this case, $S^{-1}$ has a matrix representation $(s_{nk})$ defined by
\[ s_{nk} = \frac{1}{r(\alpha_1^2 - 1)} \begin{cases} \alpha_1^{n-k+1} - \alpha_1^{n+k+3} & \text{if } n \ge k, \\ \alpha_1^{k-n+1} - \alpha_1^{n+k+3} & \text{if } n < k, \end{cases} \tag{4.1} \]
where $\alpha_1$ is the root of the polynomial $P(x) = r x^2 + q x + r$ with $|\alpha_1| < 1$.

Proof. Let $\mu$ be one of the sequence spaces in $\{c_0, c, \ell_1, \ell_\infty\}$. Suppose $S$ has a continuous resolvent operator whose domain is the whole space $\mu$. Then $\lambda = 0$ is not in $\sigma(S, \mu) = [q - 2r, q + 2r]$. Conversely, if $0 \notin [q - 2r, q + 2r]$, then $S$ has a continuous resolvent operator, and since $S$ is bounded, by Lemmas 7.2-7.3 of [12] the domain of this resolvent operator is the whole space $\mu$.

Now, suppose $0 \notin [q - 2r, q + 2r]$. Let $\alpha_1$ and $\alpha_2$ be the roots of the polynomial $P(x) = r x^2 + q x + r$ with $|\alpha_1| \le |\alpha_2|$. Since $0 \notin [q - 2r, q + 2r]$, by the proof of Corollary 2.4 we have $|\alpha_1| \neq |\alpha_2|$. Then $|\alpha_1| < 1 < |\alpha_2|$. So $S$ satisfies the conditions of Theorem 2.3. Hence, for $\mu = c_0$, that theorem shows that the resolvent operator of $S$ is represented by the matrix $S^{-1} = (s_{nk})$ defined by
\[ s_{nk} = \frac{1}{r(\alpha_1^2 - 1)} \begin{cases} \alpha_1^{n-k+1} - \alpha_1^{n+k+3} & \text{if } n \ge k, \\ \alpha_1^{k-n+1} - \alpha_1^{n+k+3} & \text{if } n < k. \end{cases} \tag{4.2} \]
The matrix $S^{-1}$ is already a left inverse of the matrix $S$. Observe that $S^{-1}$ also satisfies the corresponding conditions of Theorem 1.1, which means $S^{-1} \in (\mu : \mu)$ for $\mu \in \{c, \ell_1, \ell_\infty\}$. So, the matrix $S^{-1}$ is the representation of the resolvent operator also for the spaces in $\{c, \ell_1, \ell_\infty\}$.

Remark 4.2. If a matrix $A$ is a triangle, we can see that the resolvent (when it exists) is the unique lower triangular left inverse of $A$. In our case, $S$ is far from being a triangle, and the matrix $S^{-1}$ of this theorem is not the unique left inverse of the matrix $S$ for $0 \notin [q - 2r, q + 2r]$. For example, the matrix $T = (t_{nk})$ defined by
\[ t_{nk} = \begin{cases} \dfrac{1}{r(\alpha_1^2 - 1)}\left(\alpha_1^{k-n+1} - \alpha_1^{n-k+1}\right) & \text{if } n < k, \\ 0 & \text{if } n \ge k, \end{cases} \tag{4.3} \]
is another left inverse of $S$. Then $\lambda S^{-1} + (1 - \lambda) T$ is also a left inverse of $S$ for any $\lambda \in \mathbb{C}$, which means there exist infinitely many left inverses of $S$.
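The claim that $T$ is a left inverse can be checked numerically: each entry of $TS$ is a finite sum, so a truncated product already reproduces the identity in all columns unaffected by the truncation. The sketch below uses the sample values $q = 6$, $r = 1$, so that $0 \notin [q - 2r, q + 2r] = [4, 8]$.

```python
# Check that the strictly upper triangular matrix T of Remark 4.2 is a
# left inverse of S.  Sample values q = 6, r = 1, so 0 is outside [4, 8].
import cmath

q, r = 6.0, 1.0
disc = cmath.sqrt(q * q - 4 * r * r)
a1 = min((-q + disc) / (2 * r), (-q - disc) / (2 * r), key=abs)
assert abs(a1) < 1

def t_entry(n, k):
    if n >= k:
        return 0.0
    return (a1 ** (k - n + 1) - a1 ** (n - k + 1)) / (r * (a1 ** 2 - 1))

N = 10
S = [[q if i == j else r if abs(i - j) == 1 else 0
      for j in range(N)] for i in range(N)]
T = [[t_entry(n, k) for k in range(N)] for n in range(N)]

prod = [[sum(T[i][m] * S[m][j] for m in range(N)) for j in range(N)]
        for i in range(N)]
# Columns j <= N-2 are unaffected by the truncation.
err = max(abs(prod[i][j] - (i == j))
          for i in range(N) for j in range(N - 1))
print(err)  # ~ 0
```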

Theorem 4.3. Let $0 \notin [q - 2r, q + 2r]$, and let $\alpha_1$ be the root of $P(x) = r x^2 + q x + r$ with $|\alpha_1| < 1$. Then for $\mu \in \{c_0, c, \ell_1, \ell_\infty\}$ we have
\[ \|S^{-1}\|_{(\mu : \mu)} = \frac{|\alpha_1| + |\alpha_1|^2}{|r|\,|1 - \alpha_1^2|\,(1 - |\alpha_1|)}. \tag{4.4} \]

Proof. Since $S^{-1}$ is a symmetric matrix, the supremum of the $\ell_1$ norms of the rows is equal to the supremum of the $\ell_1$ norms of the columns. So, according to Theorem 1.1, what we need is to calculate the supremum of the $\ell_1$ norms of the rows of $S^{-1}$. Denote the $n$th row of $S^{-1}$ by $S_n^{-1}$ for $n = 0, 1, \ldots$. Now, let us fix the row $n$ and calculate the $\ell_1$ norm for this row. Let $\rho = 1/|r(1 - \alpha_1^2)|$. By using Theorem 4.1, we have
\[ \begin{aligned} \|S_n^{-1}\|_1 &= \rho\left(\sum_{k=0}^{n} \left|\alpha_1^{n-k+1} - \alpha_1^{n+k+3}\right| + \sum_{k=n+1}^{\infty} \left|\alpha_1^{k-n+1} - \alpha_1^{n+k+3}\right|\right) \\ &\le \rho\left(\sum_{k=0}^{n} \left(|\alpha_1|^{n-k+1} + |\alpha_1|^{n+k+3}\right) + \sum_{k=n+1}^{\infty} \left(|\alpha_1|^{k-n+1} + |\alpha_1|^{n+k+3}\right)\right) \\ &= \rho\left(\sum_{k=0}^{n} |\alpha_1|^{n-k+1} + \sum_{k=n+1}^{\infty} |\alpha_1|^{k-n+1} + \sum_{k=0}^{\infty} |\alpha_1|^{n+k+3}\right) \\ &= \rho\left(\sum_{k=1}^{n+1} |\alpha_1|^{k} + \sum_{k=2}^{\infty} |\alpha_1|^{k} + \sum_{k=n+3}^{\infty} |\alpha_1|^{k}\right) \\ &= \rho\left(\sum_{k=0}^{\infty} |\alpha_1|^{k}\left(|\alpha_1| + |\alpha_1|^{2}\right) - |\alpha_1|^{n+2}\right) \\ &\le \rho \sum_{k=0}^{\infty} |\alpha_1|^{k}\left(|\alpha_1| + |\alpha_1|^{2}\right) = \frac{|\alpha_1| + |\alpha_1|^{2}}{|r|\,|1 - \alpha_1^2|\,(1 - |\alpha_1|)}. \end{aligned} \tag{4.5} \]
Hence
\[ \|S^{-1}\|_{(\mu : \mu)} = \sup_n \|S_n^{-1}\|_1 \le \frac{|\alpha_1| + |\alpha_1|^2}{|r|\,|1 - \alpha_1^2|\,(1 - |\alpha_1|)}. \tag{4.6} \]
On the other hand,
\[ \begin{aligned} \|S_n^{-1}\|_1 &= \rho\left(\sum_{k=0}^{n} \left|\alpha_1^{n-k+1} - \alpha_1^{n+k+3}\right| + \sum_{k=n+1}^{\infty} \left|\alpha_1^{k-n+1} - \alpha_1^{n+k+3}\right|\right) \\ &\ge \rho\left(\sum_{k=0}^{n} \left(|\alpha_1|^{n-k+1} - |\alpha_1|^{n+k+3}\right) + \sum_{k=n+1}^{\infty} \left(|\alpha_1|^{k-n+1} - |\alpha_1|^{n+k+3}\right)\right) \\ &= \rho\left(\sum_{k=0}^{n} |\alpha_1|^{n-k+1} + \sum_{k=n+1}^{\infty} |\alpha_1|^{k-n+1} - \sum_{k=0}^{\infty} |\alpha_1|^{n+k+3}\right) \\ &= \rho\left(\sum_{k=1}^{n+1} |\alpha_1|^{k} + \sum_{k=2}^{\infty} |\alpha_1|^{k} - \sum_{k=n+3}^{\infty} |\alpha_1|^{k}\right). \end{aligned} \tag{4.7} \]
Then
\[ \|S^{-1}\|_{(\mu : \mu)} = \sup_n \|S_n^{-1}\|_1 \ge \lim_n \|S_n^{-1}\|_1 = \rho\left(\sum_{k=1}^{\infty} |\alpha_1|^{k} + \sum_{k=2}^{\infty} |\alpha_1|^{k}\right) = \frac{|\alpha_1| + |\alpha_1|^2}{|r|\,|1 - \alpha_1^2|\,(1 - |\alpha_1|)}. \tag{4.8} \]
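The closed form (4.4) can be compared against the row sums of a large truncation of the matrix $S^{-1}$. For the sample values $q = 6$, $r = 1$ we get $\alpha_1 = -3 + 2\sqrt{2}$, and the right-hand side of (4.4) evaluates to $1/4$; the truncated row sums approach the same value. This is an informal check, not part of the proof.

```python
# Compare the norm formula of Theorem 4.3 with the l1-norms of the rows
# of a truncation of S^{-1}.  Sample values q = 6, r = 1 (0 outside [4, 8]).
import cmath

q, r = 6.0, 1.0
disc = cmath.sqrt(q * q - 4 * r * r)
a1 = min((-q + disc) / (2 * r), (-q - disc) / (2 * r), key=abs)

def s_entry(n, k):
    hi, lo = max(n, k), min(n, k)
    return (a1 ** (hi - lo + 1) - a1 ** (n + k + 3)) / (r * (a1 ** 2 - 1))

N = 60
row_norms = [sum(abs(s_entry(n, k)) for k in range(N)) for n in range(N)]
formula = (abs(a1) + abs(a1) ** 2) / (
    abs(r) * abs(1 - a1 ** 2) * (1 - abs(a1)))
print(max(row_norms), formula)   # the two values agree closely
```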

Acknowledgment

The author thanks the referees for their careful reading of the original paper and for their valuable comments.