Abstract

We consider the maximal dissipative second-order difference (or discrete Sturm-Liouville) operators acting in the Hilbert space $\ell_w^2(\mathbb{Z})$, that is, the extensions of a minimal symmetric operator with defect index $(2,2)$ (in the Weyl-Hamburger limit-circle cases at $\pm\infty$). We investigate two classes of maximal dissipative operators with separated boundary conditions, called “dissipative at $-\infty$” and “dissipative at $\infty$.” In each case, we construct a self-adjoint dilation of the maximal dissipative operator and its incoming and outgoing spectral representations, which make it possible to determine the scattering matrix of the dilation. We also establish a functional model of the maximal dissipative operator and determine its characteristic function in terms of the Titchmarsh-Weyl function of the self-adjoint operator. We prove the completeness of the system of eigenvectors and associated vectors of the maximal dissipative operators.

1. Introduction

The method of contour integration of the resolvent is one of the general methods of spectral analysis for nonself-adjoint (dissipative) operators. It relies on fine estimates of the resolvent on expanding contours that separate the spectrum. The applicability of this method is restricted to weak perturbations of self-adjoint operators and to operators with sparse discrete spectrum. Since no asymptotics of the solutions are available for a wide class of singular problems, this method cannot be applied to them properly.

It is well known [1–4] that the theory of dilations with application of functional models gives an adequate approach to the spectral theory of dissipative (contractive) operators. In this theory, a key role is played by the characteristic function, which carries complete information on the spectral properties of the dissipative operator. The dissipative operator becomes the model operator in the incoming spectral representation of the dilation. The completeness problem for the system of eigenvectors and associated vectors is solved by factorizing the characteristic function. The computation of the characteristic functions of dissipative operators is preceded by the construction and investigation of the self-adjoint dilation and of the corresponding scattering problem, in which the characteristic function is realized as the scattering matrix [5]. The adequacy of this approach for dissipative Jacobi operators and second-order difference (or discrete Sturm-Liouville) operators has been demonstrated in [6–9].

In this paper, we consider the maximal dissipative second-order difference (or discrete Sturm-Liouville) operators acting in the Hilbert space $\ell_w^2(\mathbb{Z})$, that is, the extensions of a minimal symmetric operator with defect index $(2,2)$ (in the Weyl-Hamburger limit-circle cases at $\pm\infty$). We investigate two classes of maximal dissipative operators with separated boundary conditions, called “dissipative at $-\infty$” and “dissipative at $\infty$.” In each of these cases, we construct a self-adjoint dilation of the maximal dissipative operator and its incoming and outgoing spectral representations, which make it possible to determine the scattering matrix of the dilation according to the scheme of Lax and Phillips [5]. By means of the incoming spectral representation, we establish a functional model of the maximal dissipative operator and construct its characteristic function using the Titchmarsh-Weyl function of the self-adjoint operator. Finally, on the basis of the results obtained for the characteristic functions, we prove theorems on the completeness of the system of eigenvectors and associated vectors (or root vectors) of the maximal dissipative second-order difference operators.

2. Preliminaries

Let be a sequence of complex numbers and denote the sequence with components . We consider the following second-order difference (or discrete Sturm-Liouville) equation on the whole line: where is a complex spectral parameter, , and :, .

If we let , and , (2.1) can be written in Sturm-Liouville form as follows:
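
For orientation, a second-order difference equation of this type is commonly written as a three-term recurrence (the symbols $a_n$, $b_n$, $w_n$, $\lambda$ below are generic and serve only as an illustration, not as the fixed notation of this paper):
\[
a_{n-1} y_{n-1} + b_n y_n + a_n y_{n+1} = \lambda w_n y_n, \qquad n \in \mathbb{Z},
\]
with real $a_n \neq 0$, $b_n$ and weights $w_n > 0$. Setting $p_n := -a_n$ and $q_n := b_n + a_{n-1} + a_n$, it takes the Sturm-Liouville form
\[
-\Delta\big(p_{n-1}\,\Delta y_{n-1}\big) + q_n y_n = \lambda w_n y_n, \qquad \Delta y_n := y_{n+1} - y_n .
\]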

For arbitrary sequences and , we denote by the sequence with components defined as: Let with . Then we have the Green’s formula:
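
In the same illustrative notation, with $(\ell y)_n := w_n^{-1}\big(a_{n-1}y_{n-1} + b_n y_n + a_n y_{n+1}\big)$ and the bracket $[y,z]_n := a_n\big(y_{n+1}\bar z_n - y_n \bar z_{n+1}\big)$, Green's formula is the telescoping identity
\[
\sum_{n=m}^{N} \Big( (\ell y)_n\,\bar z_n - y_n\,\overline{(\ell z)_n} \Big)\, w_n \;=\; [y,z]_N - [y,z]_{m-1},
\]
valid for all finite $m \le N$ and all sequences $y$, $z$ (the coefficients being real).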

For any sequence , let denote the sequence with components given by , . We denote by the Hilbert space of all complex sequences , such that , with the inner product . Next, we denote by the set of all vectors such that . We define a maximal operator on by setting .
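
Schematically (again in the illustrative notation above, with $L$ denoting the maximal operator only for the purposes of this sketch), the underlying weighted space and maximal operator are
\[
\ell_w^2(\mathbb{Z}) := \Big\{ y = (y_n)_{n\in\mathbb{Z}} : \sum_{n\in\mathbb{Z}} w_n |y_n|^2 < \infty \Big\}, \qquad (y,z) := \sum_{n\in\mathbb{Z}} w_n\, y_n \bar z_n,
\]
\[
L y := \ell(y), \qquad D(L) := \big\{ y \in \ell_w^2(\mathbb{Z}) : \ell(y) \in \ell_w^2(\mathbb{Z}) \big\}.
\]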

It follows from Green’s formula (2.4) that the limits and exist and are finite for arbitrary vectors . Therefore, taking the limit as and in (2.4), for all , we have

Denote by the closure of the symmetric operator defined by on the linear set of finite sequences (i.e., vectors having only finitely many nonzero components). The minimal operator is symmetric and . The computation of the defect index of can be reduced to the computation of the defect indices in the half-line case. In fact, is the orthogonal sum of the spaces () and (), which are embedded in the natural way in . Denote by and the minimal (maximal) operators generated by and in the spaces and , respectively, where is the domain of , with , , , . Then it is easy to see that the equality is satisfied for the defect numbers of . This shows that the defect index of has the form $(m, m)$, where $m = 0$, $1$, or $2$. For defect index $(0,0)$ the operator is self-adjoint, that is, .
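
The splitting argument can be summarized as follows (with $L_0$ and $L_0^{\mp}$ denoting, purely for illustration, the whole-line and half-line minimal operators): each half-line operator has deficiency index $0$ at its singular endpoint in the limit-point case and $1$ in the limit-circle case, and the deficiency numbers add,
\[
\operatorname{def} L_0 \;=\; \operatorname{def} L_0^{-} + \operatorname{def} L_0^{+} \;\in\; \{0, 1, 2\}.
\]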

Assume that the symmetric operator has defect index $(2,2)$. There are several sufficient conditions that guarantee the Weyl-Hamburger limit-circle cases at $\pm\infty$ (i.e., that the operator has defect index $(2,2)$); see [10–17]. The domain of consists of precisely those vectors satisfying the condition

Denote by and the solutions of (2.1) satisfying the initial conditions:

The Wronskian of the two solutions and of (2.1) is defined as , so that . The Wronskian of two solutions of (2.1) is independent of , and two solutions of this equation are linearly independent if and only if their Wronskian is nonzero. It follows from the conditions (2.7) and the constancy of the Wronskian that . Consequently, and form a fundamental system of solutions of (2.1), and , for all . For the general theory of difference equations, see [18, 19].
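
In the illustrative notation used above, the Wronskian of two solutions $y$, $z$ of (2.1) is
\[
W_n(y,z) := a_n \big( y_{n+1} z_n - y_n z_{n+1} \big),
\]
and a direct computation with the recurrence shows that $W_n(y,z)$ does not depend on $n$; the two solutions are linearly independent exactly when this constant is nonzero. In particular, the fundamental pair fixed by the initial conditions (2.7) has a nonzero constant Wronskian.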

Let and . Since the vectors and are real valued and , the following assertion can be verified easily using (2.3).

Lemma 2.1. For arbitrary vectors and , one has the equality:

The domain of the operator consists of precisely those vectors satisfying the boundary conditions:

Let us consider the following linear maps from into Then we have the following result (see [8]).

Theorem 2.2. For any contraction in the restriction of the operator to the set of vectors satisfying the boundary conditions or is, respectively, a maximal dissipative or an accretive extension of the operator . Conversely, every maximal dissipative (accretive) extension of is the restriction of to the set of vectors satisfying (2.11) (respectively, (2.12)), and the contraction is uniquely determined by the extension. These conditions give a self-adjoint extension if and only if is unitary. In the latter case (2.11) and (2.12) are equivalent to the condition , where is a self-adjoint (Hermitian matrix) operator in . The general form of dissipative and accretive extensions of the operator is given by the conditions , respectively, where is a linear operator in with . The general form of symmetric extensions is given by the formulae (2.13), where is an isometric operator.
In particular, if is a diagonal matrix, the boundary conditions with or , and or or , and or describe all the maximal dissipative (maximal accretive) extensions of with separated boundary conditions. The self-adjoint extensions of are obtained precisely when or , and or . Here, for , condition (2.14) (respectively, (2.15)) should be replaced by .
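
As an illustration of how such separated conditions typically look (the parameters $h_1$, $h_2$ and the boundary brackets $[\,\cdot\,,\cdot\,]_{\mp\infty}$, formed with two fixed real vectors denoted here $u$, $v$, are schematic rather than the paper's fixed notation), one may write
\[
[y,u]_{-\infty} - h_1 [y,v]_{-\infty} = 0, \qquad [y,u]_{\infty} - h_2 [y,v]_{\infty} = 0,
\]
where each $h_j$ is either a complex number or $\infty$ (in which case the condition is read as $[y,v]_{\mp\infty} = 0$). The extension is maximal dissipative when $\operatorname{Im} h_j$ has the sign forced by dissipativity at the corresponding endpoint, and it is self-adjoint exactly when both $h_1$ and $h_2$ are real or $\infty$.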

In what follows, we will study the dissipative operators generated by and the boundary conditions (2.14) and (2.15) of two types: “dissipative at $-\infty$,” that is, when either and or ; “dissipative at $\infty$,” when or and .

3. Self-Adjoint Dilations of the Maximal Dissipative Operators

In order to construct a self-adjoint dilation of the maximal dissipative operator in the case “dissipative at $-\infty$” (i.e., and or ), we associate with it the “incoming” and “outgoing” channels and , and we form the orthogonal sum , which we call the main Hilbert space of the dilation. In the space , we consider the operator generated by the expression on the set of vectors satisfying the conditions , , and , where denotes the Sobolev space and , .
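
A sketch of the typical (Pavlov-type) dilation in this setting, assuming $\operatorname{Im} h_1 > 0$ at the dissipative endpoint and with all symbols purely illustrative: the main space and the action of the dilation are
\[
\mathcal{H} = L^2(\mathbb{R}_-) \oplus \ell_w^2(\mathbb{Z}) \oplus L^2(\mathbb{R}_+), \qquad
\mathcal{A}\langle \varphi_-, y, \varphi_+\rangle = \Big\langle i\,\frac{d\varphi_-}{d\xi},\; \ell(y),\; i\,\frac{d\varphi_+}{d\zeta}\Big\rangle,
\]
with $\varphi_\mp \in W_2^1(\mathbb{R}_\mp)$, the self-adjoint boundary condition kept at the other endpoint, and boundary conditions of the schematic form
\[
[y,u]_{-\infty} - h_1 [y,v]_{-\infty} = \alpha\,\varphi_-(0), \qquad [y,u]_{-\infty} - \overline{h_1}\,[y,v]_{-\infty} = \alpha\,\varphi_+(0), \qquad \alpha := \sqrt{2\,\operatorname{Im} h_1},
\]
which couple the channel traces $\varphi_\mp(0)$ to the boundary values of $y$ at the dissipative endpoint.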

Theorem 3.1. The operator is self-adjoint in and it is a self-adjoint dilation of the maximal dissipative operator .

Proof. We assume that with and . Then using integration by parts and (3.1), we obtain If we use the boundary conditions (3.2) for the components of the vectors and Lemma 2.1, we see that . Thus, is symmetric. Therefore, to prove that is self-adjoint, it is sufficient to show that . Let us take and let so that If we choose the components of properly in (3.4), it becomes easy to show that , , , and , where the operator is given by (3.1). As a result, (3.4) takes the form , for all . Hence, in the bilinear form , the sum of the integral terms must be equal to zero: for all . In addition, if we solve the boundary conditions (3.2) for and , we get It follows from Lemma 2.1 and (3.6) that (3.5) is equivalent to the following equality: Note that the values can be any complex numbers. Therefore, when we compare the coefficients of on the left and right of the last equality we see that the vector satisfies the boundary conditions , , . Consequently, we obtain , and hence .
The self-adjoint operator generates in a unitary group , . Let and denote the mappings acting according to the formulas and . Let . The family , , of operators is a strongly continuous semigroup of completely nonunitary contractions on . (We recall that a bounded linear operator acting in a Hilbert space is called completely nonunitary if it has no nontrivial invariant subspace on which its restriction is unitary.) Let us denote by the generator of this semigroup: . The domain of consists of all the vectors for which the limit exists. is a maximal dissipative operator. The operator is called the self-adjoint dilation of [1–4]. We show that , and thus is a self-adjoint dilation of . To do this, we first verify the equality [1–4]: Denote . Then , and hence , and . Since , and hence ; it follows that , and, consequently, satisfies the boundary conditions , . Therefore, since a point $\lambda$ with $\operatorname{Im}\lambda < 0$ cannot be an eigenvalue of a dissipative operator, it follows that . Note that is obtained from the formula . Then for and . By applying , one can obtain (3.8).
Now, it is not difficult to show that . In fact, it follows from (3.8) that , and thus . Theorem 3.1 is proved.
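
For reference, the dilation property verified above is, in standard form, the resolvent identity (symbols illustrative: $A_h$ is the maximal dissipative operator, $\mathcal{A}$ its dilation, $P_H$ the orthogonal projection of $\mathcal{H}$ onto $H$)
\[
P_H \big( \mathcal{A} - \lambda I \big)^{-1}\Big|_{H} \;=\; \big( A_h - \lambda I \big)^{-1}, \qquad \operatorname{Im}\lambda < 0,
\]
which, by the uniqueness of the Laplace transform, is equivalent to $P_H\, e^{i\mathcal{A}t}\big|_H = e^{iA_h t}$ for $t \ge 0$.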

In order to construct a self-adjoint dilation of the maximal dissipative operator in the case “dissipative at $\infty$” (i.e., or and ) in , we consider the operator generated by the expression (3.1) on the set of vectors satisfying the conditions , , and , where .

The proof of the next theorem is similar to that of Theorem 3.1.

Theorem 3.2. The operator is self-adjoint in and it is a self-adjoint dilation of the maximal dissipative operator .

4. Scattering Theory of the Dilations and Functional Models of the Maximal Dissipative Operators

The unitary group () has a crucial property which enables us to apply the Lax-Phillips scheme [5]: it has incoming and outgoing subspaces and satisfying the following properties (recalled in standard form below): (1) , and , ; (2) ; (3) ; (4) .
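
In standard form (with $D_- = L^2(\mathbb{R}_-)$ and $D_+ = L^2(\mathbb{R}_+)$ regarded as subspaces of the main space $\mathcal{H}$, and the bar denoting closure; notation illustrative), these requirements read:
\[
\begin{aligned}
&(1)\ \ U_t D_- \subset D_- \ (t \le 0), \qquad U_t D_+ \subset D_+ \ (t \ge 0);\\
&(2)\ \ \bigcap_{t \le 0} U_t D_- = \bigcap_{t \ge 0} U_t D_+ = \{0\};\\
&(3)\ \ \overline{\bigcup_{t \ge 0} U_t D_-} = \overline{\bigcup_{t \le 0} U_t D_+} = \mathcal{H};\\
&(4)\ \ D_- \perp D_+ .
\end{aligned}
\]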

Property (4) is obvious. To verify property (1) for (the proof for is similar), we set , for all with . Then, for any , we have Hence, we find . Therefore, if , then it follows that

From this, we conclude that for all . Hence , for , which completes the proof of property (1).

To prove property (2), we denote by and the mappings acting according to the formulas and , respectively. Note that the semigroup of isometries , is a one-sided shift in . Indeed, the generator of the semigroup of the one-sided shift in is the differential operator satisfying the boundary condition . On the other hand, the generator of the semigroup of isometries , , is the operator , where and . As a semigroup is uniquely determined by its generator, it follows that , and thus, , which verifies the property (2).

In the Lax-Phillips scheme, the scattering matrix is defined in terms of the theory of spectral representations. We now construct these representations and, along the way, prove property (3) of the incoming and outgoing subspaces.

We recall that a linear operator (with domain ) acting in the Hilbert space is called completely nonself-adjoint (or simple) if it has no nontrivial invariant subspace on which its restriction is self-adjoint.

Lemma 4.1. The operator   is completely nonself-adjoint (simple).

Proof. Let be a nontrivial subspace in which (the proof for is similar) induces a self-adjoint operator with domain . If , then we get and , . It follows that , and that the eigenvectors of the operator lying in are also eigenvectors of . Since all solutions of (2.1) belong to , we conclude that the resolvent of the operator is a Hilbert-Schmidt operator, and hence the spectrum of is purely discrete. Using the theorem on expansion in eigenvectors of the self-adjoint operator , we see that , that is, the operator is simple. The lemma is proved.

To prove property (3) we first set and prove the following lemma.

Lemma 4.2. The equality holds.

Proof. Using property (1) of the subspace , we can easily show that the subspace is invariant with respect to the group and has the form , where is a subspace in . Accordingly, if the subspace (and thus, as well) were nontrivial, then the unitary group , restricted to this subspace, would be a unitary part of the group , and thus the restriction of to would be a self-adjoint operator in . It follows from the simplicity of the operator that , that is, . The proof is completed.

Let and be the solutions of (2.1) satisfying the conditions:

The Titchmarsh-Weyl function of the self-adjoint operator is determined by the condition . Then, we have The last equality implies that is a meromorphic function on the complex plane with a countable number of poles on the real axis, which coincide with the eigenvalues of the self-adjoint operator . One can also show that the function has the following properties: for and for complex with the exception of the real poles of .
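
For orientation, in one common (Herglotz) normalization, with $m_\infty$ used here only as an illustrative symbol (the opposite sign convention also occurs in the literature), a Titchmarsh-Weyl function of a self-adjoint operator is a Nevanlinna function:
\[
m_\infty(\bar\lambda) = \overline{m_\infty(\lambda)}, \qquad \operatorname{Im}\lambda \cdot \operatorname{Im} m_\infty(\lambda) > 0 \quad (\operatorname{Im}\lambda \neq 0);
\]
it is meromorphic in $\mathbb{C}$ with all poles real (and simple), and these poles are the eigenvalues of the self-adjoint operator.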

We adopt the following notations: ,

Let For real values of , the vectors do not belong to the space , but they satisfy the equation and the boundary conditions (3.2). Using , we define the transformation by on the vector , where , are smooth, compactly supported functions, and , is a finite sequence.

Lemma 4.3. The transformation isometrically maps onto . For all vectors , the Parseval equality and the inversion formula hold: where and .

Proof. For , , , we have and, by the usual Parseval equality for Fourier integrals, From now on, let denote the Hardy classes in consisting of the functions which are analytically extendable to the upper and lower half-planes, respectively.
Let us extend the Parseval equality to the whole . To this end, we consider in the dense set of vectors obtained from the smooth, compactly supported functions in if , , , where is a nonnegative number (depending on ). In this case, if , then for and . Furthermore, the first components of these vectors belong to . Since the operators , are unitary, the equality gives us that If we take the closure in (4.11), we get the Parseval equality for the whole space . The inversion formula follows from the Parseval equality if all integrals in it are considered as limits in the mean of integrals over finite intervals. In conclusion, we have which implies that maps onto the whole of . The lemma is proved.

Now, we let Note as in the previous case that the vectors , for real values of , do not belong to the space . But, satisfies the equation , , and the boundary conditions (3.2). By means of , we consider the transformation by setting on vectors , where , are smooth, compactly supported functions, and , , is a finite sequence. The proof of the next result is similar to that of Lemma 4.3.

Lemma 4.4. The transformation isometrically maps onto , and for all vectors , the Parseval equality and the inversion formula hold: where and .

From (4.6), we see that for all . Therefore, it follows from the explicit formula for the vectors and that Lemmas 4.3 and 4.4 imply that . Together with Lemma 4.2, this results in and the property (3) of the incoming and outgoing subspaces for .

Therefore, the transformation maps isometrically onto , with the subspace mapped onto and the operators transformed into the operators of multiplication by ; that is, is the incoming spectral representation for the group . Similarly, is the outgoing spectral representation for . It is seen from (4.14) that the passage from the -representation of a vector to its -representation is realized by multiplying by the function . According to [5], the scattering function (matrix) of the group with respect to the subspaces and is the coefficient by which the -representation of a vector must be multiplied in order to obtain the corresponding -representation: and thus we have proved the following theorem.

Theorem 4.5. The function is the scattering matrix of the group (of the self-adjoint operator ).

Let be an arbitrary nonconstant inner function [1–4] on the upper half-plane (an analytic function on the upper half-plane is called an inner function on if for and for almost all ). Let . We can see that is a subspace of the Hilbert space . Now, let us consider the semigroup of operators , , acting in according to the formula , , where denotes the orthogonal projection from onto . The generator of this semigroup is denoted by ; it is a maximal dissipative operator acting in , with domain consisting of all vectors for which the limit exists. The operator is called a model dissipative operator (note that this model dissipative operator, which is associated with the names of Lax and Phillips [5], is a special case of the more general model dissipative operator constructed by Sz.-Nagy and Foiaş [1, 2]). The basic assertion is that is the characteristic function of the operator .
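
Concretely, the Lax-Phillips model associated with an inner function $\Theta$ on the upper half-plane can be sketched as follows (all symbols illustrative):
\[
K_\Theta := H_+^2 \ominus \Theta H_+^2, \qquad Z_t \varphi := P_{K_\Theta}\big( e^{i\lambda t}\,\varphi \big) \quad (t \ge 0), \qquad B\varphi := \lim_{t \to +0} \frac{Z_t\varphi - \varphi}{it},
\]
where $P_{K_\Theta}$ is the orthogonal projection of $H_+^2$ onto $K_\Theta$; the generator $B$ is a maximal dissipative operator whose characteristic function coincides with $\Theta$.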

Let so that . It can be concluded from the explicit form of the unitary transformation that The formulas (4.15) show that the operator is unitarily equivalent to the model dissipative operator with the characteristic function . Since the characteristic functions of unitarily equivalent dissipative operators coincide [1–4], we have proved the theorem below.

Theorem 4.6. The characteristic function of the maximal dissipative operator coincides with the function defined in (4.6).

If is the Titchmarsh-Weyl function of the self-adjoint operator , then it can be expressed in terms of the Wronskian of the solutions as follows: Here and are solutions of (2.1), normalized by Let us adopt the following notations:

Let One can see that the vector does not belong to for , but satisfies the equation , , and the boundary conditions (3.11). By means of , we define the transformation by on the vector , where , are smooth, compactly supported functions, and , is a finite sequence. The next result can be proved following the steps similar to the proof of Lemma 4.3.

Lemma 4.7. The transformation isometrically maps onto . For all vectors , the Parseval equality and the inversion formula hold: where and .

Let The vector does not belong to for . However, satisfies the equation , , and the boundary conditions (3.11). Using , let us consider the transformation on vectors , in which are smooth, compactly supported functions, and , is a finite sequence, by setting .

Lemma 4.8. The transformation isometrically maps onto . For all vectors , the Parseval equality and the inversion formula hold: where and .

It is seen from (4.18) that the function satisfies for . Therefore, the explicit formula for the vectors and gives us that Hence, we conclude the equality from Lemmas 4.7 and 4.8. Together with Lemma 4.2, we get . We can see from (4.23) that the passage from the -representation of a vector to its -representation is realized as follows: . Thus, we have proved the following assertion.

Theorem 4.9. The function is the scattering matrix of the group (of the self-adjoint operator ).

Using the explicit form of the unitary transformation , we obtain We conclude from (4.24) that the operator is unitarily equivalent to the model dissipative operator with characteristic function , which in turn proves the next theorem.

Theorem 4.10. The characteristic function of the maximal dissipative operator coincides with the function defined by (4.18).

5. Completeness Theorems for the System of Eigenvectors and Associated Vectors of the Maximal Dissipative Operators

We know that the characteristic function of a maximal dissipative operator carries complete information about the spectral properties of this operator [1–4]. For example, completeness of the system of eigenvectors and associated vectors of a maximal dissipative operator is guaranteed by the absence of a singular factor of the characteristic function in its canonical factorization (where is a Blaschke product).
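
For reference, the canonical factorization of an inner function $\Theta$ in the upper half-plane has the form (all symbols illustrative)
\[
\Theta(\lambda) = c\, B(\lambda)\, \exp\!\Big( i a \lambda + i\!\int_{-\infty}^{\infty} \frac{1 + t\lambda}{t - \lambda}\, d\mu(t) \Big), \qquad |c| = 1,\ a \ge 0,
\]
where $B$ is a Blaschke product over the zeros of $\Theta$ in the upper half-plane and $\mu$ is a finite nonnegative singular measure; completeness of the root vectors corresponds to the absence of the singular factor, that is, to $a = 0$ and $\mu = 0$.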

Let be a linear operator in the Hilbert space with the domain . The complex number is called an eigenvalue of the operator if there exists a nonzero element satisfying . Such an element is called an eigenvector of the operator corresponding to the eigenvalue . The elements are called the associated vectors of the eigenvector if they belong to and satisfy , . The element is called a root vector of the operator corresponding to the eigenvalue if all powers of are defined on this element and for some integer . The set of all root vectors of corresponding to the same eigenvalue , together with the zero vector, forms a linear set and is called the root lineal. The dimension of this lineal is called the algebraic multiplicity of the eigenvalue . The root lineal coincides with the linear span of all eigenvectors and associated vectors of corresponding to the eigenvalue . Therefore, the completeness of the system of all eigenvectors and associated vectors of is equivalent to the completeness of the system of all root vectors of this operator.
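
In symbols (with $A$ the operator, $\lambda_0$ the eigenvalue, and $y_0, y_1, \dots, y_s$ a chain, all notation illustrative), the eigenvector and its associated vectors satisfy
\[
(A - \lambda_0 I) y_0 = 0, \qquad (A - \lambda_0 I) y_k = y_{k-1}, \quad k = 1, \dots, s,
\]
so that $(A - \lambda_0 I)^{k+1} y_k = 0$; the root lineal is $\bigcup_{k \ge 1} \ker (A - \lambda_0 I)^k$, and its dimension is the algebraic multiplicity of $\lambda_0$.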

Theorem 5.1. For all values of with , except possibly for a single value , and for fixed or , the characteristic function of the maximal dissipative operator is a Blaschke product, and the spectrum of is purely discrete and belongs to the open upper half-plane. The operator has a countable number of isolated eigenvalues of finite algebraic multiplicity, which can accumulate only at infinity, and the system of all eigenvectors and associated vectors (or root vectors) of this operator is complete in the space .

Proof. It can be easily seen from (4.6) that is an inner function in the upper half-plane and, moreover, it is meromorphic in the whole -plane. Then, it can be factorized as where is a Blaschke product. It can be inferred from (5.1) that Further, if we express in terms of and then use (4.6), we find
If for a given value (), then follows from (5.2). Hence, we obtain in light of (5.3). Since is independent of , can be nonzero at no more than a single point (and, further, ). This proves the theorem.

The proof of the next result is similar to that of Theorem 5.1.

Theorem 5.2. For all values of with , except possibly for a single value , and for fixed or , the characteristic function of the maximal dissipative operator is a Blaschke product, and the spectrum of is purely discrete and belongs to the open upper half-plane. The operator has a countable number of isolated eigenvalues of finite algebraic multiplicity, which can accumulate only at infinity, and the system of all eigenvectors and associated vectors of this operator is complete in the space .

Since a linear operator acting in a Hilbert space is maximal accretive if and only if is maximal dissipative, all results obtained for maximal dissipative operators can be immediately transferred to maximal accretive operators.