#### Abstract

A linear operator on a Hilbert space may be approximated by finite matrices by choosing an orthonormal basis of the Hilbert space. In this paper, we establish an approximation of the q-numerical range of bounded and unbounded operator matrices by variational methods. Applications to the Schrödinger operator, the Stokes operator, and the Hain-Lüst operator are given.

#### 1. Introduction and Definitions

The simplest concept that can be used to obtain an enclosure of the spectrum of a linear operator in a Hilbert space is the numerical range, where denotes the domain. It is not difficult to see that the point spectrum of is contained in and that the approximate point spectrum of is contained in the closure of ; the inclusion holds if is closed. Many estimates of eigenvalues of differential operators, for instance, involve calculating estimates of the inner products , using integration by parts. The numerical range is always convex [1]. However, it often gives a poor localization of the spectrum and cannot reveal the existence of spectral gaps. In [2], the numerical range of a (finite) matrix was approximated by projection methods. This concept was generalized to the *q-numerical range* in [3], as follows: where . It is easy to see that if , then coincides with the numerical range of . For a closed operator , the closure of the *q-numerical range* contains the eigenvalues of scaled by , and , for every . Moreover, it is known that is a compact convex subset of [4]. A review of the properties of the *q-numerical range* of operator matrices may be found in [3, 5, 6]. The main purpose of this work is the approximation of the *q-numerical range* of bounded and unbounded operator matrices; it contains both theoretical results and applications to some self-adjoint and non-self-adjoint operator matrices. The paper is organized as follows. In Section 2, we establish an approximation of the *q-numerical range* of bounded and unbounded operator matrices. In Section 3, we apply these results to compute the *q-numerical range* of differential operators and derive some new analytic bounds.

#### 2. Convergence Theorems

In this section, we use finite matrices to approximate the q-numerical range of linear operators. The idea of approximating linear operators by finite matrices is a natural one that recurs again and again. Suppose that one wishes to compute the *q-numerical range* of by the following projection method. Let be a nested family of subspaces of given by , where is an orthonormal basis of the Hilbert space whose elements lie in , and suppose that the corresponding orthogonal projections converge strongly to the identity operator . We identify with its matrix representation with respect to :

Then, the compression of to is denoted by , where
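The projection method above is straightforward to experiment with numerically. The paper's computations were performed in MATLAB; the sketch below is a Python/NumPy stand-in, and it assumes the standard definition of the q-numerical range, W_q(A) = {⟨Ax, y⟩ : ‖x‖ = ‖y‖ = 1, ⟨x, y⟩ = q}, with the inner product linear in the first argument. For real 0 ≤ q ≤ 1, every admissible pair can be parameterized as y = q·x + √(1 − q²)·z with z a unit vector orthogonal to x, which gives a simple sampler for points of W_q of a finite compression:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_numerical_range_samples(A, q, n_samples=20000):
    """Sample points of W_q(A) = { <A x, y> : ||x|| = ||y|| = 1, <x, y> = q }
    for a finite matrix A and real 0 <= q <= 1."""
    n = A.shape[0]
    pts = np.empty(n_samples, dtype=complex)
    for i in range(n_samples):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        x /= np.linalg.norm(x)
        z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        z -= np.vdot(x, z) * x          # project out the x-component
        z /= np.linalg.norm(z)
        y = q * x + np.sqrt(1 - q**2) * z   # unit vector with <x, y> = q
        pts[i] = np.vdot(y, A @ x)      # <A x, y>; np.vdot conjugates y
    return pts
```

Applying this to the compressions of Equation (4) for increasing dimension gives the pictures discussed in Section 3; note the sampler only produces interior points, not the boundary curve.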

Theorem 1. *Let be a bounded operator in . Let be a nested family of spaces in given by , where is orthonormal, and let be as in Equation (4). Then , for .*

*Proof. *Define an isometry by and . Suppose that . Then for some with such that . Choose such that , and . Then a direct computation shows that , where and . Thus,

The next inclusion, which will be used in the proof of Theorem 3, asserts that forms an increasing sequence of sets.

Lemma 2. *Let and be as in Theorem 1. Given , then , for .*

*Proof. *This is an immediate consequence of the fact that is a subspace of . In detail, suppose is in ; then there exists , , with such that . Choose by setting , . A simple calculation shows that and , and so is in .

Theorem 3. *Let and be as in Theorem 1, and let be the orthogonal projection. If converge strongly to the identity operator , then .*

*Proof. *In view of Theorem 1, it is sufficient to prove . Suppose . Choose such that and , such that . We know and as , and thus as and as .

Fix . Let be the standard isometries as in the proof of Theorem 1. Define by . Consider the matrix , where the -element of the matrix is equal to , for . A simple calculation shows that , . Since as and as , we have as . Hence, there exists such that . In view of Lemma 2, this immediately gives .
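The padding construction used in Lemma 2 (a point of the compression's q-numerical range is realized again one dimension up by extending the vectors by zero) can be checked directly in a finite-dimensional stand-in. The sketch below is a hypothetical Python/NumPy illustration, with A_n taken as the leading n × n submatrix of a fixed matrix and W_q written as {⟨Ax, y⟩ : ‖x‖ = ‖y‖ = 1, ⟨x, y⟩ = q}:

```python
import numpy as np

def padding_check(A, q, n, seed=1):
    """Realise a point of W_q(A_n) inside W_q(A_{n+1}) by zero-padding,
    mirroring the inclusion argument of Lemma 2 (A_n = leading n x n block)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    z -= np.vdot(x, z) * x                     # make z orthogonal to x
    z /= np.linalg.norm(z)
    y = q * x + np.sqrt(1 - q**2) * z          # unit vector with <x, y> = q
    w = np.vdot(y, A[:n, :n] @ x)              # a point of W_q(A_n)
    x1, y1 = np.append(x, 0), np.append(y, 0)  # zero-pad into the next space
    w1 = np.vdot(y1, A[:n + 1, :n + 1] @ x1)   # the same point in W_q(A_{n+1})
    return w, w1
```

Since the leading block of A_{n+1} is A_n and the padded coordinates contribute nothing, the two values agree, which is exactly the monotonicity of the sets asserted in the lemma.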

*Remark 4. *The hypotheses that converge strongly to the identity operator are (in general) necessary. It is easy to construct an example where is a strict subset of .

*Example 1. *Let be an operator matrix in , where is a nested family of subspaces of with , where is the standard basis vector, and , where is any orthonormal sequence. Then, performing an analysis analogous to that of Theorem 3, we see that is not convergent to unless is orthogonal to .

*Remark 5. *We assume readers are familiar with basic notions and results about unbounded linear operators, as well as matrices of not necessarily bounded operators. Useful references are [7–9]. We recall a few definitions, though: a linear operator with domain contained in a Hilbert space is said to be densely defined if . A linear operator is closed if its graph is closed in . A linear operator is called closable if the closure of its graph is the graph of some operator. A subspace is called a core of a closable operator if is closable with closure . The definition of the q-numerical range for bounded linear operators in Equation (2) generalizes as follows to unbounded operator matrices with dense domain .

*Definition 6. *For a linear operator with domain , we define the q-numerical range of for by

Theorem 7. *Let be an unbounded operator in . Let be a nested family of spaces in given by , where is orthonormal, and let be as in Equation (4). Then , for .*

*Proof. *Define an isometry by and . Suppose that . Then for some with such that . Choose such that , and . Then a direct computation shows that , where and . Thus,

The following lemma can be proved in a similar fashion to Lemma 2.

Lemma 8. *Let and be as in Theorem 7. Given , then , for .*

The following result shows that the closure of the q-numerical range is approximated by , under the assumption that the linear span of is a core of .

Theorem 9. *Let be an unbounded operator in . Let be a nested family of spaces in given by , where is orthonormal, and let be as in Equation (4). Then , for .*

*Proof. *Since is a core of , there exists a sequence , with each for some , such that and . In a similar way, we may also find a sequence , with each for some , such that and ; this means that as and as . Fix . Let be the standard isometries as in the proof of Theorem 7. Define by . Consider the matrix , where the -element of the matrix is equal to , for . A simple calculation shows that and . Since as and as , this implies that as . Hence, there exists such that . In view of Lemma 8, this immediately gives .

#### 3. Numerical Experiments on Differential Operator

In this section, we study some concrete examples and demonstrate that, in spite of the results obtained in the previous section, practical computation of the *q-numerical range* of differential operators is far from straightforward. We define the inner product to be linear in the first argument and conjugate-linear in the second, and we consider the space of square-integrable functions, where is an interval in ; it is a Hilbert space with inner product

The computations were performed in MATLAB.

##### 3.1. Application to Schrödinger Operator

In the Hilbert space , we introduce the Schrödinger operator (with bounded potential ), where the domain of is given by

*Remark 10. *
(i) Because is self-adjoint and bounded below with purely discrete spectrum, the eigenvalues of are given by where is the Rayleigh functional.
(ii) It is clear that the operator in has a sequence of eigenvalues , and the normalized eigenfunctions of the operator in are under the setting .
(iii) Because the operator in Equation (7) is closed, it is not difficult to see that the subspace is a core of ; hence, the main Theorem 9 is applicable to this example.
(iv) We may use these eigenfunctions in Equation (12) as basis elements for a discretization of the type discussed in Section 2: form the matrix elements using the inner product in Equation (6) with respect to the orthonormal basis in Equation (12), and consider the (infinite) operator matrix

The matrix in Equation (4) is obtained by taking the leading submatrices of the (infinite) operator matrix in Equation (13), with appropriate dimensions. Observe that , which can be evaluated explicitly. Figure 1 shows attempts to calculate for various and , together with some attempts to estimate these sets by qualitative means, using existing theorems from the literature as well as the theorems proved above.
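A discretization of this type is easy to assemble once the free eigenfunctions are known. The sketch below is a Python/NumPy stand-in for the paper's MATLAB computation, under an illustrative assumption (the paper's exact interval and potential are not reproduced here): the operator is −d²/dx² + V(x) on (0, π) with Dirichlet boundary conditions, free eigenfunctions φ_k(x) = √(2/π) sin(kx) with eigenvalues k², and a sample bounded potential V(x) = cos x. The kinetic part is diagonal in this basis; the potential part is computed by quadrature:

```python
import numpy as np

# Illustrative bounded potential (an assumption for this sketch).
V = lambda x: np.cos(x)

def matrix_element(j, k, n_quad=4000):
    """a_{jk} = <H phi_k, phi_j> for H = -d^2/dx^2 + V on (0, pi):
    the kinetic part is diagonal (k^2); the potential part is integrated
    on a uniform grid (the integrand vanishes at both endpoints)."""
    x = np.linspace(0.0, np.pi, n_quad)
    dx = x[1] - x[0]
    phi = lambda m: np.sqrt(2.0 / np.pi) * np.sin(m * x)
    kinetic = float(k**2) if j == k else 0.0
    potential = np.sum(V(x) * phi(k) * phi(j)) * dx
    return kinetic + potential

def compression(n):
    """Leading n x n submatrix A_n of the infinite matrix (a_{jk})_{j,k>=1}."""
    return np.array([[matrix_element(j, k) for k in range(1, n + 1)]
                     for j in range(1, n + 1)])
```

For V = cos the potential contributes nothing on the diagonal (the relevant trigonometric integrals vanish), so the diagonal of the compression is 1, 4, 9, …, while nearest-neighbour couplings such as a₁₂ = 1/2 appear off the diagonal.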

##### 3.2. Analytical Estimates for Schrödinger Operator

In order to understand to what extent Figure 1 is qualitatively correct, we now analyze the q-numerical range of the Schrödinger operator. Comments:
(i) By Remark 10 part (i), it is obvious that ; thus, the numerical range of is .
(ii) For , the q-numerical range of contains the range for any integer . Let . Then, the range contains the convex polygon with vertices

For sufficiently large , such a convex polygon contains any given point of the Gaussian plane. Thus, this differential operator satisfies .
(iii) If we restrict our attention to the variational approximation of the Schrödinger operator, the situation is different. The approximation in Theorem 9 is given by a diagonal matrix with real eigenvalues , with . The q-numerical range of for is given by . Then, the range is the union of closed circular discs
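One ingredient of the picture for the diagonal approximation can be verified directly: as noted in the introduction, the closure of the q-numerical range contains the eigenvalues scaled by q. For a diagonal matrix this is an explicit computation, choosing x = e_j and y = q·e_j + √(1 − q²)·z with z a unit vector orthogonal to e_j. The sketch below uses the illustrative eigenvalues λ_k = k² (an assumption matching the free Schrödinger case above):

```python
import numpy as np

def scaled_eigenvalue_points(lam, q):
    """For the diagonal approximation A_n = diag(lam), exhibit the points
    q*lam[j] of W_q(A_n): take x = e_j and y = q e_j + sqrt(1-q^2) z with
    z a unit vector orthogonal to e_j, so that <x, y> = q and
    <A x, y> = q * lam[j]."""
    A = np.diag(lam)
    n = len(lam)
    pts = []
    for j in range(n):
        x = np.zeros(n); x[j] = 1.0
        z = np.zeros(n); z[(j + 1) % n] = 1.0   # unit vector, z orthogonal to x
        y = q * x + np.sqrt(1 - q**2) * z
        assert np.isclose(np.vdot(y, x), q)     # admissible pair
        pts.append(np.vdot(y, A @ x))
    return np.array(pts)
```

These q-scaled eigenvalues are the disc centres suggested by the figure; the discs themselves come from letting x spread over several coordinates.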

##### 3.3. Application to Block Differential Operators

In this subsection, we apply Theorem 9 to compute the -numerical range of Stokes-type operators and Hain-Lüst-type operators. First, we study Stokes-type operators.

###### 3.3.1. Application to Stokes-Type Operators

Consider the differential expression . In the Hilbert space , we introduce the matrix differential operator on the domain

The operator matrix is not closed but closable [2]. In order to show that its closure is self-adjoint, we need the following:

*Remark 11. *(i) Consider the operator given by such that

Lemma 12. *The operator in Equation (21) coincides with the adjoint of the block operator matrix in Equation (18).*

*Proof. *Suppose that with . Then or, equivalently, for all with compact support contained in . In particular, when , Equation (25) becomes . The first part of the left-hand side of Equation (26) is a bounded linear functional of , which means that ; likewise, the second part of the left-hand side of Equation (26) is a bounded linear functional of , which means that . Returning to Equation (26) and integrating by parts, we obtain . Because is dense in , this means that . By the same argument, if we set , we find that . Returning to Equation (28) and integrating by parts, we obtain . The first part of the left-hand side of Equation (29) is bounded only when , and the second part is a bounded linear functional of , which means that . This implies that , and because is dense in , we conclude that on . It follows that

*Remark 13. *
(i) By the same argument as in Lemma 12, it is not difficult to see that the operator in Equation (18) is symmetric in with the domain
(ii) Because is a symmetric operator and has nonempty resolvent set, is self-adjoint. Thus, by [10, Theorem 5.4],

The following result shows that the eigenvalues of the Stokes operator coincide with the eigenvalues of the operator .

Proposition 14. *Let be as in Equation (18); then*

*Proof. *Let ; then there exists an such that . We see that Equation (32) is equivalent to the following system of equations: where and . Because , Equation (33) gives , and Equation (34) can be written as ; this implies that for . Since is a linear space, , and hence ; it follows that , so is an eigenvalue of .

Conversely, if , then there exists an such that . This means that is an eigenfunction of , since

*Remark 15. *
(i) It may be shown that has two series of eigenvalues given by . The eigenvalues of the minus series are located in the interval (-5/2, -3/2] and converge to -5/2 as . The eigenvalues of the plus series are located in the interval and converge to as . The essential spectrum of consists of the accumulation points of the eigenvalues: .
(ii) It is not difficult to see that the subspace is a core of .
(iii) Now form the matrix elements , , , using the inner product in Equation (6) with respect to the orthonormal basis in Equation (12), and consider the (infinite) block operator matrix . The matrix defined in Equation (4) is obtained by taking the leading submatrices of the block , with appropriate dimensions. Observe that and , which can be evaluated explicitly.
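For block operator matrices, the truncation in Equation (4) is applied entrywise: each of the four entry matrices is cut down to its leading n × n block before the 2n × 2n compression is assembled. The sketch below illustrates this in Python/NumPy with hypothetical stand-ins for the four (infinite) entry matrices (a diagonal elliptic part and a symmetric coupling — assumptions, since the actual Stokes blocks are not reproduced here):

```python
import numpy as np

def block_compression(A11, A12, A21, A22, n):
    """Leading 2n x 2n compression of a block operator matrix: each entry
    matrix is truncated to its leading n x n submatrix before assembling,
    matching the construction of the matrix in Equation (4)."""
    return np.block([[A11[:n, :n], A12[:n, :n]],
                     [A21[:n, :n], A22[:n, :n]]])
```

Because truncation acts block by block, structural properties such as symmetry of the full block matrix are inherited by every compression.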

Figure 2 shows attempts to calculate for various and , together with some attempts to estimate these sets by qualitative means, using existing theorems from the literature as well as the theorems proved above.

##### 3.4. Analytical Estimates for Stokes Operator

In order to understand the results in Figure 2, it is useful to find an analytical estimate for . Let , where

with and let

where . As an estimate for the first term on the right-hand side of Equation (42), we obtain

For the second and third terms on the right-hand side of Equation (42), the Cauchy-Schwarz inequality and Young's inequality yield . Combining Equations (44) and (45), we get that

The fourth term on the right-hand side of Equation (42) satisfies

Hence, from Equations (43), (46), and (47), we get that

This simplifies to

This yields

For our example, these yield

To estimate , observe that

This completes the estimates on .

##### 3.5. The Hain-Lüst Operator

This operator was introduced by Hain and Lüst in connection with problems of magnetohydrodynamics [11], and problems of this type were studied in [7, 8, 12]. Assume that , , and are such that , , , for each . We introduce the differential expression

Let be the operators in the Hilbert space induced by the differential expressions , , , and with domain

In the Hilbert space , we introduce the matrix differential operator on the domain

*Remark 16. *
(i) By [13], Corollary VII.2.7, the operator with domain is closed. Moreover, because , the operator is -bounded with relative bound 0. This follows since there is a such that, for every , . On the other hand, , so the operator is -bounded. We conclude that the operator matrix is diagonally dominant of order 0; it is closed by [14], Corollary 2.2.9 (i).
(ii) In Equation (54), since is self-adjoint with purely discrete spectrum, the linear span is a core of , where is an orthonormal basis in ; by the same argument, because in Equation (54) is bounded, the linear span is a core of . Hence, it is not difficult to see that the subspace is a core of , so the main Theorem 9 is applicable to this example.
(iii) We may use the eigenfunctions in Equation (12) as basis elements for a discretization of the type discussed in Section 2, forming the matrix elements , , , , with respect to the inner product in Equation (6) and considering the infinite block matrix . The matrix defined in Equation (4) is obtained by taking the leading submatrices of the block , with appropriate dimensions.

Observe that , , , and , which can be evaluated explicitly. If the operator included a potential, for instance, then its eigenfunctions would not generally be explicitly computable. We could still use the functions in Equation (12) as basis functions, but the matrix elements would have to be computed by quadrature, and the corresponding matrix would no longer be diagonal. Figure 3 shows attempts to compute the numerical approximation of the boundary of for various and , together with some attempts to estimate these sets by qualitative means, using existing theorems from the literature as well as the theorems proved above.
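The quadrature just described is routine for the multiplication-operator entries of the block matrix. As a hedged Python/NumPy illustration (the actual coefficient functions are not reproduced here, so we take w(x) = x on (0, π) and the basis φ_k(x) = √(2/π) sin(kx) as assumptions), the matrix of ⟨w φ_k, φ_j⟩ is computed on a uniform grid and is indeed not diagonal:

```python
import numpy as np

# Illustrative coefficient function for the multiplication-operator entry.
w = lambda x: x

def multiplication_matrix(n, n_quad=4000):
    """Matrix elements W[j, k] = <w phi_k, phi_j> in the basis
    phi_k = sqrt(2/pi) sin(k x) on (0, pi), computed by quadrature
    (the integrand vanishes at both endpoints)."""
    x = np.linspace(0.0, np.pi, n_quad)
    dx = x[1] - x[0]
    phi = [np.sqrt(2.0 / np.pi) * np.sin(k * x) for k in range(1, n + 1)]
    W = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            W[j, k] = np.sum(w(x) * phi[k] * phi[j]) * dx
    return W
```

For w(x) = x one can check the entries against the closed forms W[j, j] = π/2 and, e.g., W[1, 2] = −16/(9π), so the quadrature and the off-diagonal fill-in are both visible.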

##### 3.6. Analytical Estimates for Non-Self-Adjoint Hain-Lüst Operator

In order to understand the results in Figure 3, it is useful to find an analytical estimate for . Let , where

with , and let

The first term of (59) gives, as an estimate,

For the second and third terms on the right-hand side of Equation (59), the Cauchy-Schwarz inequality and Young's inequality yield . Combining Equations (61) and (62), we get that

The fourth term on the right-hand side of Equation (59) satisfies