Abstract and Applied Analysis

Volume 2011, Article ID 738520, 41 pages

http://dx.doi.org/10.1155/2011/738520

## Weyl-Titchmarsh Theory for Time Scale Symplectic Systems on Half Line

Roman Šimon Hilscher and Petr Zemánek

Department of Mathematics and Statistics, Faculty of Science, Masaryk University, Kotlářská 2, 61137 Brno, Czech Republic

Received 8 October 2010; Accepted 3 January 2011

Academic Editor: Miroslava Růžičková

Copyright © 2011 Roman Šimon Hilscher and Petr Zemánek. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We develop the Weyl-Titchmarsh theory for time scale symplectic systems. We introduce the M(λ)-function, study its properties, construct the corresponding Weyl disk and Weyl circle, and establish their geometric structure, including the formulas for their center and matrix radii. Similar properties are then derived for the limiting Weyl disk. We discuss the notions of the system being in the limit point or limit circle case and prove several characterizations of the system in the limit point case and one condition for the limit circle case. We also define the Green function for the associated nonhomogeneous system and use its properties for deriving further results for the original system in the limit point or limit circle case. Our work directly generalizes the corresponding discrete time theory obtained recently by S. Clark and P. Zemánek (2010). It also unifies the results in many other papers on the Weyl-Titchmarsh theory for linear Hamiltonian differential, difference, and dynamic systems when the spectral parameter appears in the second equation. Some of our results are new even in the case of the second-order Sturm-Liouville equations on time scales.

#### 1. Introduction

In this paper we develop systematically the Weyl-Titchmarsh theory for time scale symplectic systems. Such systems unify and extend the classical linear Hamiltonian differential systems and discrete symplectic and Hamiltonian systems, including the Sturm-Liouville differential and difference equations of arbitrary even order. As the research in the Weyl-Titchmarsh theory has been very active in recent years, we contribute to this development by presenting a theory which directly generalizes and unifies the results in several recent papers, such as [1–4] and partly in [5–14].

Historically, the theory nowadays named after Weyl and Titchmarsh started in [15] with the investigation of the second-order linear differential equation where are continuous, , and , is a spectral parameter. By using a geometrical approach it was shown that (1.1) can be divided into two classes, called the limit circle and limit point cases, meaning that either all solutions of (1.1) are square integrable for all or there is a unique (up to a multiplicative constant) square-integrable solution of (1.1) on . Analytic methods for the investigation of (1.1) were introduced in a series of papers starting with [16]; see also [17]. We refer to [18–20] for an overview of the original contributions to the Weyl-Titchmarsh theory for (1.1); see also [21]. Extensions of the Weyl-Titchmarsh theory to more general equations, namely, to linear Hamiltonian differential systems, were initiated in [22] and developed further in [6, 8, 10, 11, 23–38].
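For orientation, the classical second-order equation studied by Weyl is usually written in the Sturm-Liouville form sketched below; this display is a hedged reconstruction from the standard literature, not a quotation of (1.1) itself:

```latex
-\bigl(p(x)\,y'(x)\bigr)' + q(x)\,y(x) = \lambda\, y(x), \qquad x \in [0,\infty),
```

where $p$, $q$ are real-valued continuous functions with $p(x) > 0$, and $\lambda \in \mathbb{C}$ is the spectral parameter.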

According to [19], the first paper dealing with the parallel discrete time Weyl theory for second-order difference equations appears to be the work mentioned in [39]. Since then a long time elapsed until the theory of difference equations attracted more attention. The Weyl-Titchmarsh theory for the second-order Sturm-Liouville difference equations was developed in [22, 40, 41]; see also the references in [19]. For higher-order Sturm-Liouville difference equations and linear Hamiltonian difference systems, such as where , , , , are complex matrices such that and are Hermitian and and are Hermitian and nonnegative definite, the Weyl-Titchmarsh theory was studied in [9, 14, 42]. Recently, the results for linear Hamiltonian difference systems were generalized in [1, 2] to discrete symplectic systems where , , , , are complex matrices such that is Hermitian and nonnegative definite and the transition matrix in (1.4) is symplectic, that is,
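For the reader's convenience, a discrete symplectic system of the kind treated in [1, 2] can be sketched as follows; this is an assumption based on that literature, not a verbatim restatement of (1.4):

```latex
z_{k+1} = \mathcal{S}_k\, z_k, \qquad
\mathcal{S}_k^{*}\, \mathcal{J}\, \mathcal{S}_k = \mathcal{J}, \qquad
\mathcal{J} := \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},
```

where the second identity expresses the symplecticity (for complex matrices, conjugate symplecticity) of the transition matrix $\mathcal{S}_k$ referred to in the text.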

In the unifying theory for differential and difference equations—the theory of time scales—the classification of second-order Sturm-Liouville dynamic equations as being of the limit point or limit circle type is given in [4, 43]. These two papers seem to be the only ones on time scales devoted to the Weyl-Titchmarsh theory for second-order dynamic equations. Another way of generalizing the Weyl-Titchmarsh theory for continuous and discrete Hamiltonian systems was presented in [3, 5]. In these references the authors consider the linear Hamiltonian system on the so-called Sturmian or general time scales, respectively. Here is the time scale Δ-derivative and , where is the forward jump at ; see the time scale notation in Section 2.
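A time scale linear Hamiltonian system of the type considered in [3, 5] may be sketched as follows; this is a hedged reconstruction consistent with those references, with $x^{\sigma}(t) := x(\sigma(t))$:

```latex
x^{\Delta}(t) = A(t)\, x^{\sigma}(t) + B(t)\, u(t), \qquad
u^{\Delta}(t) = \bigl( C(t) - \lambda\, W(t) \bigr)\, x^{\sigma}(t) - A^{*}(t)\, u(t),
```

where $B(t)$, $C(t)$, $W(t)$ are Hermitian and $W(t) \ge 0$, so that the spectral parameter $\lambda$ enters the second equation only.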

In the present paper we develop the Weyl-Titchmarsh theory for more general linear dynamic systems, namely, the time scale symplectic systems where , , , , are complex matrix functions on , is Hermitian and nonnegative definite, , and the coefficient matrix in system () satisfies where is the graininess of the time scale. The spectral parameter appears only in the second equation of system (). This system was introduced in [44], and it naturally unifies the previously mentioned continuous, discrete, and time scale linear Hamiltonian systems (having the spectral parameter in the second equation only) and discrete symplectic systems into one framework. Our main results are the properties of the M(λ)-function, the geometric description of the Weyl disks, and characterizations of the limit point and limit circle cases for the time scale symplectic system (). In addition, we give a formula for the solutions of a nonhomogeneous time scale symplectic system in terms of its Green function. These results generalize and unify in particular all the results in [1–4] and some results from [5–14]. The theory of time scale symplectic systems and Hamiltonian systems has been an active research topic in recent years; see, for example, [44–51]. This paper can be regarded not only as a completion of these papers by establishing the Weyl-Titchmarsh theory for time scale symplectic systems but also as a comparison of the corresponding continuous and discrete time results. The references to particular statements in the literature are displayed throughout the text. Many results of this paper are new even for (1.6), being a special case of system (). An overview of these new results for (1.6) will be presented in our subsequent work.
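Since the displayed system and identity (1.8) are referred to repeatedly below, we record a plausible form of both, consistent with the published theory of time scale symplectic systems; the block notation is our assumption:

```latex
x^{\Delta} = \mathcal{A}(t)\, x + \mathcal{B}(t)\, u, \qquad
u^{\Delta} = \mathcal{C}(t)\, x + \mathcal{D}(t)\, u - \lambda\, \mathcal{W}(t)\, x^{\sigma},
\qquad
\mathcal{S}(t) :=
\begin{pmatrix} \mathcal{A}(t) & \mathcal{B}(t) \\ \mathcal{C}(t) & \mathcal{D}(t) \end{pmatrix},
```

and a natural candidate for identity (1.8), with $\mathcal{J} := \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$ and $\mu$ the graininess, is

```latex
\mathcal{S}^{*}(t)\, \mathcal{J} + \mathcal{J}\, \mathcal{S}(t)
  + \mu(t)\, \mathcal{S}^{*}(t)\, \mathcal{J}\, \mathcal{S}(t) = 0 .
```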

This paper is organized as follows. In the next section we recall some basic notions from the theory of time scales and linear algebra. In Section 3 we present fundamental properties of time scale symplectic systems with complex coefficients, including the important Lagrange identity (Theorem 3.5) and other formulas involving their solutions. In Section 4 we define the time scale -function for system () and establish its basic properties in the case of the regular spectral problem. In Section 5 we introduce the Weyl disks and circles for system () and describe their geometric structure in terms of contractive matrices in . The properties of the limiting Weyl disk and Weyl circle are then studied in Section 6, where we also prove that system () has at least linearly independent solutions in the space (see Theorem 6.7). In Section 7 we define the system () to be in the limit point and limit circle case and prove several characterizations of these properties. In the final section we consider the system () with a nonhomogeneous term. We construct its Green function, discuss its properties, and characterize the solutions of this nonhomogeneous system in terms of the Green function (Theorem 8.5). A certain uniqueness result is also proven for the limit point case.

#### 2. Time Scales

Following [52, 53], a time scale is any nonempty and closed subset of . A bounded time scale can therefore be identified as , which we call the time scale interval, where and . Similarly, a time scale which is unbounded above has the form . The forward and backward jump operators on a time scale are denoted by and and the graininess function by . If not otherwise stated, all functions in this paper are considered to be complex valued. A function on is called *piecewise rd-continuous*; we write on if the right-hand limit exists finite at all right-dense points , the left-hand limit exists finite at all left-dense points , and is continuous in the topology of the given time scale at all but possibly finitely many right-dense points . A function on is *piecewise rd-continuous*; we write on if on for every . An matrix-valued function is called *regressive* on a given time scale interval if is invertible for all in this interval.

The time scale Δ-derivative of a function at a point is denoted by ; see [52, Definition 1.10]. Whenever exists, the formula holds true. The product rule for the Δ-differentiation of the product of two functions has the form
A function on is called *piecewise rd-continuously Δ-differentiable*; we write on if it is continuous on , its Δ-derivative exists at all except possibly finitely many points , and on . As a consequence, the finitely many points at which does not exist belong to , and these points are necessarily right-dense and left-dense at the same time. Also, since at those points we know that and exist finite, we replace the quantity by in any formula involving for all . Similarly as above we define on . The time scale integral of a piecewise rd-continuous function over is denoted by and over by , provided this integral is convergent in the usual sense; see [52, Definitions 1.71 and 1.82].

*Remark 2.1. *As is known from [52, Theorem 5.8] and discussed in [54, Remark 3.8], for a fixed and a piecewise rd-continuous matrix function on which is regressive on , the initial value problem for with has a unique solution on for any . Similarly, this result holds on .

Let us recall some matrix notations from linear algebra used in this paper. Given a complex square matrix , by , , , , , , , , we denote, respectively, the conjugate transpose, positive definiteness, positive semidefiniteness, negative definiteness, negative semidefiniteness, rank, kernel, and the defect (i.e., the dimension of the kernel) of the matrix . Moreover, we will use the notation and for the Hermitian components of the matrix ; see [55, pages 268-269] or [56, Fact 3.5.24]. This notation will also be used with , and in this case and represent the imaginary and real parts of .
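The Hermitian components of a square matrix $V$ mentioned above are standard; one common notation for them reads

```latex
\operatorname{re}(V) := \tfrac{1}{2}\,\bigl(V + V^{*}\bigr), \qquad
\operatorname{im}(V) := \tfrac{1}{2i}\,\bigl(V - V^{*}\bigr),
```

so that both matrices are Hermitian, $V = \operatorname{re}(V) + i \operatorname{im}(V)$, and for a scalar $V = \lambda$ they reduce to the real and imaginary parts of $\lambda$.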

*Remark 2.2. *If the matrix is positive or negative definite, then the matrix is necessarily invertible. The proof of this fact can be found, for example, in [2, Remark 2.6].

In order to simplify the notation we abbreviate and by . Similarly, instead of and we will use .

#### 3. Time Scale Symplectic Systems

Let , , , , be piecewise rd-continuous functions on such that for all ; that is, is Hermitian and nonnegative definite, and identity (1.8) is satisfied. In this paper we consider the linear system () introduced in the previous section. This system can be written as where the matrix is defined and has the property The system () can be written in the equivalent form where the matrix is defined through the matrices and from (1.8) and (3.1) by By using the identity in (1.8), a direct calculation shows that the matrix function satisfies Here , and denotes the complex conjugate of .

*Remark 3.1. *The name time scale *symplectic system* or *Hamiltonian system* has been reserved in the literature for the system of the form
in which the matrix function satisfies the identity in (1.8); see [44–47, 57], and compare also, for example, with [58–61]. Since for a fixed the matrix from (3.3) satisfies
it follows that the system () is a true time scale symplectic system according to the above terminology only for , while strictly speaking () *is not* a time scale symplectic system for . However, since () is a perturbation of the time scale symplectic system () and since the important properties of time scale symplectic systems needed in the present Weyl-Titchmarsh theory, such as (3.4) or (3.8), are satisfied in an appropriately modified form, with the above understanding we adopt the same terminology for the system () for any .

Equation (3.4) represents a fundamental identity for the theory of time scale symplectic systems (). Some important properties of the matrix are displayed below. Note that formula (3.7) is a generalization of [46, equation (10.4)] to complex values of .

Lemma 3.2. *Identity (3.4) is equivalent to the identity
**
In this case for any we have
**
and the matrices and are invertible with
*

*Proof. *Let and be fixed. If is right-dense, that is, , then identity (3.4) reduces to . Upon multiplying this equation by from the left and right side, we get identity (3.7) with . If is right scattered, that is, , then (3.4) is equivalent to (3.8). It follows that the determinants of and are nonzero proving that these matrices are invertible with the inverse given by (3.10). Upon multiplying (3.8) by the invertible matrices from the left and from the right and by using , we get formula (3.9), which is equivalent to (3.7) due to .

*Remark 3.3. *Equation (3.10) allows writing the system () in the equivalent adjoint form
System (3.11) can be found, for example, in [47, Remark 3.1(iii)] or [50, equation (3.2)] in the connection with optimality conditions for variational problems over time scales.

In the following result we show that (3.4) guarantees, among other properties, the existence and uniqueness of solutions of the initial value problems associated with ().

Theorem 3.4 (existence and uniqueness theorem). *Let , , and be given. Then the initial value problem () with has a unique solution on the interval .*

*Proof. *The coefficient matrix of system (), or equivalently of system (3.2), is piecewise rd-continuous on . By Lemma 3.2, the matrix is invertible for all , which proves that the function is regressive on . Hence, the result follows from Remark 2.1.

If not specified otherwise, we adopt the common convention that vector solutions of system () are denoted by small letters and matrix solutions of system () by capital letters, typically by or and or , respectively.

Next we establish several identities involving solutions of system () or solutions of two such systems with different spectral parameters. The first result is the Lagrange identity known in the special cases of continuous time linear Hamiltonian systems in [11, Theorem 4.1] or [8, equation (2.23)], discrete linear Hamiltonian systems in [9, equation (2.55)] or [14, Lemma 2.2], discrete symplectic systems in [1, Lemma 2.6] or [2, Lemma 2.3], and time scale linear Hamiltonian systems in [3, Lemma 3.5] and [5, Theorem 2.2].

Theorem 3.5 (Lagrange identity). *Let and be given. If and are solutions of systems () and (), respectively, then
*
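The displayed identity (3.12) presumably takes the following form, consistent with the continuous, discrete, and time scale special cases cited above; here $z = (x^{*}, u^{*})^{*}$, and the exact placement of the shift $\sigma$ is our assumption:

```latex
\bigl[ z^{*}(t,\lambda)\, \mathcal{J}\, z(t,\nu) \bigr]^{\Delta}
  = (\bar{\lambda} - \nu)\; x^{\sigma *}(t,\lambda)\, \mathcal{W}(t)\, x^{\sigma}(t,\nu) .
```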

*Proof. *Formula (3.12) follows from the time scales product rule (2.1) by using the relation and identity (3.6).

As consequences of Theorem 3.5, we obtain the following.

Corollary 3.6. *Let and be given. If and are solutions of systems () and (), respectively, then for all we have
*

One can easily see that if is a solution of system (), then is a solution of system (). Therefore, Theorem 3.5 with yields a Wronskian-type property of solutions of system ().

Corollary 3.7. *Let and be given. For any solution of systems ()
*
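Taking $\nu = \bar{\lambda}$ in the Lagrange identity of Theorem 3.5 makes its right-hand side vanish, so the Wronskian-type quantity in Corollary 3.7 should be constant in $t$; a hedged sketch of this identity for a matrix solution $Z$ is

```latex
Z^{*}(t,\lambda)\, \mathcal{J}\, Z(t,\bar{\lambda})
  \equiv Z^{*}(a,\lambda)\, \mathcal{J}\, Z(a,\bar{\lambda})
  \qquad \text{for all } t .
```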

The following result gives another interesting property of solutions of system () and ().

Lemma 3.8. *Let and be given. For any solutions and of system (), the matrix function defined by
**
satisfies the dynamic equation
**
and the identities and
*

*Proof. *Having that and are solutions of system (), it follows that and are solutions of system (). The results then follow by direct calculations.

*Remark 3.9. *The content of Lemma 3.8 appears to be new both in the continuous and discrete time cases. Moreover, when the matrix function is constant, identity (3.17) yields for any right-scattered that
It is interesting to note that this formula is very much like (3.7). More precisely, identity (3.7) is a consequence of (3.18) for the case of .

Next we present properties of certain fundamental matrices of system (), which are generalizations of the corresponding results in [46, Section 10.2] to complex . Some of these results can be proven under the weaker condition that the initial value of does depend on and satisfies . However, these more general results will not be needed in this paper.

Lemma 3.10. *Let be fixed. If is a fundamental matrix of system () such that is symplectic and independent of , then for any we have
*

*Proof. *Identity (3.19)(i) is a consequence of Corollary 3.7, in which we use the fact that is symplectic and independent of . The second identity in (3.19) follows from the first one, while the third identity is obtained from the equation .

*Remark 3.11. *If the fundamental matrix in Lemma 3.10 is partitioned into two blocks, then (3.19)(i) and (3.19)(iii) have, respectively, the form
Observe that the matrix on the left-hand side of (3.21) represents a constant matrix from Lemma 3.8 and Remark 3.9.

Corollary 3.12. *Under the conditions of Lemma 3.10, for any , we have
**
which in the notation of Remark 3.11 has the form
*

*Proof. *Identity (3.22) follows from the equation by applying formula (3.19)(ii).

#### 4. M(λ)-Function for Regular Spectral Problem

In this section we consider the regular spectral problem on the time scale interval with some fixed . We will specify the corresponding boundary conditions in terms of complex matrices from the set The two defining conditions for in (4.1) imply that the matrix is unitary and symplectic. This yields the identity The last equation also implies, compare with [60, Remark 2.1.2], that
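A natural candidate for the set of admissible boundary matrices in (4.1), consistent with the analogous sets in [1, 2, 60], is

```latex
\Gamma := \bigl\{\, \alpha \in \mathbb{C}^{n \times 2n} \;:\;
  \alpha\, \alpha^{*} = I, \quad \alpha\, \mathcal{J}\, \alpha^{*} = 0 \,\bigr\},
```

the two defining conditions being exactly those referred to in the text.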

Let be fixed and consider the boundary value problem Our first result shows that the boundary conditions in (4.4) are equivalent to the boundary conditions phrased in terms of the images of the matrices which satisfy , , and .

Lemma 4.1. *Let and be fixed. A solution of system () satisfies the boundary conditions in (4.4) if and only if there exists a unique vector such that
*

*Proof. *Assume that (4.4) holds. Identity (4.3) implies the existence of vectors such that and . It follows that satisfies (4.6) with . It remains to prove that such a vector is unique. If satisfies (4.6) and also and for some , then and . Hence, and . If we multiply the latter two equalities by and , respectively, and use , then we obtain and . This yields , which shows that the vector in (4.6) is unique. The opposite direction, that is, that (4.6) implies (4.4), is trivial.

Following the standard terminology, see, for example, [62, 63], a number is an *eigenvalue* of (4.4) if this boundary value problem has a solution . In this case the function is called the *eigenfunction* corresponding to the eigenvalue , and the dimension of the space of all eigenfunctions corresponding to (together with the zero function) is called the *geometric multiplicity* of .

Given , we will utilize from now on the fundamental matrix of system () satisfying the initial condition from (4.4), that is, Then does not depend on , and it is symplectic and unitary with the inverse . Hence, the properties of fundamental matrices derived earlier in Lemma 3.10, Remark 3.11, and Corollary 3.12 apply for the matrix function .

The following assumption will be imposed in this section when studying the regular spectral problem.

*Hypothesis 4.2. *For every , we have

Condition (4.8) can be written in the equivalent form as for every nontrivial solution of system (). Assumptions (4.8) and (4.9) are equivalent by a simple argument using the uniqueness of solutions of system (). The latter form (4.9) has been widely used in the literature, such as in the continuous time case in [8, Hypothesis 2.2], [30, equation (1.3)], [26, equation (2.3)], in the discrete time case in [9, Condition (2.16)], [14, equation (1.7)], [1, Assumption 2.2], [2, Hypothesis 2.4], and in the time scale Hamiltonian case in [3, Assumption 3] and [5, Condition (3.9)].

Following Remark 3.11, we partition the fundamental matrix as where and are the solutions of system () satisfying and . With the notation we have the classical characterization of the eigenvalues of (4.4); see, for example, the continuous time in [64, Chapter 4], the discrete time in [14, Theorem 2.3, Lemma 2.4], [2, Lemma 2.9, Theorem 2.11], and the time scale case in [62, Lemma 3], [63, Corollary 1].

Proposition 4.3. *For and , we have with notation (4.11) the following.*

(i) *The number is an eigenvalue of (4.4) if and only if .*

(ii) *The algebraic multiplicity of the eigenvalue , that is, the number , is equal to the geometric multiplicity of .*

(iii) *Under Hypothesis 4.2, the eigenvalues of (4.4) are real, and the eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the semi-inner product
*

*Proof. *The arguments are here standard, and we refer to [44, Section 5], [63, Corollary 1], [3, Theorem 3.6].

The next algebraic characterization of the eigenvalues of (4.4) is more appropriate for the development of the Weyl-Titchmarsh theory for (4.4), since it uses the matrix which has dimension instead of using the matrix which has dimension . Results of this type can be found in special cases of system () in [8, Lemma 2.5], [11, Theorem 4.1], [9, Lemma 2.8], [14, Lemma 3.1], [1, Lemma 2.5], [3, Theorem 3.4], and [2, Lemma 3.1].

Lemma 4.4. *Let and be fixed. Then is an eigenvalue of (4.4) if and only if . In this case the algebraic and geometric multiplicities of are equal to .*

*Proof. *One can follow the same arguments as in the proof of the corresponding discrete symplectic case in [2, Lemma 3.1]. However, having the result of Proposition 4.3, we can proceed directly by the methods of linear algebra. In this proof we abbreviate and . Assume that is singular, that is, for some vectors , not both zero. Then , which yields that . If , then , which implies upon the multiplication by from the left that . Since not both and can be zero, it follows that and the matrix is singular. Conversely, if for some nonzero vector , then ; that is, is singular, with the vector . Indeed, by using identity (4.2) we have . From the above we can also see that the number of linearly independent vectors in is the same as the number of linearly independent vectors in . Therefore, by Proposition 4.3(ii), the algebraic and geometric multiplicities of as an eigenvalue of (4.4) are equal to .

Since the eigenvalues of (4.4) are real, it follows that the matrix is invertible for every except for at most real numbers. This motivates the definition of the M(λ)-function for the regular spectral problem.

*Definition 4.5 (M(λ)-function). *Let . Whenever the matrix is invertible for some value , we define the *Weyl-Titchmarsh M(λ)-function* as the matrix
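Following the discrete analogue in [2, Definition 3.2], one consistent possibility for the displayed definition (4.13) is the following; the names $Z$, $\tilde{Z}$ for the two block columns of the fundamental matrix in (4.10) and the sign convention are our assumptions:

```latex
M(\lambda) = M(\lambda, b) :=
  -\bigl[ \beta\, \tilde{Z}(b,\lambda) \bigr]^{-1} \beta\, Z(b,\lambda),
```

which is well defined whenever the $n \times n$ matrix $\beta\, \tilde{Z}(b,\lambda)$ is invertible, that is, by Lemma 4.4, whenever $\lambda$ is not an eigenvalue of (4.4).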

The above definition of the M(λ)-function is a generalization of the corresponding definitions for the continuous and discrete linear Hamiltonian and symplectic systems in [8, Definition 2.6], [9, Definition 2.9], [14, equation (3.10)], [1, page 2859], [2, Definition 3.2] and time scale linear Hamiltonian systems in [3, equation (4.1)]. The dependence of the M(λ)-function on , , and will be suppressed in the notation, and or will be used only in a few situations when we emphasize the dependence on (such as at the end of Section 5) or on and (as in Lemma 4.14). By [65, Corollary 4.5], see also [44, Remark 2.2], the M(λ)-function is an entire function in λ. Another important property of the M(λ)-function is established in the following.

Lemma 4.6. *Let and . Then
*

*Proof. *We abbreviate and . By using the definition of in (4.13) and identity (3.21), we have
because . Hence, equality (4.14) holds true.

The following solution plays an important role in particular in the results concerning the square integrable solutions of system ().

*Definition 4.7 (Weyl solution). *For any matrix , we define the so-called *Weyl solution* of system () by
where and are defined in (4.10).
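Writing $Z$ and $\tilde{Z}$ for the two block columns of the fundamental matrix in (4.10) (the names are our assumption), the Weyl solution of Definition 4.7 can be sketched as

```latex
\mathcal{X}(t, \lambda, M) := Z(t,\lambda) + \tilde{Z}(t,\lambda)\, M,
  \qquad M \in \mathbb{C}^{n \times n},
```

so that $\mathcal{X}$ is the linear combination of the two solutions from (4.10) referred to in the text below.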

The function , being a linear combination of two solutions of system (), is also a solution of this system. Moreover, , and, if is invertible, then . Consequently, if we take in Definition 4.7, then ; that is, the Weyl solution satisfies the right endpoint boundary condition in (4.4).

Following the corresponding notions in [8, equation (2.18)], [9, equation (2.51)], [14, page 471], [1, page 2859], [2, equation (3.13)], [3, equation (4.2)], we define the Hermitian matrix function for system ().

*Definition 4.8. *For a fixed and , we define the matrix function
where .
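A plausible form of the matrix function in Definition 4.8, consistent with the discrete case [2, equation (3.13)], is

```latex
\mathcal{E}(M) := i\, \delta(\lambda)\,
  \mathcal{X}^{*}(t, \lambda, M)\, \mathcal{J}\, \mathcal{X}(t, \lambda, M),
  \qquad \delta(\lambda) := \operatorname{sgn} \operatorname{im}(\lambda),
```

where $\mathcal{X}$ denotes the Weyl solution of Definition 4.7 with parameter matrix $M$; since $\mathcal{J}^{*} = -\mathcal{J}$, the factor $i\,\delta(\lambda)$ makes $\mathcal{E}(M)$ Hermitian, as stated in the text.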

For brevity we suppress the dependence of the function on and . In a few cases we will need depending on (as in Theorem 5.1 and Definition 6.2), and in such situations we will use the notation . Since , it follows that is a Hermitian matrix for any . Moreover, from Corollary 3.6, we obtain the identity where we used the fact that

Next we define the Weyl disk and Weyl circle for the regular spectral problem. The geometric characterizations of the Weyl disk and Weyl circle in terms of the contractive or unitary matrices which justify the terminology “disk” or “circle” will be presented in Section 5.

*Definition 4.9 (Weyl disk and Weyl circle). *For a fixed and , the set
is called the *Weyl disk*, and the set
is called the *Weyl circle*.
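Writing $\mathcal{E}(M)$ for the Hermitian matrix function of Definition 4.8 (the symbol is our assumption), the two sets may be sketched, consistently with the inequalities in Theorem 4.13 below, as

```latex
D(\lambda) := \bigl\{\, M \in \mathbb{C}^{n \times n} : \mathcal{E}(M) \le 0 \,\bigr\},
  \qquad
C(\lambda) := \bigl\{\, M \in \mathbb{C}^{n \times n} : \mathcal{E}(M) = 0 \,\bigr\},
```

so that the Weyl circle consists of the boundary matrices of the Weyl disk.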

The dependence of the Weyl disk and Weyl circle on will be again suppressed. In the following result we show that the Weyl circle consists of precisely those matrices with . This result generalizes the corresponding statements in [8, Lemma 2.8], [9, Lemma 2.13], [14, Lemma 3.3], [1, Theorem 3.1], [2, Theorem 3.6], and [3, Theorem 4.2].

Theorem 4.10. *Let , , and . The matrix belongs to the Weyl circle if and only if there exists such that . In this case and under Hypothesis 4.2, we have with such a matrix that as defined in (4.13).*

*Proof. *Assume that , that is, . Then, with the vector
where denotes , we have
Moreover, , because the matrices and are invertible and . In addition, the identity yields
Now, if the condition is not satisfied, then we replace by (note that , so that is well defined), and in this case
Conversely, suppose that for a given there exists such that . Then from (4.3) it follows that for the matrix . Hence,
that is, . Finally, since , then by Proposition 4.3(iii) the number is not an eigenvalue of (4.4), which by Lemma 4.4 shows that the matrix is invertible. The definition of the Weyl solution in (4.16) then yields
which implies that .

*Remark 4.11. *The matrix from the proof of Theorem 4.10 is invertible. This fact was not needed in that proof. However, we show that is invertible because this argument will be used in the proof of Lemma 4.14. First we prove that . Indeed, if for some , then from identity (4.2) we get . Therefore, . The opposite inclusion follows by the definition of . And since, by (4.16), , it follows that . Hence, as well; that is, the matrix is invertible.

The next result contains a characterization of the matrices which lie “inside” the Weyl disk . In the previous result (Theorem 4.10) we have characterized the elements of the boundary of the Weyl disk , that is, the elements of the Weyl circle , in terms of the matrices . For such we have , which yields . Comparing with that statement we now utilize the matrices which satisfy . In the special cases of the continuous and discrete time, this result can be found in [8, Lemma 2.13], [9, Lemma 2.18], and [2, Theorem 3.13].

Theorem 4.12. *Let , , and . The matrix satisfies if and only if there exists such that and . In this case and under Hypothesis 4.2, we have with such a matrix that as defined in (4.13) and may be chosen so that .*

*Proof. *For consider on the Weyl solution
Suppose first that . Then the matrices , , are invertible. Indeed, if one of them is singular, then there exists a nonzero vector such that or . Then
which contradicts . Now we set , , and . Then for this matrix we have and, by a similar calculation as in (4.29),
where we used the equality . Since and is invertible, it follows that . Conversely, assume that for a given matrix there is satisfying and . Condition is equivalent to when and to when . The positive or negative definiteness of implies the invertibility of and ; see Remark 2.2. Therefore, from the equality , we obtain , and so
The matrix is invertible, because if for some nonzero vector , then , showing that . This however contradicts which we have from the definition of the Weyl solution in (4.16). Consequently, (4.31) yields through that .

If the matrix does not satisfy , then we modify it according to the procedure described in the proof of Theorem 4.10. Finally, since , we get from Proposition 4.3(iii) and Lemma 4.4 that the matrix is invertible which in turn implies through the calculation in (4.27) that .

In the following lemma we derive some additional properties of the Weyl disk and the -function. Special cases of this statement can be found in [8, Lemma 2.9], [33, Theorem 3.1], [9, Lemma 2.14], [14, Lemma 3.2(ii)], [1, Theorem 3.7], [2, Lemma 3.7], and [3, Theorem 4.13].

Theorem 4.13. *Let and . For any matrix we have
**
In addition, under Hypothesis 4.2, we have .*

*Proof. *By identity (4.18), for any matrix , we have
which yields together with on the inequalities in (4.32). The last assertion in Theorem 4.13 is a simple consequence of Hypothesis 4.2.

In the last part of this section we wish to study the effect of changing , which is one of the parameters of the M(λ)-function and the Weyl solution , when varies within the set . For this purpose we will use the M(λ)-function with all its arguments in the following two statements.

Lemma 4.14. *Let and . Then
*

*Proof. *Let and be given via (4.13), and consider the Weyl solutions and defined by (4.16) with and , respectively. First we prove that the two Weyl solutions and differ by a constant nonsingular multiple. By definition, and , which implies through (4.3) that and for some matrices , which are invertible by Remark 4.11. This implies that . Consequently, , where . By the uniqueness of solutions of system (), see Theorem 3.4, we obtain that on . Upon the evaluation at we get
Since the matrices and are unitary, it follows from (4.35) that
The first row above yields that , while the second row is then written as identity (4.34).

Corollary 4.15. *Let and . With notation (4.16) and (4.13) we have
*

*Proof. *The above identity follows from (4.35) and the formula for the matrix from the end of the proof of Lemma 4.14.

#### 5. Geometric Properties of Weyl Disks

In this section we study the geometric properties of the Weyl disks as the point moves through the interval . Our first result shows that the Weyl disks are nested. This statement generalizes the results in [11, Theorem 4.5], [66, Section 3.2.1], [9, equation (2.70)], [14, Theorem 3.1], [3, Theorem 4.4], and [5, Theorem 3.3(i)].

Theorem 5.1 (nesting property of Weyl disks). *Let and . Then
*

*Proof. *Let with , and take , that is, . From identity (4.18) with and later with and by using , we have
Therefore, by Definition 4.9, the matrix belongs to , which shows the result.

Similarly for the regular case (Hypothesis 4.2) we now introduce the following assumption.

*Hypothesis 5.2. *There exists such that Hypothesis 4.2 is satisfied with ; that is, inequality (4.8) holds with for every .

From Hypothesis 5.2 it follows by that inequality (4.8) holds for every .

For the study of the geometric properties of Weyl disks we will use the following representation of the matrix , where we define on the matrices Since is Hermitian, it follows that and are also Hermitian. Moreover, by (4.7), we have . In addition, if , then Corollary 3.7 and Hypothesis 5.2 yield for any Therefore, is invertible (positive definite) for all and monotone nondecreasing as , with the consequence that is monotone nonincreasing as . The following factorization of holds true; see also [2, equation (4.11)].

Lemma 5.3. *Let and . With the notation (5.4), for any and we have
**
whenever the matrix is invertible.*

*Proof. *The result is shown by a direct calculation.

The following identity is a generalization of its corresponding versions in [11, Lemma 4.3], [1, Lemma 3.3], [14, Proposition 3.2], [2, Lemma 4.2], [3, Lemma 4.6], and [5, Theorem 5.6].

Lemma 5.4. *Let *