Research Article | Open Access

Bilal Chanane, "Eigenvalues of Vectorial Sturm-Liouville Problems with Parameter Dependent Boundary Conditions", *Abstract and Applied Analysis*, vol. 2015, Article ID 796086, 9 pages, 2015. https://doi.org/10.1155/2015/796086

# Eigenvalues of Vectorial Sturm-Liouville Problems with Parameter Dependent Boundary Conditions

**Academic Editor:** Chun-Kong Law

#### Abstract

We generalize the *regularized sampling method*, introduced in 2005 by the author to
compute the eigenvalues of scalar Sturm-Liouville problems (SLPs), to the case of vectorial SLPs with parameter
dependent boundary conditions. A few problems are worked out to illustrate the effectiveness of the method
and to show that it is indeed a general method, capable of handling with ease very broad
classes of SLPs, whether scalar or vectorial.

#### 1. Introduction

In [1] we introduced the regularized sampling method, a method to compute the eigenvalues of scalar Sturm-Liouville problems (SLPs) with parameter dependent boundary conditions. We subsequently used this method to compute the eigenvalues of singular and non-self-adjoint Sturm-Liouville problems. The scope of the method was further extended to include the computation of the eigenvalues of discontinuous/impulsive, nonlocal ([2] and the references therein), and two-parameter SLPs [3]. Continuing this effort, we tackle in this paper vectorial SLPs with parameter dependent nonseparated boundary conditions. Vectorial Sturm-Liouville problems have been considered in [4–13] and the references therein, while the corresponding inverse problems appeared in [14–17].

#### 2. The Characteristic Function

Consider the vectorial Sturm-Liouville problem
$$-Y''(x) + Q(x)\,Y(x) = \mu^2\, Y(x), \quad 0 \le x \le 1, \qquad A(\mu)\begin{pmatrix} Y(0) \\ Y'(0) \end{pmatrix} + B(\mu)\begin{pmatrix} Y(1) \\ Y'(1) \end{pmatrix} = 0, \tag{1}$$
where $Q$ is an $n \times n$ matrix function, and $A(\mu)$ and $B(\mu)$ are real $2n \times 2n$ matrix functions of the parameter $\mu$ such that the $2n \times 4n$ matrix $\left[A(\mu) \;\; B(\mu)\right]$ has full rank.

Let $C(x,\mu)$ and $S(x,\mu)$ be the solutions of the Sturm-Liouville matrix equation $-U'' + Q(x)U = \mu^2 U$ subject to the initial conditions $C(0,\mu) = I_n$, $C'(0,\mu) = 0_n$ and $S(0,\mu) = 0_n$, $S'(0,\mu) = I_n$, respectively, $I_n$ being the $n \times n$ identity matrix and $0_n$ being the $n \times n$ zero matrix.

The general solution of the Sturm-Liouville equation is given by $Y(x,\mu) = C(x,\mu)\,c_1 + S(x,\mu)\,c_2$ with arbitrary constant vectors $c_1$ and $c_2$. Replacing $Y$ in the boundary conditions, we get
$$M(\mu)\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0.$$
To have a nontrivial solution a necessary and sufficient condition is that $\Delta(\mu) = 0$, where the characteristic function $\Delta$ is
$$\Delta(\mu) = \det M(\mu), \qquad M(\mu) = A(\mu) + B(\mu)\begin{pmatrix} C(1,\mu) & S(1,\mu) \\ C'(1,\mu) & S'(1,\mu) \end{pmatrix}.$$

The eigenvalues of (1) are the squares of the zeroes of $\Delta$. It is well known that the multiplicities of these eigenvalues are at most $2n$.
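As a concrete toy sketch of this construction, the snippet below integrates the matrix equation numerically for a two-dimensional problem and evaluates the characteristic determinant. The zero potential `Q`, the Dirichlet-type matrices `A_mu`, `B_mu` (constant here, though they may depend on the parameter), and the RK4 step count are illustrative choices, not data from the paper's examples.

```python
import numpy as np

n = 2
I, Z = np.eye(n), np.zeros((n, n))
Q = lambda x: Z  # illustrative zero potential; swap in any n x n matrix function

def fundamental_matrix(mu, steps=200):
    """RK4 integration of Phi' = [[0, I], [Q(x) - mu^2 I, 0]] Phi on [0, 1],
    Phi(0) = I_{2n}; Phi(1) stacks C(1), S(1), C'(1), S'(1) blockwise."""
    h = 1.0 / steps
    Phi = np.eye(2 * n)
    def A(x):
        return np.block([[Z, I], [Q(x) - mu**2 * I, Z]])
    x = 0.0
    for _ in range(steps):
        k1 = A(x) @ Phi
        k2 = A(x + h/2) @ (Phi + h/2 * k1)
        k3 = A(x + h/2) @ (Phi + h/2 * k2)
        k4 = A(x + h) @ (Phi + h * k3)
        Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return Phi

# Dirichlet data Y(0) = Y(1) = 0 written in the nonseparated form
A_mu = np.block([[I, Z], [Z, Z]])
B_mu = np.block([[Z, Z], [I, Z]])

def Delta(mu):
    """Characteristic determinant det(A(mu) + B(mu) Phi(1, mu))."""
    return np.linalg.det(A_mu + B_mu @ fundamental_matrix(mu))
```

With `Q = 0` the determinant reduces to $(\sin\mu/\mu)^2$, whose zeros $\mu = k\pi$ give the familiar Dirichlet eigenvalues $(k\pi)^2$, each of multiplicity two here.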

#### 3. Main Results

Let $PW_\sigma = \left\{ f \text{ entire} : |f(\lambda)| \le c\, e^{\sigma |\operatorname{Im} \lambda|} \text{ for some } c > 0,\; f|_{\mathbb{R}} \in L^2(\mathbb{R}) \right\}$ be the Paley-Wiener space and recall the celebrated Whittaker-Shannon-Kotel'nikov theorem [18].

Theorem 1. *Let $f \in PW_\sigma$; then
$$f(\lambda) = \sum_{k=-\infty}^{\infty} f\!\left(\frac{k\pi}{\sigma}\right) \frac{\sin(\sigma\lambda - k\pi)}{\sigma\lambda - k\pi},$$
where the series converges uniformly on compact subsets of $\mathbb{C}$ and in $L^2(\mathbb{R})$.*
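For readers who want to see the sampling theorem in action, here is a minimal numerical illustration (a toy example of ours, not from the paper): the function $f(\lambda) = (\sin\lambda/\lambda)^2$ lies in $PW_2$, and its truncated cardinal series reproduces it to within the truncation error.

```python
import numpy as np

SIGMA = 2.0  # f below has exponential type 2 and is square integrable on the real line

def f(lam):
    """f(lam) = (sin(lam)/lam)^2, an entire function in the Paley-Wiener space PW_2."""
    return 1.0 if lam == 0 else (np.sin(lam) / lam) ** 2

N = 200
ks = np.arange(-N, N + 1)
samples = np.array([f(k * np.pi / SIGMA) for k in ks])

def f_rec(lam):
    """Truncated WSK series: sum_k f(k pi/sigma) sin(sigma lam - k pi)/(sigma lam - k pi).
    Note np.sinc(x) = sin(pi x)/(pi x), so sinc(sigma lam/pi - k) is exactly that kernel."""
    return float(np.dot(samples, np.sinc(SIGMA * lam / np.pi - ks)))
```

Evaluating `f_rec` at any point between the sampling nodes, e.g. `f_rec(1.3)`, matches `f(1.3)` to several digits with this truncation order.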

It is known that, in the case of scalar Sturm-Liouville problems, the solution $y(x,\lambda)$ is an entire function of $\lambda$ for each fixed $x$, but it is in a Paley-Wiener space as a function of $\lambda$ only in the Dirichlet case. So, one had to subtract terms from it to make the difference fall in an appropriate Paley-Wiener space, and even to subtract terms involving multiple integrals to get sharper results when computing the eigenvalues. The regularized sampling method was introduced recently [1] to overcome this problem: no term involving any (multiple) integration has to be subtracted. Instead, the relevant functions are multiplied by an appropriate known simple function of $\lambda$, and the eigenvalues are obtained with much greater precision at a reduced cost.

For the vectorial Sturm-Liouville problem at hand, we will use the regularized sampling method to recover the matrices $C(1,\mu)$, $S(1,\mu)$, $C'(1,\mu)$, and $S'(1,\mu)$, from which we obtain the characteristic function $\Delta(\mu)$, whose zeroes are the square roots of the sought eigenvalues of the problem.

Consider the compatible vector and matrix norms given by
$$\|v\| = \max_{1 \le i \le n} |v_i|, \qquad \|M\| = \max_{1 \le i \le n} \sum_{j=1}^{n} |m_{ij}|,$$
where $v = (v_i) \in \mathbb{R}^n$ and $M = (m_{ij})$ is an $n \times n$ matrix. In the following we will make use of the standard estimate.

Lemma 2. *Consider
$$\left|\frac{\sin \mu t}{\mu}\right| \le \frac{C\, t}{1 + |\mu|\, t}\, e^{|\operatorname{Im} \mu|\, t},$$
where $C$ is some constant (we may take $C = 1.72$).*
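A quick numerical sanity check of this estimate, with the constant $C = 1.72$, over an illustrative grid of complex $\mu$ and $t \in (0,1]$ (the grid is our choice):

```python
import numpy as np

C = 1.72  # constant in |sin(mu t)/mu| <= C t e^{|Im mu| t} / (1 + |mu| t)

def lhs(mu, t):
    return abs(np.sin(mu * t) / mu)

def rhs(mu, t):
    return C * t / (1.0 + abs(mu) * t) * np.exp(abs(mu.imag) * t)

# spot-check the bound on a grid of complex mu and t in (0, 1]
worst = 0.0
for x in np.linspace(-30, 30, 61):
    for y in np.linspace(-5, 5, 21):
        mu = complex(x, y)
        if abs(mu) < 1e-9:
            continue  # the quotient is defined by continuity at mu = 0
        for t in np.linspace(0.05, 1.0, 20):
            worst = max(worst, lhs(mu, t) / rhs(mu, t))
```

On this grid the ratio `worst` stays below 1 (it peaks near real arguments around $|\mu t| \approx 1.2$, which is what forces a constant as large as 1.72).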

To cover both cases ($U = C$ with $U(0,\mu) = I_n$, $U'(0,\mu) = 0_n$, and $U = S$ with $U(0,\mu) = 0_n$, $U'(0,\mu) = I_n$) we will consider the following initial value problem: $-U'' + Q(x)U = \mu^2 U$, $U(0,\mu) = U_0$, $U'(0,\mu) = U_1$, where $U_0$ and $U_1$ are $n \times n$ matrices or $n$-vectors. We have
$$U(x,\mu) = \cos(\mu x)\, U_0 + \frac{\sin(\mu x)}{\mu}\, U_1 + \int_0^x \frac{\sin \mu(x-t)}{\mu}\, Q(t)\, U(t,\mu)\, dt. \tag{11}$$

Our first result is the following theorem.

Theorem 3. *$U(x,\mu)$ is an entire matrix function of $\mu$ for each fixed $x$ and satisfies the growth conditions
$$\|U(x,\mu)\| \le \left(\|U_0\| + \frac{c_1\, x}{1+|\mu|\, x}\, \|U_1\|\right) e^{(|\operatorname{Im}\mu| + c_2)\, x}, \qquad \|U'(x,\mu)\| \le c_3 \left((1+|\mu|)\|U_0\| + \|U_1\|\right) e^{(|\operatorname{Im}\mu| + c_4)\, x}$$
for some positive constants $c_1$, $c_2$, $c_3$, and $c_4$.*

*Proof. *From (11) and using standard arguments, we conclude that $U(x,\mu)$ is an entire matrix function of $\mu$ for each $x$ in $[0,1]$. Its derivative with respect to $x$,
$$U'(x,\mu) = -\mu \sin(\mu x)\, U_0 + \cos(\mu x)\, U_1 + \int_0^x \cos \mu(x-t)\, Q(t)\, U(t,\mu)\, dt,$$
is also an entire matrix function of $\mu$ for each $x$ in $[0,1]$. Going back to (11) we get at once
$$\|U(x,\mu)\| \le e^{|\operatorname{Im}\mu|\, x} \|U_0\| + \frac{C x}{1+|\mu| x}\, e^{|\operatorname{Im}\mu|\, x} \|U_1\| + \int_0^x \frac{C (x-t)}{1+|\mu|(x-t)}\, e^{|\operatorname{Im}\mu| (x-t)}\, \|Q(t)\|\, \|U(t,\mu)\|\, dt.$$
Multiplying by $e^{-|\operatorname{Im}\mu| x}$, using Gronwall's lemma, and multiplying back by $e^{|\operatorname{Im}\mu| x}$ we get
$$\|U(x,\mu)\| \le \left(\|U_0\| + \frac{C x}{1+|\mu| x}\, \|U_1\|\right) e^{|\operatorname{Im}\mu|\, x + C \int_0^x \|Q(t)\|\, dt}.$$
Now, using the above estimate in the expression for $U'(x,\mu)$, we get the stated bound on $\|U'(x,\mu)\|$. As in the scalar case, the characteristic function is in a Paley-Wiener space only in the Dirichlet case; $S(1,\mu)$ is in a Paley-Wiener space, whereas $C(1,\mu)$, $S'(1,\mu)$, and $C'(1,\mu)$ are not, since they are not square integrable over the reals.

We get at once the following corollaries.

Corollary 4. *$C(x,\mu)$, $S(x,\mu)$, $C'(x,\mu)$, $S'(x,\mu)$, $M(\mu)$, and $\Delta(\mu)$ are entire functions of $\mu$ for each fixed $x$ and satisfy growth conditions of the above type
for some generic positive constants $c_1$, $c_2$, $c_3$, and $c_4$.*

Corollary 5. *The functions
$$S(1,\mu), \qquad C(1,\mu) - \cos(\mu)\, I_n, \qquad S'(1,\mu) - \cos(\mu)\, I_n$$
belong to the Paley-Wiener space $PW_1$ as functions of $\mu$ and thus can be recovered from their samples at $\mu_k = k\pi$, $k \in \mathbb{Z}$, using the WSK series.*

Theorem 6. *Let $h$ be a positive real number and $m$ a positive integer. The functions
$$C(1,\mu)\left(\frac{\sin h\mu}{h\mu}\right)^{m}, \quad S(1,\mu)\left(\frac{\sin h\mu}{h\mu}\right)^{m}, \quad C'(1,\mu)\left(\frac{\sin h\mu}{h\mu}\right)^{m}, \quad S'(1,\mu)\left(\frac{\sin h\mu}{h\mu}\right)^{m}$$
belong to the Paley-Wiener space $PW_\sigma$, where $\sigma = 1 + mh$, as functions of $\mu$ for $m \ge 2$ and satisfy the growth condition $\|\cdot\| \le c\, e^{\sigma |\operatorname{Im}\mu|}$, where $c$ is some positive constant.*

*Proof. *It is enough to note that $\sin(h\mu)/(h\mu)$ is an entire function of $\mu$ satisfying the estimate in the above lemma, and that the product of two entire functions is entire.

*Remark 7. *To avoid the first zero of $\sin(h\mu)/(h\mu)$, which occurs at $\mu = \pi/h$, we will take $h$ small enough that the interval of interest lies inside $(-\pi/h, \pi/h)$.

The use of the WSK theorem allows us to recover $C(1,\mu)$, $S(1,\mu)$, $C'(1,\mu)$, and $S'(1,\mu)$ from the samples of the functions in Theorem 6 at the points $\mu_k = k\pi/\sigma$, $k \in \mathbb{Z}$.

Hence $M(\mu)$, and therefore $\Delta(\mu)$, can be recovered. In practice, we truncate the sampling series at $|k| \le N$ for some positive integer $N$, large enough, so that $\Delta(\mu)$ can be reconstructed; its zeros are the square roots of the sought eigenvalues.
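To make the procedure concrete, here is a self-contained toy sketch of the regularized sampling idea on a scalar problem with a known answer: for $-y'' + y = \mu^2 y$, $y(0)=y(1)=0$, the characteristic function is $y(1,\mu) = \sin\sqrt{\mu^2-1}\big/\sqrt{\mu^2-1}$ and its first positive zero is $\mu = \sqrt{1+\pi^2}$. The choices $m = 2$, $h = 0.4$, $N = 50$ are ours, and the closed-form `g` stands in for the sampled solution; this is not the paper's vectorial code.

```python
import numpy as np

M_REG, H = 2, 0.4        # regularization power m and step h (first sinc zero at pi/H)
SIGMA = 1.0 + M_REG * H  # exponential type of the regularized function
N = 50                   # truncation order of the cardinal series

def g(mu):
    """Characteristic function y(1, mu) of -y'' + y = mu^2 y, y(0)=0, y'(0)=1."""
    w = np.sqrt(complex(mu * mu - 1.0))
    return 1.0 if abs(w) < 1e-12 else (np.sin(w) / w).real

def F(mu):
    """g times (sin(H mu)/(H mu))^m: lands in PW_SIGMA and decays like mu^{-m}."""
    s = 1.0 if abs(mu) < 1e-12 else np.sin(H * mu) / (H * mu)
    return g(mu) * s ** M_REG

ks = np.arange(-N, N + 1)
samples = np.array([F(k * np.pi / SIGMA) for k in ks])

def F_rec(mu):
    """Truncated WSK cardinal series reconstructed from the samples of F."""
    return float(np.dot(samples, np.sinc(SIGMA * mu / np.pi - ks)))

# Below pi/H ~ 7.85 the sinc factor never vanishes, so zeros of F are zeros of g.
a, b = 3.0, 3.6  # bracket around the first zero of g
for _ in range(80):
    c = 0.5 * (a + b)
    if F_rec(a) * F_rec(c) <= 0.0:
        b = c
    else:
        a = c
mu_star = 0.5 * (a + b)  # approximates sqrt(1 + pi^2)
```

The multiplication by the sinc power makes the samples decay rapidly, so a modest $N$ already reconstructs the regularized function accurately; bisection on the reconstruction then locates the square root of the first eigenvalue.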

Since the functions in Theorem 6 are in $PW_\sigma$, Jagerman's result [18] is applicable and yields the following better estimate.

Lemma 8 (truncation error). *Let denote the truncation of , . Then, for ,
**
where .*

Lemma 9. *Consider ,
**
where .*

The approximation of $C(1,\mu)$, $S(1,\mu)$, $C'(1,\mu)$, and $S'(1,\mu)$ by their truncated sampling series induces an approximation $\Delta_N(\mu)$ of the characteristic function $\Delta(\mu)$, whose zeros are the square roots of the eigenvalues of the problem.

Let $\mu^*$ denote a zero of $\Delta$, so that $(\mu^*)^2$ is an eigenvalue of the problem; then independent eigenfunctions associated with it can be obtained by using basis vectors of the null space of the matrix $M(\mu^*)$ as initial conditions $\begin{pmatrix} Y(0) \\ Y'(0) \end{pmatrix}$ for the differential equation $-Y'' + Q(x)Y = (\mu^*)^2 Y$, $0 \le x \le 1$.
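Computationally, a basis of this null space can be read off from the small singular values of the characteristic matrix; a minimal helper along these lines (the function name and tolerance are our own choices):

```python
import numpy as np

def null_space_basis(M, tol=1e-8):
    """Orthonormal basis of the null space of M via the SVD.

    Columns are candidate initial-condition vectors (Y(0); Y'(0)); the number
    of columns is the multiplicity of the corresponding eigenvalue."""
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj().T  # rows of Vh past the rank span the null space
```

For instance, `null_space_basis(np.diag([1.0, 0.0]))` returns a single column spanning the second coordinate axis, reporting multiplicity one.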

#### 4. Numerical Examples

In this section we will illustrate the power of the regularized sampling method as applied to vectorial Sturm-Liouville problems with parameter dependent boundary conditions. We will take , and a precision of , for the first three examples, involving two-dimensional SLPs. We will also work out two three-dimensional SLPs, one of them involving parameter dependent boundary conditions. In these last two examples we take different values of , namely, , and , and take and a precision of . The reported multiplicities of the eigenvalues are just the dimensions of the null spaces of the corresponding matrices $M(\mu^*)$.

*Example 1 (Chanane [1], 1D-version taken from Binding and Browne [19]). *Consider
where . The first three eigenvalues were obtained as , , and putting them at about from the exact eigenvalues. All these are double eigenvalues.

*Example 2. *Consider

The first four eigenvalues were obtained as , , , and . Their multiplicity is two. Figure 1 illustrates the graph of the characteristic function.

In the next example we change the boundary conditions in Example 2 to a parameter dependent one.

*Example 3. *Consider

The first ten eigenvalues were obtained as , , , , , , , , , and . All these are simple eigenvalues. Figure 2 illustrates the graph of the characteristic function.

Next we consider three-dimensional vectorial SLPs, with different boundary conditions.

*Example 4. *Consider

Here, we will take , , and , and a precision of . Figure 3 illustrates the characteristic function over the range , while Figures 4 and 5 zoom into the regions containing the eigenvalues. Note that, in the range of interest , the graphs of , , and are on top of each other. A precision of can be obtained with just . It appears clearly that in this example we have a simple eigenvalue and a double eigenvalue , followed by a simple eigenvalue and a double eigenvalue (Figures 7, 8, 9, and 10). To obtain the double eigenvalues we look for the roots of and then evaluate , which happened in each case to be of the order of . Table 1 illustrates these as a function of , the number of sampling points.

The first few coefficients in the cardinal series expansion of are given as follows: The above data have been reported with only a few digits.

*Example 5. *Consider
where

Here, we will take , , and , and a precision of . Figure 6 illustrates the characteristic function over the range . In this range, the graphs of , , and are on top of each other. In this example the first seven eigenvalues , , are all simple. Tables 2(a) and 2(b) illustrate these as a function of , the number of sampling points.
