
Abstract and Applied Analysis

Volume 2011 (2011), Article ID 610232, 30 pages

http://dx.doi.org/10.1155/2011/610232

## Discontinuous Sturm-Liouville Problems and Associated Sampling Theories

M. M. Tharwat^{1,2}

^{1}Department of Mathematics, University College, Umm Al-Qura University, Makkah, Saudi Arabia

^{2}Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt

Received 23 May 2011; Revised 14 August 2011; Accepted 18 August 2011

Academic Editor: Yuming Shi

Copyright © 2011 M. M. Tharwat. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper investigates the sampling analysis associated with discontinuous Sturm-Liouville problems with eigenvalue parameters in two boundary conditions and with transmission conditions at the point of discontinuity. We closely follow the analysis derived by Fulton (1977) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green's function as well as the eigenfunction expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green's functions. In the special case when the problem is continuous, the obtained results coincide with the corresponding results in the work of Annaby and Tharwat (2006).

#### 1. Introduction

The recovery of entire functions from their values at a discrete sequence of points is an important problem from both mathematical and practical points of view. For instance, in signal processing one needs to reconstruct (recover) a signal (function) from its values at a sequence of samples. If this aim is achieved, then an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band limited, the sampling process can be done via the celebrated Whittaker, Shannon, and Kotel’nikov (WKS) sampling theorem [1–3]. By a band-limited signal with band width $\sigma$, $\sigma>0$, that is, a signal containing no frequencies higher than $\sigma/2\pi$ cycles per second (cps), we mean a function in the Paley-Wiener space $PW_{\sigma}^{2}$ of entire functions of exponential type at most $\sigma$ which are $L^{2}(\mathbb{R})$-functions when restricted to $\mathbb{R}$. This space is characterized by the following relation, which is due to Paley and Wiener [4, 5]:

$$PW_{\sigma}^{2}=\left\{f(t)=\int_{-\sigma}^{\sigma}g(x)e^{ixt}\,dx:\ g\in L^{2}(-\sigma,\sigma)\right\}.$$

Now the WKS sampling theorem [6, 7] states the following.

Theorem 1.1 (WKS). *If $f\in PW_{\sigma}^{2}$, then it is completely determined from its values at the points $t_k=k\pi/\sigma$, $k\in\mathbb{Z}$, by means of the formula

$$f(t)=\sum_{k=-\infty}^{\infty}f(t_k)\,\mathrm{sinc}(\sigma t-k\pi),\quad t\in\mathbb{R},\tag{1.2}$$

where

$$\mathrm{sinc}(t)=\begin{cases}\dfrac{\sin t}{t},& t\neq 0,\\ 1,& t=0.\end{cases}$$

The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of $\mathbb{R}$, uniformly convergent on $\mathbb{R}$, and convergent in the norm of $L^{2}(\mathbb{R})$; see [6, 8, 9].*
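As a quick numerical illustration of the cardinal series (1.2), the following sketch reconstructs the band-limited signal $f(t)=\mathrm{sinc}^{2}(t)$, whose bandwidth is $2\pi$, from its samples at the half-integers. The signal, the truncation level `N`, and the test points are illustrative choices, not taken from the paper.

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def f(t):
    # sinc(t)^2 is band limited with bandwidth sigma = 2*pi, so its
    # Nyquist samples sit at t_k = k*pi/sigma = k/2.
    return sinc(t) ** 2

def wks_reconstruct(t, N=200):
    # Truncated WKS cardinal series: sum_k f(k/2) * sinc(2t - k).
    return sum(f(k / 2) * sinc(2 * t - k) for k in range(-N, N + 1))

for t in (0.3, 1.7, -2.45):
    assert abs(wks_reconstruct(t) - f(t)) < 1e-3
```

The slow decay of the sinc kernel makes truncation error the practical obstacle in general; the fast decay of $\mathrm{sinc}^{2}$ is what keeps the truncated sum accurate here.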

The WKS sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is important from both practical and theoretical points of view. The following theorem, known in some literature as the Paley-Wiener theorem [5], gives a sampling theorem for a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [10] and Kadec [11], it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [6, 7] for more details.

Theorem 1.2 (Paley and Wiener). *Let $\{t_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers satisfying

$$\sup_{k\in\mathbb{Z}}\left|t_k-k\right|<\frac{1}{4},$$

and let $G(t)$ be the entire function defined by the canonical product

$$G(t)=(t-t_0)\prod_{k=1}^{\infty}\left(1-\frac{t}{t_k}\right)\left(1-\frac{t}{t_{-k}}\right).$$

Then, for any $f\in PW_{\pi}^{2}$,

$$f(t)=\sum_{k=-\infty}^{\infty}f(t_k)\frac{G(t)}{G'(t_k)(t-t_k)}.\tag{1.6}$$

The series (1.6) converges uniformly on compact subsets of $\mathbb{R}$.*

The WKS sampling theorem is a special case of this theorem because, if we choose $t_k=k$, then $G(t)=\sin(\pi t)/\pi$ and (1.6) reduces to (1.2). Expansion (1.6) is of Lagrange-type interpolation.
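For uniform nodes this reduction can be checked directly: with $t_k=k$ the canonical product is $G(t)=\sin(\pi t)/\pi$, $G'(k)=(-1)^k$, and the Lagrange kernel $G(t)/(G'(t_k)(t-t_k))$ collapses to a shifted sinc. A minimal numerical check (the nodes and evaluation points are arbitrary):

```python
import math

def G(t):
    # Canonical product for the uniform nodes t_k = k: sin(pi*t)/pi.
    return math.sin(math.pi * t) / math.pi

def lagrange_kernel(t, k):
    # G(t) / (G'(t_k) * (t - t_k)) with G'(k) = cos(pi*k) = (-1)^k.
    return G(t) / (((-1) ** k) * (t - k))

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# The Lagrange kernel at node k equals the WKS kernel sinc(t - k).
for t in (0.37, 2.9, -1.25):
    for k in (-3, 0, 4):
        assert abs(lagrange_kernel(t, k) - sinc(t - k)) < 1e-12
```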

The second extension of the WKS sampling theorem is the theorem of Kramer [12]. In this theorem sampling representations were given for integral transforms whose kernels are more general than the exponential kernel $e^{ixt}$.

Theorem 1.3 (Kramer). *Let $I$ be a finite closed interval, $K(x,t):I\times\mathbb{R}\to\mathbb{C}$ a function continuous in $t$ such that $K(\cdot,t)\in L^{2}(I)$ for all $t\in\mathbb{R}$, and let $\{t_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers such that $\{K(\cdot,t_k)\}_{k\in\mathbb{Z}}$ is a complete orthogonal set in $L^{2}(I)$. Suppose that

$$f(t)=\int_{I}g(x)K(x,t)\,dx,\qquad g\in L^{2}(I).$$

Then

$$f(t)=\sum_{k=-\infty}^{\infty}f(t_k)\frac{\int_{I}K(x,t)\overline{K(x,t_k)}\,dx}{\left\|K(\cdot,t_k)\right\|_{L^{2}(I)}^{2}}.\tag{1.9}$$

Series (1.9) converges uniformly wherever $\left\|K(\cdot,t)\right\|_{L^{2}(I)}$, as a function of $t$, is bounded.*

Again Kramer’s theorem is a generalization of the WKS theorem: if we take $K(x,t)=e^{ixt}$, $I=[-\sigma,\sigma]$, and $t_k=k\pi/\sigma$, then (1.9) reduces to (1.2).
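This reduction can also be seen numerically: with the kernel $K(x,t)=e^{ixt}$ on $I=[-\pi,\pi]$ and integer sample points, the Kramer reconstruction functions, computed here by trapezoidal quadrature, agree with the sinc kernel of (1.2). The quadrature rule and step count are illustrative choices.

```python
import cmath
import math

def K(x, t):
    # Kramer kernel K(x, t) = e^{ixt} on I = [-pi, pi].
    return cmath.exp(1j * x * t)

def S_k(t, k, n=4000):
    # Kramer reconstruction function: <K(., t), K(., k)> / ||K(., k)||^2,
    # approximated by the composite trapezoidal rule on [-pi, pi].
    a, b = -math.pi, math.pi
    h = (b - a) / n
    vals = [K(a + i * h, t) * K(a + i * h, k).conjugate() for i in range(n + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return (integral / (2 * math.pi)).real   # ||K(., k)||^2 = 2*pi

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

assert abs(S_k(0.6, 0) - sinc(0.6)) < 1e-5
assert abs(S_k(2.3, 1) - sinc(2.3 - 1)) < 1e-5
```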

The relationship between both extensions of the WKS sampling theorem has been investigated extensively. Starting from a function-theory approach, cf. [13], it is proved in [14] that if the kernel $K(x,t)$ satisfies some analyticity conditions, then Kramer’s sampling formula (1.9) turns out to be a Lagrange interpolation one; see also [15–17]. In another direction, it is shown that Kramer’s expansion (1.9) could be written as a Lagrange-type interpolation formula if $K(x,t)$ and $t_k$ are extracted from ordinary differential operators; see the survey [18] and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with second-order eigenvalue problems with an eigenparameter appearing in the boundary conditions and with an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works in the direction of sampling associated with eigenproblems having an eigenparameter in the boundary conditions are few; see, for example, [19, 20]. Papers on sampling with discontinuous eigenproblems are also few; see [21–23]. However, sampling theories associated with eigenproblems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist as far as we know. Our investigation will be the first in that direction, introducing a good example. To achieve our aim we will briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green’s function, respectively.

#### 2. The Eigenvalue Problem

In this section we define our boundary value problem and state some of its properties. Consider the boundary value problem with boundary conditions and transmission conditions where is a complex spectral parameter; for , for ; and are given real numbers; is a given real-valued function, which is continuous in and and has a finite limit ; , () are real numbers; , (); and are given by In some literature conditions (2.4) are called compatibility conditions; see, for example, [24]. To formulate an operator-theoretic approach to problem (2.1)–(2.4) we define the Hilbert space with an inner product where

and , . For convenience we put

For function , which is defined on and has finite limit , by and we denote the functions which are defined on and , respectively.

In the following we will define the minimal closed operator in associated with the differential expression , cf. [25, 26].

Let be the set of all such that , are absolutely continuous in , and . Define the operator by The eigenvalues and the eigenfunctions of the problem (2.1)–(2.4) are defined as the eigenvalues and the first components of the corresponding eigenelements of the operator , respectively.

Theorem 2.1. *Let . Then, the operator is symmetric.*

*Proof. *For
By two partial integrations we obtain
where, as usual, by we denote the Wronskian of the functions and
Since and satisfy the boundary conditions (2.2)-(2.3) and the transmission conditions (2.4), we get
Finally, substituting (2.15) into (2.13), we have
thus, the operator is Hermitian. The symmetry of follows from the well-known fact that is dense in ; see, for example, [24].

Corollary 2.2. *All eigenvalues of the problem (2.1)–(2.4) are real.*

We can now assume that all eigenfunctions of the problem (2.1)–(2.4) are real valued.

Corollary 2.3. *Let and be two different eigenvalues of the problem (2.1)–(2.4). Then the corresponding eigenfunctions and of this problem are orthogonal in the sense of
*

*Proof. *Formula (2.17) follows immediately from the orthogonality of corresponding eigenelements
in the Hilbert space .

Now, we will construct a special fundamental system of solutions of the equation (2.1) for not being an eigenvalue. Let us consider the following initial value problem: By virtue of Theorem 1.5 in [27] this problem has a unique solution , which is an entire function of for each fixed . Similarly, employing the same method as in the proof of Theorem 1.5 in [27], we see that the problem has a unique solution , which is an entire function of the parameter for each fixed .

Now the functions and are defined in terms of and as follows: the initial value problem, which contains entire functions of the eigenparameter (on the right-hand side), has a unique solution for each .

Similarly, the following problem also has a unique solution :

Since the Wronskians are independent of the variable () and and are entire functions of the parameter for each (), the functions are entire functions of the parameter .

Lemma 2.4. *If the condition is satisfied, then the equality holds for each .*

*Proof. *Taking into account (2.24) and (2.26), a short calculation gives , so for each .

Corollary 2.5. *The zeros of the functions and coincide.*

Let us construct two basic solutions of (2.1) as By virtue of (2.24) and (2.26) these solutions satisfy both transmission conditions (2.4).

We now introduce the characteristic function as

Theorem 2.6. *The eigenvalues of the problem (2.1)–(2.4) coincide with the zeros of the function .*

*Proof. *Let . Then , and so the functions and are linearly dependent, that is,
Consequently, satisfies the boundary condition (2.3), so the function is an eigenfunction of the problem (2.1)–(2.4) corresponding to the eigenvalue .

Now let be any eigenfunction corresponding to the eigenvalue , but . Then the functions are linearly independent on . Thus, may be represented in the form
where at least one of the constants , , is not zero.

Consider the equations
as a homogeneous system of linear equations in the variables , , and taking into account (2.24) and (2.26), it follows that the determinant of this system is
Thus, the system (2.32) has only the trivial solution , , and so we get a contradiction, which completes the proof.
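To make the role of the characteristic function concrete, here is a sketch for a classical continuous Sturm-Liouville analogue without transmission conditions: $-y''=\lambda y$ on $[0,\pi]$ with $y(0)=0$ and $y(\pi)+y'(\pi)=0$. These boundary conditions and the resulting function `Delta` are illustrative assumptions, not the paper's problem; the point is only that eigenvalues are located as zeros of an entire function of $\lambda$.

```python
import math

def Delta(lam):
    # Characteristic function of the illustrative problem: with s = sqrt(lam)
    # and phi(x) = sin(s*x)/s (so phi(0) = 0, phi'(0) = 1), we evaluate the
    # right-hand boundary form phi(pi) + phi'(pi).
    s = math.sqrt(lam)
    return math.sin(s * math.pi) / s + math.cos(s * math.pi)

def bisect(f, a, b, iters=200):
    # Simple bisection; assumes f changes sign on [a, b].
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Delta changes sign on (0.3, 2.0), so the smallest eigenvalue lies there.
lam1 = bisect(Delta, 0.3, 2.0)
assert abs(Delta(lam1)) < 1e-9
```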

Lemma 2.7. *If is an eigenvalue, then and are linearly dependent.*

*Proof. *Since is an eigenvalue, then from Theorem 2.6 we have , . Therefore
for some , . Now, we must show that . Suppose, if possible, that . Taking into account the definitions of the solutions and , , from the equalities (2.34) we get
Since , , and it follows that
By the same procedure from the equality we can derive that
From the fact that is a solution of (2.1) on and satisfies the initial conditions (2.36) and (2.37), it follows that identically on , by the well-known existence and uniqueness theorem for initial value problems of ordinary linear differential equations.

By using (2.24), (2.36), and (2.37) we may also find
By a similar argument for , it follows that identically on . Therefore identically on . But this contradicts (2.20), which completes the proof.

Corollary 2.8. *If is an eigenvalue, then both and are eigenfunctions corresponding to this eigenvalue.*

Lemma 2.9. *If the condition is satisfied, then all eigenvalues are simple zeros of .*

*Proof. *Since
then
for any . Since
for some , then
Substituting (2.42) into (2.40) and letting we get
Now putting
into (2.43), it yields , which completes the proof.

If , denote the zeros of , then the three-component vectors are the corresponding eigenvectors of the operator satisfying the orthogonality relation Here will be the sequence of eigenfunctions of (2.1)–(2.4) corresponding to the eigenvalues . We denote by the normalized eigenvectors Because of the simplicity of the eigenvalues, we can find nonzero constants such that To study the completeness of the eigenvectors of , and hence the completeness of the eigenfunctions of (2.1)–(2.4), we construct the resolvent of as well as Green’s function of problem (2.1)–(2.4). We assume without any loss of generality that is not an eigenvalue of . Otherwise, from the discreteness of the eigenvalues, we can find a real number such that for all and replace the eigenparameter by . Now let not be an eigenvalue of and consider the inhomogeneous problem , where is the identity operator. Since then we have Now, we can represent the general solution of (2.51) in the following form: Applying the method of variation of constants to (2.53), the functions , and , satisfy the linear system of equations Since is not an eigenvalue and , , each of the linear systems in (2.54) has a unique solution, which leads to where , and are arbitrary constants. Substituting (2.55) into (2.53), we obtain the solution of (2.51) Then from (2.52) and the transmission conditions (2.4) we get Then (2.56) can be written as Hence, we have where is the unique Green’s function of problem (2.1)–(2.4). Obviously is a meromorphic function of , for every , which has simple poles only at the eigenvalues. Although Green’s function looks as simple as that of Sturm-Liouville problems, cf., for example, [28], it is rather complicated because of the transmission conditions; see the example at the end of this paper.
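The variation-of-constants construction above can be mimicked, in the continuous case without transmission conditions, by the classical two-solution formula for Green's function: take a solution $\varphi$ satisfying the left boundary condition, a solution $\chi$ satisfying the right one, and divide by their (constant) Wronskian. The sketch below does this for the illustrative problem $-u''-\lambda u=f$ on $[0,\pi]$ with Dirichlet conditions, and checks the resulting integral against a closed-form solution; the equation, boundary conditions, and right-hand side are all assumptions for illustration.

```python
import math

lam = 2.5                      # not an eigenvalue (those are n^2)
s = math.sqrt(lam)

def phi(x):                    # solves -u'' = lam*u with phi(0) = 0
    return math.sin(s * x)

def chi(x):                    # solves -u'' = lam*u with chi(pi) = 0
    return math.sin(s * (math.pi - x))

C = s * math.sin(s * math.pi)  # minus the Wronskian W(phi, chi); constant in x

def trapz(g, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

def u_green(x, f):
    # u(x) = int_0^pi G(x, xi) f(xi) dxi with G(x, xi) = phi(min)*chi(max)/C;
    # the integral is split at xi = x, where G has a corner.
    left = trapz(lambda xi: phi(xi) * f(xi), 0.0, x)
    right = trapz(lambda xi: chi(xi) * f(xi), x, math.pi)
    return (chi(x) * left + phi(x) * right) / C

def u_exact(x):
    # Closed-form solution of -u'' - lam*u = x, u(0) = u(pi) = 0.
    return -x / lam + math.pi * math.sin(s * x) / (lam * math.sin(s * math.pi))

for x in (0.7, 1.9, 2.8):
    assert abs(u_green(x, lambda xi: xi) - u_exact(x)) < 1e-6
```

The corner of $G$ at $\xi=x$ (the unit jump in the derivative) is exactly what produces the delta function on the right-hand side, which is why the quadrature must be split there.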

Lemma 2.10. *The operator is self-adjoint in .*

*Proof. *Since is a symmetric densely defined operator, then it is sufficient to show that the deficiency spaces are the null spaces and hence . Indeed, if and is a nonreal number, then taking
implies that . Since satisfies conditions (2.2)–(2.4), then . Now we prove that the inverse of exists. If , then
Since , we have . Thus , that is, . Then , the resolvent operator of , exists. Thus
Take . The domains of and are exactly . Consequently the ranges of and are also . Hence the deficiency spaces of are
Hence is self-adjoint.

The next theorem is an eigenfunction expansion theorem, which is similar to that established by Fulton in [29].

Theorem 2.11. *
(i) For **(ii) For **
with the series being absolutely and uniformly convergent in the first component for on and absolutely convergent in the second component.*

*Proof. *The proof is similar to that in [29, pages 298-299].

#### 3. Asymptotic Formulas of Eigenvalues and Eigenfunctions

Now we derive first- and second-order asymptotics of the eigenvalues and eigenfunctions similar to the classical techniques of [27, 30] and [29], see also [25, 26]. We begin by proving some lemmas.

Lemma 3.1. *Let be the solutions of (2.1) defined in Section 2, and let . Then the following integral equations hold for and :
*

*Proof. *To prove the lemma it is enough to substitute and for and in the integral terms of (3.1) and (3.2), respectively, and integrate by parts twice.

Lemma 3.2. *Let , . Then the functions have the following asymptotic representations for , which hold uniformly for ():
**
if ,
**
if .*

*Proof. *Since the proof of the formulae for is identical to Titchmarsh's proof of similar results for (see [27, Lemma 1.7, pages 9-10]), we state them without proof here. Therefore we will prove only the formulas for . Let . Then according to (3.3)
Substituting (3.7) into (3.2) (for ), we get
Multiplying (3.8) by and denoting
we get
Denoting from the last formula, it follows that
for some . From this, it follows that as , so
Substituting (3.12) into the integral on the right of (3.8) yields (3.4) for . The case of (3.4) follows by applying the same procedure as in the case . The case is proved analogously.

Lemma 3.3. *Let , . Then the characteristic function has the following asymptotic representations.**Case 1. *If and , then
*Case 2. *If and , then
*Case 3. *If and , then
*Case 4. *If and , then

*Proof. *The proof is immediate by substituting (3.4) and (3.6) into the representation

Corollary 3.4. *The eigenvalues of the problem (2.1)–(2.4) are bounded below.*

*Proof. *Putting () in the above formulae, it follows that as . Hence, for negative and sufficiently large.

Now we can obtain the asymptotic approximation formula for the eigenvalues of the considered problem (2.1)–(2.4). Since the eigenvalues coincide with the zeros of the entire function , it follows that they have no finite limit. Moreover, we know from Corollaries 2.2 and 3.4 that all eigenvalues are real and bounded below. Therefore, we may renumber them as , listed according to their multiplicity.

Theorem 3.5. *The eigenvalues , , of the problem (2.1)–(2.4) have the following asymptotic representation for , with .**Case 1. *If and , then
*Case 2. *If and , then
*Case 3. *If and , then
*Case 4. *If and , then

*Proof. *We will only consider the first case. From (3.13) we have
We will apply the well-known Rouché theorem, which asserts that if and are analytic inside and on a closed contour and on , then and have the same number of zeros inside , provided that each zero is counted according to its multiplicity. It follows that has the same number of zeros inside the contour as the leading term in (3.22). If , are the zeros of and , we have
for sufficiently large , where , for sufficiently large . By putting in (3.22) we have , so the proof is completed for Case 1. The proof for the other cases is similar.
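The Rouché-type localization can be watched numerically on a classical analogue: for $-y''=\lambda y$ with $y(0)=0$ and $y(\pi)+y'(\pi)=0$ (an illustrative choice, not the paper's problem), the scaled characteristic function $D(s)=\sin(s\pi)+s\cos(s\pi)$, $s=\sqrt{\lambda}$, has its $k$-th large zero at $s_k=k+\tfrac{1}{2}+O(1/k)$.

```python
import math

def D(s):
    # Scaled characteristic function of the illustrative problem.
    return math.sin(s * math.pi) + s * math.cos(s * math.pi)

def bisect(f, a, b, iters=100):
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# D changes sign on each interval [k + 1/2, k + 1]; the root drifts toward
# k + 1/2 at rate O(1/k), matching the first-order asymptotics.
for k in (5, 20, 80):
    s_k = bisect(D, k + 0.5, k + 1.0)
    assert abs(s_k - (k + 0.5)) < 1.0 / k
```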

Then, from (3.3)–(3.6) (for ) and the above theorem, the asymptotic behavior of the eigenfunctions of (2.1)–(2.4) is given by , . All these asymptotic formulae hold uniformly for .

#### 4. The Sampling Theorem

In this section we derive two sampling theorems associated with problem (2.1)–(2.4). For convenience we may assume that the eigenvectors of are real valued.

Theorem 4.1. *Consider the boundary value problem (2.1)–(2.4), and let
**
be the solution defined above. Let and
**
Then is an entire function of exponential type 2 that can be reconstructed from its values at the points via the sampling formula
**
The series (4.3) converges absolutely on and uniformly on compact subsets of . Here is the entire function defined in (2.29).*
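As a sketch of what such a Lagrange-type sampling expansion looks like in the classical continuous case, take the Dirichlet problem $-y''=\lambda y$ on $[0,\pi]$ with kernel $\varphi(x,\lambda)=\sin(\sqrt{\lambda}\,x)/\sqrt{\lambda}$, eigenvalues $\lambda_n=n^2$, characteristic function $\Delta(\lambda)=\sin(\sqrt{\lambda}\,\pi)/\sqrt{\lambda}$, and $\Delta'(n^2)=\pi(-1)^n/(2n^2)$. This stand-in problem and the test signal are illustrative assumptions, not the discontinuous problem treated in the paper.

```python
import math

def phi(x, lam):
    s = math.sqrt(lam)
    return math.sin(s * x) / s

def Delta(lam):
    s = math.sqrt(lam)
    return math.sin(s * math.pi) / s

def trapz(g, a, b, n=4000):
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

f = lambda x: math.sin(x) + 0.5 * math.sin(3 * x)   # finite sine expansion

def F(lam):
    # The transform F(lam) = int_0^pi f(x) * phi(x, lam) dx.
    return trapz(lambda x: f(x) * phi(x, lam), 0.0, math.pi)

def F_sampled(lam, N=10):
    # Lagrange-type sampling series:
    #   sum_n F(n^2) * Delta(lam) / ((lam - n^2) * Delta'(n^2)).
    total = 0.0
    for n in range(1, N + 1):
        dDelta = math.pi * (-1) ** n / (2 * n * n)
        total += F(n * n) * Delta(lam) / ((lam - n * n) * dDelta)
    return total

for lam in (2.3, 5.7, 12.1):
    assert abs(F_sampled(lam) - F(lam)) < 1e-4
```

Since $f$ has only two nonzero sine coefficients, the series is effectively finite here and the truncated sum already matches the transform to quadrature accuracy.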

*Proof. *Relation (4.2) can be rewritten as an inner product of as follows
where
Both and can be expanded in terms of the orthogonal basis of eigenfunctions, that is,
where and are the Fourier coefficients
Applying Parseval’s identity to (4.4) and using (4.7), we obtain
Now we calculate and . Let not be an eigenvalue and . To prove (4.3) we need to show that
By the definition of the inner product of , we have
Since