Abstract and Applied Analysis
Volume 2011 (2011), Article ID 610232, 30 pages
Discontinuous Sturm-Liouville Problems and Associated Sampling Theories
M. M. Tharwat
1Department of Mathematics, University College, Umm Al-Qura University, Makkah, Saudi Arabia
2Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt
Received 23 May 2011; Revised 14 August 2011; Accepted 18 August 2011
Academic Editor: Yuming Shi
Copyright © 2011 M. M. Tharwat. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper investigates the sampling analysis associated with discontinuous Sturm-Liouville problems with eigenvalue parameters in two boundary conditions and with transmission conditions at the point of discontinuity. We closely follow the analysis derived by Fulton (1977) to establish the needed relations for the derivations of the sampling theorems including the construction of Green's function as well as the eigenfunction expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green's functions. In the special case, when our problem is continuous, the obtained results coincide with the corresponding results in the work of Annaby and Tharwat (2006).
The recovery of entire functions from a discrete sequence of points is an important problem from both mathematical and practical points of view. For instance, in signal processing one needs to reconstruct (recover) a signal (function) from its values at a sequence of samples. If this aim is achieved, an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band limited, the sampling process can be carried out via the celebrated Whittaker-Shannon-Kotel'nikov (WKS) sampling theorem [1–3]. By a band-limited signal with band width $\sigma$, $\sigma>0$, that is, a signal containing no frequencies higher than $\sigma/2\pi$ cycles per second (cps), we mean a function in the Paley-Wiener space $PW_\sigma^2$ of entire functions of exponential type at most $\sigma$ which are $L^2$-functions when restricted to $\mathbb{R}$. This space is characterized by the following relation, which is due to Paley and Wiener [4, 5]: $f\in PW_\sigma^2$ if and only if there is $g\in L^2(-\sigma,\sigma)$ such that $f(t)=\frac{1}{\sqrt{2\pi}}\int_{-\sigma}^{\sigma}g(x)e^{ixt}\,dx$. The WKS sampling theorem [6, 7] states the following.
Theorem 1.1 (WKS). If $f\in PW_\sigma^2$, then $f$ is completely determined from its values at the points $t_k=k\pi/\sigma$, $k\in\mathbb{Z}$, by means of the formula $f(t)=\sum_{k=-\infty}^{\infty}f(t_k)\,\operatorname{sinc}\sigma(t-t_k)$, $t\in\mathbb{C}$, where $\operatorname{sinc}t:=\sin t/t$ for $t\neq 0$ and $\operatorname{sinc}0:=1$. The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of $\mathbb{C}$, uniformly convergent on $\mathbb{R}$, and convergent in the norm of $L^2(\mathbb{R})$; see [6, 8, 9].
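As a quick sanity check on Theorem 1.1, the truncated sampling series can be evaluated numerically. The following sketch is illustrative only: the test signal, the band width $\sigma=\pi$, and the truncation length are assumptions chosen for the demo, not taken from the paper.

```python
import math

SIGMA = math.pi          # assumed band width; sample points are t_k = k
N = 200                  # truncation of the (infinite) sampling series

def sinc(t):
    # sinc t = sin t / t, with the removable singularity filled in at 0
    return 1.0 if t == 0 else math.sin(t) / t

def signal(t):
    # test signal sinc(pi t / 2), band-limited to [-pi/2, pi/2] (so in PW_pi)
    return sinc(math.pi * t / 2.0)

def wks_reconstruct(t):
    # truncated WKS series: sum_k f(k pi/sigma) sinc(sigma (t - k pi/sigma))
    step = math.pi / SIGMA
    return sum(signal(k * step) * sinc(SIGMA * (t - k * step))
               for k in range(-N, N + 1))

if __name__ == "__main__":
    worst = max(abs(signal(t / 10.0) - wks_reconstruct(t / 10.0))
                for t in range(-50, 51))
    print(f"max truncation error on [-5, 5]: {worst:.2e}")
```

Because the truncation tail decays, the error on a fixed compact interval shrinks as N grows, in line with the uniform convergence claimed on compact sets.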
The WKS sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is important from both practical and theoretical points of view. The following theorem, which is known in some of the literature as the Paley-Wiener theorem, gives a sampling theorem for a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson and Kadec, it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [6, 7] for more details.
Theorem 1.2 (Paley and Wiener). Let $\{t_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers satisfying $\sup_{k\in\mathbb{Z}}|t_k-k|<1/4$, and let $G(t)$ be the entire function defined by the canonical product $G(t)=(t-t_0)\prod_{k=1}^{\infty}\bigl(1-\tfrac{t}{t_k}\bigr)\bigl(1-\tfrac{t}{t_{-k}}\bigr)$. Then, for any $f\in PW_\pi^2$, $f(t)=\sum_{k=-\infty}^{\infty}f(t_k)\dfrac{G(t)}{G'(t_k)(t-t_k)}$. The series (1.6) converges uniformly on compact subsets of $\mathbb{C}$.
The WKS sampling theorem is a special case of this theorem: if we choose $t_k=k$, then $G(t)=t\prod_{k=1}^{\infty}(1-t^2/k^2)=\sin\pi t/\pi$ and $G'(t_k)=(-1)^k$, so (1.6) reduces to (1.2) with $\sigma=\pi$. Expansion (1.6) is of Lagrange-type interpolation.
The second extension of the WKS sampling theorem is the theorem of Kramer. In this theorem, sampling representations are given for integral transforms whose kernels are more general than the exponential kernel $e^{ixt}$.
Theorem 1.3 (Kramer). Let $I$ be a finite closed interval, $K(\cdot,\cdot):I\times\mathbb{C}\to\mathbb{C}$ a function continuous in $t$ such that $K(\cdot,t)\in L^2(I)$ for all $t\in\mathbb{C}$, and let $\{t_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers such that $\{K(\cdot,t_k)\}_{k\in\mathbb{Z}}$ is a complete orthogonal set in $L^2(I)$. Suppose that $f(t)=\int_I g(x)K(x,t)\,dx$ for some $g\in L^2(I)$. Then $f(t)=\sum_{k\in\mathbb{Z}}f(t_k)\dfrac{\int_I K(x,t)\overline{K(x,t_k)}\,dx}{\|K(\cdot,t_k)\|^2_{L^2(I)}}$. Series (1.9) converges uniformly wherever $\|K(\cdot,t)\|_{L^2(I)}$, as a function of $t$, is bounded.
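Kramer's theorem can be illustrated numerically with a concrete kernel. The sketch below uses $K(x,t)=\cos(tx)$ on $I=[0,\pi]$ with sampling points $t_k=k$, $k=0,1,2,\dots$, since $\{\cos(kx)\}_{k\ge 0}$ is a complete orthogonal set in $L^2(0,\pi)$; the test function $g(x)=x$ and the truncation length are assumptions made for the demo.

```python
import math

N = 50  # truncation of the sampling series

def f(t):
    # f(t) = integral_0^pi x cos(t x) dx, evaluated in closed form
    if t == 0:
        return math.pi ** 2 / 2.0
    return (math.pi * math.sin(math.pi * t) / t
            + (math.cos(math.pi * t) - 1.0) / t ** 2)

def kernel_inner(t, k):
    # <K(., t), K(., k)> = integral_0^pi cos(t x) cos(k x) dx, t non-integer
    if k == 0:
        return math.sin(math.pi * t) / t
    return 0.5 * (math.sin(math.pi * (t - k)) / (t - k)
                  + math.sin(math.pi * (t + k)) / (t + k))

def kramer_reconstruct(t):
    # Kramer series: sum_k f(t_k) <K(.,t), K(.,t_k)> / ||K(.,t_k)||^2
    # with ||cos(0 x)||^2 = pi and ||cos(k x)||^2 = pi/2 for k >= 1
    total = f(0) * kernel_inner(t, 0) / math.pi
    for k in range(1, N + 1):
        total += f(k) * kernel_inner(t, k) / (math.pi / 2.0)
    return total
```

Since the sample values $f(k)$ here are just the cosine-series coefficients of $g$ (up to normalization), the reconstruction converges rapidly at non-sample points.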
The relationship between the two extensions of the WKS sampling theorem has been investigated extensively. Starting from a function-theory approach, cf. , it is proved in  that if , , satisfies some analyticity conditions, then Kramer's sampling formula (1.9) turns out to be a Lagrange interpolation one; see also [15–17]. In another direction, it is shown that Kramer's expansion (1.9) can be written as a Lagrange-type interpolation formula if and are extracted from ordinary differential operators; see the survey  and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with second-order eigenvalue problems with an eigenparameter appearing in the boundary conditions, and also with an internal point of discontinuity, can likewise be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works on sampling associated with eigenproblems having an eigenparameter in the boundary conditions are few; see, for example, [19, 20]. Papers on sampling with discontinuous eigenproblems are also few; see [21–23]. However, sampling theories associated with eigenproblems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist, as far as we know. Our investigation will be the first in that direction, providing a concrete example. To achieve our aim we briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green's function, respectively.
2. The Eigenvalue Problem
In this section we define our boundary value problem and state some of its properties. Consider the boundary value problem with boundary conditions and transmission conditions where is a complex spectral parameter; for , for ; and are given real numbers; is a given real-valued function, which is continuous in and and has a finite limit ; , () are real numbers; , (); and are given by In some of the literature, conditions (2.4) are called compatibility conditions; see, for example, . To formulate a theoretical approach to problem (2.1)–(2.4) we define the Hilbert space with an inner product where
and , . For convenience we put
For a function , which is defined on and has a finite limit , by and we denote the functions defined on and , respectively.
Let be the set of all such that , are absolutely continuous in , and . Define the operator by The eigenvalues and the eigenfunctions of the problem (2.1)–(2.4) are defined as the eigenvalues and the first components of the corresponding eigenelements of the operator , respectively.
Theorem 2.1. Let . Then, the operator is symmetric.
Proof. For , two integrations by parts yield where, as usual, denotes the Wronskian of the functions and . Since and satisfy the boundary conditions (2.2)-(2.3) and the transmission conditions (2.4), we get Finally, substituting (2.15) into (2.13), we have thus the operator is Hermitian. The symmetry of follows from the well-known fact that is dense in ; see, for example, .
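The two integrations by parts in the proof above rest on Lagrange's identity for the Sturm-Liouville expression. As a hedged sketch, stated for the formal expression $\ell y := -y'' + q(x)y$ on a single interval (in the discontinuous setting it is applied on each subinterval, and the boundary terms are matched through the transmission conditions):

```latex
% Lagrange's identity: the boundary terms produced by two integrations by
% parts collect into Wronskians evaluated at the endpoints.
\int_a^b \bigl( (\ell f)\,\overline{g} - f\,\overline{\ell g} \bigr)\,dx
  = W\!\bigl(f, \overline{g}\bigr)(b) - W\!\bigl(f, \overline{g}\bigr)(a),
\qquad
W(f, g)(x) := f(x)\,g'(x) - f'(x)\,g(x).
```

When $f$ and $g$ satisfy the same self-adjoint boundary and transmission conditions, all these Wronskian terms cancel, which is exactly the mechanism used in the proof.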
Proof. Formula (2.17) follows immediately from the orthogonality of corresponding eigenelements in the Hilbert space .
Now we will construct a special fundamental system of solutions of equation (2.1) for not an eigenvalue. Let us consider the following initial value problem: By virtue of Theorem 1.5 in , this problem has a unique solution , which is an entire function of for each fixed . Similarly, employing the same method as in the proof of Theorem 1.5 in , we see that the problem has a unique solution , which is an entire function of the parameter for each fixed .
Now the functions and are defined in terms of and as follows: the initial value problem, which contains entire functions of the eigenparameter (on the right-hand side), has a unique solution for each .
Similarly, the following problem also has a unique solution :
Since the Wronskians are independent of the variable () and and are entire functions of the parameter for each (), the functions are entire functions of the parameter .
Lemma 2.4. If the condition is satisfied, then the equality holds for each .
Corollary 2.5. The zeros of the functions and coincide.
Now we introduce the characteristic function as
Proof. Let . Then , and so the functions and are linearly dependent; that is,
Consequently, satisfies the boundary condition (2.3), so the function is an eigenfunction of the problem (2.1)–(2.4) corresponding to the eigenvalue .
Now let be any eigenfunction corresponding to the eigenvalue with . Then the functions are linearly independent on . Thus may be represented in the form where at least one of the constants , , is nonzero.
Considering the equations as a homogeneous system of linear equations in the variables , , and taking into account (2.24) and (2.26), it follows that the determinant of this system is Thus the system (2.32) has only the trivial solution , , and we arrive at a contradiction, which completes the proof.
Lemma 2.7. If is an eigenvalue, then and are linearly dependent.
Proof. Since is an eigenvalue, from Theorem 2.6 we have , . Therefore
for some , . Now we must show that . Suppose, on the contrary, that . Taking into account the definitions of the solutions and , , from the equalities (2.34) we get
Since , , and it follows that
By the same procedure from the equality we can derive that
Since is a solution of (2.1) on and satisfies the initial conditions (2.36) and (2.37), it follows that identically on , by the well-known existence and uniqueness theorem for initial value problems of linear ordinary differential equations.
By using (2.24), (2.36), and (2.37) we may also find that, for , identically on . Therefore identically on . But this contradicts (2.20), which completes the proof.
Corollary 2.8. If is an eigenvalue, then both and are eigenfunctions corresponding to this eigenvalue.
Lemma 2.9. If the condition is satisfied, then all eigenvalues are simple zeros of .
If , denote the zeros of , then the three-component vectors are the corresponding eigenvectors of the operator , satisfying the orthogonality relation Here is the sequence of eigenfunctions of (2.1)–(2.4) corresponding to the eigenvalues . We denote by the normalized eigenvectors Because the eigenvalues are simple, we can find nonzero constants such that To study the completeness of the eigenvectors of , and hence the completeness of the eigenfunctions of (2.1)–(2.4), we construct the resolvent of as well as Green's function of problem (2.1)–(2.4). We assume, without loss of generality, that is not an eigenvalue of ; otherwise, by the discreteness of the eigenvalues, we can find a real number such that for all and replace the eigenparameter by . Now let not be an eigenvalue of , and consider the inhomogeneous problem where is the identity operator. Since we have Now we can represent the general solution of (2.51) in the form Applying the method of variation of constants to (2.53), the functions , and , satisfy the linear system of equations Since is not an eigenvalue and , , each of the linear systems in (2.54) has a unique solution, which leads to where , and are arbitrary constants. Substituting (2.55) into (2.53), we obtain the solution of (2.51): Then from (2.52) and the transmission conditions (2.4) we get so (2.56) can be written as Hence we have where is the unique Green's function of problem (2.1)–(2.4). Obviously, is a meromorphic function of for every , with simple poles only at the eigenvalues. Although this Green's function looks as simple as that of classical Sturm-Liouville problems, cf., for example, , it is rather complicated because of the transmission conditions; see the example at the end of this paper.
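For orientation, in the classical continuous case the resolvent kernel constructed above reduces to the familiar two-sided formula. The notation below ($\varphi$, $\chi$ for the solutions satisfying the left and right boundary conditions, $\omega(\lambda)$ for their Wronskian) is an assumption matching standard references, not necessarily the paper's:

```latex
% Classical two-sided Green's function of a regular Sturm-Liouville problem
% on [a, b]; \omega(\lambda) = W(\varphi, \chi) is independent of x.
G(x, \xi; \lambda) =
\frac{1}{\omega(\lambda)}
\begin{cases}
\varphi(x, \lambda)\,\chi(\xi, \lambda), & a \le x \le \xi \le b,\\[2pt]
\varphi(\xi, \lambda)\,\chi(x, \lambda), & a \le \xi \le x \le b,
\end{cases}
\qquad
\bigl(R(\lambda)f\bigr)(x) = \int_a^b G(x, \xi; \lambda)\, f(\xi)\, d\xi .
```

In the discontinuous setting of this paper the same structure persists piecewise, but the transmission conditions couple the pieces, which is why the resulting kernel is more complicated than this classical formula suggests.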
Lemma 2.10. The operator is self-adjoint in .
Proof. Since is a symmetric densely defined operator, it is sufficient to show that the deficiency spaces are the null spaces, so that . Indeed, if and is a nonreal number, then taking implies that . Since satisfies conditions (2.2)–(2.4), . Now we prove that the inverse of exists. If , then Since , we have . Thus , that is, . Then , the resolvent operator of , exists. Thus Take . The domains of and are exactly . Consequently, the ranges of and are also . Hence the deficiency spaces of are Hence is self-adjoint.
The next theorem is an eigenfunction expansion theorem, which is similar to that established by Fulton in .
Theorem 2.11. (i) For (ii) For with the series being absolutely and uniformly convergent in the first component for on and absolutely convergent in the second component.
Proof. The proof is similar to that in [29, pages 298-299].
3. Asymptotic Formulas of Eigenvalues and Eigenfunctions
Lemma 3.2. Let , . Then the functions have the following asymptotic representations for , which hold uniformly for (): if , if .
Proof. Since the proof of the formulas for is identical to Titchmarsh's proof of similar results for (see [27, Lemma 1.7, pages 9-10]), we state them here without proof. Therefore we prove only the formulas for . Let . Then, according to (3.3), Substituting (3.7) into (3.2) (for ), we get Multiplying (3.8) by and denoting we get Denoting from the last formula, it follows that for some . From this it follows that as , so Substituting (3.12) into the integral on the right of (3.8) yields (3.4) for . The case of (3.4) follows by the same procedure as in the case . The case is proved analogously.
Lemma 3.3. Let , . Then the characteristic function has the following asymptotic representations.
Case 1. If and , then
Case 2. If and , then
Case 3. If and , then
Case 4. If and , then
Proof. Putting () in the above formulas, it follows that as . Hence for negative and sufficiently large .
Now we can obtain the asymptotic approximation formula for the eigenvalues of problem (2.1)–(2.4). Since the eigenvalues coincide with the zeros of the entire function , they have no finite limit point. Moreover, we know from Corollaries 2.2 and 3.4 that all eigenvalues are real and bounded below. Therefore we may renumber them as , listed according to their multiplicity.
Theorem 3.5. The eigenvalues , , of the problem (2.1)–(2.4) have the following asymptotic representation for , with .
Case 1. If and , then
Case 2. If and , then
Case 3. If and , then
Case 4. If and , then
Proof. We consider only the first case. From (3.13) we have We apply the well-known Rouché theorem, which asserts that if and are analytic inside and on a closed contour , and on , then and have the same number of zeros inside , provided that each zero is counted according to its multiplicity. It follows that has the same number of zeros inside the contour as the leading term in (3.22). If , are the zeros of and , we have, for sufficiently large , where , for sufficiently large . By putting in (3.22) we obtain , so the proof is complete for Case 1. The proof for the other cases is similar.
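The statement that the eigenvalues are the zeros of the characteristic function, together with the quadratic growth of $\lambda_n$, can be checked numerically in a simplified model. The sketch below treats the continuous Dirichlet problem $-y''=\lambda y$ on $[0,\pi]$ with $q=0$ and no interior discontinuity (an assumption made purely for the demo, where $\lambda_n=n^2$ exactly): it computes $\Delta(\lambda)=y(\pi;\lambda)$ by shooting and locates its zeros by bisection.

```python
import math

def shoot(lam, steps=2000):
    """RK4 integration of y'' = -lam*y with y(0)=0, y'(0)=1; returns y(pi).

    y(pi; lam) plays the role of the characteristic function Delta(lam):
    it vanishes exactly when lam is a Dirichlet eigenvalue.
    """
    h = math.pi / steps
    y, v = 0.0, 1.0
    def deriv(y, v):
        return v, -lam * y
    for _ in range(steps):
        k1y, k1v = deriv(y, v)
        k2y, k2v = deriv(y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = deriv(y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = deriv(y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

def eigenvalue_near(n, tol=1e-10):
    """Bisection for the zero of Delta between (n - 1/2)^2 and (n + 1/2)^2."""
    lo, hi = (n - 0.5) ** 2, (n + 0.5) ** 2
    flo = shoot(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        fmid = shoot(mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return (lo + hi) / 2.0
```

In the discontinuous, eigenparameter-dependent setting of the paper the same shooting idea applies, but the characteristic function must be assembled from the piecewise solutions through the transmission conditions.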
4. The Sampling Theorem
Theorem 4.1. Consider the boundary value problem (2.1)–(2.4), and let be the solution defined above. Let and Then is an entire function of exponential type 2 that can be reconstructed from its values at the points via the sampling formula The series (4.3) converges absolutely on and uniformly on compact subsets of . Here is the entire function defined in (2.29).
Proof. Relation (4.2) can be rewritten as an inner product in as follows: where Both and can be expanded in terms of the orthogonal basis of eigenfunctions; that is, where and are the Fourier coefficients Applying Parseval's identity to (4.4) and using (4.7), we obtain Now we calculate and . Let not be an eigenvalue and . To prove (4.3) we need to show that By the definition of the inner product in , we have Since