Abstract

The eigenvalues of discontinuous Sturm-Liouville problems which contain an eigenparameter appearing linearly in two boundary conditions, as well as an internal point of discontinuity, are computed using the derivative sampling theorem and Hermite interpolation methods. We use recently derived estimates for the truncation and amplitude errors to carry out an error analysis of the proposed methods for computing the eigenvalues of discontinuous Sturm-Liouville problems. Numerical results indicating the high accuracy and effectiveness of these algorithms are presented. Moreover, it is shown that the proposed methods are significantly more accurate than those based on the classical sinc method.

1. Introduction

The mathematical modeling of many practical problems in mechanics and other areas of mathematical physics requires solutions of boundary value problems (see, for instance, [1–7]). It is well known that many topics in mathematical physics require the investigation of the eigenvalues and eigenfunctions of Sturm-Liouville-type boundary value problems. The literature on computing eigenvalues of various types of Sturm-Liouville problems is relatively sparse; we refer to [8–15].

Sampling theory is one of the most powerful results in signal analysis. It is of great importance in signal processing to reconstruct (recover) a signal (function) from its values at a discrete sequence of points (samples). If this aim is achieved, an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band-limited, the sampling process can be carried out via the celebrated Whittaker, Shannon, and Kotel’nikov (WSK) sampling theorem [16–18]. By a band-limited signal with band width , , that is, a signal containing no frequencies higher than cycles per second (cps), we mean a function in the Paley-Wiener space of entire functions of exponential type at most which are -functions when restricted to . Assume that . Then can be reconstructed via the Hermite-type sampling series where is the sequence of sinc functions Series (1) converges absolutely and uniformly on (cf. [19–22]). Series (1) is sometimes called the derivative sampling theorem. Our task is to use formula (1) to compute numerically the eigenvalues of the differential equation with boundary conditions and transmission conditions where is a complex spectral parameter; is a given real-valued function, which is continuous in and and has a finite limit ; , , , , , and () are real numbers; , (); ; and The eigenvalue problem (3)–(6) will be denoted by when . It is a Sturm-Liouville problem which contains an eigenparameter in two boundary conditions, in addition to an internal point of discontinuity.
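As an illustration of the derivative sampling theorem, the following sketch reconstructs a band-limited test signal from samples of the function and its first derivative, using the standard two-channel Hermite-type series for a signal band-limited to [-2σ, 2σ] sampled at the half-Nyquist nodes nπ/σ. The band width σ = π, the test signal, and the evaluation point are illustrative choices of ours, not data from the problem treated in this paper.

```python
import numpy as np

def hermite_series(t, f, fp, sigma, N):
    """Truncated derivative (Hermite-type) sampling series.

    Reconstructs a signal band-limited to [-2*sigma, 2*sigma] from samples
    of f and f' taken at the half-Nyquist nodes t_n = n*pi/sigma
    (the classical two-channel derivative sampling theorem).
    """
    n = np.arange(-N, N + 1)
    tn = n * np.pi / sigma
    x = sigma * (t - tn) / np.pi            # np.sinc(x) = sin(pi*x)/(pi*x)
    return np.sum((f(tn) + fp(tn) * (t - tn)) * np.sinc(x) ** 2)

# Test signal in the Paley-Wiener space PW_{2*sigma}, with sigma = pi:
sigma = np.pi
f = lambda t: np.sinc(2 * t)                # sin(2*pi*t) / (2*pi*t)

def fp(t):                                  # derivative of f, with fp(0) = 0
    t = np.asarray(t, dtype=float)
    num = np.cos(2 * np.pi * t) - np.sinc(2 * t)
    return np.divide(num, t, out=np.zeros_like(t), where=t != 0)

approx = hermite_series(0.4, f, fp, sigma, N=200)
print(abs(approx - f(0.4)))                 # truncation error, O(1/N) here
```

Increasing N shrinks the truncation error, in line with the estimates summarized in the next paragraphs.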

This approach is an entirely new technique that uses the recently obtained estimates for the truncation and amplitude errors associated with (1) (cf. [23]). Both types of errors normally appear in numerical techniques that use interpolation procedures. In the following we summarize these estimates. The truncation error associated with (1) is defined to be where is the truncated series It is proved in [23] that if and is sufficiently smooth in the sense that there exists such that , then, for , , we have where the constants and are given by The amplitude error occurs when approximate samples are used instead of the exact ones, which in general cannot be computed exactly. It is defined to be where and are approximate samples of and , respectively. Let us assume that the differences , , , are bounded by a positive number ; that is, . If satisfies the natural decay conditions , then, for , we have [23] where and is the Euler-Mascheroni constant.
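The qualitative behavior of the amplitude error can be checked empirically on a toy band-limited signal: perturbing every sample by a small ε changes the truncated Hermite-type series only mildly (the cited bound grows like ε log N, not like εN). The sketch below is a simple numerical check on an illustrative signal of our own choosing, not a proof of the estimates from [23].

```python
import numpy as np

def hermite_from_samples(t, fs, fps, tn, sigma):
    """Hermite-type series evaluated from given (possibly inexact) samples
    fs of f and fps of f' at the nodes tn."""
    x = sigma * (t - tn) / np.pi
    return np.sum((fs + fps * (t - tn)) * np.sinc(x) ** 2)

sigma, N = np.pi, 200
n = np.arange(-N, N + 1)
tn = n * np.pi / sigma

f = lambda t: np.sinc(2 * t)                # test signal in PW_{2*sigma}
def fp(t):                                  # its derivative, fp(0) = 0
    t = np.asarray(t, dtype=float)
    num = np.cos(2 * np.pi * t) - np.sinc(2 * t)
    return np.divide(num, t, out=np.zeros_like(t), where=t != 0)

t0, eps = 0.4, 1e-6
exact = hermite_from_samples(t0, f(tn), fp(tn), tn, sigma)
# perturb every sample of f and f' by +eps (worst case |error| <= eps)
perturbed = hermite_from_samples(t0, f(tn) + eps, fp(tn) + eps, tn, sigma)
amp_err = abs(perturbed - exact)
print(amp_err)   # stays of order eps, far below the naive eps*N
```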

The classical WSK sampling theorem [24] for is the series representation where the convergence is absolute and uniform on and uniform on compact subsets of (cf. [24–26]). Series (17), which is of Lagrange interpolation type, has been used to compute the eigenvalues of second-order eigenvalue problems; see, for example, [8–13, 15, 27, 28].
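For comparison, the classical WSK (Lagrange-type) series uses samples of the function alone, taken at the Nyquist nodes nπ/σ. A minimal sketch, again on an illustrative test signal of our own choosing:

```python
import numpy as np

def wks_series(t, f, sigma, N):
    """Truncated WSK (Lagrange-type) sampling series for f in PW_sigma,
    built from the samples f(n*pi/sigma) alone."""
    n = np.arange(-N, N + 1)
    tn = n * np.pi / sigma                  # Nyquist nodes
    return np.sum(f(tn) * np.sinc(sigma * t / np.pi - n))

# Test signal: sinc^2 is band-limited to [-2*pi, 2*pi], so take sigma = 2*pi
sigma = 2 * np.pi
f = lambda t: np.sinc(t) ** 2
print(abs(wks_series(0.3, f, sigma, N=100) - f(0.3)))   # truncation error
```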

The use of (17) in numerical analysis is known as the sinc method, established by Stenger et al. (cf. [29–31]). In [9, 15, 28], the authors applied (17) and the regularized sinc method to compute the eigenvalues of different boundary value problems, with a derivation of the error estimates as given in [32, 33]. In [34], the authors used the Hermite-type sampling series (1) to compute the eigenvalues of a Dirac system with an internal point of discontinuity. In [14], Tharwat proved that has a denumerable set of real and simple eigenvalues.

In [35], we computed the eigenvalues of the problem numerically using the sinc-Gaussian technique. The main aim of the present work is to compute the eigenvalues of numerically by Hermite interpolation, together with an error analysis. The method is based on the sampling theorem and Hermite interpolation, but applied to regularized functions, hence avoiding any (multiple) integration and keeping the number of terms in the cardinal series manageable. It has been demonstrated that the method is capable of delivering higher-order estimates of the eigenvalues at a very low cost; see [34]. Also, in this work, using computable error bounds we obtain eigenvalue enclosures in a simple way, which had not been achieved in [35].

Notice that, by Paley-Wiener’s theorem, if and only if there is such that Therefore ; that is, also has an expansion of the form (17). However, can also be obtained by term-by-term differentiation of (17) (see [24, page 52] for convergence). Thus the use of Hermite interpolation will not cost any additional computational effort, since the samples will be used to compute both and according to (17) and (19), respectively.
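The point that the same samples also yield the derivative can be checked numerically: differentiating the truncated WSK series term by term approximates f' with no extra samples. The test signal and evaluation point below are illustrative choices of ours.

```python
import numpy as np

def dsinc(x):
    """Derivative of np.sinc: d/dx [sin(pi*x)/(pi*x)], with dsinc(0) = 0."""
    x = np.asarray(x, dtype=float)
    num = np.cos(np.pi * x) - np.sinc(x)
    return np.divide(num, x, out=np.zeros_like(x), where=x != 0)

def wks_derivative(t, f, sigma, N):
    """Approximate f'(t) from the samples f(n*pi/sigma) alone, by
    differentiating the WSK series term by term."""
    n = np.arange(-N, N + 1)
    tn = n * np.pi / sigma
    return np.sum(f(tn) * (sigma / np.pi) * dsinc(sigma * t / np.pi - n))

sigma = 2 * np.pi
f = lambda t: np.sinc(t) ** 2               # band-limited to [-2*pi, 2*pi]
exact = 2 * np.sinc(0.3) * dsinc(0.3)       # analytic f'(0.3)
print(abs(wks_derivative(0.3, f, sigma, N=200) - exact))
```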

In the next section, we derive the Hermite interpolation technique for computing the eigenvalues of , together with error estimates. The last section contains three worked examples, with comparisons with the Lagrange interpolation method, accompanied by figures and numerics.

2. Treatment of

In this section we derive approximate values of the eigenvalues of . Recall that has a denumerable set of real and simple eigenvalues (cf. [14]). Let denote the solution of (3) satisfying the following initial conditions: Since satisfies (4), (6), the eigenvalues of problem (3)–(6) are the zeros of the characteristic determinant (cf. [14]) According to [14], see also [36–38], is an entire function of whose zeros are real and simple. We aim to approximate and hence its zeros, that is, the eigenvalues, by use of the sampling theorem. The idea is to split into two parts: one known and the other unknown, but lying in a Paley-Wiener space. We then approximate the unknown part using (1) to get the approximate and compute its approximate zeros. Using the method of variation of parameters, the solution satisfies the Volterra integral equations (cf. [14]) where and are the Volterra operators Differentiating (23), we obtain where and are the Volterra-type integral operators Define and , , to be In the following, we will make use of the known estimates: where is some constant (we may take ). For convenience, we define the constants
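The construction of a characteristic determinant from solutions of initial value problems can be mimicked on a toy problem. The sketch below treats the continuous Dirichlet problem -y'' + q(x)y = λy on [0, 1] (not the discontinuous problem (3)–(6)): a shooting step produces Δ(λ) = y(1, λ), whose zeros, located here by bisection, are the eigenvalues; for q ≡ 0 they are λ_n = n²π².

```python
import numpy as np

def shoot(lam, q=lambda x: 0.0, h=1e-3):
    """Solve -y'' + q(x)*y = lam*y on [0, 1] with y(0)=0, y'(0)=1 by RK4
    and return y(1, lam), a toy characteristic function Delta(lam)."""
    def rhs(x, u):                          # u = (y, y')
        return np.array([u[1], (q(x) - lam) * u[0]])
    x, u = 0.0, np.array([0.0, 1.0])
    for _ in range(int(round(1.0 / h))):
        k1 = rhs(x, u)
        k2 = rhs(x + h / 2, u + h / 2 * k1)
        k3 = rhs(x + h / 2, u + h / 2 * k2)
        k4 = rhs(x + h, u + h * k3)
        u = u + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return u[0]

def bisect(F, a, b, tol=1e-10):
    """Locate a zero of Delta(lam) (an eigenvalue) by bisection,
    assuming a sign change on [a, b]."""
    fa = F(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * F(m) <= 0:
            b = m
        else:
            a, fa = m, F(m)
    return 0.5 * (a + b)

lam1 = bisect(shoot, 5.0, 15.0)
print(lam1, np.pi ** 2)                     # first Dirichlet eigenvalue pi^2
```

The paper's method replaces the expensive ODE solves at every λ by sampled values of the unknown part of Δ, which is the point of the splitting introduced next.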

As in [15] we split into two parts via where is the known part and is the unknown one. Then is entire in for each for which (cf. [15]) where The analyticity of , as well as estimate (33), is not adequate to prove that lies in a Paley-Wiener space. To solve this problem, we multiply by a regularization factor. Let and , , be fixed. Let be the function The regularizing factor was introduced in [9], in the context of the regularized sampling method, and was used in [9–13] to compute the eigenvalues of several classes of Sturm-Liouville problems. More specifications on will be given later on. Then , see [15], is an entire function of which satisfies the estimate Moreover, and where What we have just proved is that belongs to the Paley-Wiener space with . Since , we can reconstruct the functions via the following sampling formula:

Let , , and approximate by its truncated series , where Since all eigenvalues are real, from now on we restrict ourselves to . Since , the truncation error (cf. (10)) is given for by where The samples and are, in general, not known explicitly. So we approximate them by numerically solving initial value problems at the nodes .

Let and be the approximations of the samples of and , respectively. Now we define , which approximates : Using standard methods for solving initial value problems, we may assume that for , for a sufficiently small . From (36) we can see that satisfies condition (14) when , and therefore, whenever , we have where there is a positive constant for which (cf. (15)) Here In the following we use the technique of [27], see also [15], to determine enclosure intervals for the eigenvalues. Let be an eigenvalue; that is, Then it follows that and so Since is given and has a computable upper bound, we can define an enclosure for by solving the following system of inequalities: Its solution is an interval containing , over which the graph is squeezed between the graphs Using the fact that uniformly over any compact set, and since is a simple root, we obtain, for large and sufficiently small , in a neighborhood of . Hence the graph of intersects the graphs at two points with abscissae , and the solution of the system of inequalities (53) is the interval , and in particular . Summarizing the above discussion, we arrive at the following lemma, which is similar to that of [27] for Sturm-Liouville problems.

Lemma 1. For any eigenvalue , one can find and sufficiently small such that for . Moreover

Proof. Since all eigenvalues of are simple, for large and sufficiently small we have in a neighborhood of . Choose such that has two distinct solutions, which we denote by . The decay of as and as will ensure the existence of the solutions and as and . For the second point we recall that as and . Hence, by taking the limit, we obtain ; that is, . This leads us to conclude that , since is a simple root.
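The enclosure procedure of the lemma can be sketched numerically: given an approximation of a characteristic function together with a computable error bound, the set where the approximation stays below the bound in absolute value brackets the true eigenvalue. The stand-in functions below (sin λ with root π, and a small smooth perturbation of it) are illustrative, not taken from the paper.

```python
import numpy as np

def enclosure(F_approx, bound, a, b, m=300001):
    """Return the interval on [a, b] where |F_approx(lam)| <= bound.
    If the bound dominates the total (truncation + amplitude) error,
    this interval is guaranteed to contain the true eigenvalue."""
    lam = np.linspace(a, b, m)
    idx = np.where(np.abs(F_approx(lam)) <= bound)[0]
    return lam[idx[0]], lam[idx[-1]]

eps = 1e-3                                   # computable error bound
F = lambda lam: np.sin(lam)                  # stand-in characteristic function
F_tilde = lambda lam: F(lam) + eps * np.cos(3 * lam)   # its approximation
lo, hi = enclosure(F_tilde, 2 * eps, 3.0, 3.3)
print(lo, hi)                                # a short interval bracketing pi
```

Shrinking the bound (larger N, smaller amplitude error) shrinks the enclosure, which is the squeezing behavior described above.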

Let . Then (41) and (45) imply and is chosen sufficiently small for which . Therefore , must be chosen so that for Let be an eigenvalue and let be its approximation. Thus and . From (63) we have . Now we estimate the error for an eigenvalue .

Theorem 2. Let be an eigenvalue of . For sufficiently large one has the following estimate:

Proof. Since , then from (63), after replacing by , we obtain Using the mean value theorem yields that for some Since the eigenvalues are simple, for sufficiently large we have , and we get (65).

3. Numerical Examples

This section includes three detailed worked examples illustrating the above technique. By and we mean the absolute errors associated with the results of the classical sinc method [9, 15] and our new method (Hermite interpolation), respectively. The first two examples are computed in [15] with the classical sinc method. In these two examples we indicate the effect of the amplitude error in the method by determining enclosure intervals for different values of . We also indicate the effect of the parameters and through several choices. Also, in the following two examples, we observe that the exact solutions and the zeros of are all inside the interval . In the third example, we compare our new method with the classical sinc method [9]. We would like to mention that MATHEMATICA has been used to obtain the exact values for the two examples where the eigenvalues cannot be computed concretely. MATHEMATICA is also used in rounding the exact eigenvalues, which are square roots. Both the numerical results and the associated figures demonstrate the credibility of the method.

Recall that are defined by . Recall also that the enclosure interval is determined by solving

Example 1. Consider the boundary value problem [15] Here , , , , and The characteristic function is The function will be

The application of the Hermite interpolation method and the sinc method [15] to this problem, and the effect of and at , are indicated in Tables 1 and 2. In Tables 3 and 4, we display the maximum absolute error of , using the Hermite interpolation method and the sinc method [15] with various choices of and at . From these tables, it is seen that the proposed method is significantly more accurate than that based on the classical sinc method [15].

Tables 5 and 6 list the exact solutions for two choices of and at and different values of . It is indicated that the solutions are all inside the interval for all values of .

For , , and , Figures 1 and 2 illustrate the enclosure intervals dominating for and , respectively. The middle curve represents , while the upper and lower curves represent the curves of , , respectively. We notice that when all three curves are almost identical. Similarly, Figures 3 and 4 illustrate the enclosure intervals dominating for , , respectively.

As in Table 6, for , , and , Figures 5 and 6 illustrate the enclosure intervals dominating for and , respectively, and Figures 7 and 8 illustrate the enclosure intervals dominating for , , respectively.

Example 2. Consider the boundary value problem where , , , , , and The function will be

The application of the Hermite interpolation method and the sinc method [15] to this problem, and the effect of and at , are indicated in Tables 7 and 8. In Tables 9 and 10, we display the maximum absolute error of , using the Hermite interpolation method and the sinc method [15] with various choices of and at . From these tables, it is seen that the proposed method is significantly more accurate than that based on the classical sinc method [15].

Tables 11 and 12 list the exact solutions for two choices of and at and different values of . It is indicated that the solutions are all inside the interval for all values of .

For , , and , Figures 9 and 10 illustrate the enclosure intervals dominating for and , respectively. Similarly, Figures 11 and 12 illustrate the enclosure intervals dominating for , , respectively.

For , , and , Figures 13 and 14 illustrate the enclosure intervals dominating for and , respectively, and Figures 15 and 16 illustrate the enclosure intervals dominating for , , respectively.

Example 3. Consider the continuous boundary value problem [9] where , , , , , and . The exact characteristic function is where zero is not an eigenvalue. The application of the Hermite interpolation method and the sinc method [9] to this problem is indicated in Table 13. From this table, it is seen that the proposed method is significantly more accurate than that based on the sinc method [9].

Acknowledgments

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant no. 130-065-D1433. The authors, therefore, acknowledge with thanks DSR technical and financial support.