Abstract

This paper investigates the sampling analysis associated with discontinuous Sturm-Liouville problems which have eigenvalue parameters in two boundary conditions and transmission conditions at the point of discontinuity. We closely follow the analysis of Fulton (1977) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green's function and the eigenfunction expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green's functions. In the special case when the problem is continuous, the obtained results coincide with the corresponding results of Annaby and Tharwat (2006).

1. Introduction

The recovery of entire functions from a discrete sequence of points is an important problem from both the mathematical and the practical points of view. For instance, in signal processing one needs to reconstruct (recover) a signal (function) from its values at a sequence of samples. If this aim is achieved, then an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band limited, the sampling process can be done via the celebrated Whittaker, Shannon, and Kotel'nikov (WKS) sampling theorem [13]. By a band-limited signal with band width $\sigma$, $\sigma>0$, that is, a signal containing no frequencies higher than $\sigma/2\pi$ cycles per second (cps), we mean a function in the Paley-Wiener space $PW^2_\sigma$ of entire functions of exponential type at most $\sigma$ which are $L^2(\mathbb{R})$-functions when restricted to $\mathbb{R}$. This space is characterized by the following relation, due to Paley and Wiener [4, 5]:
$$f(t)\in PW^2_\sigma \iff f(t)=\frac{1}{\sqrt{2\pi}}\int_{-\sigma}^{\sigma}e^{ixt}g(x)\,dx,\quad\text{for some }g(\cdot)\in L^2(-\sigma,\sigma).\tag{1.1}$$
Now the WKS sampling theorem [6, 7] states the following.

Theorem 1.1 (WKS). If $f(t)\in PW^2_\sigma$, then it is completely determined from its values at the points $t_k=k\pi/\sigma$, $k\in\mathbb{Z}$, by means of the formula
$$f(t)=\sum_{k=-\infty}^{\infty}f(t_k)\,\operatorname{sinc}\sigma(t-t_k),\quad t\in\mathbb{R},\tag{1.2}$$
where
$$\operatorname{sinc}t=\begin{cases}\dfrac{\sin t}{t},& t\neq0,\\[1mm] 1,& t=0.\end{cases}\tag{1.3}$$
The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of $\mathbb{C}$, uniformly convergent on $\mathbb{R}$, and convergent in the norm of $L^2(\mathbb{R})$; see [6, 8, 9].
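As a quick numerical illustration of (1.2) (our own sketch, not part of the original development; the test signal $f(t)=\sin^2t/t^2$ and the truncation range are assumptions chosen only for this demo):

```python
import numpy as np

# Truncated WKS series (1.2) with sigma = pi, so the nodes are t_k = k.
# f(t) = sin(t)^2 / t^2 is entire of exponential type 2 <= sigma and is
# square integrable on the real line, so it belongs to PW^2_sigma.
sigma = np.pi
f = lambda t: np.sinc(np.asarray(t) / np.pi) ** 2   # np.sinc(x) = sin(pi x)/(pi x)

k = np.arange(-500, 501)
t_k = k * np.pi / sigma

def wks(t):
    # sinc(sigma*(t - t_k)) in the sense of (1.3), rewritten via np.sinc
    return float(np.sum(f(t_k) * np.sinc(sigma * (t - t_k) / np.pi)))

for t in (0.3, 1.7, -2.45):
    assert abs(wks(t) - f(t)) < 1e-5
```

Widening the truncation range shrinks the reconstruction error further, in line with the convergence statement after (1.3).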

The WKS sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is important from both the practical and the theoretical points of view. The following theorem, known in some of the literature as the Paley-Wiener theorem [5], gives a sampling theorem with a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [10] and Kadec [11], it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [6, 7] for more details.

Theorem 1.2 (Paley and Wiener). Let $\{t_k\}$, $k\in\mathbb{Z}$, be a sequence of real numbers satisfying
$$D=\sup_{k\in\mathbb{Z}}\Big|t_k-\frac{k\pi}{\sigma}\Big|<\frac{\pi}{4\sigma},\tag{1.4}$$
and let $G(t)$ be the entire function defined by the canonical product
$$G(t)=(t-t_0)\prod_{k=1}^{\infty}\Big(1-\frac{t}{t_k}\Big)\Big(1-\frac{t}{t_{-k}}\Big).\tag{1.5}$$
Then, for any $f\in PW^2_\sigma$,
$$f(t)=\sum_{k\in\mathbb{Z}}f(t_k)\frac{G(t)}{G'(t_k)\,(t-t_k)},\quad t\in\mathbb{R}.\tag{1.6}$$
The series (1.6) converges uniformly on compact subsets of $\mathbb{R}$.

The WKS sampling theorem is a special case of this theorem: if we choose $t_k=k\pi/\sigma=-t_{-k}$, then
$$G(t)=t\prod_{k=1}^{\infty}\Big(1-\frac{t}{t_k}\Big)\Big(1+\frac{t}{t_k}\Big)=t\prod_{k=1}^{\infty}\Big(1-\frac{(t\sigma/\pi)^2}{k^2}\Big)=\frac{\sin\sigma t}{\sigma},\qquad G'\big(t_k\big)=(-1)^k.\tag{1.7}$$
Expansion (1.6) is of Lagrange-type interpolation.
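The product representation in (1.7) can be checked numerically (our own sanity check; the truncation level is an arbitrary assumption):

```python
import numpy as np

# Partial product for G(t) in (1.7) with sigma = pi, so t_k = k and
# G(t) = t * prod_{k>=1} (1 - t^2/k^2) should converge to sin(pi*t)/pi.
sigma = np.pi
t = 1.3
k = np.arange(1, 200_001, dtype=float)
G = t * np.prod(1.0 - (t * sigma / np.pi) ** 2 / k ** 2)
assert abs(G - np.sin(sigma * t) / sigma) < 1e-4
```

The partial product converges only at rate $O(1/K)$, which is why a large truncation level is used here.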

The second extension of WKS sampling theorem is the theorem of Kramer [12]. In this theorem sampling representations were given for integral transforms whose kernels are more general than exp(𝑖𝑥𝑡).

Theorem 1.3 (Kramer). Let $I$ be a finite closed interval, $K(\cdot,t):I\times\mathbb{R}\to\mathbb{C}$ a function continuous in $t$ such that $K(\cdot,t)\in L^2(I)$ for all $t\in\mathbb{R}$, and let $\{t_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers such that $\{K(\cdot,t_k)\}_{k\in\mathbb{Z}}$ is a complete orthogonal set in $L^2(I)$. Suppose that
$$f(t)=\int_I K(x,t)g(x)\,dx,\quad g(\cdot)\in L^2(I).\tag{1.8}$$
Then
$$f(t)=\sum_{k\in\mathbb{Z}}f(t_k)\frac{\int_I K(x,t)\overline{K(x,t_k)}\,dx}{\big\|K(\cdot,t_k)\big\|^2_{L^2(I)}}.\tag{1.9}$$
The series (1.9) converges uniformly wherever $\|K(\cdot,t)\|_{L^2(I)}$, as a function of $t$, is bounded.

Again Kramer's theorem is a generalization of the WKS theorem: taking $K(x,t)=e^{ixt}$, $I=[-\sigma,\sigma]$, $t_k=k\pi/\sigma$, (1.9) reduces to (1.2).
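The following is a small numerical instance of Theorem 1.3 beyond the Fourier kernel (our own example; the kernel $K(x,t)=\cos xt$ on $I=[0,\pi]$ with $t_k=k$ and the choice $g(x)=x^2$ are assumptions made only for this demo):

```python
import numpy as np

# Kernel K(x,t) = cos(xt) on I = [0, pi]: the family {cos(kx)}_{k>=0} is a
# complete orthogonal set in L^2(0, pi), so Kramer's theorem applies with
# sampling points t_k = k.
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]
quad = lambda y: float(np.sum(y[1:] + y[:-1]) * 0.5 * h)   # trapezoid rule

g = x ** 2
F = lambda t: quad(np.cos(x * t) * g)                      # transform (1.8)

def kramer(t, N=60):
    # Sampling series (1.9); ||cos(k.)||^2 is pi for k = 0 and pi/2 for k >= 1.
    total = F(0.0) * quad(np.cos(x * t)) / np.pi
    for k in range(1, N + 1):
        total += F(float(k)) * quad(np.cos(x * t) * np.cos(k * x)) / (np.pi / 2)
    return total

t = 2.6
assert abs(kramer(t) - F(t)) < 1e-3 * abs(F(t))
```

The terms of the series decay like $k^{-4}$ here, so a modest truncation already recovers $F(t)$ at non-sample points.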

The relationship between these two extensions of the WKS sampling theorem has been investigated extensively. Starting from a function-theoretic approach (cf. [13]), it is proved in [14] that if $K(x,t)$, $x\in I$, $t\in\mathbb{C}$, satisfies some analyticity conditions, then Kramer's sampling formula (1.9) turns out to be a Lagrange interpolation one; see also [15-17]. In another direction, it is shown that Kramer's expansion (1.9) can be written as a Lagrange-type interpolation formula if $K(\cdot,t)$ and $t_k$ are extracted from ordinary differential operators; see the survey [18] and the references cited therein. The present work is a continuation of the second direction. We prove that integral transforms associated with second-order eigenvalue problems which have an eigenparameter appearing in the boundary conditions, and which also have an internal point of discontinuity, can be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works on sampling associated with eigenproblems having an eigenparameter in the boundary conditions are few; see, for example, [19, 20]. Papers on sampling with discontinuous eigenproblems are also few; see [21-23]. However, sampling theories associated with eigenproblems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist, as far as we know. Our investigation is a first step in that direction. To achieve our aim we briefly study the spectral analysis of the problem; then we derive two sampling theorems using solutions and Green's functions, respectively.

2. The Eigenvalue Problem

In this section we define our boundary value problem and state some of its properties. Consider the boundary value problem
$$\ell(u):=-r(x)u''(x)+q(x)u(x)=\lambda u(x),\quad x\in[-1,0)\cup(0,1],\tag{2.1}$$
with boundary conditions
$$U_1(u):=\big(\lambda\alpha'_1-\alpha_1\big)u(-1)-\big(\lambda\alpha'_2-\alpha_2\big)u'(-1)=0,\tag{2.2}$$
$$U_2(u):=\big(\lambda\beta'_1+\beta_1\big)u(1)-\big(\lambda\beta'_2+\beta_2\big)u'(1)=0,\tag{2.3}$$
and transmission conditions
$$U_3(u):=\gamma_1u(-0)-\delta_1u(+0)=0,\qquad U_4(u):=\gamma_2u'(-0)-\delta_2u'(+0)=0,\tag{2.4}$$
where $\lambda\in\mathbb{C}$ is a complex spectral parameter; $r(x)=r_1^2$ for $x\in[-1,0)$ and $r(x)=r_2^2$ for $x\in(0,1]$; $r_1>0$ and $r_2>0$ are given real numbers; $q(x)$ is a given real-valued function which is continuous in $[-1,0)$ and $(0,1]$ and has finite limits $q(\pm0)=\lim_{x\to\pm0}q(x)$; $\gamma_i,\delta_i,\alpha_i,\beta_i,\alpha'_i,\beta'_i$ ($i=1,2$) are real numbers with $\gamma_i\neq0$, $\delta_i\neq0$ ($i=1,2$); and $\rho$ and $\gamma$ are given by
$$\rho=\det\begin{pmatrix}\alpha'_1&\alpha_1\\ \alpha'_2&\alpha_2\end{pmatrix}>0,\qquad \gamma=\det\begin{pmatrix}\beta'_1&\beta_1\\ \beta'_2&\beta_2\end{pmatrix}>0.\tag{2.5}$$
In some of the literature, conditions (2.4) are called compatibility conditions; see, for example, [24]. To formulate an operator-theoretic approach to problem (2.1)-(2.4) we define the Hilbert space $\mathbf{H}=L^2(-1,1)\oplus\mathbb{C}^2$ with the inner product
$$\langle\mathbf{f}(\cdot),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}f(x)\overline{g(x)}\,dx+\frac{1}{r_2^2}\int_{0}^{1}f(x)\overline{g(x)}\,dx+\frac{1}{\rho}f_1\overline{g_1}+\frac{1}{\gamma}f_2\overline{g_2},\tag{2.6}$$
where
$$\mathbf{f}(x)=\begin{pmatrix}f(x)\\ f_1\\ f_2\end{pmatrix},\qquad \mathbf{g}(x)=\begin{pmatrix}g(x)\\ g_1\\ g_2\end{pmatrix}\in\mathbf{H},\tag{2.7}$$

$f(\cdot),g(\cdot)\in L^2(-1,1)$ and $f_i,g_i\in\mathbb{C}$, $i=1,2$. For convenience we put
$$\begin{aligned}
R_1(u)&:=\alpha_1u(-1)-\alpha_2u'(-1), &\qquad R'_1(u)&:=\alpha'_1u(-1)-\alpha'_2u'(-1),\\
R_2(u)&:=\beta_1u(1)-\beta_2u'(1), &\qquad R'_2(u)&:=\beta'_1u(1)-\beta'_2u'(1).
\end{aligned}\tag{2.8}$$

For a function $f(x)$ which is defined on $[-1,0)\cup(0,1]$ and has finite limits $f(\pm0)=\lim_{x\to\pm0}f(x)$, we denote by $f^{(1)}(x)$ and $f^{(2)}(x)$ the functions
$$f^{(1)}(x)=\begin{cases}f(x),& x\in[-1,0),\\ f(-0),& x=0,\end{cases}\qquad f^{(2)}(x)=\begin{cases}f(x),& x\in(0,1],\\ f(+0),& x=0,\end{cases}\tag{2.9}$$
which are defined on $I_1=[-1,0]$ and $I_2=[0,1]$, respectively.

In the following we will define the minimal closed operator in $\mathbf{H}$ associated with the differential expression $\ell$; cf. [25, 26].

Let $\mathcal{D}(\mathbf{A})\subseteq\mathbf{H}$ be the set of all
$$\mathbf{f}(x)=\begin{pmatrix}f(x)\\ R'_1(f)\\ R'_2(f)\end{pmatrix}\in\mathbf{H}\tag{2.10}$$
such that $f^{(i)}(\cdot)$ and $f^{(i)\prime}(\cdot)$ are absolutely continuous in $I_i$, $i=1,2$, $\ell(f)\in L^2(-1,0)\oplus L^2(0,1)$, and $U_3(f)=U_4(f)=0$. Define the operator $\mathbf{A}:\mathcal{D}(\mathbf{A})\to\mathbf{H}$ by
$$\mathbf{A}\begin{pmatrix}f(x)\\ R'_1(f)\\ R'_2(f)\end{pmatrix}=\begin{pmatrix}\ell(f)\\ R_1(f)\\ -R_2(f)\end{pmatrix},\qquad \begin{pmatrix}f(x)\\ R'_1(f)\\ R'_2(f)\end{pmatrix}\in\mathcal{D}(\mathbf{A}).\tag{2.11}$$
The eigenvalues and the eigenfunctions of problem (2.1)-(2.4) are defined as the eigenvalues and the first components of the corresponding eigenelements of the operator $\mathbf{A}$, respectively.

Theorem 2.1. Let 𝛾1𝛾2=𝛿1𝛿2. Then, the operator 𝐀 is symmetric.

Proof. For $\mathbf{f}(\cdot),\mathbf{g}(\cdot)\in\mathcal{D}(\mathbf{A})$,
$$\langle\mathbf{A}\mathbf{f}(\cdot),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}\ell(f)(x)\overline{g(x)}\,dx+\frac{1}{r_2^2}\int_{0}^{1}\ell(f)(x)\overline{g(x)}\,dx+\frac{1}{\rho}R_1(f)\overline{R'_1(g)}-\frac{1}{\gamma}R_2(f)\overline{R'_2(g)}.\tag{2.12}$$
Integrating by parts twice, we obtain
$$\begin{aligned}
\langle\mathbf{A}\mathbf{f}(\cdot),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}={}&\langle\mathbf{f}(\cdot),\mathbf{A}\mathbf{g}(\cdot)\rangle_{\mathbf{H}}+W(f,\bar g;-0)-W(f,\bar g;-1)+W(f,\bar g;1)-W(f,\bar g;+0)\\
&+\frac{1}{\rho}\Big[R_1(f)\overline{R'_1(g)}-R'_1(f)\overline{R_1(g)}\Big]-\frac{1}{\gamma}\Big[R_2(f)\overline{R'_2(g)}-R'_2(f)\overline{R_2(g)}\Big],\end{aligned}\tag{2.13}$$
where, as usual, $W(f,g;x)$ denotes the Wronskian of the functions $f$ and $g$:
$$W(f,g;x)=f(x)g'(x)-f'(x)g(x).\tag{2.14}$$
Since $f(x)$ and $g(x)$ satisfy the transmission conditions (2.4), and directly from the definitions (2.8), we get
$$\begin{aligned}
R_1(f)\overline{R'_1(g)}-R'_1(f)\overline{R_1(g)}&=\rho\,W(f,\bar g;-1),\\
R_2(f)\overline{R'_2(g)}-R'_2(f)\overline{R_2(g)}&=\gamma\,W(f,\bar g;1),\\
\gamma_1\gamma_2\,W(f,\bar g;-0)&=\delta_1\delta_2\,W(f,\bar g;+0).\end{aligned}\tag{2.15}$$
Finally, substituting (2.15) into (2.13) and using $\gamma_1\gamma_2=\delta_1\delta_2$, we have
$$\langle\mathbf{A}\mathbf{f}(\cdot),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}=\langle\mathbf{f}(\cdot),\mathbf{A}\mathbf{g}(\cdot)\rangle_{\mathbf{H}},\quad \mathbf{f}(\cdot),\mathbf{g}(\cdot)\in\mathcal{D}(\mathbf{A});\tag{2.16}$$
thus the operator $\mathbf{A}$ is Hermitian. The symmetry of $\mathbf{A}$ follows from the well-known fact that $\mathcal{D}(\mathbf{A})$ is dense in $\mathbf{H}$; see, for example, [24].

Corollary 2.2. All eigenvalues of the problem (2.1)–(2.4) are real.

We can now assume that all eigenfunctions of the problem (2.1)–(2.4) are real valued.

Corollary 2.3. Let $\lambda_1$ and $\lambda_2$ be two different eigenvalues of problem (2.1)-(2.4). Then the corresponding eigenfunctions $u_1$ and $u_2$ of this problem are orthogonal in the sense that
$$\frac{1}{r_1^2}\int_{-1}^{0}u_1(x)u_2(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}u_1(x)u_2(x)\,dx+\frac{1}{\rho}R'_1(u_1)R'_1(u_2)+\frac{1}{\gamma}R'_2(u_1)R'_2(u_2)=0.\tag{2.17}$$

Proof. Formula (2.17) follows immediately from the orthogonality of the corresponding eigenelements
$$\mathbf{u}_1(x)=\begin{pmatrix}u_1(x)\\ R'_1(u_1)\\ R'_2(u_1)\end{pmatrix},\qquad \mathbf{u}_2(x)=\begin{pmatrix}u_2(x)\\ R'_1(u_2)\\ R'_2(u_2)\end{pmatrix}\tag{2.18}$$
in the Hilbert space $\mathbf{H}$.

Now we construct a special fundamental system of solutions of equation (2.1) for $\lambda$ not an eigenvalue. Consider the initial value problem
$$-r_1^2u''(x)+q(x)u(x)=\lambda u(x),\quad x\in(-1,0),\tag{2.19}$$
$$u(-1)=\lambda\alpha'_2-\alpha_2,\qquad u'(-1)=\lambda\alpha'_1-\alpha_1.\tag{2.20}$$
By virtue of Theorem 1.5 in [27], this problem has a unique solution $u=\varphi_{1\lambda}(x)$, which is an entire function of $\lambda\in\mathbb{C}$ for each fixed $x\in[-1,0]$. Similarly, employing the same method as in the proof of Theorem 1.5 in [27], we see that the problem
$$-r_2^2u''(x)+q(x)u(x)=\lambda u(x),\quad x\in(0,1),\tag{2.21}$$
$$u(1)=\lambda\beta'_2+\beta_2,\qquad u'(1)=\lambda\beta'_1+\beta_1,\tag{2.22}$$
has a unique solution $u=\chi_{2\lambda}(x)$ which is an entire function of the parameter $\lambda$ for each fixed $x\in[0,1]$.

Now the functions $\varphi_{2\lambda}(x)$ and $\chi_{1\lambda}(x)$ are defined in terms of $\varphi_{1\lambda}(x)$ and $\chi_{2\lambda}(x)$ as follows: the initial value problem
$$-r_2^2u''(x)+q(x)u(x)=\lambda u(x),\quad x\in(0,1),\tag{2.23}$$
$$u(0)=\frac{\gamma_1}{\delta_1}\varphi_{1\lambda}(0),\qquad u'(0)=\frac{\gamma_2}{\delta_2}\varphi'_{1\lambda}(0),\tag{2.24}$$
whose initial data (on the right-hand side) are entire functions of the eigenparameter $\lambda$, has a unique solution $u=\varphi_{2\lambda}(x)$ for each $\lambda\in\mathbb{C}$.

Similarly, the following problem also has a unique solution $u=\chi_{1\lambda}(x)$:
$$-r_1^2u''(x)+q(x)u(x)=\lambda u(x),\quad x\in(-1,0),\tag{2.25}$$
$$u(0)=\frac{\delta_1}{\gamma_1}\chi_{2\lambda}(0),\qquad u'(0)=\frac{\delta_2}{\gamma_2}\chi'_{2\lambda}(0).\tag{2.26}$$

Since the Wronskians $W(\varphi_{i\lambda},\chi_{i\lambda};x)$ are independent of the variable $x\in I_i$ ($i=1,2$), and $\varphi_{i\lambda}(x)$ and $\chi_{i\lambda}(x)$ are entire functions of the parameter $\lambda$ for each $x\in I_i$ ($i=1,2$), the functions
$$\omega_i(\lambda):=W\big(\varphi_{i\lambda},\chi_{i\lambda};x\big),\quad x\in I_i,\ i=1,2,\tag{2.27}$$
are entire functions of the parameter $\lambda$.

Lemma 2.4. If the condition $\gamma_1\gamma_2=\delta_1\delta_2$ is satisfied, then the equality $\omega_1(\lambda)=\omega_2(\lambda)$ holds for each $\lambda\in\mathbb{C}$.

Proof. Taking into account (2.24) and (2.26), a short calculation gives $\gamma_1\gamma_2\,W(\varphi_{1\lambda},\chi_{1\lambda};0)=\delta_1\delta_2\,W(\varphi_{2\lambda},\chi_{2\lambda};0)$, so $\omega_1(\lambda)=\omega_2(\lambda)$ for each $\lambda\in\mathbb{C}$.

Corollary 2.5. The zeros of the functions 𝜔1(𝜆) and 𝜔2(𝜆) coincide.

Let us construct two basic solutions of (2.1) as
$$\varphi_\lambda(x)=\begin{cases}\varphi_{1\lambda}(x),& x\in[-1,0),\\ \varphi_{2\lambda}(x),& x\in(0,1],\end{cases}\qquad \chi_\lambda(x)=\begin{cases}\chi_{1\lambda}(x),& x\in[-1,0),\\ \chi_{2\lambda}(x),& x\in(0,1].\end{cases}\tag{2.28}$$
By virtue of (2.24) and (2.26), these solutions satisfy both transmission conditions (2.4).

Now we may introduce the characteristic function $\omega(\lambda)$ as
$$\omega(\lambda):=\omega_1(\lambda)=\omega_2(\lambda).\tag{2.29}$$

Theorem 2.6. The eigenvalues of problem (2.1)-(2.4) coincide with the zeros of the function $\omega(\lambda)$.

Proof. Let $\omega(\lambda_0)=0$. Then $W(\varphi_{1\lambda_0},\chi_{1\lambda_0};x)=0$, and so the functions $\varphi_{1\lambda_0}(x)$ and $\chi_{1\lambda_0}(x)$ are linearly dependent, that is,
$$\chi_{1\lambda_0}(x)=k\,\varphi_{1\lambda_0}(x),\quad x\in[-1,0],\ \text{for some }k\neq0.\tag{2.30}$$
Consequently, $\chi_{\lambda_0}(x)$ satisfies the boundary condition (2.2) as well, so $\chi_{\lambda_0}(x)$ is an eigenfunction of problem (2.1)-(2.4) corresponding to the eigenvalue $\lambda_0$.
Now let $u_0(x)$ be any eigenfunction corresponding to an eigenvalue $\lambda_0$, but suppose $\omega(\lambda_0)\neq0$. Then the functions $\varphi_{i\lambda_0}(x)$, $\chi_{i\lambda_0}(x)$ are linearly independent on $I_i$, $i=1,2$. Thus $u_0(x)$ may be represented in the form
$$u_0(x)=\begin{cases}c_1\varphi_{1\lambda_0}(x)+c_2\chi_{1\lambda_0}(x),& x\in[-1,0),\\ c_3\varphi_{2\lambda_0}(x)+c_4\chi_{2\lambda_0}(x),& x\in(0,1],\end{cases}\tag{2.31}$$
where at least one of the constants $c_i$, $i=1,2,3,4$, is not zero.
Considering the equations
$$U_i\big(u_0(x)\big)=0,\quad i=1,2,3,4,\tag{2.32}$$
as a homogeneous system of linear equations in the variables $c_i$, $i=1,2,3,4$, and taking into account (2.24) and (2.26), the determinant of this system is
$$\begin{vmatrix}0&-\omega_1(\lambda_0)&0&0\\ 0&0&\omega_2(\lambda_0)&0\\ \gamma_1\varphi_{1\lambda_0}(0)&\gamma_1\chi_{1\lambda_0}(0)&-\delta_1\varphi_{2\lambda_0}(0)&-\delta_1\chi_{2\lambda_0}(0)\\ \gamma_2\varphi'_{1\lambda_0}(0)&\gamma_2\chi'_{1\lambda_0}(0)&-\delta_2\varphi'_{2\lambda_0}(0)&-\delta_2\chi'_{2\lambda_0}(0)\end{vmatrix}=\delta_1\delta_2\,\omega_1(\lambda_0)\,\omega_2^2(\lambda_0)\neq0.\tag{2.33}$$
Thus the system (2.32) has only the trivial solution $c_i=0$, $i=1,2,3,4$, a contradiction, which completes the proof.
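To make Theorem 2.6 concrete, here is a small numerical sketch. All data are our own toy assumptions: $q\equiv0$, $r_1=r_2=1$, $\alpha'_1=\beta'_1=\alpha_2=\beta_2=1$, $\alpha_1=\alpha'_2=\beta_1=\beta'_2=0$, $\gamma_1=\delta_2=2$, $\gamma_2=\delta_1=1$ (so that $\rho=\gamma=1>0$ and $\gamma_1\gamma_2=\delta_1\delta_2$). With $q\equiv0$ the solutions are explicit trigonometric functions, and scanning $\omega(\lambda)=W(\varphi_{2\lambda},\chi_{2\lambda};1)$ for sign changes brackets its real zeros, that is, the eigenvalues:

```python
import numpy as np

# Toy data (our assumption): q = 0, r1 = r2 = 1.  By (2.20),
# phi(-1) = lam*a'_2 - a_2 = -1 and phi'(-1) = lam*a'_1 - a_1 = lam;
# phi_2 is propagated through the transmission conditions (2.24), and
# omega(lam) = W(phi_2, chi_2; 1) = lam*phi_2(1) - phi_2'(1), since
# chi_2(1) = 1 and chi_2'(1) = lam by (2.22).
def omega(lam):
    s = np.sqrt(lam)
    p, dp = -1.0, lam                                 # phi_1(-1), phi_1'(-1)
    phi1_0 = p * np.cos(s) + dp * np.sin(s) / s
    dphi1_0 = -p * s * np.sin(s) + dp * np.cos(s)
    phi2_0, dphi2_0 = 2.0 * phi1_0, 0.5 * dphi1_0     # gamma1/delta1 = 2, gamma2/delta2 = 1/2
    phi2_1 = phi2_0 * np.cos(s) + dphi2_0 * np.sin(s) / s
    dphi2_1 = -phi2_0 * s * np.sin(s) + dphi2_0 * np.cos(s)
    return lam * phi2_1 - dphi2_1

lams = np.linspace(0.1, 400.0, 40001)
vals = np.array([omega(l) for l in lams])
n_changes = int(np.sum(vals[:-1] * vals[1:] < 0))
assert n_changes >= 5   # each sign change brackets a zero of omega, i.e. an eigenvalue
```

Each bracketed sign change can then be refined by bisection to locate the eigenvalue itself.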

Lemma 2.7. If 𝜆=𝜆0 is an eigenvalue, then 𝜑𝜆0(𝑥) and 𝜒𝜆0(𝑥) are linearly dependent.

Proof. Since $\lambda_0$ is an eigenvalue, by Theorem 2.6 we have $W(\varphi_{i\lambda_0},\chi_{i\lambda_0};x)=\omega(\lambda_0)=0$, $i=1,2$. Therefore
$$\chi_{i\lambda_0}(x)=k_i\,\varphi_{i\lambda_0}(x),\quad i=1,2,\tag{2.34}$$
for some $k_i\neq0$, $i=1,2$. We must show that $k_1=k_2$. Suppose, if possible, that $k_1\neq k_2$. Taking into account the definitions of the solutions $\varphi_{i\lambda_0}(x)$, $i=1,2$, from the equalities (2.34) we get
$$\begin{aligned}U_3\big(\chi_{\lambda_0}\big)&=\gamma_1\chi_{\lambda_0}(-0)-\delta_1\chi_{\lambda_0}(+0)=\gamma_1\chi_{1\lambda_0}(0)-\delta_1\chi_{2\lambda_0}(0)\\&=\gamma_1k_1\varphi_{1\lambda_0}(0)-\delta_1k_2\varphi_{2\lambda_0}(0)=\delta_1k_1\varphi_{2\lambda_0}(0)-\delta_1k_2\varphi_{2\lambda_0}(0)=\delta_1\big(k_1-k_2\big)\varphi_{2\lambda_0}(0).\end{aligned}\tag{2.35}$$
Since $U_3(\chi_{\lambda_0})=0$, $\delta_1\neq0$, and $k_1-k_2\neq0$, it follows that
$$\varphi_{2\lambda_0}(0)=0.\tag{2.36}$$
By the same procedure, from the equality $U_4(\chi_{\lambda_0})=0$ we derive that
$$\varphi'_{2\lambda_0}(0)=0.\tag{2.37}$$
Since $\varphi_{2\lambda_0}(x)$ is a solution of (2.1) on $[0,1]$ satisfying the initial conditions (2.36) and (2.37), it follows that $\varphi_{2\lambda_0}(x)\equiv0$ on $[0,1]$, by the well-known existence and uniqueness theorem for initial value problems of ordinary linear differential equations.
Using (2.24), (2.36), and (2.37) we also find
$$\varphi_{1\lambda_0}(0)=\varphi'_{1\lambda_0}(0)=0.\tag{2.38}$$
By the same argument as for $\varphi_{2\lambda_0}(x)$, it follows that $\varphi_{1\lambda_0}(x)\equiv0$ on $[-1,0]$. Therefore $\varphi_{\lambda_0}(x)\equiv0$ on $[-1,0)\cup(0,1]$, which contradicts (2.20). This completes the proof.

Corollary 2.8. If 𝜆=𝜆0 is an eigenvalue, then both 𝜑𝜆0(𝑥) and 𝜒𝜆0(𝑥) are eigenfunctions corresponding to this eigenvalue.

Lemma 2.9. If the condition 𝛾1𝛾2=𝛿1𝛿2 is satisfied, then all eigenvalues 𝜆𝑛 are simple zeros of 𝜔(𝜆).

Proof. Since
$$\begin{aligned}&(\lambda-\lambda_0)\left[\frac{1}{r_1^2}\int_{-1}^{0}\varphi_\lambda(x)\varphi_{\lambda_0}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\varphi_\lambda(x)\varphi_{\lambda_0}(x)\,dx\right]\\&\qquad=W\big(\varphi_\lambda,\varphi_{\lambda_0};1\big)-W\big(\varphi_\lambda,\varphi_{\lambda_0};-1\big)=W\big(\varphi_\lambda,\varphi_{\lambda_0};1\big)-(\lambda-\lambda_0)\rho,\end{aligned}\tag{2.39}$$
then
$$\frac{1}{r_1^2}\int_{-1}^{0}\varphi_\lambda(x)\varphi_{\lambda_0}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\varphi_\lambda(x)\varphi_{\lambda_0}(x)\,dx=\frac{W(\varphi_\lambda,\varphi_{\lambda_0};1)}{\lambda-\lambda_0}-\rho\tag{2.40}$$
for any $\lambda\neq\lambda_0$. Since
$$\chi_{\lambda_n}(x)=k_n\,\varphi_{\lambda_n}(x),\quad x\in[-1,0)\cup(0,1],\tag{2.41}$$
for some $k_n\neq0$, then, using $\omega(\lambda)=W(\varphi_\lambda,\chi_\lambda;1)=\lambda R'_2(\varphi_\lambda)+R_2(\varphi_\lambda)$,
$$W\big(\varphi_\lambda,\varphi_{\lambda_n};1\big)=\frac{1}{k_n}W\big(\varphi_\lambda,\chi_{\lambda_n};1\big)=\frac{1}{k_n}\Big[\lambda_nR'_2(\varphi_\lambda)+R_2(\varphi_\lambda)\Big]=\frac{1}{k_n}\Big[\omega(\lambda)-\big(\lambda-\lambda_n\big)R'_2(\varphi_\lambda)\Big].\tag{2.42}$$
Substituting (2.42) into (2.40) with $\lambda_0=\lambda_n$ and letting $\lambda\to\lambda_n$, we get
$$\frac{1}{r_1^2}\int_{-1}^{0}\big(\varphi_{\lambda_n}(x)\big)^2dx+\frac{1}{r_2^2}\int_{0}^{1}\big(\varphi_{\lambda_n}(x)\big)^2dx=\frac{1}{k_n}\omega'(\lambda_n)-\frac{1}{k_n}R'_2\big(\varphi_{\lambda_n}\big)-\rho.\tag{2.43}$$
Now, putting
$$R'_2\big(\varphi_{\lambda_n}\big)=\frac{1}{k_n}R'_2\big(\chi_{\lambda_n}\big)=\frac{\gamma}{k_n}\tag{2.44}$$
in (2.43) yields
$$\frac{1}{k_n}\omega'(\lambda_n)=\frac{1}{r_1^2}\int_{-1}^{0}\big(\varphi_{\lambda_n}(x)\big)^2dx+\frac{1}{r_2^2}\int_{0}^{1}\big(\varphi_{\lambda_n}(x)\big)^2dx+\rho+\frac{\gamma}{k_n^2}>0,$$
so $\omega'(\lambda_n)\neq0$, which completes the proof.
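As a numerical cross-check of the orthogonality (2.17) (our own sketch with toy data: $q\equiv0$, $r_1=r_2=1$, $\alpha'_1=\beta'_1=\alpha_2=\beta_2=1$, $\alpha_1=\alpha'_2=\beta_1=\beta'_2=0$, $\gamma_1=\delta_2=2$, $\gamma_2=\delta_1=1$, so $\rho=\gamma=1$), we locate two zeros of $\omega$ by bisection, build the corresponding eigenfunctions $\varphi_{\lambda_n}$, and evaluate the inner product (2.17) including the $R'_1$ and $R'_2$ terms:

```python
import numpy as np

# Toy data (our assumption): q = 0, r1 = r2 = 1, phi(-1) = -1, phi'(-1) = lam,
# transmission factors 2 and 1/2, rho = gamma = 1.
X1 = np.linspace(-1.0, 0.0, 4001)
X2 = np.linspace(0.0, 1.0, 4001)

def eigdata(lam):
    s = np.sqrt(lam)
    phi1 = -np.cos(s * (X1 + 1.0)) + lam * np.sin(s * (X1 + 1.0)) / s
    dphi1_0 = s * np.sin(s) + lam * np.cos(s)
    a, b = 2.0 * phi1[-1], 0.5 * dphi1_0              # transmission conditions (2.24)
    phi2 = a * np.cos(s * X2) + b * np.sin(s * X2) / s
    dphi2_1 = -a * s * np.sin(s) + b * np.cos(s)
    return phi1, phi2, lam * phi2[-1] - dphi2_1       # omega = W(phi_2, chi_2; 1)

omega_of = lambda lam: eigdata(lam)[2]

grid = np.linspace(0.1, 60.0, 6001)
w = np.array([omega_of(l) for l in grid])
brackets = [(grid[i], grid[i + 1]) for i in range(len(grid) - 1)
            if w[i] * w[i + 1] < 0][:2]
eigs = []
for lo, hi in brackets:
    for _ in range(70):                               # bisection refinement
        mid = 0.5 * (lo + hi)
        if omega_of(lo) * omega_of(mid) <= 0:
            hi = mid
        else:
            lo = mid
    eigs.append(0.5 * (lo + hi))

def inner(u, v):
    # Inner product (2.17): R'_1(phi) = phi(-1) = -1 and R'_2(phi) = phi_2(1).
    p1, p2, _ = u
    q1, q2, _ = v
    h = X1[1] - X1[0]
    I1 = float(np.sum((p1 * q1)[1:] + (p1 * q1)[:-1]) * 0.5 * h)
    I2 = float(np.sum((p2 * q2)[1:] + (p2 * q2)[:-1]) * 0.5 * h)
    return I1 + I2 + (-1.0) * (-1.0) + p2[-1] * q2[-1]

u0, u1 = eigdata(eigs[0]), eigdata(eigs[1])
cross = abs(inner(u0, u1)) / np.sqrt(inner(u0, u0) * inner(u1, u1))
assert cross < 1e-3
```

The normalized cross term is at the level of the quadrature error, while each diagonal term is of order one, in agreement with Corollary 2.3.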

If $\lambda_n$, $n=0,1,2,\dots$, denote the zeros of $\omega(\lambda)$, then the three-component vectors
$$\boldsymbol{\Phi}_n(x)=\begin{pmatrix}\varphi_{\lambda_n}(x)\\ R'_1(\varphi_{\lambda_n})\\ R'_2(\varphi_{\lambda_n})\end{pmatrix}\tag{2.45}$$
are the corresponding eigenvectors of the operator $\mathbf{A}$, satisfying the orthogonality relation
$$\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_m(\cdot)\rangle_{\mathbf{H}}=0\quad\text{for }n\neq m.\tag{2.46}$$
Here $\{\varphi_{\lambda_n}(\cdot)\}_{n=0}^{\infty}$ is the sequence of eigenfunctions of (2.1)-(2.4) corresponding to the eigenvalues $\{\lambda_n\}_{n=0}^{\infty}$. We denote by $\boldsymbol{\Psi}_n(\cdot)$ the normalized eigenvectors
$$\boldsymbol{\Psi}_n(x)=\frac{\boldsymbol{\Phi}_n(x)}{\|\boldsymbol{\Phi}_n(\cdot)\|_{\mathbf{H}}}=\begin{pmatrix}\psi_n(x)\\ R'_1(\psi_n)\\ R'_2(\psi_n)\end{pmatrix}.\tag{2.47}$$
Because of the simplicity of the eigenvalues, there are nonzero constants $k_n$ such that
$$\chi_{\lambda_n}(x)=k_n\,\varphi_{\lambda_n}(x),\quad x\in[-1,0)\cup(0,1],\ n=0,1,\dots.\tag{2.48}$$
To study the completeness of the eigenvectors of $\mathbf{A}$, and hence the completeness of the eigenfunctions of (2.1)-(2.4), we construct the resolvent of $\mathbf{A}$ as well as Green's function of problem (2.1)-(2.4). We assume without any loss of generality that $\lambda=0$ is not an eigenvalue of $\mathbf{A}$; otherwise, by the discreteness of the eigenvalues, we can find a real number $c$ with $c\neq\lambda_n$ for all $n$ and replace the eigenparameter $\lambda$ by $\lambda-c$. Now let $\lambda\in\mathbb{C}$ not be an eigenvalue of $\mathbf{A}$ and consider the inhomogeneous problem
$$(\lambda\mathbf{I}-\mathbf{A})\mathbf{u}(x)=\mathbf{f}(x),\quad\text{for }\mathbf{f}(x)=\begin{pmatrix}f(x)\\ f_1\\ f_2\end{pmatrix}\in\mathbf{H},\ \mathbf{u}(x)=\begin{pmatrix}u(x)\\ R'_1(u)\\ R'_2(u)\end{pmatrix}\in\mathcal{D}(\mathbf{A}),\tag{2.49}$$
where $\mathbf{I}$ is the identity operator.
Since
$$(\lambda\mathbf{I}-\mathbf{A})\mathbf{u}(x)=\begin{pmatrix}\lambda u(x)-\ell(u)(x)\\ \lambda R'_1(u)-R_1(u)\\ \lambda R'_2(u)+R_2(u)\end{pmatrix}=\begin{pmatrix}f(x)\\ f_1\\ f_2\end{pmatrix},\tag{2.50}$$
then we have
$$(\lambda-\ell)u(x)=f(x),\quad x\in[-1,0)\cup(0,1],\tag{2.51}$$
$$f_1=\lambda R'_1(u)-R_1(u),\qquad f_2=\lambda R'_2(u)+R_2(u).\tag{2.52}$$
Now we can represent the general solution of (2.51) in the form
$$u(x,\lambda)=\begin{cases}A_1\varphi_{1\lambda}(x)+B_1\chi_{1\lambda}(x),& x\in[-1,0),\\ A_2\varphi_{2\lambda}(x)+B_2\chi_{2\lambda}(x),& x\in(0,1].\end{cases}\tag{2.53}$$
Applying the method of variation of constants to (2.53), the functions $A_1(x,\lambda)$, $B_1(x,\lambda)$ and $A_2(x,\lambda)$, $B_2(x,\lambda)$ satisfy the linear systems of equations
$$\begin{aligned}
A'_1(x,\lambda)\varphi_{1\lambda}(x)+B'_1(x,\lambda)\chi_{1\lambda}(x)&=0,\\
A'_1(x,\lambda)\varphi'_{1\lambda}(x)+B'_1(x,\lambda)\chi'_{1\lambda}(x)&=\frac{f(x)}{r_1^2},\qquad x\in[-1,0),\\
A'_2(x,\lambda)\varphi_{2\lambda}(x)+B'_2(x,\lambda)\chi_{2\lambda}(x)&=0,\\
A'_2(x,\lambda)\varphi'_{2\lambda}(x)+B'_2(x,\lambda)\chi'_{2\lambda}(x)&=\frac{f(x)}{r_2^2},\qquad x\in(0,1].\end{aligned}\tag{2.54}$$
Since $\lambda$ is not an eigenvalue, $W(\varphi_{i\lambda},\chi_{i\lambda};x)=\omega(\lambda)\neq0$, $i=1,2$, and each of the linear systems in (2.54) has a unique solution, which leads to
$$\begin{aligned}
A_1(x,\lambda)&=\frac{1}{r_1^2\,\omega(\lambda)}\int_x^0 f(\xi)\chi_{1\lambda}(\xi)\,d\xi+A_1,\qquad B_1(x,\lambda)=\frac{1}{r_1^2\,\omega(\lambda)}\int_{-1}^x f(\xi)\varphi_{1\lambda}(\xi)\,d\xi+B_1,\quad x\in[-1,0),\\
A_2(x,\lambda)&=\frac{1}{r_2^2\,\omega(\lambda)}\int_x^1 f(\xi)\chi_{2\lambda}(\xi)\,d\xi+A_2,\qquad B_2(x,\lambda)=\frac{1}{r_2^2\,\omega(\lambda)}\int_0^x f(\xi)\varphi_{2\lambda}(\xi)\,d\xi+B_2,\quad x\in(0,1],\end{aligned}\tag{2.55}$$
where $A_1,A_2,B_1,B_2$ are arbitrary constants. Substituting (2.55) into (2.53), we obtain the solution of (2.51),
$$u(x,\lambda)=\begin{cases}\dfrac{\varphi_{1\lambda}(x)}{r_1^2\,\omega(\lambda)}\displaystyle\int_x^0 f(\xi)\chi_{1\lambda}(\xi)\,d\xi+\dfrac{\chi_{1\lambda}(x)}{r_1^2\,\omega(\lambda)}\displaystyle\int_{-1}^x f(\xi)\varphi_{1\lambda}(\xi)\,d\xi+A_1\varphi_{1\lambda}(x)+B_1\chi_{1\lambda}(x),& x\in[-1,0),\\[2mm] \dfrac{\varphi_{2\lambda}(x)}{r_2^2\,\omega(\lambda)}\displaystyle\int_x^1 f(\xi)\chi_{2\lambda}(\xi)\,d\xi+\dfrac{\chi_{2\lambda}(x)}{r_2^2\,\omega(\lambda)}\displaystyle\int_0^x f(\xi)\varphi_{2\lambda}(\xi)\,d\xi+A_2\varphi_{2\lambda}(x)+B_2\chi_{2\lambda}(x),& x\in(0,1].\end{cases}\tag{2.56}$$
Then from (2.52) and the transmission conditions (2.4) we get
$$A_1=\frac{1}{r_2^2\,\omega(\lambda)}\int_0^1 f(\xi)\chi_{2\lambda}(\xi)\,d\xi+\frac{f_2}{\omega(\lambda)},\quad B_1=-\frac{f_1}{\omega(\lambda)},\quad A_2=\frac{f_2}{\omega(\lambda)},\quad B_2=\frac{1}{r_1^2\,\omega(\lambda)}\int_{-1}^0 f(\xi)\varphi_{1\lambda}(\xi)\,d\xi-\frac{f_1}{\omega(\lambda)}.\tag{2.57}$$
Then (2.56) can be written as
$$u(x,\lambda)=\frac{f_2\,\varphi_\lambda(x)}{\omega(\lambda)}-\frac{f_1\,\chi_\lambda(x)}{\omega(\lambda)}+\frac{\chi_\lambda(x)}{\omega(\lambda)}\int_{-1}^x \frac{f(\xi)}{r(\xi)}\varphi_\lambda(\xi)\,d\xi+\frac{\varphi_\lambda(x)}{\omega(\lambda)}\int_x^1 \frac{f(\xi)}{r(\xi)}\chi_\lambda(\xi)\,d\xi,\quad x,\xi\in[-1,0)\cup(0,1].\tag{2.58}$$
Hence we have
$$\mathbf{u}(x)=(\lambda\mathbf{I}-\mathbf{A})^{-1}\mathbf{f}(x)=\begin{pmatrix}\dfrac{f_2\,\varphi_\lambda(x)}{\omega(\lambda)}-\dfrac{f_1\,\chi_\lambda(x)}{\omega(\lambda)}+\displaystyle\int_{-1}^{1}G(x,\xi,\lambda)\frac{f(\xi)}{r(\xi)}\,d\xi\\ R'_1(u)\\ R'_2(u)\end{pmatrix},\tag{2.59}$$
where
$$G(x,\xi,\lambda)=\begin{cases}\dfrac{\chi_\lambda(x)\varphi_\lambda(\xi)}{\omega(\lambda)},& -1\le\xi\le x\le1,\ x\neq0,\ \xi\neq0,\\[2mm] \dfrac{\chi_\lambda(\xi)\varphi_\lambda(x)}{\omega(\lambda)},& -1\le x\le\xi\le1,\ x\neq0,\ \xi\neq0,\end{cases}\tag{2.60}$$
is the unique Green's function of problem (2.1)-(2.4). Obviously $G(x,\xi,\lambda)$ is a meromorphic function of $\lambda$, for every $(x,\xi)\in([-1,0)\cup(0,1])^2$, with simple poles only at the eigenvalues.
Although Green's function looks as simple as that of classical Sturm-Liouville problems (cf., e.g., [28]), it is rather complicated because of the transmission conditions; see the example at the end of this paper.

Lemma 2.10. The operator 𝐀 is self-adjoint in 𝐇.

Proof. Since $\mathbf{A}$ is a symmetric densely defined operator, it is sufficient to show that the deficiency spaces are the null spaces, and hence $\mathbf{A}=\mathbf{A}^*$. Indeed, if $\mathbf{f}(x)=\begin{pmatrix}f(x)& f_1& f_2\end{pmatrix}^{\!\top}\in\mathbf{H}$ and $\lambda$ is a nonreal number, then taking
$$\mathbf{u}(x)=\begin{pmatrix}\dfrac{f_2\,\varphi_\lambda(x)}{\omega(\lambda)}-\dfrac{f_1\,\chi_\lambda(x)}{\omega(\lambda)}+\displaystyle\int_{-1}^{1}G(x,\xi,\lambda)\frac{f(\xi)}{r(\xi)}\,d\xi\\ R'_1(u)\\ R'_2(u)\end{pmatrix}\tag{2.61}$$
implies that $\mathbf{u}\in\mathcal{D}(\mathbf{A})$. Since $G(x,\xi,\lambda)$ satisfies conditions (2.2)-(2.4), then $(\lambda\mathbf{I}-\mathbf{A})\mathbf{u}(x)=\mathbf{f}(x)$. Now we prove that the inverse of $(\lambda\mathbf{I}-\mathbf{A})$ exists. If $\mathbf{A}\mathbf{u}(x)=\lambda\mathbf{u}(x)$, then
$$\big(\lambda-\bar\lambda\big)\langle\mathbf{u}(\cdot),\mathbf{u}(\cdot)\rangle_{\mathbf{H}}=\langle\lambda\mathbf{u}(\cdot),\mathbf{u}(\cdot)\rangle_{\mathbf{H}}-\langle\mathbf{u}(\cdot),\lambda\mathbf{u}(\cdot)\rangle_{\mathbf{H}}=\langle\mathbf{A}\mathbf{u}(\cdot),\mathbf{u}(\cdot)\rangle_{\mathbf{H}}-\langle\mathbf{u}(\cdot),\mathbf{A}\mathbf{u}(\cdot)\rangle_{\mathbf{H}}=0\quad(\text{since }\mathbf{A}\text{ is symmetric}).\tag{2.62}$$
Since $\lambda\notin\mathbb{R}$, we have $\lambda-\bar\lambda\neq0$. Thus $\langle\mathbf{u}(\cdot),\mathbf{u}(\cdot)\rangle_{\mathbf{H}}=0$, that is, $\mathbf{u}=\mathbf{0}$. Then $R(\lambda;\mathbf{A}):=(\lambda\mathbf{I}-\mathbf{A})^{-1}$, the resolvent operator of $\mathbf{A}$, exists, and
$$R(\lambda;\mathbf{A})\mathbf{f}=(\lambda\mathbf{I}-\mathbf{A})^{-1}\mathbf{f}=\mathbf{u}.\tag{2.63}$$
Take $\lambda=\pm i$. The domains of $(\mathbf{A}-i\mathbf{I})^{-1}$ and $(\mathbf{A}+i\mathbf{I})^{-1}$ are exactly $\mathbf{H}$. Consequently the ranges of $(\mathbf{A}-i\mathbf{I})$ and $(\mathbf{A}+i\mathbf{I})$ are also $\mathbf{H}$. Hence the deficiency spaces of $\mathbf{A}$ are
$$N_{-i}:=N\big(\mathbf{A}^*+i\mathbf{I}\big)=R\big(\mathbf{A}-i\mathbf{I}\big)^{\perp}=\mathbf{H}^{\perp}=\{0\},\qquad N_{i}:=N\big(\mathbf{A}^*-i\mathbf{I}\big)=R\big(\mathbf{A}+i\mathbf{I}\big)^{\perp}=\mathbf{H}^{\perp}=\{0\}.\tag{2.64}$$
Hence $\mathbf{A}$ is self-adjoint.

The next theorem is an eigenfunction expansion theorem, which is similar to that established by Fulton in [29].

Theorem 2.11. (i) For $\mathbf{u}(\cdot)\in\mathbf{H}$,
$$\|\mathbf{u}\|^2_{\mathbf{H}}=\sum_{n=0}^{\infty}\big|\langle\mathbf{u}(\cdot),\boldsymbol{\Psi}_n(\cdot)\rangle_{\mathbf{H}}\big|^2.\tag{2.65}$$
(ii) For $\mathbf{u}(\cdot)\in\mathcal{D}(\mathbf{A})$,
$$\mathbf{u}(x)=\sum_{n=0}^{\infty}\langle\mathbf{u}(\cdot),\boldsymbol{\Psi}_n(\cdot)\rangle_{\mathbf{H}}\,\boldsymbol{\Psi}_n(x),\tag{2.66}$$
with the series being absolutely and uniformly convergent in the first component on $[-1,0)\cup(0,1]$, and absolutely convergent in the remaining components.

Proof. The proof is similar to that in [29, pages 298-299].

3. Asymptotic Formulas of Eigenvalues and Eigenfunctions

Now we derive first- and second-order asymptotics of the eigenvalues and eigenfunctions similar to the classical techniques of [27, 30] and [29], see also [25, 26]. We begin by proving some lemmas.

Lemma 3.1. Let $\varphi_\lambda(x)$ be the solution of (2.1) defined in Section 2, and let $\lambda=s^2$. Then the following integral equations hold for $k=0$ and $k=1$:
$$\begin{aligned}\frac{d^k}{dx^k}\varphi_{1\lambda}(x)={}&\big(s^2\alpha'_2-\alpha_2\big)\frac{d^k}{dx^k}\cos\frac{s(x+1)}{r_1}+\big(s^2\alpha'_1-\alpha_1\big)\frac{r_1}{s}\frac{d^k}{dx^k}\sin\frac{s(x+1)}{r_1}\\&+\frac{1}{r_1s}\int_{-1}^{x}\frac{d^k}{dx^k}\sin\frac{s(x-y)}{r_1}\,q(y)\varphi_{1\lambda}(y)\,dy,\end{aligned}\tag{3.1}$$
$$\begin{aligned}\frac{d^k}{dx^k}\varphi_{2\lambda}(x)={}&\frac{\gamma_1}{\delta_1}\varphi_{1\lambda}(0)\frac{d^k}{dx^k}\cos\frac{sx}{r_2}+\frac{r_2\gamma_2}{\delta_2 s}\varphi'_{1\lambda}(0)\frac{d^k}{dx^k}\sin\frac{sx}{r_2}\\&+\frac{1}{r_2s}\int_{0}^{x}\frac{d^k}{dx^k}\sin\frac{s(x-y)}{r_2}\,q(y)\varphi_{2\lambda}(y)\,dy.\end{aligned}\tag{3.2}$$

Proof. It is enough to substitute $s^2\varphi_{1\lambda}(y)+r_1^2\varphi''_{1\lambda}(y)$ and $s^2\varphi_{2\lambda}(y)+r_2^2\varphi''_{2\lambda}(y)$ for $q(y)\varphi_{1\lambda}(y)$ and $q(y)\varphi_{2\lambda}(y)$ in the integral terms of (3.1) and (3.2), respectively, and then integrate by parts twice.

Lemma 3.2. Let $\lambda=s^2$, $\operatorname{Im}s=t$. Then the functions $\varphi_{i\lambda}(x)$ have the following asymptotic representations as $|\lambda|\to\infty$, which hold uniformly for $x\in I_i$ ($i=1,2$): if $\alpha'_2\neq0$,
$$\frac{d^k}{dx^k}\varphi_{1\lambda}(x)=s^2\alpha'_2\frac{d^k}{dx^k}\cos\frac{s(x+1)}{r_1}+O\Big(|s|^{k+1}e^{|t|(x+1)/r_1}\Big),\quad k=0,1,\tag{3.3}$$
$$\frac{d^k}{dx^k}\varphi_{2\lambda}(x)=s^2\alpha'_2\left[\frac{\gamma_1}{\delta_1}\cos\frac{s}{r_1}\frac{d^k}{dx^k}\cos\frac{sx}{r_2}-\frac{r_2\gamma_2}{\delta_2r_1}\sin\frac{s}{r_1}\frac{d^k}{dx^k}\sin\frac{sx}{r_2}\right]+O\Big(|s|^{k+1}e^{|t|(r_1x+r_2)/r_1r_2}\Big),\quad k=0,1;\tag{3.4}$$
if $\alpha'_2=0$,
$$\frac{d^k}{dx^k}\varphi_{1\lambda}(x)=s\,r_1\alpha'_1\frac{d^k}{dx^k}\sin\frac{s(x+1)}{r_1}+O\Big(|s|^{k}e^{|t|(x+1)/r_1}\Big),\quad k=0,1,\tag{3.5}$$
$$\frac{d^k}{dx^k}\varphi_{2\lambda}(x)=s\,\alpha'_1\left[\frac{\gamma_1r_1}{\delta_1}\sin\frac{s}{r_1}\frac{d^k}{dx^k}\cos\frac{sx}{r_2}+\frac{r_2\gamma_2}{\delta_2}\cos\frac{s}{r_1}\frac{d^k}{dx^k}\sin\frac{sx}{r_2}\right]+O\Big(|s|^{k}e^{|t|(r_1x+r_2)/r_1r_2}\Big),\quad k=0,1.\tag{3.6}$$

Proof. Since the proof of the formulae for $\varphi_{1\lambda}(x)$ is identical to Titchmarsh's proof of the corresponding results for $\varphi_\lambda(x)$ (see [27, Lemma 1.7, pages 9-10]), we state them without proof and prove only the formulas for $\varphi_{2\lambda}(x)$. Let $\alpha'_2\neq0$. Then, according to (3.3),
$$\varphi_{1\lambda}(0)=s^2\alpha'_2\cos\frac{s}{r_1}+O\big(|s|e^{|t|/r_1}\big),\qquad \varphi'_{1\lambda}(0)=-\frac{s^3\alpha'_2}{r_1}\sin\frac{s}{r_1}+O\big(|s|^2e^{|t|/r_1}\big).\tag{3.7}$$
Substituting (3.7) into (3.2) (for $k=0$), we get
$$\varphi_{2\lambda}(x)=\frac{\gamma_1s^2\alpha'_2}{\delta_1}\cos\frac{s}{r_1}\cos\frac{sx}{r_2}-\frac{r_2s^2\alpha'_2\gamma_2}{\delta_2r_1}\sin\frac{s}{r_1}\sin\frac{sx}{r_2}+\frac{1}{r_2s}\int_0^x\sin\frac{s(x-y)}{r_2}\,q(y)\varphi_{2\lambda}(y)\,dy+O\big(|s|e^{|t|(r_1x+r_2)/r_1r_2}\big).\tag{3.8}$$
Multiplying (3.8) by $|s|^{-2}e^{-|t|(r_1x+r_2)/r_1r_2}$ and denoting
$$F_\lambda(x)=|s|^{-2}e^{-|t|(r_1x+r_2)/r_1r_2}\,\varphi_{2\lambda}(x),\tag{3.9}$$
we get
$$\begin{aligned}F_\lambda(x)={}&|s|^{-2}e^{-|t|(r_1x+r_2)/r_1r_2}\left[\frac{\gamma_1s^2\alpha'_2}{\delta_1}\cos\frac{s}{r_1}\cos\frac{sx}{r_2}-\frac{r_2s^2\alpha'_2\gamma_2}{\delta_2r_1}\sin\frac{s}{r_1}\sin\frac{sx}{r_2}\right]\\&+\frac{1}{r_2s}\int_0^x\sin\frac{s(x-y)}{r_2}\,q(y)\,e^{-|t|(x-y)/r_2}F_\lambda(y)\,dy+O\big(|s|^{-1}\big).\end{aligned}\tag{3.10}$$
Denoting $M(\lambda):=\max_{0\le x\le1}|F_\lambda(x)|$, it follows from the last formula that
$$M(\lambda)\le\frac{|\gamma_1||\alpha'_2|}{|\delta_1|}+\frac{r_2|\alpha'_2||\gamma_2|}{|\delta_2|r_1}+\frac{M(\lambda)}{r_2|s|}\int_0^1|q(y)|\,dy+\frac{M_0}{|s|}\tag{3.11}$$
for some $M_0>0$. From this it follows that $M(\lambda)=O(1)$ as $\lambda\to\infty$, so
$$\varphi_{2\lambda}(x)=O\big(|s|^2e^{|t|(r_1x+r_2)/r_1r_2}\big).\tag{3.12}$$
Substituting (3.12) into the integral on the right of (3.8) yields (3.4) for $k=0$. The case $k=1$ of (3.4) follows by the same procedure, and the case $\alpha'_2=0$ is proved analogously.

Lemma 3.3. Let 𝜆=𝑠2, Im𝑠=𝑡. Then the characteristic function 𝜔(𝜆) has the following asymptotic representations.
Case 1. If $\beta'_2\neq0$ and $\alpha'_2\neq0$, then
$$\omega(\lambda)=\alpha'_2\beta'_2s^5\left[\frac{\gamma_1}{r_2\delta_1}\cos\frac{s}{r_1}\sin\frac{s}{r_2}+\frac{\gamma_2}{\delta_2r_1}\sin\frac{s}{r_1}\cos\frac{s}{r_2}\right]+O\Big(|s|^4e^{|t|(r_1+r_2)/r_1r_2}\Big).\tag{3.13}$$
Case 2. If $\beta'_2\neq0$ and $\alpha'_2=0$, then
$$\omega(\lambda)=\alpha'_1\beta'_2s^4\left[\frac{\gamma_1r_1}{r_2\delta_1}\sin\frac{s}{r_1}\sin\frac{s}{r_2}-\frac{\gamma_2}{\delta_2}\cos\frac{s}{r_1}\cos\frac{s}{r_2}\right]+O\Big(|s|^3e^{|t|(r_1+r_2)/r_1r_2}\Big).\tag{3.14}$$
Case 3. If $\beta'_2=0$ and $\alpha'_2\neq0$, then
$$\omega(\lambda)=\beta'_1\alpha'_2s^4\left[\frac{\gamma_1}{\delta_1}\cos\frac{s}{r_1}\cos\frac{s}{r_2}-\frac{r_2\gamma_2}{\delta_2r_1}\sin\frac{s}{r_1}\sin\frac{s}{r_2}\right]+O\Big(|s|^3e^{|t|(r_1+r_2)/r_1r_2}\Big).\tag{3.15}$$
Case 4. If $\beta'_2=0$ and $\alpha'_2=0$, then
$$\omega(\lambda)=\beta'_1\alpha'_1s^3\left[\frac{\gamma_1r_1}{\delta_1}\sin\frac{s}{r_1}\cos\frac{s}{r_2}+\frac{r_2\gamma_2}{\delta_2}\cos\frac{s}{r_1}\sin\frac{s}{r_2}\right]+O\Big(|s|^2e^{|t|(r_1+r_2)/r_1r_2}\Big).\tag{3.16}$$

Proof. The proof is immediate by substituting (3.4) and (3.6) into the representation
$$\omega(\lambda)=\big(\lambda\beta'_1+\beta_1\big)\varphi_{2\lambda}(1)-\big(\lambda\beta'_2+\beta_2\big)\varphi'_{2\lambda}(1).\tag{3.17}$$

Corollary 3.4. The eigenvalues of the problem (2.1)–(2.4) are bounded below.

Proof. Putting $s=it$ ($t>0$) in the above formulae, it follows that $|\omega(-t^2)|\to\infty$ as $t\to\infty$. Hence $\omega(\lambda)\neq0$ for $\lambda$ negative with $|\lambda|$ sufficiently large.

Now we can obtain the asymptotic approximation formula for the eigenvalues of problem (2.1)-(2.4). Since the eigenvalues coincide with the zeros of the entire function $\omega(\lambda)$, they have no finite limit point. Moreover, we know from Corollaries 2.2 and 3.4 that all eigenvalues are real and bounded below. Therefore, we may renumber them as $\lambda_0\le\lambda_1\le\lambda_2\le\cdots$, listed according to their multiplicity.

Theorem 3.5. The eigenvalues $\lambda_n=s_n^2$, $n=0,1,2,\dots$, of problem (2.1)-(2.4) have the following asymptotic representations as $n\to\infty$, provided that $\gamma_1\delta_2r_1-\gamma_2\delta_1r_2=0$.
Case 1. If $\beta'_2\neq0$ and $\alpha'_2\neq0$, then
$$s_n=\frac{r_1r_2}{r_1+r_2}(n-1)\pi+O\big(n^{-1}\big).\tag{3.18}$$
Case 2. If $\beta'_2\neq0$ and $\alpha'_2=0$, then
$$s_n=\frac{r_1r_2}{r_1+r_2}\Big(n-\frac12\Big)\pi+O\big(n^{-1}\big).\tag{3.19}$$
Case 3. If $\beta'_2=0$ and $\alpha'_2\neq0$, then
$$s_n=\frac{r_1r_2}{r_1+r_2}\Big(n-\frac12\Big)\pi+O\big(n^{-1}\big).\tag{3.20}$$
Case 4. If $\beta'_2=0$ and $\alpha'_2=0$, then
$$s_n=\frac{r_1r_2}{r_1+r_2}\,n\pi+O\big(n^{-1}\big).\tag{3.21}$$

Proof. We will only consider the first case. From (3.13), under the condition $\gamma_1\delta_2r_1=\gamma_2\delta_1r_2$, we have
$$\omega(\lambda)=\frac{\alpha'_2\beta'_2\gamma_2s^5}{\delta_2r_1}\sin\frac{(r_1+r_2)s}{r_1r_2}+O\Big(|s|^4e^{|t|(r_1+r_2)/r_1r_2}\Big).\tag{3.22}$$
We apply the well-known Rouché theorem, which asserts that if $f(\lambda)$ and $g(\lambda)$ are analytic inside and on a closed contour $C$ and $|g(\lambda)|<|f(\lambda)|$ on $C$, then $f(\lambda)$ and $f(\lambda)+g(\lambda)$ have the same number of zeros inside $C$, provided that each zero is counted according to its multiplicity. It follows that $\omega(\lambda)$ has the same number of zeros inside the contour as the leading term in (3.22). If $\lambda_0\le\lambda_1\le\lambda_2\le\cdots$ are the zeros of $\omega(\lambda)$ and $\lambda_n=s_n^2$, we have
$$s_n=\frac{r_1r_2}{r_1+r_2}(n-1)\pi+\delta_n\tag{3.23}$$
for sufficiently large $n$, where $|\delta_n|<\pi/4$. Substituting (3.23) into (3.22) shows that $\delta_n=O(n^{-1})$, which completes the proof for Case 1. The proof for the other cases is similar.
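For completeness, the final estimate can be spelled out as follows (our elaboration of the argument, under the assumptions of Case 1): at a zero $\lambda_n=s_n^2$ of $\omega$, (3.22) gives
$$\Big|\sin\frac{(r_1+r_2)s_n}{r_1r_2}\Big|=O\big(s_n^{-1}\big),$$
and writing $\frac{(r_1+r_2)s_n}{r_1r_2}=(n-1)\pi+\frac{(r_1+r_2)\delta_n}{r_1r_2}$ as in (3.23),
$$\Big|\sin\frac{(r_1+r_2)\delta_n}{r_1r_2}\Big|=O\big(s_n^{-1}\big)=O\big(n^{-1}\big).$$
Since $|\delta_n|<\pi/4$, the sine may be inverted near the origin, whence $\delta_n=O(n^{-1})$.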

Then from (3.3)-(3.6) (for $k=0$) and the above theorem, the asymptotic behavior of the eigenfunctions
$$\varphi_{\lambda_n}(x)=\begin{cases}\varphi_{1\lambda_n}(x),& x\in[-1,0),\\ \varphi_{2\lambda_n}(x),& x\in(0,1],\end{cases}\tag{3.24}$$
of (2.1)-(2.4) is given, when $\gamma_1\delta_2r_1-\gamma_2\delta_1r_2=0$, by: if $\beta'_2\neq0$, $\alpha'_2\neq0$,
$$\varphi_{\lambda_n}(x)=s_n^2\begin{cases}\alpha'_2\cos\dfrac{r_2(n-1)\pi(x+1)}{r_1+r_2}+O\big(n^{-1}\big),& x\in[-1,0),\\[1mm] \dfrac{\gamma_1\alpha'_2}{\delta_1}\cos\dfrac{(n-1)\pi(r_1x+r_2)}{r_1+r_2}+O\big(n^{-1}\big),& x\in(0,1];\end{cases}$$
if $\beta'_2\neq0$, $\alpha'_2=0$,
$$\varphi_{\lambda_n}(x)=s_n\begin{cases}r_1\alpha'_1\sin\dfrac{r_2(n-1/2)\pi(x+1)}{r_1+r_2}+O\big(n^{-1}\big),& x\in[-1,0),\\[1mm] \dfrac{\gamma_1r_1\alpha'_1}{\delta_1}\sin\dfrac{(n-1/2)\pi(r_1x+r_2)}{r_1+r_2}+O\big(n^{-1}\big),& x\in(0,1];\end{cases}$$
if $\beta'_2=0$, $\alpha'_2\neq0$,
$$\varphi_{\lambda_n}(x)=s_n^2\begin{cases}\alpha'_2\cos\dfrac{r_2(n-1/2)\pi(x+1)}{r_1+r_2}+O\big(n^{-1}\big),& x\in[-1,0),\\[1mm] \dfrac{\gamma_1\alpha'_2}{\delta_1}\cos\dfrac{(n-1/2)\pi(r_1x+r_2)}{r_1+r_2}+O\big(n^{-1}\big),& x\in(0,1];\end{cases}$$
if $\beta'_2=0$, $\alpha'_2=0$,
$$\varphi_{\lambda_n}(x)=s_n\begin{cases}r_1\alpha'_1\sin\dfrac{r_2n\pi(x+1)}{r_1+r_2}+O\big(n^{-1}\big),& x\in[-1,0),\\[1mm] \dfrac{\gamma_1r_1\alpha'_1}{\delta_1}\sin\dfrac{n\pi(r_1x+r_2)}{r_1+r_2}+O\big(n^{-1}\big),& x\in(0,1].\end{cases}\tag{3.25}$$
All these asymptotic formulae hold uniformly in $x$.
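The eigenvalue spacing can also be observed numerically (our own sketch with toy data: $q\equiv0$, $r_1=r_2=1$, $\alpha'_1=\beta'_1=\alpha_2=\beta_2=1$, $\alpha_1=\alpha'_2=\beta_1=\beta'_2=0$, $\gamma_1=\delta_2=2$, $\gamma_2=\delta_1=1$; since $r_1=r_2$, the leading term of $\omega(s^2)$ is proportional to $s^3\sin2s$, so consecutive zeros in $s$ approach the Case 4 spacing $\pi r_1r_2/(r_1+r_2)=\pi/2$ of (3.21)):

```python
import numpy as np

# Toy data (our assumption): q = 0, r1 = r2 = 1, phi(-1) = -1, phi'(-1) = lam,
# transmission factors 2 and 1/2; omega(lam) = lam*phi_2(1) - phi_2'(1).
def omega(lam):
    s = np.sqrt(lam)
    phi1_0 = -np.cos(s) + lam * np.sin(s) / s
    dphi1_0 = s * np.sin(s) + lam * np.cos(s)
    a, b = 2.0 * phi1_0, 0.5 * dphi1_0
    phi2_1 = a * np.cos(s) + b * np.sin(s) / s
    dphi2_1 = -a * s * np.sin(s) + b * np.cos(s)
    return lam * phi2_1 - dphi2_1

w = lambda s: omega(s * s)
s = np.linspace(0.5, 25.0, 50001)
vals = np.array([w(si) for si in s])
roots = []
for i in range(len(s) - 1):
    if vals[i] * vals[i + 1] < 0:
        lo, hi = s[i], s[i + 1]
        for _ in range(60):                 # bisection refinement of the zero
            mid = 0.5 * (lo + hi)
            if w(lo) * w(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))

gaps = np.diff(np.array(roots)[-6:])
assert np.all(np.abs(gaps - np.pi / 2) < 0.05)   # spacing approaches pi/2
```

The deviations of the gaps from $\pi/2$ decay like $O(1/s_n)$, matching the $O(n^{-1})$ correction in (3.21).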

4. The Sampling Theorem

In this section we derive two sampling theorems associated with problem (2.1)–(2.4). For convenience we may assume that the eigenvectors of 𝐀 are real valued.

Theorem 4.1. Consider the boundary value problem (2.1)-(2.4), and let
$$\varphi_\lambda(x)=\begin{cases}\varphi_{1\lambda}(x),& x\in[-1,0),\\ \varphi_{2\lambda}(x),& x\in(0,1],\end{cases}\tag{4.1}$$
be the solution defined above. Let $g(\cdot)\in L^2(-1,1)$ and
$$F(\lambda)=\frac{1}{r_1^2}\int_{-1}^{0}g(x)\varphi_{1\lambda}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}g(x)\varphi_{2\lambda}(x)\,dx.\tag{4.2}$$
Then $F(\lambda)$ is an entire function of exponential type $2$ that can be reconstructed from its values at the points $\{\lambda_n\}_{n=0}^{\infty}$ via the sampling formula
$$F(\lambda)=\sum_{n=0}^{\infty}F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\,\omega'(\lambda_n)}.\tag{4.3}$$
The series (4.3) converges absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$. Here $\omega(\lambda)$ is the entire function defined in (2.29).

Proof. Relation (4.2) can be rewritten as an inner product in $\mathbf{H}$:
$$F(\lambda)=\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}g(x)\varphi_{1\lambda}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}g(x)\varphi_{2\lambda}(x)\,dx,\tag{4.4}$$
where
$$\mathbf{g}(x)=\begin{pmatrix}g(x)\\ 0\\ 0\end{pmatrix},\qquad \boldsymbol{\Phi}_\lambda(x)=\begin{pmatrix}\varphi_\lambda(x)\\ R'_1(\varphi_\lambda)\\ R'_2(\varphi_\lambda)\end{pmatrix}\in\mathbf{H}.\tag{4.5}$$
Both $\mathbf{g}(\cdot)$ and $\boldsymbol{\Phi}_\lambda(\cdot)$ can be expanded in terms of the orthogonal basis of eigenvectors, that is,
$$\mathbf{g}(x)=\sum_{n=0}^{\infty}\hat{\mathbf{g}}(n)\frac{\boldsymbol{\Phi}_n(x)}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}},\qquad \boldsymbol{\Phi}_\lambda(x)=\sum_{n=0}^{\infty}\hat{\boldsymbol{\Phi}}_\lambda(n)\frac{\boldsymbol{\Phi}_n(x)}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}},\tag{4.6}$$
where $\hat{\mathbf{g}}(n)$ and $\hat{\boldsymbol{\Phi}}_\lambda(n)$ are the Fourier coefficients
$$\hat{\mathbf{g}}(n)=\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}g(x)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}g(x)\varphi_{2\lambda_n}(x)\,dx=F(\lambda_n).\tag{4.7}$$
Applying Parseval's identity to (4.4) and using (4.7), we obtain
$$F(\lambda)=\sum_{n=0}^{\infty}F(\lambda_n)\frac{\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}.\tag{4.8}$$
Now we calculate $\hat{\boldsymbol{\Phi}}_\lambda(n)=\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\rangle_{\mathbf{H}}$ and $\|\boldsymbol{\Phi}_n(\cdot)\|_{\mathbf{H}}$. Let $\lambda$ not be an eigenvalue and $n\in\mathbb{N}$. To prove (4.3) we need to show that
$$\frac{\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}=\frac{\omega(\lambda)}{(\lambda-\lambda_n)\,\omega'(\lambda_n)},\quad n=0,1,2,\dots.\tag{4.9}$$
By the definition of the inner product of $\mathbf{H}$, we have
$$\langle\boldsymbol{\Phi}_\lambda(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}\varphi_{1\lambda}(x)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\varphi_{2\lambda}(x)\varphi_{2\lambda_n}(x)\,dx+\frac{1}{\rho}R'_1(\varphi_\lambda)R'_1(\varphi_{\lambda_n})+\frac{1}{\gamma}R'_2(\varphi_\lambda)R'_2(\varphi_{\lambda_n}).\tag{4.10}$$
Since
$$\begin{aligned}(\lambda-\lambda_n)&\left[\frac{1}{r_1^2}\int_{-1}^{0}\varphi_{1\lambda}(x)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\varphi_{2\lambda}(x)\varphi_{2\lambda_n}(x)\,dx\right]\\&=W\big(\varphi_{1\lambda},\varphi_{1\lambda_n};-0\big)-W\big(\varphi_{1\lambda},\varphi_{1\lambda_n};-1\big)-W\big(\varphi_{2\lambda},\varphi_{2\lambda_n};+0\big)+W\big(\varphi_{2\lambda},\varphi_{2\lambda_n};1\big),\end{aligned}\tag{4.11}$$
then, from (2.20) and (2.24), $W(\varphi_{1\lambda},\varphi_{1\lambda_n};-1)=(\lambda-\lambda_n)\rho$ and the Wronskians at $\pm0$ cancel (using $\gamma_1\gamma_2=\delta_1\delta_2$), so (4.11) becomes
$$(\lambda-\lambda_n)\left[\frac{1}{r_1^2}\int_{-1}^{0}\varphi_{1\lambda}(x)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\varphi_{2\lambda}(x)\varphi_{2\lambda_n}(x)\,dx\right]=W\big(\varphi_{2\lambda},\varphi_{2\lambda_n};1\big)-(\lambda-\lambda_n)\rho.\tag{4.12}$$
Thus
$$\frac{1}{r_1^2}\int_{-1}^{0}\varphi_{1\lambda}(x)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\varphi_{2\lambda}(x)\varphi_{2\lambda_n}(x)\,dx=\frac{W(\varphi_{2\lambda},\varphi_{2\lambda_n};1)}{\lambda-\lambda_n}-\rho.\tag{4.13}$$
From (2.48), (2.22), and (2.8), the Wronskian of $\varphi_{2\lambda}$ and $\varphi_{2\lambda_n}$ at $x=1$ is
$$\begin{aligned}W\big(\varphi_{2\lambda},\varphi_{2\lambda_n};1\big)&=\varphi_{2\lambda}(1)\varphi'_{2\lambda_n}(1)-\varphi'_{2\lambda}(1)\varphi_{2\lambda_n}(1)=k_n^{-1}\Big[\varphi_{2\lambda}(1)\chi'_{2\lambda_n}(1)-\varphi'_{2\lambda}(1)\chi_{2\lambda_n}(1)\Big]\\&=k_n^{-1}\Big[\big(\lambda_n\beta'_1+\beta_1\big)\varphi_{2\lambda}(1)-\big(\lambda_n\beta'_2+\beta_2\big)\varphi'_{2\lambda}(1)\Big]=k_n^{-1}\Big[\omega(\lambda)+\big(\lambda_n-\lambda\big)R'_2(\varphi_\lambda)\Big].\end{aligned}\tag{4.14}$$
Relations (2.48), $R'_2(\chi_{\lambda_n})=\gamma$, and the linearity of $R'_2$ yield
$$\frac{1}{\gamma}R'_2(\varphi_\lambda)R'_2(\varphi_{\lambda_n})=\frac{1}{\gamma k_n}R'_2(\varphi_\lambda)R'_2(\chi_{\lambda_n})=\frac{1}{k_n}R'_2(\varphi_\lambda).\tag{4.15}$$
Substituting (4.13), (4.14), (4.15), and $\frac{1}{\rho}R'_1(\varphi_\lambda)R'_1(\varphi_{\lambda_n})=\rho$ (a direct computation from (2.20) gives $R'_1(\varphi_\lambda)=-\rho$ for all $\lambda$) into (4.10), we get
$$\langle\boldsymbol{\Phi}_\lambda(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\frac{\omega(\lambda)}{k_n\,(\lambda-\lambda_n)}.\tag{4.16}$$
Letting $\lambda\to\lambda_n$ in (4.16), and since the zeros of $\omega(\lambda)$ are simple, we have
$$\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}=\frac{\omega'(\lambda_n)}{k_n}.\tag{4.17}$$
Therefore from (4.16) and (4.17) we establish (4.9). Since $\lambda$ and $n$ are arbitrary, (4.3) is proved with pointwise convergence on $\mathbb{C}$, the case $\lambda=\lambda_n$ being trivial.
Now we investigate the convergence of (4.3). First we prove that it is absolutely convergent on $\mathbb{C}$. Using the Cauchy-Schwarz inequality for $\lambda\in\mathbb{C}$,
$$\sum_{n=0}^{\infty}\left|F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\,\omega'(\lambda_n)}\right|\le\left(\sum_{n=0}^{\infty}\frac{\big|\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}\left(\sum_{n=0}^{\infty}\frac{\big|\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}.\tag{4.18}$$
Since $\mathbf{g}(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\in\mathbf{H}$, both series on the right-hand side of (4.18) converge. Thus series (4.3) converges absolutely on $\mathbb{C}$. For uniform convergence let $M\subset\mathbb{C}$ be compact, $\lambda\in M$, and $N>0$. Define $\sigma_N(\lambda)$ by
$$\sigma_N(\lambda)=\left|F(\lambda)-\sum_{n=0}^{N}F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\,\omega'(\lambda_n)}\right|.\tag{4.19}$$
Using the same method as above,
$$\sigma_N(\lambda)\le\left(\sum_{n=N+1}^{\infty}\frac{\big|\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}\left(\sum_{n=N+1}^{\infty}\frac{\big|\langle\boldsymbol{\Phi}_n(\cdot),\boldsymbol{\Phi}_\lambda(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}.\tag{4.20}$$
Therefore
$$\sigma_N(\lambda)\le\|\boldsymbol{\Phi}_\lambda(\cdot)\|_{\mathbf{H}}\left(\sum_{n=N+1}^{\infty}\frac{\big|\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}.\tag{4.21}$$
Since $[-1,1]\times M$ is compact (cf., e.g., [31, page 225]), we can find a positive constant $C_M$ such that
$$\|\boldsymbol{\Phi}_\lambda(\cdot)\|_{\mathbf{H}}\le C_M,\quad \lambda\in M.\tag{4.22}$$
Then
$$\sigma_N(\lambda)\le C_M\left(\sum_{n=N+1}^{\infty}\frac{\big|\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}\tag{4.23}$$
uniformly on $M$. In view of Parseval's equality,
$$\left(\sum_{n=N+1}^{\infty}\frac{\big|\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}\big|^2}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}\right)^{1/2}\longrightarrow0\quad\text{as }N\to\infty.\tag{4.24}$$
Thus $\sigma_N(\lambda)\to0$ uniformly on $M$. Hence (4.3) converges uniformly on $M$. Consequently $F(\lambda)$ is analytic on compact subsets of $\mathbb{C}$ and hence entire. From the relation
$$|F(\lambda)|\le\frac{1}{r_1^2}\int_{-1}^{0}|g(x)|\,|\varphi_{1\lambda}(x)|\,dx+\frac{1}{r_2^2}\int_{0}^{1}|g(x)|\,|\varphi_{2\lambda}(x)|\,dx\tag{4.25}$$
and the fact that $\varphi_{1\lambda}(x)$ and $\varphi_{2\lambda}(x)$ are entire functions of exponential type $2$, we conclude that $F(\lambda)$ is also of exponential type $2$.

Remark 4.2. To see that expansion (4.3) is a Lagrange-type interpolation, we may replace $\omega(\lambda)$ by the canonical product
$$\widetilde\omega(\lambda)=\begin{cases}\displaystyle\prod_{n=0}^{\infty}\Big(1-\frac{\lambda}{\lambda_n}\Big),&\text{if none of the eigenvalues is zero},\\[2mm] \lambda\displaystyle\prod_{n=1}^{\infty}\Big(1-\frac{\lambda}{\lambda_n}\Big),&\text{if one of the eigenvalues, say }\lambda_0,\text{ is zero}.\end{cases}\tag{4.26}$$
From Hadamard's factorization theorem (see [4]), $\omega(\lambda)=h(\lambda)\widetilde\omega(\lambda)$, where $h(\lambda)$ is an entire function with no zeros. Thus
$$\frac{\omega(\lambda)}{\omega'(\lambda_n)}=\frac{h(\lambda)\,\widetilde\omega(\lambda)}{h(\lambda_n)\,\widetilde\omega'(\lambda_n)},\tag{4.27}$$
and (4.2), (4.3) remain valid for the function $F(\lambda)/h(\lambda)$. Hence
$$F(\lambda)=\sum_{n=0}^{\infty}F(\lambda_n)\frac{h(\lambda)\,\widetilde\omega(\lambda)}{h(\lambda_n)\,\widetilde\omega'(\lambda_n)\,(\lambda-\lambda_n)}.\tag{4.28}$$
We may redefine (4.2) by taking the kernel $\widetilde\varphi_\lambda(\cdot)=\varphi_\lambda(\cdot)/h(\lambda)$ to get
$$\widetilde F(\lambda)=\frac{F(\lambda)}{h(\lambda)}=\sum_{n=0}^{\infty}\widetilde F(\lambda_n)\frac{\widetilde\omega(\lambda)}{(\lambda-\lambda_n)\,\widetilde\omega'(\lambda_n)}.\tag{4.29}$$

The next theorem gives interpolation sampling expansions associated with problem (2.1)-(2.4) for integral transforms whose kernels are defined in terms of Green's function. There are many results concerning the use of Green's function in sampling theory; cf., for example, [22, 32-34]. As we see in (2.60), Green's function $G(x,\xi,\lambda)$ of problem (2.1)-(2.4) has simple poles at $\{\lambda_n\}_{n=0}^{\infty}$. Define the function $\widehat G(x,\lambda)$ to be $\widehat G(x,\lambda)=\omega(\lambda)G(x,\xi_0,\lambda)$, where $\xi_0\in[-1,0)\cup(0,1]$ is a fixed point and $\omega(\lambda)$ is the function defined in (2.29) or the canonical product (4.26).

Theorem 4.3. Let $g(\cdot)\in L^2(-1,1)$ and let $F(\lambda)$ be the integral transform
$$F(\lambda)=\frac{1}{r_1^2}\int_{-1}^{0}\widehat G(x,\lambda)g(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\widehat G(x,\lambda)g(x)\,dx.\tag{4.30}$$
Then $F(\lambda)$ is an entire function of exponential type $2$ which admits the sampling representation
$$F(\lambda)=\sum_{n=0}^{\infty}F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\,\omega'(\lambda_n)}.\tag{4.31}$$
Series (4.31) converges absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$.

Proof. The integral transform (4.30) can be written as
$$F(\lambda)=\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\mathbf{g}(\cdot)\rangle_{\mathbf{H}},\tag{4.32}$$
$$\mathbf{g}(x)=\begin{pmatrix}g(x)\\ 0\\ 0\end{pmatrix},\qquad \boldsymbol{\mathfrak{G}}(x,\lambda)=\begin{pmatrix}\widehat G(x,\lambda)\\ R'_1(\widehat G(\cdot,\lambda))\\ R'_2(\widehat G(\cdot,\lambda))\end{pmatrix}\in\mathbf{H}.\tag{4.33}$$
Applying Parseval's identity to (4.32) with respect to $\{\boldsymbol{\Phi}_n(\cdot)\}_{n=0}^{\infty}$, we obtain
$$F(\lambda)=\sum_{n=0}^{\infty}\frac{\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}\,\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}.\tag{4.34}$$
Let $\lambda\neq\lambda_n$. Since each $\boldsymbol{\Phi}_n(\cdot)$ is an eigenvector of $\mathbf{A}$,
$$(\lambda\mathbf{I}-\mathbf{A})\boldsymbol{\Phi}_n(x)=\big(\lambda-\lambda_n\big)\boldsymbol{\Phi}_n(x).\tag{4.35}$$
Thus
$$(\lambda\mathbf{I}-\mathbf{A})^{-1}\boldsymbol{\Phi}_n(x)=\frac{1}{\lambda-\lambda_n}\boldsymbol{\Phi}_n(x).\tag{4.36}$$
From (2.59) and (4.36), taking the first component at $x=\xi_0$ with $\mathbf{f}=\boldsymbol{\Phi}_n$, we obtain
$$\frac{R'_2(\varphi_{\lambda_n})}{\omega(\lambda)}\varphi_\lambda(\xi_0)-\frac{R'_1(\varphi_{\lambda_n})}{\omega(\lambda)}\chi_\lambda(\xi_0)+\frac{1}{r_1^2}\int_{-1}^{0}G\big(\xi_0,x,\lambda\big)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}G\big(\xi_0,x,\lambda\big)\varphi_{2\lambda_n}(x)\,dx=\frac{\varphi_{\lambda_n}(\xi_0)}{\lambda-\lambda_n}.\tag{4.37}$$
Using $R'_1(\varphi_{\lambda_n})=-\rho$, (2.48), and $R'_2(\chi_{\lambda_n})=\gamma$, (4.37) becomes
$$\frac{\gamma}{k_n\,\omega(\lambda)}\varphi_\lambda(\xi_0)+\frac{\rho}{\omega(\lambda)}\chi_\lambda(\xi_0)+\frac{1}{r_1^2}\int_{-1}^{0}G\big(\xi_0,x,\lambda\big)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}G\big(\xi_0,x,\lambda\big)\varphi_{2\lambda_n}(x)\,dx=\frac{\varphi_{\lambda_n}(\xi_0)}{\lambda-\lambda_n}.\tag{4.38}$$
Multiplying (4.38) by $\omega(\lambda)$ and using the symmetry $G(\xi_0,x,\lambda)=G(x,\xi_0,\lambda)$ together with $\widehat G(x,\lambda)=\omega(\lambda)G(x,\xi_0,\lambda)$, (4.38) can be rewritten as
$$\frac{\gamma}{k_n}\varphi_\lambda(\xi_0)+\rho\,\chi_\lambda(\xi_0)+\frac{1}{r_1^2}\int_{-1}^{0}\widehat G(x,\lambda)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\widehat G(x,\lambda)\varphi_{2\lambda_n}(x)\,dx=\frac{\omega(\lambda)}{\lambda-\lambda_n}\varphi_{\lambda_n}(\xi_0).\tag{4.39}$$
From the definition of $\boldsymbol{\mathfrak{G}}(\cdot,\lambda)$, we have
$$\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}\widehat G(x,\lambda)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\widehat G(x,\lambda)\varphi_{2\lambda_n}(x)\,dx+\frac{1}{\rho}R'_1(\widehat G(\cdot,\lambda))R'_1(\varphi_{\lambda_n})+\frac{1}{\gamma}R'_2(\widehat G(\cdot,\lambda))R'_2(\varphi_{\lambda_n}).\tag{4.40}$$
From formula (2.60) we get
$$R'_1\big(\widehat G(\cdot,\lambda)\big)=\chi_\lambda(\xi_0)\,R'_1(\varphi_\lambda),\qquad R'_2\big(\widehat G(\cdot,\lambda)\big)=\varphi_\lambda(\xi_0)\,R'_2(\chi_\lambda).\tag{4.41}$$
Combining (4.41), $R'_1(\varphi_\lambda)=R'_1(\varphi_{\lambda_n})=-\rho$, $R'_2(\chi_\lambda)=R'_2(\chi_{\lambda_n})=\gamma$, and (2.48) with (4.40) yields
$$\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\frac{1}{r_1^2}\int_{-1}^{0}\widehat G(x,\lambda)\varphi_{1\lambda_n}(x)\,dx+\frac{1}{r_2^2}\int_{0}^{1}\widehat G(x,\lambda)\varphi_{2\lambda_n}(x)\,dx+\rho\,\chi_\lambda(\xi_0)+\frac{\gamma}{k_n}\varphi_\lambda(\xi_0).\tag{4.42}$$
Hence (4.39) and (4.42) give
$$\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}=\frac{\omega(\lambda)}{\lambda-\lambda_n}\varphi_{\lambda_n}(\xi_0).\tag{4.43}$$
As an element of $\mathbf{H}$, $\boldsymbol{\mathfrak{G}}(\cdot,\lambda)$ has the eigenvector expansion
$$\boldsymbol{\mathfrak{G}}(x,\lambda)=\sum_{i=0}^{\infty}\frac{\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\boldsymbol{\Phi}_i(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_i(\cdot)\|^2_{\mathbf{H}}}\boldsymbol{\Phi}_i(x)=\sum_{i=0}^{\infty}\frac{\omega(\lambda)}{\lambda-\lambda_i}\varphi_{\lambda_i}(\xi_0)\frac{\boldsymbol{\Phi}_i(x)}{\|\boldsymbol{\Phi}_i(\cdot)\|^2_{\mathbf{H}}}.\tag{4.44}$$
Taking the limit as $\lambda\to\lambda_n$ in (4.32), we get
$$F(\lambda_n)=\lim_{\lambda\to\lambda_n}\langle\boldsymbol{\mathfrak{G}}(\cdot,\lambda),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}.\tag{4.45}$$
The interchange of the limit and summation is justified by the uniform convergence of the eigenvector expansion of $\boldsymbol{\mathfrak{G}}(x,\lambda)$ on $[-1,1]$ for $\lambda$ in compact subsets of $\mathbb{C}$; cf. (2.60), (3.3)-(3.6), and (3.18)-(3.21). Making use of (4.44), we may rewrite (4.45) as
$$F(\lambda_n)=\lim_{\lambda\to\lambda_n}\sum_{i=0}^{\infty}\frac{\omega(\lambda)}{\lambda-\lambda_i}\varphi_{\lambda_i}(\xi_0)\frac{\langle\boldsymbol{\Phi}_i(\cdot),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_i(\cdot)\|^2_{\mathbf{H}}}=\omega'(\lambda_n)\,\varphi_{\lambda_n}(\xi_0)\frac{\langle\boldsymbol{\Phi}_n(\cdot),\mathbf{g}(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}},\tag{4.46}$$
where the interchange of the limit and summation is justified by the asymptotic behavior of $\boldsymbol{\Phi}_i(x)$ and $\omega(\lambda)$. If $\varphi_{\lambda_n}(\xi_0)\neq0$, then (4.46) gives
$$\frac{\langle\mathbf{g}(\cdot),\boldsymbol{\Phi}_n(\cdot)\rangle_{\mathbf{H}}}{\|\boldsymbol{\Phi}_n(\cdot)\|^2_{\mathbf{H}}}=\frac{F(\lambda_n)}{\omega'(\lambda_n)\,\varphi_{\lambda_n}(\xi_0)}.\tag{4.47}$$
Combining (4.43), (4.47), and (4.34), we get (4.31) under the assumption that $\varphi_{\lambda_n}(\xi_0)\neq0$ for all $n$.
If $\varphi_{\lambda_n}(\xi_0)=0$ for some $n$, the same expansion holds with $F(\lambda_n)=0$. The convergence properties as well as the analytic and growth properties are established as in Theorem 4.1.

Now, we give an example exhibiting the obtained results.

Example 4.4. The boundary value problem
$$-y''(x)+q(x)y(x)=\lambda y(x),\quad x\in[-1,0)\cup(0,1],$$
$$y'(-1)=-\lambda y(-1),\qquad y'(1)=\lambda y(1),$$
$$2y(-0)-y(+0)=0,\qquad y'(-0)-2y'(+0)=0,\tag{4.48}$$
is a special case of problem (2.1)-(2.4) with $\alpha'_1=\beta'_1=\alpha_2=\beta_2=r_1=r_2=1$, $\alpha_1=\alpha'_2=\beta_1=\beta'_2=0$, $\gamma_1=\delta_2=2$, $\gamma_2=\delta_1=1$, and
$$q(x)=\begin{cases}-1,& x\in[-1,0),\\ -2,& x\in(0,1].\end{cases}\tag{4.49}$$
Then $\rho=\gamma=1>0$ and $\gamma_1\gamma_2=\delta_1\delta_2=2$. With $\zeta_{1\lambda}=\sqrt{\lambda+1}$ and $\zeta_{2\lambda}=\sqrt{\lambda+2}$, so that $\lambda=2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}$, the solutions $\varphi_\lambda(\cdot)$ and $\chi_\lambda(\cdot)$ are
$$\varphi_\lambda(x)=\begin{cases}\varphi_{1\lambda}(x)=\dfrac{2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}}{\zeta_{1\lambda}}\sin\zeta_{1\lambda}(x+1)-\cos\zeta_{1\lambda}(x+1),& x\in[-1,0),\\[2mm] \varphi_{2\lambda}(x)=2\left[\dfrac{2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}}{\zeta_{1\lambda}}\sin\zeta_{1\lambda}-\cos\zeta_{1\lambda}\right]\cos\zeta_{2\lambda}x+\dfrac{\big(2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}\big)\cos\zeta_{1\lambda}+\zeta_{1\lambda}\sin\zeta_{1\lambda}}{2\,\zeta_{2\lambda}}\sin\zeta_{2\lambda}x,& x\in(0,1],\end{cases}$$
$$\chi_\lambda(x)=\begin{cases}\chi_{1\lambda}(x)=\dfrac12\left[\cos\zeta_{2\lambda}-\dfrac{2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}}{\zeta_{2\lambda}}\sin\zeta_{2\lambda}\right]\cos\zeta_{1\lambda}x+\dfrac{2\zeta_{2\lambda}\sin\zeta_{2\lambda}+2\big(2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}\big)\cos\zeta_{2\lambda}}{\zeta_{1\lambda}}\sin\zeta_{1\lambda}x,& x\in[-1,0),\\[2mm] \chi_{2\lambda}(x)=\cos\zeta_{2\lambda}(x-1)+\dfrac{2\zeta^2_{1\lambda}-\zeta^2_{2\lambda}}{\zeta_{2\lambda}}\sin\zeta_{2\lambda}(x-1),& x\in(0,1].\end{cases}\tag{4.50}$$
Here the characteristic function is
$$\omega(\lambda)=2\left[\frac{\lambda}{\zeta_{1\lambda}}\sin\zeta_{1\lambda}-\cos\zeta_{1\lambda}\right]\big(\lambda\cos\zeta_{2\lambda}+\zeta_{2\lambda}\sin\zeta_{2\lambda}\big)+\frac12\big(\lambda\cos\zeta_{1\lambda}+\zeta_{1\lambda}\sin\zeta_{1\lambda}\big)\left[\frac{\lambda}{\zeta_{2\lambda}}\sin\zeta_{2\lambda}-\cos\zeta_{2\lambda}\right].\tag{4.51}$$
By Theorem 4.1, the transform
$$F(\lambda)=\int_{-1}^{0}g(x)\varphi_{1\lambda}(x)\,dx+\int_{0}^{1}g(x)\varphi_{2\lambda}(x)\,dx\tag{4.52}$$
has the expansion
$$F(\lambda)=\sum_{n=0}^{\infty}F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\,\omega'(\lambda_n)},\tag{4.53}$$
where $\omega'(\lambda_n)$ is obtained from (4.51) by direct differentiation, using $d\zeta_{i\lambda}/d\lambda=1/(2\zeta_{i\lambda})$, $i=1,2$; the fully expanded expression (4.54) is lengthy and is not reproduced here. Green's function (2.60) takes the form
$$G(x,\xi,\lambda)=\frac{1}{\omega(\lambda)}\begin{cases}\varphi_{1\lambda}(\xi)\chi_{1\lambda}(x),& -1\le\xi\le x<0,\\ \varphi_{1\lambda}(x)\chi_{1\lambda}(\xi),& -1\le x\le\xi<0,\\ \varphi_{1\lambda}(\xi)\chi_{2\lambda}(x),& -1\le\xi<0,\ 0<x\le1,\\ \varphi_{1\lambda}(x)\chi_{2\lambda}(\xi),& -1\le x<0,\ 0<\xi\le1,\\ \varphi_{2\lambda}(\xi)\chi_{2\lambda}(x),& 0<\xi\le x\le1,\\ \varphi_{2\lambda}(x)\chi_{2\lambda}(\xi),& 0<x\le\xi\le1,\end{cases}\tag{4.55}$$
with $\varphi_{i\lambda}$, $\chi_{i\lambda}$, and $\omega(\lambda)$ given by (4.50) and (4.51). Finally, fixing $\xi_0\in[-1,0)\cup(0,1]$ and letting $\widehat G(x,\lambda)=\omega(\lambda)G(x,\xi_0,\lambda)$, the transform
$$F(\lambda)=\int_{-1}^{0}\widehat G(x,\lambda)g(x)\,dx+\int_{0}^{1}\widehat G(x,\lambda)g(x)\,dx\tag{4.56}$$
has, by Theorem 4.3, a sampling representation of the form (4.31).