Research Article | Open Access
Pedro Almenar, Lucas Jódar, "The Distance between Points of a Solution of a Second Order Linear Differential Equation Satisfying General Boundary Conditions", Abstract and Applied Analysis, vol. 2014, Article ID 126713, 17 pages, 2014. https://doi.org/10.1155/2014/126713
The Distance between Points of a Solution of a Second Order Linear Differential Equation Satisfying General Boundary Conditions
This paper presents a method to obtain lower and upper bounds for the minimum distance between two points of the solution of a second order linear differential equation satisfying general separated boundary conditions. The method is based on the recursive application of a linear operator to certain functions, a recursion that makes these bounds converge to the exact distance between the points as the recursion index grows. The method covers conjugacy and disfocality as particular cases.
In a recent paper of the authors it was shown that the recursive application of the operator defined by (1), where the coefficient function is continuous on the interval under study and strictly positive almost everywhere on that same interval, provides a method to determine whether the second order linear differential equation (2) is left disfocal or left nondisfocal in a given interval, the concept of left disfocality alluding to the nonexistence of a nontrivial solution of (2) with a zero of its derivative followed by a zero of the solution in that interval (a similar definition exists for right disfocality). The method was based on two features of the operator, namely: (i) the fact that the operator is positive and monotonic, so that ordering between functions is preserved by its application; (ii) the fact that the iterates of the operator grow without bound when (2) is left nondisfocal in an interval interior to the one under study, and remain bounded for any starting function as long as (2) is left disfocal in it.
The purpose of this paper is to extend the results of that paper to (2) with the more general separated boundary conditions (3), which contain conjugacy and nondisfocality as particular cases, by means of the recursive application of the operator defined in (4), where the kernel is the Green function of the problem (5). In consequence this paper will yield criteria (in fact infinitely many) to obtain lower and upper bounds for the minimum distance between points satisfying (3) when the function is a solution of (2), bounds which will be shown to converge to the exact value of such a minimum distance as the recursion index grows. Besides extending the previous results to the problem (2)-(3), other results based on the same strategies will also be introduced, and it will be shown that they improve the earlier ones in many cases.
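The operator and the Green function above lend themselves to direct numerical experimentation. The sketch below is a minimal illustration, assuming the conjugate (Dirichlet) special case of (5), an equation of the form y'' + q(x)y = 0, and an operator acting as (Tu)(x) = ∫ G(x,t) q(t) u(t) dt; the function names and the trapezoid quadrature are ours, not the paper's.

```python
import math

def green_dirichlet(x, t, a, b):
    """Green's function of -u'' = f with u(a) = u(b) = 0 (the conjugate case);
    it is positive for x, t strictly inside (a, b)."""
    if x <= t:
        return (x - a) * (b - t) / (b - a)
    return (t - a) * (b - x) / (b - a)

def apply_T(u, q, a, b, x, n=400):
    """Evaluate (Tu)(x) = integral_a^b G(x,t) q(t) u(t) dt by the trapezoid rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 1.0 if 0 < i < n else 0.5
        total += w * green_dirichlet(x, t, a, b) * q(t) * u(t)
    return h * total

# Sanity check: for q = 1 on [0, pi], sin is an eigenfunction of T with
# eigenvalue 1, because y'' + y = 0 admits sin with sin(0) = sin(pi) = 0.
val = apply_T(math.sin, lambda t: 1.0, 0.0, math.pi, math.pi / 2)
print(val)  # close to sin(pi/2) = 1
```

The quadrature places a node at the kink of the Green function, so the trapezoid rule keeps its second-order accuracy here.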
Operators of the type (4) have often been used in the determination of Lyapunov inequalities for different types of equations and boundary conditions since Nehari first noted that a solution of (2) vanishing at two points satisfies an integral identity from which a Lyapunov-type inequality follows. Beesack, Hinton, Levin [5, 6], and Reid [7–9] are other remarkable examples of such a use, as the excellent monograph on Lyapunov inequalities [10, Chapter 1] shows. Although formulae like (6) can be applied recursively, under suitable positivity conditions, to obtain more complex versions of (9), the fact is that the iterative application of the operator has rarely been proposed in the literature, with the exception of Harris, who suggested its application in the disfocal case without getting to prove that it guaranteed any improvement. We will show in this paper that under certain boundary conditions (3) the Green function is positive, so that the recursive application is possible and provides lower and upper bounds which improve all existing results to date. We will also show that even in the case where the Green function becomes negative it is still sometimes possible to obtain upper and lower bounds for the distance between the extremes which are as close to the real distance as desired.
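For context, the classical Lyapunov inequality that grew out of such Nehari-type arguments states that a nontrivial solution of y'' + q(x)y = 0 vanishing at both a and b forces ∫_a^b q(t) dt > 4/(b−a); this is the crude one-step bound that recursive schemes sharpen. A minimal check on our own example (not taken from the paper):

```python
import math

def lyapunov_lhs(q, a, b, n=1000):
    """Integral of q over [a, b] (composite trapezoid rule)."""
    h = (b - a) / n
    s = 0.5 * (q(a) + q(b)) + sum(q(a + i * h) for i in range(1, n))
    return h * s

# y'' + y = 0 has the solution sin with zeros at 0 and pi, so the classical
# Lyapunov inequality  int_a^b q dt > 4 / (b - a)  must hold with q = 1:
a, b = 0.0, math.pi
lhs = lyapunov_lhs(lambda t: 1.0, a, b)   # = pi
rhs = 4.0 / (b - a)                        # = 4/pi, about 1.273
print(lhs > rhs)  # True
```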
It is also worth remarking on the significant interest that the calculation of lower bounds (i.e., Lyapunov-type inequalities) has enjoyed in comparison with the problem of the determination of upper bounds, regardless of the type of boundary conditions (3). This fact was already noted by Došlý for the conjugate case and by the authors in [1, 13, 14] for the nondisfocal case. References [12, 15–20] are notable exceptions to this trend.
As indicated in the first paragraph, throughout the paper we will assume that the coefficient function is continuous on the interval under study and strictly positive almost everywhere on it. This allows defining the inner product (10) for continuous functions (it is easy to prove that (10) satisfies all the conditions required of an inner product) and the associated norm (11).
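Assuming the inner product (10) has the usual weighted form ⟨u, v⟩ = ∫ q(t) u(t) v(t) dt, with the norm (11) induced by it (the display is missing here, so this is our reading), a small numerical sketch:

```python
import math

def inner_product(u, v, q, a, b, n=1000):
    """Weighted inner product  <u, v> = integral_a^b q(t) u(t) v(t) dt
    (composite Simpson's rule; n must be even)."""
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * q(t) * u(t) * v(t)
    return h * s / 3.0

def norm(u, q, a, b, n=1000):
    """Norm induced by the weighted inner product."""
    return math.sqrt(inner_product(u, u, q, a, b, n))

# With q = 1 on [0, pi]:  ||sin||^2 = integral of sin^2 = pi/2.
print(norm(math.sin, lambda t: 1.0, 0.0, math.pi))  # ~ sqrt(pi/2), about 1.2533
```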
Likewise, we will use one notation to name the operator defined in (4), another to name the function resulting from the application of the operator to a given function, and a third to name the value of that function at a point; corresponding variants will be used when other extremes of integration, potentially different from those of (4), appear. In that latter case, any norm appearing in the same formula will be assumed to be calculated with those extremes as integration limits in (11).
We will say that a pair of points is interior to an interval if both points lie between the extremes of that interval.
The organization of the paper is as follows. Section 2 will present the main properties of the operator . Sections 3 and 4 will apply these properties in different ways to find upper and lower bounds for the minimum distance between points and satisfying (3). Section 5 will introduce some formulae which simplify the calculations required in Section 3. Section 6 will apply the method to several examples. Finally Section 7 will draw several conclusions.
2. The Operator
The purpose of this section is to present the main properties of the operator defined in (4), with the coefficient as specified in the Introduction. As in the authors' previous paper, for the sake of clarity such properties will be presented in several lemmas leading to Theorem 5, which can be regarded as the key result of this section.
Lemma 1. The operator is linear. In addition, if the Green function is positive almost everywhere on its domain, then the operator is positive and monotonic.
Proof. The linearity of the operator is evident. If the Green function is positive almost everywhere, the positiveness (and in consequence the monotonicity) of the operator follows from the fact that the coefficient function is positive almost everywhere by hypothesis.
Lemma 2. The operator is compact.
Lemma 3. If any of the conditions (12) are met, then the operator is self-adjoint.
Proof. Let us define two auxiliary functions as the solutions of the corresponding initial value problems. A straightforward calculation shows that if any of the conditions (12) are met, then the Wronskian of these two functions is not zero, which implies that they are not linearly dependent and that therefore no nontrivial solution of (2)-(3) exists. In consequence, from [22, Theorem IX.3], one obtains (16). Now, in order to prove self-adjointness, we need to prove that the operator is symmetric with respect to the inner product (10). Thus, from (10) we have (17). Combining (17) and (16) one gets (18), and integrating by parts the right-hand side of (18) one finally obtains the desired identity.
Lemma 4. The operator is bounded and verifies the corresponding norm estimates, where the norm is defined as in (11).
Theorem 5. If any of the conditions (12) are met, then the operator has a countably infinite number of eigenvalues and associated orthonormal eigenfunctions, which allow expressing the iterates of the operator as in (22). Moreover: (i) if no nontrivial solution of (2) satisfies (3) either at the extremes or at any pair of points interior to them, then (23) and (24) hold; (ii) if there is a nontrivial solution of (2) that satisfies (3) at some pair of points interior to the extremes, then (25) holds. In addition, if the starting function is not orthogonal to the first eigenfunction, then (26) and (27) hold, the sign in the latter corresponding to that of the first coefficient of the expansion.
Proof. Let us consider the eigenvalue problem (28).
From the theory of ordinary differential equations (see [22, Theorems V.8 and V.9]) it is known that there exists a countably infinite number of eigenvalues of (28), which form an increasing sequence tending to infinity, each of which has its corresponding eigenfunction (orthonormal with respect to the norm (11)), and that the set of eigenfunctions forms an orthonormal basis. Applying the operator to these eigenfunctions and integrating by parts it is easy to show (29), which implies that the eigenfunctions of (28) are also eigenfunctions of the operator, with the corresponding reciprocal eigenvalues. Since from Lemmas 1, 2, and 3 the operator is linear, compact, and self-adjoint, we can apply [21, Theorem 7.5.2] and represent it in the canonical form (30).
Applying the operator again to (30) yields (31), given the orthonormality of the eigenfunctions. Applying the operator recursively to (31) one gets (32), which is in fact (22). The application of Parseval's identity (see [21, Lemma 1.5.14]) to (32) leads to (33). Since the eigenvalues form an increasing sequence, from (33) it is clear that (34) holds.
Now, let us note that if (2) does not have any nontrivial solution that satisfies (3) either at the extremes or at any pair of points interior to them, the first eigenvalue (and therefore all the others) must be strictly greater than 1. In that case, from (34) one has (23) and (35).
From (35) and Lemma 4 one gets (24).
Likewise, if there is a nontrivial solution of (2) that satisfies (3) at some pair of points interior to the extremes, then the first eigenvalue must be less than or equal to 1. From this and (33) one gets (25). If, in addition, the starting function is not orthogonal to the first eigenfunction, from (33) we get (36), which is (26). On the other hand, we can write (37). Dividing both sides of (37) by the dominant term yields (38). Applying Parseval's identity to (38) one gets (39). From (39) one obtains (40), which implies that there exists an index such that (41) holds. From Lemma 4 and (41) one has (42) from that index onwards; that is, (43). Since by hypothesis the first coefficient is nonzero, (43) leads to (27).
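The mechanism behind (32)–(34) can be observed numerically: the iterates align with the first eigenfunction, and the ratio of successive norms approaches the reciprocal of the first eigenvalue. A sketch for the conjugate case with q ≡ 1 on [0, π] (our own discretization; for this data the eigenvalues of the differential problem are k², so the limiting ratio is 1):

```python
import math

def iterate_T(vals, ts, a, b):
    """One discrete application of (Tu)(x) = int G(x,t) q(t) u(t) dt with
    q = 1 and the Green's function of -u'' = f, u(a) = u(b) = 0 (trapezoid)."""
    h = ts[1] - ts[0]
    out = []
    for x in ts:
        s = 0.0
        for i, t in enumerate(ts):
            g = (x - a) * (b - t) / (b - a) if x <= t else (t - a) * (b - x) / (b - a)
            w = 1.0 if 0 < i < len(ts) - 1 else 0.5
            s += w * g * vals[i]
        out.append(h * s)
    return out

a, b, n = 0.0, math.pi, 200
ts = [a + i * (b - a) / n for i in range(n + 1)]
u = [t * (b - t) for t in ts]   # any start not orthogonal to sin
ratio = None
for _ in range(6):
    v = iterate_T(u, ts, a, b)
    ratio = max(map(abs, v)) / max(map(abs, u))
    u = v
print(ratio)  # ~ 1: the reciprocal of the first eigenvalue of (28) here
```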
Remark 6. Theorem 5 provides two types of methods to obtain upper and lower bounds for the minimum distance between points satisfying (3): one based on comparing the norm of the iterates with certain constants, and another based on comparing the values of the iterates at concrete points for different starting functions. These will be addressed separately in the next two sections.
3. Bounds for the Distance between the Extremes Based on the Norm of the Iterates
As stated before, Theorem 5 provides methods to get upper and lower bounds for the minimum distance between points satisfying (3), which are based on the comparison of the norm of the iterates with certain thresholds, for different values of the extremes of integration of (4) and (11).
Thus, on the one hand, if the extremes are such that there is a nontrivial solution of (2) which satisfies (3) at some pair of points interior to them, from (26) (and as long as the starting function is “close” to the first eigenfunction) it is clear that the norm of the iterates will grow with the index regardless of the choice of extremes, and accordingly there will be an index for which (23) is violated. This allows us to define an algorithm to find progressively better “outer” bounds of the values satisfying (3) by fixing one of them and calculating, for different values of the index, the extreme values at which the norm first exceeds the threshold of (23), as the following theorem shows.
Theorem 7. Assume that there exists a nontrivial solution of (2) satisfying (3) at the sought extremes. Let the sequence of bounds be defined by (44), where the starting function is continuous and not orthogonal to the first eigenfunction, and the extremes involved fulfill any of the conditions (12) for each index. Then every term of the sequence is an outer bound of the sought extreme, and the sequence tends to it as the index grows to infinity.
Proof. The fact that the terms are outer bounds is obvious from (23). Now, let us fix an arbitrary tolerance. From (26) the norm of the iterates grows without bound, which means that there exists an index such that (45) holds. Given that (45) holds from that index onwards, from (23) and the continuity of the norm as a function of the extreme of integration (continuity guaranteed by the hypothesis), there must exist, within the tolerance of the sought extreme, a value at which the bound of (44) is attained for each such index. This proves the second assertion of the theorem.
The application of the method based on Theorem 7 is quite straightforward, given that the right-hand side of (44) is easy to calculate once the starting function is fixed. However, it is worth remarking that, from (34), the closer the selected function is to the first eigenfunction, the smaller the higher-order terms will be and the faster the sequence will converge. Therefore, although the method can work with any admissible continuous function, it is desirable to select one as close as possible to the expected first eigenfunction of the problem (28).
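A toy version of the resulting algorithm, in the spirit of Theorem 7 but with our own discretization, for the conjugate case with q ≡ 1 (where the exact minimum distance between zeros is π): bisect on the right extreme until the growth/decay behavior of the iterates flips.

```python
import math

def growth_ratio(a, b, n=120, iters=6):
    """Estimate the largest eigenvalue of T (q = 1, Dirichlet Green's function
    on [a, b]) from the norm ratio of successive iterates: > 1 means a pair of
    conjugate points fits inside [a, b], < 1 means it does not."""
    ts = [a + i * (b - a) / n for i in range(n + 1)]
    h = ts[1] - ts[0]
    u = [(t - a) * (b - t) for t in ts]
    r = None
    for _ in range(iters):
        v = []
        for x in ts:
            s = 0.0
            for i, t in enumerate(ts):
                g = (x - a) * (b - t) / (b - a) if x <= t else (t - a) * (b - x) / (b - a)
                w = 1.0 if 0 < i < n else 0.5
                s += w * g * u[i]
            v.append(h * s)
        r = max(map(abs, v)) / max(map(abs, u))
        u = v
    return r

# Bisect on the right extreme: for q = 1 the critical length is exactly pi.
lo, hi = 2.5, 3.5
for _ in range(20):
    mid = 0.5 * (lo + hi)
    if growth_ratio(0.0, mid) > 1.0:
        hi = mid        # iterates grow: interval already too long
    else:
        lo = mid        # iterates shrink: interval still too short
print(hi)  # ~ pi
```

The bisection converges to the critical length from outside, mirroring the “outer” bounds of Theorem 7.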
On the other hand, if the extremes are such that no nontrivial solution of (2) satisfies (3) either at them or at any pair of points interior to them, from (24) it is clear that the norm of the iterates will shrink with the index regardless of the choice of the starting function, and accordingly there will be an index for which (25) is violated. As happened before, this allows us to define an algorithm to find progressively better “inner” bounds of the values satisfying (3) by fixing one of them and calculating the extreme values which satisfy (46), where the threshold is any lower bound of the right-hand side of (25) which has (in turn) a positive lower bound that does not depend on the index. This is shown in the following theorem.
Theorem 8. Assume that there exists a nontrivial solution of (2) satisfying (3) at the sought extremes. Let the sequence of bounds be defined by (46), where the starting function is continuous and the extremes involved fulfill any of the conditions (12) for each index. Then every term of the sequence is an inner bound of the sought extreme, and the sequence tends to it as the index grows to infinity.
Proof. Again, the fact that the terms are inner bounds is obvious from (25) and (46). Now, let us fix an arbitrary tolerance. From (24), the norm of the iterates shrinks with the index. This and the fact that the threshold of (46) is bounded below by a positive amount which does not vary with the index imply that there exists an index such that (47) holds. Given that (47) holds from that index onwards, from (25) and the continuity of the norm as a function of the extreme of integration, there must exist, within the tolerance of the sought extreme, a value at which the bound of (46) is attained for each such index. This proves the second assertion of the theorem.
Unlike what happens with the method based on Theorem 7, the method based on Theorem 8 presents some difficulties in its application, due to the fact that neither the first eigenfunction nor the eigenvalues are known. We can overcome them partially by discarding, in the right-hand side of (25), the terms of the series of eigenfunctions beyond the first one, that is, by converting (25) into (48), given that the discarded terms are positive (in fact a constant independent of the index can be a positive lower bound for some types of terms used in (46)). The resulting method to obtain lower bounds for the distance between the extremes will work in the same way as the one described in the previous paragraphs, at the expense of requiring greater values of the index to violate (48). But even with such a simplification there is still a need to obtain a lower bound for the remaining first term, which is not evident at all.
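The first-term-only simplification can be sanity-checked numerically: on an interval too short to carry a pair of points satisfying (3) (conjugate case, q ≡ 1, length 2.5 < π in this hypothetical example of ours), the norm of the iterates decays geometrically at the rate predicted by the reciprocal first eigenvalue, here (2.5/π)².

```python
import math

a, b, n = 0.0, 2.5, 200
ts = [a + i * (b - a) / n for i in range(n + 1)]
h = ts[1] - ts[0]
u = [(t - a) * (b - t) for t in ts]
ratio = None
for _ in range(6):
    v = []
    for x in ts:
        s = 0.0
        for i, t in enumerate(ts):
            # Green's function of -u'' = f, u(a) = u(b) = 0 (trapezoid rule)
            g = (x - a) * (b - t) / (b - a) if x <= t else (t - a) * (b - x) / (b - a)
            w = 1.0 if 0 < i < n else 0.5
            s += w * g * u[i]
        v.append(h * s)
    ratio = max(map(abs, v)) / max(map(abs, u))
    u = v
print(ratio)  # ~ (2.5/pi)^2, about 0.633: geometric decay, no conjugate pair fits
```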
The following lemmas aim at finding cases (fundamentally, choices of the starting function and boundary conditions (3)) where the quantities involved are bounded in a way that allows getting a lower bound for the right-hand side of (25) (or for the right-hand side of (48) if there is no way to do it with (25)), overcoming the problem.
Proof of Lemma 9. From (10) one has (51). And since the first eigenfunction satisfies (28), one obtains (52). As the kernel of the operator is the Green function of the problem (5), the resulting integral is the value of the function satisfying (3) whose second derivative is the corresponding integrand. This proves (49). As for (50), note simply the identity (53).
Lemma 10. Assume the stated sign conditions on the coefficients of the boundary conditions (3). Let the first eigenvalue and eigenfunction be defined as in Theorem 5. Suppose that the coefficient function is positive on the interval and that it can be decomposed as a quotient of two nondecreasing functions. Depending on the signs of the boundary coefficients, one has the four cases (54), (55), (56), and (57).
Proof. First of all, let us note that the coefficient function can always be decomposed in the mentioned manner, given that it is positive, so that its logarithm exists on the interval and can be expressed as the difference between two increasing functions; taking exponentials, this yields the desired decomposition with both factors nondecreasing. Now let us consider the auxiliary functional. Using integration by parts and the fact that the eigenfunctions are orthonormal with respect to the norm (11), it is easy to prove (60). From (3), (61), and the hypothesis one gets (62). And given that both factors of the decomposition are increasing on the interval and that the first eigenfunction verifies (28), one has (63), that is, one auxiliary function is increasing on the interval, and (64), that is, the other is decreasing. The application of (60), (63), and (64) to (62) leads to (65). From (3), (60), and (65) it is straightforward to obtain (54)–(57).
Lemma 11. Let the first eigenvalue and eigenfunction be defined as in Theorem 5. Then one has (66), with the corresponding constant defined below.
Proof. From [23, Theorem 8], if we define the angle function by (68), where the shift is a real constant, then the angle satisfies the equation (69), being, therefore, a decreasing function. Let us fix the range of the angle function. In that case: (i) in the first sign case of the boundary coefficients we set one value of the constant, and otherwise the other; (ii) in the second case we define the corresponding value and, otherwise, distinguish two further subcases. With this in mind, we will also define the constant (70), which depends only on the boundary coefficients of (3) and is related to the angle distance between the extremes due to (3). Since the first eigenfunction satisfies (3) with one more zero in the interval than the extremal solution, integrating (69) and taking (70) into account we obtain (71). Evaluating (71) at the appropriate point one has (72). The problem for (72) to be used to get upper bounds is the dependence of its right-hand side on the eigenfunction. We can eliminate it by obtaining an upper bound independent of the eigenfunction; accordingly we will define (73). From (70), (72), and (73) one obtains (74), and taking squares in (74) one finally gets (66).
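The angle function of this proof is of Prüfer type. As an illustration (using the standard Prüfer transformation with an increasing-angle sign convention, which may differ from the decreasing convention of the paper), for y'' + q(x)y = 0 the substitution y = r sin θ, y' = r cos θ gives θ' = cos²θ + q sin²θ, and consecutive zeros of y correspond to θ advancing by π:

```python
import math

def prufer_angle(q, x0, x1, theta0=0.0, steps=2000):
    """Integrate the Prufer equation theta' = cos(theta)**2 + q(x)*sin(theta)**2
    for y'' + q(x) y = 0 with classic RK4; zeros of y occur where theta
    crosses integer multiples of pi."""
    h = (x1 - x0) / steps
    f = lambda x, th: math.cos(th) ** 2 + q(x) * math.sin(th) ** 2
    th, x = theta0, x0
    for _ in range(steps):
        k1 = f(x, th)
        k2 = f(x + h / 2, th + h * k1 / 2)
        k3 = f(x + h / 2, th + h * k2 / 2)
        k4 = f(x + h, th + h * k3)
        th += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return th

# For y'' + 4y = 0, y = sin(2x) has consecutive zeros at 0 and pi/2,
# so theta should advance from 0 to pi over [0, pi/2]:
theta = prufer_angle(lambda x: 4.0, 0.0, math.pi / 2)
print(theta)  # ~ pi
```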
Theorem 12. Let the first eigenvalue and eigenfunction be defined as in Theorem 5, the lower bounds be defined as in Lemma 10, and the constants be defined as in Lemma 11. Assume that there is a nontrivial solution of (2) that satisfies (3) at some pair of points interior to the extremes. Assume also that the extremes satisfy any of the conditions (12) of Lemma 3 and that the sign conditions on the boundary coefficients of Lemma 10 hold. Then the following inequalities hold: (i) under the first condition, (75); otherwise, if we take the first integer greater than 1 satisfying the corresponding condition, then (76); (ii) under the second condition, (77); otherwise, with the analogous choice of integer, (78).
Proof. Let us focus first on (75). From (25), the hypothesis, and Lemmas 9 and 10, it is straightforward to show (79). On the other hand, from (66) one has (80). But given (81), from (79)–(81) one gets (75).
As for (76), again from (25) and Lemmas 9 and 10 it is straightforward to show (82). On the other hand, from (66) one has (83). From the definition of the chosen integer, it is clear that (84) holds. And given (85), from (82)–(85) and the hypothesis one obtains (76). The proofs of (77) and (78) are very similar, mutatis mutandis.
Remark 13. It is possible to improve the results of Theorem 12 by using better lower bounds than the ones displayed in Lemma 10, by using better upper bounds than the one presented in Lemma 11, or by calculating more terms in the sums of (75)–(78) before using the integrals to bound the remainders of the series.
4. Bounds for the Distance between the Extremes Based on Concrete Values of the Iterates
This section will elaborate on the results of Section 2 in order to obtain upper and lower bounds for the distance between the extremes, in a similar way as was done in the authors' previous paper. A critical condition for that is the positivity of the Green function, which is ensured under certain boundary conditions by the next lemma.
Lemma 14. Let the extremes be any real numbers satisfying the stated ordering. If the boundary conditions (3) verify any of the hypotheses (86), (87), or (88), then the Green function is positive almost everywhere on its domain, except on the boundary sets singled out in each of the cases (86), (87), and (88).
Proof. As was mentioned in the proof of Lemma 3, if we define the two auxiliary functions as the solutions of the corresponding problems, respectively, then we can represent
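The standard construction behind this representation (our sketch, for the conjugate special case −u'' = f, u(a) = u(b) = 0) takes one solution satisfying the left boundary condition, one satisfying the right one, and glues them across the diagonal after dividing by their constant Wronskian; positivity inside the square, as claimed in Lemma 14, can then be checked directly:

```python
def make_green(a, b):
    """Green's function of -u'' = f, u(a) = u(b) = 0, built from the two
    one-sided solutions u1(x) = x - a (left condition) and u2(x) = b - x
    (right condition), whose Wronskian has the constant magnitude b - a."""
    u1 = lambda s: s - a
    u2 = lambda s: b - s
    W = b - a   # |u1*u2' - u1'*u2|
    return lambda x, t: u1(min(x, t)) * u2(max(x, t)) / W

G = make_green(0.0, 1.0)

# Positivity inside the square (0,1) x (0,1), as in Lemma 14:
pts = [0.1 * i for i in range(1, 10)]
print(min(G(x, t) for x in pts for t in pts) > 0)  # True

# Consistency: integrating G against f = 1 must solve -u'' = 1, i.e.
# u(x) = x(1 - x)/2; check at x = 0.5 with a simple midpoint Riemann sum.
n = 2000
approx = sum(G(0.5, (k + 0.5) / n) for k in range(n)) / n
print(approx)  # ~ 0.125
```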