Abstract

This paper presents a method to determine whether the second-order linear differential equation is either disfocal or nondisfocal in a fixed interval. The method is based on the recursive application of a linear operator to certain functions and yields upper and lower bounds for the distances between a zero and its adjacent critical points, which will be shown to converge to the exact values of such distances as the recursivity index grows.

1. Introduction

The purpose of this paper is to show that the recursive application of the operator defined by where is continuous on and strictly positive almost everywhere on that same interval, provides criteria to determine whether the second-order linear differential equation is either left disfocal or left nondisfocal in a given interval (the right disfocal case will also be covered similarly), criteria which converge to simultaneously necessary and sufficient conditions for disfocality as the number of times that is applied recursively increases (i.e., as the index of grows). Following [1], let us recall that (2) is left (right) disfocal in if no nontrivial solution such that () has a zero in ; that is, if () has no left (right) focal point such that . Otherwise we will say that (2) is focal or nondisfocal in (note that this use of the term focal is not common in the literature). As a result, this method will yield upper and lower bounds for the distances between zeroes and their adjacent critical points which will converge to the exact values of such distances as the index grows.
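For the reader's convenience, the illustrative sketches interspersed below assume the following standard forms for (2) and for the operator in (1); they are consistent with the fixed-point and kernel arguments used in Sections 2 and 3 and with Kwong's operator in [1], but they are a reconstruction and should be checked against the original displays.

```latex
% Assumed reconstruction, not a quotation of the original displays:
% (2) is taken to be y'' + q(x) y = 0 on [a,b], with q as described below, and
% the operator in (1) is taken to be
\[
  (\mathcal{T} y)(x) \;=\; \int_a^x \!\! \int_t^{b} q(s)\, y(s)\, \mathrm{d}s \, \mathrm{d}t ,
  \qquad x \in [a,b],
\]
% so that a nontrivial solution of (2) with y(a) = 0 and y'(b) = 0 is a fixed point of T.
```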

The operator defined in (1) is not unknown in the analysis of disfocality of linear differential equations, since it was applied by Kwong in [1] to the function in order to devise a Lyapunov inequality for disfocality for (2). Kwong also used it in a different manner in [2] to determine an oscillation condition for (2), taking advantage of the concave nature of the solution . Likewise, Harris indicated in [3, Section 3] that a recursive application of to Kwong’s function could provide more precise Lyapunov inequalities than the one proved by Kwong, but he neither proved that the improvement was guaranteed by the recursion nor applied the idea to other functions or to the opposite problem, that is, to show that (2) is focal in an interval. This paper will address and solve these open questions, setting a theoretical framework for the recursive application of to different functions.

To complete the historical picture, many other disfocal Lyapunov inequalities have appeared since the first one from Kwong, Brown and Hinton's inequality (see [4]) being possibly the most successful one, as a careful reading of the excellent survey of Pachpatte on the topic suggests (see [5]). By contrast, the opposite problem (the analysis of the maximum distance between zeroes and critical points or the determination of “focality” conditions on an interval) has received much less attention in recent decades, apart from [6, 7], in which the authors obtained focality conditions in their quest for conjugacy conditions of (2), [8, 9], where the authors used Prüfer transformation techniques to develop different methods to tackle that problem, and [10], where Kwong’s idea on the concave nature of was further exploited and extended to the half-linear differential equation.

The reason for so little interest is unclear, but the same lack of attention affects the “sibling” problem of determining the conjugacy of (2) on an interval (i.e., determining the maximum distance between consecutive zeroes of a solution of (2)), as Došlý noted when addressing that problem in [7].

As indicated in the first paragraph, throughout the paper, with the exception of Section 6, we will assume that is continuous on an interval such that and that is strictly positive almost everywhere on . This allows us to define the inner product for , being continuous on (it is easy to prove that (3) satisfies all the conditions required of an inner product), and the associated norm defined by Likewise, we will use the notation to name the operator defined in (1), or to name the function with domain resulting from the application of to , , or to name the value of the function at the point , and when other limits of integration potentially different from are used in (1).
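As a minimal numerical sketch (an illustration only), such a weighted inner product and its norm can be approximated by quadrature; the code below assumes the standard forms ⟨f, g⟩ = ∫_a^b q(x) f(x) g(x) dx and ‖f‖ = ⟨f, f⟩^{1/2} for (3) and (4), with the weight called q, which should be checked against the original displays.

```python
# Minimal sketch (assumed forms of (3)-(4)): trapezoidal approximation of the
# weighted inner product <f, g> = int_a^b q(x) f(x) g(x) dx and its norm.
import numpy as np

def inner(f, g, q, a, b, n=2000):
    x = np.linspace(a, b, n)
    w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
    return np.sum(q(x) * f(x) * g(x) * w)

def norm(f, q, a, b, n=2000):
    return np.sqrt(inner(f, f, q, a, b, n))

# Toy check with q = 1 on [0, 1]: the norm of f(x) = x is 1/sqrt(3) ~ 0.577.
print(norm(lambda x: x, lambda x: np.ones_like(x), 0.0, 1.0))
```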

The organization of the paper is as follows. Section 2 will present the main properties of the operator . Section 3 will apply them to find criteria to assess whether (2) is (left or right) focal or disfocal in a given interval. Section 4 will introduce some formulae which simplify the calculations required in Section 3. Section 5 will apply the method to several examples. Section 6 will deal with the case where can be negative in a subset of positive measure and finally Section 7 will draw several conclusions.

2. The Operator

The purpose of this section is to present the main properties of the operator defined in (1) for as specified in the Introduction. For the sake of clarity, such properties will be presented in several lemmas, culminating in Theorem 5, which can be regarded as the key result of this section.

Lemma 1. The operator is linear, positive, and monotonic (i.e., if then ). Furthermore, if on then for an if and only if almost everywhere on .

Proof. The proof is straightforward by simple inspection of (1), the fact that is positive almost everywhere on , and the equivalence of positivity and monotonicity for linear operators.

Lemma 2. The operator is compact.

Proof. Following [11, Theorem ], will be compact with the norm (and therefore with the norm defined in (4)) if we can represent it as with being continuous on . But a simple integration by parts of (1) shows that can be expressed as It is straightforward to show that is continuous on .
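To make the kernel representation used in Lemma 2 concrete, the following sketch discretizes the operator in its (assumed) kernel form, with K(x, s) = min(x, s) − a obtained by swapping the order of integration in the assumed double integral; a solution of (2) with y(a) = 0 and y'(b) = 0 should then be, up to quadrature error, a fixed point.

```python
# Sketch under the assumed form of (1): (T y)(x) = int_a^b (min(x, s) - a) q(s) y(s) ds.
import numpy as np

def apply_T(y_vals, q_vals, x):
    a = x[0]
    K = np.minimum.outer(x, x) - a                 # K[i, j] = min(x_i, x_j) - a
    w = np.full(x.size, x[1] - x[0])
    w[0] *= 0.5; w[-1] *= 0.5                      # trapezoidal weights
    return K @ (q_vals * y_vals * w)

# Check with q = 1 on [0, pi/2]: y(x) = sin(x) solves y'' + y = 0 with y(0) = 0 and
# y'(pi/2) = 0, so it should be (approximately) a fixed point of T.
x = np.linspace(0.0, np.pi / 2, 400)
print(np.max(np.abs(apply_T(np.sin(x), np.ones_like(x), x) - np.sin(x))))   # small
```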

Lemma 3. The operator is self-adjoint.

Proof. To prove self-adjointness, we need to show that, given that , then . Thus, from (3) we have Integrating the right-hand side of (7) by parts one has Integrating (8) by parts again one finally gets that
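A quick numerical check of the symmetry stated in Lemma 3, under the same assumed forms of (1) and (3) as in the previous sketches (q(x) = 1 + x on [0, 1] is just a toy weight):

```python
import numpy as np

a, b, n = 0.0, 1.0, 800
x = np.linspace(a, b, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
q = 1.0 + x                                       # toy positive weight
K = np.minimum.outer(x, x) - a

T = lambda y: K @ (q * y * w)                     # discretized (assumed) operator
inner = lambda f, g: np.sum(q * f * g * w)        # discretized (assumed) inner product

f, g = np.sin(3 * x), x ** 2
print(inner(T(f), g), inner(f, T(g)))             # the two values coincide: T is self-adjoint
```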

Lemma 4. The operator is bounded with the norm and verifies where the norm is defined as in (4).

Proof. From the representation of given in (6) one has Equation (10) is an obvious consequence of (12), and (11) follows from (12) by application of the Cauchy-Schwarz inequality.

Theorem 5. The operator has a countably infinite number of eigenvalues and associated orthonormal eigenfunctions , which allow expressing , as Moreover one has the following.
(i) If (2) is left disfocal in , then
(ii) If (2) is left focal in but it is left disfocal in any interval interior to , then
(iii) If (2) is left focal in an interval interior to (i.e., ), and , then the sign corresponding to that of .

Proof. Let us consider the eigenvalue problem From the theory of ordinary differential equations (see [12, Theorems V.8 and V.9]) it is known that there exists a countably infinite number of eigenvalues which form an increasing sequence with , each of which has its corresponding orthonormal (with the norm (4)) eigenfunction , and that the set of eigenfunctions forms an orthonormal basis of . Applying the operator to these eigenfunctions and integrating by parts, it is easy to show that which implies that are also eigenfunctions of the operator , with corresponding eigenvalue . Since, from Lemmas 1, 2, and 3, is linear, self-adjoint, and compact, we can apply [11, Theorem ] and represent in the canonical form Applying again to (20) yields given that and for . Applying recursively to (21), one gets that which is in fact (13).
To prove (14), let us note that if (2) is left disfocal in , then the first eigenvalue (and therefore all the others) must be strictly greater than 1. In that case, by Parseval’s identity (see [11, Theorem ]) one has From (23) and given that one has From (24) and Lemma 4 one gets (14).
Let us now focus on (15). If (2) is left focal in but it is left disfocal in any interval interior to , then must be the first eigenvalue of (18). Then we can write Applying Parseval’s formula to the right-hand side of (25) one has And given that , from (26) one gets Again, from (27) and Lemma 4 one gets (15).
Finally, let us prove (16) and (17). Since (2) is left focal in an interval interior to , at least the first eigenvalue must be smaller than 1. We can apply Parseval’s identity to obtain Since by hypothesis and , from (28) we get that which is (16). On the other hand, we can write We can divide both sides of (30) by to yield Applying Parseval’s identity to (31) one gets Since from (32) one obtains which implies that there exists an index such that From Lemma 4 and (34) one has that is, Since (otherwise and ) and , (36) leads to (17).
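The trichotomy of Theorem 5 can be observed numerically with the assumed operator and q = 1: in that case the eigenvalues of the (assumed) problem (18) on [0, b] with y(0) = 0 and y'(b) = 0 are λ_k = ((2k − 1)π/(2b))², so b < π/2 is the disfocal case, b = π/2 the exactly focal case, and b > π/2 the focal case. The norms of the iterates behave accordingly.

```python
# Sketch of the three behaviours in Theorem 5 (assumed operator, q = 1, a = 0).
import numpy as np

def iterate_norms(b, n_iter=30, n=600):
    x = np.linspace(0.0, b, n)
    w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
    K = np.minimum.outer(x, x)                     # kernel min(x, s) - a with a = 0
    u = x.copy()                                   # starting function u(x) = x
    norms = []
    for _ in range(n_iter):
        u = K @ (u * w)                            # u <- T u
        norms.append(np.sqrt(np.sum(u * u * w)))   # weighted norm (q = 1)
    return norms

for b in (1.2, np.pi / 2, 2.0):
    print(b, iterate_norms(b)[-1])                 # tends to 0 / stabilises / blows up
```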

Last, but not least, we will prove the following lemma, which will be of interest for the analysis in the next section.

Lemma 6. For nested intervals and provided that is positive almost everywhere on , the operator verifies

Proof. The proof is obvious by simple inspection of (6) and the fact that and are positive almost everywhere on .

Remark 7. It is straightforward to show that both Lemmas 1–6 and Theorem 5 are also applicable to the operator under the same conditions on , just replacing all references to left focal and left disfocal with right focal and right disfocal, respectively.

3. Application to the Distance between a Zero and Its Adjacent Critical Points

This section will elaborate on the results of the previous section in order to obtain conditions for focality or disfocality of (2) in the interval . The key pieces for that will be the next theorem and its corollaries.

Theorem 8. Suppose that there exists a nontrivial solution of (2) such that and .
For any such that , if on and on then For any such that , if on then In particular

Proof. Let us first prove (39). Since is linear and positive, we can apply Lemma 1 recursively to yield And given that on , from Lemma 6 and (43) one gets (39) and (40).
As for (41), applying again Lemmas 1 and 6 recursively one has which gives (41) and (42).
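The mechanism behind Theorem 8 can be checked numerically: with the assumed operator, a solution of (2) with y(a) = 0 and y'(b) = 0 is a fixed point of T, so monotonicity gives T^n u ≥ T^n y = y whenever u ≥ y ≥ 0. The sketch below uses q = 1, a = 0, b = π/2 and y(x) = sin x, with the comparison function u(x) = x.

```python
# Illustration of the monotonicity argument of Theorem 8 (assumed operator, q = 1).
import numpy as np

a, b, n = 0.0, np.pi / 2, 600
x = np.linspace(a, b, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
K = np.minimum.outer(x, x) - a

y = np.sin(x)                    # solution of y'' + y = 0 with y(0) = 0, y'(pi/2) = 0
u = x.copy()                     # comparison function with u >= y on [0, pi/2]
for k in range(1, 6):
    u = K @ (u * w)              # u <- T u
    print(k, float(np.min(u - y)))   # stays >= 0 (up to quadrature error): T^n u >= y
```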

Corollary 9. Under the same conditions as in Theorem 8 one has

Proof. Equation (45) can be easily obtained by setting in (40). In turn, (46) can be obtained by setting (with ) and in (41) and taking into account that is concave (its second derivative is negative almost everywhere on since is positive almost everywhere and is positive on ).

Remark 10. The inequality (45) was proposed by Harris in [3, Section 3], and the inequality (46) was obtained by the authors in [10, Corollary 1] for the case .

Equations (45) and (46) are obvious selections of , but in many cases it is interesting to use other functions which are, in a sense, “closer” to the solution of (2), in order to improve the convergence of the sequence . The following corollary gives examples of such functions.

Corollary 11. Under the same conditions as in Theorem 8, the functions defined by satisfy

Proof. It is straightforward from Theorem 8 by taking the solution of (2) such that and noting that on and on .

Theorem 8 and Corollaries 9 and 11 provide separate necessary and sufficient conditions to assess the left disfocality of (2) in . However, at least in the way they have been presented, they do not make it possible to determine whether (2) is exactly left focal or left disfocal in that interval. That will be the purpose of the next theorems.

Theorem 12. Suppose that there exists a nontrivial solution of (2) with . Let be such that on and let be a sequence of real values such that with on . Then for and tends to as .

Proof. From the fact that on , on , (40) and (50), it is clear that (i.e., ) for . Now, let us assume that does not have a limit in . In that case there exist a and a subsequence of such that . But then, from Theorem 5 and Lemma 6 one has Therefore for every there will exist infinitely many such that which contradicts the fact that, from (50), . This proves the assertion.

Theorem 13. Suppose that there exists a nontrivial solution of (2) with . Let be such that on and let be a sequence of real values such that Then for and tends to as .

Proof. From the fact that on , (42) and (53), it is clear that (i.e., ) for . Now, let us assume that does not have a limit in . In that case there exist a and a subsequence of such that . But then, from Theorem 5 and Lemma 6 one has the sign coming from the fact that on and therefore in Theorem 5. Therefore for every there will exist infinitely many such that which contradicts the fact that, from (53), . This proves the assertion.

The following variant of Theorem 13 will also be very useful in some cases.

Theorem 14. Suppose that there exists a solution of (2) with . Let be a sequence of real values such that where for . Suppose also that there exists a function such that for . Then for and tends to as .

Proof. From the fact that on , (41) and (56), it is clear that (i.e., ) for . Now, let us assume that does not have a limit in . In that case there exist a and a subsequence of such that . But then, from Theorem 5 and Lemmas 1 and 6 one has the sign coming from the fact that on and therefore in Theorem 5. Therefore for every there will exist infinitely many such that which contradicts the fact that, from (56), . This proves the assertion.

Remark 15. Theorem 14 is especially relevant when the functions to be used to calculate the sequence are or the function defined in (48) with . In the first case, the function of the hypothesis can be, for example, .

Remark 16. Theorems 12–14 guarantee that, given , one can determine whether (2) is left disfocal in such an interval by calculating the sequences defined by (50), (53), and (56). All these sequences will converge to the value such that (2) is exactly left focal in . If , then (2) will be left disfocal in , and left focal otherwise.
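As an illustration of the procedure outlined in Remark 16 (not the paper's exact sequences (50), (53), and (56), which involve the formulas omitted above), one can decide the focal or disfocal character of a candidate interval from the asymptotic growth or decay of the iterates of the assumed operator and then bisect on the right endpoint. With q = 1 and a = 0 the exactly left focal endpoint is π/2, which the sketch below recovers.

```python
# Hypothetical bisection driver built on the assumed operator (q = 1, a = 0).
import numpy as np

def focal(b, n_iter=60, n=800):
    """True if the iterates grow on [0, b], i.e. (2) is left focal in an interior interval."""
    x = np.linspace(0.0, b, n)
    w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
    K = np.minimum.outer(x, x)                      # kernel min(x, s) with a = 0, q = 1
    u = x.copy()
    for _ in range(n_iter):
        v = K @ (u * w)
        ratio = np.sqrt(np.sum(v * v * w) / np.sum(u * u * w))
        u = v / np.sqrt(np.sum(v * v * w))          # renormalise to avoid over/underflow
    return ratio > 1.0

lo, hi = 1.0, 2.0                                   # bracket for the exactly focal endpoint
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if focal(mid) else (mid, hi)
print(0.5 * (lo + hi), np.pi / 2)                   # the two values are close
```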

Remark 17. As indicated in Remark 7, all the results presented in this section are applicable to the right focal case; that is, , with the operator defined in (38). In particular, one has

4. Calculating

Section 3 has shown the importance of calculating for different functions in order to determine whether or not (2) is left disfocal in a given interval. Bearing this in mind, our aim in this section is to find ways to simplify the calculation of . That will be done with the following theorems, in all of which the inner product will be understood with as the integration variable.

Theorem 18. The operator defined in (1) verifies Furthermore,

Proof. Equation (60) is straightforward from the definition of the inner product (3) and the representation of given in (6). Applying to (60) the fact that for , one easily gets (61).

Theorem 19. The operator defined in (1) verifies And in particular

Proof. From (61) one has and given that is self-adjoint (Lemma 3), we can apply that property recursively to (64) to obtain Equation (63) is a particular case of (62) for .

Remark 20. The advantage of formula (63) is that, given (2) and fixing the value we want to apply to , it makes it easy to test different functions , (as many as we want) in Theorems 8–14, leaving the bulk of the computational effort of the method to the calculation of .
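A minimal sketch of the computational shortcut described in Remark 20, under the assumption that (63) amounts to (T^n u)(b) = ⟨u, T^{n−1}(x − a)⟩ for the operator assumed earlier (the exact reference function in (63) is not reproduced above; the reuse principle only needs the self-adjointness of Lemma 3): T^{n−1} is applied once to the fixed reference function, and each additional candidate u then costs a single inner product.

```python
import numpy as np

a, b, n_grid, n_iter = 0.0, 1.4, 800, 8
x = np.linspace(a, b, n_grid)
w = np.full(n_grid, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
q = np.ones(n_grid)                                # toy weight q = 1
K = np.minimum.outer(x, x) - a

T = lambda y: K @ (q * y * w)                      # assumed operator, kernel form
inner = lambda f, g: np.sum(q * f * g * w)         # assumed inner product (3)

v = x - a                                          # fixed reference function
for _ in range(n_iter - 1):
    v = T(v)                                       # T^(n-1) applied once

for u in (x - a, np.sin(np.pi * (x - a) / (2 * (b - a))), (x - a) ** 2):
    direct = u.copy()
    for _ in range(n_iter):
        direct = T(direct)                         # direct computation of T^n u
    print(inner(u, v), direct[-1])                 # both columns agree, by self-adjointness
```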

Remark 21. The combination of (40) and (63) (a similar argument can be made with (42) and (63)) gives the inequality for any such that on . Equation (66) clearly defines two roles:
(i) the role of , which can be regarded as a function “similar” to the solution of (2) but whose value shrinks severely with the index if is left disfocal in , thereby forcing the integration limits to be “closer” to the values which make focal in that interval, in order to satisfy (66);
(ii) the role of , which is the starting function for the iteration.
Consequently, (66) reflects the importance of selecting the function so that the method converges quickly: the “closer” such is to the solution , the smaller the inner product will be for any , which in turn will reduce the number of iterations required to establish a lower bound for (i.e., to find a which violates (66)). This favours the use of functions like that defined in (47) instead of (see (45)), whose convergence is normally quite slow.

Remark 22. It is very easy to show that, for the right focal case (), (63) becomes with the operator defined by (38).

5. Some Examples

Throughout this section we will introduce examples where Corollaries 9 and 11 and Theorem 19 will be used to determine conditions for focality and disfocality of (2) for different functions and several values of the recursivity index . For the sake of simplicity, the analysis will fix the value of the starting point (in all the examples a zero) and will search for upper and lower bounds of the adjacent right critical point (the case being a critical point and being a zero can be treated in a similar way). A comparison between these bounds and the bounds obtained via other methods, such as Brown and Hinton’s (see [4]), will also be made.

In all examples the calculation of will be done numerically with two starting functions : the one defined by (in fact ), which will be used to determine upper bounds of , and the one defined in (47), which will be used to deal with lower bounds of and which in the examples will be denoted by . It is also worth noting that, for simplicity, we have capped the number of iterations in all the calculations; this cap varies between 5 and 10 for and between 12 and 23 for . Of course, more accurate bounds can be obtained by increasing these thresholds.

Example 1. Let us consider the following linear differential equation: for different values of the constant .
The application of Corollaries 9 and 11 and Theorem 19 gives Table 1.
As can be seen in Table 1, with this method it is possible to find very accurate bounds of the critical point for all values of the constant , bounds which could be brought even closer to the value of if more iterations of were calculated. It is also worth remarking, on the one hand, on the excellent approximation that Brown and Hinton’s method gives for the lower bound (with comparatively little computational effort), which can only be improved by the present method after many iterations ( if ), and, on the other hand, on the fact that it seems possible to detect upper and lower bounds of without having to guarantee the fulfillment of the inequalities given by Corollaries 9 and 11, just by checking whether grows or shrinks with , a behaviour which is in line with the results of Theorem 5. These two phenomena will be common to all the analysed cases.

Example 2. Let us consider the following linear differential equation: for different values of the constant .
The application of Corollaries 9 and 11 and Theorem 19 gives Table 2, where one can observe results and trends similar to those mentioned in the previous example.

Example 3. Let us consider the following linear differential equation: for different values of the constant .
The application of Corollaries 9 and 11 and Theorem 19 gives Table 3.

6. The Case

The previous sections have addressed the case almost everywhere on , but it is logical to wonder whether the method presented here can be extended to the case in a subset of of positive measure. The answer to that question is no, since in that case fails to satisfy one of the properties required of an inner product, namely, that if and only if . That in turn implies that the operator defined by (1) cannot be self-adjoint. Likewise, fails to be positive and monotonic, which hinders the establishment of order relationships like those presented in Theorem 8 between a function and a solution of (2).

However, if we define it is easy to notice that ; that is, is a Sturmian majorant of and this in turn is a Sturmian majorant of . That implies that if (2) has a solution with a zero in and an adjacent right critical point , then it is possible to find a solution of left focal on an interval , and, in turn, to find a solution of left focal on an interval .

With this in mind, our strategy will be to start from the case , which does allow the application of all the results of Sections 2–4, and to treat the case as a limit when . This will provide us with results for the case and therefore with results on the disfocality of the more general case (only disfocality, since the Sturmian majorant character of versus prevents the extrapolation to of focality results related to ).
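A sketch of how this strategy can be used in practice (an illustration only; (72)-(76) are assumed to amount to replacing q first by q^+ = max(q, 0) and then by q_ε = q^+ + ε with ε → 0⁺): since q_ε ≥ q^+ ≥ q, a disfocality verdict obtained for q_ε transfers to (2), whereas a focality verdict does not. In the toy example below q(x) = 1 − x changes sign on [0, 2] and the iterates decay, indicating left disfocality of the majorant equation and hence of (2) on that interval.

```python
# Hypothetical worked example of the q^+ + eps strategy (assumed operator and forms).
import numpy as np

a, b, n, eps = 0.0, 2.0, 1000, 1e-3
x = np.linspace(a, b, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5
q_eps = np.maximum(1.0 - x, 0.0) + eps             # q^+ + eps, strictly positive on [0, 2]
K = np.minimum.outer(x, x) - a

u = x - a
for _ in range(40):
    v = K @ (q_eps * u * w)
    ratio = np.sqrt(np.sum(q_eps * v * v * w) / np.sum(q_eps * u * u * w))
    u = v
print(ratio)   # stabilises well below 1: the majorant equation, and hence (2), is left disfocal
```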

To this end we will define the operators and we will make use of the following lemmas.

Lemma 23. If on such that , then the operators defined by (76) satisfy

Proof. The proof of (77) is straightforward given that in a set of positive measure. As for (78), can only be zero in if for all , which means that must be zero almost everywhere on and contradicts the hypothesis. And (79) is straightforward from the fact that, by (73), .

Lemma 24. Let and be nontrivial solutions of (74) and (75), respectively, such that and . Then,

Proof. If we use the matrix notation we can write (74) and (75) as which implies that Integrating (84) from to one gets Taking norms in (85) one has Now we can apply the Gronwall-Bellman inequality (see [13]) to (86) to yield From (87) and the fact that by (72), (73), and (82), one obtains for which implies (80).

Lemma 25. Let and be defined as in Lemma 24. Let and be the first critical points of and , respectively, to the right of . Then

Proof. Let us suppose, on the contrary, that . Then we have But from Lemma 24 for every we can find an such that which if applied to (92) gives Since can be as small as required, one gets that which contradicts the fact that, by definition, .

With the help of the previous lemmas we can extrapolate the results of Section 3 to this case, as the following theorems will show.

Theorem 26. Let be defined as in Lemma 24. Let be the first critical point of to the right of . For any such that , if on and on , then

Proof. We can apply Lemma 23 recursively to yield which are (96) and (97), respectively.

Remark 27. Theorem 26 is the extension of Theorem 8, (39)-(40), to the function defined by (73).

Theorem 28. Let and be solutions of (74) and (75), respectively, such that . Let and be the first critical points of and to the right of , and let us suppose that (for this it suffices to multiply and by suitable constants). Let be such that on and on for . If is defined by then and .

Proof. By Theorem 26 and (99) it is clear that which is the first assertion of the theorem. Now let us define the sequence From Theorem 12 one has that and . And from Lemma 23 and (101) one gets which implies that Applying Lemma 25 to (103) yields Finally from (100) and (104) one gets the desired result .

Corollary 29. Let be a solution of (74) such that and . Let be defined by Then and .

Proof. It is a consequence of picking the function and in Theorem 28, given that on and on .

Remark 30. In a similar manner to Theorem 12, Theorem 28 proves that it is possible to obtain, for the function defined by (73), a sequence of lower bounds of which converges to .

Remark 31. As in the previous sections, the results of this one are applicable to the right focal case replacing by the operator defined in (38) and evaluating at .

7. Conclusions

The method described in this paper provides criteria (in fact, infinitely many) to determine whether the second-order differential equation (2) is left/right disfocal or nondisfocal in a given interval, provided that is continuous and positive almost everywhere in such an interval. This is relevant since, although several criteria for disfocality of such an equation exist in the literature, the number of criteria to determine nondisfocality is very low. Moreover, the method is organized in such a way that most of the calculations required to assess disfocality can be reused to analyse nondisfocality, and vice versa, as Remark 20 indicates.

The most distinctive feature of this method is the fact that, unlike other existing methods, it allows one to generate sequences of points which converge to the values of the endpoints which make (2) exactly left/right focal; that is, it makes it possible to determine whether (2) is exactly disfocal or nondisfocal in a given interval.

The method also yields criteria to determine whether that equation is left/right disfocal for the case being continuous but nonpositive (i.e., zero or negative) in a subset of the interval of positive measure. A convergent sequence similar to that of the previous paragraphs can also be obtained in that case.

Two main drawbacks can be mentioned. On the one hand, although the method converges, the speed of convergence can be low, so that it may take many iterations to determine the focal/disfocal character of (2) in the given interval, depending on the selected starting function . On the other hand, the method does not take advantage of the negative values of in the calculations (a problem, however, shared with most of the disfocality criteria published so far). But even taking these constraints into consideration, we believe that the advantages of the method largely outweigh its drawbacks and that it can become a very powerful tool to assess the disfocal/nondisfocal nature of (2), whether directly or by means of other methods that build on it and improve it.

Acknowledgment

This work has been supported by the Spanish Ministry of Science and Innovation Project DPI2010-C02-01.