Abstract

This paper presents two methods to obtain upper bounds for the distance between a zero and an adjacent critical point of a solution of the second-order half-linear differential equation , with and piecewise continuous and , and being real such that . It also compares them through several examples. Lower bounds (i.e., Lyapunov inequalities) for such a distance are also provided and compared with other methods.

1. Introduction

In a recent paper of the authors (see [1]), a method to calculate upper bounds for the distance between a zero and an adjacent critical point of a solution of the linear equation was introduced. The purpose of this paper is to extend the results described there to the half-linear differential equation where , and is a real number such that .
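As a point of reference, in the standard notation of [2] the half-linear equation is usually written in the following form; the symbol names below follow the convention of that monograph and are reproduced here only as an illustrative sketch, not necessarily with the exact symbols used in this paper's displays:

```latex
\left( r(t)\,\Phi(x') \right)' + c(t)\,\Phi(x) = 0, \qquad
\Phi(u) := |u|^{p-1}\operatorname{sgn} u, \quad p > 1 .
```

For $p = 2$ one has $\Phi(u) = u$ and the equation reduces to the classical Sturm–Liouville linear equation.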

As in the linear case, and as was noted in [1], the existing literature on the analysis of the distribution of the zeroes and the critical points of the half-linear equation (1.2) has focused on the determination of the oscillation or nonoscillation nature of the solutions of (1.2) (see, e.g., [2, Sections 5 and 9] for a compilation of such works), on the derivation of formulae for the asymptotic distance between consecutive zeroes (see [3]), and on the calculation of lower bounds for the distance between consecutive zeroes and between zeroes and adjacent critical points, that is, on extensions of the Lyapunov inequality to the half-linear case ([2, 4–9] are good examples of that). In contrast, it is really difficult (if not impossible) to find any papers that provide insights on how big the distance between consecutive zeroes or between a zero and a critical point (or, in other words, between a critical point and its adjacent focal points) may be for the half-linear equation. The present paper aims to fill that gap by providing two alternative bounding methods and comparing the weaknesses and advantages of each. Since the results obtained here also allow us to define methods to obtain lower bounds for the aforementioned distance, a comparison of these methods and those of [2, 6–9] will also be provided.

Following [2, Section 1.2], we will denote by the solution of the half-linear differential equation given by the initial conditions and . If we define it is possible to show (see again [2, Section 1.2]) that the behaviour of this solution is very similar to that of the classical sine function, in the sense that is increasing on and (resp., decreasing on and , according to (1.5)).

This resemblance increases for the generalized sine function defined in the whole real line as the -periodic odd extension of the function given by which obviously satisfies being increasing on and decreasing on for any integer , with and (resp., being increasing on and decreasing on for any integer , with and ).

In fact for is the classical sine function . It is straightforward to show that also satisfies (1.3) for the whole real line.
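As a numerical illustration of the generalized sine just described, the sketch below computes its half-period (often denoted $\pi_p$ in the half-linear literature) from the standard closed form and cross-checks it against the defining integral; the function names `pi_p` and `pi_p_quadrature` are ours, introduced only for this illustration.

```python
import math

def pi_p(p):
    """Half-period of the generalized sine sin_p.

    Standard closed form from the half-linear literature:
    pi_p = 2*pi / (p * sin(pi / p)).
    """
    return 2.0 * math.pi / (p * math.sin(math.pi / p))

def pi_p_quadrature(p, n=400_000):
    """Same quantity via pi_p = 2 * integral_0^1 (1 - s**p)**(-1/p) ds,
    evaluated with the midpoint rule (the singularity at s = 1 is integrable)."""
    h = 1.0 / n
    return 2.0 * h * sum((1.0 - ((i + 0.5) * h) ** p) ** (-1.0 / p)
                         for i in range(n))

# For p = 2 the generalized sine reduces to the classical sine, so pi_p = pi.
print(pi_p(2.0))                       # 3.141592653589793
print(pi_p(3.0), pi_p_quadrature(3.0)) # closed form vs quadrature, both ~2.4184
```

For $p = 2$ the half-period recovers the classical value $\pi$, which matches the remark above that the generalized sine is then the classical sine function.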

With this in mind, we introduce the half-linear cotangent function as Throughout the paper will be the conjugate number of , that is, . It is straightforward to show that The organization of the paper is as follows. Section 2 will prove the main results. Section 3 will apply them to devise methods to calculate upper and lower bounds for the distance between a zero and the critical point immediately preceding or following it. Section 4 will provide some examples to illustrate advantages and disadvantages of the methods presented here for the calculation of upper bounds. Section 5 will compare the lower bounds obtained from the main results with other bounds existing in the literature. Finally, Section 6 will state several conclusions.
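For reference, the conjugate-exponent relation mentioned in this paragraph and the generalized Pythagorean identity satisfied by the generalized sine are both standard facts of the half-linear theory (see [2]); they are reproduced below as a sketch, with the symbols $p$, $p^{*}$ and $\sin_p$ used purely as illustrative notation:

```latex
\frac{1}{p} + \frac{1}{p^{*}} = 1, \qquad p^{*} = \frac{p}{p-1},
\qquad\qquad
|\sin_p t|^{\,p} + |\sin_p' t|^{\,p} = 1 .
```

The second identity is the half-linear analogue of $\sin^2 t + \cos^2 t = 1$ and reduces to it for $p = 2$.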

2. Main Results

The following extended mean value theorem for integrals condenses Theorems 3 and 4 of [10] and will be the key for the results of this paper, as happened in [1].

Theorem 2.1 (extended mean value theorem for integrals). Let , , be piecewise continuous functions on with , and on . Let be defined by Then, if is increasing on , one has If is decreasing on , one has
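The classical mean-value-type bounds that Theorem 2.1 condenses can be checked numerically: if $f \ge 0$ and $g$ is increasing on $[a,b]$, then $g(a)\int f \le \int fg \le g(b)\int f$ (with the inequalities reversed for decreasing $g$). The concrete functions below are illustrative choices of ours, not taken from the paper.

```python
import math

def riemann(h, a, b, n=50_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return step * sum(h(a + (i + 0.5) * step) for i in range(n))

a, b = 0.0, 2.0
f = lambda x: 1.0 + math.cos(x)   # nonnegative on [0, 2]
g = lambda x: x * x               # increasing on [0, 2]

int_f  = riemann(f, a, b)
int_fg = riemann(lambda x: f(x) * g(x), a, b)

# g(a)*∫f <= ∫fg <= g(b)*∫f for f >= 0 and g increasing
print(g(a) * int_f <= int_fg <= g(b) * int_f)  # True
```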

Before replicating the theorems of [1], we will need to define a similar Prüfer transformation that allows us to obtain a first-order differential equation for a sort of “angle” function constructed with the solution of (1.2) and its derivative and with the aid of the cotangent function defined in (1.8). That is the purpose of the next lemma.

Lemma 2.2. Let be a nonidentically zero solution of (1.2) with and piecewise continuous and positive on an interval . Let be a real number. Let and be defined by the Prüfer transformation Then, one has

Proof. Dividing (1.2) by , one gets the equivalent equation From [2, Section 1.2], (1.9), and (2.6), it is straightforward to show that the angle function defined in (2.4) satisfies (2.5).
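For orientation, in the linear case the classical Prüfer transformation that Lemma 2.2 generalizes can be sketched as follows for $y'' + q(x)\,y = 0$ (this is the standard textbook form, not the paper's half-linear display):

```latex
y = \rho \sin\varphi, \qquad y' = \rho \cos\varphi
\;\;\Longrightarrow\;\;
\varphi' = \cos^{2}\varphi + q(x)\,\sin^{2}\varphi .
```

The half-linear version replaces $\sin$ and $\cos$ by the generalized sine of Section 1 and its derivative, yielding the first-order equation (2.5) for the angle function.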

With the aid of Theorem 2.1 and Lemma 2.2, it is possible to prove the next theorem.

Theorem 2.3. Let be a non-identically zero solution of (1.2) with and piecewise continuous and positive on an interval . Let , be real numbers such that . Suppose that is any positive piecewise continuous function on .
Then, if , one has If , one has

Proof. Let us focus first on proving (2.7) for the case . From Lemma 2.2, it is clear that the Prüfer transformation defined by (2.4) for any satisfies the first-order differential equation (2.5).
Since on , the right-hand side of the equation is positive. Therefore, must be a nondecreasing function with and for an integer . That implies that must be decreasing and increasing on .
Integrating (2.5) between and , one has Since is increasing on , we can apply (2.2) to yield Likewise, we can apply (2.3) to yield Let us take now From (2.12) and (2.14), one has Likewise, from (2.13) and (2.14), one has From (1.7), (2.11), (2.15), and (2.16), one gets (2.7).
The proof of (2.8)–(2.10) is similar and will not be repeated here.

Remark 2.4. Theorem 2 of [1] is a particular case of Theorem 2.3, when .

Remark 2.5. The conditions on of Theorem 2.3 are only used to guarantee in (2.5) that is increasing on . These constraints can be relaxed if it can be proved that their violation does not force to cease being increasing on the interval of interest (e.g., because the negativity of either function occurs very close to one of the endpoints or of the interval, where the associated term with is close to zero).
Setting and , respectively, in Theorem 2.3, it is straightforward to obtain the following corollaries, which will be used in the examples of Section 4.

Corollary 2.6. Let be a non-identically zero solution of (1.2) with and piecewise continuous and positive on an interval . Let , be real numbers such that .
Then, if , one has If , one has

Corollary 2.7. Let be a non-identically zero solution of (1.2) with and piecewise continuous and positive on an interval . Let , be real numbers such that .
Then, if , one has If , one has

The following corollary allows us to simplify slightly the expressions in inequalities (2.7) and (2.9).

Corollary 2.8. Let be a non-identically zero solution of (1.2) with and piecewise continuous and positive on an interval . Let , be real numbers such that . Suppose that is any positive piecewise continuous function on .
Then, if , one has If , one has

Proof. From the Hölder inequality, it is straightforward to prove that Now, from (2.7) and (2.25), one gets (2.23). Likewise, from (2.9) and (2.26), one gets (2.24).
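The Hölder inequality invoked in the proof, $\int |fg| \le (\int |f|^p)^{1/p} (\int |g|^{p^*})^{1/p^*}$ with $1/p + 1/p^* = 1$, can be verified numerically; the exponent $p = 3$ and the functions below are illustrative choices of ours.

```python
import math

def riemann(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return step * sum(h(a + (i + 0.5) * step) for i in range(n))

p = 3.0
q = p / (p - 1.0)            # conjugate exponent: 1/p + 1/q = 1
a, b = 0.0, 1.0
f = lambda x: math.exp(x)
g = lambda x: 1.0 + x * x

lhs = riemann(lambda x: abs(f(x) * g(x)), a, b)
rhs = (riemann(lambda x: abs(f(x)) ** p, a, b) ** (1.0 / p)
       * riemann(lambda x: abs(g(x)) ** q, a, b) ** (1.0 / q))

print(lhs <= rhs)  # True (equality only when |f|**p and |g|**q are proportional)
```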

To complete this section we will use Corollary 2.7 to establish a result on the distance between zeroes and consecutive critical points of the solutions of (1.2) when is monotonic.

Theorem 2.9. Let , be positive piecewise continuous functions on an interval . Let , be non-identically zero solutions of (1.2) such that ; with . Let and be defined such that , that is, is the critical point of consecutive to and the zero of consecutive to , and suppose that and .
If is decreasing on , then .
Conversely, if is increasing on , then .

Proof. Let us first suppose that is decreasing on . From (2.19), one has From (2.22), one has From (2.27) and (2.28), one gets which obviously gives .
The case where is increasing on can be proved in the same manner.

3. Methods to Obtain Upper and Lower Bounds

The results presented in the previous section allow us to define two methods to calculate upper and lower bounds for the distance between a zero and a critical point of a solution of (1.2).

Thus, on one hand, Lemma 2.2 can be leveraged to derive a method similar to that of Moore (see [11, Theorem 8]) for the linear equation (1.1), as the following theorem shows.

Theorem 3.1. Let be a non-identically zero solution of (1.2) with and piecewise continuous and on an interval . Let , be real numbers such that . Let be any positive real number.
If or , one has

Proof. Integrating (2.5) from to and applying (1.5), one has regardless of being a zero or a critical point. Equation (3.2) can be obtained in the same manner.

Remark 3.2. Note that unlike what happens in Theorems 2.3 and 2.9 and Corollaries 2.6, 2.7, and 2.8, Theorem 3.1 allows to be negative or zero on the interval , which makes it applicable to a wider set of problems of the form (1.2).

Since the previous theorem is applicable regardless of the concrete value of , it is evident that if we search for an that minimizes the value of the extreme that yields equality in (3.1) for fixed and , we will find the best upper bound for obtainable with this method. Likewise, if we search for the that maximizes the value of the extreme that yields equality in (3.2) for fixed and , we will find the best lower bound for obtainable with this method.
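The search over the free parameter described above can be done with a simple one-dimensional scan. In the sketch below, `B` is a purely illustrative stand-in for the extreme that yields equality in (3.1) as a function of the parameter; it is not the paper's actual bound expression.

```python
def best_upper_bound(B, lo=1e-3, hi=10.0, n=100_000):
    """Crude grid search for the minimizer of B over [lo, hi].

    Returns (argmin, min value). A finer local search could refine the
    result, but a grid scan is often enough to locate the best bound.
    """
    step = (hi - lo) / n
    best_w, best_val = lo, B(lo)
    for i in range(1, n + 1):
        w = lo + i * step
        v = B(w)
        if v < best_val:
            best_w, best_val = w, v
    return best_w, best_val

# Toy stand-in bound: B(w) = w + 1/w, minimized at w = 1 with value 2.
w_star, b_star = best_upper_bound(lambda w: w + 1.0 / w)
print(w_star, b_star)
```

Maximizing over the parameter (for the lower bound of (3.2)) works the same way with the comparison reversed.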

On the other hand, Theorem 2.3 and Corollary 2.6 can also be used to establish upper and lower bounds for the distance between a zero and a critical point of a solution of (1.2). This is quite evident in the case of the lower bound, since the left-hand sides of (2.8) and (2.10) are both increasing functions of the extreme . Therefore, the smallest that gives an equality in (2.8)—for the case —or in (2.10)—for the case —will be the lower bound for the searched zero or critical point of the solution of (1.2), as any smaller will not satisfy (2.8) or (2.10), respectively, and therefore cannot correspond to a zero or a critical point.
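Since the left-hand side grows with the endpoint, finding the smallest endpoint that reaches the threshold is a one-sided root search, for which bisection suffices. The function `F` and the target value in this sketch are illustrative stand-ins for the left-hand side of (2.8)/(2.10) and its threshold, not the paper's actual expressions.

```python
def smallest_b(F, target, lo, hi, tol=1e-9):
    """Bisection for the smallest b in [lo, hi] with F(b) >= target,
    assuming F is increasing and F(lo) < target <= F(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) >= target:
            hi = mid      # threshold already reached: shrink from the right
        else:
            lo = mid      # not reached yet: shrink from the left
    return hi

# Toy increasing left-hand side F(b) = b**3 with threshold 10.
b = smallest_b(lambda t: t ** 3, target=10.0, lo=0.0, hi=5.0)
print(b)  # ~ 10 ** (1/3) ≈ 2.15443
```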

In the case of upper bounds, the left-hand sides of (2.7) and (2.9) are not necessarily increasing with , even though they are products of positive integrals. The reason lies in the fact that the integrands of all the integrals appearing in such inequalities contain minima of functions on an interval: if the length of the interval grows (as grows), those minima can become smaller.

However, if one increases from a starting point (e.g., a critical point of a solution of (1.2)) until one finds a such that then such a value will be an upper bound for the next zero of the solution of (1.2). To prove this, let us pick any and select according to (2.14) in Theorem 2.3. From (2.12), (2.13), (3.4), and (3.5), one has given that on . It is clear that (3.6) violates (2.11), and therefore cannot be the next zero of . A similar result can be proved for the case where is a zero and is a critical point.

The previous reasoning also suggests an approach for determining upper bounds when the left-hand sides of (2.7) or (2.9), depending on the case, reach a maximum lower than for a value . With fixed, it basically consists of determining the lowest such that with for the case , and for the case . From (2.12), (2.13), and (3.7)–(3.9), one has It is important to remark, in any case, that this mechanism does not guarantee success in the search for the upper bound in all cases, as the integral between and may not be large enough to cover the difference between and .

4. Some Examples

Throughout this section, we will introduce examples where upper bounds for the distance between a zero and a critical point of a solution of (1.2) will be provided by means of the two methods presented in the previous section: the one based on Theorem 2.3 (or Corollary 2.6, depending on the case) and the extension of Moore's results (see [11]) to the half-linear equation introduced in Theorem 3.1. All the examples will address the cases and so as to cover half-linear equations whose exponent is, respectively, lower and greater than that of the linear case (). To simplify the comparison, the analysis will fix the value of the starting point (either a zero or a critical point) and will search for an upper bound of the corresponding point (next critical point or zero, resp., adjacent to ).

Example 4.1. Let us consider the following half-linear differential equation: Corollary 2.6 allows us to find an upper bound for by searching for the lowest value that satisfies in the case , and in the case .
Likewise, Theorem 3.1 allows us to find an upper bound for by searching for the lowest value that satisfies across all possible values of .
The results of the mentioned calculations are summarized in Table 1.
As can be seen in Table 1, for both values and , the bounds obtained using Corollary 2.6 improve those of Theorem 3.1 in the case , and are worse than those provided by Theorem 3.1 in the case .

Example 4.2. Let us consider the following half-linear differential equation: The application of Corollary 2.6 and Theorem 3.1 to this example yields Table 2.
As can be seen in Table 2, for both values and , the bounds obtained using Corollary 2.6 improve those of Theorem 3.1 in the case and are worse than those provided by Theorem 3.1 in the case .

Example 4.3. Let us consider the following half-linear differential equation: The application of Theorem 2.3 (with ) and Theorem 3.1 to this example yields Table 3.
As can be seen in Table 3, for the value , the bounds obtained using Theorem 2.3 improve those of Theorem 3.1 in both cases and . In the case of the exponent , one gets exactly the opposite, the bounds obtained with Theorem 2.3 being slightly worse than those provided by Theorem 3.1 in both cases and . It is worth remarking that in this latter case the formulae obtained by means of Theorem 2.3 are the same for both cases and .

Example 4.4. Let us consider the following half-linear differential equation: The application of Corollary 2.6 and Theorem 3.1 to this example yields Table 4.
As can be seen in Table 4, for both values and , the bounds obtained using Corollary 2.6 improve those of Theorem 3.1 in the case and are worse than those provided by Theorem 3.1 in the case (in fact, in that case for the exponent , Corollary 2.6 does not give any valid upper bound at all).

5. Comparison of Lower Bounds

Although the focus of this paper so far has been the calculation of upper bounds for the distance between adjacent zeroes and critical points, it is interesting to compare the lower bounds that can be obtained by means of Corollary 2.6 (for the sake of simplicity, we will use such a corollary instead of Theorem 2.3) with similar results obtained by other authors (see [2, 4–9]). To facilitate this, we will concentrate on the case in (1.2).

In all the cases under comparison, the lower bound can be determined from a Lyapunov inequality of the form , where is an integral whose integrand contains the function of (1.2) and whose limits are the zero and the critical point, and is a real value which depends only on . Bearing this in mind, we can summarize the cases under analysis in Table 5.

A first glance at Table 5 reveals the difficulty of comparing Corollary 2.6 with the rest of the methods, due to the presence of max functions in the integrand of . Such difficulty disappears (in general) in the case when is decreasing, and in the case when is increasing, since in both cases the max function becomes and the resulting integral is the same as the integral appearing in the other methods. If we focus on those cases, one can easily check that the bound obtained from Corollary 2.6 always improves those of [2, 7, 8] and [9, Lemma 4.1] (in all these latter cases, the value is 1, which is lower than ; this allows Corollary 2.6 to yield greater, and therefore better, lower bounds). It also improves that of [9, Proposition 3.3] in the case (for , one has ). That cannot be taken as a general rule if the character of is not as stated before, and in that case it is relatively easy to find examples where the inequalities given in [2, 7–9] improve those obtained with Corollary 2.6.
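The Lyapunov-type inequalities with value 1 mentioned above can be sanity-checked on an explicit linear solution. For $y'' + \lambda y = 0$ with zero at $0$, the solution $y = \sin(\sqrt{\lambda}\, t)$ has its next critical point at $b = \pi/(2\sqrt{\lambda})$, and the product $(b-a)\int_a^b \lambda\, dt$ equals $(\pi/2)^2 \approx 2.467 > 1$ for every $\lambda$, so the inequality holds with room to spare. This is an illustration of ours on the linear case, not a computation from the paper.

```python
import math

# Check (b - a) * ∫_a^b q dt > 1 for y'' + lam*y = 0, a = 0, b = pi/(2*sqrt(lam)):
# the product is scale invariant and equals (pi/2)**2 for every lam.
for lam in (0.5, 1.0, 4.0):
    b = math.pi / (2.0 * math.sqrt(lam))   # first critical point after the zero at 0
    product = b * (lam * b)                # (b - 0) * ∫_0^b lam dt
    print(lam, product)                    # always (pi/2)**2 ≈ 2.4674 > 1
```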

In turn, the comparison with Yang's extension (see [6]) of Brown and Hinton's version of the Lyapunov inequality to the half-linear case is more difficult. One can argue that in most cases [6] will give better (i.e., greater) lower bounds than those of Corollary 2.6. To see that, let us pick the case and positive and increasing. Integrating by parts and applying Theorem 2.1, one can prove Yang's formula for such a case is which, combined with the previous equation, gives Inequality (5.3) shares the same form as the integral of Corollary 2.6 and a value that is greater for (in consequence, for these cases Yang's formula will give much better lower bounds than Corollary 2.6) and slightly lower (2.4 percent lower in the worst case, which is a very small margin) than the value associated with Corollary 2.6 (in these cases, one can only guarantee that Corollary 2.6 will give better lower bounds than (5.3), but not that those lower bounds will be better than the ones calculated with Yang's formula). That does not mean, of course, that Corollary 2.6 can never give better lower bounds than [6]; in fact, the case constant is a good counterexample. However, it is true that finding functions for which Corollary 2.6 improves [6] is not easy at all.

Remark 5.1. We have decided not to include the Lyapunov inequality (3.2) of Theorem 3.1 in Table 5, to simplify the comparison among lower bounds. The reason is that, although it shares the form , with being an integral depending on , there is no case where the integrand of can be easily compared with the integrand of in the rest of the cases, unlike what happens with Corollary 2.6. In consequence, the comparison has to be made numerically with concrete examples, which, although certainly interesting, would have increased the length of the paper excessively.

6. Conclusions

The methods described in this paper provide upper and lower bounds for the distance between a zero and a critical point of a solution of the half-linear differential equation (1.2).

Their advantage over other methods ([2, 7–9]) that provide lower bounds for such a distance is quite clear if the monotonicity conditions stated in Section 5 for are satisfied (otherwise it may be far from being an advantage at all). In contrast, their advantage over the extension of Brown and Hinton's method described in [6] in the same case is smaller, although it does exist for some (in fact, for infinitely many) functions .

In turn, their value in the determination of upper bounds is high, since no other alternatives to estimate such bounds seem to exist in the literature associated with (1.2), as far as the authors are aware.

As for the comparison between both methods of calculating upper bounds, the main disadvantage of the method associated with Theorem 2.3 is that it requires both functions and of (1.2) to be strictly positive on the interval where the calculation is performed. That is not the case for the method associated with Theorem 3.1, which can be used even with or being negative. However, the application of this latter method is usually much more difficult, owing to the need to minimize as a function of the parameter , a minimization that can be tricky and laborious, especially when compared with the method of Theorem 2.3.

The provided examples do not favour either method over the other: the outcome depends on the concrete case and, in fact, in almost all of them (Example 4.3 is an exception) the method that was better in the case proved to be worse in the case . We cannot guarantee that this is a general rule, however, although one can conjecture, just by simple examination of the resulting formulae, that cases where and are monotonic in the same sense are likely to follow that pattern. Further work is therefore required in this area to find the underlying rules.

Acknowledgment

This work has been supported by the Spanish Ministry of Science and Innovation Project DPI2010-C02-01.