#### Abstract

We consider the following eigenvalue problem: , , , , where is an arbitrary fixed parameter and is an odd smooth function. First, we prove that for each integer there exists a radially symmetric eigenfunction which, regarded as a function of , possesses precisely zeros. For sufficiently small, such an eigenfunction is unique for each . Then, we prove that if is sufficiently small, then an arbitrary sequence of radial eigenfunctions , where for each the th eigenfunction possesses precisely zeros in , is a basis in (here is the subspace of that consists of the radial functions from ). In addition, in the latter case, the sequence is a Bari basis in the same space.

#### 1. Introduction, Notation and Definitions, and Results

In the present paper, we consider the problem where is an odd continuously differentiable function, is a spectral parameter, and is an arbitrary positive fixed parameter. Problems of this type arise, in particular, in solid-state physics, in heat and diffusion theory, in the theory of nonlinear waves, and so forth. Hereafter, all the quantities we deal with are real.

We restrict our attention to the radial eigenfunctions of problem (1.1)–(1.3), that is, to the eigenfunctions that depend only on . Under our assumptions, the problem has an infinite sequence of radial eigenfunctions such that for each integer the th eigenfunction, regarded as a function of , has precisely zeros in the interval . The main question addressed in the present paper is whether such a sequence of eigenfunctions is a basis in a commonly used space, such as the subspace of that consists of all radial functions from . According to our results below, this is true if is sufficiently small.

For a discussion of the pertinence of our formulation of the problem (note that problem (1.1)–(1.3) includes an unusual normalization condition (1.2)) and for a longer list of references, we refer the reader to our recent review paper [1]. Here we note only that our formulation of the problem is “good” in the sense that, as in the linear case, our problem has an infinite sequence of radial eigenfunctions , where for each integer the th eigenfunction , regarded as a function of the argument , possesses precisely zeros and, if is sufficiently small, such a sequence of eigenfunctions is a basis in the space . But if one excludes the normalization condition (1.2) from the statement of the problem, then the set of all eigenfunctions becomes too wide; it would contain “a lot of” bases. It is a separate question which normalization condition should be imposed. The author believes that this question can be answered only in the future, if and when the field is sufficiently developed; in particular, an applied problem may supply such an answer. In this context, the reader may regard our system (1.1)–(1.3) as a model problem.

We mention especially our paper [2] (see also [1]), in which a problem analogous to (1.1)–(1.3) was studied in spatial dimension . It is proved in these two articles that if assumption (f) is valid (see below) and if, in addition, is a nondecreasing function of , then this one-dimensional problem has a unique sequence of eigenfunctions such that for each the th eigenfunction has precisely zeros in , and this sequence of eigenfunctions is a basis (moreover, a Riesz basis) in , while the sequence of normalized eigenfunctions is a Bari basis in the same space (precise definitions are given below).

Now, we introduce some *notation* and *definitions*. Let be the standard Lebesgue space of functions square integrable over , equipped with the scalar product and the corresponding norm . By we denote the subspace of the space that consists of all radial functions from , equipped with the same scalar product and norm. Let denote the usual weighted Lebesgue space of functions measurable in for which
The space is equipped with the corresponding scalar product. In fact, , , and are Hilbert spaces.

Let be a separable Hilbert space over the field of real numbers in which the scalar product and the norm are denoted and , respectively. We recall that a sequence is called a *(Schauder) basis* in if for any there exists a unique sequence of real numbers such that
Two sequences and from are called *quadratically close to each other* (or the sequence is called *quadratically close* to the sequence ) if
A basis in quadratically close to an orthonormal basis in is called a *Bari basis* in . According to Corollary in [1], if is an arbitrary sequence of elements of and if
where is an orthonormal basis in , then is a Bari basis in . Compared with general bases, Bari bases have additional nice properties that we do not discuss in the present paper (on this subject, see, e.g., [3]). Some general aspects of the theory of nonorthogonal expansions in a Hilbert space are considered in [1, 3].
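Since the displayed formulas are missing from this copy, we record the standard definitions just referred to, in generic notation (the paper's own symbols are not recoverable here):

```latex
% Standard definitions, stated in generic notation because the paper's
% symbols are missing from this copy of the text.
\textbf{Schauder basis:} a sequence $\{e_n\}_{n=1}^{\infty} \subset H$ is a
basis in $H$ if for every $x \in H$ there exists a unique real sequence
$\{c_n\}$ such that
\[
  \Big\| \, x - \sum_{n=1}^{N} c_n e_n \Big\| \longrightarrow 0
  \qquad (N \to \infty).
\]
\textbf{Quadratic closeness:} sequences $\{f_n\}$ and $\{g_n\}$ in $H$ are
quadratically close to each other if
\[
  \sum_{n=1}^{\infty} \| f_n - g_n \|^{2} < \infty .
\]
```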

We call a sequence of radial eigenfunctions of problem (1.1)–(1.3) *standard* if for each integer the th eigenfunction, regarded as a function of , possesses precisely zeros in the interval . Everywhere below we assume the following.

*(f) Let be a continuously differentiable odd function in , and let .*

Note that the assumption that is not restrictive: one can achieve it for an arbitrary odd continuously differentiable function by a shift of the spectrum.

Consider the following linear eigenvalue problem: where is a spectral parameter. Denote by the sequence of radial eigenfunctions of problem (1.8), where, for each integer , the th eigenfunction, regarded as a function of , possesses precisely zeros. By we denote the corresponding sequence of eigenvalues. Note that is an orthogonal basis in . Our main results are as follows.
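For orientation, the following numerical sketch is purely illustrative (the paper's domain and dimension are not recoverable from this copy and are assumptions here): for the Dirichlet Laplacian on the unit ball in dimension 3, the radial eigenfunctions are proportional to sin(kπr)/r with eigenvalues (kπ)², the kth eigenfunction has k − 1 interior zeros, and the family is orthogonal in the radial weighted space with weight r².

```python
# Illustration only: radial Dirichlet eigenfunctions of -Laplace on the unit
# ball in R^3 (a hypothetical stand-in; the paper's domain/dimension are
# assumptions here).  phi_k(r) = sin(k*pi*r)/r vanishes at r = 1, has k - 1
# zeros in (0, 1), and the family is orthogonal in L^2((0, 1), r^2 dr).
import numpy as np

def phi(k, r):
    """k-th radial eigenfunction of the model linear problem."""
    return np.sin(k * np.pi * r) / r

def weighted_inner(k, m, n=200001):
    """Trapezoid approximation of the inner product in L^2((0,1), r^2 dr)."""
    r = np.linspace(1e-9, 1.0, n)
    f = phi(k, r) * phi(m, r) * r**2   # equals sin(k*pi*r) * sin(m*pi*r)
    h = r[1] - r[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

print(round(weighted_inner(1, 1), 6))       # normalization constant 1/2
print(round(abs(weighted_inner(1, 2)), 6))  # orthogonality: vanishes
```

The weight r² is exactly what makes this family orthogonal: the products reduce to integrals of sin(kπr) sin(mπr) over (0, 1).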

Theorem 1.1. *Under assumption (f):*
(a) *for any integer , problem (1.1)–(1.3) has a radial eigenfunction which, regarded as a function of , possesses precisely zeros in the interval ;*
(b) * for any and for an arbitrary radial eigenfunction of problem (1.1)–(1.3);*
(c) *if, in addition to assumption (f), is a nondecreasing function of , then the positive radial eigenfunction is unique;*
(d) *there exists such that for any and any integer , the radial eigenfunction of problem (1.1)–(1.3) that, regarded as a function of , has precisely zeros in is unique.*

Theorem 1.2. *Let assumption (f) be valid. Then, there exists , defined for all and going to as , such that for any **
for an arbitrary standard sequence of eigenfunctions of problem (1.1)–(1.3). Consequently, if is sufficiently small, an arbitrary standard sequence of eigenfunctions (which is unique for sufficiently small by Theorem 1.1) is a basis in and, in addition, the sequence is a Bari basis in the same space.*

*Remark 1.3. *In view of Theorem 1.2 and Bari's theorem (see [1, 3]), if one proves the linear independence, in the sense of the space , of a standard sequence of eigenfunctions of problem (1.1)–(1.3) when is not necessarily small, then this sequence of eigenfunctions is a basis in , and the sequence of these eigenfunctions normalized in is a Bari basis in the same space, too. However, in the present paper, we leave open the question of the linear independence of such a system when is not small.

In the next section, we will prove Theorem 1.1, and in Section 3, Theorem 1.2.

#### 2. Proof of Theorem 1.1

Proofs of results of the type of Theorem 1.1(a) are by now well known (on this subject, see, e.g., [4]), so we give only a sketch of the proof of this claim. In the class of radial solutions, problem (1.1)–(1.3) reduces to the following one: where the prime denotes the derivative with respect to . Equation (2.1) can also be rewritten in the following equivalent form: We supply (2.4) with the following initial data: A solution of (2.4) and (2.5) that satisfies the condition is a solution of problem (2.1)–(2.3). In (2.1) and (2.4), is a singular point. However, for problem (2.4)-(2.5), the local existence, uniqueness, and continuous dependence theorems in their usual form are valid (for proofs of these claims, see, e.g., [4]). Let be a solution of (2.4)-(2.5). Then, , the derivative exists, is continuous, and satisfies the equations As above, for problem (2.6)-(2.7) the local existence and uniqueness theorems in their usual form, as well as the theorem on continuous dependence on , hold. In addition, since (2.6) is linear with respect to , the solution of problem (2.6)-(2.7) exists for all those values of for which the solution of problem (2.4)-(2.5) exists.

Let us prove statement (a) of Theorem 1.1. Multiplying (2.4) by and integrating from to , we obtain the identity where . Denote .
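The displayed computation is missing from this copy; for orientation, here is the kind of identity this step produces, assuming, hypothetically, that the radial equation (2.4) has the model form written in the comment below (the exact equation is not recoverable here):

```latex
% Model computation only: the exact form of (2.4) is not recoverable from
% this copy, so the equation below is an assumption.
% Multiplying  u'' + ((n-1)/r) u' + \lambda u - f(u^2) u = 0  by  u'  gives
\[
  \frac{d}{dr}\Big[\,\tfrac12\,(u')^{2} + \tfrac{\lambda}{2}\,u^{2}
      - \tfrac12\,F(u^{2})\Big]
  \;=\; -\,\frac{n-1}{r}\,(u')^{2} \;\le\; 0,
  \qquad F(s) = \int_{0}^{s} f(t)\,dt,
\]
% so the bracketed ``energy'' is nonincreasing in r; integrating from 0 to r
% yields an identity of the type used in Lemma 2.1.
```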

Lemma 2.1. *If is such that , where is the corresponding solution of problem (2.4)-(2.5), then there is no point such that . In particular, for any so that is not an eigenfunction of problem (2.1)–(2.3).*

*Proof. *Suppose, on the contrary, that and there exists such that . But then, , therefore which contradicts (2.8).

Lemma 2.2. *Let . Then, for all .*

*Proof. *The proof is analogous to that of Lemma 2.1 (if , then by the uniqueness theorem).

Note that Lemmas 2.1 and 2.2 yield statement (b) of Theorem 1.1. Note in addition that if , then the solution of problem (2.4)-(2.5) is global (i.e., it can be continued to the entire half-line ).

Observe now that for all sufficiently large , , and for all sufficiently large . By Lemma 2.2, for all sufficiently large. Therefore, comparing (2.4)-(2.5) with (1.8) (one should rewrite system (1.8) in a form analogous to (2.4)-(2.5)), we see that, by the standard comparison theorem, the number of zeros of in grows without bound as increases without bound.

Take an arbitrary integer and denote by the set of all values of for each of which the solution of (2.4)-(2.5) has at least zeros in . Let . Denote by the solution of problem (2.4)-(2.5) taken with . Then, for all , so that this solution is global. Observe that if is a solution of (2.4) and , then by the uniqueness theorem. Therefore, the zeros of are isolated, and hence has a finite number of zeros in . If , then there exists sufficiently close to such that the corresponding solution of (2.4)-(2.5) has at least zeros in this interval, which contradicts our definition of the set . Similarly, if , or if and , then any solution of (2.4) and (2.5) taken for sufficiently close to has at most zeros in , which again contradicts our definition of the set . Thus, has precisely zeros in the interval and . So, claim (a) of Theorem 1.1 is proved.
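The zero-counting mechanism in the shooting argument above can be illustrated numerically. Since the paper's equations are missing from this copy, the sketch below uses a hypothetical linear stand-in, u'' + (2/r)u' + λu = 0 on (0, 1) with the solution regular at the singular point r = 0 (dimension 3 is an assumption): the regular solution is proportional to sin(√λ r)/(√λ r), so the number of interior zeros is ⌊√λ/π⌋ and grows without bound as the parameter λ grows, mimicking the comparison-theorem step.

```python
# Illustration only: shooting for the hypothetical linear radial model
#   u'' + (2/r) u' + lam * u = 0  on (0, 1),  u regular at r = 0.
# Its bounded solution is sin(sqrt(lam) r) / (sqrt(lam) r), so the number of
# interior zeros is floor(sqrt(lam)/pi).  (The paper's actual equation (2.4)
# is missing from this copy; this model is an assumption.)
import math

def count_zeros(lam, r0=1e-3, R=1.0, n=20000):
    """Fixed-step RK4 from near the singular point; counts sign changes."""
    h = (R - r0) / n
    # Taylor data of the regular solution at r0:
    #   u(r) ~ 1 - lam r^2 / 6,   u'(r) ~ -lam r / 3
    u, v, r = 1.0 - lam * r0**2 / 6.0, -lam * r0 / 3.0, r0

    def rhs(r, u, v):
        return v, -2.0 * v / r - lam * u

    zeros, prev = 0, u
    for _ in range(n):
        k1u, k1v = rhs(r, u, v)
        k2u, k2v = rhs(r + h/2, u + h/2 * k1u, v + h/2 * k1v)
        k3u, k3v = rhs(r + h/2, u + h/2 * k2u, v + h/2 * k2v)
        k4u, k4v = rhs(r + h, u + h * k3u, v + h * k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        r += h
        if prev * u < 0:
            zeros += 1
        prev = u
    return zeros

for c in (0.5, 2.5, 6.5, 12.5):
    print(c, count_zeros(math.pi**2 * c))  # zero count = floor(sqrt(c))
```

Starting the integration at a small r0 with the Taylor expansion of the regular solution sidesteps the singular point, exactly as the local existence theorem quoted in the text allows.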

Let us prove claim (c). Suppose, on the contrary, that there exist two positive eigenfunctions and of problem (2.1)–(2.3) corresponding to the eigenvalues and , respectively, where . By (2.8), for any . Indeed, if , then , while if and for and if , then for all , in view of (2.4) and by the same arguments as in the proof of Lemma 2.1.

Now, we apply a variant of the result from [5].

Lemma 2.3. *One has for any .*

*Proof. *We have , therefore
in a right half-neighborhood of the point . Let us prove that (2.9) holds everywhere in . Suppose that the first inequality (2.9) holds in some interval , where . Integrate it from to . Then
therefore as long as the first inequality in (2.9) holds.

Suppose now that the first inequality in (2.9) is valid in an interval , , and that it is violated at the point . Note that, as proved above, . But then, by (2.4),
hence, by continuity, (2.11) remains valid in a left half-neighborhood of the point , so that we must have , which is a contradiction.

Now, suppose that (2.9) holds everywhere in and that , . Denote , , , and . From (2.4), In a neighborhood of the point , one has and, by analogy, So, we see that the difference goes to as . But then in a left half-neighborhood of the point (because by (2.9) in ), and since in addition in , we arrive at a contradiction as before. Thus, claim (c) of Theorem 1.1 is proved.

Now, we turn to proving claim (d) of Theorem 1.1. Let us prove the following.

Lemma 2.4. *There exist and such that for any , , , and the corresponding solution of (2.4)-(2.5) which satisfies one has
*

*Proof. *First, multiply (2.6) by and integrate the result from to . Then, after integration by parts, we obtain
Second, multiply (2.6) by and integrate the result from to . Then, after integration by parts,
Now, add (2.18) to (2.17). Then, we obtain the following identity:

Now, take an arbitrary and integrate (2.19) once more from to . Then, applying the inequality , , we obtain
for constants and independent of , , and . Consequently, since for sufficiently small by the comparison theorem, for sufficiently small
where the constants and do not depend on , , and . Now, the statement of Lemma 2.4 follows by Gronwall's lemma.
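For the reader's convenience, the form of Gronwall's lemma invoked here (and again in Section 3) is the standard one, stated in generic notation:

```latex
\[
  \text{if } \; 0 \le \varphi(r) \le C + \int_{0}^{r} g(s)\,\varphi(s)\,ds
  \;\text{ with } g \ge 0,
  \quad\text{then}\quad
  \varphi(r) \le C \exp\!\Big(\int_{0}^{r} g(s)\,ds\Big).
\]
```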

Lemma 2.5. *There exists such that for any there is no and for which .*

*Proof. *Suppose, on the contrary, that there exist arbitrarily small , and for which . Multiply (2.4) by and (2.6) by , subtract the results from each other, and integrate the obtained identity from to . Then, after integration by parts,
where as because . Therefore
for sufficiently small, which contradicts Lemma 2.4.

Let us prove Theorem 1.1(d). Let now . Take an arbitrary integer and let , as earlier. Then, a simple corollary of Lemma 2.5 is that there exists a right half-neighborhood of belonging to in which the st zero of the solution of problem (2.4)-(2.5) is a strictly decreasing function of (so that, in particular, ). Letting increase further and applying Lemma 2.5 again, one sees that the st zero of the corresponding solution of (2.4)-(2.5) continues to decrease strictly, so that in the half-line there is no value for which has precisely zeros in the interval and . Claim (d) of Theorem 1.1 is proved. Our proof of Theorem 1.1 is complete.

#### 3. Proof of Theorem 1.2

Denote . Then where . By the comparison theorem and Theorem 1.1(b), where goes to as and does not depend on . In addition, again by the standard comparison theorem, there exists such that for all . By (3.3) and Theorem 1.1(b), where goes to as uniformly in .

Now, we proceed as in the proof of Lemma 2.4. First, multiply (3.1) by and integrate the result from to . Then, after integration by parts, Second, multiply (3.1) by and integrate the result from to . Then, in view of the boundary conditions (3.2), after integration by parts, Add (3.7) to (3.6). Then, we obtain Integrate (3.8) once more from to . Then, as when deriving (2.20), Hence, in view of (3.3) and (3.5), where is a constant independent of and , and as . Now, we obtain from (3.10), by Gronwall's lemma, the following: where does not depend on and and goes to as . Thus, finally, from (3.4) and (3.9), we have for a constant independent of and going to as .

Now, it follows from (3.12) that where is defined for all and goes to as . So, we have In view of this estimate, to prove Theorem 1.2, it suffices to show that where goes to as . But we have, by the part of Theorem 1.2 already proved, where as . Thus, (3.15) follows. Our proof of Theorem 1.2 is complete.