Abstract

We study the spectrum structure of discrete second-order Neumann boundary value problems (NBVPs) with sign-changing weight. We apply the properties of the characteristic determinant of the NBVPs to show that the spectrum consists of real and simple eigenvalues; the number of positive eigenvalues is equal to the number of positive elements in the weight function, and the number of negative eigenvalues is equal to the number of negative elements in the weight function. We also show that the eigenfunction corresponding to the th positive/negative eigenvalue changes its sign exactly times.

1. Introduction

Let be an integer, . Let us consider the discrete second-order linear Neumann eigenvalue problem where , and satisfies (A0) , , ; (A1) on and changes its sign on , that is, there exists a proper subset , such that . Let be the number of elements in and let be the number of elements in . Then

When the weight function is of one sign, Atkinson [1] and Jirari [2] studied the eigenvalues of the second-order problem and obtained that (5), (6) have real eigenvalues, which can be ordered as . Here and is constant. It can be seen that if we take , , then (5), (6) reduce to (1), (2).

However, these two results do not give any information on the sign changes of the eigenfunctions of (5), (6).

In 1991, Kelley and Peterson [3] considered the linear eigenvalue problem (5), (6) with , where on , is defined and real valued on , and on . They obtained that (5), (6) have exactly real and simple eigenvalues , which satisfy , and that the eigenfunction corresponding to changes its sign exactly times.

Furthermore, when , Agarwal et al. [4] generalized the above results to dynamic equations with Sturm-Liouville boundary conditions. Moreover, under the assumption that the weight functions are of one sign, further important results on linear Hamiltonian difference systems, including the oscillation properties of solutions, can be found in Shi and Chen [5], Bohner [6], and the references therein. Spectral results for the continuous case have been studied and used to deal with several nonlinear problems; see, for example, [7–13] and the references therein.

However, there are few results on the spectrum of discrete second-order linear eigenvalue problems when changes its sign on . In 2007, Ji and Yang [14, 15] studied the structure of the eigenvalues of (5), (6) with changing its sign, and they obtained that the number of positive eigenvalues equals the number of positive elements in the weight function and that the number of negative eigenvalues equals the number of negative elements in the weight function. It is worth remarking that they provided no information on the distribution of the eigenvalues of (1), (2) and no information on the sign changes of the corresponding eigenfunctions.

Naturally, two interesting questions arise: (a) how are the eigenvalues of (1), (2) distributed, and (b) how do the sign changes of the corresponding eigenfunctions occur?

It is the purpose of this paper to establish the structure of eigenvalues and the oscillatory properties of the corresponding eigenfunctions of (1), (2).

The main result of our paper is the following theorem.

Theorem 1. Suppose that (A0), (A1) hold. Then one has the following.
(i) If , then (1), (2) have real and simple eigenvalues, which can be ordered as follows: Moreover, for , the eigenfunction corresponding to the eigenvalue has exactly simple generalized zeros; for , the eigenfunction corresponding to the eigenvalue has exactly simple generalized zeros.
(ii) If , then (1), (2) have real and simple eigenvalues, which can be ordered as follows: Moreover, for , the eigenfunction corresponding to the eigenvalue has exactly simple generalized zeros; for , the eigenfunction corresponding to the eigenvalue has exactly simple generalized zeros.
(iii) If , then is an eigenvalue of (1), (2) and the other eigenvalues are real and simple, which can be ordered as follows: Moreover, for , the eigenfunction corresponding to the eigenvalue has exactly simple generalized zeros; for , the eigenfunction corresponding to the eigenvalue has exactly simple generalized zeros.
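For a concrete feel for the eigenvalue count in Theorem 1, the following minimal numerical sketch builds a discrete Neumann pencil and compares the signs of its eigenvalues with the signs of the weight. The matrices A and B, the weight values m, and the size T below are illustrative assumptions (a standard second-difference Neumann matrix with a diagonal sign-changing weight), not necessarily the exact setting of (1), (2).

```python
# Illustrative sketch only: the matrices below ASSUME a standard discrete Neumann
# problem written as the pencil A y = lambda B y, with A the second-difference
# matrix under Neumann conditions and B = diag(m(1), ..., m(T)).
import numpy as np

T = 6
m = np.array([1.0, -2.0, 3.0, -1.0, 2.0, -1.5])      # hypothetical sign-changing weight

A = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
A[0, 0] = A[-1, -1] = 1.0                            # Neumann corrections at both ends
B = np.diag(m)

# Generalized eigenvalues of the pencil; B is invertible because every weight entry is nonzero.
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

tol = 1e-9
print("eigenvalues:", np.round(lam, 4))
print("positive eigenvalues:", int(np.sum(lam > tol)), "| positive weights:", int(np.sum(m > 0)))
print("negative eigenvalues:", int(np.sum(lam < -tol)), "| negative weights:", int(np.sum(m < 0)))
# lambda = 0 (constant eigenvector) always appears for this model problem and is
# treated separately in Theorem 1.
```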

Remark 2. It is worth remarking that the number of sign changes of the eigenfunctions is given in Theorem 1. Thus, this result is a generalization of the main results in [15].

Remark 3. Applying Theorem 1 and the well-known Rabinowitz global bifurcation theorem, it is easy to obtain existence results for sign-changing solutions of the nonlinear analogue of (1), (2); see Ma and Gao [12, 16] for some related results.

The rest of the paper is devoted to proving Theorem 1. To do this, we make use of the law of inertia for quadratic forms and some techniques from oscillation matrices [17].

2. Proof of the Main Result

Let for , , . Then (1), (2) can be written as a linear pencil problem as follows: where

Let denote the th principal submatrix of and the th principal submatrix of . It is easy to verify that is positive definite for , is positive semidefinite and

In fact, for any real vector , it follows that From , , . If , then for and , leading to . So, is positive definite. By the same method, with obvious changes, we can conclude that is also positive definite for .

For any real vector , we have . Thus, is positive semidefinite.
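The definiteness claims above can be checked numerically; the sketch below does so for the assumed Neumann second-difference matrix used in the sketch after Theorem 1 (an illustrative stand-in, not necessarily the exact matrix of the pencil).

```python
# Sanity check under the ASSUMED Neumann second-difference matrix A (illustrative only).
import numpy as np

T = 6
A = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
A[0, 0] = A[-1, -1] = 1.0

for k in range(1, T):                                   # proper leading principal submatrices
    assert np.linalg.eigvalsh(A[:k, :k]).min() > 0      # positive definite

print("smallest eigenvalue of A:", round(float(np.linalg.eigvalsh(A)[0]), 12))  # ~ 0
print("A @ ones:", A @ np.ones(T))                      # the constant vector lies in the kernel
```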

For , let denote the th principal subdeterminant of and suppose that ; then , and where , , and .

As is well known, finding the eigenvalues of (1), (2) is equivalent to finding the roots of . Thus, it is necessary to discuss some properties of the sequence (15).
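The sketch below evaluates such a sequence of leading principal minors via the generic three-term recurrence for tridiagonal pencils, assumed here to play the role of (15); the diagonal entries, weight values, and size are illustrative assumptions.

```python
# Leading principal minors d_0, ..., d_T of A - lambda*B via the standard three-term
# recurrence for tridiagonal matrices (ASSUMED model; recurrence (15) is analogous).
import numpy as np

def minors(lmbda, m):
    T = len(m)
    a = np.full(T, 2.0)
    a[0] = a[-1] = 1.0                     # Neumann corrections on the diagonal
    d = [1.0, a[0] - lmbda * m[0]]         # d_0 = 1, d_1 = a(1) - lambda*m(1)
    for t in range(1, T):
        d.append((a[t] - lmbda * m[t]) * d[-1] - d[-2])
    return d

m = [1.0, -2.0, 3.0, -1.0, 2.0, -1.5]      # hypothetical weight, as before
print(np.round(minors(0.7, m), 4))         # the last entry vanishes iff 0.7 is an eigenvalue
```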

For , let be the number of elements in for some , and let be the number of elements in for some .

Lemma 4. For , one has

Proof. For , it is evident that is a polynomial of degree precisely , and

Lemma 5. The roots of are real, . Moreover, has positive roots and negative roots.

Proof. Since is a positive definite matrix for and is a positive semidefinite matrix, it follows that the roots of are real, .
For the , , there exists a unique lower triangular real matrix such that (this is the well-known Cholesky decomposition; see [18, Corollary 7.2.9]). It is easy to check that the matrix is real and symmetric, and is a root of if and only if is an eigenvalue of .
The fact that is real and symmetric indicates that there exists an orthogonal matrix such that where are all eigenvalues of . Let . It is seen from (19) that are two representations of the real quadratic form . In view of the law of inertia for quadratic forms [19, Theorem 1, p. 297], we immediately deduce that the number of positive and the number of negative elements in the set are and , respectively.
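The Cholesky/inertia step can also be checked numerically: in the sketch below (same assumed matrices as before), A_n = L L^T, the matrix C = L^{-1} B_n L^{-T} is congruent to B_n, and the roots of d_n are the reciprocals of the nonzero eigenvalues of C, so the sign counts of the roots match the sign counts of the weight. All concrete data are assumptions for illustration.

```python
# Inertia check in the spirit of Lemma 5 (ASSUMED matrices, illustrative only).
import numpy as np

T, n = 6, 5                                    # n-th leading principal submatrices, n < T
m = np.array([1.0, -2.0, 3.0, -1.0, 2.0, -1.5])
A = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
A[0, 0] = A[-1, -1] = 1.0

An, Bn = A[:n, :n], np.diag(m[:n])
L = np.linalg.cholesky(An)                     # A_n = L L^T (A_n is positive definite)
Linv = np.linalg.inv(L)
C = Linv @ Bn @ Linv.T                         # congruent to B_n, hence same inertia

mu = np.linalg.eigvalsh(C)                     # roots of d_n are the reciprocals of nonzero mu
print("positive roots of d_n:", int(np.sum(mu > 0)), "| positive weights m(1..n):", int(np.sum(m[:n] > 0)))
print("negative roots of d_n:", int(np.sum(mu < 0)), "| negative weights m(1..n):", int(np.sum(m[:n] < 0)))
```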

Lemma 6. Two consecutive polynomials , have no common zeros for .

Proof. Suppose on the contrary that there exists such that . Then by the recurrence relation (15), we get . Furthermore, we can get . However, this contradicts .

Lemma 7. Suppose that is a root of . Then for .

Proof. Since , by Lemma 6 we have . By the recurrence relation (15), , which implies that . This proves the assertion.

Lemma 8. Assume that (A0), (A1) hold. Then (1) ; (2) if , then .

Proof. (1) From (11), we have
So, by simple computation, it follows that
(2) If , then . Moreover, let be the elements in with . Let . Let us arrange the elements of in increasing order: . It is easy to check that the number of elements in is . We denote the th element in (25) by , . For given , define .
By computing and simplifying, we get that and here denotes the principal minor of order of , obtained by deleting the elements of the th column and row and the elements of the th column and row. Thus, from (A0) and , it follows that

Lemma 9. For , the roots of are simple. Moreover, one has the following.
(i) The largest negative root and the smallest positive root of and the largest negative root and the smallest positive root of satisfy .
(ii) For , the positive roots of and separate one another; the negative roots of and separate one another.
(iii) If , then the roots of are simple and is a simple root; if , then is a double root of and the other roots are simple.

Proof. First, we deal with the case .
Obviously, . If , then If , then
Recall . If , , then and has two different roots and as follows: It is easy to see that .
If , , then and has two different roots and as follows: It is easy to see that .
If , , then and has two different roots and as follows: It is easy to see that .
If , , then and has two different roots and as follows: It is easy to see that . Thus, the assertion is true for .
Second, suppose that for , and the relations of and are true, that is, the following two assertions hold.
If , then , and accordingly,
If , then , and accordingly,
Now, we consider the case . It is enough to verify the following four cases.
Case 1. and .
In this case, , , we need to prove that
Case 2. and .
In this case, , , we need to prove that
Case 3. and .
In this case, , , we need to prove that
Case 4. and .
In this case, , , we need to prove that
We only deal with Case 1; the other cases can be dealt with in the same way.
First, we show that (42) holds.
Since , it follows from Lemma 4 that
We only deal with the case that is even; the case that is odd can be treated in the same way.
Thus (47) reduces to Recall (39) as follows: and the fact that It follows from (47), (48), and (49) that Combining this with (50) and using Lemma 7, we conclude that In particular, By Lemma 5, has exactly zeros in . This together with (52), (53), and the fact that implies that there exist , , and , such that Therefore, (42) is valid.
Next, we show that (43) is true.
Obviously, , yields In the following, we only deal with the case that is even; the case that is odd can be treated in the same way.
From Lemma 4, we have that Combining this with (40) as and using the fact that we conclude that This together with Lemma 7 implies that In particular, for , This together with the third inequality in (57) implies that for some .
Using (61) with , we get which together with the fact implies that for some .
Now, for , there exist , such that Therefore, (43) is valid.
Finally, for , the relations and are also true. From the above conclusions, we see that has negative roots and positive roots satisfying
If , we have that , . By a similar argument, together with the fact and Lemma 8, it follows that (i) if , then ; (ii) if , then ; (iii) if , then .
If , we have that , . By a similar method, together with and Lemma 8, we get that (i) if , then ; (ii) if , then ; (iii) if , then . Thus the proof is complete.
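The interlacing described in Lemma 9 can be observed numerically as well; the sketch below prints the roots of a few consecutive minor polynomials for the assumed pencil (illustrative data only), so that the separation of the positive roots, and likewise of the negative roots, can be seen directly.

```python
# Roots of consecutive principal-minor polynomials (ASSUMED model, illustrative only).
import numpy as np

T = 6
m = np.array([1.0, -2.0, 3.0, -1.0, 2.0, -1.5])
A = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
A[0, 0] = A[-1, -1] = 1.0

def roots(n):
    """Roots of d_n(lambda): generalized eigenvalues of the n-th leading subpencil."""
    return np.sort(np.linalg.eigvals(np.linalg.solve(np.diag(m[:n]), A[:n, :n])).real)

for n in (3, 4, 5):
    print(f"roots of d_{n}:", np.round(roots(n), 4))
```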

Lemma 10. Let be the number of sign changes in the sequence (15). Then for , where means that in and means that in .

Proof. The argument is motivated by the proof of Sturm's theorem; see [20, Theorem ] and its proof.
The idea of the proof is to follow the changes in as passes through the interval . In particular, we will show that is a monotonically increasing function and that each root of , and only a root of , makes jump by .
If for some , then for we have from Lemma 7 that and have opposite, but constant, signs, since and cannot be zero in a sufficiently small neighborhood and thus cannot change sign. Hence, whatever the sign of is in , it does not change the overall sign-change count (to see this, note that and have opposite signs; hence, if the sign sequence before is , it is after , and the number of sign changes remains the same. The same holds for the other cases). In other words, stays constant when passing through a root of for some .
It is easy to see from Lemma 9 that
Next, we show that each root of , and only a root of , makes jump by .
In fact, for , , which implies that there exists a neighborhood of , such that From the definition of , The chain of signs switches from “” to “” when passing through , so increases by .
For , The chain of signs switches from “” to “” when passing through , so increases by .
Repeating the above argument, we may deduce that This completes the proof.
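The following small experiment illustrates the jump behaviour in Lemma 10 for the assumed model and recurrence used in the earlier sketches: the sign-change count of the minor sequence changes by exactly one as λ crosses each simple root of the last minor.

```python
# Sign changes in the minor sequence near the roots of the last minor
# (ASSUMED model, illustrative only).
import numpy as np

m = np.array([1.0, -2.0, 3.0, -1.0, 2.0, -1.5])
T = len(m)

def minors(lmbda):
    a = np.full(T, 2.0); a[0] = a[-1] = 1.0
    d = [1.0, a[0] - lmbda * m[0]]
    for t in range(1, T):
        d.append((a[t] - lmbda * m[t]) * d[-1] - d[-2])
    return d

def sign_changes(lmbda):
    s = [x for x in minors(lmbda) if x != 0.0]
    return sum(1 for u, v in zip(s, s[1:]) if u * v < 0)

A = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
A[0, 0] = A[-1, -1] = 1.0
roots = np.sort(np.linalg.eigvals(np.linalg.solve(np.diag(m), A)).real)   # roots of the last minor

for r in roots:
    print(f"root {r:+.4f}:  S(r - eps) = {sign_changes(r - 1e-6)},  S(r + eps) = {sign_changes(r + 1e-6)}")
```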

Lemma 11. If satisfies (1), (2) with , then

Proof. Let be a solution of (1), (2). Then Clearly, (83) is equivalent to with and determined by (2).
Let Then Obviously, and satisfy the same recurrence formula (15), and it follows that and accordingly,
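The identity behind Lemma 11 can be seen by a shooting computation for the assumed model problem: propagating the difference equation from the left Neumann condition with y(0) = y(1) = 1 reproduces the minor sequence, and the Neumann defect at the right end vanishes exactly at the eigenvalues. All concrete data below are illustrative assumptions.

```python
# Shooting check in the spirit of Lemma 11 (ASSUMED model:
# y(t+1) = (2 - lambda*m(t))*y(t) - y(t-1), with y(0) = y(1) = 1; illustrative only).
import numpy as np

def shoot(lmbda, m):
    y = [1.0, 1.0]                                   # left Neumann condition: y(0) = y(1) = 1
    for t in range(1, len(m) + 1):
        y.append((2.0 - lmbda * m[t - 1]) * y[-1] - y[-2])
    return y                                         # y(0), ..., y(T+1)

m = [1.0, -2.0, 3.0, -1.0, 2.0, -1.5]
y = shoot(0.7, m)
print("y(2), ..., y(T):   ", np.round(y[2:-1], 4))   # coincides with the minors d_1, ..., d_{T-1}
print("Delta y(T) = y(T+1) - y(T) =", round(y[-1] - y[-2], 4))  # equals d_T; zero iff eigenvalue
```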

Proof of Theorem 1. From Lemmas 5 and 9, we obtain the following consequences.
(i) If , then (1), (2) have real and simple eigenvalues, which can be ordered as follows:
(ii) If , then (1), (2) have real and simple eigenvalues, which can be ordered as follows:
(iii) If , then is an eigenvalue of (1), (2) and the other eigenvalues are real and simple, which can be ordered as follows:
Now, we consider the number of sign changes of the eigenfunctions. From Lemma 11, we may determine the number of sign changes of via that of , since (2) implies that and .
Let be the number of sign changes in the sequence Using the same method as in the proof of Lemma 10, with obvious changes, we obtain that for , Thus, for , Lemma 9 yields This together with (95) and the fact that is nondecreasing in implies and accordingly For the case , since for , it follows that This together with the facts and implies that
If , then we are done. If , then by the same method, with obvious changes, we get
Finally, by using the above method, with obvious changes, we may prove that the number of sign changes is .
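To connect the proof with the oscillation statement numerically, one can also compute the eigenvectors of the assumed pencil and count the sign changes of their entries (a discrete counterpart of the generalized zeros in Theorem 1), comparing the counts with the ordering of the eigenvalues; again, all concrete data are illustrative assumptions.

```python
# Sign changes of the eigenvectors of the ASSUMED pencil A y = lambda B y (illustrative only).
import numpy as np

T = 6
m = np.array([1.0, -2.0, 3.0, -1.0, 2.0, -1.5])
A = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
A[0, 0] = A[-1, -1] = 1.0

lam, Y = np.linalg.eig(np.linalg.solve(np.diag(m), A))
for j in np.argsort(lam.real):
    y = Y[:, j].real
    changes = sum(1 for u, v in zip(y, y[1:]) if u * v < 0)      # strict sign changes of the entries
    print(f"lambda = {lam[j].real:+.4f}   sign changes of eigenvector: {changes}")
```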

Remark 12. If one extra term is present in (1), then is in general no longer an eigenvalue of . Recall that the fact is essential in Lemma 8 and its proof. Therefore, the spectrum structure of the more general problem (103) remains open.

Acknowledgments

The authors are very grateful to the anonymous referees for their valuable suggestions. This study is supported by the NSFC (no. 11061030), the SRFDP (no. 20126203110004), and the Gansu Provincial National Science Foundation of China (no. 1208RJZA258).