Abstract

A version of the inverse spectral problem for two spectra of finite-order real Jacobi matrices (tridiagonal symmetric matrices) is investigated. The problem is to reconstruct the matrix using two sets of eigenvalues: one for the original Jacobi matrix and one for the matrix obtained by deleting the last row and last column of the Jacobi matrix.

1. Introduction

Jacobi matrices (tridiagonal symmetric matrices) appear in a variety of applications. A distinguishing feature of Jacobi matrices is that they are related to certain three-term recursion equations (second-order linear difference equations). Therefore, these matrices can be viewed as the discrete analogue of Sturm-Liouville operators, and their investigation has many similarities with Sturm-Liouville theory [1].

An $N\times N$ real Jacobi matrix is a matrix of the form
$$J=\begin{pmatrix} b_0 & a_0 & 0 & \cdots & 0 & 0\\ a_0 & b_1 & a_1 & \cdots & 0 & 0\\ 0 & a_1 & b_2 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & b_{N-2} & a_{N-2}\\ 0 & 0 & 0 & \cdots & a_{N-2} & b_{N-1}\end{pmatrix}, \tag{1.1}$$
where for each $n$ the $a_n$ and $b_n$ are arbitrary real numbers such that $a_n$ is different from zero:
$$a_n\neq 0\quad (n\in\{0,1,\ldots,N-2\}),\qquad b_n\in\mathbb{R}\quad (n\in\{0,1,\ldots,N-1\}). \tag{1.2}$$
Let $J_1$ be the truncated $(N-1)\times(N-1)$ matrix obtained by deleting from $J$ the last row and last column:
$$J_1=\begin{pmatrix} b_0 & a_0 & \cdots & 0\\ a_0 & b_1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & b_{N-2}\end{pmatrix}. \tag{1.3}$$
Denote the eigenvalues of the matrices $J$ and $J_1$ by $\lambda_1,\ldots,\lambda_N$ and $\mu_1,\ldots,\mu_{N-1}$, respectively. The (finite) sequences $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ are called the two spectra of the matrix $J$.
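These objects are easy to generate numerically; a minimal sketch (assuming NumPy is available; the entry values below are arbitrary illustrations):

```python
import numpy as np

def jacobi_matrix(b, a):
    """Build the N x N tridiagonal symmetric matrix (1.1) from the diagonal
    entries b_0..b_{N-1} and the off-diagonal entries a_0..a_{N-2}."""
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# Illustrative entries satisfying (1.2): a_n real and nonzero, b_n real.
b = [1.0, -2.0, 0.5, 3.0]
a = [1.0, 0.4, -1.5]

J = jacobi_matrix(b, a)
J1 = J[:-1, :-1]              # delete the last row and last column, as in (1.3)

lam = np.linalg.eigvalsh(J)   # first spectrum:  lambda_1 < ... < lambda_N
mu = np.linalg.eigvalsh(J1)   # second spectrum: mu_1 < ... < mu_{N-1}
```

`eigvalsh` returns the eigenvalues in ascending order, so `lam` and `mu` are exactly the two spectra of `J` in the ordering used below.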

The subject of the present paper is the solution of the inverse problem consisting of the following parts:
(i) to determine whether the matrix $J$ is uniquely determined by its two spectra;
(ii) to indicate an algorithm for the construction of the matrix $J$ from its two spectra;
(iii) to find necessary and sufficient conditions for two given sequences of real numbers $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ to be the two spectra of some matrix $J$ of the form (1.1) with entries from the class (1.2).

This problem was solved earlier in [2, 3]. In the present paper we offer another, and it seems to us more effective, method of solution for this problem.

Other versions of the inverse problem for two spectra are investigated in [1, 4–9].

The paper consists, besides this introductory section, of two sections. Section 2 is auxiliary and briefly presents the solution of the inverse problem for finite Jacobi matrices in terms of the eigenvalues and normalizing numbers; a solution to this problem is given in [1, Section 4.6] and [10]. In Section 3, we solve the main problem formulated above. At the basis of this solution are the formulae (1.4) and (1.5), which express the normalizing numbers of a finite Jacobi matrix in terms of two of its spectra. The formulae (1.4) and (1.5) give a conditional solution of the inverse problem in terms of two spectra (i.e., a solution under the assumption that there exists a matrix of the form (1.1) which has the sequences $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ as two of its spectra), because once we know the numbers $\lambda_k$ and $\beta_k$, we can form the matrix $J$ by the prescription given in Section 2. Next, we give necessary and sufficient conditions for two sequences of real numbers $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ to be two spectra of a Jacobi matrix of the form (1.1) with entries in the class (1.2); that is, we solve the main problem of this paper. The conditions reduce to the following single and simple condition:
$$\lambda_1<\mu_1<\lambda_2<\mu_2<\cdots<\mu_{N-1}<\lambda_N, \tag{1.6}$$
that is, the numbers $\lambda_k$ and $\mu_k$ interlace.

2. Preliminaries on the Inverse Spectral Problem

In this section, we follow the author’s paper [10]. Given a Jacobi matrix $J$ of the form (1.1) with the entries (1.2), consider the eigenvalue problem $Jy=\lambda y$ for a column vector $y$; it is equivalent to the second-order linear difference equation (2.1) for the components of $y$, together with the boundary conditions (2.2). Denote by $\{P_n(\lambda)\}$ and $\{Q_n(\lambda)\}$ the solutions of (2.1) satisfying the initial conditions (2.3) and (2.4), respectively. For each $n$, $P_n(\lambda)$ is a polynomial of degree $n$ and is called a polynomial of the first kind, and $Q_n(\lambda)$ is a polynomial of degree $n-1$ and is known as a polynomial of the second kind. The equality (2.5) holds, so that the eigenvalues of the matrix $J$ coincide with the zeros of the polynomial $P_N(\lambda)$. If $\lambda_0$ is such a zero, then $y(\lambda_0)=\bigl(P_0(\lambda_0),P_1(\lambda_0),\ldots,P_{N-1}(\lambda_0)\bigr)^{\mathrm T}$ is an eigenvector of $J$ corresponding to the eigenvalue $\lambda_0$, and any eigenvector of $J$ corresponding to $\lambda_0$ is a constant multiple of $y(\lambda_0)$.
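The connection between the leading principal minors of $\lambda I-J$ and a three-term recursion can be checked directly. The sketch below uses the standard determinant recursion $D_n(\lambda)=(\lambda-b_{n-1})D_{n-1}(\lambda)-a_{n-2}^2D_{n-2}(\lambda)$ for $D_n(\lambda)=\det(\lambda I-J_n)$, $J_n$ the leading $n\times n$ block; these $D_n$ are proportional to, though normalized differently from, the first-kind polynomials of the text (entry values are illustrative):

```python
import numpy as np

def leading_minors(lmbda, b, a):
    """Evaluate D_n(lambda) = det(lambda*I - J_n) for n = 0..N via the
    three-term determinant recursion for tridiagonal matrices."""
    D = [1.0, lmbda - b[0]]
    for n in range(2, len(b) + 1):
        D.append((lmbda - b[n - 1]) * D[n - 1] - a[n - 2] ** 2 * D[n - 2])
    return D

b = [1.0, -2.0, 0.5, 3.0]
a = [1.0, 0.4, -1.5]
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# The top minor D_N vanishes exactly at the eigenvalues of J (cf. (2.5)),
# and D_{N-1} vanishes at the eigenvalues of the truncated matrix.
lam = np.linalg.eigvalsh(J)
vals = [leading_minors(x, b, a)[-1] for x in lam]
```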

As shown in [10, Section 8], the equations (2.6) and (2.7) hold, where the prime denotes the derivative with respect to $\lambda$.

Since the real Jacobi matrix $J$ of the form (1.1), (1.2) is self-adjoint, its eigenvalues are real. Let $\lambda_0$ be a zero of the polynomial $P_N(\lambda)$. The zero $\lambda_0$ is an eigenvalue of the matrix $J$ by (2.5), and hence it is real. Putting $\lambda=\lambda_0$ in (2.7) and using $P_N(\lambda_0)=0$, we get (2.8). The right-hand side of (2.8) is different from zero because the polynomials $P_n(\lambda)$ have real coefficients and hence are real for real values of $\lambda$, and besides $P_0(\lambda_0)=1$. Therefore $P_N'(\lambda_0)\neq0$; that is, the zero $\lambda_0$ of the polynomial $P_N(\lambda)$ is simple. Hence $P_N(\lambda)$, as a polynomial of degree $N$, has $N$ distinct zeros. Thus, any real Jacobi matrix of the form (1.1), (1.2) has precisely $N$ real and distinct eigenvalues.

Let $R(\lambda)=(J-\lambda I)^{-1}$ be the resolvent of the matrix $J$ (by $I$ we denote the identity matrix of the needed dimension), and let $e$ be the $N$-dimensional column vector with the components $1,0,\ldots,0$. The rational function $w(\lambda)=\bigl(R(\lambda)e,e\bigr)$ we call the resolvent function of the matrix $J$, where $(\cdot\,,\cdot)$ stands for the standard inner product in $\mathbb{C}^{N}$. This function is also known as the Weyl-Titchmarsh function of $J$.

In [10, Section 5] it is shown that the entries of the matrix $R(\lambda)$ (the resolvent of $J$) are of the form (2.10), where (2.11). Therefore, according to (2.9) and using the initial conditions (2.3) and (2.4), we get (2.12).

We will often use the following well-known and useful lemma; we state it here for ease of reference.

Lemma 2.1. Let $P(\lambda)$ and $Q(\lambda)$ be polynomials with complex coefficients and $\deg P<\deg Q=n$. Next, suppose that $Q(\lambda)=c(\lambda-z_1)\cdots(\lambda-z_n)$, where $z_1,\ldots,z_n$ are distinct complex numbers and $c$ is a nonzero complex number. Then there exist uniquely determined complex numbers $M_1,\ldots,M_n$ such that
$$\frac{P(\lambda)}{Q(\lambda)}=\sum_{j=1}^{n}\frac{M_j}{\lambda-z_j} \tag{2.13}$$
for all values of $\lambda$ different from $z_1,\ldots,z_n$. The numbers $M_j$ are given by the equation
$$M_j=\frac{P(z_j)}{Q'(z_j)},\qquad j\in\{1,\ldots,n\}. \tag{2.14}$$

Proof. For each $j\in\{1,\ldots,n\}$, define the polynomial $T_j(\lambda)=Q(\lambda)/(\lambda-z_j)$ of degree $n-1$ and set $S(\lambda)=P(\lambda)-\sum_{j=1}^{n}M_jT_j(\lambda)$, where $M_j$ is defined by (2.14). Obviously $S(\lambda)$ is a polynomial of degree at most $n-1$, and $T_j(z_j)=Q'(z_j)\neq0$ (recall that $c\neq0$ and the $z_j$ are distinct). Since $T_j(z_k)=0$ for $k\neq j$, we have $S(z_k)=P(z_k)-M_kQ'(z_k)=0$ for all $k$. Thus, the polynomial $S(\lambda)$ of degree at most $n-1$ has $n$ distinct zeros $z_1,\ldots,z_n$. Then $S(\lambda)\equiv0$, and dividing by $Q(\lambda)$ we get (2.13). Note that the decomposition (2.13) is unique, since for the $M_j$ in this decomposition (2.14) necessarily holds.
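Numerically, the decomposition of Lemma 2.1 is just a residue computation; a small self-contained sketch (the particular polynomials are arbitrary illustrations):

```python
import numpy as np

# Q(z) = (z - 1)(z - 2)(z - 4), P(z) = z^2 + 3, so deg P < deg Q.
zeros = np.array([1.0, 2.0, 4.0])
P = lambda z: z ** 2 + 3.0
Q = lambda z: np.prod(z - zeros)

def Qprime(zj):
    """Q'(z_j) = product over the remaining factors, as used in (2.14)."""
    return np.prod([zj - zk for zk in zeros if zk != zj])

M = np.array([P(zj) / Qprime(zj) for zj in zeros])   # the residues M_j

# Check the partial-fraction identity (2.13) at a point away from the zeros.
z = 3.5
lhs = P(z) / Q(z)
rhs = np.sum(M / (z - zeros))
```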

Denote by $\lambda_1,\ldots,\lambda_N$ all the zeros of the polynomial $P_N(\lambda)$ (which coincide by (2.5) with the eigenvalues of the matrix $J$ and which are real and distinct), so that (2.20) holds, where $c$ is a nonzero constant. Therefore, applying Lemma 2.1 to (2.12), we get for the resolvent function $w(\lambda)$ the decomposition (2.21), where the coefficients $\beta_k$ are given by (2.22). Further, putting $\lambda=\lambda_k$ in (2.6) and (2.7) and taking into account that $P_N(\lambda_k)=0$, we get (2.23) and (2.24), respectively. It follows from (2.23) that the quantities appearing there are nonzero, and therefore $\beta_k\neq0$. Comparing (2.22), (2.23), and (2.24), we find (2.25), whence we obtain, in particular, that $\beta_k>0$.

Since $y(\lambda_k)=\bigl(P_0(\lambda_k),\ldots,P_{N-1}(\lambda_k)\bigr)^{\mathrm T}$ is an eigenvector of the matrix $J$ corresponding to the eigenvalue $\lambda_k$, it is natural, according to formula (2.25), to call $\beta_k$ the normalizing number of the matrix $J$ corresponding to the eigenvalue $\lambda_k$.

The collection of the eigenvalues and normalizing numbers
$$\{\lambda_k,\ \beta_k\ (k=1,\ldots,N)\} \tag{2.26}$$
of the matrix $J$ of the form (1.1), (1.2) is called the spectral data of this matrix.

Determination of the spectral data of a given Jacobi matrix is called the direct spectral problem for this matrix.

Thus, the spectral data consist of the eigenvalues and the associated normalizing numbers obtained by decomposing the resolvent function (Weyl-Titchmarsh function) into partial fractions with respect to the eigenvalues. The resolvent function of the matrix $J$ can be computed by using (2.12). Another convenient formula for computing the resolvent function is (see [10, Section 5])
$$w(\lambda)=\frac{\det(J^{(1)}-\lambda I)}{\det(J-\lambda I)}, \tag{2.27}$$
where $J^{(1)}$ is the matrix obtained from $J$ by deleting the first row and first column of $J$.

It follows from (2.27) that $-\lambda w(\lambda)$ tends to $1$ as $\lambda\to\infty$. Therefore, multiplying (2.21) by $-\lambda$ and then passing to the limit as $\lambda\to\infty$, we find
$$\sum_{k=1}^{N}\beta_k=1. \tag{2.28}$$
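With the convention (2.27), under which $w(\lambda)$ is the $(1,1)$ entry of the resolvent, the normalizing number $\beta_k$ is the squared first component of a unit eigenvector for $\lambda_k$; both the partial-fraction expansion (2.21) and the identity (2.28) can then be verified numerically. A sketch (matrix entries are arbitrary illustrations):

```python
import numpy as np

b = [1.0, -2.0, 0.5, 3.0]
a = [1.0, 0.4, -1.5]
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
N = len(b)

lam, V = np.linalg.eigh(J)        # columns of V are orthonormal eigenvectors
beta = V[0, :] ** 2               # squared first components: normalizing numbers

# beta_k is the coefficient of 1/(lambda_k - z) in the expansion (2.21) of
# w(z) = ((J - z I)^{-1} e, e); the coefficients sum to 1, which is (2.28).
z = 10.0                          # a point away from the spectrum
w = np.linalg.inv(J - z * np.eye(N))[0, 0]
w_partial = np.sum(beta / (lam - z))
```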

The inverse spectral problem is stated as follows:
(i) to see whether it is possible to reconstruct the matrix $J$ from its spectral data (2.26), and, if it is possible, to describe the reconstruction procedure;
(ii) to find necessary and sufficient conditions for a given collection (2.26) to be the spectral data of some matrix $J$ of the form (1.1) with entries belonging to the class (1.2).

The solution of this problem is well known (see [1, Section 4.6] and [10]); let us state here the final result.

Given a collection (2.26), where the $\lambda_k$ are real and distinct and the $\beta_k$ are positive, define the numbers
$$s_l=\sum_{k=1}^{N}\beta_k\lambda_k^{\,l},\qquad l=0,1,2,\ldots, \tag{2.29}$$
and using these numbers introduce the determinants
$$D_n=\begin{vmatrix} s_0 & s_1 & \cdots & s_n\\ s_1 & s_2 & \cdots & s_{n+1}\\ \vdots & \vdots & \ddots & \vdots\\ s_n & s_{n+1} & \cdots & s_{2n}\end{vmatrix},\qquad n=0,1,2,\ldots. \tag{2.30}$$

Lemma 2.2. For the determinants $D_n$ defined by (2.30) and (2.29), we have $D_n>0$ for $n\in\{0,1,\ldots,N-1\}$ and $D_n=0$ for $n\geq N$.

Proof. Denote by $G_n$ the matrix corresponding to the determinant $D_n$ given by (2.30). Then for an arbitrary real column vector $g=(g_0,g_1,\ldots,g_n)^{\mathrm T}$, we have (2.31), where (2.32). Further, it follows that if $\langle G_ng,g\rangle=0$, then (2.33) holds. If $n\leq N-1$, then the polynomial $g_0+g_1\lambda+\cdots+g_n\lambda^{n}$ has degree at most $N-1$, and (2.33) is possible only if this polynomial vanishes identically (recall that $\lambda_1,\ldots,\lambda_N$ are distinct). But then $g=0$. Therefore, $\langle G_ng,g\rangle>0$ for all nonzero real vectors $g$ if $n\leq N-1$. Then, as is well known from linear algebra, $D_n=\det G_n>0$. Thus we have proved that $D_n>0$ for $n\leq N-1$.
To prove that $D_n=0$ for $n\geq N$, let us define a linear functional $\sigma$ on the linear space of all polynomials in $\lambda$ with complex coefficients as follows: if $T(\lambda)$ is a polynomial, then the value of the functional $\sigma$ on the element (polynomial) $T$ is
$$\sigma(T)=\sum_{k=1}^{N}\beta_kT(\lambda_k). \tag{2.35}$$
Let $n\geq N$ be a fixed integer and set (2.36). Then, according to (2.35), (2.37) holds. Consider (2.37) and substitute (2.36) in it. Taking into account (2.38), we get (2.39). Therefore, the corresponding coefficient vector is a nontrivial solution of the homogeneous system of linear algebraic equations (2.40). Consequently, the determinant of this system, which coincides with $D_n$, must be equal to zero.

Theorem 2.3. Let an arbitrary collection (2.26) of numbers be given. In order for this collection to be the spectral data of a Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that the following two conditions be satisfied:
(i) the numbers $\lambda_1,\ldots,\lambda_N$ are real and distinct;
(ii) the numbers $\beta_1,\ldots,\beta_N$ are positive and such that $\sum_{k=1}^{N}\beta_k=1$.
Under the conditions (i) and (ii), we have $D_n>0$ for $n\in\{0,1,\ldots,N-1\}$, and the entries $a_n$ and $b_n$ of the matrix $J$ for which the collection (2.26) is the spectral data are recovered by the formulae
$$a_n^{2}=\frac{D_{n-1}D_{n+1}}{D_n^{2}},\qquad n\in\{0,1,\ldots,N-2\},\quad D_{-1}:=1, \tag{2.41}$$
$$b_n=\frac{\Delta_n}{D_n}-\frac{\Delta_{n-1}}{D_{n-1}},\qquad n\in\{0,1,\ldots,N-1\},\quad \Delta_{-1}:=0, \tag{2.42}$$
where $D_n$ is defined by (2.30) and (2.29), and $\Delta_n$ is the determinant obtained from the determinant $D_n$ by replacing in $D_n$ the last column by the column with the components $s_{n+1},s_{n+2},\ldots,s_{2n+1}$.

It follows from the above solution of the inverse problem that the matrix (1.1) is not uniquely restored from the spectral data. This is connected with the fact that the $a_n$ are determined from (2.41) only up to sign. To ensure that the inverse problem is uniquely solvable, we have to specify additionally a sequence of signs $+$ and $-$. Namely, let $\{\sigma_0,\sigma_1,\ldots,\sigma_{N-2}\}$ be a given finite sequence, where for each $n$ the $\sigma_n$ is $+$ or $-$. There are $2^{N-1}$ such distinct sequences. Now, to determine $a_n$ uniquely from (2.41) for $n\in\{0,1,\ldots,N-2\}$, we choose the sign $\sigma_n$ when extracting the square root. In this way, we get precisely $2^{N-1}$ distinct Jacobi matrices possessing the same spectral data. The inverse problem is solved uniquely from the data consisting of the spectral data and a sequence of signs $+$ and $-$. Thus, we can say that the inverse problem with respect to the spectral data is solved uniquely up to the signs of the off-diagonal elements of the recovered Jacobi matrix. In particular, the inverse problem is solvable uniquely in the class of entries $a_n>0$, $b_n\in\mathbb{R}$.
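As an implementation note: instead of evaluating the determinants built from (2.29) and (2.30), the reconstruction in the class $a_n>0$ can be carried out by the Lanczos (Gram-Schmidt) process applied to $\mathrm{diag}(\lambda_1,\ldots,\lambda_N)$ with starting vector $(\sqrt{\beta_1},\ldots,\sqrt{\beta_N})^{\mathrm T}$. This is a mathematically equivalent and numerically more stable route, sketched here under that assumption (it is not the determinant procedure of the text):

```python
import numpy as np

def reconstruct(lam, beta):
    """Recover the Jacobi matrix with a_n > 0 whose spectral data are
    (lam_k, beta_k), via the Lanczos process."""
    N = len(lam)
    A = np.diag(lam)
    q = np.sqrt(beta)                 # a unit vector, since sum(beta) = 1
    Q = [q]
    b, a = [], []
    for n in range(N):
        v = A @ Q[n]
        b.append(Q[n] @ v)            # diagonal entry b_n
        v = v - b[n] * Q[n] - (a[n - 1] * Q[n - 1] if n > 0 else 0.0)
        if n < N - 1:
            a.append(np.linalg.norm(v))   # off-diagonal entry a_n > 0
            Q.append(v / a[n])
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# Round trip: spectral data of a known J with positive off-diagonal entries.
J0 = (np.diag([1.0, -2.0, 0.5, 3.0])
      + np.diag([1.0, 0.4, 1.5], 1) + np.diag([1.0, 0.4, 1.5], -1))
lam, V = np.linalg.eigh(J0)
beta = V[0, :] ** 2
J = reconstruct(lam, beta)
```

The round trip recovers `J0` exactly (up to floating-point error), illustrating the unique solvability in the class $a_n>0$.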

3. Inverse Problem for Two Spectra

Let $J$ be an $N\times N$ Jacobi matrix of the form (1.1) with entries satisfying (1.2), and let $J_1$ be the truncated Jacobi matrix given by (1.3). We denote the eigenvalues of the matrices $J$ and $J_1$ by $\lambda_1,\ldots,\lambda_N$ and $\mu_1,\ldots,\mu_{N-1}$, respectively. We call the collections $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ the two spectra of the matrix $J$.

The inverse problem for two spectra consists in the reconstruction of the matrix $J$ from two of its spectra.

We will reduce the inverse problem for two spectra to the inverse problem for eigenvalues and normalizing numbers solved above in Section 2.

First, let us study some necessary properties of the two spectra of the Jacobi matrix $J$.

Let $P_n(\lambda)$ and $Q_n(\lambda)$ be the polynomials of the first and second kind for the matrix $J$. By (2.5) we have (3.1) and (3.2); note that here we have used the analogue of (2.5) for the truncated matrix $J_1$. Therefore, the eigenvalues $\lambda_1,\ldots,\lambda_N$ and $\mu_1,\ldots,\mu_{N-1}$ of the matrices $J$ and $J_1$ coincide with the zeros of the polynomials $P_N(\lambda)$ and $P_{N-1}(\lambda)$, respectively.

Dividing both sides of (2.6) by $P_{N-1}(\lambda)P_N(\lambda)$ gives (3.3). Therefore, by formula (2.12) for the resolvent function $w(\lambda)$, we obtain (3.4).

Lemma 3.1. The matrices $J$ and $J_1$ have no common eigenvalues; that is, $\lambda_k\neq\mu_j$ for all values of $k$ and $j$.

Proof. Suppose that $\lambda_0$ is an eigenvalue of both matrices $J$ and $J_1$. Then by (3.1) and (3.2) we have $P_N(\lambda_0)=P_{N-1}(\lambda_0)=0$. But this is impossible by (2.6).

Lemma 3.2. The equality (trace formula)
$$\sum_{k=1}^{N}\lambda_k-\sum_{k=1}^{N-1}\mu_k=b_{N-1} \tag{3.5}$$
holds.

Proof. For any matrix $A$ the spectral trace of $A$ coincides with the matrix trace of $A$: if $\nu_1,\ldots,\nu_m$ are the eigenvalues of the $m\times m$ matrix $A$, then (3.6) holds. Therefore, we can write (3.7) for the matrices $J$ and $J_1$. Subtracting the resulting two equalities side by side, we arrive at (3.5).
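Since the spectral trace equals the matrix trace, the difference of the two spectral sums must equal the deleted diagonal entry $b_{N-1}$; a quick numerical check (entries are illustrative):

```python
import numpy as np

b = [1.0, -2.0, 0.5, 3.0]
a = [1.0, 0.4, -1.5]
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
J1 = J[:-1, :-1]

# sum(lambda_k) - sum(mu_k) = tr J - tr J_1 = b_{N-1}, the trace formula
lhs = np.sum(np.linalg.eigvalsh(J)) - np.sum(np.linalg.eigvalsh(J1))
```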

Lemma 3.3. The eigenvalues of $J$ and $J_1$ interlace:
$$\lambda_1<\mu_1<\lambda_2<\mu_2<\cdots<\mu_{N-1}<\lambda_N. \tag{3.8}$$

Proof. Let us set (3.9), so that $u(\lambda)$ is a rational function whose poles coincide with the eigenvalues of $J$ and whose zeros coincide with the eigenvalues of $J_1$. Applying Lemma 2.1 to the rational function $u(\lambda)$, we can write (3.10), where the coefficients are given by (3.11). Next, (2.24) shows that the two quantities appearing in (3.11) have the same sign. Then (3.11) implies that the coefficients in (3.10) are positive.
Differentiating (3.10), we get (3.12). It follows from (3.12) that $u'(\lambda)<0$ for real values of $\lambda$ different from $\lambda_1,\ldots,\lambda_N$. Therefore, $u(\lambda)$ is a strictly decreasing continuous function on each of the intervals $(-\infty,\lambda_1)$, $(\lambda_k,\lambda_{k+1})$ for $k\in\{1,\ldots,N-1\}$, and $(\lambda_N,\infty)$. Besides, it follows from (3.10) that (3.13) holds. Consequently, the function $u(\lambda)$ has no zero in the intervals $(-\infty,\lambda_1)$ and $(\lambda_N,\infty)$, and exactly one zero in each of the intervals $(\lambda_k,\lambda_{k+1})$. Since the zeros of the function $u(\lambda)$ coincide with the eigenvalues of $J_1$ by (3.9), the proof is complete.
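The interlacing property of Lemma 3.3 is easy to observe numerically; a quick sketch with arbitrary admissible entries (the values are illustrative only):

```python
import numpy as np

# Arbitrary entries satisfying (1.2): every off-diagonal entry is nonzero.
b = [0.0, 1.0, -1.0, 2.0, 0.5, -2.0]
a = [1.0, 0.8, -0.6, 1.2, 0.9]

J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam = np.linalg.eigvalsh(J)          # eigenvalues of J, ascending
mu = np.linalg.eigvalsh(J[:-1, :-1]) # eigenvalues of the truncation, ascending

# Strict interlacing: lambda_k < mu_k < lambda_{k+1} for every k.
interlace = np.all((lam[:-1] < mu) & (mu < lam[1:]))
```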

The following lemma gives a formula for calculating the normalizing numbers in terms of the two spectra.

Lemma 3.4. For each $k\in\{1,\ldots,N\}$, the formula (3.14) holds, where the quantity entering it is defined by (3.15).

Proof. Substituting (2.21) into the left-hand side of (3.4), we can write (3.16). Multiply both sides of the last equality by $\lambda_k-\lambda$ and then pass to the limit as $\lambda\to\lambda_k$. Taking into account (2.23) and (2.24), we get (3.17). Next, by (3.1) and (3.2) we have (3.18). Substituting these expressions in the right-hand side of (3.17), we obtain (3.19), where (3.20). Summing (3.19) over $k$ and taking into account (2.28), we get (3.15). The lemma is proved.

Theorem 3.5 (uniqueness result). The two spectra $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ of the Jacobi matrix $J$ of the form (1.1) in the class
$$a_n>0\quad(n\in\{0,\ldots,N-2\}),\qquad b_n\in\mathbb{R}\quad(n\in\{0,\ldots,N-1\}) \tag{3.21}$$
uniquely determine the matrix $J$.

Proof. Given the two spectra $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ of the matrix $J$, we determine uniquely the normalizing numbers $\beta_k$ of the matrix $J$ by (3.14) and (3.15). Since the collection $\{\lambda_k,\beta_k\ (k=1,\ldots,N)\}$ of the eigenvalues and normalizing numbers of the matrix $J$ determines $J$ uniquely in the class (3.21), the proof is complete.

The following theorem solves the inverse problem in terms of the two spectra. Its proof, given below, contains an effective procedure for the construction of the Jacobi matrix from its two spectra.

Theorem 3.6. In order for two given collections of real numbers $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ to be the spectra of two matrices $J$ and $J_1$, respectively, of the forms (1.1) and (1.3) with the entries in the class (1.2), it is necessary and sufficient that the following inequalities be satisfied:
$$\lambda_1<\mu_1<\lambda_2<\mu_2<\cdots<\mu_{N-1}<\lambda_N. \tag{3.22}$$

Proof. The necessity of the condition (3.22) has been proved above in Lemma 3.3. To prove the sufficiency, suppose that two collections of real numbers $\{\lambda_k\}_{k=1}^{N}$ and $\{\mu_k\}_{k=1}^{N-1}$ are given which satisfy the inequalities (3.22). We construct the numbers $\beta_k$ from these data by (3.14) and (3.15). It follows from (3.22) that (3.23) holds. Therefore, the expression on the right-hand side of (3.14) is positive, and hence $\beta_k>0$. Next, it follows directly from (3.14) and (3.15) that $\sum_{k=1}^{N}\beta_k=1$.
Consequently, the collection $\{\lambda_k,\beta_k\ (k=1,\ldots,N)\}$ satisfies the conditions of Theorem 2.3, and hence there exists a Jacobi matrix $J$ of the form (1.1) with entries from the class (1.2) such that the $\lambda_k$ are the eigenvalues and the $\beta_k$ are the corresponding normalizing numbers of $J$. Having the matrix $J$, we construct the matrix $J_1$ by (1.3). It remains to show that $\{\mu_k\}_{k=1}^{N-1}$ is the spectrum of the constructed matrix $J_1$. Denote the eigenvalues of $J_1$ by $\tilde\mu_1,\ldots,\tilde\mu_{N-1}$. By Lemma 3.3, the interlacing relations (3.24) hold. We have to show that $\tilde\mu_k=\mu_k$ for $k\in\{1,\ldots,N-1\}$.
By the direct spectral problem, we have (Lemma 3.4) the representation (3.25), where (3.26). On the other hand, by our construction of the $\beta_k$, we have (3.14) and (3.15). Equating the right-hand sides of (3.25) and (3.14), we obtain (3.27). This means that a polynomial of degree $N-1$ has $N$ distinct zeros $\lambda_1,\ldots,\lambda_N$; hence this polynomial vanishes identically. Therefore $\tilde\mu_k=\mu_k$ for $k\in\{1,\ldots,N-1\}$, and the proof is complete.
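A numerical rendering of the sufficiency argument: the text computes the $\beta_k$ by (3.14) and (3.15), which are not reproduced here. The sketch below instead uses the quantities $\gamma_k=\prod_j(\lambda_k-\mu_j)\big/\prod_{j\neq k}(\lambda_k-\lambda_j)$, the weights in the partial-fraction expansion of $\prod_j(\lambda-\mu_j)\big/\prod_j(\lambda-\lambda_j)$, which under interlacing are positive and sum to $1$, together with a Lanczos-based reconstruction and a reversal of the row/column ordering; these are implementation choices for illustration, not the text's determinant procedure:

```python
import numpy as np

def gamma_from_two_spectra(lam, mu):
    """gamma_k = prod_j (lam_k - mu_j) / prod_{j != k} (lam_k - lam_j)."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    return np.array([np.prod(lk - mu) / np.prod(lk - np.delete(lam, k))
                     for k, lk in enumerate(lam)])

def lanczos_jacobi(lam, w):
    """Jacobi matrix with positive off-diagonals whose weights at the nodes
    lam (in the sense of the expansion (2.21)) are w."""
    N, A, q = len(lam), np.diag(lam), np.sqrt(w)
    Q, a, b = [q], [], []
    for n in range(N):
        v = A @ Q[n]
        b.append(Q[n] @ v)
        v = v - b[n] * Q[n] - (a[n - 1] * Q[n - 1] if n > 0 else 0.0)
        if n < N - 1:
            a.append(np.linalg.norm(v))
            Q.append(v / a[n])
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# Two interlacing sequences, as required by (3.22).
lam = [0.0, 2.0, 5.0]
mu = [1.0, 3.0]

gamma = gamma_from_two_spectra(lam, mu)
Jrev = lanczos_jacobi(lam, gamma)   # gamma are weights at the LAST coordinate
J = Jrev[::-1, ::-1]                # of J, so reverse the row/column ordering
```

The resulting `J` has spectrum `lam`, and deleting its last row and column yields a matrix with spectrum `mu`, exactly as Theorem 3.6 asserts.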