Abstract

We present a real symmetric tridiagonal matrix of order $n$ whose eigenvalues are $\{2k\}_{k=0}^{n-1}$ and which also satisfies the additional condition that its leading principal submatrix has the uniformly interlaced spectrum $\{2k+1\}_{k=0}^{n-2}$. The matrix entries are explicit functions of the size $n$, and so the matrix can be used as a test matrix for eigenproblems, both forward and inverse. An explicit solution of a spring-mass inverse problem incorporating the test matrix is provided.

1. Introduction

We are motivated by the following inverse eigenvalue problem, first studied by Hochstadt in 1967 [1]. Given two strictly interlaced sequences of real values, $\{\lambda_k\}_{k=1}^{n}$ and $\{\hat{\lambda}_k\}_{k=1}^{n-1}$, with

$\lambda_1 < \hat{\lambda}_1 < \lambda_2 < \hat{\lambda}_2 < \cdots < \hat{\lambda}_{n-1} < \lambda_n, \qquad (2)$

find the $n \times n$ real, symmetric, tridiagonal matrix $J$ such that $\{\lambda_k\}_{k=1}^{n}$ are the eigenvalues of $J$, while $\{\hat{\lambda}_k\}_{k=1}^{n-1}$ are the eigenvalues of $\hat{J}$, the leading principal submatrix of $J$, where $\hat{J}$ is obtained from $J$ by deleting the last row and column. The condition (2) on the dataset is both necessary and sufficient for the existence of a unique Jacobian matrix solution to the problem (see [2], Section 4.3, or [3], Section 1.2 for a history of the problem, and Section 3 of this paper for additional background theory).

A number of different constructive procedures to produce the exact solution of this inverse problem have been developed [4-9], but none provides an explicit characterization of the entries of the solution matrix $J$ in terms of the dataset (2). Computer implementation of these procedures introduces floating-point error and associated numerical stability issues. Loss of significant figures due to accumulation of round-off error makes some of the known solution procedures undesirable. Determining the extent of round-off error in the numerical solution computed from a given dataset requires a priori knowledge of the exact solution $J$. In the absence of this knowledge, an additional numerical computation of the forward problem, finding the spectra of the computed matrix and of its leading principal submatrix, allows comparison to the original data.
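
As a concrete illustration of this forward check, the following Python/NumPy sketch reconstructs a Jacobi matrix from the two spectra by first computing the last components of the orthonormalized eigenvectors (see Section 3) and then running a Lanczos recurrence, and finally recomputes both spectra from the result. This is our sketch of one standard approach, not necessarily any of the procedures in [4-9]; the function name jacobi_from_spectra is ours.

    import numpy as np

    def jacobi_from_spectra(lam, lam_hat):
        # Build a Jacobi matrix whose spectrum is lam and whose leading
        # principal submatrix has spectrum lam_hat (illustrative sketch).
        lam = np.asarray(lam, float)
        lam_hat = np.asarray(lam_hat, float)
        n = lam.size
        # Squared last components of the orthonormalized eigenvectors.
        w = np.sqrt([np.prod(lam[j] - lam_hat) /
                     np.prod(np.delete(lam[j] - lam, j)) for j in range(n)])
        # Lanczos recurrence on diag(lam) with starting vector w,
        # with full reorthogonalization for numerical safety.
        V = np.zeros((n, n))
        alpha = np.zeros(n)
        beta = np.zeros(n - 1)
        V[:, 0] = w / np.linalg.norm(w)
        for j in range(n):
            u = lam * V[:, j]                        # diag(lam) @ v_j
            alpha[j] = V[:, j] @ u
            u -= V[:, :j + 1] @ (V[:, :j + 1].T @ u)
            if j < n - 1:
                beta[j] = np.linalg.norm(u)
                V[:, j + 1] = u / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return T[::-1, ::-1]    # flip so the *leading* submatrix carries lam_hat

    lam, lam_hat = [0.0, 2.0, 4.0, 6.0], [1.0, 3.0, 5.0]
    J = jacobi_from_spectra(lam, lam_hat)
    print(np.round(np.linalg.eigvalsh(J), 8))             # forward check: lam
    print(np.round(np.linalg.eigvalsh(J[:-1, :-1]), 8))   # forward check: lam_hat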

Test matrices, with known entries and known spectra, are therefore helpful in comparing the efficacy of the various solution algorithms with regard to stability. It is particularly helpful when test matrices can be produced at arbitrary size. However, some existing test matrices given as a function of matrix size suffer from the following trait: when ordered by size, the minimum spacing between consecutive eigenvalues is a decreasing function of the order $n$. This trait is potentially undesirable since the reciprocal of this minimum separation between eigenvalues can be thought of as a condition number on the sensitivity of the eigenvectors (invariant subspaces) to perturbation (see [10]). Some of the algorithms for the inverse problem seem to suffer from this form of ill-conditioning. Motivated by the wish to avoid confounding the numerical stability issue with potential increased ill-conditioning of the dataset as a function of $n$, the authors developed a test matrix which has equally spaced and uniformly interlaced simple eigenvalues.

In Section 2 we provide the explicit entries of such a matrix, $A$. We claim that its eigenvalues are equally spaced,

$\lambda_k = 2(k-1), \quad k = 1, \ldots, n, \qquad (3)$

while its leading principal submatrix $\hat{A}$ has eigenvalues uniformly interlaced with those of $A$, namely,

$\hat{\lambda}_k = 2k - 1, \quad k = 1, \ldots, n-1. \qquad (4)$

A short proof verifies the claims. In Section 3 we present some background theory concerning Jacobian matrices, and in Section 4 we apply our test matrix to a model of a physical spring-mass system, an application which leads naturally to Jacobian matrices.

2. Main Result

Let $A$ be the $n \times n$ real symmetric tridiagonal matrix with entries

$a_{ii} = n - 1, \quad i = 1, \ldots, n, \qquad a_{i,i+1} = a_{i+1,i} = -b_i, \quad i = 1, \ldots, n-1,$

where

$b_i = \tfrac{1}{2}\sqrt{i(2n-1-i)}, \quad i = 1, \ldots, n-2, \qquad b_{n-1} = \sqrt{\tfrac{n(n-1)}{2}},$

and let $\hat{A}$ be the leading principal submatrix of $A$, that is, the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting the last row and column.
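
For readers who wish to use $A$ as a numerical test matrix, the following short Python/NumPy sketch (ours, not part of the original text) assembles $A$ and $\hat{A}$ from the entries above and prints their spectra; the function name build_test_matrix is our own.

    import numpy as np

    def build_test_matrix(n):
        # Test matrix A: constant diagonal n-1, off-diagonal entries -b_i.
        b = np.array([0.5 * np.sqrt(i * (2 * n - 1 - i)) for i in range(1, n - 1)]
                     + [np.sqrt(n * (n - 1) / 2.0)])
        return np.diag(np.full(n, n - 1.0)) - np.diag(b, 1) - np.diag(b, -1)

    n = 8
    A = build_test_matrix(n)
    A_hat = A[:-1, :-1]                               # leading principal submatrix
    print(np.round(np.linalg.eigvalsh(A), 10))        # expect 0, 2, ..., 2(n-1)
    print(np.round(np.linalg.eigvalsh(A_hat), 10))    # expect 1, 3, ..., 2n-3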

Theorem 1. $A$ has eigenvalues $\lambda_k = 2(k-1)$, $k = 1, \ldots, n$, and $\hat{A}$ has eigenvalues $\hat{\lambda}_k = 2k-1$, $k = 1, \ldots, n-1$.

Proof. The proof is by induction on $n$. When $n = 2$, $A_2$ has eigenvalues $0, 2$, and $\hat{A}_2$ has eigenvalue $1$. Assume the result holds for order $n-1$, so that $A_{n-1}$ has eigenvalues $0, 2, \ldots, 2(n-2)$. An explicit similarity transformation by an upper triangular matrix then shows that the spectrum of $A_n$ consists of this spectrum together with the additional eigenvalue $2(n-1)$; therefore $A_n$ has eigenvalues $0, 2, \ldots, 2(n-1)$.
Now we show that $\hat{A}_n$ has eigenvalues $1, 3, \ldots, 2n-3$. Let $B = \hat{A}_n - I$. Factorize $B = LL^{T}$, where $L$ is lower bidiagonal. The last diagonal entry of $L$ vanishes, so $B$ has eigenvalue $0$ and thus $\hat{A}_n$ has eigenvalue $1$.
Define $C = L^{T}L$; then $C$ has the same eigenvalues as $B$. An explicit upper triangular similarity identifies $C$ with a matrix whose eigenvalues, by the inductive hypothesis, are $0, 2, \ldots, 2(n-2)$. Therefore $B$ has eigenvalues $0, 2, \ldots, 2(n-2)$, and $\hat{A}_n = B + I$ has eigenvalues $1, 3, \ldots, 2n-3$.
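
As a concrete illustration of Theorem 1 (our worked example, using the entries listed above), the case $n = 3$ reads

$A_3 = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -\sqrt{3} \\ 0 & -\sqrt{3} & 2 \end{pmatrix}, \qquad \hat{A}_3 = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},$

with $\det(A_3 - \lambda I) = -\lambda(\lambda - 2)(\lambda - 4)$ and $\det(\hat{A}_3 - \mu I) = (\mu - 1)(\mu - 3)$, so the spectra are $\{0, 2, 4\}$ and $\{1, 3\}$, as claimed.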

3. Discussion

A real, symmetric tridiagonal matrix is called a Jacobian matrix when its off-diagonal elements are nonzero ([2], page 46). We write

$J = \begin{pmatrix} a_1 & -b_1 & & & \\ -b_1 & a_2 & -b_2 & & \\ & \ddots & \ddots & \ddots & \\ & & -b_{n-2} & a_{n-1} & -b_{n-1} \\ & & & -b_{n-1} & a_n \end{pmatrix}. \qquad (13)$

The similarity transformation $\tilde{J} = DJD$, where $D$ is the alternating sign matrix $D = \mathrm{diag}(1, -1, 1, \ldots, (-1)^{n-1})$ (which is its own inverse), produces a Jacobian matrix $\tilde{J}$ with the same entries as $J$ except for the signs of the off-diagonal elements, which are all reversed. If instead we use the self-inverse sign matrix $D_k = \mathrm{diag}(1, \ldots, 1, -1, \ldots, -1)$, whose sign change occurs after position $k$, to transform $J$, then $D_k J D_k$ is a Jacobian matrix identical to $J$ except for a switched sign on the $k$th off-diagonal element. In regard to the spectrum of the matrix, there is therefore no loss of generality in accepting the convention that a Jacobian matrix is expressed with negative off-diagonal elements; that is, $b_i > 0$ in (13).
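
The effect of these sign transformations is easy to check numerically; the small NumPy sketch below (ours, with an arbitrarily chosen Jacobian matrix) confirms that conjugation by the alternating sign matrix reverses every off-diagonal sign while leaving the spectrum unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    a = rng.standard_normal(n)                 # arbitrary diagonal
    b = rng.random(n - 1) + 0.1                # positive off-diagonal magnitudes
    J = np.diag(a) - np.diag(b, 1) - np.diag(b, -1)

    D = np.diag([(-1.0) ** i for i in range(n)])     # alternating sign matrix, D = D^{-1}
    J_tilde = D @ J @ D
    print(np.allclose(np.linalg.eigvalsh(J), np.linalg.eigvalsh(J_tilde)))  # True: same spectrum
    print(np.allclose(np.diag(J_tilde, 1), -np.diag(J, 1)))                 # True: signs reversed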

While Cauchy’s interlace theorem [11] guarantees that the eigenvalues of any square, real, symmetric (or even Hermitian) matrix will interlace those of its leading (or trailing) principal submatrix, the interlacing need not be strict in general [12]. However, specializing to the case of Jacobian matrices makes the interlacing strict. That is, Jacobian matrices possess distinct eigenvalues, and the eigenvalues of the leading (or trailing) principal submatrix are also distinct and strictly interlace those of the original matrix (see [2]; see also [10], exercise P8.4.1, page 475: when a tridiagonal matrix has algebraically multiple eigenvalues, the matrix fails to be Jacobian). The inverse problem is also well posed: there is a unique (up to the signs of the off-diagonal elements) Jacobian matrix having given spectra specified as per (2) (see [2], noting that the interlaced spectra can be used to calculate the last components of each of the orthonormalized eigenvectors of $J$). Therefore, the matrix $A$ in Theorem 1 is the unique Jacobian matrix with eigenvalues equally spaced by two, starting with smallest eigenvalue zero, whose leading principal submatrix has eigenvalues also equally spaced by two, starting with smallest eigenvalue one.

As a consequence of the theorem, we now have the following.

Corollary 2. Let $\lambda_1$ be real, let $d > 0$, and define

$B = \lambda_1 I_n + \frac{d}{2}\,A, \qquad (14)$

where $A$ is the matrix of Theorem 1. The eigenvalues of the real, symmetric tridiagonal matrix $B$ form the arithmetic sequence

$\lambda_k = \lambda_1 + (k-1)d, \quad k = 1, \ldots, n, \qquad (15)$

while the eigenvalues of its leading principal submatrix, $\hat{B}$, form the uniformly interlaced sequence

$\hat{\lambda}_k = \lambda_1 + \left(k - \tfrac{1}{2}\right)d, \quad k = 1, \ldots, n-1. \qquad (16)$
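
The corollary amounts to the affine rescaling $B = \lambda_1 I + (d/2)A$; the following NumPy sketch (ours; the particular values of $n$, $\lambda_1$, and $d$ are arbitrary) verifies the two arithmetic spectra numerically.

    import numpy as np

    def build_test_matrix(n):
        # Matrix A of Theorem 1: diagonal n-1, off-diagonal entries -b_i.
        b = np.array([0.5 * np.sqrt(i * (2 * n - 1 - i)) for i in range(1, n - 1)]
                     + [np.sqrt(n * (n - 1) / 2.0)])
        return np.diag(np.full(n, n - 1.0)) - np.diag(b, 1) - np.diag(b, -1)

    n, lam1, d = 7, -3.0, 0.5                  # illustrative size, smallest eigenvalue, gap
    B = lam1 * np.eye(n) + (d / 2.0) * build_test_matrix(n)
    print(np.round(np.linalg.eigvalsh(B), 10))            # lam1, lam1 + d, ..., lam1 + (n-1)d
    print(np.round(np.linalg.eigvalsh(B[:-1, :-1]), 10))  # lam1 + d/2, lam1 + 3d/2, ...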

The form and properties of $A$ were first hypothesized by the third author while programming Fortran algorithms to reconstruct band matrices from spectral data [3]. Initial attempts to prove the spectral properties of $A$ by both him and his graduate supervisor (the first author) failed. Later, the first author produced the short induction argument of Theorem 1, in July 1996. Alas, the fax on which the argument was communicated to the third author was lost in a cross-border academic move, and so the matter languished until recently. In the summer of 2013, the second and third authors assigned the problem of this paper as a summer undergraduate research project: “hypothesize, and then verify, if possible, the explicit entries of an $n \times n$ symmetric, tridiagonal matrix with eigenvalues (15), such that the eigenvalues of its principal submatrix are (16).” Meanwhile the misplaced fax containing the first author’s proof was found during an office cleaning. The student, A. De Serre-Rothney, was able to complete both parts of the problem. His proof is now found in [13]. Though longer than the one presented here, his proof utilizes the spectral properties of another tridiagonal (nonsymmetric) matrix, the so-called Kac-Sylvester matrix, $K_n$, of order $n+1$, whose eigenvalues are known to be [14-17]

$-n, \; -n+2, \; -n+4, \; \ldots, \; n-2, \; n.$

The referee has pointed out the connection between the spectra (3) and (4) and the classical orthogonal Hahn polynomials of a discrete variable [18]. Using the points (3) as nodes, together with suitably chosen weights, one determines Hahn polynomials whose three-term recurrence coefficients are the entries of a Jacobi matrix with eigenvalues (3), hence similar to our $A$.

4. A Spring-Mass Model Problem

One simple problem where symmetric tridiagonal matrices arise naturally is the inverse problem for the spring-mass system shown in Figure 1. In this case the squares of the natural frequencies of free vibration of system (a) are the eigenvalues of a Jacobi matrix $A$, while those of system (b) are the eigenvalues of its leading principal submatrix $\hat{A}$.

Specifically, let $K$ be the stiffness matrix, and let $M$ be the mass (inertia) matrix for the system in Figure 1(a), in which spring $k_i$ connects masses $m_i$ and $m_{i+1}$ and neither end mass is attached to a support:

$K = \begin{pmatrix} k_1 & -k_1 & & & \\ -k_1 & k_1 + k_2 & -k_2 & & \\ & \ddots & \ddots & \ddots & \\ & & -k_{n-2} & k_{n-2} + k_{n-1} & -k_{n-1} \\ & & & -k_{n-1} & k_{n-1} \end{pmatrix}, \qquad M = \mathrm{diag}(m_1, m_2, \ldots, m_n).$

Then the squares of the natural frequencies of the systems in Figure 1 satisfy $(K - \omega^2 M)u = 0$ and $(\hat{K} - \hat{\omega}^2 \hat{M})\hat{u} = 0$, where $\hat{K}$ and $\hat{M}$ are obtained from $K$ and $M$ by deleting the last row and column. The solutions can be ordered $\omega_1^2 \le \omega_2^2 \le \cdots \le \omega_n^2$ and $\hat{\omega}_1^2 \le \hat{\omega}_2^2 \le \cdots \le \hat{\omega}_{n-1}^2$. We can also rewrite the systems as $(A - \omega^2 I)x = 0$ and $(\hat{A} - \hat{\omega}^2 I)\hat{x} = 0$, where $A = M^{-1/2} K M^{-1/2}$, $\hat{A} = \hat{M}^{-1/2} \hat{K} \hat{M}^{-1/2}$, $x = M^{1/2}u$, and $\hat{x} = \hat{M}^{1/2}\hat{u}$. Note that the squares of the natural frequencies of the two systems are the eigenvalues of $A$ and $\hat{A}$, respectively.
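
The reduction from the generalized problem $(K - \omega^2 M)u = 0$ to the standard problem for $A = M^{-1/2} K M^{-1/2}$ can be checked numerically; the sketch below (ours, with arbitrary masses and stiffnesses, and assuming the unrestrained chain layout described above) confirms that the eigenvalues of $A$ agree with the generalized eigenvalues of the pencil $(K, M)$.

    import numpy as np

    # Arbitrary illustrative data: spring k_i joins masses m_i and m_{i+1}.
    m = np.array([2.0, 1.0, 3.0, 1.5])
    k = np.array([1.0, 4.0, 2.0])
    n = len(m)

    K = np.zeros((n, n))
    for i, ki in enumerate(k):                 # assemble the unrestrained stiffness matrix
        K[i, i] += ki
        K[i + 1, i + 1] += ki
        K[i, i + 1] -= ki
        K[i + 1, i] -= ki
    M = np.diag(m)

    A = np.diag(m ** -0.5) @ K @ np.diag(m ** -0.5)                 # A = M^{-1/2} K M^{-1/2}
    w_gen = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)  # generalized eigenvalues
    print(np.allclose(np.sort(np.linalg.eigvalsh(A)), w_gen))       # True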

Suppose that the matrix $A$ of Theorem 1 were to arise from a spring-mass system like the one in Figure 1; that is, we are considering the system whose squares of natural frequencies are the equally spaced values $0, 2, 4, \ldots, 2(n-1)$ for system (a) and $1, 3, 5, \ldots, 2n-3$ for system (b). The system in Figure 1 is the simplest possible discrete model for a rod vibrating in longitudinal motion, and it more closely approximates the continuous system as $n$ increases. In a physical system, we expect clustering of frequencies. The test matrix does not share this phenomenon, and so we expect the stiffnesses and masses associated with it to become unrealistic as $n \to \infty$. To demonstrate this, we will explicitly solve for the stiffnesses and masses associated with $A$.

With $A$ as given in Section 2, recall that $A$ has eigenvalues $0, 2, 4, \ldots, 2(n-1)$, while $\hat{A}$ has eigenvalues $1, 3, 5, \ldots, 2n-3$.

Let $K$ have the form displayed above with $k_i > 0$ for all $i$, and let $M = \mathrm{diag}(m_1, \ldots, m_n)$ with $m_i > 0$. We wish to solve $M^{-1/2} K M^{-1/2} = A$ for the masses $m_i$ and the stiffnesses $k_i$.

Written entrywise, this matrix equation gives $n$ diagonal equations and $n-1$ off-diagonal equations. The bottom, $n$th, diagonal equation is $k_{n-1}/m_n = n-1$, where we choose $m_n > 0$ arbitrarily. We will thus be able to express all of the masses and stiffnesses in terms of the scaling parameter $m_n$.

The $(n-1, n)$ off-diagonal equation is $k_{n-1}/\sqrt{m_{n-1} m_n} = b_{n-1} = \sqrt{n(n-1)/2}$, and so $m_{n-1} = \frac{2(n-1)}{n}\,m_n$.

The $i$th diagonal equation, for $i = 2, \ldots, n-1$, is $(k_{i-1} + k_i)/m_i = n-1$, while the $(i, i+1)$ off-diagonal equation is $k_i/\sqrt{m_i m_{i+1}} = b_i$. Working upward from the bottom row, these determine each mass and stiffness in turn: $m_i = k_i^2/(b_i^2 m_{i+1})$ and $k_{i-1} = (n-1)m_i - k_i$. Then, for example, $k_{n-2} = \frac{(n-1)(n-2)}{n}\,m_n$ and $m_{n-2} = \frac{2(n-1)(n-2)}{n(n+1)}\,m_n$.

Now suppose that

$m_{n-j} = \frac{2(n-1)(n-2)\cdots(n-j)}{n(n+1)\cdots(n+j-1)}\,m_n$

holds for $j = 1, \ldots, i-1$. The cases $j = 1, 2$ are already verified, and the strong inductive assumption applied to the recursion above with $j = i$ yields the same expression for $m_{n-i}$. This verifies, by strong induction, the closed form for the masses; equivalently, with $p = n - j$,

$m_p = \frac{2\,[(n-1)!]^2}{(p-1)!\,(2n-1-p)!}\,m_n, \quad p = 1, \ldots, n-1.$
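
The bottom-up solution described above is easily carried out numerically; the Python sketch below (ours; the function name solve_spring_mass and the assumed unrestrained-chain layout are our own) recovers the masses and stiffnesses from the entries of $A$ and compares them with the closed forms.

    import numpy as np
    from math import factorial, sqrt

    def solve_spring_mass(n, m_n=1.0):
        # Recover masses m_1..m_n and stiffnesses k_1..k_{n-1} from
        # M^{-1/2} K M^{-1/2} = A, working upward from the bottom row.
        b = [0.5 * sqrt(i * (2 * n - 1 - i)) for i in range(1, n - 1)] + [sqrt(n * (n - 1) / 2.0)]
        m = np.zeros(n)
        k = np.zeros(n - 1)
        m[n - 1] = m_n
        k[n - 2] = (n - 1) * m_n                        # bottom diagonal equation
        for i in range(n - 2, 0, -1):
            m[i] = k[i] ** 2 / (b[i] ** 2 * m[i + 1])   # off-diagonal equation
            k[i - 1] = (n - 1) * m[i] - k[i]            # diagonal equation
        m[0] = k[0] ** 2 / (b[0] ** 2 * m[1])
        return m, k

    n = 6
    m, k = solve_spring_mass(n)
    m_closed = [2 * factorial(n - 1) ** 2 / (factorial(p - 1) * factorial(2 * n - 1 - p))
                for p in range(1, n)]
    k_closed = [factorial(n - 1) ** 2 / (factorial(p - 1) * factorial(2 * n - 2 - p))
                for p in range(1, n)]
    print(np.allclose(m[:-1], m_closed), np.allclose(k, k_closed))   # True True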

Finally, the first of the diagonal equations is $k_1/m_1 = n-1$, and so

$k_1 = (n-1)\,m_1 = \frac{[(n-1)!]^2}{(2n-3)!}\,m_n.$

We note that the values of the stiffnesses can be written as

$k_p = \frac{[(n-1)!]^2}{(p-1)!\,(2n-2-p)!}\,m_n, \quad p = 1, \ldots, n-2,$

and $k_{n-1} = (n-1)\,m_n$.

Since $k_{p-1} + k_p = (n-1)m_p$ for $p = 2, \ldots, n-1$, these expressions are consistent with the closed form for the masses obtained above.

From the closed form for the masses we have

$\frac{m_n}{m_1} = \frac{(2n-2)!}{2\,[(n-1)!]^2} = \frac{1}{2}\binom{2n-2}{n-1},$

which goes to infinity as $n \to \infty$, and from the expressions for the stiffnesses we see that

$\frac{k_{n-1}}{k_1} = \frac{(n-1)\,(2n-3)!}{[(n-1)!]^2} = \binom{2n-3}{n-1},$

which also goes to infinity as $n \to \infty$. This is not a model of a physical rod, as expected.
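
The growth of these ratios is rapid even for modest $n$, as the short computation below (ours, based on the closed forms just given) illustrates.

    from math import comb

    for n in (5, 10, 20, 40):
        mass_ratio = comb(2 * n - 2, n - 1) / 2    # m_n / m_1
        stiff_ratio = comb(2 * n - 3, n - 1)       # k_{n-1} / k_1
        print(n, f"{mass_ratio:.3e}", f"{stiff_ratio:.3e}")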

5. Conclusion

A family of symmetric tridiagonal matrices, $B$, whose eigenvalues are simple and uniformly spaced, and whose leading principal submatrix has uniformly interlaced, simple eigenvalues, has been presented (14). Members of the family are characterized by a specified smallest eigenvalue and gap size between consecutive eigenvalues. The matrices are termed Jacobian, since the off-diagonal entries are all nonzero. The matrix entries are explicit functions of the size $n$, the smallest eigenvalue $\lambda_1$, and the gap $d$; so the matrices can be used as test matrices for eigenproblems, both forward and inverse. The matrix for a specified smallest eigenvalue and gap is unique up to the signs of the off-diagonal elements.

In Section 4, the form of $A$ was used as an explicit solution of a spring-mass vibration model (Figure 1), and the inverse problem to determine the lumped masses and spring stiffnesses was solved explicitly. Both the lumped masses and the spring stiffnesses obtained there show superexponential growth. Consequently, the ratios $m_1/m_n$ and $k_1/k_{n-1}$ become vanishingly small as $n \to \infty$. As a result, the spring-mass system of Figure 1 cannot be used as a discretized model for a physical rod in longitudinal vibration, as the model becomes unrealistic in the limit as $n \to \infty$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.