Mathematical Problems in Engineering
Volume 2015 (2015), Article ID 350879, 7 pages
Research Article

Numerical Reconstruction of Spring-Mass System from Two Nondisjoint Spectra

1Departamento de Matemática, Universidad de Tarapacá, 1010069 Arica, Chile
2Departamento de Matemáticas, Universidad Católica del Norte, 1270709 Antofagasta, Chile

Received 13 May 2015; Accepted 10 June 2015

Academic Editor: Filippo Ubertini

Copyright © 2015 Hubert Pickmann et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


A new numerical procedure is presented to reconstruct a fixed-free spring-mass system from two auxiliary spectra that are nondisjoint. The method is a modification of the fast orthogonal reduction algorithm and is less computationally expensive than other methods in the literature. Numerical results illustrating the accuracy of the algorithm are presented.

1. Introduction

Inverse problems in structural vibration seek to determine or estimate the physical properties of a vibrating system (mass density, elastic constants, etc.) from known dynamic behavior (natural frequencies, vibration modes, etc.) (see [1–4]).

The model used here, which has generated much interest in the literature as a prototype structure, is a nonuniform thin rod with one end fixed to a surface (see [2–5]). Its discrete model is a spring-mass system consisting of masses m_1, …, m_n, associated with the masses of the elements of the rod, connected by springs with stiffness constants k_1, …, k_n corresponding to the rigidity of each of these elements (Figure 1).

Figure 1: Fixed-free spring-mass system.

A spring-mass system in free longitudinal vibration is governed by a generalized eigenvalue problem of the form (see [1–4])

K u = λ M u,   (1)

where K and M are the stiffness matrix and the mass matrix, respectively, and u is the displacement vector. In this system, the eigenvalues λ_i, i = 1, …, n, of (1) are related to the natural frequencies, and the eigenvectors represent the vibration modes of the system. The spring-mass system is denoted by (K, M).
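As a small illustration of (1), the sketch below assembles the stiffness and mass matrices of a fixed-free chain and solves the generalized eigenvalue problem numerically. The masses and stiffnesses are illustrative values, not data from the paper.

```python
# Sketch: natural frequencies of a small fixed-free spring-mass chain,
# obtained from the generalized eigenvalue problem K u = lambda M u.
# The values of m and k below are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import eigh

def chain_matrices(m, k):
    """Stiffness K and mass M for a fixed-free chain of n masses.

    k[0] is the stiffness of the spring attaching mass 0 to the wall;
    k[i] (i > 0) connects mass i to mass i - 1. The free end gives the
    last diagonal entry of K a single stiffness term."""
    n = len(m)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = k[i] + (k[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return K, np.diag(m)

m = [1.0, 2.0, 1.5]
k = [3.0, 2.0, 1.0]
K, M = chain_matrices(m, k)
lam, U = eigh(K, M)   # eigenvalues lambda_i (ascending), modes in columns of U
print(lam)            # all positive and distinct for this chain
```

Here `scipy.linalg.eigh` handles the symmetric-definite pencil (K, M) directly, so no explicit reduction to standard form is needed for the forward problem.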

It is known (see [4]) that the matrices K and M can be uniquely reconstructed if the following information is given: the eigenvalues of the original system (K, M); the eigenvalues of the auxiliary system corresponding to the original system with its last mass fixed (Figure 2); and one additional quantity, for example, the total mass of the system.

Figure 2: Fixed-fixed spring-mass system.

The structural properties of the matrices K and M allow us to reduce the generalized eigenvalue equation (1) to the standard form (see [1–4])

J x = λ x,   with J = M^{-1/2} K M^{-1/2},

where the Jacobi matrix J is tridiagonal, symmetric, and positive definite, with the same eigenvalues as the system, which are real, positive, and distinct. Therefore, a fundamental step in determining the system is to reconstruct the matrix J. Without loss of generality, we assume that J has positive diagonal entries and negative codiagonal entries. In [6], stable numerical procedures to reconstruct the Jacobi matrix J are discussed. This reconstruction uses as initial spectral information the eigenvalues λ_1 < λ_2 < … < λ_n of J and the eigenvalues μ_1 < μ_2 < … < μ_{n-1} of the matrix obtained by deleting the last row and last column of J. A fundamental property in these procedures is the interlacing property (see [1, 4, 6])

λ_1 < μ_1 < λ_2 < … < μ_{n-1} < λ_n,

which is a necessary and sufficient condition for the existence of a physically real system and for constructing J as well.
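The interlacing condition above is a simple pointwise check. The following sketch tests whether one spectrum strictly interlaces another:

```python
# Sketch: checking the strict interlacing property
# lambda_1 < mu_1 < lambda_2 < ... < mu_{n-1} < lambda_n,
# which is necessary and sufficient for a physically real system.
def interlaces(lam, mu):
    """True if mu (length n-1) strictly interlaces lam (length n, sorted)."""
    if len(mu) != len(lam) - 1:
        raise ValueError("mu must have one fewer entry than lam")
    return all(lam[i] < mu[i] < lam[i + 1] for i in range(len(mu)))

print(interlaces([1.0, 3.0, 6.0], [2.0, 4.0]))   # True
print(interlaces([1.0, 3.0, 6.0], [2.0, 7.0]))   # False: 7.0 > 6.0
```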

In [7, 8] the authors generalize the reconstruction of the system by using the interlaced spectra corresponding to an auxiliary system obtained by fixing any mass of the original system other than the extreme masses (Figure 3).

Figure 3: Spring-mass system with a fixed interior mass.

Clearly, if the auxiliary system is the original system with its p-th mass, 1 < p < n, fixed, then it uncouples into two auxiliary spring-mass systems, with natural frequencies μ_1 < … < μ_{p-1} and ν_1 < … < ν_{n-p}, respectively. The structural properties of the stiffness and mass matrices allow us to partition the Jacobi matrix J so that the leading submatrix J_1, of order p-1, and the trailing submatrix J_2, of order n-p, are related to the two uncoupled systems, respectively. As the system can be reconstructed from the matrix J, it is enough to reconstruct J from the sets {λ_i}, {μ_i}, and {ν_j}. Thus, the reconstruction of the system is reduced to the following problem.

Problem 1. Given the real numbers λ_1 < λ_2 < … < λ_n, μ_1 < … < μ_{p-1}, and ν_1 < … < ν_{n-p}, satisfying the interlacing property (7), reconstruct the matrix J in (6) such that the spectra of J, J_1, and J_2 are {λ_i}, {μ_i}, and {ν_j}, respectively.

In this problem two cases arise. In the first one, all the natural frequencies μ_i and ν_j are distinct; that is, {μ_i} ∩ {ν_j} = ∅. In terms of the matrix, this means that no eigenvector x of J has a node at its p-th coordinate; that is, x_p ≠ 0 for every eigenvector. In this case, the reconstruction is unique. In the second case, one or more of the natural frequencies μ_i and ν_j are identical. This situation means that some eigenvector x of J has a node at p; that is, x_p = 0. In this case, the reconstruction is not unique, and a family of isospectral matrices is obtained.
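Deciding which case applies amounts to finding repeated values across the two auxiliary spectra. A minimal sketch, with the tolerance as a free choice:

```python
# Sketch: detecting nondisjoint auxiliary spectra, i.e. pairs mu_i = nu_j
# shared (up to a tolerance) by the two uncoupled subsystems. A nonempty
# result means some eigenvector of J has a node at the fixed coordinate.
def common_pairs(mu, nu, tol=1e-10):
    """Index pairs (i, j) with |mu[i] - nu[j]| <= tol."""
    return [(i, j) for i, x in enumerate(mu)
                   for j, y in enumerate(nu) if abs(x - y) <= tol]

mu = [1.0, 2.0, 5.0]
nu = [2.0, 3.0]
print(common_pairs(mu, nu))   # [(1, 0)] -> the spectra are nondisjoint
```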

In [7], the authors study the first case; that is, they reconstruct the system, using a modification of the fast orthogonal reduction method, when the auxiliary spectra are disjoint. In the next section we study the second case, using the same method, and thus the problem is completely solved. This method is less computationally expensive than others in the literature [8].

2. Reconstructing the System from Nondisjoint Spectra

We denote by P(λ), P_1(λ), and P_2(λ) the characteristic polynomials of the matrices J, J_1, and J_2, respectively. We define the vectors in (10) corresponding, respectively, to the last row of the matrix of eigenvectors of J_1 and the first row of the matrix of eigenvectors of J_2. We also define the diagonal matrices formed by the eigenvalues μ_i and ν_j.

Theorem 2. Let the real numbers λ_i, μ_i, and ν_j be given, satisfying the interlacing property (7). Then there exists an isospectral family of matrices of the form (6) whose spectra, together with the spectra of their leading and trailing submatrices, are {λ_i}, {μ_i}, and {ν_j}, respectively.

Proof. We suppose that there is a pair μ_r, ν_s of frequencies such that μ_r = ν_s, with 1 ≤ r ≤ p-1 and 1 ≤ s ≤ n-p. From the expansion of the characteristic polynomial of J along its p-th row, we find that (11) holds, where the factors involved are the characteristic polynomials of the submatrices of J_1 and J_2 obtained after deleting, respectively, the first row and column and the p-th row and column.
On the other hand, if we denote the characteristic polynomial of the principal submatrix obtained from J_1 by adjoining the p-th row and column, we obtain (12), and (11) can be rewritten accordingly. Analogously, (11) admits a second expression. Now, denoting the characteristic polynomial of the principal submatrix obtained from J_2 by adjoining a row and column above it, we obtain (15). From (12) and (15), and since the polynomials P, P_1, and P_2 in (9) have common factors, (13) and (16) simplify. Replacing (17) and (18) in these last two equations, we obtain (22), and dividing (22) by the common factor we get (23). It is known that if X is the orthogonal matrix of eigenvectors of J, then X X^T = I. Comparing the corresponding entries on both sides of (24) and taking the limit as λ tends to the common eigenvalue μ_r = ν_s, we obtain (28) and, analogously, (29). Then, replacing (28) and (29) in (23), for each common eigenvalue we can define the quantities in (31). Thus, given the available spectral data, (31) allows us to recover the required components of the vectors in (10).
Subsequently, once the vectors in (10) are known, we can form the bordered matrices in (32) and (33), where the remaining entries are arbitrary real numbers. Then, we apply the Modified Fast Orthogonal Reduction algorithm (see [9]) to orthogonally reduce these matrices to tridiagonal form, obtaining in this way the desired matrices J_1 and J_2. To do this, we first permute the arrowhead matrix appropriately. We point out that similar relationships are analyzed by Jessup in [10].
Finally, considering that the p-th diagonal entry of J can be computed from the known data, and the adjacent codiagonal entries can be computed from (32) and (33), respectively, the matrix J of the form (6) is obtained completely, having a common eigenvalue with J_1 and J_2.
If there are more common eigenvalues, we can repeat the previous procedure for each pair of common eigenvalues. That is, if q is the number of identical pairs μ_{r_k} = ν_{s_k}, k = 1, …, q, then the terms corresponding to those indices are omitted from the sums and products involved. Thus, we obtain an isospectral family of tridiagonal matrices that have q identical eigenvalues.
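The core of the orthogonal-reduction step can be sketched with a plain Lanczos process: running Lanczos on diag(λ_1, …, λ_n) with a starting vector of spectral weights produces a Jacobi matrix with the prescribed spectrum. This is only a sketch in the spirit of Gragg and Harrod [9], not the authors' Modified Fast Orthogonal Reduction algorithm itself, and the weights chosen below are arbitrary illustrative values.

```python
# Sketch: recovering a Jacobi matrix with prescribed eigenvalues via
# Lanczos on diag(eigs) with a weight starting vector (Gragg-Harrod
# style orthogonal reduction; not the paper's exact algorithm).
import numpy as np

def jacobi_from_spectrum(eigs, w):
    """Diagonal a and codiagonal b of a Jacobi matrix with spectrum eigs."""
    n = len(eigs)
    D = np.diag(np.asarray(eigs, float))
    q = np.asarray(w, float)
    q = q / np.linalg.norm(q)
    Q = np.zeros((n, n))
    Q[:, 0] = q
    a = np.zeros(n)
    b = np.zeros(n - 1)
    for k in range(n):
        v = D @ Q[:, k]
        a[k] = Q[:, k] @ v
        # full reorthogonalization keeps the sketch numerically safe
        v = v - Q[:, :k + 1] @ (Q[:, :k + 1].T @ v)
        if k < n - 1:
            b[k] = np.linalg.norm(v)
            Q[:, k + 1] = v / b[k]
    return a, b

eigs = [1.0, 2.5, 4.0, 7.0]
a, b = jacobi_from_spectrum(eigs, [1.0, 1.0, 1.0, 1.0])
# The paper's convention uses negative codiagonal entries -b_i;
# the spectrum is unchanged by that sign choice.
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(np.linalg.eigvalsh(J))   # recovers 1.0, 2.5, 4.0, 7.0
```

With distinct eigenvalues and all weights nonzero, the Krylov space has full dimension, so every codiagonal entry b_k is strictly positive and the reduction runs to completion.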

3. An Optimization Procedure to Find an Objective Jacobi Matrix

In this section we want to find an objective matrix within the family of reconstructed matrices. First, we observe that the construction procedure depends continuously on the free parameter. Then, by means of an optimization procedure, we find an appropriate value of the parameter, so that the procedure reconstructs a matrix with a desired structure.
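The one-dimensional minimization used for this step can be done with golden section search, as in the numerical examples below. The sketch minimizes a stand-in quadratic objective; in the paper's setting the objective would be the distance between the reconstructed matrix and the target structure.

```python
# Sketch: golden section search for the minimum of a continuous
# function on a closed interval. The quadratic objective here is a
# stand-in for the matrix-distance function f of Section 3.
import math

def golden_section(f, lo, hi, tol=1e-8):
    """Minimize f on [lo, hi] by golden section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)                # lower interior probe
    d = a + invphi * (b - a)                # upper interior probe
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2.0

x_star = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(x_star)   # ~ 2.0
```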

Theorem 3. Let A be a given symmetric tridiagonal matrix partitioned in the form (38). For a small enough restriction of the parameter interval, the function f, defined as the distance between A and the matrix obtained by using the Modified Fast Orthogonal Reduction process, has a minimum.

Proof. Given a value of the parameter, the Modified Fast Orthogonal Reduction process reconstructs a matrix of the form (38) from its eigenvalues. We show that all the entries of the reconstructed matrix depend continuously on the parameter. In fact, the expressions in (37) are continuous, since cos and sin do not vanish on the interval considered, and the functions involved are continuous. Therefore, all the matrices involved have continuous entries. Since, in the Modified Fast Orthogonal Reduction process, the tridiagonalization matrices have rational entries with nonzero denominators, the reconstructed matrix depends continuously on the parameter.
Now, at the endpoints of the parameter interval a discontinuity of f is produced, and the reconstruction procedure cannot be completed there. Therefore, restricting the parameter to a slightly smaller closed interval, the function f is defined and continuous on it. Then, by Weierstrass' theorem, f has a minimum.

3.1. Numerical Examples

Here we give some examples which show the numerical results obtained in the reconstruction of the matrix. In all examples, the matrix to be reconstructed is a well-known matrix whose eigenvalues are available in closed form. Moreover, the eigenvalues of the submatrices J_1 and J_2 obtained by deleting its p-th row and column are also known in closed form.
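A closed-form test spectrum of this kind can be illustrated with the standard second-difference matrix tridiag(-1, 2, -1), whose eigenvalues are 4 sin²(kπ/(2(n+1))); whether this is exactly the matrix (44) used in the examples is an assumption here.

```python
# Sketch: the n x n second-difference matrix tridiag(-1, 2, -1) and its
# closed-form eigenvalues 4 sin^2(k*pi / (2*(n+1))), k = 1, ..., n.
# This matrix is assumed here as a stand-in for the paper's test matrix.
import numpy as np

n = 8
J = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
computed = np.linalg.eigvalsh(J)            # ascending
exact = 4.0 * np.sin(np.arange(1, n + 1) * np.pi / (2 * (n + 1))) ** 2
print(np.max(np.abs(computed - np.sort(exact))))   # close to machine precision
```

Deleting a row and column of this matrix leaves two tridiagonal blocks of the same family, so the auxiliary spectra are also available in closed form, which makes it a convenient benchmark for reconstruction experiments.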

Example 1. In Table 1, we show the results associated with the reconstructed matrix J, of the form (44), for several orders. The given eigenvalues are shown in the second, third, and fourth columns. In the fifth and sixth columns we list the diagonal and codiagonal entries of J. In the last column, we show the relative error between the exact eigenvalues and the eigenvalues of the reconstructed matrix.

Table 1

Example 2. In Table 2, we show the results associated with the reconstructed matrix J, considering appropriate orders of J_1 and J_2 for arbitrary values of the free parameter, listed in the first to fourth columns. The relative errors with respect to the diagonal and codiagonal entries of J_1 and J_2, respectively, are shown in the fifth and sixth columns. In the last column, we present the relative errors defined as in Example 1.

Table 2

Example 3. For the reconstructed matrix J, considering the same orders as in Example 2, we add an optimization process based on the Golden Section Search [11], obtaining an optimal parameter value listed in the fourth column of Table 3. In the last three columns the corresponding relative errors are shown.

Table 3

Example 4. In this example, we reconstruct the matrix J for two larger orders. In each case we perform as many reconstructions of J as values the index p can take; these values yield various reconstructions with common eigenvalues. Figures 4 and 5 show the plots of the relative errors. The results of our numerical experiments confirm that the method works well.

Figure 4: 69 of 90 reconstructions of J with identical eigenvalues.
Figure 5: 359 of 501 reconstructions of J with identical eigenvalues.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This paper was supported by Project UTA 4730-13, Chile, and by Universidad Católica del Norte, Chile.


References

1. B. N. Datta, Numerical Linear Algebra and Applications, SIAM, Philadelphia, Pa, USA, 2nd edition, 2009.
2. B. N. Datta, Numerical Methods for Linear Control Systems Design and Analysis, Elsevier/Academic Press, New York, NY, USA, 2003.
3. B. N. Datta and D. R. Sarkissian, "Theory and computations of some inverse eigenvalue problems for the quadratic pencil," in Structured Matrices in Operator Theory, Control, and Signal and Image Processing, vol. 280 of Contemporary Mathematics, pp. 221–240, American Mathematical Society, Providence, RI, USA, 2001.
4. G. M. L. Gladwell, Inverse Problems in Vibration, Martinus Nijhoff, Dordrecht, The Netherlands, 2004.
5. Y. M. Ram and S. Elhay, "Constructing the shape of a rod from eigenvalues," Communications in Numerical Methods in Engineering, vol. 14, no. 7, pp. 597–608, 1998.
6. G. H. Golub and C. F. van Loan, Matrix Computations, vol. 3 of Johns Hopkins Series in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 2nd edition, 1989.
7. J. C. Egaña and R. L. Soto, "On the numerical reconstruction of a spring-mass system from its natural frequencies," Proyecciones, vol. 19, no. 1, pp. 27–41, 2000.
8. G. M. L. Gladwell and N. B. Willms, "The reconstruction of a tridiagonal system from its frequency response at an interior point," Inverse Problems, vol. 4, no. 4, pp. 1013–1024, 1988.
9. W. B. Gragg and W. J. Harrod, "The numerically stable reconstruction of Jacobi matrices from spectral data," Numerische Mathematik, vol. 44, no. 3, pp. 317–335, 1984.
10. E. R. Jessup, "A case against a divide and conquer approach to the nonsymmetric eigenvalue problem," Applied Numerical Mathematics, vol. 12, no. 5, pp. 403–420, 1993.
11. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes, Cambridge University Press, 3rd edition, 2007.