Abstract

A new numerical procedure is presented to reconstruct a fixed-free spring-mass system from two auxiliary spectra that are nondisjoint. The method is a modification of the fast orthogonal reduction algorithm and is less computationally expensive than other methods in the literature. Numerical results showing the accuracy of the algorithm are presented.

1. Introduction

Inverse problems in structural vibration seek to determine or estimate the physical properties of a vibrating system (mass density, elastic constants, etc.) from known dynamic behavior (natural frequencies, electric flux, tension, etc.) (see [14]).

The model that has generated much interest in the literature as a prototype structure is a nonuniform thin rod with one end fixed to a surface (see [25]). Its discrete model is a spring-mass system consisting of masses $m_1,m_2,\ldots,m_n$, associated with the masses of the elements of the rod, connected by springs with stiffness constants $k_1,k_2,\ldots,k_n$ corresponding to the rigidity of each of these elements (Figure 1).

A spring-mass system in free longitudinal vibration is governed by a generalized eigenvalue problem of the form (see [14])
$$(K-\lambda M)\mathbf{u}=\mathbf{0}, \qquad (1)$$
where $K$ and $M$ are the stiffness matrix and the mass matrix, respectively, and $\mathbf{u}$ is the displacement vector. In this system the eigenvalues $\lambda_i$, $i=1,2,\ldots,n$, of (1) are related to the natural frequencies, and the eigenvectors represent the vibration modes of the system. The spring-mass system is denoted by $(K,M)$.
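For readers who wish to experiment, the following minimal Python sketch builds the stiffness and mass matrices of a fixed-free chain and solves (1); the function name, the uniform unit masses and stiffnesses, and the use of scipy are illustrative assumptions, not part of the paper.

import numpy as np
from scipy.linalg import eigh

def fixed_free_matrices(masses, stiffnesses):
    # masses: m_1, ..., m_n; stiffnesses: k_1, ..., k_n (k_1 attaches mass 1 to the wall)
    m = np.asarray(masses, dtype=float)
    k = np.asarray(stiffnesses, dtype=float)
    n = len(m)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = k[i] + (k[i + 1] if i + 1 < n else 0.0)  # free end: no spring after the last mass
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return K, np.diag(m)

K, M = fixed_free_matrices([1.0] * 4, [1.0] * 4)
lam, U = eigh(K, M)   # eigenvalues and vibration modes of (K - lambda M) u = 0
print(lam)            # real, positive, distinct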

It is known (see [4]) that the matrices $K$ and $M$ can be uniquely reconstructed if the following information is given: the eigenvalues of the original system, the eigenvalues of the auxiliary system obtained from the original system by fixing its last mass (Figure 2), and an additional quantity, for example, the total mass of the system.

The structural properties of the matrices $K$ and $M$ allow us to reduce the generalized eigenvalue equation (1) to the standard form (see [14])
$$(J-\lambda I)\mathbf{v}=\mathbf{0}, \qquad J=M^{-1/2}KM^{-1/2}, \quad \mathbf{v}=M^{1/2}\mathbf{u},$$
where the Jacobi matrix $J$ is symmetric, tridiagonal, and positive definite, with the same eigenvalues as the system, which are real, positive, and distinct. Therefore, a fundamental step in determining the system is to reconstruct the matrix $J$. Without loss of generality, we assume that $J$ is of the following form:
$$J=\begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & b_{n-1} & a_n \end{pmatrix}, \qquad b_i\neq 0 .$$
In [6], stable numerical procedures to reconstruct the Jacobi matrix $J$ are discussed. This reconstruction uses as initial spectral information the eigenvalues $\lambda_1<\lambda_2<\cdots<\lambda_n$ of $J$ and the eigenvalues $\tilde{\lambda}_1<\tilde{\lambda}_2<\cdots<\tilde{\lambda}_{n-1}$ of the matrix obtained by deleting the last row and last column of $J$. A fundamental property in these procedures is the interlacing property (see [1, 4, 6])
$$\lambda_1<\tilde{\lambda}_1<\lambda_2<\tilde{\lambda}_2<\cdots<\tilde{\lambda}_{n-1}<\lambda_n,$$
which is a necessary and sufficient condition for the existence of a physically realizable system and for the construction of $J$ as well.
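A quick numerical check of this reduction and of the interlacing property, continuing the sketch above (the mass and stiffness values are arbitrary illustrations):

import numpy as np

K, M = fixed_free_matrices([2.0, 1.0, 3.0, 1.0], [1.0, 2.0, 1.0, 2.0])
Minv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))   # M is diagonal, so M^{-1/2} is immediate
J = Minv_sqrt @ K @ Minv_sqrt                    # Jacobi matrix: symmetric, tridiagonal, positive definite

lam = np.linalg.eigvalsh(J)                      # eigenvalues of the system
tl = np.linalg.eigvalsh(J[:-1, :-1])             # eigenvalues after deleting the last row and column

# lambda_1 < tilde_lambda_1 < lambda_2 < ... < tilde_lambda_{n-1} < lambda_n
print(np.all(lam[:-1] < tl) and np.all(tl < lam[1:]))    # True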

In [7, 8] the authors generalize the reconstruction of the system by using the interlaced spectra corresponding to an auxiliary system obtained by fixing any mass of the system other than the extreme masses (Figure 3).

Clearly, if the auxiliary system is the system with its $p$th mass, $1<p<n$, fixed, then it is uncoupled into two auxiliary spring-mass systems, $S_1$ and $S_2$, with natural frequencies $\{\mu_i\}_{i=1}^{p-1}$ and $\{\nu_j\}_{j=1}^{n-p}$, respectively, where $(p-1)+(n-p)=n-1$. The structural properties of these matrices allow us to partition $J$ as
$$J=\begin{pmatrix} J_1 & b_{p-1}\mathbf{e}_{p-1} & 0\\ b_{p-1}\mathbf{e}_{p-1}^{T} & a_p & b_p\mathbf{e}_{1}^{T}\\ 0 & b_p\mathbf{e}_{1} & J_2 \end{pmatrix}, \qquad (6)$$
where $\mathbf{e}_{p-1}$ and $\mathbf{e}_{1}$ denote the last and the first canonical vectors of the appropriate dimensions, and the submatrices $J_1$ and $J_2$, of orders $p-1$ and $n-p$, are related to the systems $S_1$ and $S_2$, respectively. Since the system can be reconstructed from the matrix $J$, it is enough to reconstruct $J$ from the sets $\{\lambda_i\}$, $\{\mu_i\}$, and $\{\nu_j\}$ to obtain it. Thus, the reconstruction of the system is reduced to the following problem.
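Numerically, fixing the $p$th mass corresponds to deleting the $p$th row and column of $J$, which leaves exactly the two diagonal blocks of (6); continuing the sketch above (the helper name and the choice p = 2 are illustrative):

import numpy as np

def split_at_fixed_mass(J, p):
    # Blocks J1 and J2 of (6), obtained by deleting row and column p (1-based) of J
    return J[: p - 1, : p - 1], J[p:, p:]

J1, J2 = split_at_fixed_mass(J, p=2)
mu = np.linalg.eigvalsh(J1)    # spectrum of J1, related to the system S1
nu = np.linalg.eigvalsh(J2)    # spectrum of J2, related to the system S2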

Problem 1. Given the real numbers $\lambda_1<\lambda_2<\cdots<\lambda_n$, $\mu_1<\mu_2<\cdots<\mu_{p-1}$, and $\nu_1<\nu_2<\cdots<\nu_{n-p}$, satisfying the interlacing property (7), reconstruct the matrix $J$ in (6) such that the spectra of $J$, $J_1$, and $J_2$ are $\{\lambda_i\}_{i=1}^{n}$, $\{\mu_i\}_{i=1}^{p-1}$, and $\{\nu_j\}_{j=1}^{n-p}$, respectively.

In this problem two cases arise. In the first case, all the natural frequencies $\mu_i$ and $\nu_j$ are distinct; that is, $\{\mu_i\}\cap\{\nu_j\}=\emptyset$. In terms of the matrix, this means that no eigenvector of $J$ has a node at its coordinate $p$; that is, no eigenvector has a zero $p$th component. In this case the reconstruction is unique. In the second case, one or more of the natural frequencies $\mu_i$ and $\nu_j$ are identical. This means that some eigenvector of $J$ has a node at $p$; that is, its $p$th component vanishes. In this case a family of isospectral matrices is obtained.
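The link between nodes and common eigenvalues can be read directly off the partition (6); the short verification below uses the notation introduced above and assumes, as usual, that all codiagonal entries of $J$ are nonzero.

\[
J\mathbf{u}=\lambda\mathbf{u},\qquad
\mathbf{u}=\begin{pmatrix}\mathbf{u}^{(1)}\\ u_{p}\\ \mathbf{u}^{(2)}\end{pmatrix},\qquad
u_{p}=0
\;\Longrightarrow\;
J_{1}\mathbf{u}^{(1)}=\lambda\mathbf{u}^{(1)},\quad
J_{2}\mathbf{u}^{(2)}=\lambda\mathbf{u}^{(2)} .
\]

Indeed, with $u_p=0$ the first $p-1$ equations of $J\mathbf{u}=\lambda\mathbf{u}$ involve only $\mathbf{u}^{(1)}$ and the last $n-p$ involve only $\mathbf{u}^{(2)}$; if $\mathbf{u}^{(1)}$ were zero, the $p$th equation would force $b_p u_{p+1}=0$ and the three-term recurrence would give $\mathbf{u}=\mathbf{0}$, a contradiction, and similarly for $\mathbf{u}^{(2)}$. Hence $\lambda$ belongs to both $\sigma(J_1)$ and $\sigma(J_2)$.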

In [7], the authors study the first case; that is, they reconstruct the system, using a modification of the fast orthogonal reduction method, when the auxiliary spectra are disjoint. In the next section we study the second case using the same method, and thus the problem is completely solved. This method is less computationally expensive than others in the literature [8].

2. Reconstructing the System from Nondisjoint Spectra

We denote by $P_n$, $P_{p-1}$, and $P_{n-p}$ the characteristic polynomials of the matrices $J$, $J_1$, and $J_2$, respectively; that is,
$$P_n(\lambda)=\prod_{i=1}^{n}(\lambda-\lambda_i),\qquad
P_{p-1}(\lambda)=\prod_{i=1}^{p-1}(\lambda-\mu_i),\qquad
P_{n-p}(\lambda)=\prod_{j=1}^{n-p}(\lambda-\nu_j). \qquad (9)$$
We define the vectors
$$\mathbf{x}=(x_1,x_2,\ldots,x_{p-1})^{T},\qquad \mathbf{y}=(y_1,y_2,\ldots,y_{n-p})^{T}, \qquad (10)$$
corresponding, respectively, to the last row of the matrix of eigenvectors of $J_1$ and the first row of the matrix of eigenvectors of $J_2$. We also define the diagonal matrices $D_1=\operatorname{diag}(\mu_1,\ldots,\mu_{p-1})$ and $D_2=\operatorname{diag}(\nu_1,\ldots,\nu_{n-p})$.
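For concreteness, these ingredients can be computed as follows; the function and variable names mirror the notation above, and the use of numpy is an assumption made for illustration.

import numpy as np

def spectral_data(J1, J2):
    # Eigenvalues of J1 and J2 together with the last/first rows of their eigenvector matrices
    mu, U1 = np.linalg.eigh(J1)   # eigenvalues mu_i, orthonormal eigenvectors in the columns of U1
    nu, U2 = np.linalg.eigh(J2)   # eigenvalues nu_j, orthonormal eigenvectors in the columns of U2
    x = U1[-1, :]                 # last row of the eigenvector matrix of J1, as in (10)
    y = U2[0, :]                  # first row of the eigenvector matrix of J2, as in (10)
    D1, D2 = np.diag(mu), np.diag(nu)
    return mu, nu, x, y, D1, D2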

Theorem 2. Let the real numbers $\{\lambda_i\}_{i=1}^{n}$, $\{\mu_i\}_{i=1}^{p-1}$, and $\{\nu_j\}_{j=1}^{n-p}$ be given, satisfying the interlacing property (7). Then there exists an isospectral family of matrices $J$ of the form (6) such that $\sigma(J)=\{\lambda_i\}_{i=1}^{n}$, $\sigma(J_1)=\{\mu_i\}_{i=1}^{p-1}$, and $\sigma(J_2)=\{\nu_j\}_{j=1}^{n-p}$.

Proof. We suppose that there is a pair of frequencies $\mu_k$, $\nu_l$ such that $\mu_k=\nu_l$, where $1\le k\le p-1$ and $1\le l\le n-p$. From the expansion of $\det(\lambda I-J)$ along its $p$th row, we find that
$$P_n(\lambda)=(\lambda-a_p)P_{p-1}(\lambda)P_{n-p}(\lambda)-b_{p-1}^{2}\hat P_{p-2}(\lambda)P_{n-p}(\lambda)-b_{p}^{2}P_{p-1}(\lambda)\hat P_{n-p-1}(\lambda), \qquad (11)$$
where $\hat P_{n-p-1}$ and $\hat P_{p-2}$ are the characteristic polynomials of the submatrices obtained from $J_2$ and $J_1$ after we delete the first row and column of $J_2$ and the last row and column of $J_1$, respectively.
On the other hand, if we consider the characteristic polynomial of the principal submatrix obtained from $J_1$ by adding the $p$th row and column of $J$, we have the identity (12). Thus, (11) takes the form (13). Analogously, (11) can be written in a second, symmetric form. Now, if we consider the characteristic polynomial of the principal submatrix obtained from $J_2$ by adding the row and column immediately above it, we have (15), and then (16) follows. From (12) we have (17), and from (15) we have (18). Since the polynomials $P_n$, $P_{p-1}$, and $P_{n-p}$ in (9) have common factors, (13) and (16) can be rewritten, respectively, in reduced form, with the quotient polynomials defined accordingly. Replacing (17) and (18) in these last two equations, we obtain (22), and dividing (22) by the common factor, we get (23). It is known that if $X$ is the orthogonal matrix of eigenvectors of $J$, then $X^{T}X=XX^{T}=I$, and we have (24). Writing out the left and right sides of (24) and comparing the entries in the corresponding position on both sides of (24), we find a relation between the eigenvector components. Taking the limit as $\lambda$ tends to $\mu_k$, we obtain (28); analogously, we obtain (29). Then, by replacing (28) and (29) in (23), we get an expression from which, for the common eigenvalue $\mu_k=\nu_l$, we can define the quantities (31). Thus, given the remaining spectral data, (31) allows us to know the components of $\mathbf{x}$ and $\mathbf{y}$ associated with the common eigenvalue.
Subsequently, once the vectors $\mathbf{x}$ and $\mathbf{y}$ in (10) are known, we can form two bordered (arrowhead) matrices, built from $D_1$, $\mathbf{x}$ and $D_2$, $\mathbf{y}$, in which two entries are arbitrary real numbers. Then we apply the Modified Fast Orthogonal Reduction Algorithm (see [9]) to orthogonally reduce these matrices to their tridiagonal form, obtaining in this way the desired matrices $J_1$ and $J_2$. To do this, we first permute each arrowhead matrix by applying a suitable permutation matrix. We point out that similar relationships are analyzed by Jessup in [10].
Finally, considering that the diagonal entry $a_p$ of $J$ can be computed as
$$a_p=\sum_{i=1}^{n}\lambda_i-\sum_{i=1}^{p-1}\mu_i-\sum_{j=1}^{n-p}\nu_j,$$
and that the codiagonal entries $b_{p-1}$ and $b_p$ can be computed from (32) and (33), respectively, the matrix $J$ of the form (6) is obtained completely, having a common eigenvalue with $J_1$ and $J_2$.
If there are more common eigenvalues, we can repeat the previous procedure for each pair of common eigenvalues. That is, if $r$ is the number of identical pairs $\mu_{k_s}=\nu_{l_s}$, $s=1,\ldots,r$, then the corresponding relations hold with the terms associated with $\mu_{k_s}$ and $\nu_{l_s}$, $s=1,\ldots,r$, respectively, omitted. Thus, we obtain an isospectral family of tridiagonal matrices in which $J_1$ and $J_2$ have $r$ identical eigenvalues.
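The orthogonal reduction step described in the proof can be imitated with standard dense routines. The sketch below is only a generic stand-in: it does not implement the Modified Fast Orthogonal Reduction Algorithm of [9], which exploits the arrowhead structure, but it shows the kind of transformation involved (scipy assumed; names are illustrative).

import numpy as np
from scipy.linalg import hessenberg

def arrowhead_to_tridiagonal(alpha, d, w):
    # Reduce the arrowhead matrix [[alpha, w^T], [w, diag(d)]] to tridiagonal
    # form by an orthogonal similarity (Householder-based Hessenberg reduction).
    n = len(d)
    A = np.zeros((n + 1, n + 1))
    A[0, 0] = alpha
    A[0, 1:] = w
    A[1:, 0] = w
    A[1:, 1:] = np.diag(d)
    T, Q = hessenberg(A, calc_q=True)        # A = Q @ T @ Q.T; T is tridiagonal since A is symmetric
    T = np.triu(np.tril(T, 1), -1)           # zero out rounding noise outside the three diagonals
    return T, Q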

3. An Optimization Procedure to Find an Objective Jacobi Matrix

In this section we want to find an objective matrix within the family of reconstructed matrices. First, we observe that the construction procedure depends continuously on a parameter $\beta$, which enters through the expressions (37) involving $\cos\beta$ and $\sin\beta$. Then, by means of an optimization procedure, we find an appropriate $\beta$, so that the procedure reconstructs a matrix with the desired structure.

Theorem 3. Let $A$ be a given symmetric tridiagonal matrix partitioned in the form (38), analogously to (6), with diagonal blocks $A_1$ and $A_2$ of orders $p-1$ and $n-p$, respectively. For $\varepsilon>0$ small enough, the function $F$, defined by
$$F(\beta)=\lVert A-J(\beta)\rVert,$$
where the matrix $J(\beta)$ is obtained by using the Modified Fast Orthogonal Reduction process, has a minimum in $[\varepsilon,\,\pi/2-\varepsilon]$.

Proof. Given $\beta$, the Modified Fast Orthogonal Reduction process reconstructs a matrix $J(\beta)$ of the form (38) from its eigenvalues. We show that all the entries of this matrix depend continuously on $\beta$. In fact, expressions (37) are well defined since $\cos\beta$ and $\sin\beta$ are not zero in the interval considered, and the functions involved are continuous. Therefore, all the matrices formed in the process have continuous entries. Since, in the Modified Fast Orthogonal Reduction process, the tridiagonalization matrices have rational entries with nonzero denominators, the matrices $J_1(\beta)$ and $J_2(\beta)$, and thus $J(\beta)$, depend continuously on $\beta$.
Now, if $\beta$ tends to $0$, then $\sin\beta$ tends to $0$ and, due to the resulting discontinuity, the reconstruction procedure cannot be completed. Analogously, when $\beta$ tends to $\pi/2$, we have $\cos\beta$ tending to $0$, where again a discontinuity is produced. Therefore, by fixing $\varepsilon>0$, the function $F$ is defined and continuous in $[\varepsilon,\,\pi/2-\varepsilon]$. Then, by Weierstrass' theorem, $F$ has a minimum in $[\varepsilon,\,\pi/2-\varepsilon]$.
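As a computational illustration of this optimization step (the Golden Section Search used in Example 3), here is a minimal self-contained sketch; the interval endpoints, the Frobenius-norm objective, and the callable reconstruct_J standing for the reconstruction of Section 2 are assumptions made for illustration.

import numpy as np

def golden_section_search(f, a, b, tol=1e-8):
    # Minimize a unimodal function f on the closed interval [a, b].
    invphi = (np.sqrt(5.0) - 1.0) / 2.0                  # 1/phi, the golden ratio conjugate
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                                  # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                            # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Usage (A and reconstruct_J are hypothetical placeholders):
# eps = 1e-3
# beta_opt = golden_section_search(
#     lambda beta: np.linalg.norm(A - reconstruct_J(beta), ord="fro"),
#     eps, np.pi / 2 - eps)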

3.1. Numerical Examples

Here we give some examples that show the numerical results obtained in the reconstruction of the matrix $J$. In all the examples, the reconstructed matrix is the well-known matrix (44), whose eigenvalues are known in closed form. Moreover, it is also known that if we delete the $p$th row and column of this matrix, the eigenvalues of the submatrices $J_1$ and $J_2$ are likewise known in closed form.
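As a concrete stand-in for this test setting, the sketch below uses the classical second-difference matrix (2 on the diagonal, -1 on the codiagonal); this particular choice is an assumption made for illustration and need not coincide with the matrix (44) of the paper.

import numpy as np

def test_matrix(n):
    # Symmetric tridiagonal matrix with 2 on the diagonal and -1 on the codiagonal
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def spectra_after_fixing(n, p):
    # Spectra of the full matrix and of the blocks left after deleting row/column p (1-based)
    J = test_matrix(n)
    lam = np.linalg.eigvalsh(J)
    mu = np.linalg.eigvalsh(J[: p - 1, : p - 1])   # order p-1
    nu = np.linalg.eigvalsh(J[p:, p:])             # order n-p
    return lam, mu, nu

# With n = 11 and p = 6 the two blocks are equal, so the auxiliary spectra are fully nondisjoint.
lam, mu, nu = spectra_after_fixing(11, 6)
print(np.allclose(mu, nu))                          # True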

Example 1. In Table 1 we show the results associated with the reconstructed matrix $J$ of the form (44) for a fixed choice of the order, the position $p$ of the fixed mass, and the parameter $\beta$. The given eigenvalues are shown in the second, third, and fourth columns. In the fifth and sixth columns we list the diagonal and codiagonal entries of $J$. In the last column we show the relative error between the exact eigenvalues of the matrix (44) and the eigenvalues of the reconstructed matrix $J$.

Example 2. In Table 2 we show the results associated with the reconstructed matrix $J$, obtained by considering appropriate orders of $J_1$ and $J_2$ and arbitrary values of the parameter $\beta$, listed from the first to the fourth column. The relative errors with respect to the diagonal and codiagonal entries of the exact and reconstructed matrices are shown in the fifth and sixth columns. In the last column we present the relative errors defined as in Example 1.

Example 3. For the reconstructed matrix $J$, considering the same orders as in Example 2, we add an optimization process based on the Golden Section Search [11] over the parameter $\beta$, obtaining an optimal value, denoted by $\beta^{*}$ and listed in the fourth column of Table 3. In the last three columns the corresponding relative errors are shown.

Example 4. In this example we reconstruct the matrix (44) for two values of the order $n$. In each case we carry out as many reconstructions as the number of values that the position $p$ of the fixed mass can take. These values of $p$ allow us to obtain various reconstructions with nondisjoint auxiliary spectra. Figures 4 and 5 show the plots of the corresponding relative errors. The results of our numerical experiments confirm that the method works quite well.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This paper was supported by Project UTA 4730-13, Chile, and Universidad Católica del Norte, Chile.