Abstract

We first consider the following inverse eigenvalue problem: given a matrix $X$ and a diagonal matrix $\Lambda$, find Hermite-Hamilton matrices $K$ and $M$ such that $KX = MX\Lambda$. We then consider an associated optimal approximation problem: given Hermitian matrices $K_a$ and $M_a$, find a solution $(K, M)$ of the above inverse problem such that $\|K - K_a\|^2 + \|M - M_a\|^2$ is minimized. By using the Moore-Penrose generalized inverse and singular value decompositions, the solvability conditions and a representation of the general solution of the first problem are derived. An expression for the solution of the second problem is also presented.

1. Introduction

Throughout this paper, we will adopt the following notation. Let $\mathbb{C}^{n\times m}$, $\mathcal{H}^{n\times n}$, and $\mathcal{U}^{n\times n}$ stand for the sets of all $n\times m$ complex matrices, $n\times n$ Hermitian matrices, and $n\times n$ unitary matrices over the complex field $\mathbb{C}$, respectively. By $\|A\|$ we denote the Frobenius norm of a matrix $A$. The symbols $A^T$, $A^*$, $A^{-1}$, and $A^+$ denote the transpose, conjugate transpose, inverse, and Moore-Penrose generalized inverse of $A$, respectively.

Definition 1.1. Let $J = \begin{pmatrix} 0 & I_k \\ -I_k & 0 \end{pmatrix}$ and $A \in \mathbb{C}^{2k\times 2k}$. If $A^* = A$ and $(JA)^* = JA$, then the matrix $A$ is called a Hermite-Hamilton matrix.

We denote the set of all $2k\times 2k$ Hermite-Hamilton matrices by $\mathcal{HH}^{2k\times 2k}$.
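Definition 1.1 can be checked numerically. The following sketch assumes the standard convention $J = \begin{pmatrix}0 & I_k\\ -I_k & 0\end{pmatrix}$ and tests the two conditions ($A$ Hermitian and $JA$ Hermitian); the test matrix is hypothetical data, not taken from the paper.

```python
import numpy as np

def is_hermite_hamilton(A, tol=1e-10):
    """Check the two conditions of Definition 1.1: A is Hermitian
    and J A is Hermitian, with J the standard symplectic matrix."""
    n = A.shape[0]
    assert n % 2 == 0, "a Hermite-Hamilton matrix has even order 2k"
    k = n // 2
    I, Z = np.eye(k), np.zeros((k, k))
    J = np.block([[Z, I], [-I, Z]])
    hermitian = np.allclose(A, A.conj().T, atol=tol)
    hamiltonian = np.allclose(J @ A, (J @ A).conj().T, atol=tol)
    return hermitian and hamiltonian

# A Hermitian matrix of the block form [[B, C], [C, -B]] with B, C
# Hermitian satisfies both conditions (hypothetical example data).
B = np.array([[1.0, 2j], [-2j, 3.0]])
C = np.array([[0.5, 1 - 1j], [1 + 1j, -2.0]])
A = np.block([[B, C], [C, -B]])
print(is_hermite_hamilton(A))  # True
```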

Vibrating structures such as bridges, highways, buildings, and automobiles are modeled using finite element techniques. These techniques generate structured second-order matrix differential equations $M_a\ddot{x}(t) + K_a x(t) = 0$, where $M_a$ and $K_a$ are the analytical mass and stiffness matrices. It is well known that all solutions of the above differential equation can be obtained via the algebraic equation $K_a x = \lambda M_a x$. But such a finite element model is rarely adequate in practice, because its natural frequencies and mode shapes often do not match very well with the experimentally measured ones obtained from a real-life vibration test [1]. It becomes necessary to update the original model to attain consistency with the empirical results. The most common approach is to modify $M_a$ and $K_a$ so that the dynamic equation is satisfied by the measured modal data. Let $X$ be the measured mode matrix and $\Lambda$ the measured natural frequencies matrix. The measured mode shapes and frequencies are assumed correct and have to satisfy $KX = MX\Lambda$, where $M$ and $K$ are the mass and stiffness matrices to be corrected. To date, many techniques for model updating have been proposed. For undamped systems, various techniques have been discussed by Berman [2] and Wei [3]. Theory and computation for damped systems were proposed by the authors of [4, 5]. Another line of thought is to update the damping and stiffness matrices with a symmetric low-rank correction [6]. In these methods the system matrices are adjusted globally. Since model errors can be localized by using sensitivity analysis [7], the residual force approach [8], the least squares approach [9], and assigned eigenstructure [10], it is usual practice to adjust only part of the elements of the system matrices using measured response data.
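The updating constraint $KX = MX\Lambda$ can be illustrated numerically. The sketch below uses a hypothetical 3-degree-of-freedom system (the matrices `M` and `K` are made up for illustration), and the "measured" data $X$, $\Lambda$ are generated from the generalized eigenproblem $Kx = \lambda Mx$ itself, so the residual of the constraint should vanish.

```python
import numpy as np

# Hypothetical 3-DOF system (illustrative data, not from the paper):
M = np.diag([2.0, 1.0, 3.0])                 # mass matrix
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])           # stiffness matrix

# "Measured" modes and frequencies from K x = lambda M x, solved via
# the symmetric reduction M^{-1/2} K M^{-1/2} (M is diagonal here).
Minv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
lam, Y = np.linalg.eigh(Minv_half @ K @ Minv_half)
X = Minv_half @ Y        # mode shape matrix
Lam = np.diag(lam)       # diagonal matrix of eigenvalues

# The model-updating constraint K X = M X Lambda holds by construction.
residual = np.linalg.norm(K @ X - M @ X @ Lam)
print(residual < 1e-10)  # True
```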

The model updating problem can be regarded as a special case of the inverse eigenvalue problem which occurs in the design and modification of mass-spring systems and dynamic structures. The symmetric inverse eigenvalue problem and generalized inverse eigenvalue problem with submatrix constraint in structural dynamic model updating have been studied in [11] and [12], respectively. Hamiltonian matrices usually arise in the analysis of dynamic structures [13]. However, the inverse eigenvalue problem for Hermite-Hamilton matrices has not been discussed. In this paper, we will consider the following inverse eigenvalue problem and an associated optimal approximation problem.

Problem 1. Given $X \in \mathbb{C}^{2k\times m}$ and a diagonal matrix $\Lambda \in \mathbb{R}^{m\times m}$, find Hermite-Hamilton matrices $K$ and $M$ such that
$$KX = MX\Lambda. \qquad (1.3)$$

Problem 2. Given Hermitian matrices $K_a$ and $M_a$, let $S_E$ be the solution set of Problem 1. Find $(\hat K, \hat M) \in S_E$ such that
$$\|\hat K - K_a\|^2 + \|\hat M - M_a\|^2 = \min_{(K, M)\in S_E}\left(\|K - K_a\|^2 + \|M - M_a\|^2\right). \qquad (1.4)$$

We observe that, when $M = I_{2k}$, Problem 1 reduces to the following inverse eigenproblem: find a structured matrix $K$ such that $KX = X\Lambda$, which has been solved for different classes of structured matrices. For example, Xie et al. considered the problem for symmetric, antipersymmetric, antisymmetric, and persymmetric matrices in [14, 15]. Bai and Chan studied the problem for centrosymmetric and centroskew matrices in [16]. Trench investigated the case of matrices with generalized symmetry or skew symmetry in [17], and Yuan studied R-symmetric matrices in [18].
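For the reduced problem $KX = X\Lambda$ with Hermitian $K$, one particular solution is easy to exhibit when the prescribed eigenvector matrix has orthonormal columns: $K = X\Lambda X^*$. A minimal sketch with random hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
# Prescribed eigenvectors with orthonormal columns (assumption:
# X^* X = I_m) and prescribed real eigenvalues.
G = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
X, _ = np.linalg.qr(G)
Lam = np.diag([1.0, 2.0, 5.0])

# One Hermitian solution of K X = X Lambda: K = X Lam X^*.
K = X @ Lam @ X.conj().T
print(np.allclose(K @ X, X @ Lam))   # True
print(np.allclose(K, K.conj().T))    # True
```

Since $X^*X = I_m$, we get $KX = X\Lambda X^*X = X\Lambda$, and $K$ is Hermitian because $\Lambda$ is real and diagonal.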

The paper is organized as follows. In Section 2, using the Moore-Penrose generalized inverse and the singular value decompositions of matrices, we give explicit expressions of the solution for Problem 1. In Section 3, the expressions of the unique solution for Problem 2 are given and a numerical example is provided.

2. Solution of Problem 1

Let
$$J = \begin{pmatrix} 0 & I_k \\ -I_k & 0 \end{pmatrix}, \qquad D = \frac{\sqrt{2}}{2}\begin{pmatrix} I_k & I_k \\ iI_k & -iI_k \end{pmatrix}. \qquad (2.1)$$

Lemma 2.1. Let $A \in \mathbb{C}^{2k\times 2k}$. Then $A$ is a Hermite-Hamilton matrix if and only if there exists a matrix $E \in \mathbb{C}^{k\times k}$ such that
$$A = D\begin{pmatrix} 0 & E \\ E^* & 0 \end{pmatrix}D^*,$$
where $D$ is the same as in (2.1).

Proof. Partition $D^*AD$ into four square blocks. The assertion then follows easily from Definition 1.1 and (2.1).
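The parameterization of Lemma 2.1 can be sanity-checked numerically. The sketch below assumes $D = \frac{\sqrt{2}}{2}\begin{pmatrix}I_k & I_k\\ iI_k & -iI_k\end{pmatrix}$ and $J = \begin{pmatrix}0 & I_k\\ -I_k & 0\end{pmatrix}$ (an assumption about the content of (2.1)); for an arbitrary $E$, the constructed $A$ should be Hermitian with $JA$ Hermitian.

```python
import numpy as np

k = 2
I, Z = np.eye(k), np.zeros((k, k))
# Assumed fixed unitary D and symplectic J from (2.1).
D = np.block([[I, I], [1j * I, -1j * I]]) / np.sqrt(2)
J = np.block([[Z, I], [-I, Z]])

rng = np.random.default_rng(2)
E = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
# The parameterization of Lemma 2.1: A = D [[0, E], [E^*, 0]] D^*.
A = D @ np.block([[Z, E], [E.conj().T, Z]]) @ D.conj().T

print(np.allclose(A, A.conj().T))            # A is Hermitian: True
print(np.allclose(J @ A, (J @ A).conj().T))  # J A is Hermitian: True
```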

Lemma 2.2 (see [19]). Let $A \in \mathbb{C}^{m\times n}$, $B \in \mathbb{C}^{p\times q}$, and $C \in \mathbb{C}^{m\times q}$. Then the matrix equation $AXB = C$ has a solution if and only if $AA^+CB^+B = C$; in this case the general solution of the equation can be expressed as $X = A^+CB^+ + Y - A^+AYBB^+$, where $Y \in \mathbb{C}^{n\times p}$ is arbitrary.
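Lemma 2.2 is the classical Penrose result on the linear matrix equation $AXB = C$, and it is easy to verify numerically with `numpy.linalg.pinv` (random hypothetical data, consistent by construction):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X0 = rng.standard_normal((3, 5))
C = A @ X0 @ B           # consistent by construction, so AXB = C is solvable

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Solvability condition of Lemma 2.2: A A^+ C B^+ B = C.
print(np.allclose(A @ Ap @ C @ Bp @ B, C))   # True

# General solution X = A^+ C B^+ + Y - A^+ A Y B B^+ for arbitrary Y.
Y = rng.standard_normal((3, 5))
X = Ap @ C @ Bp + Y - Ap @ A @ Y @ B @ Bp
print(np.allclose(A @ X @ B, C))             # True
```

The check uses $AA^+A = A$ and $BB^+B = B$, so the $Y$-dependent terms cancel after multiplication by $A$ and $B$.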

Let the matrix $D^*X$ be partitioned as
$$D^*X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}, \qquad X_1, X_2 \in \mathbb{C}^{k\times m}, \qquad (2.3)$$
where $D$ is defined as in (2.1).

We assume that the singular value decompositions of the matrices $X_1$ and $X_2$ in (2.3) are
$$X_1 = U_1\begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix}V_1^*, \qquad X_2 = U_2\begin{pmatrix} \Sigma_2 & 0 \\ 0 & 0 \end{pmatrix}V_2^*, \qquad (2.4)$$
where $U_1, U_2 \in \mathcal{U}^{k\times k}$ and $V_1, V_2 \in \mathcal{U}^{m\times m}$ are unitary, and $\Sigma_1, \Sigma_2$ are the positive diagonal matrices of nonzero singular values.

Let the singular value decompositions in (2.5) of the two further coefficient matrices arising in the proof of Theorem 2.3 be given analogously, with unitary outer factors and positive diagonal matrices of nonzero singular values of the appropriate sizes.

Theorem 2.3. Suppose that $X \in \mathbb{C}^{2k\times m}$ and $\Lambda \in \mathbb{R}^{m\times m}$ is a diagonal matrix. Let the partition of $D^*X$ be (2.3), and let the singular value decompositions in (2.4) and (2.5) be given. Then (1.3) is solvable, and its general solution can be expressed in terms of the factors in (2.4) and (2.5) together with arbitrary parameter matrices, where $D$ is the same as in (2.1).

Proof. By Lemma 2.1, we know that $(K, M)$ is a solution to Problem 1 if and only if there exist matrices $E, F \in \mathbb{C}^{k\times k}$ such that
$$K = D\begin{pmatrix} 0 & E \\ E^* & 0 \end{pmatrix}D^*, \qquad M = D\begin{pmatrix} 0 & F \\ F^* & 0 \end{pmatrix}D^*, \qquad KX = MX\Lambda.$$
Using (2.3), the above equation is equivalent to the following two equations:
$$EX_2 = FX_2\Lambda, \qquad (2.9)$$
$$E^*X_1 = F^*X_1\Lambda. \qquad (2.10)$$
By the singular value decomposition of $X_2$ in (2.4), relation (2.9) becomes (2.11). Clearly, (2.11), regarded as an equation in its unknown matrix, is always solvable. By Lemma 2.2 and (2.5), we obtain its general solution, in which the parameter matrix is arbitrary. Substituting this solution into (2.12), and noting that the coefficient matrix there is of full column rank, the resulting equation in the remaining unknown matrix is again always solvable, and its general solution can be expressed as (2.15), with an arbitrary parameter matrix.
Substituting the expressions obtained above and (2.15) into (2.10), we get (2.16). By the singular value decomposition of $X_1$ in (2.4), relation (2.16) becomes (2.17). Clearly, (2.17), regarded as an equation in its unknown matrix, is always solvable. From Lemma 2.2 and (2.5), we obtain its general solution with an arbitrary parameter matrix. Substituting it into (2.18) and noting that the coefficient matrix there is of full row rank, the resulting equation is again always solvable by Lemma 2.2, with an arbitrarily chosen parameter matrix.
Finally, collecting the representations of $E$ and $F$ obtained above and applying Lemma 2.1 yields the stated general solution of (1.3). The proof is completed.

From Lemma 2.1, we have that if the mass matrix $M$ is a Hermite-Hamilton matrix, then $M$ is not positive definite. If $M$ is symmetric positive definite and $K$ is a symmetric matrix, then (1.3) can be reformulated as
$$AX = X\Lambda, \qquad A = M^{-1}K. \qquad (2.25)$$
From [20], we know that $M^{-1}K$ is a diagonalizable matrix, all of whose eigenvalues are real. Thus $\Lambda$ is real and $X$ is of full column rank. Assume that $X$ is a real matrix, and let the singular value decomposition of $X$ be given by (2.26), with orthogonal outer factors. The solution of (2.25) can then be expressed in terms of an arbitrary matrix and an arbitrary diagonalizable matrix with real eigenvalues (see [21]).
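The claim that $M^{-1}K$ is diagonalizable with real eigenvalues (for symmetric positive definite $M$ and symmetric $K$) follows from the similarity $M^{-1}K = L^{-T}\,(L^{-1}KL^{-T})\,L^{T}$, where $L$ is the Cholesky factor of $M$ and the middle matrix is symmetric. A numerical sketch with random hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)                        # symmetric positive definite
K = rng.standard_normal((n, n)); K = (K + K.T) / 2 # symmetric

# M^{-1} K is similar to the symmetric matrix S = L^{-1} K L^{-T},
# hence diagonalizable with real spectrum.
L = np.linalg.cholesky(M)
S = np.linalg.solve(L, np.linalg.solve(L, K).T).T  # L^{-1} K L^{-T}
eig_sym = np.linalg.eigvalsh(S)
eig_MK = np.linalg.eigvals(np.linalg.solve(M, K))

print(np.allclose(np.sort(eig_MK.real), np.sort(eig_sym)))  # True
print(np.max(np.abs(eig_MK.imag)) < 1e-8)                   # True
```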

Let the free parameters in the preceding representation be chosen appropriately, with one factor an arbitrary nonsingular matrix. The solutions of (1.3) with respect to the unknown matrices $M$ and $K$ are presented in the following theorem.

Theorem 2.4 (see [21]). Given $X$ and the diagonal matrix $\Lambda$, let the singular value decomposition of $X$ be (2.26). Then the symmetric positive-definite solution $M$ and the symmetric solution $K$ of (1.3) can be expressed in terms of the factors of (2.26) and an arbitrary nonsingular matrix. This nonsingular matrix satisfies an additional matrix equation.

3. Solution of Problem 2

Lemma 3.1 (see [22]). Given that and , let Then if and only if .

For the given matrices $K_a$ and $M_a$, let

From Theorem 2.3, we know that the solution set of Problem 1 is nonempty. The following theorem gives the best approximation solution of Problem 2.

Theorem 3.2. Given $X$, $\Lambda$, and Hermitian matrices $K_a$ and $M_a$, Problem 2 has a unique solution $(\hat K, \hat M)$, which can be expressed as in (3.3), where the quantities involved are defined by (3.4).

Proof. It is easy to verify that the solution set of Problem 1 is a closed convex set. From the best approximation theorem, we know that there exists a unique solution in this set such that (1.4) holds. From Theorem 2.3 and the unitary invariance of the Frobenius norm, the objective function of (1.4) can be rewritten in terms of the free parameter matrices of Theorem 2.3, so that solving Problem 2 is equivalent to a reduced minimization problem over those parameters.
Applying the unitary invariance of the Frobenius norm once more, the reduced problem splits into independent subproblems.
It is not difficult to see that each subproblem attains its minimum. According to Lemma 3.1 and (3.10), we obtain a matrix equation whose solution determines the minimizing parameter matrices; again from Lemma 3.1, the objective attains its minimum at these values. Then the unique solution of Problem 2 given by (3.3) is obtained.

Now, we give an algorithm to compute the optimal approximate solution of Problem 2.

Algorithm 3.
(1) Input $X$, $\Lambda$, $K_a$, and $M_a$.
(2) Compute the partition of $D^*X$ according to (2.3).
(3) Find the singular value decompositions according to (2.4).
(4) Calculate the quantities in (3.4).
(5) Compute $(\hat K, \hat M)$ by (3.3).

Example 1. Let the matrices $X$, $\Lambda$, $K_a$, and $M_a$ be given by
From Algorithm 3, we obtain the unique solution $(\hat K, \hat M)$ of Problem 2, and the corresponding approximation errors are easily calculated.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants 10901056 and 10971070) and the Shanghai Natural Science Foundation (Grant 09ZR1408700). The authors would like to thank the referees for their valuable comments and suggestions.