Abstract

The inverse eigenvalue problem of constructing a symmetric positive semidefinite matrix $D$ (written $D \geq 0$) and a real skew-symmetric matrix $G$ (i.e., $G^T = -G$) of order $n$ for the quadratic pencil $Q(\lambda) = \lambda^2 M_a + \lambda (D + G) + K_a$, where $M_a > 0$ and $K_a \geq 0$ are given analytical mass and stiffness matrices, so that $Q(\lambda)$ has a prescribed subset of eigenvalues and eigenvectors, is considered. Necessary and sufficient conditions under which this quadratic inverse eigenvalue problem is solvable are specified.

1. Introduction

Vibrating structures such as beams, buildings, bridges, highways, and large space structures are distributed parameter systems. While it is desirable to solve a vibration problem in its natural setting of distributed parameter systems, appropriate computational methods are often lacking; in practice, a distributed parameter system is therefore first discretized to a matrix second-order model (referred to as an analytical model) using finite element or finite difference techniques, and an approximate solution is then obtained from the solution of the problem in the analytical model. A matrix second-order model of the free motion of a vibrating system is a system of differential equations of the form

\[
M_a \ddot{x}(t) + \left(D_a + G_a\right)\dot{x}(t) + K_a x(t) = 0, \tag{1.1}
\]
where $M_a$, $D_a$, $G_a$, and $K_a$ are, respectively, the analytical mass, damping, gyroscopic, and stiffness matrices.

The system represented by (1.1) is called a damped gyroscopic system. The gyroscopic matrix $G_a$ is always skew-symmetric and, in general, the mass matrix $M_a$ is symmetric positive definite while $D_a$ and $K_a$ are symmetric positive semidefinite; such a system is called a symmetric definite system. If the gyroscopic force is not present, the system is called nongyroscopic.

It is well known that all solutions of the differential equation (1.1) can be obtained via the algebraic equation

\[
\left(\lambda^2 M_a + \lambda\left(D_a + G_a\right) + K_a\right) x = 0. \tag{1.2}
\]
Complex numbers $\lambda$ and nonzero vectors $x$ for which this relation holds are, respectively, the eigenvalues and eigenvectors of the system. The "forward" problem is, of course, to find the eigenvalues and eigenvectors when the coefficient matrices are given. Many authors have devoted themselves to this kind of problem and a series of good results have been obtained (see, e.g., [1–7]). Generally speaking, the natural frequencies and mode shapes (eigenvalues and eigenvectors) of an analytical model described by (1.2) very often do not match well with the experimentally measured frequencies and mode shapes obtained from a real-life vibrating structure. Thus, a vibration engineer needs to update the theoretical analytical model to ensure its validity for future use. In the analytical model (1.1) for structural dynamics, the mass and stiffness are, in general, clearly defined by physical parameters. However, the effect of damping and Coriolis forces on structural dynamic systems is not well understood because it is a purely dynamic property that cannot be measured statically. Our main interest in this paper is the corresponding inverse problem: given partially measured information about eigenvalues and eigenvectors, we reconstruct the damping and gyroscopic matrices to produce an adjusted analytical model with modal properties that closely match the experimental modal data. Recently, quadratic inverse eigenvalue problems over the complex field have been well studied and there now exists a wealth of information. Many papers have been written (see, e.g., [8–15]), and a complete book [16] has been devoted to the subject. In the present paper we consider an inverse problem related to damped gyroscopic second-order systems.
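For readers who wish to experiment numerically, the forward problem (1.2) can be solved by linearizing the quadratic pencil to a generalized eigenvalue problem. The following is a minimal SciPy sketch (the function name and the choice of companion form are ours, not part of the paper):

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(M, C, K):
    """Solve (lambda^2 M + lambda C + K) x = 0 by a companion linearization.

    Returns the 2n eigenvalues and the corresponding n-vectors x
    (the upper blocks of the linearized eigenvectors).
    """
    n = M.shape[0]
    # First companion form: A z = lambda B z with z = [x; lambda x].
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), M]])
    evals, evecs = eig(A, B)
    return evals, evecs[:n, :]
```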

Problem P
Given a pair of matrices $(\Lambda, X)$ in the form
\[
\Lambda = \operatorname{diag}\left\{\lambda_1, \lambda_2, \ldots, \lambda_{2l-1}, \lambda_{2l}, \lambda_{2l+1}, \ldots, \lambda_p\right\} \in \mathbf{C}^{p \times p}, \tag{1.3}
\]
\[
X = \left[x_1, x_2, \ldots, x_{2l-1}, x_{2l}, x_{2l+1}, \ldots, x_p\right] \in \mathbf{C}^{n \times p}, \tag{1.4}
\]
where $\Lambda$ and $X$ are closed under complex conjugation in the sense that $\lambda_{2j} = \bar{\lambda}_{2j-1} \in \mathbf{C}$, $x_{2j} = \bar{x}_{2j-1} \in \mathbf{C}^n$ for $j = 1, \ldots, l$, and $\lambda_k \in \mathbf{R}$, $x_k \in \mathbf{R}^n$ for $k = 2l+1, \ldots, p$, find a symmetric positive semidefinite matrix $D$ and a real skew-symmetric matrix $G$ that satisfy the following equation:
\[
M_a X \Lambda^2 + (D + G) X \Lambda + K_a X = 0. \tag{1.5}
\]
In other words, each pair $(\lambda_t, x_t)$, $t = 1, \ldots, p$, is an eigenpair of the quadratic pencil
\[
Q(\lambda) = \lambda^2 M_a + \lambda (D + G) + K_a, \tag{1.6}
\]
where $M_a > 0$ and $K_a \geq 0$ are given analytical mass and stiffness matrices.

The goal of this paper is to derive the necessary and sufficient conditions on the spectral information under which the inverse problem is solvable. Our proof is constructive, and a numerical algorithm can also be developed from it as a byproduct. A numerical example will be discussed in Section 3.

In this paper we will adopt the following notation. $\mathbf{C}^{m \times n}$ and $\mathbf{R}^{m \times n}$ denote the sets of all $m \times n$ complex and real matrices, respectively. $\mathbf{OR}^{n \times n}$ denotes the set of all orthogonal matrices in $\mathbf{R}^{n \times n}$. Capital letters $A, B, C, \ldots$ denote matrices, lower case letters denote column vectors, Greek letters denote scalars, $\bar{\alpha}$ denotes the conjugate of the complex number $\alpha$, $A^T$ denotes the transpose of the matrix $A$, $I_n$ denotes the $n \times n$ identity matrix, and $A^+$ denotes the Moore-Penrose generalized inverse of $A$. We write $A > 0$ ($A \geq 0$) if $A$ is real symmetric positive definite (positive semidefinite).

2. Solvability Conditions for Problem P

Let $\alpha_i = \operatorname{Re}(\lambda_i)$ (the real part of the complex number $\lambda_i$), $\beta_i = \operatorname{Im}(\lambda_i)$ (the imaginary part of the complex number $\lambda_i$), $y_i = \operatorname{Re}(x_i)$, $z_i = \operatorname{Im}(x_i)$ for $i = 1, 3, \ldots, 2l-1$. Define

\[
\widetilde{\Lambda} = \operatorname{diag}\left\{\begin{bmatrix} \alpha_1 & \beta_1 \\ -\beta_1 & \alpha_1 \end{bmatrix}, \ldots, \begin{bmatrix} \alpha_{2l-1} & \beta_{2l-1} \\ -\beta_{2l-1} & \alpha_{2l-1} \end{bmatrix}, \lambda_{2l+1}, \ldots, \lambda_p\right\} \in \mathbf{R}^{p \times p}, \tag{2.1}
\]
\[
\widetilde{X} = \left[y_1, z_1, \ldots, y_{2l-1}, z_{2l-1}, x_{2l+1}, \ldots, x_p\right] \in \mathbf{R}^{n \times p}, \tag{2.2}
\]
\[
C = D + G. \tag{2.3}
\]
Then the equation of (1.5) can be written equivalently as

\[
M_a \widetilde{X} \widetilde{\Lambda}^2 + C \widetilde{X} \widetilde{\Lambda} + K_a \widetilde{X} = 0, \tag{2.4}
\]
and the relations of $C$, $D$, and $G$ are

\[
D = \frac{1}{2}\left(C + C^T\right), \qquad G = \frac{1}{2}\left(C - C^T\right). \tag{2.5}
\]
In order to solve the equation of (2.4), we shall introduce some lemmas.
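Before turning to the lemmas, note that the transformation (2.1)–(2.3) is straightforward to carry out numerically. The following is a minimal NumPy sketch (the helper name realify and its argument convention are ours); it assumes the $l$ complex-conjugate pairs are stored first, as in (1.3)–(1.4):

```python
import numpy as np

def realify(Lam, X, l):
    """Convert a conjugate-closed pair (Lambda, X) as in (1.3)-(1.4) into the
    real pair (Lambda_tilde, X_tilde) of (2.1)-(2.2); l is the number of
    complex-conjugate eigenvalue pairs, stored first."""
    lams = np.diag(Lam)
    p = lams.size
    Lt = np.zeros((p, p))
    Xt = np.zeros((X.shape[0], p))
    for j in range(l):
        a, b = lams[2 * j].real, lams[2 * j].imag
        Lt[2 * j:2 * j + 2, 2 * j:2 * j + 2] = [[a, b], [-b, a]]
        Xt[:, 2 * j] = X[:, 2 * j].real
        Xt[:, 2 * j + 1] = X[:, 2 * j].imag
    # Real eigenvalues and eigenvectors are copied as they are.
    Lt[2 * l:, 2 * l:] = np.diag(lams[2 * l:].real)
    Xt[:, 2 * l:] = X[:, 2 * l:].real
    return Lt, Xt
```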

Lemma 2.1 (see [17]). If $A \in \mathbf{R}^{m \times l}$ and $F \in \mathbf{R}^{q \times l}$, then $ZA = F$ has a solution $Z \in \mathbf{R}^{q \times m}$ if and only if $F A^+ A = F$. In this case, the general solution of the equation can be described as $Z = F A^+ + L\left(I_m - A A^+\right)$, where $L \in \mathbf{R}^{q \times m}$ is an arbitrary matrix.
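As a numerical aside, Lemma 2.1 translates directly into NumPy via the Moore-Penrose inverse; the helper below (the function name and tolerance are our choices) returns one solution of $ZA = F$ for a chosen $L$:

```python
import numpy as np

def solve_ZA_eq_F(A, F, L=None, tol=1e-10):
    """Lemma 2.1: ZA = F is solvable iff F A^+ A = F; then the general
    solution is Z = F A^+ + L (I_m - A A^+) with L arbitrary."""
    Ap = np.linalg.pinv(A)
    if np.linalg.norm(F @ Ap @ A - F) > tol * max(1.0, np.linalg.norm(F)):
        raise ValueError("ZA = F has no solution")
    m = A.shape[0]
    L = np.zeros((F.shape[0], m)) if L is None else L
    return F @ Ap + L @ (np.eye(m) - A @ Ap)
```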

Lemma 2.2 (see [18, 19]). Let $A \in \mathbf{R}^{m \times m}$ and $B \in \mathbf{R}^{m \times l}$. Then
\[
Z B^T + B Z^T = A \tag{2.6}
\]
has a solution $Z \in \mathbf{R}^{m \times l}$ if and only if
\[
A = A^T, \qquad \left(I_m - B B^+\right) A \left(I_m - B B^+\right) = 0. \tag{2.7}
\]
When condition (2.7) is satisfied, a particular solution of (2.6) is
\[
Z_0 = \frac{1}{2} A \left(B^+\right)^T + \frac{1}{2}\left(I_m - B B^+\right) A \left(B^+\right)^T, \tag{2.8}
\]
and the general solution of (2.6) can be expressed as
\[
Z = Z_0 + 2V - V B^+ B - B V^T \left(B^+\right)^T - \left(I_m - B B^+\right) V B^+ B, \tag{2.9}
\]
where $V \in \mathbf{R}^{m \times l}$ is an arbitrary matrix.
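The particular solution (2.8) is also easy to form numerically. A minimal sketch under our naming, in which the assertion simply checks condition (2.7):

```python
import numpy as np

def particular_solution(A, B, tol=1e-10):
    """Lemma 2.2: a particular solution Z0 of Z B^T + B Z^T = A, eq. (2.8),
    valid when A = A^T and (I_m - B B^+) A (I_m - B B^+) = 0."""
    m = B.shape[0]
    Bp = np.linalg.pinv(B)
    N = np.eye(m) - B @ Bp          # orthogonal projector onto null(B^T)
    assert np.linalg.norm(A - A.T) < tol and np.linalg.norm(N @ A @ N) < tol
    Z0 = 0.5 * A @ Bp.T + 0.5 * N @ A @ Bp.T
    return Z0                       # then Z0 @ B.T + B @ Z0.T recovers A
```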

Lemma 2.3 (see [20]). Let $H = \left[H_{ij}\right]$ be a real symmetric matrix partitioned into $2 \times 2$ blocks, where $H_{11}$ and $H_{22}$ are square submatrices. Then $H$ is a symmetric positive semidefinite matrix if and only if
\[
H_{11} \geq 0, \qquad H_{22} - H_{21} H_{11}^+ H_{12} \geq 0, \qquad \operatorname{rank}\left(H_{11}\right) = \operatorname{rank}\left(\left[H_{11}, H_{12}\right]\right). \tag{2.10}
\]
Lemma 2.3 directly results in the following lemma.

Lemma 2.4. Let $H = \left[H_{ij}\right] \in \mathbf{R}^{n \times n}$ be a real symmetric matrix partitioned into $2 \times 2$ blocks, where $H_{11} \in \mathbf{R}^{r \times r}$ is the known symmetric submatrix, and $H_{12}$, $H_{22}$ are two unknown submatrices. Then there exist matrices $H_{12}$, $H_{22}$ such that $H$ is a symmetric positive semidefinite matrix if and only if $H_{11} \geq 0$. Furthermore, all submatrices $H_{12}$, $H_{22}$ can be expressed as
\[
H_{12} = H_{11} Y, \qquad H_{22} = Y^T H_{11} Y + \bar{H}, \tag{2.11}
\]
where $Y \in \mathbf{R}^{r \times (n-r)}$ is an arbitrary matrix and $\bar{H} \in \mathbf{R}^{(n-r) \times (n-r)}$ is an arbitrary symmetric positive semidefinite matrix.
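Lemma 2.4 amounts to the following completion recipe; a short sketch under our naming, in which Hbar plays the role of the arbitrary positive semidefinite block in (2.11):

```python
import numpy as np

def psd_completion(H11, n, Y=None, Hbar=None):
    """Lemma 2.4: given H11 >= 0 of size r x r, complete it to an n x n
    positive semidefinite H = [[H11, H11 Y], [Y^T H11, Y^T H11 Y + Hbar]]."""
    r = H11.shape[0]
    Y = np.zeros((r, n - r)) if Y is None else Y
    Hbar = np.zeros((n - r, n - r)) if Hbar is None else Hbar   # any PSD choice
    H12 = H11 @ Y
    return np.block([[H11, H12], [H12.T, Y.T @ H11 @ Y + Hbar]])
```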

By Lemma 2.1, the equation of (2.4) with respect to the unknown matrix $C \in \mathbf{R}^{n \times n}$ has a solution if and only if

\[
\left(M_a \widetilde{X} \widetilde{\Lambda}^2 + K_a \widetilde{X}\right)\left(\widetilde{X} \widetilde{\Lambda}\right)^+ \widetilde{X} \widetilde{\Lambda} = M_a \widetilde{X} \widetilde{\Lambda}^2 + K_a \widetilde{X}. \tag{2.12}
\]
In this case, the general solution of (2.4) can be written as

\[
C = C_0 + W\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right), \tag{2.13}
\]
where $W \in \mathbf{R}^{n \times n}$ is an arbitrary matrix and

\[
C_0 = -\left(M_a \widetilde{X} \widetilde{\Lambda}^2 + K_a \widetilde{X}\right)\left(\widetilde{X} \widetilde{\Lambda}\right)^+. \tag{2.14}
\]
From (2.5) and (2.13) we have

\[
W\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) + \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) W^T = 2D - C_0 - C_0^T. \tag{2.15}
\]
For a fixed symmetric positive semidefinite matrix $D$, we know from Lemma 2.2 that the equation of (2.15) has a solution $W \in \mathbf{R}^{n \times n}$ if and only if

\[
\left(\widetilde{X} \widetilde{\Lambda}\right)^T D\, \widetilde{X} \widetilde{\Lambda} = \frac{1}{2}\left(\widetilde{X} \widetilde{\Lambda}\right)^T\left(C_0 + C_0^T\right) \widetilde{X} \widetilde{\Lambda}. \tag{2.16}
\]
Let the singular value decomposition (SVD) of $\widetilde{X} \widetilde{\Lambda}$ be

\[
\widetilde{X} \widetilde{\Lambda} = U \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix} P^T = U_1 \Sigma P_1^T, \tag{2.17}
\]
where $U = \left[U_1, U_2\right] \in \mathbf{OR}^{n \times n}$, $P = \left[P_1, P_2\right] \in \mathbf{OR}^{p \times p}$, $\Sigma = \operatorname{diag}\left\{\sigma_1, \ldots, \sigma_r\right\} > 0$, and define

\[
U^T D U = \begin{bmatrix} D_{11} & D_{12} \\ D_{12}^T & D_{22} \end{bmatrix} \quad \text{with } D_{11} \in \mathbf{R}^{r \times r}. \tag{2.18}
\]
Then (2.16) becomes

\[
\Sigma D_{11} \Sigma = \frac{1}{2} \Sigma U_1^T\left(C_0 + C_0^T\right) U_1 \Sigma. \tag{2.19}
\]
Clearly, $D_{11} \geq 0$ if and only if

\[
U_1^T\left(C_0 + C_0^T\right) U_1 \geq 0, \tag{2.20}
\]
or equivalently,

\[
\left(\widetilde{X} \widetilde{\Lambda}\right)^T\left(C_0 + C_0^T\right) \widetilde{X} \widetilde{\Lambda} \geq 0. \tag{2.21}
\]
According to Lemma 2.4, if condition (2.21) holds, then there is a family of symmetric positive semidefinite matrices

\[
D = U \begin{bmatrix} D_{11} & D_{11} Y \\ Y^T D_{11} & Y^T D_{11} Y + \bar{H} \end{bmatrix} U^T, \tag{2.22}
\]
where $D_{11} = \frac{1}{2} U_1^T\left(C_0 + C_0^T\right) U_1$, $Y \in \mathbf{R}^{r \times (n-r)}$ is an arbitrary matrix, and $\bar{H} \in \mathbf{R}^{(n-r) \times (n-r)}$ is an arbitrary symmetric positive semidefinite matrix, satisfying the equation of (2.16).
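Numerically, $C_0$ and the two solvability conditions derived above are straightforward to evaluate with the pseudoinverse. The sketch below (the helper name, the tolerance, and the eigenvalue test for (2.21) are our choices) writes Xt and Lt for $\widetilde{X}$ and $\widetilde{\Lambda}$:

```python
import numpy as np

def check_conditions(Ma, Ka, Xt, Lt, tol=1e-8):
    """Compute C0 of (2.14) and test the two solvability conditions:
    (2.12)  (Ma Xt Lt^2 + Ka Xt)(Xt Lt)^+ (Xt Lt) = Ma Xt Lt^2 + Ka Xt,
    (2.21)  (Xt Lt)^T (C0 + C0^T) (Xt Lt) >= 0."""
    XL = Xt @ Lt
    XLp = np.linalg.pinv(XL)
    F = -(Ma @ XL @ Lt + Ka @ Xt)                 # -(Ma Xt Lt^2 + Ka Xt)
    cond212 = np.linalg.norm(F @ XLp @ XL - F) <= tol * max(1.0, np.linalg.norm(F))
    C0 = F @ XLp
    S = XL.T @ (C0 + C0.T) @ XL                   # symmetric p x p matrix
    cond221 = np.min(np.linalg.eigvalsh(0.5 * (S + S.T))) >= -tol
    return C0, cond212, cond221
```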

Applying Lemma 2.2 again to the equation of (2.15) yields

\[
W = W_0 + 2V - V\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) - \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) V^T\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+ V\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right), \tag{2.23}
\]
where

\[
W_0 = \frac{1}{2}\left(2D - C_0 - C_0^T\right)\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) + \frac{1}{2}\, \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\left(2D - C_0 - C_0^T\right)\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) \tag{2.24}
\]
is a particular solution of (2.15) with $D$ the same as in (2.22), and $V \in \mathbf{R}^{n \times n}$ is an arbitrary matrix.

Since $C_0\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) = 0$, it follows from (2.13) and (2.23) that

\[
\begin{aligned}
G &= \frac{1}{2}\left(C - C^T\right)\\
&= \frac{1}{2}\left(C_0 - C_0^T\right) + \frac{1}{2}\left[W_0\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) - \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) W_0^T\right] + \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right)\left(V - V^T\right)\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right)\\
&= \frac{1}{2}\left(C_0 - C_0^T\right) + \frac{1}{2}\left[\left(2D - C_0^T\right)\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) - \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right)\left(2D - C_0\right)\right] + \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right)\left(V - V^T\right)\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right)\\
&= G_0 + \left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) J\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right), \tag{2.25}
\end{aligned}
\]
where

\[
G_0 = \frac{1}{2}\left(C_0 - C_0^T\right) + \frac{1}{2}\left(2D - C_0^T\right)\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right) - \frac{1}{2}\left(I_n - \widetilde{X} \widetilde{\Lambda}\left(\widetilde{X} \widetilde{\Lambda}\right)^+\right)\left(2D - C_0\right), \tag{2.26}
\]
and $J$ is an arbitrary skew-symmetric matrix.
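Given $C_0$, a matrix $D$ from (2.22), and the data $\widetilde{X}\widetilde{\Lambda}$ (written XL below), the gyroscopic part follows directly from (2.25) and (2.26). A minimal sketch with our naming, where J = None corresponds to the particular choice $J = 0$:

```python
import numpy as np

def build_G(C0, D, XL, J=None):
    """G = G0 + N J N as in (2.25)-(2.26), with N = I - XL (XL)^+ and
    J an arbitrary skew-symmetric matrix."""
    n = C0.shape[0]
    N = np.eye(n) - XL @ np.linalg.pinv(XL)
    G0 = 0.5 * (C0 - C0.T) + 0.5 * (2 * D - C0.T) @ N - 0.5 * N @ (2 * D - C0)
    J = np.zeros((n, n)) if J is None else J    # caller must ensure J.T == -J
    return G0 + N @ J @ N
```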

We have thus proved the following result.

Theorem 2.5. Let $M_a > 0$, $K_a \geq 0$, and let the matrix pair $(X, \Lambda) \in \mathbf{C}^{n \times p} \times \mathbf{C}^{p \times p}$ be given as in (1.3) and (1.4). Separate the matrices $\Lambda$ and $X$ into real and imaginary parts, resulting in $\widetilde{\Lambda}$ and $\widetilde{X}$ expressed as in (2.1) and (2.2). Let the SVD of $\widetilde{X} \widetilde{\Lambda}$ be (2.17). Then Problem P is solvable if and only if conditions (2.12) and (2.21) are satisfied, in which case $D$ and $G$ are given, respectively, by (2.22) and (2.25).

Note that when $\operatorname{rank}(\widetilde{X} \widetilde{\Lambda}) = n$, that is, when $\widetilde{X} \widetilde{\Lambda}$ has full row rank, the arbitrary matrices $Y$ and $\bar{H}$ in the equation of (2.22) disappear; in this case, $D$ is uniquely determined, and so is $G$. Thus, we have the following corollary.

Corollary 2.6. Under the same assumptions as in Theorem 2.5, suppose that $\operatorname{rank}(\widetilde{X} \widetilde{\Lambda}) = n$ and that condition (2.12) and $C_0 + C_0^T \geq 0$ are satisfied. Then there exist unique matrices $D$ and $G$ such that (1.5) holds. Furthermore, $D$ and $G$ can be expressed as
\[
D = \frac{1}{2}\left(C_0 + C_0^T\right), \qquad G = \frac{1}{2}\left(C_0 - C_0^T\right). \tag{2.27}
\]

3. A Numerical Example

Based on Theorem 2.5 we can state the following algorithm.

Algorithm 3.1. An algorithm for solving Problem P.
(1) Input $M_a$, $K_a$, $\Lambda$, $X$.
(2) Separate the matrices $\Lambda$ and $X$ into real and imaginary parts, resulting in $\widetilde{\Lambda}$ and $\widetilde{X}$ given as in (2.1) and (2.2).
(3) Compute the SVD of $\widetilde{X} \widetilde{\Lambda}$ according to (2.17).
(4) If (2.12) and (2.21) hold, then continue; otherwise, go to (1).
(5) Choose matrices $Y \in \mathbf{R}^{r \times (n-r)}$, $\bar{H} \in \mathbf{R}^{(n-r) \times (n-r)}$ with $\bar{H} \geq 0$, and $J \in \mathbf{R}^{n \times n}$ with $J^T = -J$.
(6) According to (2.22) and (2.25), calculate $D$ and $G$.
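The following self-contained NumPy sketch puts the steps of Algorithm 3.1 together (all function and variable names, the tolerance, and the default choices $Y = 0$, $\bar{H} = 0$, $J = 0$ are ours); it returns one pair $(D, G)$ when the solvability conditions hold:

```python
import numpy as np

def solve_problem_P(Ma, Ka, Lam, X, l, Y=None, Hbar=None, J=None, tol=1e-8):
    """Algorithm 3.1 sketch: return (D, G) with D symmetric positive
    semidefinite and G skew-symmetric such that
    Ma X Lam^2 + (D + G) X Lam + Ka X = 0, provided conditions (2.12) and
    (2.21) hold; l is the number of complex-conjugate eigenpairs, which are
    assumed to be stored first in (Lam, X)."""
    n, p = X.shape
    # Step 2: real form (2.1)-(2.2).
    lams = np.diag(Lam)
    Lt = np.zeros((p, p))
    Xt = np.zeros((n, p))
    for j in range(l):
        a, b = lams[2 * j].real, lams[2 * j].imag
        Lt[2 * j:2 * j + 2, 2 * j:2 * j + 2] = [[a, b], [-b, a]]
        Xt[:, 2 * j] = X[:, 2 * j].real
        Xt[:, 2 * j + 1] = X[:, 2 * j].imag
    Lt[2 * l:, 2 * l:] = np.diag(lams[2 * l:].real)
    Xt[:, 2 * l:] = X[:, 2 * l:].real

    XL = Xt @ Lt
    XLp = np.linalg.pinv(XL)
    N = np.eye(n) - XL @ XLp                      # projector used throughout

    # Step 4a: condition (2.12) and the matrix C0 of (2.14).
    F = -(Ma @ XL @ Lt + Ka @ Xt)                 # -(Ma Xt Lt^2 + Ka Xt)
    if np.linalg.norm(F @ XLp @ XL - F) > tol * max(1.0, np.linalg.norm(F)):
        raise ValueError("condition (2.12) fails")
    C0 = F @ XLp

    # Steps 3 and 4b: SVD of Xt Lt and condition (2.21) via D11 >= 0.
    U, s, _ = np.linalg.svd(XL)
    r = int(np.sum(s > tol * s[0]))
    U1 = U[:, :r]
    D11 = 0.5 * U1.T @ (C0 + C0.T) @ U1
    if np.min(np.linalg.eigvalsh(D11)) < -tol:
        raise ValueError("condition (2.21) fails")

    # Step 5: free parameters (defaults give one particular solution).
    Y = np.zeros((r, n - r)) if Y is None else Y
    Hbar = np.zeros((n - r, n - r)) if Hbar is None else Hbar   # must be PSD
    J = np.zeros((n, n)) if J is None else J                    # must satisfy J.T == -J

    # Step 6: D from (2.22) and G from (2.25)-(2.26).
    D = U @ np.block([[D11, D11 @ Y],
                      [Y.T @ D11, Y.T @ D11 @ Y + Hbar]]) @ U.T
    G0 = 0.5 * (C0 - C0.T) + 0.5 * (2 * D - C0.T) @ N - 0.5 * N @ (2 * D - C0)
    return D, G0 + N @ J @ N
```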

Example 3.2. Consider a five-DOF system modelled analytically with mass and stiffness matrices given by
\[
M_a = \operatorname{diag}\{1, 2, 5, 4, 3\}, \qquad K_a = \begin{bmatrix} 100 & -20 & 0 & 0 & 0 \\ -20 & 120 & -35 & 0 & 0 \\ 0 & -35 & 80 & -12 & 0 \\ 0 & 0 & -12 & 95 & -40 \\ 0 & 0 & 0 & -40 & 124 \end{bmatrix}. \tag{3.1}
\]
The measured eigenvalue and eigenvector matrices $\Lambda$ and $X$ are given by
\[
\Lambda = \operatorname{diag}\left\{-1.7894 + 7.6421i,\; -1.7894 - 7.6421i,\; -1.6521 + 3.9178i,\; -1.6521 - 3.9178i\right\},
\]
\[
X = \begin{bmatrix}
0.1696 + 0.6869i & 0.1696 - 0.6869i & 0.0245 - 0.0615i & 0.0245 + 0.0615i \\
0.3906 + 0.5733i & 0.3906 - 0.5733i & 0.0820 - 0.2578i & 0.0820 + 0.2578i \\
0.0210 - 0.1166i & 0.0210 + 0.1166i & 0.3025 - 0.5705i & 0.3025 + 0.5705i \\
0.0389 + 0.0079i & 0.0389 - 0.0079i & 0.5205 + 0.2681i & 0.5205 - 0.2681i \\
0.0486 + 0.0108i & 0.0486 - 0.0108i & 0.1806 + 0.3605i & 0.1806 - 0.3605i
\end{bmatrix}. \tag{3.2}
\]
According to Algorithm 3.1, it is calculated that conditions (2.12) and (2.21) hold. Thus, by choosing
\[
Y = \begin{bmatrix} 0.3742 & 0.3062 & 0.3707 & 0.7067 \end{bmatrix}^T, \qquad \bar{H} = 10, \qquad
J = \begin{bmatrix}
0 & 0.4512 & 0.1879 & 0.0747 & 0.4468 \\
-0.4512 & 0 & 0.2956 & 0.0395 & 0.0506 \\
-0.1879 & -0.2956 & 0 & 0.6044 & 0.5844 \\
-0.0747 & -0.0395 & -0.6044 & 0 & 0.1974 \\
-0.4468 & -0.0506 & -0.5844 & -0.1974 & 0
\end{bmatrix}, \tag{3.3}
\]
we obtain
\[
D = \begin{bmatrix}
10.8255 & 8.5715 & 4.6840 & 0.0327 & 7.7270 \\
8.5715 & 15.9097 & 2.6332 & 1.2234 & 11.2417 \\
4.6840 & 2.6332 & 9.2185 & 0.5837 & 0.1449 \\
0.0327 & 1.2234 & 0.5837 & 13.8235 & 3.2361 \\
7.7270 & 11.2417 & 0.1449 & 3.2361 & 26.5027
\end{bmatrix}, \qquad
G = \begin{bmatrix}
0.0000 & 1.0438 & 3.7921 & 0.5470 & 8.6740 \\
-1.0438 & 0.0000 & 6.6747 & 0.7391 & 10.3262 \\
-3.7921 & -6.6747 & 0.0000 & 7.0774 & 6.2101 \\
-0.5470 & -0.7391 & -7.0774 & 0.0000 & 12.6496 \\
-8.6740 & -10.3262 & -6.2101 & -12.6496 & 0.0000
\end{bmatrix}. \tag{3.4}
\]
We define the residual as
\[
\operatorname{res}\left(\lambda_i, x_i\right) = \left\|\left(\lambda_i^2 M_a + \lambda_i (D + G) + K_a\right) x_i\right\|, \tag{3.5}
\]
where $\|\cdot\|$ is the Frobenius norm, and the numerical results are shown in Table 1.
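The residuals (3.5) can be reproduced with a few lines of NumPy (the function name is ours); here Lam and X are the complex measured data and D, G are the computed matrices:

```python
import numpy as np

def residuals(Ma, D, G, Ka, Lam, X):
    """res(lambda_i, x_i) = ||(lambda_i^2 Ma + lambda_i (D + G) + Ka) x_i||
    for every measured eigenpair, i.e. the quantity (3.5)."""
    return [np.linalg.norm((lam ** 2) * Ma @ x + lam * (D + G) @ x + Ka @ x)
            for lam, x in zip(np.diag(Lam), X.T)]
```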