
Research Article | Open Access

Volume 2014 |Article ID 703178 | 9 pages | https://doi.org/10.1155/2014/703178

# Solutions of a Quadratic Inverse Eigenvalue Problem for Damped Gyroscopic Second-Order Systems

Accepted: 15 Dec 2013
Published: 21 Jan 2014

#### Abstract

Given k pairs of complex numbers and vectors (closed under conjugation), we consider the inverse quadratic eigenvalue problem of constructing n × n real matrices M, D, G, and K, where M, D, and K are symmetric and G is skew-symmetric, so that the quadratic pencil Q(λ) = λ²M + λ(D + G) + K has the given k pairs as eigenpairs. First, we construct a general solution to this problem with k ≤ n. Then, with the special properties D = 0 and K < 0, we construct a particular solution. Numerical results illustrate these solutions.

#### 1. Introduction

Vibrating structures such as buildings, bridges, highways, and airplanes are distributed parameter systems [1]. Very often a distributed parameter system is first discretized to a matrix second-order model using finite element or finite difference techniques, and an approximate solution is then obtained for the discretized model. Associated with the matrix second-order model is the eigenvalue problem of the quadratic pencil

Q(λ) = λ²M + λ(D + G) + K,  (1)

where M, D, G, and K are, respectively, the mass, damping, gyroscopic, and stiffness matrices.

The system represented by (1) is called a damped gyroscopic system. In general, the gyroscopic matrix G is always skew-symmetric, the damping matrix D and the stiffness matrix K are symmetric, the mass matrix M is symmetric positive definite, and they are all real matrices. If G = 0, the system is called a damped nongyroscopic system, and if D = 0, the system is called an undamped gyroscopic system.

The damped gyroscopic system has been widely studied in two aspects: the quadratic eigenvalue problem (QEP) and the quadratic inverse eigenvalue problem (QIEP). The QEP involves finding scalars λ and nonzero vectors x, called the eigenvalues and eigenvectors of the system, satisfying the algebraic equation Q(λ)x = 0 when the coefficient matrices are given. Many authors have devoted themselves to this kind of problem, and a series of good results have been obtained (see, e.g., [2–8]). The QIEP determines or estimates the parameters of the system from observed or expected eigeninformation of Q(λ). Our main interest in this paper is the corresponding inverse problem: given partially measured information about eigenvalues and eigenvectors, we reconstruct matrices M, D, G, and K satisfying several conditions, so that Q(λ) has the given eigenpairs. The problem we consider is stated as follows.

Problem 1. Given an eigeninformation pair (Λ, X), where Λ ∈ R^(k×k) and X ∈ R^(n×k) are given as in (2) and (3) with k ≤ n, find real n × n matrices M, D, G, and K, with M symmetric positive definite, D and K symmetric, and G skew-symmetric, so that

M X Λ² + (D + G) X Λ + K X = 0.  (4)
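The eigenpair relation of Problem 1 is easy to check numerically. The sketch below is a minimal illustration under the standard notation Q(λ) = λ²M + λ(D + G) + K; the 2 × 2 matrices are invented for illustration only, and a genuine eigenpair is obtained by linearizing the forward problem first.

```python
import numpy as np

# Hypothetical 2x2 coefficient matrices with the required structure.
M = np.array([[2.0, 0.0], [0.0, 1.0]])   # symmetric positive definite mass
D = np.array([[1.0, 0.5], [0.5, 1.0]])   # symmetric damping
G = np.array([[0.0, 0.3], [-0.3, 0.0]])  # skew-symmetric gyroscopic
K = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric stiffness

# Companion linearization of the pencil Q(lam) = lam^2*M + lam*(D+G) + K:
# an eigenvector of A has the form [x; lam*x].
n = 2
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, D + G)]])
evals, evecs = np.linalg.eig(A)
lam, x = evals[0], evecs[:n, 0]

# Residual of the quadratic eigenvalue relation Q(lam) x = 0.
res = (lam**2 * M + lam * (D + G) + K) @ x
print(np.linalg.norm(res))  # near machine precision
```

The same residual check is what Section 5 tabulates for the constructed solutions.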

In [9], Gohberg et al. developed a powerful GLR theory to solve the QIEP of the undamped gyroscopic system. In [10], Chu and Xu developed an elegant procedure to obtain a real-valued spectral decomposition of the damped nongyroscopic system. Jia and Wei [11] then derived a real-valued spectral decomposition of the undamped gyroscopic system. However, [10, 11] both need all the eigeninformation of Q(λ) to obtain the parameters, and it is often impractical or impossible to obtain complete spectral information. Thus, it becomes very interesting to consider a QIEP with only a subset of eigenpairs known.

In [12], Kuo et al. constructed the solutions of the QIEP of the damped nongyroscopic system with given eigenpairs. For the same system, Cai et al. [13] solved the QIEP with given eigenpairs. Meanwhile, for the damped gyroscopic system, Yuan [14] solved the QIEP with given eigenpairs, constructing a symmetric positive semidefinite matrix and a skew-symmetric matrix while treating the remaining coefficients as given matrices. It therefore becomes challenging to construct all four coefficient matrices of the damped gyroscopic system (1) from given eigenpairs, and this is the goal of this paper.

This paper is organized as follows. In Section 2, we establish the solvability theory of Problem 1. In Section 3, we develop a simple method to compute a particular solution to Problem 1 with D = 0. Moreover, for D = 0 and K < 0, a simple algorithm is developed to compute a solution in Section 4. Some numerical results are presented in Section 5 to illustrate our main results. In the last section, some conclusions and acknowledgments are given.

Throughout this paper, we use capital letters to denote matrices and lowercase (bold) letters to denote scalars (vectors). A^T denotes the transpose of the matrix A, I_n denotes the n × n identity matrix, and A^+ denotes the Moore–Penrose generalized inverse of A. We write A > 0 (A ≥ 0) if A is real symmetric positive definite (positive semidefinite). The spectrum of A is denoted by σ(A).
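The Moore–Penrose generalized inverse A^+ used throughout the solvability conditions can be computed numerically; the sketch below (with an invented rank-deficient matrix) checks the four Penrose conditions that characterize A^+ uniquely.

```python
import numpy as np

# A rank-1 matrix, so no ordinary inverse exists.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
Ap = np.linalg.pinv(A)   # Moore-Penrose generalized inverse A^+

c1 = np.allclose(A @ Ap @ A, A)        # A A^+ A = A
c2 = np.allclose(Ap @ A @ Ap, Ap)      # A^+ A A^+ = A^+
c3 = np.allclose((A @ Ap).T, A @ Ap)   # A A^+ is symmetric
c4 = np.allclose((Ap @ A).T, Ap @ A)   # A^+ A is symmetric
print(c1, c2, c3, c4)
```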

For simplicity, we make the following assumptions.

(A1) The eigenvector matrix X in Problem 1 has full column rank, that is, rank(X) = k.
(A2) The eigenvalue matrix Λ in Problem 1 has only simple eigenvalues, that is, λ_i ≠ λ_j for i ≠ j.
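Both assumptions are cheap to verify for prescribed data. A minimal sketch, with an invented eigeninformation pair (n = 3, k = 2, closed under conjugation):

```python
import numpy as np

# Hypothetical prescribed data: X has n = 3 rows and k = 2 columns.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
lam = np.array([-0.5 + 2.0j, -0.5 - 2.0j])   # closed under conjugation

a1 = np.linalg.matrix_rank(X) == X.shape[1]  # (A1): rank(X) = k
a2 = len(np.unique(lam)) == lam.size         # (A2): pairwise distinct
print(a1, a2)
```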

Remark 2. For the case that 0 ∈ σ(Λ), using the assumption of simple eigenvalues, we can partition Λ so that one diagonal block has no zero eigenvalue, and then carry out the discussion with that block. So, in this paper, we only consider the case that Λ has no zero eigenvalue.

#### 2. General Solution of the Problem

In this section, we give a general solution to Problem 1 for a given matrix pair (Λ, X) as in (2) and (3). We begin by introducing some lemmas.

Lemma 3 (see [15]). Let and ; then has a solution if and only if where is the Moore-Penrose generalized inverse of .
When condition (6) is satisfied, the general solution of (5) is where is arbitrary and is constrained only by the symmetry requirement that

Lemma 3 directly results in the following lemma.

Lemma 4. Let be a nonsingular matrix and ; then has a solution if and only if in which case the general solution is , where is an arbitrary matrix.

Given the matrix pair (Λ, X) as in Problem 1, let X = QR be the QR factorization of X, where Q is orthogonal and R is upper triangular. We may require that R has positive diagonal entries, since X is of full column rank.
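A numerical QR routine does not automatically deliver a positive diagonal for R, but a column-wise sign flip restores it; this is valid because X has full column rank. A small sketch with an invented X (note this uses the reduced QR, whereas the paper's Q is a full orthogonal matrix):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
Q, R = np.linalg.qr(X)                # reduced QR: Q is 3x2, R is 2x2

# Flip signs so that diag(R) > 0; Q*diag(s) times diag(s)*R leaves
# the product unchanged since s_j^2 = 1.
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R
print(np.allclose(Q @ R, X), np.all(np.diag(R) > 0))
```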

With this transformation, finding M, D, G, and K which satisfy (4) is equivalent to finding transformed matrices which satisfy (12), and the relations between the two sets of matrices are given in (13).

By the assumptions, the inverse required below exists. Denoting the partitioned matrices as in (14)–(16), where the blocks have conforming sizes, we obtain the following main theorem.

Theorem 5. Let the matrices be defined as in (14)–(16); then there exist real matrices M, D, G, and K satisfying (4) if and only if:

(i) is arbitrary symmetric positive definite;
(ii) , , are arbitrary;
(iii) is arbitrary symmetric;
(iv) , where is arbitrary symmetric;
(v) ;
(vi) .

Furthermore, the remaining matrices can be expressed as in (13).

Proof. Necessity. From (14)–(16), we know , , and ; substituting them and (11) into (12), we have
Thus, finding , , and which satisfy (12) is equivalent to finding the submatrices , , , , , and which satisfy (17) and (18). Clearly, it follows from (18) that is determined by where and are arbitrary.
As and are required to be symmetric positive definite and symmetric, respectively, so are and in (14) and (16). From (17) it follows that
Let be an arbitrary symmetric positive definite matrix. We need to find such that is symmetric; that is, it satisfies After rearrangement, (21) becomes Because and is nonsingular, we can get from Lemma 4 that where is arbitrary symmetric. Substituting (23) into (20) yields (v). Furthermore, and can be expressed as in (13).
Sufficiency. From the description of (i)–(vi), we can obtain that (12) holds; thus (4) holds with

Remark 6. The general solution to Problem 1 with G = 0 and prescribed eigenpairs was given in [12]; here, we generalize that solution to the gyroscopic case G ≠ 0.

Remark 7. The more general case k > n is complicated, and we will discuss it in future work. However, here we provide a simple remedy: we can select linearly independent columns of X and the corresponding eigenvalues to construct a new pair (Λ, X), and then carry out the discussion with them.

Remark 8. When k = n, by Theorem 5, the general solution of Problem 1 is given below, where the parameter matrices can be arbitrarily chosen and one of them is arbitrary symmetric.

Using Theorem 5, we can construct a solution to Problem 1 as follows.

Algorithm 9. An algorithm for solving Problem 1 is proposed as follows.

(1) Input Λ and X; compute the QR factorization of X according to (11), and compute .
(2) Choose a symmetric positive definite matrix and a symmetric matrix arbitrarily. Compute and by (iv) and (v) in Theorem 5, respectively.
(3) Choose arbitrary and , and compute by (vi) in Theorem 5, and .
(4) Choose a symmetric positive definite matrix ; compute .
(5) Choose arbitrary matrices and and a symmetric matrix , and form the required matrices, where Q is given by (11). Compute the coefficient matrices by (13).

#### 3. Particular Solutions with D = 0

As is well known, applications of the undamped gyroscopic system (i.e., D = 0) arise in many fields; for details, see [5]. In this section, we discuss the particular solutions of Problem 1 with D = 0 and prescribed eigenpairs. In this case, Q(λ) in (1) becomes

Q(λ) = λ²M + λG + K.  (28)

It is well known that the eigenvalues of (28) have a Hamiltonian structure; that is, they occur in quadruples {λ, λ̄, −λ, −λ̄}, possibly collapsing to real or purely imaginary pairs or single zero eigenvalues. In [11], Jia and Wei discussed the eigenvalues of Q(λ) in (28) and separated them into four categories. From assumption (A2), we know that k is even. Here we rewrite the given eigeninformation pair of Problem 1 as in (29)–(31), where the listed pairs are eigenpairs.
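The quadruple structure follows from Q(−λ) = Q(λ)^T when G is skew-symmetric and M, K are symmetric, so the spectrum is symmetric under λ ↦ −λ (the conjugate pairing is automatic for real matrices). A quick numerical check on a randomly generated undamped gyroscopic pencil:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Random pencil with the required structure: M spd, G skew, K symmetric.
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)
S = rng.standard_normal((n, n)); G = S - S.T
C = rng.standard_normal((n, n)); K = C + C.T

# Companion linearization of lam^2*M + lam*G + K.
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, G)]])
evals = np.linalg.eig(A)[0]

# For every eigenvalue lam, -lam is also (numerically) an eigenvalue.
paired = all(np.min(np.abs(evals + lam)) < 1e-6 for lam in evals)
print(paired)
```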

In this section, Problem 1 becomes the following problem.

Given an eigeninformation pair as in (29)–(31), find real matrices M, G, and K, with M symmetric positive definite, K symmetric, and G skew-symmetric, so that

M X Λ² + G X Λ + K X = 0.  (33)

As in Section 2, let the matrix in (34) be partitioned conforming with (14); we can easily verify that it satisfies the relations of Section 2, except for one additional property arising from D = 0. In the following theorem, we discuss the solvability of Problem 1 with D = 0.

Theorem 10. Let the matrices be defined as in (14), (16), and (34); then there exist real matrices M, G, and K satisfying (33) if and only if:

(i) is arbitrary symmetric positive definite;
(ii) and are arbitrary;
(iii) is arbitrary symmetric;
(iv) ;
(v) ;
(vi) ;

in which the parameter matrix is defined as in (35), with and being arbitrary real numbers.

Proof. Necessity. As in the proof of Theorem 5, we obtain (37), where and are arbitrary. We also have a relation satisfying (38). After rearrangement, (38) becomes (39). It is easily seen that (39) has a particular solution (40). Next, we consider the homogeneous equation (41). Substituting into (41), we get (42). Write the unknown in partitioned form conforming with (29). Then we observe (43). In this case, (43) can be rewritten as (44), in which ⊗ stands for the Kronecker product and vec(·) stands for the column vectorization of a matrix. By assumption (A2), the coefficient matrix is nonsingular; therefore the corresponding unknown block vanishes.
Now we discuss the structures of the matrices in question, which are skew-symmetric. For simplicity, we denote them by a single symbol. Then we need to solve (45). Since Λ has the form in (30), we can easily compute that the general solution of (45) has the form (46). Thus, the general solution of the homogeneous equation (41) has the form (47), with the parameter matrix defined in (35). This, together with (40), gives rise to the general solution (48) of (39). Substituting (48) into (37) yields (v).

Sufficiency. From the description of (i)–(vi), we can obtain that (33) holds.
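The rewriting step in the proof above relies on the identity vec(AXB) = (B^T ⊗ A) vec(X), where vec(·) stacks columns. A minimal numerical check of that identity with invented random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, X, B = (rng.standard_normal((3, 3)) for _ in range(3))

# vec is column-major stacking, so use Fortran ("F") order.
lhs = (A @ X @ B).flatten(order="F")           # vec(A X B)
rhs = np.kron(B.T, A) @ X.flatten(order="F")   # (B^T kron A) vec(X)
print(np.allclose(lhs, rhs))
```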

Remark 11. When k = n, by Theorem 10, the solution of Problem 1 with D = 0 is given below, where the parameter matrices can be arbitrarily chosen.

#### 4. Particular Solution with D = 0 and K < 0

In practice, the matrix K in Problem 1 with D = 0 is sometimes required to be symmetric negative definite [5]. In this section, we apply Theorem 10 to construct such a solution. We first prove the following lemma.

Lemma 12. For any given matrix defined in (35), we can construct a symmetric positive definite matrix so that the matrix K defined in Theorem 10 is symmetric negative definite.

Proof. Since , it is easy to see that in Theorem 10 is symmetric negative definite if and only if the matrix is symmetric negative definite.
By assumption (A2), we can first construct a symmetric positive definite matrix with the required property, and then use it to construct the desired matrix.
From (35) and (30), we write the matrix as in (52), with blocks given by (53). Here and are arbitrary real numbers. Take the candidate matrix as in (54) with (55). Using (53), if we choose the six free real numbers such that (56) holds, then the desired inequalities follow. Obviously, such real numbers can be easily chosen. Once they are determined, the required matrix can be chosen by (57). Furthermore, (58) holds.

Using Lemma 12, we can construct a particular solution to Problem 1 with D = 0 and K < 0 as follows.

Algorithm 13. An algorithm for solving Problem 1 with D = 0 and K < 0 is proposed as follows.

(1) Input Λ and X; compute the QR factorization of X according to (11), and compute .
(2) Choose the parameter matrix as in (35) arbitrarily and compute by (52) and (53).
(3) Construct a symmetric positive definite matrix by (54)–(57), compute by (iv), and compute by (v) in Theorem 10 or by (58).
(4) Choose arbitrary and , and compute by (vi) in Theorem 10, and .
(5) Choose a symmetric positive definite matrix and a symmetric negative definite matrix ; compute , .
(6) Choose an arbitrary skew-symmetric matrix , and form the required matrices, where Q is given by (11).

Remark 14. When k = n, we only need to choose the matrix by (54)–(56) and compute by (57); in this case it is the whole matrix. Then, using the same method described in Remark 8, we can obtain the particular solution of Problem 1 with D = 0 and K < 0.

#### 5. Numerical Examples

In this section, we present two numerical examples to illustrate the solutions constructed in Sections 2 and 4, respectively. For presentation, we report all numbers to 5 significant digits only, although all calculations are carried out in full precision.

Example 1. In this example, we use Algorithm 9 to construct the general solution of Problem 1. The partially prescribed eigeninformation as in (2)-(3) is given by the following eigenvalues and eigenvectors, taken from [12]:
It is easy to check that the matrix pair satisfies assumptions (A1) and (A2). According to Algorithm 9, by randomly choosing
we can figure out
It is easy to check that M is symmetric positive definite, D and K are symmetric, and G is skew-symmetric. We define the residual as

res = ‖M X Λ² + (D + G) X Λ + K X‖,

and the numerical results are shown in Table 1. This shows that Algorithm 9 for constructing the general solution of Problem 1 is effective.

Example 2. In this example, we use Algorithm 13 to construct a particular solution of Problem 1 with D = 0 and K < 0. The partially prescribed eigeninformation as in (29)–(31) is given by randomly generated eigenvalues and eigenvectors
It is easy to check that the matrix pair satisfies assumptions (A1) and (A2). According to Algorithm 13, by randomly choosing and choosing
we can figure out