Journal of Applied Mathematics, Volume 2014, Article ID 513513, 7 pages. http://dx.doi.org/10.1155/2014/513513
Research Article

## Extremal Inverse Eigenvalue Problem for a Special Kind of Matrices

1School of Sciences, Jiujiang University, Jiujiang 332005, China
2Faculty of Library, Jiujiang University, Jiujiang 332005, China

Received 11 June 2013; Accepted 13 December 2013; Published 5 February 2014

Copyright © 2014 Zhibing Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider the following inverse eigenvalue problem: to construct a special kind of matrix (real symmetric doubly arrow matrix) from the minimal and maximal eigenvalues of all its leading principal submatrices. The necessary and sufficient condition for the solvability of the problem is derived. Our results are constructive and they generate algorithmic procedures to construct such matrices.

#### 1. Introduction

Peng et al. [1] solved two inverse eigenvalue problems for symmetric arrow matrices, and a subsequent article [2] presented a correction to one of the problems stated in [1]. In a recent paper [3], Nazari and Beiranvand introduced an algorithm to construct symmetric quasi-antibidiagonal matrices having prescribed eigenvalues. Pickmann et al. [4] introduced an algorithm for an inverse eigenvalue problem on symmetric tridiagonal matrices. In this paper we introduce the symmetric doubly arrow matrix of the form (1). In the degenerate cases of the index in (1), the matrix of the form (1) reduces to a symmetric arrow matrix of the form (2):

This family of matrices appears in certain symmetric inverse eigenvalue and inverse Sturm–Liouville problems [5, 6], which arise in many applications [7–12], including modern control theory and vibration analysis [7, 8]. In this paper, we construct matrices of the form (1) from a special kind of spectral information which has only recently been considered. Since this matrix structure generalizes the well-known arrow matrices, we expect that it will also become of interest in applications.

We will denote by $I_j$ the identity matrix of order $j$; by $A_j$ the $j \times j$ leading principal submatrix of $A$; by $P_j(\lambda) = \det(\lambda I_j - A_j)$ the characteristic polynomial of $A_j$; and by $\lambda_1^{(j)} \leq \lambda_2^{(j)} \leq \cdots \leq \lambda_j^{(j)}$ the eigenvalues of $A_j$.

We want to solve the following problem.

Problem 1. Given the real numbers $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, find an $n \times n$ matrix $A$ of the form (1) such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are, respectively, the minimal and maximal eigenvalues of $A_j$, $j = 1, 2, \ldots, n$.
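The spectral data in Problem 1 can be generated numerically from any real symmetric matrix. The sketch below (Python with NumPy; the example matrix is hypothetical and does not reproduce the exact entry pattern of (1)) computes the minimal and maximal eigenvalues of every leading principal submatrix $A_j$:

```python
import numpy as np

# A small symmetric matrix standing in for the form (1);
# the entries here are illustrative only.
A = np.array([
    [1.0, 2.0, 0.0, 3.0],
    [2.0, 5.0, 0.0, 1.0],
    [0.0, 0.0, 4.0, 2.0],
    [3.0, 1.0, 2.0, 6.0],
])

def extremal_spectral_data(A):
    """Return (min, max) eigenvalue of every leading principal submatrix A_j."""
    n = A.shape[0]
    data = []
    for j in range(1, n + 1):
        w = np.linalg.eigvalsh(A[:j, :j])  # eigenvalues in ascending order
        data.append((w[0], w[-1]))
    return data

for j, (lo, hi) in enumerate(extremal_spectral_data(A), start=1):
    print(f"j={j}: min={lo:.6f}, max={hi:.6f}")
```

Note that `eigvalsh` returns the spectrum of a symmetric matrix in ascending order, so the first and last entries are exactly the extremal eigenvalues used in Problem 1.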

Our work is motivated by the results in [2], where the authors solved this kind of inverse eigenvalue problem for symmetric arrow matrices of the form (2).

Theorem 2 (see [2]). Let the real numbers $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, be given. Then there exists an $n \times n$ matrix of the form (2), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are, respectively, the minimal and maximal eigenvalues of its leading principal submatrix $A_j$, $j = 1, 2, \ldots, n$, if and only if $$\lambda_1^{(j)} \leq \lambda_1^{(j-1)} \leq \lambda_{j-1}^{(j-1)} \leq \lambda_j^{(j)}, \quad j = 2, 3, \ldots, n.$$

Theorem 3 (see [2]). Let the real numbers $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, be given. Then there exists a unique $n \times n$ matrix of the form (2), with positive off-diagonal entries, such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are, respectively, the minimal and the maximal eigenvalues of its leading principal submatrix $A_j$, $j = 1, 2, \ldots, n$, if and only if $$\lambda_1^{(j)} < \lambda_1^{(j-1)} \leq \lambda_{j-1}^{(j-1)} < \lambda_j^{(j)}, \quad j = 2, 3, \ldots, n.$$

In this paper, we show by a similar method that Theorems 2 and 3 also hold for symmetric doubly arrow matrices of the form (1).

The paper is organized as follows. In Section 2, we discuss some properties of the matrix $A$ in (1). In Section 3, we solve Problem 1 by giving a necessary and sufficient condition for the existence of the matrix $A$ in (1), and we also solve the case in which the matrix $A$ in Problem 1 is required to have all its entries positive. Finally, in Section 4 we give some examples to illustrate the results.

#### 2. Properties of the Matrix

Lemma 4. Let $A$ be a matrix of the form (1). Then the sequence of characteristic polynomials $\{P_j(\lambda)\}_{j=1}^{n}$ satisfies the recurrence relation (5).

Proof. The relation is easily verified by expanding the determinant $\det(\lambda I_j - A_j)$.

Lemma 5 (see [2]). Let $P(\lambda)$ be a monic polynomial of degree $n$ with all real zeros. If $\lambda_{\min}$ and $\lambda_{\max}$ are, respectively, the minimal and maximal zeros of $P$, then (1) if $\mu < \lambda_{\min}$, we have $(-1)^n P(\mu) > 0$; (2) if $\mu > \lambda_{\max}$, we have $P(\mu) > 0$.
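The sign conditions of Lemma 5 are easy to check numerically. A minimal sketch (the zeros below are arbitrary illustrative values, not data from the paper):

```python
import numpy as np

# Monic polynomial of degree n = 3 with prescribed real zeros.
zeros = np.array([-2.0, 0.5, 3.0])
p = np.poly(zeros)          # monic coefficient vector with these roots
n = len(zeros)

mu_left, mu_right = -5.0, 7.0   # mu_left < min zero, mu_right > max zero
sign_left = (-1) ** n * np.polyval(p, mu_left)   # Lemma 5, case (1)
sign_right = np.polyval(p, mu_right)             # Lemma 5, case (2)
print(sign_left > 0, sign_right > 0)             # both should be True
```

For an odd degree the polynomial is negative to the left of all its zeros, which is why the factor $(-1)^n$ is needed in case (1).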

Observe that, from the Cauchy interlacing property, the minimal and the maximal eigenvalues, $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ respectively, of each leading principal submatrix $A_j$, $j = 1, 2, \ldots, n$, of the matrix $A$ in (1) satisfy the relations (6) and (7).
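The Cauchy interlacing property holds for every real symmetric matrix, not only for the form (1). A quick numerical check on an arbitrary random symmetric matrix (hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2.0   # an arbitrary real symmetric matrix

for j in range(2, 6):
    inner = np.linalg.eigvalsh(A[:j - 1, :j - 1])  # spectrum of A_{j-1}
    outer = np.linalg.eigvalsh(A[:j, :j])          # spectrum of A_j
    # Cauchy interlacing: outer[i] <= inner[i] <= outer[i+1] for all i
    assert np.all(outer[:-1] <= inner + 1e-10)
    assert np.all(inner <= outer[1:] + 1e-10)
print("Cauchy interlacing verified for all leading principal submatrices")
```

In particular, the minimal eigenvalues are nonincreasing and the maximal eigenvalues nondecreasing in $j$, which is exactly the monotonicity used in (6) and (7).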

Lemma 6. Let $P_j(\lambda)$, $j = 1, 2, \ldots, n$, be the polynomials defined in (5), whose minimal and maximal zeros $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, respectively, satisfy the relations (6) and (7), and let (8) hold. Then (9) follows.

Proof. From Lemma 5, we have (10). Moreover, from (7), we have (11). The claim clearly follows from (10) and (11).

Lemma 7 (see [2]). Let $B$ be a matrix of the form (2) with positive off-diagonal entries. Let $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, respectively, be the minimal and maximal eigenvalues of the leading principal submatrix $B_j$, $j = 1, 2, \ldots, n$, of $B$. Then $\lambda_1^{(j)} < \lambda_1^{(j-1)}$ and $\lambda_{j-1}^{(j-1)} < \lambda_j^{(j)}$ for each $j = 2, \ldots, n$.

#### 3. Solution of Problem 1

The following theorem solves Problem 1. In particular, the theorem shows that condition (6) is necessary and sufficient for the existence of the matrix in (1).

Theorem 8. Let the real numbers $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, be given. Then there exists an $n \times n$ matrix $A$ of the form (1), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are, respectively, the minimal and maximal eigenvalues of its leading principal submatrix $A_j$, $j = 1, 2, \ldots, n$, if and only if $$\lambda_1^{(j)} \leq \lambda_1^{(j-1)} \leq \lambda_{j-1}^{(j-1)} \leq \lambda_j^{(j)}, \quad j = 2, 3, \ldots, n. \tag{13}$$

Proof. Let $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, satisfy (13). From Theorem 2, there exist leading principal submatrices $A_j$ with $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ as their minimal and maximal eigenvalues, respectively, for the arrow part of (1). Showing the existence of the remaining submatrices $A_j$ with $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ as their minimal and maximal eigenvalues, respectively, is equivalent to showing that the system of equations (15) has real solutions. If the determinant of the coefficient matrix of the system (15) is nonzero, then the system has unique solutions; in this case, this follows from Lemma 6. By solving the system (15) we obtain (17). Since the corresponding expression is nonnegative, the resulting entry is a real number, and therefore there exists a matrix $A$ with the required spectral properties.
Now we show that, if the determinant of the coefficient matrix vanishes, the system (15) still has a solution. We do this by induction, showing that the rank of the coefficient matrix equals the rank of the augmented matrix.
Consider first the base case. If the determinant vanishes, then, from Lemma 5, this is equivalent to an equality among the given extremal eigenvalues. In this case the ranks of both matrices, the coefficient matrix and the augmented matrix, are equal. Hence the corresponding submatrix exists.
Now consider the inductive step. If the determinant vanishes, then, from Lemma 5, this leads us to the cases (i)–(iv), and the augmented matrix is (24). By substituting conditions (i)–(iii) into (24), it is clear that the coefficient matrix and the augmented matrix have the same rank. Under condition (iv), the system (15) simplifies further. If both remaining entries vanished, then from (13) we would obtain a contradiction. Hence, under condition (iv), at least one entry is nonzero, and therefore the coefficient matrix and the augmented matrix again have the same rank. By taking the remaining free entry equal to zero, there exists a matrix $A$ with the required spectral properties. The necessity comes from the Cauchy interlacing property.
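The rank argument above rests on the standard fact that a linear system with a singular coefficient matrix is solvable exactly when the coefficient matrix and the augmented matrix have the same rank. A small numerical illustration (the matrices are arbitrary and do not reproduce the system (15) itself):

```python
import numpy as np

# Singular coefficient matrix (det = 0): second row is twice the first.
C = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b_consistent = np.array([3.0, 6.0])    # lies in the column space of C
b_inconsistent = np.array([3.0, 5.0])  # does not

def solvable(C, b):
    """Solvable iff rank of coefficient matrix equals rank of augmented matrix."""
    aug = np.column_stack([C, b])
    return np.linalg.matrix_rank(C) == np.linalg.matrix_rank(aug)

print(solvable(C, b_consistent))    # ranks agree -> a solution exists
print(solvable(C, b_inconsistent))  # ranks differ -> no solution
```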

We have seen in the proof of Theorem 8 that, if the determinant of the coefficient matrix of the system (15) is nonzero, then Problem 1 has a unique solution except for the signs of the entries.

We now solve Problem 1 in the case where the entries are required to be positive. We need the following lemma.

Lemma 9. Let $A$ be a matrix of the form (1) with positive off-diagonal entries. Let $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, respectively, be the minimal and maximal eigenvalues of the leading principal submatrix $A_j$, $j = 1, 2, \ldots, n$, of $A$. Then the inequalities (27) hold for each $j$.

Proof. From Lemma 7, (27) holds for the arrow part of $A$. For the next index, we argue from (5). Since the off-diagonal entries are positive, Lemma 7 gives the strict inequalities (29). If an extremal eigenvalue of one submatrix coincided with that of the preceding one, this would contradict (29), and from (7) we obtain (31). Evaluating (5) shows, in the same way, that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are not zeros of $P_{j-1}$, and from (6) we obtain (33). Now suppose the first inequality in (27) failed; this contradicts the inequalities (31) and (33). The same occurs if we assume that the last inequality fails. Then from (7) and Lemma 7 the claim holds for this index. Now suppose that (27) holds up to some index and consider the next one. Since the relevant entries are positive, neither $\lambda_1^{(j)}$ nor $\lambda_j^{(j)}$ is a zero of $P_{j-1}$, and from (6) we obtain the corresponding inequalities. Finally, if equality held, this would contradict (33). Then (27) holds for each $j$.

The following theorem solves Problem 1 when the entries are required to be positive.

Theorem 10. Let the real numbers $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, be given. Then there exists a unique $n \times n$ matrix $A$ of the form (1), with positive off-diagonal entries, such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are, respectively, the minimal and maximal eigenvalues of its leading principal submatrix $A_j$, $j = 1, 2, \ldots, n$, if and only if $$\lambda_1^{(j)} < \lambda_1^{(j-1)} \leq \lambda_{j-1}^{(j-1)} < \lambda_j^{(j)}, \quad j = 2, 3, \ldots, n. \tag{40}$$

Proof. The proof is quite similar to that of Theorem 8. Let $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$, satisfy (40). From Theorem 3, there exist leading principal submatrices with $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ as their minimal and maximal eigenvalues, respectively, for the arrow part of (1). Showing the existence of the remaining submatrices with the required spectral properties is equivalent to showing that the system (15) has real solutions with positive entries. To do this, it is enough to show that the determinant of the coefficient matrix is nonzero.
From Lemmas 6 and 9 it follows that the determinant of the coefficient matrix is nonzero. Hence the system (15) has real and unique solutions, given explicitly in terms of the data. It is then clear that these solutions are nonzero; therefore the entries can be chosen positive, and there exists a unique matrix $A$ with the required spectral properties. The necessity of the result comes from Lemma 9.

#### 4. Examples

Now we give an algorithm to construct the solution of Problem 1.

Algorithm.
(1) Input a positive integer $n$ and real numbers $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$, $j = 1, 2, \ldots, n$.
(2) Let $a_1 = \lambda_1^{(1)}$.
(3) For $j = 2, \ldots, n$, calculate the characteristic polynomials according to (5).
(4) Compute the remaining entries according to (17).
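Before running the algorithm, the input data of step (1) should be validated. A hedged sketch of such a check, assuming the conditions (13) and (40) have the interlacing-chain form stated in Theorems 8 and 10 (the function name and the sample data are illustrative, not from the paper):

```python
def check_condition(mins, maxs, strict=False):
    """Check the interlacing condition on the data
    (lambda_1^{(j)}, lambda_j^{(j)}), j = 1..n:
      lambda_1^{(j)} <= lambda_1^{(j-1)} <= lambda_{j-1}^{(j-1)} <= lambda_j^{(j)},
    with strict outer inequalities when strict=True (positive-entry case)."""
    n = len(mins)
    for j in range(1, n):
        lo_ok = mins[j] < mins[j - 1] if strict else mins[j] <= mins[j - 1]
        hi_ok = maxs[j - 1] < maxs[j] if strict else maxs[j - 1] <= maxs[j]
        if not (lo_ok and mins[j - 1] <= maxs[j - 1] and hi_ok):
            return False
    return True

# Hypothetical spectral data: minima strictly decreasing, maxima strictly increasing.
mins = [2.0, 1.0, 0.0, -1.0]
maxs = [2.0, 3.0, 4.0, 5.0]
print(check_condition(mins, maxs, strict=True))
```

Note that $\lambda_1^{(1)} = \lambda_1^{(1)}$ for the $1 \times 1$ block, so the first minimum and maximum always coincide; only the outer inequalities of the chain become strict in the positive-entry case.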

Example 1. The following numbers, taken from [2], satisfy the necessary and sufficient condition (40) of Theorem 10. The resulting doubly arrow matrix $A$ is given below. From this doubly arrow matrix $A$, we recompute the spectra of its leading principal submatrices $A_j$ with MATLAB 7.0 and recover the prescribed extremal eigenvalues.

Example 2. We modify the previous example so that some of the given eigenvalues become equal [2]. These numbers satisfy the necessary and sufficient condition (13) of Theorem 8. One solution of Problem 1 is the matrix given below. Recomputing the spectra of the submatrices $A_j$, we again recover the prescribed extremal eigenvalues.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work is supported by the Natural Science Foundation of Jiangxi, China (nos. 20114BAB201015, 20122BAB201013, and 20132BAB201056), and Scientific and Technological Project of Jiangxi Education Office, China (no. KJLD13093).

#### References

1. J. Peng, X.-Y. Hu, and L. Zhang, “Two inverse eigenvalue problems for a special kind of matrices,” Linear Algebra and Its Applications, vol. 416, no. 2-3, pp. 336–347, 2006.
2. H. Pickmann, J. Egaña, and R. L. Soto, “Extremal inverse eigenvalue problem for bordered diagonal matrices,” Linear Algebra and Its Applications, vol. 427, no. 2-3, pp. 256–271, 2007.
3. A. M. Nazari and Z. Beiranvand, “The inverse eigenvalue problem for symmetric quasi anti-bidiagonal matrices,” Applied Mathematics and Computation, vol. 217, no. 23, pp. 9526–9531, 2011.
4. H. Pickmann, R. L. Soto, J. Egaña, and M. Salas, “An inverse eigenvalue problem for symmetrical tridiagonal matrices,” Computers & Mathematics with Applications, vol. 54, no. 5, pp. 699–708, 2007.
5. D. Boley and G. H. Golub, “A survey of matrix inverse eigenvalue problems,” Inverse Problems, vol. 3, no. 4, pp. 595–622, 1987.
6. M. T. Chu and G. H. Golub, Inverse Eigenvalue Problems: Theory, Algorithms, and Applications, Oxford University Press, New York, NY, USA, 2005.
7. J. C. Egaña, N. M. Kuhl, and L. C. Santos, “An inverse eigenvalue method for frequency isolation in spring-mass systems,” Numerical Linear Algebra with Applications, vol. 9, no. 1, pp. 65–79, 2002.
8. G. M. L. Gladwell, “Inverse problems in vibration,” Applied Mechanics Reviews, vol. 39, 1986.
9. O. H. Hald, “Inverse eigenvalue problems for Jacobi matrices,” Linear Algebra and Its Applications, vol. 14, no. 1, pp. 63–85, 1976.
10. X. Y. Hu, L. Zhang, and Z. Y. Peng, “The construction of a Jacobi matrix from its defective eigen-pair and a principal submatrix,” Mathematica Numerica Sinica, vol. 22, no. 3, pp. 345–354, 2000.
11. A. P. Liao and Z. Z. Bai, “Construction of positive definite Jacobian matrices from two eigenpairs,” Journal on Numerical Methods and Computer Applications, vol. 23, no. 2, pp. 131–138, 2002 (Chinese).
12. L. Lu and M. K. Ng, “On sufficient and necessary conditions for the Jacobi matrix inverse eigenvalue problem,” Numerische Mathematik, vol. 98, no. 1, pp. 167–176, 2004.