The Scientific World Journal

Volume 2014, Article ID 543610, 7 pages

http://dx.doi.org/10.1155/2014/543610
Research Article

A New Solution to the Matrix Equation

School of Mathematical Sciences, University of Jinan, Jinan 250022, China

Received 16 April 2014; Accepted 28 June 2014; Published 15 July 2014

Academic Editor: Kaleem R. Kazmi

Copyright © 2014 Caiqin Song. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We investigate the matrix equation . For convenience, this matrix equation is referred to as the Kalman-Yakubovich-conjugate matrix equation. The explicit solution is constructed when the above matrix equation has a unique solution, and this solution is stated as a polynomial of the coefficient matrices of the matrix equation. Moreover, the explicit solution is also expressed in terms of the symmetric operator matrix, the controllability matrix, and the observability matrix. The proposed approach does not require the coefficient matrices to be in any canonical form. At the end of this paper, a numerical example is given to illustrate the effectiveness of the proposed method.

1. Introduction

Throughout this paper, we use and to denote the real number field and the complex number field, respectively. We use , , , and to denote the transpose, conjugate, conjugate transpose, and adjoint matrix of , respectively. and are the sets of eigenvalues of the matrices and , respectively. denotes the identity matrix of appropriate dimension. Moreover, for , , and , we adopt the following notations: Here , , , and are called the controllability matrix, the observability matrix, and a symmetric operator matrix, respectively.

Matrix equations are often encountered in system theory and control theory, such as the Lyapunov matrix equation and the Sylvester matrix equation. The traditional approach converts such matrix equations into equivalent linear systems by means of the Kronecker product; however, this involves inverting the associated large matrix, which increases the computational cost and demands excessive computer memory. In the field of matrix algebra, some complex matrix equations have attracted much attention from many researchers since the results in [1]. In [2, 3], the consistency of the matrix equation is related to the consimilarity of two partitioned matrices associated with the matrices , , and . In the preceding matrix equation, denotes the matrix obtained by taking the complex conjugate of each element of . Recently, in [4] some explicit expressions of the solution to the matrix equation were established by means of the real representation of a complex matrix, and it was shown that a unique solution exists if and only if and have no common eigenvalues. Yuan and Liao [5] investigated the least squares solution with the least norm of the quaternion -conjugate matrix equation (where denotes the -conjugate of the quaternion matrix ), using the complex representation of a quaternion matrix, the Kronecker product of matrices, and the Moore-Penrose generalized inverse. The authors in [6] considered the matrix nearness problem associated with the quaternion matrix equation by means of the CCD-Q, the GSVD-Q, and the projection theorem in a finite dimensional inner product space. In [7, 8], the solutions to the matrix equations and were expressed in terms of the characteristic polynomial of the matrix . Song and Chen [9, 10] established explicit solutions of the quaternion matrix equations and , where denotes the -conjugate of the quaternion matrix. Wang et al. [11–13] investigated the extreme ranks of the solutions to the quaternion matrix equations and , the equivalence canonical form of a matrix triplet over an arbitrary division ring, a solvable condition for a pair of generalized Sylvester equations, and so on. Moreover, other matrix equations, such as the coupled Sylvester matrix equations and the Riccati equations, have also found numerous applications in control theory. For a more detailed introduction, see [14–19] and the references therein.
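The Kronecker-product reduction described above can be sketched concretely. The snippet below is a minimal illustration on the ordinary Sylvester equation AX + XB = C (not the conjugate equation studied in this paper), using the column-major vectorization identity vec(AX + XB) = (I ⊗ A + Bᵀ ⊗ I) vec(X); the resulting nm × nm system makes the memory cost of this traditional approach evident.

```python
import numpy as np

def sylvester_kron(A, B, C):
    """Solve A X + X B = C by the classical Kronecker vectorization.

    Vectorizing column-major gives (I (x) A + B^T (x) I) vec(X) = vec(C),
    an (n*m) x (n*m) dense linear system -- illustrating why this approach
    becomes expensive for large n, m.
    """
    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.reshape(-1, order="F"))  # column-major vec(C)
    return x.reshape(n, m, order="F")

# Small illustrative data (no common eigenvalues of A and -B, so K is nonsingular).
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])
X = sylvester_kron(A, B, C)
assert np.allclose(A @ X + X @ B, C)  # residual check
```

The example data here are arbitrary; the point is only the structure of the vectorized system, not the paper's method.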

In the present paper, we investigate polynomial solutions to the Kalman-Yakubovich-conjugate matrix equation. Using the Faddeev-Leverrier algorithm and the characteristic polynomial, we provide explicit solutions to the well-known Kalman-Yakubovich-conjugate matrix equation. In addition, equivalent forms of the solution are proposed.

The rest of this paper is organized as follows. In Section 2, polynomial solutions to the Kalman-Yakubovich-conjugate matrix equation are given. One of the polynomial solutions has a neat and elegant form in terms of the symmetric operator matrix, the controllability matrix, and the observability matrix. A polynomial solution to the Kalman-Yakubovich-conjugate equation is also derived by the generalized Faddeev-Leverrier algorithm. An example is given in Section 3 to show the efficiency of the proposed algorithm.

2. Kalman-Yakubovich-Conjugate Matrix Equation

In this section, the following matrix equation is investigated: where , , and .

Lemma 1. Assume that , , and , and that is a solution of (2). Then, for any integer , the following conclusion holds:

Proof. We prove this conclusion by mathematical induction. By postmultiplying both sides of (2) by , we have By taking conjugate in both sides of (2), we can get By postmultiplying both sides of (5) by and premultiplying both sides of (5) by , we have Combining (4) with (6), we can obtain This implies that relation (3) holds for . Now we assume that relation (3) holds for , ; that is, Premultiplying both sides of (8) by , we have By (5), we have Combining (10) with (2), we can obtain According to (11), it is easy to obtain Combining (9) with (12), we can obtain This implies that relation (3) holds for . So the conclusion is true.

Define and then equality (3) can be compactly written as

Let and then It follows from (15), (16), and (17) that we have On the other hand, Now, we define It is obvious that is a polynomial of the matrices , , and . This polynomial is determined by the coefficient matrices and the characteristic polynomial of . That is to say, for each Kalman-Yakubovich-conjugate equation of the form (2), there is a uniquely determined polynomial of its coefficient matrices. Therefore, we obtain the following equation:

Theorem 2. If , for any and , then (21) is equivalent to (2).

Proof. From the above argument, it is clear that (2) implies (21). Now we prove that (21) implies (2) when for any and . Suppose that is a solution of (21); then we can obtain In addition, we have

Combining this with (22) gives Since for any and , the matrix is nonsingular. Hence, (24) implies (2). Combining the above two aspects, the conclusion is proved.

The following theorem gives a result on the unique solution of the Kalman-Yakubovich-conjugate matrix equation.

Theorem 3. If for any and , then the unique solution of the matrix equation (2) is which is a polynomial of matrices , , and .

Proof. Firstly, assume that the characteristic polynomial of is . Because is nonsingular, we have . It follows from the Cayley-Hamilton theorem that This relation implies that Therefore, we have which is a polynomial of . Since is a polynomial of , it follows that is a polynomial of . So we can see that is a polynomial of the matrices , , and .
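The Cayley-Hamilton step used in this proof can be verified numerically; a minimal sketch (the matrix A below is an arbitrary illustration, not from the paper):

```python
import numpy as np

# Cayley-Hamilton: every square matrix satisfies its own characteristic
# polynomial, p(A) = 0, which is exactly the identity the proof invokes.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
coeffs = np.poly(A)  # [1, c_{n-1}, ..., c_0] with p(t) = t^n + c_{n-1} t^{n-1} + ... + c_0
n = A.shape[0]
# Evaluate p(A) term by term: sum of coeffs[k] * A^(n-k).
P = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(P, np.zeros((n, n)))  # p(A) vanishes
```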

Next, we give two equivalent forms of the solution to the matrix equation (2). To obtain the unique solution of the matrix equation (2), only the coefficients of the characteristic polynomial of are required. Firstly, the so-called generalized Faddeev-Leverrier algorithm [20] is stated as the following relations: where , , are the coefficients of the characteristic polynomial of the matrix , and , , are the coefficient matrices of the adjoint matrix .
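The Faddeev-Leverrier recursion can be sketched in code. The following is a standard formulation (the paper's generalized variant from [20] may index the quantities differently), producing both the characteristic polynomial coefficients and the matrices of the adjoint recursion:

```python
import numpy as np

def faddeev_leverrier(A):
    """Standard Faddeev-Leverrier recursion.

    Returns (c, M) where p(t) = t^n + c[0] t^{n-1} + ... + c[n-1] is the
    characteristic polynomial of A and M = [M_1, ..., M_n] are the recursion
    matrices: M_1 = I, M_k = A M_{k-1} + c_{n-k+1} I, c_{n-k} = -tr(A M_k)/k.
    The adjoint matrix satisfies adj(A) = (-1)^(n-1) M_n.
    """
    n = A.shape[0]
    M = [np.eye(n)]
    c = [-np.trace(A)]
    for k in range(2, n + 1):
        Mk = A @ M[-1] + c[-1] * np.eye(n)
        M.append(Mk)
        c.append(-np.trace(A @ Mk) / k)
    return c, M

# Illustration: A has characteristic polynomial t^2 - 5t - 2.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
c, M = faddeev_leverrier(A)
assert np.allclose(c, [-5.0, -2.0])
# adj(A) = (-1)^(n-1) M_n; for this A, adj(A) = [[4, -2], [-3, 1]].
assert np.allclose((-1) ** (A.shape[0] - 1) * M[-1],
                   np.array([[4.0, -2.0], [-3.0, 1.0]]))
```

This is how the coefficients required by Theorems 4 and 5 can be computed without forming the characteristic polynomial symbolically.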

So we have the following theorems.

Theorem 4. Given the matrices , , and , let (1) If is a solution to (2), then (2) If the matrix is nonsingular, then the matrix equation (2) has a unique solution; that is,

Proof. By applying (29), we can obtain the following expression: Meanwhile, the above formula can be restated as Thus, it is easy to see that Hence, we can easily obtain the conclusions.

Theorem 5. Suppose the matrices , , and . (1) Let be a solution of (2); then (2) Let for any , ; then the matrix equation (2) has a unique solution

Proof. In view of relation (33), it is obvious that Then it is easy to obtain that Combining this with Theorem 4, we complete the proof.

On the basis of the above results, we have the following corollary on the solution of the Stein-conjugate matrix equation .

Corollary 6. Let the matrices and be given. If the matrix is Schur stable, then the unique solution of the matrix equation is expressed as which is a polynomial of matrices and .
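As a numerical complement to the closed-form expression in Corollary 6, the Stein-conjugate equation can also be approached iteratively. The sketch below assumes the equation has the form X − A·conj(X)·B = C (an assumption on our part, since the displayed equation is not reproduced here); composing the fixed-point map with itself gives a linear map X ↦ A·conj(A)·X·conj(B)·B plus a constant, so the iteration converges when the corresponding spectral radius product is below one, which plays the role of the Schur-stability condition:

```python
import numpy as np

def stein_conjugate_fixed_point(A, B, C, iters=200):
    """Fixed-point iteration X_{k+1} = A conj(X_k) B + C for the assumed
    Stein-conjugate equation X - A conj(X) B = C.

    Two steps of the map compose to the linear update
    X -> A conj(A) X conj(B) B + const, so the iteration converges when
    rho(A conj(A)) * rho(conj(B) B) < 1.
    """
    X = np.zeros_like(C)
    for _ in range(iters):
        X = A @ np.conj(X) @ B + C
    return X

# Small illustrative data (norms well below one, so the iteration contracts).
A = np.array([[0.3 + 0.1j, 0.0], [0.1, 0.2j]])
B = np.array([[0.4, 0.1j], [0.0, 0.3]])
C = np.array([[1.0 + 1.0j, 0.0], [2.0, 1.0 - 1.0j]])
X = stein_conjugate_fixed_point(A, B, C)
assert np.allclose(X - A @ np.conj(X) @ B, C)  # residual check
```

This iterative route is not the paper's polynomial construction; it is only a cheap independent way to validate a computed solution.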

3. Numerical Example

Example 1. Here we give an example for computing the solution to matrix equation .

The parametric matrix can be written as follows: It is easy to check that , for any and . So we can see that the above matrix equation has a unique solution.

The following result can be obtained by some simple computation: Thus , , and ; in addition, we have So we have the following matrix expression: Therefore, it follows from Theorem 3 that the unique solution of (2) is

4. Conclusion

The well-known Kalman-Yakubovich matrix equation and the generalized discrete Sylvester matrix equation have many important applications in control theory and system theory. As a generalization of these matrix equations, in this paper we have proposed polynomial solutions to the Kalman-Yakubovich-conjugate matrix equation. Unlike other approaches, the approach in the current paper does not require transforming the coefficient matrices into any canonical form. The solutions are stated as polynomials in the parameters of the matrix equation. Meanwhile, an equivalent form of the solutions to the Kalman-Yakubovich-conjugate matrix equation has been expressed in terms of the controllability matrix associated with , , and and the observability matrix associated with and . Such a feature may bring greater convenience and advantages to problems related to the Kalman-Yakubovich-conjugate matrix equation. At the end of the paper, a numerical experiment illustrates the performance of the proposed method. On one hand, the solutions to the Kalman-Yakubovich-conjugate matrix equation are crucial as a theoretical basis for the development of many other kinds of matrix equations and deserve further investigation in the future; on the other hand, as a theoretical generalization of the well-known Kalman-Yakubovich matrix equation, this work may be helpful for future control applications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was financially supported by the Postdoctoral Science Foundation of China (2013M541900).

References

  1. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, Mass, USA, 1990.
  2. J. H. Bevis, F. J. Hall, and R. E. Hartwig, “Consimilarity and the matrix equation AX¯-XB=C,” in Current Trends in Matrix Theory (Auburn, Ala., 1986), pp. 51–64, North-Holland, New York, NY, USA, 1987.
  3. J. H. Bevis, F. J. Hall, and R. E. Hartwig, “The matrix equation AX¯-XB=C and its special cases,” SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 3, pp. 348–359, 1988.
  4. A. G. Wu, G. R. Duan, and H. H. Yu, “On solutions of XF-AX=C and XF-AX¯=C,” Applied Mathematics and Computation, vol. 182, no. 2, pp. 932–941, 2006.
  5. S. Yuan and A. Liao, “Least squares solution of the quaternion matrix equation X-AX^B=C with the least norm,” Linear and Multilinear Algebra, vol. 59, no. 9, pp. 985–998, 2011.
  6. S. Yuan, A. Liao, and G. Yao, “The matrix nearness problem associated with the quaternion matrix equation AXAH+BYBH=C,” Journal of Applied Mathematics and Computing, vol. 37, no. 1-2, pp. 133–144, 2011.
  7. T. Jiang and M. Wei, “On solutions of the matrix equations X-AXB=C and X-AX¯B=C,” Linear Algebra and Its Applications, vol. 367, pp. 225–233, 2003.
  8. T. S. Jiang and M. S. Wei, “On a solution of the quaternion matrix equation X-AX~B=C and its application,” Acta Mathematica Sinica, vol. 21, no. 3, pp. 483–490, 2005.
  9. C. Song and G. Chen, “On solutions of matrix equation XF-AX=C and XF-AX~=C over quaternion field,” Journal of Applied Mathematics and Computing, vol. 37, no. 1-2, pp. 57–68, 2011.
  10. C. Q. Song, G. L. Chen, and Q. B. Liu, “Explicit solutions to the quaternion matrix equations X-AXF=C and X-AX~F=C,” International Journal of Computer Mathematics, vol. 89, no. 7, pp. 890–900, 2012.
  11. Q. W. Wang, H. S. Zhang, and S. W. Yu, “On solutions to the quaternion matrix equation AXB+CYD=E,” Electronic Journal of Linear Algebra, vol. 17, pp. 343–358, 2008.
  12. Q. W. Wang, H. Zhang, and G. Song, “A new solvable condition for a pair of generalized Sylvester equations,” Electronic Journal of Linear Algebra, vol. 18, pp. 289–301, 2009.
  13. Q. Wang and C. Li, “Ranks and the least-norm of the general solution to a system of quaternion matrix equations,” Linear Algebra and Its Applications, vol. 430, no. 5-6, pp. 1626–1640, 2009.
  14. S. T. Ling, M. H. Wang, and M. S. Wei, “Hermitian tridiagonal solution with the least norm to quaternionic least squares problem,” Computer Physics Communications, vol. 181, no. 3, pp. 481–488, 2010.
  15. K.-W. E. Chu, “Symmetric solutions of linear matrix equations by matrix decompositions,” Linear Algebra and Its Applications, vol. 119, pp. 35–50, 1989.
  16. J. Feng, J. Lam, and Y. Wei, “Spectral properties of sums of certain Kronecker products,” Linear Algebra and Its Applications, vol. 431, no. 9, pp. 1691–1701, 2009.
  17. C. Song, G. Chen, and L. Zhao, “Iterative solutions to coupled Sylvester-transpose matrix equations,” Applied Mathematical Modelling, vol. 35, no. 10, pp. 4675–4683, 2011.
  18. G. Bourgeois, “How to solve the matrix equation XA-AX=f(X),” Linear Algebra and Its Applications, vol. 434, no. 3, pp. 657–668, 2011.
  19. H. Liping, “The matrix equation AXB-GXD=E over the quaternion field,” Linear Algebra and Its Applications, vol. 234, pp. 197–208, 1996.
  20. B. Hanzon and R. M. Peeters, “A Faddeev sequence method for solving Lyapunov and Sylvester equations,” Linear Algebra and Its Applications, vol. 241–243, pp. 401–430, 1996.