Journal of Applied Mathematics, Volume 2014, Article ID 681605, 7 pages. http://dx.doi.org/10.1155/2014/681605
Research Article

## Solvability Theory and Iteration Method for One Self-Adjoint Polynomial Matrix Equation

1School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou 221116, China

2Department of Mathematics, Qingdao University of Science and Technology, Qingdao 266061, China

3College of Science, China University of Mining and Technology, Jiangsu 221116, China

Received 19 October 2013; Revised 30 March 2014; Accepted 13 April 2014; Published 7 May 2014

Copyright © 2014 Zhigang Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The solvability theory of an important self-adjoint polynomial matrix equation is presented, including the boundary of its Hermitian positive definite (HPD) solution and some sufficient conditions under which the (unique or maximal) HPD solution exists. The algebraic perturbation analysis is also given with respect to the perturbation of coefficient matrices. An efficient general iterative algorithm for the maximal or unique HPD solution is designed and tested by numerical experiments.

#### 1. Introduction

In this paper, we consider the following self-adjoint polynomial matrix equation: where are positive integers, , and . As far as we know, the solvability of (1) has not been completely settled until now.

In many fields of applied mathematics, engineering, and the economic sciences, (1) plays an important role. The famous discrete-time algebraic Lyapunov equation (DALE) is exactly (1) with . Undoubtedly, DALE is one of the most important mathematical problems in signal processing, systems and control theory, and many other areas (see, e.g., the monographs [1, 2]). If is stable (with respect to the unit circle), DALE has a unique Hermitian positive definite (HPD) solution. Fortunately, (1) inherits this strong relation between the spectral properties of and the solvability theory; it can be considered a nonlinear DALE if or . Consider also the following algebraic Riccati equation: where , , and . Defining and , we immediately obtain (1) with and as an equivalent form of (2). Solving algebraic Riccati equations is an important task in the linear-quadratic regulator problem, Kalman filtering, H∞-control, model reduction problems, and so forth; see [1, 3–5] and the references therein. Many numerical methods have been proposed, such as invariant subspace methods [6], the Schur method [7], the doubling algorithm [8], and structure-preserving doubling algorithms [9, 10]. At the same time, the perturbation theory was developed in [11–15], as well as unified methods for the discrete-time and continuous-time algebraic Riccati equations [16, 17]. The general iteration method for (1) given in this paper can be seen as a new algorithm for the algebraic Riccati equation (2), setting and .
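As a concrete illustration of the DALE special case discussed above, the sketch below solves a DALE in the common reading X = Q + A*XA by naive fixed-point iteration, which converges when A is stable with respect to the unit circle. This reading of the equation, and the test matrices, are assumptions for the example only; the paper's own equation numbering could not be recovered here.

```python
import numpy as np

def solve_dale(A, Q, tol=1e-12, max_iter=1000):
    """Fixed-point iteration X_{k+1} = Q + A^* X_k A for the DALE
    X = Q + A^* X A (assumed sign convention). Converges when the
    spectral radius of A is strictly less than 1."""
    X = Q.copy()
    for _ in range(max_iter):
        X_new = Q + A.conj().T @ X @ A
        if np.linalg.norm(X_new - X, 'fro') < tol:
            return X_new
        X = X_new
    return X

# Made-up example: A stable (spectral radius < 1), Q Hermitian positive definite.
A = np.array([[0.5, 0.1], [0.0, 0.3]])
Q = np.eye(2)
X = solve_dale(A, Q)
residual = np.linalg.norm(X - Q - A.conj().T @ X @ A)
```

Since the iteration map is a contraction with factor roughly the squared spectral norm of A, the residual shrinks geometrically; for this A the tolerance is reached in a few dozen iterations.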

Apart from the above applications, (1) is appealing from the mathematical viewpoint since it unifies a large class of polynomial matrix equations. Many nonlinear matrix equations are special cases of (1). For example, the nonlinear matrix equations (see, e.g., [18, 19]) are equivalent models of and , where are positive integers and . In a rather general form, Ran and Reurings [18] investigated () for its positive semidefinite solutions under the assumption that the function is monotone and is positive definite. In addition, Lee and Lim [20] proved that (1) has a unique HPD solution when and . See [21–25] for more recent results on nonlinear matrix equations. To the best of our knowledge, (1) with (without monotonicity assumptions) has not been discussed. These facts motivate us to study the polynomial matrix equation (1).

This paper is organized as follows. In Section 2 we deduce the existence and uniqueness conditions of HPD solutions of (1); in Section 3 we derive the algebraic perturbation theory for the unique or maximal solution of (1); finally in Section 4, we provide an iterative algorithm and two numerical experiments.

We begin with some notation used throughout this paper. stands for the set of matrices with elements in the field . If is a Hermitian matrix on , and stand for the minimal and maximal eigenvalues, respectively. Denote the singular values of a matrix by , where . Suppose that and are Hermitian matrices; we write if is positive semidefinite (definite), and denote the set of matrices by .

#### 2. Solvability of Self-Adjoint Polynomial Matrix Equation

In this section, we study the solvability theory of (1) assuming that is nonsingular; that is, . To do this, we need two simple but useful functions defined on the positive abscissa axis:

The following two well-known inequalities will be used frequently in the remainder of this paper.

Lemma 1 (Löwner-Heinz inequality [26, Theorem 1.1]). If and , then .

Lemma 2 (see [27, Theorem 2.1]). Let and be positive operators on a Hilbert space , such that , and . Then hold for any .
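The Löwner-Heinz inequality of Lemma 1 can be checked numerically. The sketch below (the matrices are chosen arbitrarily for illustration) verifies that A ⪰ B ≻ 0 implies A^r ⪰ B^r for the exponent r = 1/2, by confirming that the difference of the matrix square roots has no negative eigenvalue.

```python
import numpy as np

def hpd_power(X, p):
    """p-th power of a Hermitian positive definite matrix
    via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**p) @ V.conj().T

# B is HPD, and A = B + (a positive definite bump), so A > B > 0.
B = np.array([[2.0, 0.5], [0.5, 1.5]])
A = B + np.array([[1.0, 0.2], [0.2, 0.5]])

r = 0.5  # Löwner-Heinz applies for any r in [0, 1]
D = hpd_power(A, r) - hpd_power(B, r)
min_eig = np.linalg.eigvalsh(D).min()  # nonnegative by Lemma 1
```

Note that the conclusion genuinely needs r ≤ 1: the map X ↦ X^r is operator monotone only on that range, and for r > 1 counterexamples exist.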

##### 2.1. Maximal Solution of (1) with

Now we derive a necessary condition and a sufficient condition for existence of HPD solutions of (1) with . With and in hand, we can easily get the distribution of eigenvalues of the HPD solution of (1).

Theorem 3. Suppose that and is an HPD solution of (1); then for any eigenvalue of , where are two positive roots of and are two positive roots of .

Proof. From Theorem (d) in Horn and Johnson [28], one can see that

If is nonsingular, That means The above equations still hold if is singular, since , that is, , in this case. Applying Weyl's theorem in Horn and Johnson [29], implies

Define a function . Then the only positive stationary point of is . If , has two positive roots, and , with . So implies that has two roots and has two roots . Since , . Then from (9) we obtain (5).

If (1) has an HPD solution, its eigenvalues may lie in either of the intervals and . We next focus on HPD solutions whose eigenvalues are distributed on only one of these intervals.

Theorem 4. Suppose that .
(1) Equation (1) has an HPD solution, , and if , such exists uniquely.
(2) Equation (1) has an HPD solution, , and if , such exists uniquely.

Proof. (1) Let , where . Lemmas 1 and 2 and imply Applying Brouwer’s fixed-point theorem, has a fixed point . Then from Theorem 3, .

We now prove the uniqueness of under the additional condition that . Suppose that is another HPD solution of (1) and . It is known that Then from and , which is impossible. Hence, .

(2) Let , where . is continuous, and because . By Lemmas 1 and 2 and Brouwer's fixed-point theorem, it suffices to prove and in order for an HPD solution to exist. The existence of such follows from the inequalities

Next we prove the uniqueness of under the additional condition that . Suppose (1) has two different HPD solutions and on . Then Moreover, if , applying the inequality , we have which is impossible. Hence, .

The maximal solution (see, e.g., [30, 31]) of (1) is defined as follows.

Definition 5. An HPD solution of (1) is the maximal solution if, for any HPD solution of (1), there is .

So the second part of Theorem 4 implies that the maximal solution of (1) lies on .

Theorem 6. Suppose that and ; then (1) has a maximal solution which can be computed by with the initial value .

Proof. Let ; then . From the proof of Theorem 4 (2), Then which indicates the convergence of the matrix sequence generated by (17).

Set . Assuming that , then from inequalities (14) we have That means for any . By Theorem 4 (2), is the unique HPD solution of (1) on .

Now we prove the maximality of . Suppose that is an arbitrary HPD solution of (1); then , and Theorem 3 implies (since ). Assuming that , Lemma 1 with implies Then , which implies that by the Löwner-Heinz inequality.

Note that similar iteration formulas have appeared in earlier papers, such as [20, 21], for other nonlinear matrix equations. Here we have proved, for the first time, that the iteration (17) preserves the maximality of over all HPD solutions of (1).
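The iteration (17) itself cannot be reproduced exactly, since the displayed formulas were lost in extraction. However, the related equation studied in [23] has the form X^s + A*X^tA = Q, and under the assumption that (1) takes this form, Theorem 6 suggests the fixed-point scheme X_{k+1} = (Q - A*X_k^tA)^{1/s} started from X_0 = Q^{1/s}. The sketch below implements that assumed reading; the exponents s = 2, t = 1 and the test matrices are invented for the example.

```python
import numpy as np

def hpd_power(X, p):
    """p-th power of a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**p) @ V.conj().T

def maximal_solution(A, Q, s=2, t=1, tol=1e-12, max_iter=500):
    """Assumed reading of iteration (17): X_{k+1} = (Q - A^* X_k^t A)^{1/s},
    with initial value X_0 = Q^{1/s}."""
    X = hpd_power(Q, 1.0 / s)
    for _ in range(max_iter):
        inner = Q - A.conj().T @ hpd_power(X, t) @ A  # must stay HPD
        X_new = hpd_power(inner, 1.0 / s)
        if np.linalg.norm(X_new - X, 'fro') < tol:
            return X_new
        X = X_new
    return X

# ||A|| is kept small so that Q - A^* X^t A remains HPD along the iteration.
A = 0.1 * np.array([[1.0, 0.2], [0.0, 1.0]])
Q = np.eye(2)
X = maximal_solution(A, Q, s=2, t=1)
residual = np.linalg.norm(hpd_power(X, 2) + A.conj().T @ X @ A - Q)
```

Consistent with the monotonicity argument in the proof of Theorem 6, the iterates decrease from X_0 = Q^{1/s} toward the fixed point, which is why the limit is the maximal solution rather than an arbitrary one.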

##### 2.2. Unique Solution of (1) with

If , Lee and Lim [20, Theorem 9.4] showed that (1) always has a unique HPD solution, denoted by . We now give an upper bound and a lower bound for and suggest an iteration method for computing it.

As defined in (3), and with have unique positive roots, denoted by and , respectively.

Since and , .

Theorem 7. If , (1) has a unique HPD solution . Let or ; then the matrix sequence generated by converges to .

Proof. We only need to prove the convergence of the matrix sequence . Set . From (22) we have and then . Assuming that , Then for any , we have and then by the Löwner-Heinz inequality. On the other hand, implies for any , because if , then Then with is a monotonically decreasing matrix sequence with lower bound . Similarly, we can prove that generated by (22) with is a monotonically increasing matrix sequence with upper bound . Therefore, converges.

From the above proof, we can see that the iteration (22) preserves the minimality () or maximality () of throughout the process.

If , (1) reduces to a linear matrix equation , which is the discrete-time algebraic Lyapunov equation (DALE), or Hermitian Stein equation [1, page 5], assuming that . It is well known that if is d-stable (see [1]), has a unique solution, and the matrix sequence , generated by with initial value , converges to the unique solution. Moreover, it is not difficult to obtain an expression for the unique solution by applying [32, Section 13.2, Theorem 1], [1, Theorem ], and the results in Section 6.4 of [28].
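Beyond the iterative approach, this linear case can also be solved in closed form by vectorization, using the standard Kronecker identity vec(BXC) = (Cᵀ ⊗ B)vec(X). The sketch below assumes the Stein-equation reading X - A*XA = Q (the sign convention and the test matrices are assumptions).

```python
import numpy as np

def solve_stein(A, Q):
    """Direct solve of the Stein/DALE equation X - A^* X A = Q via
    vectorization: (I - A^T kron A^H) vec(X) = vec(Q), with column-major vec."""
    n = A.shape[0]
    # vec(A^H X A) = (A^T kron A^H) vec(X) for column-major (Fortran-order) vec.
    K = np.eye(n * n) - np.kron(A.T, A.conj().T)
    x = np.linalg.solve(K, Q.reshape(-1, order='F'))
    return x.reshape(n, n, order='F')

A = np.array([[0.4, 0.1], [0.2, 0.3]])
Q = np.eye(2)
X = solve_stein(A, Q)
residual = np.linalg.norm(X - A.conj().T @ X @ A - Q)
```

The linear system is nonsingular exactly when no product of two eigenvalues of A equals 1, which holds in particular when A is d-stable; the O(n^6) cost of the dense solve makes this practical only for small n.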

We have now presented the solvability theory of the self-adjoint polynomial matrix equation (1) in three cases. A general iterative algorithm for its maximal solution () or unique solution () will be given in Section 4. Before that, we study the algebraic perturbation of the maximal or unique solution of (1).

#### 3. Algebraic Perturbation Analysis

In this section, we present the algebraic perturbation analysis of the HPD solution of (1) with respect to the perturbation of its coefficient matrices. Similar to [30], we define the perturbed matrix equation of (1) as where and . We always suppose that (1) has a maximal (or unique) solution, denoted by , and (26) has a maximal (or unique) solution, denoted by .

Now we present the perturbation bound for when . Define a function :

Theorem 8. Let be an arbitrary real number, and . If then

Proof. It is straightforward to verify that Then from (1) and (26), we have Since , Then, for an arbitrary , if and , we obtain (29).

If , for an arbitrary , define where

Theorem 9. Let be an arbitrary real number, and . If then

Proof. Similarly to the proof of Theorem 8, we can deduce that Then With the help of (30) and (34), (38) implies Then, if and , we obtain (36).

Theorems 8 and 9 ensure that the perturbation of can be controlled provided that and have proper upper bounds.
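This controlled-perturbation behavior can be observed numerically in the linear (DALE) special case, where the solution is available in closed form. The sketch below (the Stein-equation reading X - A*XA = Q, the seed, and the matrices are assumptions) perturbs the coefficient matrices by a small ε and confirms that the solution moves by an amount of the same order.

```python
import numpy as np

def solve_stein(A, Q):
    """X - A^* X A = Q, solved via column-major vectorization."""
    n = A.shape[0]
    K = np.eye(n * n) - np.kron(A.T, A.conj().T)
    return np.linalg.solve(K, Q.reshape(-1, order='F')).reshape(n, n, order='F')

rng = np.random.default_rng(0)
A = np.array([[0.4, 0.1], [0.2, 0.3]])
Q = np.eye(2)

eps = 1e-8
dA = eps * rng.standard_normal((2, 2))
dQ = eps * rng.standard_normal((2, 2))
dQ = (dQ + dQ.T) / 2          # keep the perturbed Q Hermitian

X  = solve_stein(A, Q)
Xp = solve_stein(A + dA, Q + dQ)
change = np.linalg.norm(Xp - X, 'fro')   # expected to be O(eps)
```

Because the linear operator here is well conditioned for this stable A, the observed change stays within a modest constant multiple of ε, in the spirit of the bounds of Theorems 8 and 9.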

#### 4. Algorithm and Numerical Experiments

In this section we give a general iterative algorithm for the maximal or unique solutions of (1) and two numerical experiments. All reported results were obtained using MATLAB-R2012b on a personal computer with 2.4 GHz Intel Core i7 and 8 GB 1600 MHz DDR3.

Example 10. Let matrices and . With and at most 200 iterations, we apply Algorithm 1 to compute the maximal or unique HPD solutions of (1) with , and compare the results with those of the iteration method from [33] (denoted MONO in Table 1).

Table 1 shows the iterations, CPU times before convergence, and the residuals of the computed HPD solution , defined by

Table 1: Iterations, CPU times (seconds), and residuals for solving (1) with .

Algorithm 1: Given matrices and positive integers .

From Table 1, we can see that computing the maximal solution of (1) with takes more iterations and CPU time than computing the unique solution of (1) with . At the same time, the accuracy of the latter is better than that of the former. MONO cannot be used to solve (1) with , and it costs more iterations and CPU time than Algorithm 1 when solving (1) with .

Now we use Example 4.1 of [33] to test our method.

Example 11. Let with and let , with . We solve (1) with and with two different initial solutions. The iterations, CPU times, and the residues of the computation are reported in Table 2.

Table 2: Iterations, CPU times (seconds), and residuals for solving (1) with and different initial solutions.

Table 2 shows that for Algorithm 1 the initial choice is better than . When and grow, MONO may lose its efficiency. It seems improper to apply the iteration method designed for with to solve , although the two are theoretically equivalent.

#### 5. Conclusion

In this paper, we considered the solvability of the self-adjoint polynomial matrix equation (1). Sufficient conditions were given to guarantee the existence of maximal or unique HPD solutions of (1). The algebraic perturbation analysis, including perturbation bounds, was also developed for (1) under perturbations of the given coefficient matrices. Finally, a general iterative algorithm that preserves maximality throughout the process was presented for the maximal or unique solution, with two numerical experiments reported.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

Zhigang Jia’s research was supported in part by National Natural Science Foundation of China under Grants 11201193 and 11171289 and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. Minghui Wang’s research was supported in part by the National Natural Science Foundation of China (Grant no. 11001144), the Science and Technology Program of Shandong Universities of China (J11LA04), and the Research Award Fund for Outstanding Young Scientists of Shandong Province in China (BS2012DX009). Sitao Ling’s research was supported in part by National Natural Science Foundations of China under Grant 11301529, Postdoctoral Science Foundation of China under Grant 2013M540472, and Jiangsu Planned Projects for Postdoctoral Research Funds 1302036C. The authors would like to thank three anonymous referees for giving valuable comments and suggestions.

#### References

1. H. Abou-Kandil, G. Freiling, V. Ionescu, and G. Jank, Matrix Riccati Equations in Control and Systems Theory, Birkhäuser, Basel, Switzerland, 2003.
2. I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, Academic Press, New York, NY, USA, 1982.
3. P. Benner, A. J. Laub, and V. Mehrmann, “Benchmarks for the numerical solution of algebraic Riccati equations,” IEEE Control Systems Magazine, vol. 17, no. 5, pp. 18–28, 1997.
4. S. Bittanti, A. Laub, and J. C. Willems, Eds., The Riccati Equation, Communications and Control Engineering Series, Springer, Berlin, Germany, 1991.
5. P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, Oxford University Press, New York, NY, USA, 1995.
6. A. J. Laub, “Invariant subspace methods for the numerical solution of Riccati equations,” in The Riccati Equation, S. Bittanti, A. J. Laub, and J. C. Willems, Eds., Communications and Control Engineering, pp. 163–196, Springer, Berlin, Germany, 1991.
7. A. J. Laub, “A Schur method for solving algebraic Riccati equations,” IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 913–921, 1979.
8. M. Kimura, “Doubling algorithm for continuous-time algebraic Riccati equation,” International Journal of Systems Science, vol. 20, no. 2, pp. 191–202, 1989.
9. E. K.-W. Chu, H.-Y. Fan, W.-W. Lin, and C.-S. Wang, “Structure-preserving algorithms for periodic discrete-time algebraic Riccati equations,” International Journal of Control, vol. 77, no. 8, pp. 767–788, 2004.
10. E. K.-W. Chu, H.-Y. Fan, and W.-W. Lin, “A structure-preserving doubling algorithm for continuous-time algebraic Riccati equations,” Linear Algebra and Its Applications, vol. 396, pp. 55–80, 2005.
11. N. J. Higham, “Perturbation theory and backward error for $AX-XB=C$,” BIT Numerical Mathematics, vol. 33, no. 1, pp. 124–136, 1993.
12. M. Konstantinov and P. Petkov, “Note on perturbation theory for algebraic Riccati equations,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 1, pp. 327–354, 2000.
13. J.-G. Sun, “Residual bounds of approximate solutions of the algebraic Riccati equation,” Numerische Mathematik, vol. 76, no. 2, pp. 249–263, 1997.
14. J.-G. Sun, “Backward error for the discrete-time algebraic Riccati equation,” Linear Algebra and Its Applications, vol. 259, pp. 183–208, 1997.
15. J.-G. Sun, “Backward perturbation analysis of the periodic discrete-time algebraic Riccati equation,” SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 1, pp. 1–19, 2004.
16. V. Mehrmann, “A step towards a unified treatment of continuous and discrete time control problems,” Linear Algebra and Its Applications, vol. 241–243, pp. 749–779, 1996.
17. H.-G. Xu, “Transformations between discrete-time and continuous-time algebraic Riccati equations,” Linear Algebra and Its Applications, vol. 425, no. 1, pp. 77–101, 2007.
18. A. C. M. Ran and M. C. B. Reurings, “On the nonlinear matrix equation $X+A^{*}\mathcal{F}(X)A=Q$: solutions and perturbation theory,” Linear Algebra and Its Applications, vol. 346, pp. 15–26, 2002.
19. A. C. M. Ran, M. C. B. Reurings, and L. Rodman, “A perturbation analysis for nonlinear selfadjoint operator equations,” SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 1, pp. 89–104, 2006.
20. H. Lee and Y. Lim, “Invariant metrics, contractions and nonlinear matrix equations,” Nonlinearity, vol. 21, no. 4, pp. 857–878, 2008.
21. J. Cai and G.-L. Chen, “On the Hermitian positive definite solutions of nonlinear matrix equation $X^{s}+A^{*}X^{-t}A=Q$,” Applied Mathematics and Computation, vol. 217, no. 1, pp. 117–123, 2010.
22. X.-F. Duan, Q.-W. Wang, and A.-P. Liao, “On the matrix equation arising in an interpolation problem,” Linear and Multilinear Algebra, vol. 61, no. 9, pp. 1192–1205, 2013.
23. Z.-G. Jia and M.-S. Wei, “Solvability and sensitivity analysis of polynomial matrix equation ${X}^{s}+{A}^{T}{X}^{t}A=Q$,” Applied Mathematics and Computation, vol. 209, no. 2, pp. 230–237, 2009.
24. M.-H. Wang, M.-S. Wei, and S. Hu, “The extremal solution of the matrix equation ${X}^{s}+{A}^{*}{X}^{-q}A=I$,” Applied Mathematics and Computation, vol. 220, pp. 193–199, 2013.
25. B. Zhou, G.-B. Cai, and J. Lam, “Positive definite solutions of the nonlinear matrix equation $X+A^{H}\overline{X}^{-1}A=I$,” Applied Mathematics and Computation, vol. 219, no. 14, pp. 7377–7391, 2013.
26. X.-Z. Zhan, Matrix Inequalities, vol. 1790 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2002.
27. T. Furuta, “Operator inequalities associated with Hölder-McCarthy and Kantorovich inequalities,” Journal of Inequalities and Applications, vol. 1998, Article ID 234521, 1998.
28. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
29. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1990.
30. X.-G. Liu and H. Gao, “On the positive definite solutions of the matrix equations ${X}^{s}-{A}^{T}{X}^{-t}A={I}_{n}$,” Linear Algebra and Its Applications, vol. 368, pp. 83–97, 2003.
31. S.-F. Xu, “Perturbation analysis of the maximal solution of the matrix equation ${A}^{*}{X}^{-1}A=P$,” Linear Algebra and Its Applications, vol. 336, pp. 61–70, 2001.
32. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
33. S. M. El-Sayed and A. C. M. Ran, “On an iteration method for solving a class of nonlinear matrix equations,” SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 3, pp. 632–645, 2001.