Journal of Applied Mathematics, Volume 2013 (2013), Article ID 518269, 8 pages. http://dx.doi.org/10.1155/2013/518269
Research Article

## Sensitivity Analysis of the Matrix Equation from Interpolation Problems

1School of Mathematics and Statistics, Shandong University, Weihai, Weihai 264209, China
2School of Mathematics, Shandong University, Jinan 250100, China

Received 22 July 2013; Accepted 27 September 2013

Academic Editor: Carlos J. S. Alves

Copyright © 2013 Jing Li and Yuhai Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper studies the sensitivity analysis of a nonlinear matrix equation connected to interpolation problems. The backward error estimates of an approximate solution to the equation are derived. A residual bound of an approximate solution to the equation is obtained. A perturbation bound for the unique solution to the equation is evaluated. This perturbation bound is independent of the exact solution of this equation. The theoretical results are illustrated by numerical examples.

#### 1. Introduction

In this paper we consider the Hermitian positive definite solution of the nonlinear matrix equation
$$X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q, \qquad (1)$$
where $A_{1},A_{2},\ldots,A_{m}$ are $n\times n$ complex matrices, $m$ is a positive integer, and $Q$ is an $n\times n$ positive definite matrix. Here, $A^{*}$ denotes the conjugate transpose of the matrix $A$.

This type of nonlinear matrix equation arises in many practical problems. The equation $X-A^{*}X^{-1}A=Q$, which is the representative of (1) for $m=1$, comes from ladder networks, dynamic programming, control theory, stochastic filtering, statistics, and so forth [1–6]. When $\Gamma=0$, (1) comes from the nonlinear matrix equation
$$X=Q+A^{*}(\widehat{X}-\Gamma)^{-1}A, \qquad (2)$$
where $Q$ is an $n\times n$ positive definite matrix, $\Gamma$ is an $mn\times mn$ positive semidefinite matrix, $A$ is an arbitrary $mn\times n$ matrix, and $\widehat{X}$ is the $mn\times mn$ block diagonal matrix with, on each diagonal entry, the matrix $X$. In [7], (2) is recognized as playing an important role in modelling certain optimal interpolation problems. Let $A=(A_{1}^{*},A_{2}^{*},\ldots,A_{m}^{*})^{*}$ and $\Gamma=0$, where $A_{i}$, $i=1,2,\ldots,m$, are $n\times n$ matrices. Then $X=Q+A^{*}\widehat{X}^{-1}A$ can be rewritten as $X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q$. When we solve this nonlinear matrix equation, we often do not know $A_{i}$ and $Q$ exactly but have only approximations $\widetilde{A}_{i}$ and $\widetilde{Q}$ available. Then we can only solve the perturbed equation exactly, which gives a different solution $\widetilde{X}$. We would like to know how the errors in $A_{i}$ and $Q$ influence the error in $\widetilde{X}$. Motivated by this, we consider in this paper the sensitivity analysis of (1).
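Assuming (1) has the form $X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q$ (the equation studied in [20]), a minimal NumPy sketch of the natural fixed-point iteration $X_{k+1}=Q+\sum_{i}A_{i}^{*}X_{k}^{-1}A_{i}$ for computing its positive definite solution could look as follows; the function name and test matrices are illustrative, not taken from the paper:

```python
import numpy as np

def solve_interpolation_equation(A_list, Q, tol=1e-12, max_iter=500):
    """Fixed-point iteration X_{k+1} = Q + sum_i A_i^* X_k^{-1} A_i,
    started from X_0 = Q; for positive definite Q the iterates stay
    positive definite and (for sufficiently small A_i) converge to the
    unique positive definite solution of (1)."""
    X = Q.copy()
    for _ in range(max_iter):
        X_inv = np.linalg.inv(X)
        X_new = Q + sum(A.conj().T @ X_inv @ A for A in A_list)
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            return X_new
        X = X_new
    return X

# small self-check on a 2x2 instance with m = 2
Q = np.array([[3.0, 0.0], [0.0, 4.0]])
A1 = np.array([[0.2, 0.1], [0.0, 0.3]])
A2 = np.array([[0.1, 0.0], [0.2, 0.1]])
X = solve_interpolation_equation([A1, A2], Q)
residual = np.linalg.norm(
    X - Q - sum(A.conj().T @ np.linalg.inv(X) @ A for A in [A1, A2]), 'fro')
```

For coefficient matrices that are small relative to $Q$, the map on the right-hand side is a contraction, so the residual of the returned iterate can be checked directly.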

For the equation $X-A^{*}X^{-1}A=Q$ and the related equations $X+A^{*}X^{-1}A=Q$ and $X\pm A^{*}X^{-q}A=Q$ ($q\geq 1$ or $0<q\leq 1$), there were many contributions in the literature to the solvability and numerical solutions [8–14]. However, these papers did not examine the sensitivity analysis of the above equations. Hasanov et al. [12, 15] obtained two perturbation estimates of the solutions to the equations $X\pm A^{*}X^{-1}A=Q$. Li and Zhang [13] derived two perturbation bounds of the unique solution to the equation $X-A^{*}X^{-1}A=Q$. They also obtained the explicit expression of the condition number for the unique positive definite solution. The perturbation analysis of the related equations $X-A^{*}X^{-q}A=Q$, $X^{s}\pm A^{T}X^{-t}A=I_{n}$, and $X+A^{*}X^{-1}A=P$ was given in [16–19]. Yin and Fang [20] obtained an explicit expression of the condition number for the unique positive definite solution of (1). They also gave two perturbation bounds for the unique positive definite solution. However, to the best of our knowledge, there have been no backward error estimates and no computable residual bound for (1) in the literature. In this paper, we obtain backward error estimates and a residual bound of the approximate solution to (1), and we also derive a new relative perturbation bound for (1). This bound does not need any knowledge of the exact solution of (1), which is important in many practical calculations.

The rest of this paper is organized as follows. Section 2 collects some preliminary results that will be needed later. In Section 3, backward error estimates of an approximate solution to (1) are discussed. In Section 4, we derive a residual bound of an approximate solution to (1). In Section 5, we give a new perturbation bound for the unique solution to (1), which is independent of the exact solution of (1). Finally, several numerical examples are presented in Section 6.

We denote by $\mathbb{C}^{n\times n}$ the set of $n\times n$ complex matrices, by $\mathcal{H}^{n\times n}$ the set of $n\times n$ Hermitian matrices, by $I$ the identity matrix, by $\mathrm{i}$ the imaginary unit, by $\|\cdot\|$ the spectral norm, by $\|\cdot\|_{F}$ the Frobenius norm, and by $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$ the maximal and minimal eigenvalues of $M$, respectively. For $A=(a_{1},a_{2},\ldots,a_{n})=(a_{ij})\in\mathbb{C}^{n\times n}$ and a matrix $B$, $A\otimes B$ is the Kronecker product, and $\operatorname{vec}(A)$ is the vector defined by $\operatorname{vec}(A)=(a_{1}^{T},a_{2}^{T},\ldots,a_{n}^{T})^{T}$. For $X,Y\in\mathcal{H}^{n\times n}$, we write $X\geq Y$ (resp., $X>Y$) if $X-Y$ is Hermitian positive semidefinite (resp., definite).
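The Kronecker product and vec operator introduced here interact through the standard identity $\operatorname{vec}(AXB)=(B^{T}\otimes A)\operatorname{vec}(X)$, which is what turns matrix equations into linear systems in the backward-error analysis. A quick numerical check, using the column-stacking vec convention:

```python
import numpy as np

rng = np.random.default_rng(0)
A, X, B = (rng.standard_normal((3, 3)) for _ in range(3))

def vec(M):
    # column-stacking vec operator, matching the convention in the text
    return M.flatten(order='F')

# vec(A X B) equals (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
```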

#### 2. Preliminaries

Lemma 1 (see [8, Theorem 2.1]). The matrix equation $X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q$ (3) always has a unique positive definite solution.

Lemma 2 (see [8, Theorem 2.3]). Let $X$ be the unique Hermitian positive definite solution of (3); then $\beta I\leq X\leq\alpha I$, where the pair $(\alpha,\beta)$ is a solution of the system

#### 3. Backward Error

In this section, applying the technique developed in [18], we obtain some estimates for the backward error of the approximate solution of (1).

Let $\widetilde{X}$ be an approximation to the unique solution $X$ of (1), and let $\Delta A_{i}$ and $\Delta Q$ be the corresponding perturbations of the coefficient matrices $A_{i}$ and $Q$ in (1). A backward error of the approximate solution $\widetilde{X}$ can be defined by (5), in which the weights are positive parameters. Taking the weights equal to the norms of the corresponding coefficient matrices in (5) gives the relative backward error, and taking all weights equal to one in (5) gives the absolute backward error.

Let Note that It follows from (6) that Let where is the vec-permutation matrix. Then (8) can be written as It follows that the matrix is of full row rank. Hence , which implies that every solution to the equation must be a solution to (10). Consequently, for any solution to (11) we have Then we can state the estimates of the backward error as follows.

Theorem 3. Let be given matrices and let be the backward error defined by (5). If then one has that where

Proof. Let Obviously, is continuous. Condition (13) ensures that the quadratic equation in has two positive real roots. The smaller one is Define . Then for any , we have The last equality is due to the fact that is a solution to the quadratic equation (18). Thus we have proved that . By the Schauder fixed-point theorem, there exists a such that , which means that is a solution to (11), and hence it follows from (12) that Next we derive a lower bound for . Suppose that satisfies Then we have where Let a singular value decomposition of be , where and are unitary matrices and with . Substituting this decomposition into (23), and letting we get It follows from (22) that Here we have used the fact that Let Since is a solution to (18), we have that which implies that Then .

#### 4. Residual Bound

A residual bound reveals the stability of a numerical method. In this section, in order to derive a residual bound of an approximate solution for the unique solution to (1), we first introduce the following lemma.
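As a concrete reading of the residual, assuming (1) is $X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q$, the residual of an approximation $\widetilde{X}$ would be $R(\widetilde{X})=Q+\sum_{i}A_{i}^{*}\widetilde{X}^{-1}A_{i}-\widetilde{X}$. A small sketch (function and test matrices are illustrative):

```python
import numpy as np

def residual(X_tilde, A_list, Q):
    """R(X~) = Q + sum_i A_i^* X~^{-1} A_i - X~; its norm is the
    quantity the residual bound of Theorem 5 is driven by."""
    X_inv = np.linalg.inv(X_tilde)
    return Q + sum(A.conj().T @ X_inv @ A for A in A_list) - X_tilde

# crude approximation X~ = Q on a small instance with m = 1:
# here R(Q) = A_1^* Q^{-1} A_1 = 0.5 * A_1^T A_1 since Q = 2I
Q = np.eye(2) * 2.0
A1 = np.array([[0.1, 0.0], [0.05, 0.1]])
R = residual(Q, [A1], Q)
```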

Lemma 4. For every positive definite matrix , if , then

Proof. According to it follows that

Theorem 5. Let be an approximation to the solution of (1). If the residual satisfies then where

Proof. Let Obviously, is a nonempty bounded convex closed set. Let Evidently is continuous.
Note that condition (35) ensures that the quadratic equation has two positive real roots, and the smaller one is given by Next, we will prove that .
For every , we have Hence By (36), one sees that According to (35), we obtain which implies that
According to Lemma 4, we obtain for . That is, . By the Brouwer fixed-point theorem, there exists a such that . Hence is a solution of (1). Moreover, by Lemma 1, we know that the solution of (1) is unique. Then

#### 5. Perturbation Bounds

In this section we develop a relative perturbation bound for the unique solution of (1), which does not need any knowledge of the actual solution of (1) and is easy to calculate.

Here we consider the perturbed equation: where and are small perturbations of and in (1), respectively. It follows from Lemma 1 that the solutions of (1) and (49) exist. Then we assume that and are the solutions of (1) and (49), respectively. Let , , and .
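The setup above can be exercised numerically. Assuming (1) is $X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q$, the sketch below solves the unperturbed and perturbed equations by fixed-point iteration and measures the relative change $\|\widetilde{X}-X\|_{F}/\|X\|_{F}$; all matrices, the perturbation size, and tolerances are illustrative:

```python
import numpy as np

def solve(A_list, Q, tol=1e-13, max_iter=1000):
    # fixed-point iteration X <- Q + sum_i A_i^* X^{-1} A_i from X_0 = Q
    X = Q.copy()
    for _ in range(max_iter):
        X_new = Q + sum(A.conj().T @ np.linalg.inv(X) @ A for A in A_list)
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            break
        X = X_new
    return X_new

rng = np.random.default_rng(1)
Q = np.diag([4.0, 5.0])
A1 = np.array([[0.3, 0.1], [0.0, 0.2]])

X = solve([A1], Q)                      # solution of (1)
eps = 1e-6
dA = eps * rng.standard_normal((2, 2))  # perturbation of A_1
dQ = eps * rng.standard_normal((2, 2))
dQ = (dQ + dQ.T) / 2                    # keep the perturbed Q Hermitian
X_tilde = solve([A1 + dA], Q + dQ)      # solution of the perturbed equation

rel_change = np.linalg.norm(X_tilde - X, 'fro') / np.linalg.norm(X, 'fro')
```

A perturbation of order $10^{-6}$ in the data produces a comparably small relative change in the solution, consistent with the well-posedness discussed in Remark 8.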

Theorem 6. If then

Proof. By Lemma 1, we know that and are the unique solutions to (1) and (49), respectively. Subtracting (1) from (49) we have Then By Lemma 2, it follows that and . Then Condition (50) ensures .
Combining (52) and (54), we obtain

Remark 7. Yin and Fang [20] obtained two perturbation bounds that depend on the exact solution of (1). In contrast, the relative perturbation bound in Theorem 6 does not need any knowledge of the actual solution of (1), which is important in many practical calculations.

Remark 8. With the perturbation bound in Theorem 6, we get $\widetilde{X}\to X$ as $\Delta A_{i}\to 0$ and $\Delta Q\to 0$. Therefore (1) is well-posed.

#### 6. Numerical Examples

To illustrate the theoretical results of the previous sections, several simple examples are given in this section; the computations were carried out using MATLAB 7.1. For the stopping criterion we take .

Example 1. In this example, we consider the backward error of an approximate solution for the unique solution to (1) in Theorem 3. We consider with the coefficient matrices where .
Let be an approximate solution to (1). Take , and in Theorem 3. Some results on lower and upper bounds for the backward error are displayed in Table 1.
The results listed in Table 1 show that the backward error of decreases as the error decreases.

Table 1: Backward error for Example 1 with different values of  .

Example 2. This example considers the residual bound of an approximate solution for the unique solution to (1) in Theorem 5. We consider with
Choose . Let the approximate solution of be given with the iterative method , where is the iteration number.
The residual satisfies the conditions in Theorem 5. By Theorem 5, we can compute the residual bounds for where Some results are listed in Table 2.
The results listed in Table 2 show that the residual bound given by Theorem 5 is fairly sharp.

Table 2: Residual bounds for Example 2 with different values of  .

Example 3. In this example, we consider the corresponding perturbation bound for the solution in Theorem 6.
We consider the matrix equation with Suppose that the coefficient matrices and are perturbed to , where and is a random matrix generated by the MATLAB function randn.
By Theorem 6, we can compute the relative perturbation bound . The results are averaged as the geometric mean over 20 randomly perturbed runs. Some results are listed in Table 3.
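The geometric-mean averaging over runs can be computed as the exponential of the mean logarithm, which stays numerically well behaved when the per-run values are very small; a short sketch with illustrative values:

```python
import numpy as np

def geometric_mean(values):
    # geometric mean via exp of the average log, stable for tiny ratios
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values))))

# e.g. three per-run relative errors; their geometric mean is
# (1 * 4 * 2)^(1/3) * 1e-7 = 2e-7
ratios = [1e-7, 4e-7, 2e-7]
gm = geometric_mean(ratios)
```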
The results listed in Table 3 show that the perturbation bound given by Theorem 6 is fairly sharp.

Table 3: Perturbation bounds for Example 3 with different values of  .

#### 7. Concluding Remarks

In this paper, we consider the sensitivity analysis of the nonlinear matrix equation $X-\sum_{i=1}^{m}A_{i}^{*}X^{-1}A_{i}=Q$. Compared with the existing literature, the contributions of this paper are as follows. (i) A backward error and a computable residual bound of an approximate solution for the unique solution to (1) are derived, which do not appear in other known works. (ii) Some results in this paper cover the work of Li and Zhang [13] for the matrix equation $X-A^{*}X^{-1}A=Q$ as a special case. (iii) This paper develops a new relative perturbation bound for the solution to (1), which does not need any knowledge of the actual solution of (1) and can be computed easily.

#### Acknowledgments

The authors would like to express their gratitude to the referees for their fruitful comments. The work was supported in part by the National Natural Science Foundation of China (11201263), the Natural Science Foundation of Shandong Province (ZR2012AQ004), and the Independent Innovation Foundation of Shandong University (IIFSDU), China. The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. W. N. Anderson, G. B. Kleindorfer, M. B. Kleindorfer, and M. B. Woodroofe, “Consistent estimates of the parameters of a linear system,” The Annals of Mathematical Statistics, vol. 40, no. 6, pp. 2064–2075, 1969.
2. W. N. Anderson, T. D. Morley, and G. E. Trapp, “The cascade limit, the shorted operator and quadratic optimal control,” in Linear Circuits, Systems and Signal Processing: Theory and Application, C. I. Byrnes, C. F. Martin, and R. E. Saeks, Eds., pp. 3–7, North-Holland, New York, NY, USA, 1988.
3. R. S. Bucy, “A priori bounds for the Riccati equation,” in Proceedings of the 6th Berkeley Symposium on Mathematical Statistics and Probability, Volume 3: Probability Theory, pp. 645–656, University of California Press, Berkeley, Calif, USA, 1972.
4. D. V. Ouellette, “Schur complements and statistics,” Linear Algebra and Its Applications, vol. 36, pp. 187–295, 1981.
5. W. Pusz and S. L. Woronowicz, “Functional calculus for sesquilinear forms and the purification map,” Reports on Mathematical Physics, vol. 8, no. 2, pp. 159–170, 1975.
6. J. Zabczyk, “Remarks on the control of discrete-time distributed parameter systems,” SIAM Journal on Control and Optimization, vol. 12, no. 4, pp. 721–735, 1974.
7. A. C. M. Ran and M. C. B. Reurings, “A nonlinear matrix equation connected to interpolation theory,” Linear Algebra and Its Applications, vol. 379, no. 1–3, pp. 289–302, 2004.
8. X. F. Duan and A. P. Liao, “On Hermitian positive definite solution of the matrix equation $X-{\sum }_{i=1}^{m}{A}_{i}^{*}{X}^{r}{A}_{i}=Q$,” Journal of Computational and Applied Mathematics, vol. 229, no. 1, pp. 27–36, 2009.
9. A. Ferrante and B. C. Levy, “Hermitian solutions of the equation $X=Q+N{X}^{-1}{N}^{*}$,” Linear Algebra and Its Applications, vol. 247, pp. 359–373, 1996.
10. C. Guo and P. Lancaster, “Iterative solution of two matrix equations,” Mathematics of Computation, vol. 68, no. 228, pp. 1589–1603, 1999.
11. V. I. Hasanov, “Positive definite solutions of the matrix equations $X±{A}^{*}{X}^{-q}A=Q$,” Linear Algebra and Its Applications, vol. 404, no. 1–3, pp. 166–182, 2005.
12. V. I. Hasanov, I. G. Ivanov, and F. Uhlig, “Improved perturbation estimates for the matrix equations $X±{A}^{*}{X}^{-1}A=Q$,” Linear Algebra and Its Applications, vol. 379, no. 1–3, pp. 113–135, 2004.
13. J. Li and Y. H. Zhang, “The Hermitian positive definite solutions and perturbation analysis of the matrix equation $X-{A}^{*}{X}^{-1}A=Q$,” Bulletin of the Institute of Mathematics Academia Sinica, vol. 30, pp. 129–142, 2008 (Chinese).
14. Y. Lim, “Solving the nonlinear matrix equation $X=Q+{\sum }_{i=1}^{m}{M}_{i}{X}^{{\delta }_{i}}{M}_{i}^{*}$ via a contraction principle,” Linear Algebra and Its Applications, vol. 430, no. 4, pp. 1380–1383, 2009.
15. V. I. Hasanov and I. G. Ivanov, “On two perturbation estimates of the extreme solutions to the equations $X±{A}^{*}{X}^{-1}A=Q$,” Linear Algebra and Its Applications, vol. 413, no. 1, pp. 81–92, 2006.
16. J. Li and Y. Zhang, “Perturbation analysis of the matrix equation $X-{A}^{*}{X}^{-q}A=Q$,” Linear Algebra and Its Applications, vol. 431, no. 9, pp. 1489–1501, 2009.
17. X. G. Liu and H. Gao, “On the positive definite solutions of the matrix equations ${X}^{s}±{A}^{T}{X}^{-t}A={I}_{n}$,” Linear Algebra and Its Applications, vol. 368, pp. 83–87, 2003.
18. J. G. Sun and S. F. Xu, “Perturbation analysis of the maximal solution of the matrix equation $X+{A}^{*}{X}^{-1}A=P$. II,” Linear Algebra and Its Applications, vol. 362, pp. 211–228, 2003.
19. S. F. Xu, “Perturbation analysis of the maximal solution of the matrix equation $X+{A}^{*}{X}^{-1}A=P$,” Linear Algebra and Its Applications, vol. 336, no. 1–3, pp. 61–70, 2001.
20. X. Y. Yin and L. Fang, “Perturbation analysis for the positive definite solution of the nonlinear matrix equation $X-{\sum }_{i=1}^{m}{A}_{i}^{*}{X}^{-1}{A}_{i}=Q$,” Journal of Applied Mathematics and Computing, vol. 43, no. 1, pp. 199–211, 2013.