Abstract

We consider the perturbation analysis of the matrix equation (1.1) introduced below. Based on matrix differentiation, we first give a precise perturbation bound for its positive definite solution. A numerical example is presented to illustrate the sharpness of the perturbation bound.

1. Introduction

In this paper, we consider the matrix equation (1.1), where the coefficient matrices are complex, $I$ is an identity matrix, the numbers of terms in the two sums are nonnegative integers, and the positive definite solution is of practical interest. The superscript $*$ denotes the conjugate transpose of a matrix. Equation (1.1) arises in solving some nonlinear matrix equations by Newton's method; see, for example, the nonlinear matrix equation that appears in Sakhnovich [1]. Solving such nonlinear matrix equations gives rise to (1.1). On the other hand, (1.1) is a general case of the generalized Lyapunov equation (1.2), whose positive definite solution is the controllability Gramian of a bilinear control system (see [2, 3] for more details). After a suitable change of variables, (1.2) can be written equivalently in the form (1.1).
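The displayed equations in this paragraph did not survive extraction. Judging from the surrounding discussion (two families of coefficient matrices entering with opposite signs, the linearity noted later, and Berzig's setting in [7]), equation (1.1) presumably has the following general shape, which the sketches later in this document assume; the symbols $A_i$, $B_j$, $m$, and $n$ are our placeholders, not necessarily the original notation.

```latex
% Assumed general form of (1.1); a reconstruction, not the original display.
% A_1,...,A_m and B_1,...,B_n are square complex matrices, m and n are
% nonnegative integers, and X is the Hermitian positive definite unknown.
\[
  X \;-\; \sum_{i=1}^{m} A_i^{*} X A_i \;+\; \sum_{j=1}^{n} B_j^{*} X B_j \;=\; I .
\]
```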

Some special cases of (1.1) have been studied. Based on the Kronecker product and fixed point theorems in partially ordered sets, Reurings [4] and Ran and Reurings [5, 6] gave sufficient conditions for the existence of a unique positive definite solution of some related linear matrix equations, and expressions for these unique positive definite solutions were also derived under certain constraints. For the general linear matrix equation (1.1), Reurings [4, page 61] pointed out that it is hard to find sufficient conditions for the existence of a positive definite solution, because the associated map is not monotone and does not map the set of positive definite matrices into itself. Recently, Berzig [7] overcame these difficulties by using the Bhaskar-Lakshmikantham coupled fixed point theorem and gave a sufficient condition under which (1.1) has a unique positive definite solution. An iterative method was constructed to compute this solution, and an error estimate was given as well.

Recently, matrix equations of the form (1.1) have been studied by many authors (see [8–14]). Some numerical methods for solving the well-known Lyapunov equation, such as the Bartels-Stewart method and the Hessenberg-Schur method, have been proposed in [12]. Based on fixed point theorems, necessary and sufficient conditions for the existence of a positive definite solution of related matrix equations were given in [8, 9]. Fixed point iterative methods and inversion-free iterative methods were developed for solving such matrix equations in [13, 14]. By making use of the fixed point theorem for mixed monotone operators, a related matrix equation was studied in [10], where a sufficient condition for the existence of a positive definite solution was derived. Under the assumption that the coefficient map sends positive definite matrices either into positive definite matrices or into negative definite matrices, a general nonlinear matrix equation was studied in [11], and a fixed point iterative method was constructed to compute the positive definite solution under some additional conditions.
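As an aside to the survey above, the standard Lyapunov equation mentioned there can be solved directly with SciPy, whose routine implements the Bartels-Stewart algorithm; the matrices below are illustrative placeholders, not data from any of the cited works.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative coefficients (placeholders, not taken from the cited papers).
a = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
q = -np.eye(2)

# solve_continuous_lyapunov uses the Bartels-Stewart algorithm to solve
#   a X + X a^H = q  for X.
x = solve_continuous_lyapunov(a, q)

print("residual norm:", np.linalg.norm(a @ x + x @ a.conj().T - q))
print("eigenvalues of X:", np.linalg.eigvalsh(x))  # positive => X > 0
```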

Motivated by the works and applications in [2–6], we continue to study the matrix equation (1.1). Based on a new mathematical tool (namely, matrix differentiation), we first give a differential bound for the unique positive definite solution of (1.1) and then use it to derive a precise perturbation bound for this solution. A numerical example is used to show that the perturbation bound is very sharp.

Throughout this paper, we write $A > 0$ ($A \ge 0$) if the matrix $A$ is positive definite (positive semidefinite). If $A - B$ is positive definite (positive semidefinite), then we write $A > B$ ($A \ge B$). If a positive definite matrix $A$ satisfies $B \le A \le C$, we denote this by $A \in [B, C]$. The symbols $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximal and minimal eigenvalues of a Hermitian matrix $A$, respectively. The symbol $\mathcal{H}^{n \times n}$ stands for the set of $n \times n$ Hermitian matrices. The symbol $\|A\|$ denotes the spectral norm of the matrix $A$.

2. Perturbation Analysis for the Matrix Equation (1.1)

In this section, based on matrix differentiation, we first give a differential bound for the unique positive definite solution of (1.1) and then use it to derive a precise perturbation bound for this solution.

Definition 2.1 ([15], Definition 3.6). Let $F = (f_{ij})_{m \times n}$, where each entry $f_{ij}$ is a differentiable function; then the matrix differentiation of $F$ is $dF = (df_{ij})_{m \times n}$.
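The worked example originally printed after Definition 2.1 was lost. The following small illustration (our own hypothetical example, not the one from [15]) shows the entrywise character of the matrix differentiation.

```latex
% Hypothetical 2x2 example (not the example printed in the original paper):
% matrix differentiation acts entrywise, dF = (df_{ij}).
\[
  F(x, y) = \begin{pmatrix} x^{2} & x y \\ \sin y & e^{x} \end{pmatrix}
  \quad\Longrightarrow\quad
  dF = \begin{pmatrix} 2x\,dx & y\,dx + x\,dy \\ \cos y\,dy & e^{x}\,dx \end{pmatrix}.
\]
```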

Lemma 2.2 ([15], Theorem 3.2). The matrix differentiation has the following properties:(1); (2), where is a complex number;(3); (4); (5); (6), where is a constant matrix.
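The itemized formulas of Lemma 2.2 were also lost; presumably they are the standard rules of the matrix differential, such as linearity, the product rule $d(XY) = (dX)Y + X(dY)$, $d(X^{*}) = (dX)^{*}$, and $dC = 0$ for a constant matrix $C$. The short Python sketch below checks the product rule numerically against a finite-difference approximation; all matrices are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 4, 1e-7

# Matrix-valued curves X(s) = X0 + s*dX, Y(s) = Y0 + s*dY;
# their differentials at s = 0 are dX and dY.
X0, Y0 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
dX, dY = rng.standard_normal((n, n)), rng.standard_normal((n, n))

X = lambda s: X0 + s * dX
Y = lambda s: Y0 + s * dY

# Finite-difference approximation of d(XY) at s = 0 versus the product rule.
fd = (X(h) @ Y(h) - X0 @ Y0) / h
rule = dX @ Y0 + X0 @ dY

print("product-rule discrepancy:", np.linalg.norm(fd - rule))  # ~1e-6 or smaller
```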

Lemma 2.3 ([7], Theorem 3.1). If then (1.1) has a unique positive definite solution and

Theorem 2.4. If then (1.1) has a unique positive definite solution , and it satisfies

Proof. Since then consequently, Combining (2.5)–(2.9) we have Then by Lemma 2.3 we obtain that (1.1) has a unique positive definite solution , which satisfies Noting that is the unique positive definite solution of (1.1), then It is known that the elements of are differentiable functions of the elements of and . Differentiating (2.12), and by Lemma 2.2, we have which implies that By taking spectral norm for both sides of (2.14), we obtain that and noting (2.11) we obtain that Then Due to (2.5) we have Combining (2.17), (2.18) and noting (2.19), we have which implies that
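The displays (2.12)–(2.14) in this proof were lost. Under the assumed form of (1.1) sketched in Section 1, the differentiation step described in the text would read roughly as follows; this is a reconstruction for readability, not the original display.

```latex
% Reconstruction under the assumed form of (1.1); not the original display.
% Differentiating  X - \sum_i A_i^* X A_i + \sum_j B_j^* X B_j = I  with the
% product rule of Lemma 2.2 (and dI = 0) gives
\[
  dX - \sum_{i=1}^{m}\bigl( dA_i^{*}\, X A_i + A_i^{*}\, dX\, A_i + A_i^{*} X\, dA_i \bigr)
     + \sum_{j=1}^{n}\bigl( dB_j^{*}\, X B_j + B_j^{*}\, dX\, B_j + B_j^{*} X\, dB_j \bigr) = 0,
\]
% after which the terms containing dX are collected and spectral norms are
% taken on both sides to obtain the differential bound.
```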

Theorem 2.5. Let be perturbed matrices of in (1.1) and . If then (1.1) and its perturbed equation have unique positive definite solutions and , respectively, which satisfy where

Proof. By (2.22) and Theorem 2.4, we know that (1.1) has a unique positive definite solution . And by (2.23) we have similarly, by (2.24) we have By (2.28), (2.29), and Theorem 2.4 we obtain that the perturbed equation (2.25) has a unique positive definite solution .

Set then by (2.23) we have similarly, by (2.24) we have

Therefore, by (2.31), (2.32), and Theorem 2.4 we derive that, for arbitrary , the matrix equation has a unique positive definite solution ; in particular,

From Theorem 2.4 it follows that Noting that and combining the mean value theorem for integrals, we have
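The displays in this final part of the proof were lost as well. A standard way to carry out the step the text describes is to join the original and perturbed coefficients by a line segment and integrate the differential bound of Theorem 2.4 along it; under the assumptions of this section the argument would look roughly as follows.

```latex
% Sketch of the standard integral argument (our reconstruction, not the
% original display): let X(t) be the unique positive definite solution for
% the coefficients A_i + t \Delta A_i, B_j + t \Delta B_j, t in [0,1]. Then
\[
  \widetilde{X} - X = X(1) - X(0) = \int_{0}^{1} \frac{dX(t)}{dt}\, dt,
  \qquad
  \|\widetilde{X} - X\| \le \int_{0}^{1} \Bigl\| \frac{dX(t)}{dt} \Bigr\|\, dt,
\]
% and the integrand is estimated by the differential bound of Theorem 2.4
% together with the mean value theorem for integrals.
```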

3. Numerical Experiments

In this section, we use a numerical example to confirm the correctness of Theorem 2.5 and the precision of the perturbation bound for the unique positive definite solution of (1.1).

Example 3.1. Consider the symmetric linear matrix equation and its perturbed equation where
It is easy to verify that the conditions (2.22)–(2.24) are satisfied; hence (3.1) and its perturbed equation (3.2) have unique positive definite solutions. From Berzig [7] it follows that the two sequences generated by the iterative method (3.4) both converge to the solution. Choosing a small termination tolerance and using the iterative method (3.4), we obtain a computed solution of (3.1) with very high precision. For simplicity, we regard this computed solution as the unique positive definite solution of (3.1). Similarly, we obtain the unique positive definite solution of the perturbed equation (3.2).
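The iterative method (3.4) and the data matrices of Example 3.1 did not survive extraction. Under the assumed form of (1.1) and a Berzig-style coupled fixed-point construction with two sequences, the computation can be sketched in Python as follows; the coefficient matrices, starting pair, and tolerance are placeholders rather than the paper's actual data, so the numbers produced are not those of Table 1.

```python
import numpy as np

def coupled_iteration(A_list, B_list, tol=1e-10, max_iter=1000):
    """Coupled fixed-point sketch, assuming (1.1) reads
    X = I + sum_i A_i^* X A_i - sum_j B_j^* X B_j."""
    r = (A_list or B_list)[0].shape[0]
    I = np.eye(r)

    def F(X, Y):
        # Monotone increasing in X and decreasing in Y.
        S = I.copy()
        for A in A_list:
            S += A.conj().T @ X @ A
        for B in B_list:
            S -= B.conj().T @ Y @ B
        return S

    X, Y = np.zeros((r, r)), 2.0 * I          # illustrative starting pair
    for _ in range(max_iter):
        X_new, Y_new = F(X, Y), F(Y, X)
        if max(np.linalg.norm(X_new - X), np.linalg.norm(Y_new - Y)) < tol:
            return X_new
        X, Y = X_new, Y_new
    return X

# Placeholder coefficients (not the matrices of Example 3.1).
A1 = 0.2 * np.array([[1.0, 0.1], [0.0, 1.0]])
B1 = 0.1 * np.array([[1.0, 0.0], [0.2, 1.0]])
X = coupled_iteration([A1], [B1])
print(X - A1.conj().T @ X @ A1 + B1.conj().T @ X @ B1)  # should be close to I
```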
Some numerical results on the perturbation bounds for the unique positive definite solution are listed in Table 1.
From Table 1, we see that Theorem 2.5 gives a precise perturbation bound for the unique positive definite solution of (3.1).

4. Conclusion

In this paper, we have studied the matrix equation (1.1), which arises in solving some nonlinear matrix equations and in bilinear control systems. A new method of perturbation analysis is developed for (1.1). By making use of matrix differentiation and its elegant properties, we derive a precise perturbation bound for the unique positive definite solution of (1.1). A numerical example is presented to illustrate the sharpness of the perturbation bound.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (11101100; 11171205; 11261014; 11226323), the Natural Science Foundation of Guangxi Province (2012GXNSFBA053006; 2011GXNSFA018138), the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the Natural Science Foundation of Shanghai (11ZR1412500), the Ph.D. Programs Foundation of Ministry of Education of China (20093108110001), the Discipline Project at the corresponding level of Shanghai (A. 13-0101-12-005), and Shanghai Leading Academic Discipline Project (J50101). The authors wish to thank Professor Y. Zhang and the anonymous referees for providing very useful suggestions for improving this paper. The authors also thank Dr. Xiaoshan Chen for discussing the properties of matrix differentiation.