Special Issue: Linear and Nonlinear Matrix Equations
Research Article | Open Access
Perturbation Analysis of the Nonlinear Matrix Equation
Consider the nonlinear matrix equation with . Two perturbation bounds and the backward error of an approximate solution to the equation are derived. Explicit expressions of the condition number for the equation are obtained. The theoretical results are illustrated by numerical examples.
1. Introduction

In this paper we consider the Hermitian positive definite solution of the nonlinear matrix equation where and are complex matrices, is a positive integer, and is a positive definite matrix. Here denotes the conjugate transpose of the matrix .
When , (1) is recognized as playing an important role in solving systems of linear equations. For example, in many physical calculations one must solve the system of linear equations , where arises in a finite difference approximation to an elliptic partial differential equation (for more information, refer to ). We can rewrite as , where can be factored as if and only if is a solution of equation . When , this type of nonlinear matrix equation arises in ladder networks, dynamic programming, control theory, stochastic filtering, statistics, and so forth [2–7].
For the similar equations , , , and , there have been many contributions in the literature on the theory, numerical solution, and perturbation analysis [8–32]. Jia and Gao  derived two perturbation estimates for the solution of the equation with . In addition, Duan et al.  proved that the equation has a unique positive definite solution and proposed an iterative method for computing it. However, to the best of our knowledge, there has been no perturbation analysis for (1) with in the literature.
As a continuation of the previous results, the rest of the paper is organized as follows. In Section 2, some preliminary lemmas are given. In Section 3, two perturbation bounds for the unique solution to (1) are derived. Furthermore, in Section 4, we obtain the backward error of an approximate solution to (1). In Section 5, we also discuss the condition number of the unique solution to (1). Finally, several numerical examples are presented in Section 6.
We denote by the set of complex matrices, by the set of Hermitian matrices, by the identity matrix, by the imaginary unit, by the spectral norm, by the Frobenius norm, and by and the maximal and minimal eigenvalues of , respectively. For and a matrix , denotes the Kronecker product, and is a vector defined by . For , we write ( , resp.) if is Hermitian positive semidefinite (definite, resp.).
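The Kronecker product and vectorization notation introduced above obey the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X) for column-stacking vec, which is used repeatedly in the condition-number analysis of Section 5. A quick numerical check of that identity (in Python/NumPy, not the paper's MATLAB code):

```python
import numpy as np

# Verify vec(A X B) = (B^T (kron) A) vec(X) for column-stacking vec.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.flatten(order="F")  # column-stacking vectorization

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```

Note that the identity holds for column-stacking vec only; with row-major flattening the roles of A and B in the Kronecker product are exchanged.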
2. Preliminaries

Lemma 1 (see ). If and , then .
Lemma 2 (see ). For any Hermitian positive definite matrix and Hermitian matrix , one has
(i) , ;
(ii) .
In addition, if and , then
(iii) .
Lemma 3 (see ). The matrix equation always has a unique positive definite solution . The matrix sequence defined by converges to the unique positive definite solution for an arbitrary initial positive definite matrix .
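The fixed-point iteration of Lemma 3 can be sketched numerically. Since the displayed form of (1) is not recoverable from this extraction, the sketch below assumes, purely for illustration, the classical special case X = Q + A*X⁻¹A from this family of equations; the recursion X_{k+1} = Q + A*X_k⁻¹A is a hypothetical stand-in for the paper's iteration:

```python
import numpy as np

def fixed_point_solve(A, Q, tol=1e-12, max_iter=1000):
    """Fixed-point iteration X_{k+1} = Q + A^* X_k^{-1} A.

    Illustrative stand-in for the iteration of Lemma 3; the equation
    form X = Q + A^* X^{-1} A is an assumption, not taken from the paper.
    """
    X = Q.copy()
    for _ in range(max_iter):
        X_new = Q + A.conj().T @ np.linalg.solve(X, A)
        if np.linalg.norm(X_new - X, "fro") <= tol * np.linalg.norm(X, "fro"):
            return X_new
        X = X_new
    raise RuntimeError("iteration did not converge")

A = 0.2 * np.eye(2)
Q = np.eye(2)
X = fixed_point_solve(A, Q)
# Residual of the computed solution in the assumed equation.
residual = np.linalg.norm(X - Q - A.conj().T @ np.linalg.solve(X, A), "fro")
```

For coefficients of small norm the map is a contraction on the positive definite matrices above Q, so the iterates converge linearly from the initial guess X₀ = Q.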
3. Perturbation Bounds
By Lemma 3, (1) always has a unique positive definite solution . In this section, two perturbation bounds for this solution are developed. The relative perturbation bound in Theorem 5 does not depend on any knowledge of the actual solution of (1). A sharper perturbation bound is then derived in Theorem 8.
To prove the next theorem, we first verify the following lemma.
Lemma 4. If is a solution of (1), then
The next theorem generalizes [33, Theorem 4] with , to arbitrary integer , .
Theorem 5. Let . If then where
Proof. Obviously, is a nonempty bounded convex closed set. Let
Evidently, is continuous. We will prove that .
For every , it follows that Thus According to (9), we have Therefore From Lemmas 2 and 4, it follows that Therefore That is, . By Brouwer's fixed point theorem, there exists a such that . Moreover, by Lemma 3, we know that and are the unique solutions to (1) and (7), respectively. Then
Remark 6. According to , we get for and . Therefore (1) is well posed.
Next, a sharper perturbation estimate is derived.
Lemma 7. If , then the linear operator defined by is invertible.
Proof. It suffices to show that the following equation has a unique solution for every . Define the operator by Let . Thus (21) is equivalent to According to Lemma 2, we have which implies that and that is invertible. Therefore, the operator is invertible.
Furthermore, we define operators by Thus, we can rewrite (21) as Define Now we denote
Proof. Let Obviously, is continuous. The condition (32) ensures that the quadratic equation with respect to the variable has two positive real roots, the smaller of which is Define . Then for any , by (32), we have It follows that is nonsingular and Using (22) and Lemma 2, we have Noting (31) and (34), it follows that for . That is, . According to the Schauder fixed point theorem, there exists such that . It follows that is a Hermitian solution of (7). By Lemma 3, the solution of (7) is unique. Then .
4. Backward Error
In this section, a backward error bound for an approximate solution to the unique solution of (1) is obtained.
Theorem 10. Let be an approximation to the solution of (1). If and the residual satisfies then where .

Proof. Obviously, is a nonempty bounded convex closed set. Let
Evidently is continuous. We will prove that . For every , we have
Using (43), one sees that
According to (17), we obtain for . That is, . By Brouwer’s fixed point theorem, there exists a such that . Hence is a solution of (1). Moreover, by Lemma 3, we know that the solution of (1) is unique. Then
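The residual that drives Theorem 10 can be monitored numerically. Assuming, purely for illustration, the equation form X − A*X⁻¹A = Q (the paper's actual equation is not recoverable from this extraction), the residual of an approximate iterate X̃ is R(X̃) = X̃ − A*X̃⁻¹A − Q, and its Frobenius norm shrinks as the fixed-point iteration proceeds:

```python
import numpy as np

def residual_norm(X, A, Q):
    """||X - A^* X^{-1} A - Q||_F for the assumed equation form."""
    return np.linalg.norm(X - A.conj().T @ np.linalg.solve(X, A) - Q, "fro")

A = 0.3 * np.array([[1.0, 0.2], [0.0, 1.0]])
Q = np.eye(2)

# Run the (assumed) fixed-point iteration and record residual norms.
X = Q.copy()
norms = []
for _ in range(25):
    X = Q + A.conj().T @ np.linalg.solve(X, A)
    norms.append(residual_norm(X, A, Q))
```

The decreasing sequence of residual norms is exactly the quantity a backward error bound like Theorem 10 turns into a guarantee on the distance between X̃ and the exact solution.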
5. Condition Number
5.1. The Complex Case
By the theory of condition number developed by Rice , we define the condition number of the Hermitian positive definite solution to (1) by where , , and , , are positive parameters. Taking in (54) gives the absolute condition number , and taking , , and in (54) gives the relative condition number .
Let be the matrix representation of the linear operator . Then it is easy to see that Let , , , , where , , and is the vec-permutation matrix, such that . Then we obtain
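The matrix representation used above can be assembled with Kronecker products. As a hedged sketch, again assuming the illustrative real equation form X − AᵀX⁻¹A = Q (not the paper's exact equation), the Fréchet derivative at X maps W to W + AᵀX⁻¹WX⁻¹A, whose matrix representation under column-stacking vec is I + (X⁻¹A)ᵀ ⊗ (AᵀX⁻¹):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = 0.2 * rng.standard_normal((n, n))
X = rng.standard_normal((n, n))
X = (X + X.T) / 2 + n * np.eye(n)   # safely positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                    # symmetric direction

Xinv = np.linalg.inv(X)
vec = lambda M: M.flatten(order="F")

# Operator applied directly: L(W) = W + A^T X^{-1} W X^{-1} A
LW = W + A.T @ Xinv @ W @ Xinv @ A

# Matrix representation: I + (X^{-1} A)^T (kron) (A^T X^{-1})
M = np.eye(n * n) + np.kron((Xinv @ A).T, A.T @ Xinv)
print(np.allclose(M @ vec(W), vec(LW)))  # True
```

Once such a matrix M is available, the condition number reduces to a norm of M⁻¹ applied to the perturbation data, which is how explicit expressions like those in this section are evaluated in practice.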
Then we have the following theorem.
Remark 12. From (60) we have the relative condition number
5.2. The Real Case
In this subsection we consider the real case; that is, all the coefficient matrices and of (1) are real. In this case the corresponding solution is also real. Arguments completely similar to those of Theorem 11 give the following theorem.
Theorem 13. Let , be real and let be the condition number defined by (54). If , then has the explicit expression where
Remark 14. In the real case the relative condition number is given by
6. Numerical Examples
To illustrate the results of the previous sections, three simple examples are given in this section. The computations were carried out using MATLAB 7.1. As the stopping criterion we take .
Example 15. We consider the matrix equation with
Suppose that the coefficient matrices and are perturbed to , where and is a random matrix generated by the MATLAB function randn.
The assumptions in Theorem 5 are
By Theorems 5 and 8, we can compute the relative perturbation bounds and , respectively. The results are averaged as the geometric mean over 10 randomly perturbed runs; some of them are listed in Table 2.
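The experimental protocol described above (random perturbations via randn-style noise, geometric mean over 10 runs) can be reproduced in outline. The equation form and all parameter values below are illustrative assumptions, not the data of Example 15:

```python
import numpy as np

rng = np.random.default_rng(2)

def solve(A, Q, tol=1e-12, max_iter=2000):
    """Fixed-point iteration for the assumed form X = Q + A^T X^{-1} A."""
    X = Q.copy()
    for _ in range(max_iter):
        X_new = Q + A.T @ np.linalg.solve(X, A)
        if np.linalg.norm(X_new - X, "fro") <= tol:
            return X_new
        X = X_new
    return X

n = 4
A = 0.1 * rng.standard_normal((n, n))   # illustrative coefficient matrix
Q = np.eye(n)
X = solve(A, Q)

# Perturb A by small random noise, 10 runs, geometric mean of relative errors.
eps = 1e-8
rel_errors = []
for _ in range(10):
    dA = eps * rng.standard_normal((n, n))
    X_pert = solve(A + dA, Q)
    rel_errors.append(np.linalg.norm(X_pert - X, "fro") / np.linalg.norm(X, "fro"))

geo_mean = float(np.exp(np.mean(np.log(rel_errors))))
```

In a full replication of the paper's tables one would also evaluate the perturbation bounds of Theorems 5 and 8 for each run and compare them against geo_mean.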
Example 16. We consider the matrix equation with Choose , . Let the approximate solution of be computed by the iterative method (6), where denotes the iteration number.
The residual satisfies the conditions in Theorem 10.
By Theorem 10, we can compute the backward error bound for as follows: where