Research Article | Open Access

Aijing Liu, Guoliang Chen, "On the Hermitian Positive Definite Solutions of Nonlinear Matrix Equation $X^s + A^*X^{-t_1}A + B^*X^{-t_2}B = Q$", *Mathematical Problems in Engineering*, vol. 2011, Article ID 163585, 18 pages, 2011. https://doi.org/10.1155/2011/163585

# On the Hermitian Positive Definite Solutions of Nonlinear Matrix Equation $X^s + A^*X^{-t_1}A + B^*X^{-t_2}B = Q$

**Academic Editor:** Mohammad Younis

#### Abstract

The nonlinear matrix equation $X^s + A^*X^{-t_1}A + B^*X^{-t_2}B = Q$ has many applications in engineering, control theory, dynamic programming, ladder networks, stochastic filtering, statistics, and so forth. In this paper, the Hermitian positive definite solutions of this equation are considered, where $Q$ is a Hermitian positive definite matrix, $A$ and $B$ are nonsingular complex matrices, $s$ is a positive number, and $0 < t_1 \le 1$, $0 < t_2 \le 1$. Necessary and sufficient conditions for the existence of Hermitian positive definite solutions are derived, and a sufficient condition for the existence of a unique Hermitian positive definite solution is given. In addition, some necessary conditions and some sufficient conditions for the existence of Hermitian positive definite solutions are presented. Finally, an iterative method is proposed to compute the maximal Hermitian positive definite solution, and a numerical example is given to show the efficiency of the proposed method.

#### 1. Introduction

We consider the nonlinear matrix equation
$$X^s + A^*X^{-t_1}A + B^*X^{-t_2}B = Q, \tag{1.1}$$
where $Q$ is an $n \times n$ Hermitian positive definite matrix, $A$ and $B$ are nonsingular complex matrices, $s$ is a positive number, and $0 < t_1 \le 1$, $0 < t_2 \le 1$. Here $A^*$ stands for the conjugate transpose of the matrix $A$.

Nonlinear matrix equations of the form (1.1) have many applications in engineering, control theory, dynamic programming, ladder networks, stochastic filtering, statistics, and so forth. The solutions of practical interest are the Hermitian positive definite (HPD) solutions. The existence of HPD solutions of (1.1) has been investigated in some special cases. Long et al. [1] studied (1.1) when $s = 1$ and $t_1 = t_2 = 1$. In addition, many papers have considered the Hermitian positive definite solutions of
$$X^s + A^*X^{-t}A = Q. \tag{1.2}$$
For instance, the authors of [2–5] studied (1.2) when $s = 1$, $t = 1$. Hasanov [6, 7] investigated (1.2) when $s = 1$, $0 < t \le 1$. Then Peng et al. [8] proposed iterative methods for the extremal positive definite solutions of (1.2) in two cases: $0 < t \le 1$ and $t \ge 1$. Cai and Chen [9, 10] studied (1.2) in two cases: $s$ and $t$ positive integers, and $s \ge 1$, $0 < t \le 1$ or $0 < s \le 1$, $t \ge 1$, respectively.

In this paper, we study the HPD solutions of (1.1). The paper is organized as follows. In Section 2, we derive necessary and sufficient conditions for the existence of HPD solutions of (1.1) and give a sufficient condition for the existence of a unique HPD solution of (1.1). We also present some necessary conditions and sufficient conditions for the existence of HPD solutions of (1.1). Then in Section 3, we propose an iterative method for obtaining the maximal HPD solution of (1.1). We give a numerical example in Section 4 to show the efficiency of the proposed iterative method.

We start with some notation used throughout this paper. The symbol $\mathbb{C}^{n \times n}$ denotes the set of $n \times n$ complex matrices. We write $A > 0$ ($A \ge 0$) if the matrix $A$ is positive definite (semidefinite). If $A - B$ is positive definite (semidefinite), then we write $A > B$ ($A \ge B$). We use $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ to denote the maximal and minimal eigenvalues of a matrix $A$. We use $\|A\|$ and $\|A\|_F$ to denote the spectral and Frobenius norms of a matrix $A$, and we also use $\|x\|$ to denote the 2-norm of a vector $x$. We use $X_{\min}$ and $X_{\max}$ to denote the minimal and maximal HPD solutions of (1.1), that is, for any HPD solution $X$ of (1.1), $X_{\min} \le X \le X_{\max}$. The symbol $I$ denotes the identity matrix. The symbol $\rho(A)$ denotes the spectral radius of $A$. For matrices $A$ and $B$, $A \otimes B$ is the Kronecker product, and $\operatorname{vec}(A)$ is the vector obtained by stacking the columns of $A$.
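The Kronecker product and the vec operator interact through the standard identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$, which is what makes them useful for turning matrix equations into ordinary linear systems. A minimal numerical check (the matrices here are arbitrary illustrative data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

# vec stacks the columns of a matrix into one long vector.
def vec(M):
    return M.reshape(-1, order="F")

# Identity: vec(A X B) = (B^T kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)

# The spectral norm never exceeds the Frobenius norm.
print(np.linalg.norm(A, 2) <= np.linalg.norm(A, "fro"))  # True
```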

#### 2. Solvability Conditions and Properties of the HPD Solutions

In this section, we derive necessary and sufficient conditions for (1.1) to have an HPD solution and give a sufficient condition for the existence of a unique HPD solution of (1.1). We also present some necessary conditions and some sufficient conditions for the existence of Hermitian positive definite solutions of (1.1).

Lemma 2.1 (see [11]). *If $A \ge B > 0$ (or $A > B > 0$), then $A^{\alpha} \ge B^{\alpha} > 0$ (or $A^{\alpha} > B^{\alpha} > 0$) for all $\alpha \in (0, 1]$, and $B^{\alpha} \ge A^{\alpha} > 0$ (or $B^{\alpha} > A^{\alpha} > 0$) for all $\alpha \in [-1, 0)$.*
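Assuming Lemma 2.1 is the standard Löwner–Heinz inequality (the form cited in related papers on equations of this type), a quick numerical sanity check for the exponent $\alpha = 1/2$ can be run on random positive definite matrices; the data below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def random_hpd(n):
    # M M^T + I is symmetric positive definite.
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

def mat_power(S, p):
    # Fractional power of a symmetric positive definite matrix
    # via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * w**p) @ V.T

B = random_hpd(n)
A = B + random_hpd(n)          # so A > B > 0

# Loewner-Heinz: A >= B > 0 implies A^0.5 >= B^0.5 ...
d = np.linalg.eigvalsh(mat_power(A, 0.5) - mat_power(B, 0.5))
assert d.min() > -1e-10

# ... and B^{-0.5} >= A^{-0.5} (the order reverses for negative exponents).
d = np.linalg.eigvalsh(mat_power(B, -0.5) - mat_power(A, -0.5))
assert d.min() > -1e-10
```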

Lemma 2.2 (see [12]). *Let and be positive operators on a Hilbert space such that , , and . Then
**
hold for any .*

Lemma 2.3 (see [13]). *Let , , . Then*(1)* is increasing on and decreasing on ;*(2)*. *

Lemma 2.4 (see [14]). *If and are Hermitian matrices of the same order with , then .*

Lemma 2.5 (see [15]). *If , and and are positive definite matrices of the same order with , then and . Here stands for one kind of matrix norm.*

Lemma 2.6 (see [5]). *Let and be two arbitrary compatible matrices. Then .*

Theorem 2.7. *Equation (1.1) has an HPD solution if and only if can factor as
**
where is a nonsingular matrix and is column orthonormal.*

*Proof.* If (1.1) has an HPD solution $X$, then $Q > 0$. Let $Q = L^*L$ be the Cholesky factorization, where $L$ is a nonsingular matrix. Then (1.1) can be rewritten as
Let ,, then , . Moreover, (2.3) turns into
that is,
which means that is column orthonormal.

Conversely, if have the decompositions as (2.2), let , then is an HPD matrix, and it follows from (2.2) and (2.4) that
Hence (1.1) has an HPD solution.
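The construction in this proof can be verified numerically: pick an HPD matrix $X$ and nonsingular $A$, $B$, define $Q$ from the left-hand side of (1.1) (taken here in the form $X^s + A^*X^{-t_1}A + B^*X^{-t_2}B = Q$), and check that the three blocks from the proof, scaled by $L^{-1}$, stack into a column-orthonormal matrix. This is an illustrative sketch; the parameter values and test matrices are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
s, t1, t2 = 2.0, 0.5, 1.0

def mat_power(S, p):
    # Fractional power of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(S)
    return (V * w**p) @ V.T

M = rng.standard_normal((n, n))
X = M @ M.T + np.eye(n)            # an HPD "solution"
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Define Q so that X solves X^s + A^* X^{-t1} A + B^* X^{-t2} B = Q.
Q = mat_power(X, s) + A.T @ mat_power(X, -t1) @ A + B.T @ mat_power(X, -t2) @ B

# Cholesky factorization Q = L^* L (np.linalg.cholesky returns the
# lower-triangular C with Q = C C^*, so take L = C^*).
L = np.linalg.cholesky(Q).T
Linv = np.linalg.inv(L)

# Stack the three blocks from the proof, each scaled by L^{-1}.
Z = np.vstack([mat_power(X, s / 2) @ Linv,
               mat_power(X, -t1 / 2) @ A @ Linv,
               mat_power(X, -t2 / 2) @ B @ Linv])

# Column orthonormality: Z^* Z = I.
assert np.allclose(Z.T @ Z, np.eye(n))
```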

Theorem 2.8. *Equation (1.1) has an HPD solution if and only if there exist a unitary matrix , a column-orthonormal matrix (in which ), and diagonal matrices and with such that
*

*Proof. *If (1.1) has an HPD solution, we have by Theorem 2.7 that the matrix is column orthonormal. According to the CS decomposition theorem (Theorem 3.8 in [16]), there exist unitary matrices (in which ), , such that
where , and . Thus the diagonal matrices and . Furthermore, noting that is nonsingular, by (2.8), we have

Equation (2.10) is equivalent to . Let be partitioned as , in which , , then we have
from which it follows that , . By (2.9), we have . Then by (2.2), we have

Conversely, assume that have the decomposition (2.7). Let , which is an HPD matrix. Then it is easy to verify that is an HPD solution of (1.1).

Theorem 2.9. *If (1.1) has an HPD solution , then , where
**
in which and are the minimal and maximal eigenvalues of respectively, and are the minimal and maximal eigenvalues of , respectively.*

*Proof. * Let be an HPD solution of (1.1), then it follows from and Lemma 2.1 that , . Hence
Thus we have
On the other hand, from , it follows that
Let and be the minimal and maximal eigenvalues of , respectively. Since , and , by Lemma 2.2, we get .

Similarly, we have , in which and are the minimal and maximal eigenvalues of , respectively.

Hence we have .

Theorem 2.10. *If for all , and
**
where is defined by (2.13), then (1.1) has a unique HPD solution.*

*Proof. *By the definition of , we have . Hence .

We consider the map and let . Obviously, is a convex, closed, and bounded set and is continuous on .

By the hypothesis of the theorem, we have
that is, . Hence .

For arbitrary , we have
Hence
From (2.20), it follows that

According to the definition of the map , we have
Combining (2.21) and (2.22), we have by Lemma 2.5 that

Since , we know that the map is a contraction on . By the Banach fixed-point theorem, the map has a unique fixed point in , which shows that (1.1) has a unique HPD solution in .
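The uniqueness argument above is the standard Banach fixed-point scheme: a contraction on a closed set has exactly one fixed point, reached by iterating the map from any starting point. In one dimension the same mechanism looks like the following (the map $g(x) = \cos x$ is an illustrative stand-in, not related to the matrix map in the proof):

```python
import math

# Banach fixed-point iteration for the contraction g(x) = cos(x) on [0, 1]:
# |g'(x)| = |sin(x)| <= sin(1) < 1 there, so the iterates converge to the
# unique fixed point (the Dottie number, ~0.739085).
x = 1.0
for _ in range(100):
    x = math.cos(x)
assert abs(x - math.cos(x)) < 1e-12
print(round(x, 6))  # 0.739085
```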

Theorem 2.11. *If (1.1) has an HPD solution , then
**
where , and is a solution of the equation
**
in .*

*Proof. * Consider the sequence defined as follows:
Let be an HPD solution of (1.1), then
Assuming that , then by Lemma 2.1, we have
Therefore . Then by the principle of induction, we get .

Since the sequence is monotonically decreasing and positive, it is convergent. Let , then , that is, is a solution of the equation .

Consider the function , since
from which it follows that .

Next we will prove that . Obviously, . On the other hand, for the sequence , since , we may assume that without loss of generality. Then
Hence , . So .

Consequently, we have .

This completes the proof.

Theorem 2.12. *If (1.1) has an HPD solution, then
*

*Proof.* For any eigenvalue of , let be a corresponding eigenvector. Multiplying (1.1) on the left by and on the right by , we have
which yields
Since , there exists a unitary matrix such that , where . Then (2.34) turns into the following form:
Let , then (2.35) reduces to
from which we obtain
From Lemma 2.3, we know that
that is,
Noting that , we get
Consequently,

Then .

Since , denote ; then the last inequality directly implies (2.31).

The proof of (2.32) is similar to that of (2.31) and is therefore omitted.

Theorem 2.13. *If and (1.1) has an HPD solution, then
*

*Proof. *If (1.1) has an HPD solution, we have by Theorem 2.7 that
and the matrix is column orthonormal, from which we have
Hence,

Similarly, we have .

Thus, . Hence .

On the other hand, by Lemma 2.6 and (2.2), we get
The proof of (2.43) is similar to that of (2.42).

If , we denote . Then (1.1) turns into (2.48). Consider the following equations:

We assume that , and satisfy (2.51). By (2.51) and Lemma 2.3, we know that (2.49) has two positive real roots , and that (2.50) likewise has two positive real roots . It is easy to prove that

We define matrix sets as follows:

Theorem 2.14. *Suppose that , and satisfy (2.51), that is,
**
Then*(i)* Equation (2.48) has a unique HPD solution in ;*(ii)* Equation (2.48) has no HPD solution in .*

*Proof. * Consider the map , which is continuous on . Obviously, is a convex, closed, and bounded set. If ,
Hence, we have . One has
Hence, we have .

Thus, maps into itself.

For arbitrary , similar to (2.21) and (2.22), we have
Combining (2.57), we have by Lemma 2.5 and (2.49)

Thus the map is a contraction on . By the Banach fixed-point theorem, has a unique fixed point in , which shows that (2.48) has a unique HPD solution in .

Assume that is an HPD solution of (2.48); then
that is, . So, , thus (2.48) has no HPD solution in .
that is, . So, or , thus (2.48) has no HPD solution in .

This completes the proof.

#### 3. Iterative Method for the Maximal HPD Solution

In this section, we consider the iterative method for obtaining the maximal HPD solution of (1.1). We propose the following algorithm which avoids calculating matrix inversion in the process of iteration.

*Algorithm 1.*

*Step 1.* Input initial matrices:
where , and is defined in Theorem 2.11.

*Step 2.* For , compute

Theorem 3.1. *If (1.1) has an HPD solution, then it has the maximal one . Moreover, for the sequences and generated by Algorithm 1, one has
*

*Proof. *Since is an HPD solution of (1.1), by Theorem 2.11, we have , thus
By Lemmas 2.1 and 2.4, we have
According to Lemma 2.1 and , we have
that is, , by Lemma 2.1 again, it follows that .

Hence , and .

Assume that , and ; we will prove the inequalities , and .

By Lemmas 2.1 and 2.4, we have
Since , we have , thus we have by Lemma 2.1 that
that is, , by Lemma 2.1 again, it follows that .

Hence we have by induction that
are true for all , and so and exist. Suppose , ; then taking limits in Algorithm 1 leads to and . Therefore is an HPD solution of (1.1), thus . Moreover, since each , we have , and then . The theorem is proved.
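Algorithm 1 refines the natural fixed-point iteration $X_{k+1} = \bigl(Q - A^*X_k^{-t_1}A - B^*X_k^{-t_2}B\bigr)^{1/s}$ for the maximal solution, started from $X_0 = Q^{1/s}$. The sketch below implements that basic iteration with an explicit inverse at each step; it is a simplified stand-in, not the paper's inversion-free recursion, and the test problem, tolerances, and starting matrix are illustrative assumptions:

```python
import numpy as np

def mat_power(S, p):
    # Fractional power of a Hermitian positive definite matrix.
    w, V = np.linalg.eigh(S)
    return (V * w**p) @ V.conj().T

def fixed_point_solve(A, B, Q, s, t1, t2, tol=1e-12, maxit=500):
    """Naive fixed-point iteration for X^s + A^* X^{-t1} A + B^* X^{-t2} B = Q.

    Starts from X_0 = Q^{1/s}; each step forms X^{-t} explicitly, so this is
    a simplified variant of the paper's inversion-free Algorithm 1.
    """
    X = mat_power(Q, 1.0 / s)
    for _ in range(maxit):
        R = (Q - A.conj().T @ mat_power(X, -t1) @ A
               - B.conj().T @ mat_power(X, -t2) @ B)
        X_new = mat_power(R, 1.0 / s)
        if np.linalg.norm(X_new - X, "fro") < tol:
            return X_new
        X = X_new
    return X

# Small test problem: A, B are scaled down so the equation is solvable.
n = 3
rng = np.random.default_rng(3)
A = 0.1 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, n))
Q = 2.0 * np.eye(n)
s, t1, t2 = 1.0, 1.0, 0.5

X = fixed_point_solve(A, B, Q, s, t1, t2)
res = np.linalg.norm(mat_power(X, s) + A.T @ mat_power(X, -t1) @ A
                     + B.T @ mat_power(X, -t2) @ B - Q, "fro")
assert res < 1e-10
```

Because the iterates decrease monotonically from $X_0 = Q^{1/s}$, the limit reached here is the maximal HPD solution, matching the conclusion of Theorem 3.1.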

Theorem 3.2. *If (1.1) has an HPD solution and after iterative steps of Algorithm 1, one has , then
**
where is defined by (2.13).*

*Proof.* From the proof of Theorem 3.1, we have for all . Thus we have by Theorem 2.9 that , which implies
Since
we have by Lemma 2.5 that

#### 4. Numerical Example

In this section, we give a numerical example to illustrate the efficiency of the proposed algorithm. All tests are performed in MATLAB 7.0 with machine precision around $2.22 \times 10^{-16}$. We stop the iteration when the residual .

*Example 4.1. *Let , , , and