
Research Article | Open Access

Volume 2014 | Article ID 705830 | 9 pages | https://doi.org/10.1155/2014/705830

# The Iteration Solution of Matrix Equation $AXB=C$ Subject to a Linear Matrix Inequality Constraint

Accepted: 22 Jul 2014
Published: 27 Aug 2014

#### Abstract

We propose a feasible and effective iteration method to find solutions to the matrix equation $AXB=C$ subject to a matrix inequality constraint $DXE \ge F$, where the inequality is understood entrywise; that is, $M \ge 0$ means that every entry of the matrix $M$ is nonnegative. Global convergence results are obtained, and some numerical results are reported to illustrate the applicability of the method.

#### 1. Introduction

In this paper, we consider the following problem:

$$AXB = C, \qquad DXE \ge F, \tag{1}$$

where $A$, $B$, $C$, $D$, $E$, and $F$ are known constant matrices of compatible dimensions and $X$ is the unknown matrix.

The solutions of linear matrix equations with special structures have been widely studied, for example, symmetric solutions (see [1–3]), $R$-symmetric solutions (see [5, 6]), $(R,S)$-symmetric solutions (see [7, 8]), bisymmetric solutions (see [10, 11]), centrosymmetric solutions, and other general solutions (see [21–27]). Some iterative methods to solve a pair of linear matrix equations have been studied as well (see [4, 12–20, 28–31]).

However, very little research has been done on the solutions of a matrix equation subject to a matrix inequality constraint. In 2012, Peng et al. (see [32]) proposed a feasible and effective algorithm to find solutions to the matrix equation $AX=B$ subject to a matrix inequality constraint, based on the polar decomposition in Hilbert space. The next year, Li et al. (see [33]) used a similar approach to study the bisymmetric solutions of the same problem. Motivated and inspired by the work mentioned above, in this paper we consider the solutions of the matrix equation $AXB=C$ subject to a linear matrix inequality constraint. We use the theory of the analytical solution of the matrix equation to transform the problem into a smallest nonnegative deviation problem for a matrix inequality. Then, combined with the polar decomposition theory, an iterative method for solving this transformed problem is proposed. Meanwhile, global convergence results are obtained. Some numerical results are reported and indicate that the proposed method is quite effective.

Throughout this paper, we use the following notation: for $A \in \mathbb{R}^{m\times n}$, we write $A^T$, $A^+$, and $\|A\|$ to denote the transpose, the Moore–Penrose generalized inverse, and the Frobenius norm of the matrix $A$, respectively. For any $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}^{m\times n}$, we write $A \ge B$ if $a_{ij} \ge b_{ij}$ for all $i, j$. $A \otimes B$ denotes the Kronecker product defined as $A \otimes B = (a_{ij}B)$. For the matrix $A = (a_1, a_2, \dots, a_n)$, $\operatorname{vec}(A)$ denotes the vec operator defined as $\operatorname{vec}(A) = (a_1^T, a_2^T, \dots, a_n^T)^T$. For $A \in \mathbb{R}^{m\times n}$, $A_+$ is the matrix with $(i,j)$th entry equal to $\max\{a_{ij}, 0\}$. Obviously, $A_+ \ge 0$ and $A_+ \ge A$. The inner product in the space $\mathbb{R}^{m\times n}$ is defined as $\langle A, B\rangle = \operatorname{tr}(B^T A)$. Hence $\mathbb{R}^{m\times n}$ is an inner product space, and the norm of a matrix generated by this inner product is the Frobenius norm.
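The Kronecker product and the vec operator interact through the well-known identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$, which is used repeatedly below. A quick numerical check (a sketch in NumPy; `order="F"` gives the column-stacking vec used here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

# vec stacks the columns of a matrix into one long vector (column-major order)
vec = lambda M: M.reshape(-1, order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)  # vec(AXB) == (B^T kron A) vec(X)
```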

This paper is organized as follows. In Section 2, we transform problem (1) into a matrix inequality smallest nonnegative deviation problem. Then we study the existence of the solutions of problem (1) in Section 3. The iterative method for the transformed problem and its convergence analysis are presented in Section 4. Section 5 shows some numerical experiments. Finally, we conclude this paper in Section 6.

#### 2. Transforming the Original Problem

In this section, we use the theory of the analytical solution of the matrix equation to transform problem (1) into a smallest nonnegative deviation problem for a matrix inequality. Firstly, we present the following lemma about the analytical solution of the matrix equation $AXB = C$.

Lemma 1 (see Theorem 1.21 in [34]). Given $A \in \mathbb{R}^{p\times m}$, $B \in \mathbb{R}^{n\times q}$, and $C \in \mathbb{R}^{p\times q}$, the matrix equation $AXB = C$ is solvable for $X$ in $\mathbb{R}^{m\times n}$ if and only if $AA^+CB^+B = C$. Moreover, if the matrix equation is solvable, then the general solutions can be expressed as

$$X = A^+CB^+ + Y - A^+AYBB^+, \tag{3}$$

where $Y \in \mathbb{R}^{m\times n}$ is an arbitrary matrix.
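Lemma 1 is easy to verify numerically. The sketch below (NumPy, with `pinv` for the Moore–Penrose inverse) builds a consistent equation by construction, checks the solvability condition, and confirms that the general-solution formula (3) solves $AXB = C$ for an arbitrary $Y$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Build a consistent equation AXB = C (X0 is a particular solution)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X0 = rng.standard_normal((3, 5))
C = A @ X0 @ B

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Solvability test of Lemma 1: A A^+ C B^+ B == C
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# General solution X = A^+ C B^+ + Y - A^+ A Y B B^+ for arbitrary Y
Y = rng.standard_normal((3, 5))
X = Ap @ C @ Bp + Y - Ap @ A @ Y @ B @ Bp
assert np.allclose(A @ X @ B, C)  # every such X solves AXB = C
```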

Assume that $AA^+CB^+B = C$; that is, assume that the matrix equation $AXB = C$ is solvable. Substituting (3) into the second inequality of (1), we get

$$D(A^+CB^+ + Y - A^+AYBB^+)E \ge F. \tag{4}$$

By simple calculation, we have

$$DYE - DA^+AYBB^+E \ge F - DA^+CB^+E. \tag{5}$$

Hence (3) is a solution of (1) if and only if the matrix $Y$ in (3) satisfies (5). However, inequality (5) may be unsolvable. In this case, we can find a nonnegative matrix $Z \ge 0$ such that

$$DYE - DA^+AYBB^+E + Z \ge F - DA^+CB^+E \tag{7}$$

is solvable (if (5) is solvable, then $Z = 0$). Obviously, there exist many $Z$ satisfying (7). Here we need to find, among all $Z \ge 0$ satisfying (7), a $Z$ with the smallest norm. Thus we consider the following smallest nonnegative deviation problem for the matrix inequality, which is also a quadratic programming problem:

$$\min_{Y,\; Z \ge 0} \|Z\|^2 \quad \text{subject to} \quad DYE - DA^+AYBB^+E + Z \ge F - DA^+CB^+E. \tag{8}$$

If $Y$ and $Z$ solve (8) with $Z = 0$, then $Y$ solves (5), and (3) is a solution of (1). If $Y$ and $Z$ solve (8) with $Z \ne 0$, then $Y$ solves (7) in the smallest nonnegative deviation sense, and (3) is a solution of the matrix equation over the nonnegative smallest deviation constraint of the inequality. Conversely, if $Y$ solves (5), then $Y$ and $Z = 0$ solve (8). So, to find $X$ satisfying (1), we only need to solve the smallest nonnegative deviation problem (8).
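For a fixed $Y$, the smallest entrywise-nonnegative deviation making the constraint hold can be computed in closed form as the positive part of the violation. A small sketch (the names `G_of_Y` and `H` are shorthands introduced here for the left- and right-hand sides of (5); they are not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stand-ins: G_of_Y plays the role of the left-hand side of (5)
# evaluated at some fixed Y, and H the right-hand side.
G_of_Y = rng.standard_normal((3, 3))
H = rng.standard_normal((3, 3))

# Smallest entrywise-nonnegative Z with G(Y) + Z >= H is the positive part
# of the violation H - G(Y):
Z = np.maximum(H - G_of_Y, 0.0)

assert np.all(Z >= 0)                      # Z is nonnegative
assert np.all(G_of_Y + Z >= H - 1e-12)     # the inequality now holds
```

Any other nonnegative deviation satisfying the inequality dominates this $Z$ entrywise, which is why (8) is a well-posed quadratic program.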

Suppose that $Y$ and $Z$ solve (8); then the constraint of (8) holds with $\|Z\|$ minimal. On the other hand, if a pair of matrices solves the smallest nonnegative deviation problem (8), then there exists a nonnegative matrix satisfying the constraint of (8). Consequently, the smallest nonnegative deviation problem (8) is equivalent to optimization problem (11). Eliminating $Z$ from (11) yields optimization problem (12). If a pair solves (12), then, together with the corresponding nonnegative matrix, it solves (11), and hence (8). This allows one to determine whether or not (3) is a solution of (1). Therefore, to solve (1), we first solve optimization problem (12). The iteration method proposed below takes advantage of these equivalent forms of (1).

#### 3. The Solution of the Problem

To establish the existence of the solutions of (8), (11), and (12), we give the following theorem in the first place.

Theorem 2 (see [35, 36]). Let $K \subseteq \mathbb{R}^{m\times n}$ be a closed convex cone (i.e., $K$ is a closed convex set and $\alpha A \in K$ for all $A \in K$ and $\alpha \ge 0$). Let $K^\circ$ be the polar cone of $K$; that is,

$$K^\circ = \{ B \in \mathbb{R}^{m\times n} : \langle A, B\rangle \le 0 \ \text{for all } A \in K \}.$$

Then every $A \in \mathbb{R}^{m\times n}$ has a unique polar decomposition of the form

$$A = A_1 + A_2, \qquad A_1 \in K, \quad A_2 \in K^\circ, \quad \langle A_1, A_2\rangle = 0.$$

Theorem 2 implies that the two components of the polar decomposition are the projections of the given matrix onto the cone and onto its polar cone, respectively.
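A concrete instance of Theorem 2 takes the cone to be the entrywise-nonnegative matrices, whose polar cone is the set of entrywise-nonpositive matrices; the polar decomposition is then the entrywise positive/negative part split. A numerical check of this special case:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

# K: entrywise-nonnegative matrices; its polar cone: entrywise-nonpositive ones.
A1 = np.maximum(A, 0.0)   # projection of A onto K
A2 = np.minimum(A, 0.0)   # projection of A onto the polar cone

assert np.allclose(A1 + A2, A)             # A = A1 + A2
assert abs(np.trace(A2.T @ A1)) < 1e-12    # <A1, A2> = 0 (orthogonality)
```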

For problem (12), we introduce two matrix sets, a cone and a candidate polar cone constructed from the constraint of (12). We will now prove that the first set is a closed convex cone and that the second is its polar cone.

Lemma 3. The matrix set is a closed convex cone in the Hilbert space .

Proof. For all , there exists and such that . By the definition of Kronecker product, we have where the second equality follows from the definition of Moore-Penrose generalized inverse, and It is easy to see that . As and are arbitrary, is arbitrary as well. Let and , where ; then . Thus is equivalent to the following set: By the result in [37], we know that the set is a closed convex cone in the Hilbert space . Hence is a closed convex cone in the Hilbert space .

Lemma 4. The matrix set is the polar cone of the matrix set .

Proof. By the definition of the polar cone, we get So we just need to prove . Firstly, we prove .
For all and , we have Thus . Then .
Now we prove . For all , if , then . So there exists a positive real number and such that Let , where : This contradicts the assumption .

Theorem 5. Assume that the matrices and solve (12). Define matrices and as Then , , , and ; namely, and are the polar decomposition of .

Proof. As and solve (12), we have . Then Thus .
By Lemmas 3 and 4 and Theorem 2, we get that has unique polar decomposition with and ; that is, there exists unique and such that and .
Consider optimization problem (12). The objective function in (12) is Since , solve (12), , and , we have . Thus . Then . This completes the proof.

Remark 6. Theorem 5 implies that if a pair of matrices and solves optimization problem (12), then and are the projections of onto and , respectively. Conversely, by Theorem 2 we get that has unique polar decomposition of the form , , and . By the definition of , there exists and such that and . Moreover, , , and solve optimization problem (11). Thus problem (11) is solvable.

By the above analysis, we get the following theorem immediately.

Theorem 7. Problem (1) is solvable if and only if the smallest nonnegative deviation determined by the solutions of optimization problem (12) is zero.

#### 4. Iterative Method and Convergence Analysis

In this section, we present an iteration method to solve (1) and give the convergence analysis. We are now in a position to give our algorithm to compute the solutions of (8), (11), and (12).

Algorithm 8 (an iteration method for (1)).
Step 0. Input matrices $A$, $B$, $C$, $D$, $E$, and $F$. Choose an initial matrix. Compute the required initial quantities. Take the stopping criterion $\varepsilon > 0$. Set $k := 0$.
Step 1. Find a solution of the least squares problem
Step 2. Update the sequences
Step 3. If the stopping criterion is satisfied, then stop; otherwise, set $k := k + 1$ and go to Step 1.
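The exact update formulas of Algorithm 8 do not survive in this extraction, so the following is only a structural sketch under stated assumptions: it alternates a least squares step with the projection $W = \max(G(Y), H)$ onto the feasible set $\{W : W \ge H\}$, with `G` a generic matrix standing in for the linear operator of (5) and `H` a stand-in right-hand side:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
G = rng.standard_normal((n, n))   # stand-in for the linear operator in (5)
H = rng.standard_normal(n)        # stand-in right-hand side

y = np.zeros(n)                   # initial iterate (null matrix, vectorized)
for k in range(200):
    w = np.maximum(G @ y, H)                   # project G(y) onto {w >= H}
    y, *_ = np.linalg.lstsq(G, w, rcond=None)  # least squares step toward w

residual = np.maximum(H - G @ y, 0.0)  # smallest nonnegative deviation at y
assert np.all(G @ y >= H - 1e-8)       # feasibility reached in this toy case
```

In this toy case `G` is square and generically invertible, so the scheme reaches feasibility quickly; the paper's algorithm handles the general rank-deficient situation.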

Next we give the following lemma.

Lemma 9. is the direct sum of and ; that is, where

Proof. Obviously, and are linear subspaces of . By orthogonal decomposition theorem in the space, we obtain , where is the orthogonal complement space of . So we just need to prove .
We prove firstly. For all and , we have where the third equality follows from the definition of Moore-Penrose generalized inverse. Hence ; namely, .
Then we prove . For all and , we have . In the same way, we get . As is arbitrary, we take . Then So ; that is, . Thus .
From the above, we get . Therefore, .

Now we present the convergence theorem.

Theorem 10. Let be the unique polar decomposition of . Then

Proof. Since matrix solves (27), we have . This together with Algorithm 8 yields where the first inequality follows from and the second inequality follows from the nonexpansive property of the projection. This implies that the sequence is monotonically decreasing and bounded from below. So there exists a constant such that . Furthermore, the sequence is bounded and thus has at least one cluster point. Next we show that any cluster point of the sequence equals the claimed limit; consequently, the whole sequence converges to it.
Let be any cluster point of the sequence . Without loss of generality, we suppose . Obviously, . It follows from Lemma 9 that has a unique orthogonal decomposition of the form Moreover, by (27), satisfies . Thus : Let ; we get . This together with yields that So . Therefore Since , as well. This together with implies that .
Since we get . By Lemma 3 and , we gain . By the definition of , there exists and such that . Let . Then is the unique polar decomposition of . Hence and . Furthermore, This completes the proof of the theorem.

#### 5. Numerical Experiments

In this section, we present two numerical examples to illustrate the efficiency and the performance of Algorithm 8. Firstly, we consider least squares problem (27) in Algorithm 8.

By the definition of the Kronecker product, least squares problem (27) in Algorithm 8 is equivalent to an ordinary linear least squares problem (41) in the vectorized unknown. It is well known that the normal equation of problem (41) is (42). Using the properties of the Kronecker product, (42) is equivalent to (44). Substituting the relevant definitions into the above equation, we get the normal equation of problem (27). Problem (27) is therefore equivalent to a linear matrix equation, which can be solved by the modified conjugate gradient method.
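The vectorization argument above reduces the matrix least squares problem to an ordinary one. A sketch of this reduction for a generic problem $\min_X \|AXB - C\|$ (not the paper's specific operator), checking the matrix form $A^T(AXB - C)B^T = 0$ of the normal equation:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
C = rng.standard_normal((4, 6))

# min ||AXB - C||  <=>  min ||(B^T kron A) vec(X) - vec(C)||  (column-major vec)
M = np.kron(B.T, A)
x, *_ = np.linalg.lstsq(M, C.reshape(-1, order="F"), rcond=None)
X = x.reshape(3, 5, order="F")

# The least squares solution satisfies the normal equation M^T M x = M^T vec(C),
# which in matrix form reads A^T (A X B - C) B^T = 0.
assert np.allclose(A.T @ (A @ X @ B - C) @ B.T, 0, atol=1e-8)
```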

Algorithm 11 (modified conjugate gradient method).
Step 0. Input matrices , , , , , , and . Choose the initial matrix . Compute , , , and . Take the stopping criterion . Set .
Step 1. If , then stop; otherwise, set and go to Step 2.
Step 2. Update the sequences
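The displayed update formulas of Algorithm 11 are lost in this extraction; the sketch below is a standard CGLS-type conjugate gradient iteration in matrix form for $\min_X \|AXB - C\|_F$, which plays the same role but is not necessarily identical to the paper's modified conjugate gradient method:

```python
import numpy as np

def cg_least_squares(A, B, C, iters=200, tol=1e-10):
    """CGLS-type iteration for min ||A X B - C||_F (a generic sketch only)."""
    X = np.zeros((A.shape[1], B.shape[0]))
    R = C - A @ X @ B            # residual in the image space
    S = A.T @ R @ B.T            # residual of the normal equation
    P, gamma = S.copy(), np.sum(S * S)
    for _ in range(iters):
        Q = A @ P @ B
        alpha = gamma / np.sum(Q * Q)
        X += alpha * P
        R -= alpha * Q
        S = A.T @ R @ B.T
        gamma_new = np.sum(S * S)
        if np.sqrt(gamma_new) < tol:
            break
        P = S + (gamma_new / gamma) * P
        gamma = gamma_new
    return X

rng = np.random.default_rng(6)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((5, 6))
C = rng.standard_normal((4, 6))
X = cg_least_squares(A, B, C)
# X satisfies the normal equation A^T (A X B - C) B^T = 0
assert np.allclose(A.T @ (A @ X @ B - C) @ B.T, 0, atol=1e-6)
```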

In our experiments, all computations were done on a PC with a Pentium Dual-Core CPU E5800 @ 2.40 GHz. All programs were implemented in MATLAB R2011b. The initial matrix in Algorithm 8 is taken as the null matrix, and the termination criterion of Step 3 is used.

Example 12. Matrices $A$, $B$, $C$, $D$, $E$, and $F$ are given as follows:

Computing by Algorithm 8, we obtain a solution to inequality (5). It then follows from Theorem 7 that problem (1) is solvable. By substituting into (3), we obtain a solution to problem (1). Furthermore, the iterative error curve is shown in Figure 1.

Example 13. Matrices $A$, $B$, and $C$ are the same as in Example 12, $D$ and $E$ are identity matrices, and $F$ is given as follows:
Following Algorithm 8, we obtain a solution to inequality (5). It then follows from Theorem 7 that problem (1) is solvable. By substituting into (3), we obtain a solution to problem (1). Furthermore, the iterative error curve is shown in Figure 2.

#### 6. Conclusion

In this paper, we have proposed Algorithm 8 to find solutions to the matrix equation $AXB=C$ subject to a matrix inequality constraint, and global convergence results have been obtained. The least squares problem (27) in Algorithm 8 is solved by the modified conjugate gradient method. Numerical results confirm the good theoretical properties of our approach.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Authors’ Contribution

All authors contributed equally and significantly to the writing of this paper. All authors read and approved the final paper.

#### Acknowledgments

The project is supported by the National Natural Science Foundation of China (Grant nos. 11071041, 11201074), Fujian Natural Science Foundation (Grant no. 2013J01006), the University Special Fund Project of Fujian (Grant no. JK2013060), and R&D of Key Instruments and Technologies for Deep Resources Prospecting (the National R&D Projects for Key Scientific Instruments) under Grant no. ZDYZ2012-1-02-04.

1. H. Dai, “On the symmetric solutions of linear matrix equations,” Linear Algebra and Its Applications, vol. 131, pp. 1–7, 1990.
2. K. W. E. Chu, “Symmetric solutions of linear matrix equations by matrix decompositions,” Linear Algebra and Its Applications, vol. 119, pp. 35–50, 1989.
3. F. J. H. Don, “On the symmetric solutions of a linear matrix equation,” Linear Algebra and Its Applications, vol. 93, pp. 1–7, 1987.
4. Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, “Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations,” Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.
5. W. F. Trench, “Characterization and properties of matrices with generalized symmetry or skew symmetry,” Linear Algebra and Its Applications, vol. 377, pp. 207–218, 2004.
6. W. F. Trench, “Hermitian, Hermitian $R$-symmetric, and Hermitian $R$-skew symmetric Procrustes problems,” Linear Algebra and Its Applications, vol. 387, pp. 83–98, 2004.
7. W. F. Trench, “Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry,” Linear Algebra and Its Applications, vol. 380, pp. 199–211, 2004.
8. W. F. Trench, “Minimization problems for $(R,S)$-symmetric and $(R,S)$-skew symmetric matrices,” Linear Algebra and Its Applications, vol. 389, pp. 23–31, 2004.
9. D. X. Xie, Y. P. Sheng, and X. Hu, “The least-squares solutions of inconsistent matrix equation over symmetric and antipersymmetric matrices,” Applied Mathematics Letters, vol. 16, no. 4, pp. 589–598, 2003.
10. L. Zhao, X. Hu, and L. Zhang, “Least squares solutions to $AX=B$ for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation,” Linear Algebra and Its Applications, vol. 428, no. 4, pp. 871–880, 2008.
11. Q. Wang, J. Sun, and S. Li, “Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra,” Linear Algebra and Its Applications, vol. 353, no. 1–3, pp. 169–182, 2002.
12. M. Dehghan and M. Hajarian, “An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices,” Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010.
13. M. Dehghan and M. Hajarian, “Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations,” Applied Mathematical Modelling, vol. 35, no. 7, pp. 3285–3300, 2011.
14. C. Song, G. Chen, and L. Zhao, “Iterative solutions to coupled Sylvester-transpose matrix equations,” Applied Mathematical Modelling, vol. 35, no. 10, pp. 4675–4683, 2011.
15. C. Q. Gu and H. J. Qian, “Skew-symmetric methods for nonsymmetric linear systems with multiple right-hand sides,” Journal of Computational and Applied Mathematics, vol. 223, no. 2, pp. 567–577, 2009.
16. G. Konghua, X. Hu, and L. Zhang, “A new iteration method for the matrix equation $AX=B$,” Applied Mathematics and Computation, vol. 187, no. 2, pp. 1434–1441, 2007.
17. K. Jbilou, “Smoothing iterative block methods for linear systems with multiple right-hand sides,” Journal of Computational and Applied Mathematics, vol. 107, no. 1, pp. 97–109, 1999.
18. S. Karimi and F. Toutounian, “The block least squares method for solving nonsymmetric linear systems with multiple right-hand sides,” Applied Mathematics and Computation, vol. 177, no. 2, pp. 852–862, 2006.
19. F. Li, L. Gong, X. Hu, and L. Zhang, “Successive projection iterative method for solving matrix equation $AX=B$,” Journal of Computational and Applied Mathematics, vol. 234, no. 8, pp. 2405–2410, 2010.
20. F. Toutounian and S. Karimi, “Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides,” Applied Mathematics and Computation, vol. 178, no. 2, pp. 452–460, 2006.
21. Q. Wang, “A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity,” Linear Algebra and Its Applications, vol. 384, pp. 43–54, 2004.
22. G. X. Huang, F. Yin, and K. Guo, “An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation $AXB=C$,” Journal of Computational and Applied Mathematics, vol. 212, no. 2, pp. 231–244, 2008.
23. J. K. Baksalary and R. Kala, “The matrix equation $AXB+CYD=E$,” Linear Algebra and Its Applications, vol. 30, pp. 141–147, 1980.
24. S. K. Mitra, “Common solutions to a pair of linear matrix equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$,” Proceedings of the Cambridge Philosophical Society, vol. 74, pp. 213–216, 1973.
25. S. K. Mitra, “A pair of simultaneous linear matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$ and a matrix programming problem,” Linear Algebra and Its Applications, vol. 131, pp. 107–123, 1990.
26. N. Shinozaki and M. Sibuya, “Consistency of a pair of matrix equations with an application,” Keio Science and Technology Reports, vol. 27, no. 10, pp. 141–146, 1975.
27. A. Navarra, P. L. Odell, and D. M. Young, “A representation of the general common solution to the matrix equations $A_1XB_1=C_1$ and $A_2XB_2=C_2$ with applications,” Computers & Mathematics with Applications, vol. 41, no. 7-8, pp. 929–935, 2001.
28. M. Hajarian, “Matrix form of the CGS method for solving general coupled matrix equations,” Applied Mathematics Letters, vol. 34, pp. 37–42, 2014.
29. M. Hajarian, “Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations,” Journal of the Franklin Institute, vol. 350, no. 10, pp. 3328–3341, 2013.
30. M. Hajarian, “Matrix form of the Bi-CGSTAB method for solving the coupled Sylvester matrix equations,” IET Control Theory & Applications, vol. 7, no. 14, pp. 1828–1833, 2013.
31. M. Hajarian, “Solving the general coupled and the periodic coupled matrix equations via the extended QMRCGSTAB algorithms,” Computational & Applied Mathematics, vol. 33, no. 2, pp. 349–362, 2014.
32. Z.-Y. Peng, L. Wang, and J.-J. Peng, “The solutions of matrix equation $AX=B$ over a matrix inequality constraint,” SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 2, pp. 554–568, 2012.
33. J. F. Li, Z. Y. Peng, and J. J. Peng, “Bisymmetric solution of the matrix equation $AX=B$ under a matrix inequality constraint,” Mathematica Numerica Sinica, vol. 35, no. 2, pp. 137–150, 2013 (Chinese).
34. K. Y. Zhang and Z. Xu, Numerical Algebra, Science Press, Beijing, China, 2006 (Chinese).
35. J. J. Moreau, “Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires,” Comptes Rendus de l'Académie des Sciences, vol. 255, pp. 238–240, 1962.
36. A. P. Wierzbicki and S. Kurcyusz, “Projection on a cone, penalty functionals and duality theory for problems with inequality constraints in Hilbert space,” SIAM Journal on Control and Optimization, vol. 15, no. 1, pp. 25–56, 1977.
37. P. G. Ciarlet, Introduction to Numerical Linear Algebra and Optimisation, Cambridge University Press, Cambridge, UK, 1989.
