Abstract and Applied Analysis
Volume 2012, Article ID 857284, 18 pages
http://dx.doi.org/10.1155/2012/857284
Research Article

An Iterative Algorithm for the Least Squares Generalized Reflexive Solutions of the Matrix Equations

Feng Yin and Guang-Xin Huang

1School of Science, Sichuan University of Science and Engineering, Zigong 643000, China
2Geomathematics Key Laboratory of Sichuan Province, College of Mathematics, Chengdu University of Technology, Chengdu 610059, China

Received 10 October 2011; Accepted 9 December 2011

Academic Editor: Zhenya Yan

Copyright © 2012 Feng Yin and Guang-Xin Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The generalized coupled Sylvester systems play a fundamental role in wide applications in several areas, such as stability theory, control theory, perturbation analysis, and other fields of pure and applied mathematics. Iterative methods are an important way to solve such systems. In this paper, an iterative algorithm is constructed to solve the minimum Frobenius norm residual problem over generalized reflexive matrices. For any initial generalized reflexive matrix, the generalized reflexive solution can be obtained by the iterative algorithm within finitely many iteration steps in the absence of round-off errors, and the unique least-norm generalized reflexive solution can also be derived when an appropriate initial iterative matrix is chosen. Furthermore, the unique optimal approximate solution to a given matrix in the Frobenius norm can be derived by finding the least-norm generalized reflexive solution of a new corresponding minimum Frobenius norm residual problem with suitably modified right-hand sides. Finally, several numerical examples are given to illustrate that our iterative algorithm is effective.

1. Introduction

A matrix P ∈ R^{n×n} is said to be a generalized reflection matrix if P^T = P and P^2 = I. Let P ∈ R^{m×m} and Q ∈ R^{n×n} be two generalized reflection matrices. A matrix X ∈ R^{m×n} is called generalized reflexive (or generalized anti-reflexive) with respect to the matrix pair (P, Q) if PXQ = X (or PXQ = -X). The set of all m-by-n generalized reflexive matrices with respect to the matrix pair (P, Q) is denoted by R_r^{m×n}(P, Q). Generalized reflexive and anti-reflexive matrices have many special properties and are useful in engineering and scientific computations [1–3].
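To make these definitions concrete, the following short Python sketch (an illustration added here, not part of the original paper; all function names are hypothetical) builds generalized reflection matrices as Householder reflections and checks the generalized reflexive property.

```python
import numpy as np

def householder_reflection(u):
    """Return P = I - 2*u*u^T/(u^T u): symmetric with P^2 = I, hence a
    generalized reflection matrix."""
    u = np.asarray(u, dtype=float).ravel()
    return np.eye(u.size) - 2.0 * np.outer(u, u) / np.dot(u, u)

def is_generalized_reflexive(X, P, Q, tol=1e-12):
    """Check whether X satisfies P X Q = X (generalized reflexive w.r.t. (P, Q))."""
    return np.linalg.norm(P @ X @ Q - X) <= tol * max(1.0, np.linalg.norm(X))

def reflexive_part(X, P, Q):
    """Orthogonal projection of X onto the subspace {X : P X Q = X}."""
    return 0.5 * (X + P @ X @ Q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = householder_reflection(rng.standard_normal(4))   # 4 x 4 reflection matrix
    Q = householder_reflection(rng.standard_normal(3))   # 3 x 3 reflection matrix
    X = reflexive_part(rng.standard_normal((4, 3)), P, Q)
    print(is_generalized_reflexive(X, P, Q))              # True
```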

In this paper, we will consider the minimum Frobenius norm residual problem and its optimal approximation problem as follows.

Problem 1. For given coefficient matrices of compatible dimensions, find a matrix X ∈ R_r^{m×n}(P, Q) that minimizes the Frobenius norm residual of the matrix equation pair.

Problem 2. Let S_E denote the set of the generalized reflexive solutions of Problem 1. For a given matrix, find the element of S_E that is nearest to it in the Frobenius norm.
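Because the displayed formulas were lost in this rendering, the LaTeX block below gives one explicit reading of the two problems in the notation used above; the specific equation pair A1XB1 = C1, A2XB2 = C2 is an assumption inferred from the cited works [16, 18] and should be checked against the original displays.

```latex
% Assumed explicit form of Problems 1 and 2 (reconstruction, not the original display).
\text{Problem 1:}\quad \min_{X \in \mathbb{R}^{m\times n}_r(P,Q)}
  \; \|A_1 X B_1 - C_1\|^2 + \|A_2 X B_2 - C_2\|^2 .

\text{Problem 2:}\quad \min_{\hat{X} \in S_E} \; \|\hat{X} - \bar{X}\| ,
\qquad S_E = \{\, X \in \mathbb{R}^{m\times n}_r(P,Q) : X \text{ solves Problem 1} \,\},
\quad \bar{X} \text{ given}.
```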

Problem 1 plays a fundamental role in wide applications in several areas, such as pole assignment, measurement feedback, and matrix programming problems. Liao and Lei [4] presented some examples to show a motivation for studying Problem 1. Problem 2 arises frequently in experimental design. Here a matrix may be obtained from experiments, but it may not satisfy the structural requirement (being generalized reflexive) and/or the spectral requirement (being a solution of Problem 1). The best estimate is the matrix that satisfies both requirements and is the best approximation of the given matrix in the Frobenius norm.

Least-squares-based iterative algorithms are very important in system identification, parameter estimation, and signal processing, and include the recursive least squares (RLS) and iterative least squares (ILS) methods for solving matrix equations such as the Lyapunov matrix equation, Sylvester matrix equations, and coupled matrix equations. Some related contributions to solving matrix equations and to parameter identification and estimation should be mentioned here. For example, gradient-based iterative (GI) methods [5–9] and least-squares-based iterative methods [5, 9, 10] with high computational efficiency and good stability have been presented for solving (coupled) matrix equations; they are based on the hierarchical identification principle [11–13], which regards the unknown matrix as the system parameter matrix to be identified.

The explicit and numerical solutions of the matrix equation pair have been addressed in a large body of literature. Peng et al. [14] presented iterative methods to obtain the symmetric solutions of the matrix equation pair. Sheng and Chen [15] presented a finite iterative method for the case when the matrix equation pair is consistent. Liao and Lei [4] presented an analytical expression of the least squares solution with minimum norm and an algorithm for the matrix equation pair. Peng et al. [16] presented an efficient algorithm for the least squares reflexive solution. Dehghan and Hajarian [17] presented an iterative algorithm for solving a pair of matrix equations over generalized centrosymmetric matrices. Cai and Chen [18] presented an iterative algorithm for the least squares bisymmetric solutions of the matrix equations. By applying the hierarchical identification principle, Kılıçman and Zhour [19] developed an iterative algorithm for obtaining the weighted least squares solution. Dehghan and Hajarian [20] constructed an iterative algorithm to solve the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices. Wu et al. [21, 22] gave finite iterative solutions to coupled Sylvester-conjugate matrix equations. Wu et al. [23] gave finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns. Jonsson and Kågström [24, 25] proposed recursive blocked algorithms for solving the coupled Sylvester matrix equations and the generalized Sylvester and Lyapunov matrix equations. Very recently, Huang et al. [26] presented finite iterative algorithms for the one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions. Yin et al. [27] presented finite iterative algorithms for the two-sided and generalized coupled Sylvester matrix equations over reflexive solutions. For more studies on these matrix equations, we refer to [1–4, 16, 17, 28–40]. However, the problem of finding the least squares generalized reflexive solution of the matrix equation pair has not been solved.

The following notation is used throughout this paper. Let R^{m×n} denote the set of all m × n real matrices. The superscript T denotes the transpose of a matrix. On R^{m×n}, define the inner product ⟨A, B⟩ = trace(B^T A) for all A, B ∈ R^{m×n}; the associated norm ||A|| = sqrt(⟨A, A⟩) is the Frobenius norm of A. R(A) represents the column space of A. vec(·) represents the vec operator, that is, vec(A) = (a1^T, a2^T, ..., an^T)^T for the matrix A = (a1, a2, ..., an) ∈ R^{m×n}, where ak denotes the kth column of A. A ⊗ B stands for the Kronecker product of the matrices A and B.
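This notation can be checked numerically. The short NumPy sketch below (illustrative, not from the paper) verifies that the trace inner product induces the Frobenius norm and that vec(AXB) = (B^T ⊗ A) vec(X), the identity used later when the matrix problem is vectorized.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# Inner product <A, B> = trace(B^T A) and the induced Frobenius norm.
inner = np.trace(B.T @ A)
fro = np.sqrt(np.trace(A.T @ A))
assert np.isclose(fro, np.linalg.norm(A, "fro"))

def vec(M):
    # vec(.) stacks the columns of a matrix: column-major (Fortran) reshape.
    return M.reshape(-1, order="F")

# Kronecker identity: vec(A X B) = (B^T kron A) vec(X).
X = rng.standard_normal((4, 5))
Bm = rng.standard_normal((5, 2))
assert np.allclose(vec(A @ X @ Bm), np.kron(Bm.T, A) @ vec(X))
print(inner, fro)
```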

This paper is organized as follows. In Section 2, we solve Problem 1 by constructing an iterative algorithm; that is, for an arbitrary initial generalized reflexive matrix, we obtain a solution of Problem 1 within finitely many iteration steps in the absence of round-off errors. The convergence of the algorithm is also proved. If the initial matrix is chosen in a special form built from arbitrary matrices (in particular, the zero matrix is such a choice), we obtain the unique least-norm solution of Problem 1. Then, in Section 3, we give the optimal approximate solution of Problem 2 by finding the least-norm generalized reflexive solution of a corresponding new minimum Frobenius norm residual problem. In Section 4, several numerical examples are given to illustrate the application of our iterative algorithm.

2. Solution of Problem 1

In this section, we first introduce some definitions, lemmas, and theorems that are required for solving Problem 1. Then we present an iterative algorithm to obtain the solution of Problem 1 and prove that it is convergent. The following definitions and lemmas, which are needed for our derivation, come from [41].

Definition 2.1. A set of matrices is said to be convex if λX1 + (1 − λ)X2 belongs to the set whenever X1 and X2 belong to the set and λ ∈ [0, 1]. Let S denote a convex subset of R^{m×n}.

Definition 2.2. A matrix function f : S → R is said to be convex if f(λX1 + (1 − λ)X2) ≤ λf(X1) + (1 − λ)f(X2) for all X1, X2 ∈ S and λ ∈ [0, 1].

Definition 2.3. Let f : S → R be a continuous and differentiable function. The gradient of f is defined as ∇f(X) = ∂f(X)/∂X, the matrix whose (i, j) entry is the partial derivative of f with respect to the (i, j) entry of X.

Lemma 2.4. Let f : S → R be a continuous and differentiable function. Then f is convex on S if and only if f(X2) ≥ f(X1) + ⟨∇f(X1), X2 − X1⟩ for all X1, X2 ∈ S.

Lemma 2.5. Let f : S → R be a continuous, differentiable, and convex function, and suppose that there exists X* in the interior of S such that ∇f(X*) = 0; then f(X*) = min_{X ∈ S} f(X).

Note that the set R_r^{m×n}(P, Q) is unbounded, open, and convex. Denote by f(X) the objective function of Problem 1, as in (2.3); then f is a continuous, differentiable, and convex function on R_r^{m×n}(P, Q). Hence, by applying Lemmas 2.4 and 2.5, we obtain the following lemma.

Lemma 2.6. Let f be defined by (2.3). Then X* satisfies f(X*) = min_{X ∈ R_r^{m×n}(P, Q)} f(X) if and only if ∇f(X*) = 0.

From the Taylor series expansion of f, we have (2.4). On the other hand, by the basic properties of the Frobenius norm and the matrix inner product, we obtain a second expansion of the same quantity, which, after simplification, gives (2.7). By comparing (2.4) with (2.7), we obtain the expression (2.8) for the gradient of f.
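The displays (2.4)–(2.8) cannot be recovered verbatim here, but under the assumed objective f(X) = ||A1XB1 − C1||^2 + ||A2XB2 − C2||^2 the comparison of the two expansions yields the standard gradient formula sketched below; this is a reconstruction consistent with Theorem 2.7, not the paper's exact display.

```latex
% Reconstructed gradient, assuming the objective stated above.
f(X+\Delta) - f(X)
  = 2\,\bigl\langle A_1^{T}(A_1XB_1-C_1)B_1^{T} + A_2^{T}(A_2XB_2-C_2)B_2^{T},\ \Delta \bigr\rangle
    + O\!\left(\|\Delta\|^{2}\right),
\qquad\Longrightarrow\qquad
\nabla f(X) = 2\,A_1^{T}(A_1XB_1-C_1)B_1^{T} + 2\,A_2^{T}(A_2XB_2-C_2)B_2^{T}.
```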

According to Lemma 2.6 and (2.8), we obtain the following theorem.

Theorem 2.7. A matrix X ∈ R_r^{m×n}(P, Q) is a solution of Problem 1 if and only if ∇f(X) = 0.

For the convenience of discussion, we adopt some abbreviating notations. The following algorithm is constructed to solve Problems 1 and 2.

Algorithm 2.8.
Step 1. Input the matrices A1, B1, C1, A2, B2, C2 and the two generalized reflection matrices P and Q.
Step 2. Choose an arbitrary initial matrix X(1) ∈ R_r^{m×n}(P, Q); compute the initial residual R(1) and the initial search direction, and set k := 1.
Step 3. If R(k) = 0, then stop. Else go to Step 4.
Step 4. Compute the next iterate X(k+1), the residual R(k+1), and the search direction.
Step 5. If R(k+1) = 0, then stop. Else, set k := k + 1 and go to Step 4.
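Since the update formulas of Algorithm 2.8 were lost in this rendering, the following Python sketch shows a conjugate-gradient-type least-squares iteration restricted to the generalized reflexive subspace, in the spirit of Steps 1–5. It assumes the objective of Problem 1 has the form min ||A1 X B1 − C1||^2 + ||A2 X B2 − C2||^2 and is an illustrative reconstruction with hypothetical names, not the paper's exact recursion.

```python
import numpy as np

def reflexive_part(X, P, Q):
    """Orthogonal projection onto the subspace {X : P X Q = X}; P and Q are
    generalized reflection matrices (symmetric involutions), as in Section 1."""
    return 0.5 * (X + P @ X @ Q)

def lsqr_generalized_reflexive(A1, B1, C1, A2, B2, C2, P, Q,
                               X0=None, tol=1e-10, maxit=1000):
    """CG-type sketch for  min ||A1 X B1 - C1||^2 + ||A2 X B2 - C2||^2
    (Frobenius norms) over generalized reflexive X, i.e. P X Q = X.
    Illustrative reconstruction in the spirit of Algorithm 2.8, not the
    paper's exact recursion."""
    m, n = A1.shape[1], B1.shape[0]
    X = reflexive_part(np.zeros((m, n)) if X0 is None else X0, P, Q)

    def projected_residual(X):
        # Half of the negative gradient of the objective, projected back onto
        # the generalized reflexive subspace (plays the role of R(k)).
        R1 = C1 - A1 @ X @ B1
        R2 = C2 - A2 @ X @ B2
        return reflexive_part(A1.T @ R1 @ B1.T + A2.T @ R2 @ B2.T, P, Q)

    R = projected_residual(X)      # steepest-descent direction
    D = R.copy()                   # conjugate search direction
    rr = np.sum(R * R)
    for _ in range(maxit):
        if np.sqrt(rr) < tol:      # stopping rule ||R(k)|| < tol
            break
        W1, W2 = A1 @ D @ B1, A2 @ D @ B2
        alpha = rr / (np.sum(W1 * W1) + np.sum(W2 * W2))   # exact line search
        X = X + alpha * D                                  # update the iterate
        R_new = projected_residual(X)
        rr_new = np.sum(R_new * R_new)
        D = R_new + (rr_new / rr) * D                      # conjugation step
        R, rr = R_new, rr_new
    return X
```

Because the initial iterate and all search directions are projected onto the generalized reflexive subspace, every iterate stays generalized reflexive, which is the property Remark 2.9 records for Algorithm 2.8.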

Remark 2.9. Obviously, it can be seen from Algorithm 2.8 that the residuals R(k), the search directions, and the iterates X(k) all belong to R_r^{m×n}(P, Q), where k = 1, 2, ....

Lemma 2.10. Suppose that , then

Proof. The assertion follows by a direct computation. This completes the proof.

Lemma 2.11. For the sequences of residuals and search directions generated by Algorithm 2.8, if there exists a positive integer k such that R(i) ≠ 0 for all i = 1, 2, ..., k, then the residuals R(1), ..., R(k) are mutually orthogonal, that is, (2.14) holds.

Proof. Since ⟨A, B⟩ = ⟨B, A⟩ holds for all matrices A and B in R_r^{m×n}(P, Q), we only need to prove the orthogonality relations for index pairs with i < j ≤ k. We prove the conclusion by induction, and two steps are required.
Step 1. We first establish the relations in (2.15).
To prove this conclusion, we also use induction.
For the base case, by Algorithm 2.8 and Lemma 2.10, we have that (2.15) holds.
Assume that (2.15) holds for some index. Then, for the next index, Lemma 2.10 yields the corresponding relation, and hence (2.15) holds for that index as well. Therefore, (2.15) holds by the principle of induction.
Step 2. Assuming that the orthogonality relations hold up to a given index, we show that they also hold for the next index; this follows from Algorithm 2.8 together with Step 1. By the principle of induction, (2.14) is implied by Steps 1 and 2. This completes the proof.

Lemma 2.12. Assume that X* is an arbitrary solution of Problem 1. Then, for the sequences generated by Algorithm 2.8, the inner product of the kth search direction with X* − X(k) equals the squared norm of the kth residual R(k).

Proof. First, by Algorithm 2.8, it is easy to verify the required identity directly from the recursions of the algorithm. This completes the proof.

Remark 2.13. Lemma 2.12 implies that if the residual R(k) is nonzero, then the kth search direction is also nonzero; thus Algorithm 2.8 does not break down before a solution is found.

Theorem 2.14. For an arbitrary initial matrix X(1) ∈ R_r^{m×n}(P, Q), a solution of Problem 1 can be obtained within finitely many iteration steps in the absence of round-off errors.

Proof. If R(k) ≠ 0, then by Lemma 2.12 the kth search direction is nonzero, and we can compute the next iterate X(k+1) by Algorithm 2.8.
By Lemma 2.11, the nonzero residuals are mutually orthogonal.
Since a set of nonzero, mutually orthogonal matrices in the finite-dimensional matrix space R_r^{m×n}(P, Q) is linearly independent, its cardinality cannot exceed the dimension of that space. Hence R(k) = 0 for some finite k, that is, the corresponding iterate X(k) is a solution of Problem 1. This completes the proof.

To characterize the least-norm generalized reflexive solution of Problem 1, we first introduce the following result.

Lemma 2.15 (see [16, Lemma 2.7]). Suppose that the minimum residual problem min_x ||Ax − b|| has a solution x* ∈ R(A^T); then x* is the unique least Frobenius norm solution of the minimum residual problem.

By Lemma 2.15, the following result can be obtained.

Theorem 2.16. If one chooses the initial iterative matrix in the special form built from arbitrary matrices, in particular the zero matrix, one can obtain the unique least-norm generalized reflexive solution of Problem 1 within finitely many iteration steps in the absence of round-off errors by using Algorithm 2.8.

Proof. By Algorithm 2.8 and Theorem 2.14, if we let the initial matrix be of the special form built from arbitrary matrices, we can obtain a solution of Problem 1 within finitely many iteration steps in the absence of round-off errors, and this solution can be represented in the same special form.
In the sequel, we prove that this solution is precisely the least-norm solution of Problem 1. Consider the minimum residual problem (2.25).
Obviously, the solvability of Problem 1 is equivalent to that of the minimum residual problem (2.25), and the least-norm solution of Problem 1 must be the least-norm solution of the minimum residual problem (2.25).
In order to prove that the obtained solution is the least-norm solution of Problem 1, it is therefore enough to prove that it is the least-norm solution of the minimum residual problem (2.25). Applying the vec operator, the minimum residual problem (2.25) is equivalent to the vectorized minimum residual problem (2.26). Noting that the vectorized solution lies in the column space of the transposed coefficient matrix, by Lemma 2.15 we see that it is the least-norm solution of the minimum residual problem (2.26). Since the vec operator is an isomorphism, the obtained solution is the unique least-norm solution of the minimum residual problem (2.25), and furthermore it is the unique least-norm solution of Problem 1.
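The equivalence between (2.25) and its vectorized counterpart (2.26) rests on the Kronecker identity recalled below, together with the fact that the vec operator is a linear isometry. This block restates the standard identity under the assumed form of Problem 1 and is not the original display.

```latex
% Standard vectorization identity underlying the passage from (2.25) to (2.26).
\operatorname{vec}(A X B) = (B^{T}\otimes A)\,\operatorname{vec}(X),
\qquad
\|A_1XB_1-C_1\|^2 + \|A_2XB_2-C_2\|^2
  = \left\| \begin{pmatrix} B_1^{T}\otimes A_1 \\ B_2^{T}\otimes A_2 \end{pmatrix}
      \operatorname{vec}(X)
    - \begin{pmatrix} \operatorname{vec}(C_1)\\ \operatorname{vec}(C_2)\end{pmatrix} \right\|^2 .
```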

3. Solution of Problem 2

Since the solution set S_E of Problem 1 is not empty, for any X ∈ S_E we have (3.1).

By translating the unknown, Problem 2 is equivalent to finding the least-norm generalized reflexive solution of a new corresponding minimum residual problem (3.2).

By using Algorithm 2.8 with an initial iterative matrix of the special form, in particular the zero matrix, we can obtain the unique least-norm generalized reflexive solution of the minimum residual problem (3.2); the generalized reflexive solution of Problem 2 is then recovered from it by reversing the translation.
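Written out explicitly under the same assumption on the form of Problem 1 (a reconstruction, not the original display), the translation behind (3.2) can be taken as follows, where the bar denotes the given matrix of Problem 2 and its generalized reflexive part is used because the given matrix itself need not be generalized reflexive.

```latex
% Assumed explicit reduction of Problem 2 to a problem of type (3.2).
\widetilde{X} = X - \tfrac{1}{2}\bigl(\bar{X} + P\bar{X}Q\bigr)
  \in \mathbb{R}^{m\times n}_r(P,Q),
\qquad
\widetilde{C}_i = C_i - \tfrac{1}{2}\,A_i\bigl(\bar{X} + P\bar{X}Q\bigr)B_i,
\quad i = 1, 2.
```

With this substitution, Problem 2 amounts to finding the least-norm generalized reflexive solution of the residual problem with right-hand sides C̃1, C̃2, and the solution of Problem 2 is obtained by adding back (X̄ + PX̄Q)/2.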

4. Numerical Examples

In this section, we show several numerical examples to illustrate our results. All tests are performed in MATLAB 7.8.

Example 4.1. Consider the generalized reflexive solution of Problem 1 for a given set of coefficient matrices and generalized reflection matrices P and Q.
We will find the least squares generalized reflexive solution of the matrix equation pair by using Algorithm 2.8. Because of the influence of rounding errors, the residual R(k) is usually nonzero during the iteration. For any chosen positive number ε, however small, we stop the iteration whenever ||R(k)|| < ε, and the current iterate X(k) is regarded as the least squares generalized reflexive solution of the matrix equation pair.
Applying Algorithm 2.8 with the chosen initial matrix, we obtain the unique least Frobenius norm generalized reflexive solution of Problem 1. The convergence curve for the Frobenius norm of the residual is shown in Figure 1.

Figure 1: Convergence curve for the Frobenius norm of the residual for Example 4.1.
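As a stand-in for the omitted test data, the snippet below shows how the sketch from Section 2 would be driven with the stopping rule ||R(k)|| < ε described above; the randomly generated matrices and the tolerance here are hypothetical and are not the data of Example 4.1. It assumes the helper functions householder_reflection, reflexive_part, and lsqr_generalized_reflexive from the earlier sketches are in scope.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 6, 5, 4, 3
P = householder_reflection(rng.standard_normal(m))
Q = householder_reflection(rng.standard_normal(n))

A1, B1 = rng.standard_normal((p, m)), rng.standard_normal((n, q))
A2, B2 = rng.standard_normal((p, m)), rng.standard_normal((n, q))
# Build right-hand sides from a known generalized reflexive matrix so that the
# least squares residual of the constructed problem is zero.
X_true = reflexive_part(rng.standard_normal((m, n)), P, Q)
C1, C2 = A1 @ X_true @ B1, A2 @ X_true @ B2

X = lsqr_generalized_reflexive(A1, B1, C1, A2, B2, C2, P, Q, tol=1e-10)
print(np.linalg.norm(A1 @ X @ B1 - C1) + np.linalg.norm(A2 @ X @ B2 - C2))
```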

Example 4.2. Consider the least-norm generalized reflexive solution of the minimum residual problem in Example 4.1. By using Algorithm 2.8 with an initial matrix of the special form required by Theorem 2.16, we obtain the unique least Frobenius norm generalized reflexive solution of Problem 1.

The convergence curve for the Frobenius norm of the residual is shown in Figure 2.

Figure 2: Convergence curve for the Frobenius norm of the residual for Example 4.2.

Example 4.3. Let S_E denote the set of all generalized reflexive solutions of Problem 1 in Example 4.1. For a given matrix, we will find the element of S_E that is nearest to it in the Frobenius norm, that is, the optimal approximate solution to the given matrix in S_E.

By the method described in Section 3, we can obtain the least-norm generalized reflexive solution of the minimum residual problem (3.2) by choosing a suitable initial iteration matrix, and from it the optimal approximate solution of Problem 2.

The convergence curve for the Frobenius norm of the residual is shown in Figure 3.

Figure 3: Convergence curve for the Frobenius norm of the residual for Example 4.3.

5. Conclusion

This paper solves the minimum Frobenius norm residual problem and its optimal approximation problem over generalized reflexive matrices by constructing an iterative algorithm: for an arbitrary initial generalized reflexive matrix, we obtain a solution of Problem 1 within finitely many iteration steps in the absence of round-off errors. The convergence of the algorithm is also proved. If the initial matrix is chosen in the special form built from arbitrary matrices, in particular as the zero matrix, we obtain the unique least-norm solution of the minimum Frobenius norm residual problem. We then give the generalized reflexive solution of the optimal approximation problem by finding the least-norm generalized reflexive solution of a corresponding new minimum Frobenius norm residual problem.

Several numerical examples are given to confirm our theoretical results, and they show that our iterative algorithm is effective. We also note that, for minimum Frobenius norm residual problems with large and dense coefficient matrices, Algorithm 2.8 may require more iteration steps than the theoretical bound because of computational errors.

Acknowledgments

This work was partially supported by the Research Fund Project (Natural Science 2010XJKYL018) of Sichuan University of Science and Engineering, by the Open Fund of the Geomathematics Key Laboratory of Sichuan Province (scsxdz2011005), and by the Key Project of Natural Science of the Educational Department in Sichuan Province (102A007).

References

1. F. Li, X. Hu, and L. Zhang, “The generalized reflexive solution for a class of matrix equations AX=B; XC=D,” Acta Mathematica Scientia Series B, vol. 28, no. 1, pp. 185–193, 2008.
2. J.-C. Zhang, X.-Y. Hu, and L. Zhang, “The (P,Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX=B,” Applied Mathematics and Computation, vol. 209, no. 2, pp. 254–258, 2009.
3. B. Zhou, Z.-Y. Li, G.-R. Duan, and Y. Wang, “Weighted least squares solutions to general coupled Sylvester matrix equations,” Journal of Computational and Applied Mathematics, vol. 224, no. 2, pp. 759–776, 2009.
4. A.-P. Liao and Y. Lei, “Least-squares solution with the minimum-norm for the matrix equation (AXB,GXH)=(C,D),” Computers & Mathematics with Applications, vol. 50, no. 3-4, pp. 539–549, 2005.
5. F. Ding, P. X. Liu, and J. Ding, “Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle,” Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
6. F. Ding and T. Chen, “On iterative solutions of general coupled matrix equations,” SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
7. L. Xie, J. Ding, and F. Ding, “Gradient based iterative solutions for general linear matrix equations,” Computers & Mathematics with Applications, vol. 58, no. 7, pp. 1441–1448, 2009.
8. F. Ding and T. Chen, “Gradient based iterative algorithms for solving a class of matrix equations,” IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
9. J. Ding, Y. Liu, and F. Ding, “Iterative solutions to matrix equations of the form AiXBi=Fi,” Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3500–3507, 2010.
10. F. Ding and T. Chen, “Iterative least-squares solutions of coupled Sylvester matrix equations,” Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
11. F. Ding and T. Chen, “Hierarchical gradient-based identification of multivariable discrete-time systems,” Automatica, vol. 41, no. 2, pp. 315–325, 2005.
12. F. Ding and T. Chen, “Hierarchical least squares identification methods for multivariable systems,” IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 397–402, 2005.
13. F. Ding and T. Chen, “Hierarchical identification of lifted state-space models for general dual-rate systems,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, no. 6, pp. 1179–1187, 2005.
14. Y.-X. Peng, X.-Y. Hu, and L. Zhang, “An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations,” Applied Mathematics and Computation, vol. 183, no. 2, pp. 1127–1137, 2006.
15. X. Sheng and G. Chen, “A finite iterative method for solving a pair of linear matrix equations (AXB,CXD)=(E,F),” Applied Mathematics and Computation, vol. 189, no. 2, pp. 1350–1358, 2007.
16. Z.-H. Peng, X.-Y. Hu, and L. Zhang, “An efficient algorithm for the least-squares reflexive solution of the matrix equation A1XB1=C1, A2XB2=C2,” Applied Mathematics and Computation, vol. 181, no. 2, pp. 988–999, 2006.
17. M. Dehghan and M. Hajarian, “An iterative algorithm for solving a pair of matrix equations AYB=E, CYD=F over generalized centro-symmetric matrices,” Computers & Mathematics with Applications, vol. 56, no. 12, pp. 3246–3260, 2008.
18. J. Cai and G. Chen, “An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A1XB1=C1; A2XB2=C2,” Mathematical and Computer Modelling, vol. 50, no. 7-8, pp. 1237–1244, 2009.
19. A. Kılıçman and Z. A. A. A. Zhour, “Vector least-squares solutions for coupled singular matrix equations,” Journal of Computational and Applied Mathematics, vol. 206, no. 2, pp. 1051–1069, 2007.
20. M. Dehghan and M. Hajarian, “An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices,” Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010.
21. A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, “Iterative solutions to coupled Sylvester-conjugate matrix equations,” Computers & Mathematics with Applications, vol. 60, no. 1, pp. 54–66, 2010.
22. A.-G. Wu, B. Li, Y. Zhang, and G.-R. Duan, “Finite iterative solutions to coupled Sylvester-conjugate matrix equations,” Applied Mathematical Modelling, vol. 35, no. 3, pp. 1065–1080, 2011.
23. A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, “Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns,” Mathematical and Computer Modelling, vol. 52, no. 9-10, pp. 1463–1478, 2010.
24. I. Jonsson and B. Kågström, “Recursive blocked algorithm for solving triangular systems. I. One-sided and coupled Sylvester-type matrix equations,” ACM Transactions on Mathematical Software, vol. 28, no. 4, pp. 392–415, 2002.
25. I. Jonsson and B. Kågström, “Recursive blocked algorithm for solving triangular systems. II. Two-sided and generalized Sylvester and Lyapunov matrix equations,” ACM Transactions on Mathematical Software, vol. 28, no. 4, pp. 416–435, 2002.
26. G. X. Huang, N. Wu, F. Yin, Z. L. Zhou, and K. Guo, “Finite iterative algorithms for solving generalized coupled Sylvester systems—part I: one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions,” Applied Mathematical Modelling, vol. 36, no. 4, pp. 1589–1603, 2012.
27. F. Yin, G. X. Huang, and D. Q. Chen, “Finite iterative algorithms for solving generalized coupled Sylvester systems—part II: two-sided and generalized coupled Sylvester matrix equations over reflexive solutions,” Applied Mathematical Modelling, vol. 36, no. 4, pp. 1604–1614, 2012.
28. M. Dehghan and M. Hajarian, “The general coupled matrix equations over generalized bisymmetric matrices,” Linear Algebra and Its Applications, vol. 432, no. 6, pp. 1531–1552, 2010.
29. M. Dehghan and M. Hajarian, “Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations,” Applied Mathematical Modelling, vol. 35, no. 7, pp. 3285–3300, 2011.
30. G.-X. Huang, F. Yin, and K. Guo, “An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C,” Journal of Computational and Applied Mathematics, vol. 212, no. 2, pp. 231–244, 2008.
31. Z.-Y. Peng, “New matrix iterative methods for constraint solutions of the matrix equation AXB=C,” Journal of Computational and Applied Mathematics, vol. 235, no. 3, pp. 726–735, 2010.
32. Z.-Y. Peng and X.-Y. Hu, “The reflexive and anti-reflexive solutions of the matrix equation AX=B,” Linear Algebra and Its Applications, vol. 375, pp. 147–155, 2003.
33. Z.-H. Peng, X.-Y. Hu, and L. Zhang, “An efficient algorithm for the least-squares reflexive solution of the matrix equation A1XB1=C1, A2XB2=C2,” Applied Mathematics and Computation, vol. 181, no. 2, pp. 988–999, 2006.
34. X. Sheng and G. Chen, “An iterative method for the symmetric and skew symmetric solutions of a linear matrix equation AXB+CYD=E,” Journal of Computational and Applied Mathematics, vol. 233, no. 11, pp. 3030–3040, 2010.
35. Q.-W. Wang, J.-H. Sun, and S.-Z. Li, “Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra,” Linear Algebra and Its Applications, vol. 353, pp. 169–182, 2002.
36. Q.-W. Wang, “A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity,” Linear Algebra and Its Applications, vol. 384, pp. 43–54, 2004.
37. Q.-W. Wang, “Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations,” Computers & Mathematics with Applications, vol. 49, no. 5-6, pp. 641–650, 2005.
38. A.-G. Wu, G.-R. Duan, and Y. Xue, “Kronecker maps and Sylvester-polynomial matrix equations,” IEEE Transactions on Automatic Control, vol. 52, no. 5, pp. 905–910, 2007.
39. A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, “Closed-form solutions to Sylvester-conjugate matrix equations,” Computers & Mathematics with Applications, vol. 60, no. 1, pp. 95–111, 2010.
40. Y. Yuan and H. Dai, “Generalized reflexive solutions of the matrix equation AXB=D and an associated optimal approximation problem,” Computers & Mathematics with Applications, vol. 56, no. 6, pp. 1643–1649, 2008.
41. A. Antoniou and W.-S. Lu, Practical Optimization: Algorithms and Engineering Applications, Springer, New York, NY, USA, 2007.