Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations
By analyzing the eigenvalues of the related matrices, this paper gives a convergence analysis of the least squares based iterative algorithm for solving the coupled Sylvester matrix equations. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester matrix equation AXB + CXD = F. The analysis demonstrates that if the matrix equation has a unique solution, then the least squares based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.
Matrix equations arise in systems and control, for example, the Lyapunov matrix equation and the Riccati equation. Solving these matrix equations has become a major field of matrix computations [1–4]. The main topics of this field are the decompositions and transformations of matrices, eigenvalues and eigenvectors, and Krylov subspace methods [5–8]. Other topics include algorithms and their convergence analysis. An algorithm provides a set of operational steps by which a solution of a matrix equation can be found within finitely many steps to a given error bound [9–11]. The convergence analysis offers more details about an algorithm, and these details often point to new research areas [12–15].
The direct method and the indirect method are the two main approaches to solving matrix equations [16–18]. With the growth of computational requirements and the development of matrix theory, the indirect, or iterative, method has become the main approach to solving matrix equations [19, 20].
Iterative methods are also active in other engineering problems. In particular, they can be used to identify systems [21–23] and to estimate the parameters of systems [24–28]. For example, iterative methods are useful in the identification [29–32] and parameter estimation [33–36] of linear and nonlinear systems [37–46].
The Jacobi and Gauss-Seidel iterative methods have been discussed in the literature. By extending them, Ding and his coworkers recently presented a large family of least squares based iterative methods for solving matrix equations [47–49]. It has been proved that these least squares based iterative solutions always converge fast to the exact ones as long as the unique solutions exist, but the range of the convergence factor is still open. Motivated by the importance of this algorithm, we develop a new way to prove the convergence of the least squares based iterative algorithm for the coupled Sylvester matrix equations. With the new proof, we obtain the optimal convergence factor of this iterative algorithm and extend the algorithm to solve the generalized Sylvester matrix equation AXB + CXD = F. The algorithm in this paper can be extended to other more general and complex matrix equations [50–52].
The paper is organized as follows. Section 2 gives some preliminaries. Section 3 presents a new proof of the convergence of the least squares based iterative algorithm for solving the coupled Sylvester matrix equations. Section 4 adapts this algorithm to solve the generalized Sylvester matrix equation AXB + CXD = F. Section 5 gives an example to illustrate the effectiveness of the proposed results. Finally, we offer some concluding remarks in Section 6.
2. Basic Preliminaries
Some symbols and lemmas are introduced first. I_n is the identity matrix of size n × n; I is an identity matrix of appropriate size. λ[A] denotes the eigenvalue set of the matrix A, and det(A) denotes the matrix determinant. ||A|| denotes the Frobenius norm of A and is defined by ||A||^2 = tr[A^T A]. For an m × n matrix A = [a_1, a_2, …, a_n], vec(A) := [a_1^T, a_2^T, …, a_n^T]^T denotes the mn-dimensional vector formed by stacking the columns of A, and A ⊗ B represents the Kronecker product of the matrices A and B. A formula involving the vec-operator and the Kronecker product is vec(AXB) = (B^T ⊗ A) vec(X).
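The vec–Kronecker identity can be checked numerically. The following minimal sketch (randomly generated matrices, column-stacking vec) is an illustration only and is not part of the original development:

```python
import numpy as np

rng = np.random.default_rng(0)

def vec(M):
    # Column-stacking vec operator: stack the columns of M into one long vector.
    return M.flatten(order="F")

A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

# vec(AXB) = (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```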
If the matrix M^T M is invertible, then M (M^T M)^{-1} M^T is symmetric and idempotent. To be more specific, we have the following lemma.
Lemma 1. For a symmetric matrix S ∈ R^{n×n}, there exists an orthogonal matrix Q such that Q^T S Q = diag{λ_1, λ_2, …, λ_n}, where λ_1, λ_2, …, λ_n are the eigenvalues of S. Moreover, if S is also idempotent, then each λ_i is either 0 or 1.
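The spectral decomposition in Lemma 1 can be illustrated numerically; the sketch below uses NumPy's `eigh`, which returns the eigenvalues together with an orthogonal eigenvector matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
S = M + M.T                          # a symmetric matrix

lam, Q = np.linalg.eigh(S)           # eigenvalues and orthonormal eigenvectors
assert np.allclose(Q.T @ Q, np.eye(5))          # Q is orthogonal
assert np.allclose(Q.T @ S @ Q, np.diag(lam))   # Q^T S Q is diagonal
```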
3. The Coupled Sylvester Matrix Equations
In this section, we will prove the convergence of the least squares based iterative algorithm for solving the coupled Sylvester matrix equations (4), in which the coefficient matrices are known and the two unknown matrices are to be determined. By using the hierarchical identification principle, the least squares based iterative algorithm for solving (4) is given by (5). To initialize the algorithm, the two initial iterates are taken to be small real matrices, for example, a small multiple of the matrix whose elements are all 1.
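As a hedged illustration of this style of algorithm, the sketch below applies a plain gradient iteration to coupled Sylvester equations of the commonly studied form AX + YB = C, DX + YE = F. Both this assumed form and the conservative step size rule are illustrative choices; this is not the authors' least squares based iteration (5):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Assumed coupled form: A X + Y B = C, D X + Y E = F, with X and Y unknown.
# Near-identity coefficients keep the stacked linear system well conditioned.
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
D = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))

Xtrue = rng.standard_normal((n, n))
Ytrue = rng.standard_normal((n, n))
C = A @ Xtrue + Ytrue @ B
F = D @ Xtrue + Ytrue @ E

# Stacked coefficient matrix of the linear system in (vec X, vec Y);
# the conservative step size 1/||M||_2^2 guarantees convergence.
M = np.block([[np.kron(np.eye(n), A), np.kron(B.T, np.eye(n))],
              [np.kron(np.eye(n), D), np.kron(E.T, np.eye(n))]])
mu = 1.0 / np.linalg.norm(M, 2) ** 2

X = np.zeros((n, n))
Y = np.zeros((n, n))
for _ in range(5000):
    R1 = C - A @ X - Y @ B               # residual of the first equation
    R2 = F - D @ X - Y @ E               # residual of the second equation
    X = X + mu * (A.T @ R1 + D.T @ R2)   # gradient step for X
    Y = Y + mu * (R1 @ B.T + R2 @ E.T)   # gradient step for Y

err = np.linalg.norm(X - Xtrue) + np.linalg.norm(Y - Ytrue)
```

Under the unique-solution condition, this plain gradient iteration converges for any initial values, mirroring the convergence statement that Theorem 2 below makes for the least squares based iteration.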
Theorem 2. If (4) has a unique solution pair, then for any initial values the iterative solutions given by iteration (5) converge to the exact solutions; that is, the error matrices converge to zero. In this case, the optimal convergence factor is 1.
Proof. Define two error matrices
Using (4) and (5) gives
Expanding (9) gives
Simplifying (10) gives
Using the formula gives
Using these symbols, a compact form of (12) is
The proof will be completed by showing that the eigenvalues of the iteration matrix lie inside the unit circle. Let λ be an eigenvalue of this matrix; we will show that its modulus is less than 1.
Consider the following characteristic polynomial of the iteration matrix: It is not hard to verify by direct calculation. Since it follows from the formula that On the other hand, since (4) has a unique solution, the matrix is invertible. A determinant expansion shows that So it gives . Since expanding it gives Setting , it can be rewritten as According to Lemma 1, there exists an orthogonal matrix such that Using (25), this can be manipulated to get If , then there exist nonzero vectors satisfying , which can be written as Since and are orthogonal matrices, it follows that where and . Then from (28), it gives Using (26) gives According to (29) and (30), (27) can be manipulated to get So it gives Suppose that , where , . By the Schur decomposition theorem, there exists a decomposition , where is an orthogonal matrix, , and is a strictly upper triangular matrix. Then from (23), it gives It follows that . Since , we obtain , . Thus, .
Next, we determine the optimal convergence factor. Taking the absolute values of these eigenvalues, the optimal convergence factor satisfies (35). Solving (35) gives the optimal convergence factor 1. The proof is completed.
4. The Generalized Sylvester Matrix Equation AXB + CXD = F
In this section, we use iteration (5) to solve the generalized Sylvester matrix equation. Consider the equation AXB + CXD = F, where A, B, C, D, and F are given constant matrices and X is the unknown matrix to be determined. The following conclusion is obvious.
Equation (36) has a unique solution if and only if the matrix B^T ⊗ A + D^T ⊗ C is invertible. In this case, the unique solution is given by vec(X) = (B^T ⊗ A + D^T ⊗ C)^{-1} vec(F).
By a suitable choice of the coefficient matrices, (36) can be equivalently expressed in the coupled form (38), which has a unique solution under condition (39). It is easy to show that (38) is then equivalent to (36) and that (39) is equivalent to (37); that is, we have the corresponding determinant identity (40). According to Theorem 2, (38) can be solved by iteration (5), and hence (36) can be solved.
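The characterization above can be checked directly. The following minimal sketch, an illustration assuming the equation AXB + CXD = F named in the section title, forms S = B^T ⊗ A + D^T ⊗ C, verifies the unique-solution condition det(S) ≠ 0, and recovers X from vec(X) = S^{-1} vec(F):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

def vec(M):
    # Column-stacking vec operator.
    return M.flatten(order="F")

A, B, C, D = [rng.standard_normal((n, n)) for _ in range(4)]
Xtrue = rng.standard_normal((n, n))
F = A @ Xtrue @ B + C @ Xtrue @ D

# Kronecker coefficient matrix of the linear system in vec(X).
S = np.kron(B.T, A) + np.kron(D.T, C)
assert abs(np.linalg.det(S)) > 1e-12       # unique-solution condition

# Solve S vec(X) = vec(F) and restore the matrix shape.
X = np.linalg.solve(S, vec(F)).reshape((n, n), order="F")
assert np.allclose(X, Xtrue)
```

For large matrices this direct Kronecker solve is expensive, which is precisely what motivates the iterative approach of the paper.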
5. A Numerical Example
In this section, an example is offered to illustrate the convergence of the proposed iterative algorithm.
Example 1. Consider the coupled Sylvester matrix equations in the form of (4) with
The unique solution is found to be
Taking small matrices as the initial iterative values and using iteration (5) to compute the iterative solutions, the iterative values are shown in Table 1 together with the relative error. The effect of changing the convergence factor is illustrated in Figure 1.
From Table 1 and Figure 1, we find that the relative error goes to zero as the number of iterations increases, which shows that the proposed iterative algorithm is effective. In addition, Figure 1 shows that the optimal convergence factor is 1, which indicates that the result on the optimal convergence factor suggested in this paper is correct.
This paper proved the convergence of the least squares based iterative algorithm for the coupled Sylvester matrix equations, and the proof determined the range of the convergence factor and the optimal convergence factor. The suggested algorithm can also be used to solve the generalized Sylvester matrix equation AXB + CXD = F. An example indicated that the iterative solution given by the least squares based iterative algorithm converges fast to the exact solution under proper conditions.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (no. 6110218).
References
X. L. Luan, P. Shi, and F. Liu, “Stabilization of networked control systems with random delays,” IEEE Transactions on Industrial Electronics, vol. 58, no. 9, pp. 4323–4330, 2011.
D. Q. Zhu, J. Bai, and Q. Liu, “Multi-fault diagnosis method for sensor system based on PCA,” Sensors, vol. 10, no. 1, pp. 241–253, 2010.
D. Q. Zhu, J. Liu, and S. X. Yang, “Particle swarm optimization approach to thruster fault-tolerant control of unmanned underwater vehicles,” International Journal of Robotics and Automation, vol. 26, no. 3, pp. 426–432, 2011.
Y. J. Liu, Y. S. Xiao, and X. L. Zhao, “Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model,” Applied Mathematics and Computation, vol. 215, no. 4, pp. 1477–1483, 2009.
D. Q. Zhu, Q. Liu, and Z. Hu, “Fault-tolerant control algorithm of the manned submarine with multi-thruster based on quantum-behaved particle swarm optimisation,” International Journal of Control, vol. 84, no. 11, pp. 1817–1829, 2011.
D. Q. Zhu, Y. Zhao, and M. Z. Yan, “A bio-inspired neurodynamics based backstepping path-following control of an AUV with ocean current,” International Journal of Robotics and Automation, vol. 27, no. 3, pp. 280–287, 2012.
D. Q. Zhu, H. Huang, and S. X. Yang, “Dynamic task assignment and path planning of multi-AUV system based on an improved self-organizing map and velocity synthesis method in 3D underwater workspace,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 504–514, 2013.