Abstract and Applied Analysis, Volume 2014 (2014), Article ID 649524, 10 pages. http://dx.doi.org/10.1155/2014/649524
Research Article

## Iterative Solutions of a Set of Matrix Equations by Using the Hierarchical Identification Principle

Huamin Zhang

Department of Mathematics and Physics, Bengbu College, Bengbu 233030, China

Received 13 January 2014; Revised 29 March 2014; Accepted 30 March 2014; Published 4 May 2014

Copyright © 2014 Huamin Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper is concerned with the iterative solution of a class of real coupled matrix equations. By using the hierarchical identification principle, a gradient-based iterative algorithm is constructed to solve the real coupled matrix equations $AXB=F$ and $CXD=G$. The range of the convergence factor is derived to guarantee that the iterative algorithm converges for any initial value. The analysis indicates that if the coupled matrix equations have a unique solution, then the iterative solution converges fast to the exact one for any initial value under proper conditions. A numerical example is provided to illustrate the effectiveness of the proposed algorithm.

#### 1. Introduction

For systems with certain parameters, controllability and stability are important topics worth studying [1–3]. If the parameters of a system are uncertain, then how to identify these parameters becomes an important issue. To identify the parameters of large-scale systems, the hierarchical identification principle was proposed in [4–6]. Hierarchical gradient-based and hierarchical least-squares-based identification methods were presented for multivariable systems [7]. Applications of these effective strategies include identification and adaptive control for dual-rate systems [8] and Hammerstein nonlinear systems [9].

Many publications have studied the solutions of matrix equations from different points of view [10–14]. Zhou et al. studied the positive definite solutions of the nonlinear matrix equation $X+{A}^{H}{\bar{X}}^{-1}A=I$ [15]; Li et al. discussed a class of iterative methods for the generalized Sylvester equation [16]; the Riccati equation and a class of coupled transpose matrix equations were investigated in [17, 18].

Unlike the above methods, and in the same spirit as the Jacobi and Gauss-Seidel iterations, Ding and Chen proposed the gradient-based and least-squares-based iterations for solving linear matrix equations such as $AXB=F$ and the coupled Sylvester matrix equations [19, 20], together with a large family of related iterations. These iterations include the gradient iteration, the least squares iteration, and some classical iterations as special cases [21, 22]. By using the hierarchical identification principle, gradient-based iterative algorithms were derived for solving different real matrix equations, such as the generalized Sylvester matrix equations and general linear matrix equations [23–25].

Ding’s strategy has received much attention. With the real representations of complex matrices as tools, Wu et al. applied Ding’s strategy to solve the extended Sylvester-conjugate matrix equations [26], the complex conjugate and transpose matrix equations [27], and the extended coupled Sylvester-conjugate matrix equations [28]; Song and Chen presented a gradient-based iterative algorithm for solving the extended Sylvester-conjugate transpose matrix equations [29].

Different from the various single real matrix equations in [21–25] and the complex matrix equations in [26–29], this paper discusses the real coupled matrix equations $AXB=F$ and $CXD=G$ by using the hierarchical identification principle and proposes a gradient-based iterative algorithm. Moreover, a gradient-based iteration is presented for solving more general real coupled matrix equations. We prove that the iterative solution always converges to the exact one for any initial value, provided that these kinds of real coupled matrix equations have a unique solution.

Iterative methods can also be applied to nonlinear system identification [30, 31]. The methods proposed in this paper can be combined with the iterative identification methods [32–35], the auxiliary model identification methods [36–39], the multi-innovation identification methods [40–44], and the two-stage or multistage identification methods [45] to study identification problems for other linear systems [46–50] or nonlinear systems [51–54] and other systems with colored noises [55–58].

This paper is organized as follows. Section 2 introduces some notation and basic lemmas. Section 3 derives the gradient-based iterative algorithm for solving the matrix equations $AXB=F$ and $CXD=G$. Section 4 discusses the iteration for general real coupled matrix equations. Section 5 presents a numerical example to illustrate the effectiveness of the proposed algorithm. Finally, Section 6 offers some concluding remarks.

#### 2. Notations and Basic Lemmas

Some notation and lemmas are introduced first. The symbol ${A}^{T}$ represents the transpose of $A$. For a square matrix $A$, $\lambda_{\max}[A]$ denotes the maximum eigenvalue of $A$. $\|A\|$ denotes the norm of $A$ and is defined by ${\|A\|}^{2}=\mathrm{tr}[{A}^{T}A]$. For an $m\times n$ matrix $A=[{a}_{1},{a}_{2},\dots,{a}_{n}]$, the vec-operator is defined as $\mathrm{vec}(A)=[{a}_{1}^{T},{a}_{2}^{T},\dots,{a}_{n}^{T}]^{T}$. If $A=[{a}_{ij}]\in{\mathbb{R}}^{m\times n}$ and $B\in{\mathbb{R}}^{p\times q}$, then their Kronecker product is defined as $A\otimes B=[{a}_{ij}B]\in{\mathbb{R}}^{mp\times nq}$.

The relationship between the vec-operator and the Kronecker product can be expressed as the following lemma [59].

Lemma 1. If $A\in{\mathbb{R}}^{m\times n}$, $X\in{\mathbb{R}}^{n\times p}$, and $B\in{\mathbb{R}}^{p\times q}$, then $\mathrm{vec}(AXB)=({B}^{T}\otimes A)\,\mathrm{vec}(X)$.
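As a quick numerical check of Lemma 1 (with arbitrary test matrices, not data from this paper), note that the vec-operator corresponds to Fortran-order flattening in NumPy:

```python
import numpy as np

# Numerical check of Lemma 1: vec(AXB) = (B^T kron A) vec(X).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in R^{3x4}
X = rng.standard_normal((4, 5))   # X in R^{4x5}
B = rng.standard_normal((5, 2))   # B in R^{5x2}

def vec(M):
    """Column-stacking vec-operator (Fortran-order flattening)."""
    return M.flatten(order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```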

The gradient-based iterative algorithm for solving the matrix equation $AXB=F$ is listed as follows [22].

Lemma 2. For $AXB=F$, if $A$ is a full-column rank matrix and $B$ is a full-row rank matrix, then the iterative solution $X(k)$ given by the following gradient-based iterative algorithm converges to the exact solution $X$ (i.e., $X(k)\to X$) for any initial value [22]: $X(k)=X(k-1)+\mu {A}^{T}[F-AX(k-1)B]{B}^{T}$, where the convergence factor satisfies $0<\mu<2/(\lambda_{\max}[A{A}^{T}]\,\lambda_{\max}[{B}^{T}B])$.
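A minimal sketch of the iteration of Lemma 2, assuming the update rule and convergence-factor bound stated above; the concrete matrices are illustrative, not taken from the paper:

```python
import numpy as np

# Gradient-based iteration for A X B = F (Lemma 2 sketch):
#   X(k) = X(k-1) + mu * A^T [F - A X(k-1) B] B^T,
#   0 < mu < 2 / (lambda_max[A A^T] * lambda_max[B^T B]).
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # full column rank
B = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])    # full row rank
X_true = np.array([[1.0, -1.0], [2.0, 0.5]])
F = A @ X_true @ B

# One convergence factor inside the admissible range.
mu = 1.0 / (np.linalg.eigvalsh(A @ A.T).max() *
            np.linalg.eigvalsh(B.T @ B).max())

X = np.zeros_like(X_true)            # any initial value works
for _ in range(500):
    X = X + mu * A.T @ (F - A @ X @ B) @ B.T

print(np.linalg.norm(X - X_true))    # converges to zero (machine precision)
```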

#### 3. The Coupled Matrix Equations

In this section, we consider the following real coupled matrix equations:
$$AXB=F,\qquad CXD=G,\qquad (6)$$
where $A$, $B$, $C$, $D$, $F$, and $G$ are the given matrices of appropriate dimensions, and $X$ is the unknown matrix to be determined.

##### 3.1. The Exact Solution

According to Lemma 1, we rewrite (6) as $S\,\mathrm{vec}(X)=\begin{bmatrix}\mathrm{vec}(F)\\ \mathrm{vec}(G)\end{bmatrix}$, where $S=\begin{bmatrix}{B}^{T}\otimes A\\ {D}^{T}\otimes C\end{bmatrix}$ is a block matrix. The exact solution of (6) can be given by the following theorem.

Theorem 3. Equation (6) has a unique solution if and only if $S$ is a full-column rank matrix; in this case, the unique solution satisfies $\mathrm{vec}(X)=({S}^{T}S)^{-1}{S}^{T}\begin{bmatrix}\mathrm{vec}(F)\\ \mathrm{vec}(G)\end{bmatrix}$. If $F=0$ and $G=0$, then (6) has the unique solution $X=0$.
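Theorem 3 can be exercised numerically: stack the two equations via the vec-operator and solve the normal equations. The data below are hypothetical, generated so that the coupled system has a known solution:

```python
import numpy as np

# Illustration of Theorem 3: A X B = F and C X D = G become
# S vec(X) = [vec(F); vec(G)] with S = [[B^T kron A], [D^T kron C]];
# if S has full column rank, vec(X) = (S^T S)^{-1} S^T [vec(F); vec(G)].
rng = np.random.default_rng(2)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
F, G = A @ X_true @ B, C @ X_true @ D

vec = lambda M: M.flatten(order="F")       # column-stacking vec-operator
S = np.vstack([np.kron(B.T, A), np.kron(D.T, C)])
x = np.linalg.solve(S.T @ S, S.T @ np.concatenate([vec(F), vec(G)]))
X = x.reshape((n, n), order="F")
print(np.allclose(X, X_true))  # True
```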

##### 3.2. The Gradient Iterative Algorithm

The hierarchical identification principle implies that the related system can be decomposed into several subsystems, so we introduce four intermediate matrices by regarding, in turn, $XB$, $AX$, $XD$, and $CX$ as the unknowns of the two equations in (6). Using these four intermediate matrices, we decompose (6) into four subsystems. According to Lemma 2, it is not hard to obtain the iterative solutions ${X}_{1}(k)$, ${X}_{2}(k)$, ${X}_{3}(k)$, and ${X}_{4}(k)$ of these four subsystems; the range of the convergence factor $\mu$ will be determined later. Because the unknown matrix $X$ appears on the right-hand sides of these expressions, the algorithm cannot be implemented directly. By using the hierarchical identification principle, we replace the unknown matrix $X$ with its estimate $X(k-1)$ at iteration $k$. In fact, only one iterative solution is needed in the algorithm, so we take the average of ${X}_{1}(k)$, ${X}_{2}(k)$, ${X}_{3}(k)$, and ${X}_{4}(k)$ as $X(k)$ and obtain the gradient-based iterative algorithm in (15)–(20).

Theorem 4. If (6) has a unique solution $X$, then the iterative solution $X(k)$ given by (15)–(20) converges to $X$, that is, $X(k)\to X$; equivalently, the error matrix $X(k)-X$ converges to zero for any initial value $X(0)$.

Proof. Define the estimation error matrix $\tilde{X}(k):=X(k)-X$ together with the corresponding auxiliary error matrices. From this definition and (15)–(19), we obtain the error formulas (22)–(23). By the trace formula and from (22), we obtain (25)–(28); adding (25) to (26) and, similarly, (27) to (28) gives (29) and (30). Using (29) and (30), taking the norm of both sides of (23), and substituting (31) into (32) yields (33)–(34). Hence, if the convergence factor $\mu$ is chosen within the derived range, the sequence $\|\tilde{X}(k)\|$ is non-increasing and the accumulated residual terms form a convergent series. It follows that the residuals $A\tilde{X}(k)B$ and $C\tilde{X}(k)D$ tend to zero as $k\to\infty$. From Theorem 3, this implies $\tilde{X}(k)\to 0$, that is, $X(k)\to X$. This completes the proof of Theorem 4.
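The convergence behavior asserted by Theorem 4 can be sketched numerically. The update below is an assumed averaged-residual form of the algorithm (gradient descent on $\tfrac{1}{2}(\|F-AXB\|^{2}+\|G-CXD\|^{2})$), and the matrices are hypothetical:

```python
import numpy as np

# Hedged sketch of the coupled gradient iteration for A X B = F, C X D = G:
#   X(k) = X(k-1) + (mu/2) * ( A^T [F - A X(k-1) B] B^T
#                            + C^T [G - C X(k-1) D] D^T )
A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[2.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 2.0], [0.0, 1.0]])
D = np.array([[1.0, 0.0], [1.0, 2.0]])
X_true = np.array([[1.0, -1.0], [0.5, 2.0]])
F, G = A @ X_true @ B, C @ X_true @ D

# One admissible convergence factor, derived from the stacked block matrix
# S = [[B^T kron A], [D^T kron C]] of Theorem 3.
S = np.vstack([np.kron(B.T, A), np.kron(D.T, C)])
mu = 1.0 / np.linalg.eigvalsh(S.T @ S).max()

X = np.zeros_like(X_true)                 # arbitrary initial value
for _ in range(3000):
    X = X + mu * (A.T @ (F - A @ X @ B) @ B.T
                  + C.T @ (G - C @ X @ D) @ D.T)

print(np.linalg.norm(X - X_true) < 1e-6)  # True
```

The step size $1/\lambda_{\max}[{S}^{T}S]$ is one safe choice; the paper's derived range for $\mu$ is generally conservative, as Section 5 notes.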

#### 4. General Real Coupled Matrix Equations

In this section, we study the general real coupled matrix equations (39), in which the coefficient matrices and the right-hand side matrices are given and $X$ is the unknown matrix to be determined.

##### 4.1. The Exact Solution

According to Lemma 1, (39) can be rewritten in the vec form (40), where $S$ is a block matrix built from the Kronecker products of the coefficient matrices. The exact solution of (39) can be given by the following theorem.

Theorem 5. Equation (39) has a unique solution if and only if $S$ is a full-column rank matrix; in this case, the unique solution is $\mathrm{vec}(X)=({S}^{T}S)^{-1}{S}^{T}b$, where $b$ stacks the vec of the right-hand side matrices. If the right-hand side matrices are zero, then (39) has the unique solution $X=0$.

##### 4.2. The Gradient-Based Iterative Algorithm

We define two intermediate matrices and, by using the hierarchical identification principle, decompose (39) into two subsystems. According to Lemma 2, we obtain the iterative solutions ${X}_{1}(k)$ and ${X}_{2}(k)$ of the two subsystems at iteration $k$, given by (45) and (46). Since the unknown matrix $X$ appears on the right-hand sides, the algorithm cannot be realized directly; we therefore replace the unknown matrix $X$ on the right-hand sides of (45) and (46) with its estimates and obtain realizable recursions. In fact, only one iterative solution is needed; taking the average of ${X}_{1}(k)$ and ${X}_{2}(k)$ as $X(k)$, we obtain the gradient-based iterative algorithm (49)–(52) for solving (39).

Theorem 6. If (39) has a unique solution $X$, then the iterative solution $X(k)$ given by (49)–(52) converges to $X$, that is, $X(k)\to X$; equivalently, the error matrix $X(k)-X$ converges to zero for any initial value $X(0)$.

Proof. Define the estimation error matrix $\tilde{X}(k):=X(k)-X$ and the corresponding auxiliary error matrices. By using (49)–(51), we obtain the error formulas (54)–(55). According to the trace formula and using (54), we obtain (57)–(58); adding (57) to (58) gives (59). Taking the norm of both sides of (55) and applying the norm inequality gives (60). Dividing (59) by the appropriate factor and using inequality (60) yields (61); the second inequality in this step uses the assumption that (39) has a unique solution. Hence, if the convergence factor $\mu$ is chosen within the derived range, the error norms form a non-increasing sequence and the accumulated residual terms form a convergent series, so the residuals tend to zero as $k\to\infty$. According to Theorem 5, this implies $\tilde{X}(k)\to 0$, that is, $X(k)\to X$. This completes the proof of Theorem 6.
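As one representative instance of such general coupled equations, consider $AXB+CXD=F$ (a form treated in the spirit of [19]), whose hierarchical decomposition yields two subsystems and an averaged gradient update. The sketch below uses hypothetical matrices and absorbs the averaging constant into the step size:

```python
import numpy as np

# Hedged sketch for A X B + C X D = F with residual R = F - A X B - C X D:
#   X(k) = X(k-1) + step * ( A^T R B^T + C^T R D^T ),
# i.e. gradient descent on (1/2) ||F - A X B - C X D||^2.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[2.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 2.0], [0.0, 1.0]])
D = np.array([[1.0, 0.0], [1.0, 2.0]])
X_true = np.array([[1.0, -1.0], [0.5, 2.0]])
F = A @ X_true @ B + C @ X_true @ D

# In vec form the coefficient matrix is M = B^T kron A + D^T kron C;
# step = 1 / lambda_max[M^T M] is one admissible choice.
M = np.kron(B.T, A) + np.kron(D.T, C)
step = 1.0 / np.linalg.eigvalsh(M.T @ M).max()

X = np.zeros_like(X_true)
for _ in range(20000):
    R = F - A @ X @ B - C @ X @ D
    X = X + step * (A.T @ R @ B.T + C.T @ R @ D.T)

print(np.linalg.norm(X - X_true) < 1e-6)  # True
```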

#### 5. A Numerical Example

This section offers a numerical example to illustrate the performance of the proposed algorithm. Consider (6) with given coefficient matrices $A$, $B$, $C$, $D$, $F$, and $G$; from Theorem 3, the exact solution $X$ can be computed.

Taking an initial iterative value $X(0)$, we apply the gradient-based iterative algorithm in (15)–(19) to compute $X(k)$. The iterative solutions are shown in Table 1, where $\delta:=\|X(k)-X\|/\|X\|$ denotes the relative error and $\mu$ is the convergence factor. The relative errors for different convergence factors are shown in Figure 1. From Table 1 and Figure 1, we can see that the relative error $\delta$ becomes smaller and smaller and tends to zero as the number of iterations increases. This demonstrates that the algorithm proposed in this paper is effective.

Table 1: The gradient-based iterative solution $X(k)$ and the relative error $\delta$.
Figure 1: The relative error $\delta$ versus $k$ for different convergence factors $\mu$.

A simple calculation indicates that the derived range of the convergence factor is conservative. A closer look at Figure 1 shows that the rate of convergence increases as $\mu$ is enlarged, up to a point; if $\mu$ is enlarged further, the rate of convergence drops. This indicates that a best convergence factor exists but is not identified by this algorithm. How to determine the best convergence factor is a topic for future work.
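This non-monotone dependence on $\mu$ can be reproduced in a small experiment (with illustrative matrices, not those of the paper's example): within the admissible range, the error after a fixed number of iterations shrinks as $\mu$ grows toward $1/\lambda_{\max}$, while pushing $\mu$ near the upper bound $2/\lambda_{\max}$ slows convergence again.

```python
import numpy as np

# Effect of the convergence factor mu for A X B = F (illustrative data).
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
X_true = np.array([[1.0, -1.0], [2.0, 0.5]])
F = A @ X_true @ B
L = np.linalg.eigvalsh(A @ A.T).max() * np.linalg.eigvalsh(B.T @ B).max()

def run(mu, iters=60):
    """Error norm after a fixed number of gradient iterations."""
    X = np.zeros_like(X_true)
    for _ in range(iters):
        X = X + mu * A.T @ (F - A @ X @ B) @ B.T
    return np.linalg.norm(X - X_true)

# Sample mu = m / L for m in (0.3, 1.0, 1.9); all lie inside (0, 2/L).
errs = {m: run(m / L) for m in (0.3, 1.0, 1.9)}
print(errs[1.0] < errs[0.3])  # True: larger mu (up to 1/L) converges faster
```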

#### 6. Conclusions

This paper has proposed a gradient-based iterative algorithm for solving a class of real coupled matrix equations. By using the hierarchical identification principle, we proved that the iterative solution converges to the unique solution whenever it exists. A numerical example demonstrated that the algorithm is effective and indicated that a best convergence factor exists. Finding this best convergence factor is left for future work.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China.

#### References

1. Y. Shi and B. Yu, “Output feedback stabilization of networked control systems with random delays modeled by Markov chains,” IEEE Transactions on Automatic Control, vol. 54, no. 7, pp. 1668–1674, 2009.
2. H. Li and Y. Shi, “State-feedback ${H}_{\infty }$ control for stochastic time-delay nonlinear systems with state and disturbance-dependent noise,” International Journal of Control, vol. 85, no. 10, pp. 1515–1531, 2012.
3. Y. Shi and B. Yu, “Robust mixed ${H}_{2}/{H}_{\infty }$ control of networked control systems with random time delays in both forward and backward communication links,” Automatica, vol. 47, no. 4, pp. 754–760, 2011.
4. F. Ding and T. Chen, “Hierarchical least squares identification methods for multivariable systems,” IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 397–402, 2005.
5. F. Ding, L. Qiu, and T. Chen, “Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems,” Automatica, vol. 45, no. 2, pp. 324–332, 2009.
6. Y. Liu, F. Ding, and Y. Shi, “An efficient hierarchical identification method for general dual-rate sampled-data systems,” Automatica, vol. 50, no. 3, pp. 962–973, 2014.
7. F. Ding and T. Chen, “Hierarchical gradient-based identification of multivariable discrete-time systems,” Automatica, vol. 41, no. 2, pp. 315–325, 2005.
8. F. Ding and T. Chen, “Hierarchical identification of lifted state-space models for general dual-rate systems,” IEEE Transactions on Circuits and Systems. I. Regular Papers, vol. 52, no. 6, pp. 1179–1187, 2005.
9. F. Ding, T. Chen, and Z. Iwai, “Adaptive digital control of Hammerstein nonlinear systems with limited output sampling,” SIAM Journal on Control and Optimization, vol. 45, no. 6, pp. 2257–2276, 2007.
10. B. Zhou, J. Lam, and G.-R. Duan, “Toward solution of matrix equation $X=Af\left(X\right)B+C$,” Linear Algebra and Its Applications, vol. 435, no. 6, pp. 1370–1398, 2011.
11. H. Zhang and F. Ding, “A property of the eigenvalues of the symmetric positive definite matrix and the iterative algorithm for coupled Sylvester matrix equations,” Journal of the Franklin Institute, vol. 351, no. 1, pp. 340–357, 2014.
12. X.-F. Duan, Q.-W. Wang, and J.-F. Li, “On the low-rank approximation arising in the generalized Karhunen-Loeve transform,” Abstract and Applied Analysis, vol. 2013, Article ID 528281, 8 pages, 2013.
13. J.-F. Li, X.-Y. Hu, and L. Zhang, “New symmetry preserving method for optimal correction of damping and stiffness matrices using measured modes,” Journal of Computational and Applied Mathematics, vol. 234, no. 5, pp. 1572–1585, 2010.
14. J. Liu, Z. Huang, L. Zhu, and Z. Huang, “Theorems on Schur complement of block diagonally dominant matrices and their application in reducing the order for the solution of large scale linear systems,” Linear Algebra and Its Applications, vol. 435, no. 12, pp. 3085–3100, 2011.
15. B. Zhou, G.-B. Cai, and J. Lam, “Positive definite solutions of the nonlinear matrix equation $X+{A}^{H}{\stackrel{-}{X}}^{-1}A=I$,” Applied Mathematics and Computation, vol. 219, no. 14, pp. 7377–7391, 2013.
16. J.-F. Li, X.-Y. Hu, and X.-F. Duan, “A symmetric preserving iterative method for generalized Sylvester equation,” Asian Journal of Control, vol. 13, no. 3, pp. 408–417, 2011.
17. J. Liu, J. Zhang, and Y. Liu, “New solution bounds for the continuous algebraic Riccati equation,” Journal of the Franklin Institute, vol. 348, no. 8, pp. 2128–2141, 2011.
18. K. Liang and J. Liu, “Iterative algorithms for the minimum-norm solution and the least-squares solution of the linear matrix equations ${A}_{1}X{B}_{1}+{C}_{1}{X}^{\text{T}}{D}_{1}={M}_{1}$, ${A}_{2}X{B}_{2}+{C}_{2}{X}^{\text{T}}{D}_{2}={M}_{2}$,” Applied Mathematics and Computation, vol. 218, no. 7, pp. 3166–3175, 2011.
19. F. Ding and T. Chen, “Gradient based iterative algorithms for solving a class of matrix equations,” IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
20. F. Ding and T. Chen, “Iterative least-squares solutions of coupled Sylvester matrix equations,” Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
21. F. Ding and T. Chen, “On iterative solutions of general coupled matrix equations,” SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
22. F. Ding, P. X. Liu, and J. Ding, “Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle,” Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
23. L. Xie, J. Ding, and F. Ding, “Gradient based iterative solutions for general linear matrix equations,” Computers & Mathematics with Applications, vol. 58, no. 7, pp. 1441–1448, 2009.
24. L. Xie, Y. Liu, and H. Yang, “Gradient based and least squares based iterative algorithms for matrix equations $AXB+{CX}^{\text{T}}D=F$,” Applied Mathematics and Computation, vol. 217, no. 5, pp. 2191–2199, 2010.
25. J. Ding, Y. Liu, and F. Ding, “Iterative solutions to matrix equations of the form ${A}_{i}X{B}_{i}={F}_{i}$,” Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3500–3507, 2010.
26. A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, “Iterative solutions to the extended Sylvester-conjugate matrix equations,” Applied Mathematics and Computation, vol. 217, no. 1, pp. 130–142, 2010.
27. A.-G. Wu, L. Lv, and G.-R. Duan, “Iterative algorithms for solving a class of complex conjugate and transpose matrix equations,” Applied Mathematics and Computation, vol. 217, no. 21, pp. 8343–8353, 2011.
28. A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, “Iterative solutions to coupled Sylvester-conjugate matrix equations,” Computers & Mathematics with Applications, vol. 60, no. 1, pp. 54–66, 2010.
29. C. Song and G. Chen, “An efficient algorithm for solving extended Sylvester-conjugate transpose matrix equations,” Arab Journal of Mathematical Sciences, vol. 17, no. 2, pp. 115–134, 2011.
30. F. Ding, X. P. Liu, and G. Liu, “Identification methods for Hammerstein nonlinear systems,” Digital Signal Processing, vol. 21, no. 2, pp. 215–238, 2011.
31. F. Ding, “Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling,” Applied Mathematical Modelling, vol. 37, no. 4, pp. 1694–1704, 2013.
32. F. Ding, Y. Liu, and B. Bao, “Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems,” Proceedings of the Institution of Mechanical Engineers I: Journal of Systems and Control Engineering, vol. 226, no. 1, pp. 43–55, 2012.
33. F. Ding, P. X. Liu, and G. Liu, “Gradient based and least-squares based iterative identification methods for OE and OEMA systems,” Digital Signal Processing, vol. 20, no. 3, pp. 664–677, 2010.
34. F. Ding, “Decomposition based fast least squares algorithm for output error systems,” Signal Processing, vol. 93, no. 5, pp. 1235–1242, 2013.
35. D. Q. Wang, “Least squares-based recursive and iterative estimation for output error moving average systems using data filtering,” IET Control Theory & Applications, vol. 5, no. 14, pp. 1648–1657, 2011.
36. F. Ding, Y. Shi, and T. Chen, “Auxiliary model-based least-squares identification methods for Hammerstein output-error systems,” Systems & Control Letters, vol. 56, no. 5, pp. 373–380, 2007.
37. F. Ding, P. X. Liu, and G. Liu, “Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises,” Signal Processing, vol. 89, no. 10, pp. 1883–1890, 2009.
38. F. Ding and Y. Gu, “Performance analysis of the auxiliary model-based least-squares identification algorithm for one-step state-delay systems,” International Journal of Computer Mathematics, vol. 89, no. 15, pp. 2019–2028, 2012.
39. F. Ding and Y. Gu, “Performance analysis of the auxiliary model-based stochastic gradient parameter estimation algorithm for state-space systems with one-step state delay,” Circuits, Systems, and Signal Processing, vol. 32, no. 2, pp. 585–599, 2013.
40. F. Ding and T. Chen, “Performance analysis of multi-innovation gradient type identification methods,” Automatica, vol. 43, no. 1, pp. 1–14, 2007.
41. F. Ding, “Several multi-innovation identification methods,” Digital Signal Processing, vol. 20, no. 4, pp. 1027–1039, 2010.
42. L. Han and F. Ding, “Multi-innovation stochastic gradient algorithms for multi-input multi-output systems,” Digital Signal Processing, vol. 19, no. 4, pp. 545–554, 2009.
43. D. Wang and F. Ding, “Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems,” Digital Signal Processing, vol. 20, no. 3, pp. 750–762, 2010.
44. Y. Liu, L. Yu, and F. Ding, “Multi-innovation extended stochastic gradient algorithm and its performance analysis,” Circuits, Systems, and Signal Processing, vol. 29, no. 4, pp. 649–667, 2010.
45. F. Ding, “Two-stage least squares based iterative estimation algorithm for CARARMA system modeling,” Applied Mathematical Modelling, vol. 37, no. 7, pp. 4798–4808, 2013.
46. Y. Liu and R. Ding, “Consistency of the extended gradient identification algorithm for multi-input multi-output systems with moving average noises,” International Journal of Computer Mathematics, vol. 90, no. 9, pp. 1840–1852, 2013.
47. Y. Zhang, “Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods,” Mathematical and Computer Modelling, vol. 53, no. 9-10, pp. 1810–1819, 2011.
48. Y. Zhang and G. Cui, “Bias compensation methods for stochastic systems with colored noise,” Applied Mathematical Modelling, vol. 35, no. 4, pp. 1709–1716, 2011.
49. F. Ding, “Combined state and least squares parameter estimation algorithms for dynamic systems,” Applied Mathematical Modelling, vol. 38, no. 1, pp. 403–412, 2014.
50. F. Ding, “Coupled-least-squares identification for multivariable systems,” IET Control Theory & Applications, vol. 7, no. 1, pp. 68–79, 2013.
51. D. Wang and F. Ding, “Least squares based and gradient based iterative identification for Wiener nonlinear systems,” Signal Processing, vol. 91, no. 5, pp. 1182–1189, 2011.
52. D. Q. Wang and F. Ding, “Hierarchical least squares estimation algorithm for Hammerstein-Wiener systems,” IEEE Signal Processing Letters, vol. 19, no. 12, pp. 825–828, 2012.
53. D. Wang, F. Ding, and Y. Chu, “Data filtering based recursive least squares algorithm for Hammerstein systems using the key-term separation principle,” Information Sciences, vol. 222, pp. 203–212, 2013.
54. F. Ding, X. Liu, and J. Chu, “Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle,” IET Control Theory & Applications, vol. 7, no. 2, pp. 176–184, 2013.
55. F. Ding, Y. Shi, and T. Chen, “Performance analysis of estimation algorithms of nonstationary ARMA processes,” IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1041–1053, 2006.
56. F. Ding, T. Chen, and L. Qiu, “Bias compensation based recursive least-squares identification algorithm for MISO systems,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 53, no. 5, pp. 349–353, 2006.
57. X. Luan, P. Shi, and F. Liu, “Stabilization of networked control systems with random delays,” IEEE Transactions on Industrial Electronics, vol. 58, no. 9, pp. 4323–4330, 2011.
58. X. Luan, S. Zhao, and F. Liu, “${H}_{\infty }$ control for discrete-time Markov jump systems with uncertain transition probabilities,” IEEE Transactions on Automatic Control, vol. 58, no. 6, pp. 1566–1572, 2013.
59. H. Zhang and F. Ding, “On the Kronecker products and their applications,” Journal of Applied Mathematics, vol. 2013, Article ID 296185, 8 pages, 2013.