ISRN Computational Mathematics
Volume 2012 (2012), Article ID 126908, 6 pages
http://dx.doi.org/10.5402/2012/126908
Research Article

A New Iterative Algorithm for Solving a Class of Matrix Nearness Problem

Xuefeng Duan and Chunmei Li

1College of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China
2Department of Mathematics, Shanghai University, Shanghai 200444, China

Received 14 September 2011; Accepted 3 October 2011

Academic Editors: T. Allahviranloo and K. Eom

Copyright © 2012 Xuefeng Duan and Chunmei Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Based on the alternating projection algorithm, which was proposed by von Neumann to treat the problem of finding the projection of a given point onto the intersection of two closed subspaces, we propose a new iterative algorithm to solve the matrix nearness problem associated with the matrix equations $AXB=E$, $CXD=F$, which arises frequently in experimental design. If we choose the initial iterative matrix $X_0=0$, the least Frobenius norm solution of these matrix equations is obtained. Numerical examples show that the new algorithm is feasible and effective.

1. Introduction

Denote by $R^{m\times n}$ the set of $m\times n$ real matrices, and by $A^T$ and $A^+$ the transpose and the Moore-Penrose generalized inverse of the matrix $A$, respectively. For $A,B\in R^{m\times n}$, $\langle A,B\rangle=\mathrm{trace}(B^TA)$ denotes the inner product of the matrices $A$ and $B$. The induced norm is the so-called Frobenius norm, that is, $\|A\|=\langle A,A\rangle^{1/2}$; then $R^{m\times n}$ is a real Hilbert space. Let $M$ be a closed convex subset of a real Hilbert space $H$ and $u$ be a point in $H$; then the point in $M$ nearest to $u$ is called the projection of $u$ onto $M$ and is denoted by $P_M(u)$; that is to say, $P_M(u)$ is the solution of the following minimization problem (see [1, 2])
$\min_{x\in M}\|x-u\|,$ (1)
that is,
$\|P_M(u)-u\|=\min_{x\in M}\|x-u\|.$ (2)
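As a quick sanity check of these definitions, the trace form of the inner product reproduces MATLAB's built-in Frobenius norm. This is a minimal sketch; the two matrices are arbitrary illustrative data:

    A = [1 2; 3 4];  B = [5 6; 7 8];
    ip   = trace(B'*A);               % <A,B> = trace(B^T A)
    nrmA = sqrt(trace(A'*A));         % induced norm ||A|| = <A,A>^(1/2)
    disp(abs(nrmA - norm(A,'fro')))   % agrees with norm(A,'fro') up to rounding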

The problem of finding a nearest matrix $\widehat{X}$ in a constraint matrix set to a given matrix $\bar{X}$ is called the matrix nearness problem. Because the preliminary estimation $\bar{X}$ is frequently obtained from experiments, it may not satisfy the given restrictions. Hence it is necessary to find a nearest matrix $\widehat{X}$ in this constraint matrix set to replace the estimation $\bar{X}$ [3]. In areas such as structural design, finite element model updating, and control theory, the constraint matrix set is often the (constrained) solution set or the least squares (constrained) solution set of some matrix equations [4–6]. Thus, the problem mentioned above is also called the matrix nearness problem associated with the matrix equation. Recently, there have been many discussions of the matrix nearness problem associated with matrix equations; see, for instance, [4, 6–14].

In this paper, we consider the following problem.

Problem 1. Given matrices $A\in R^{p\times n}$, $B\in R^{n\times q}$, $C\in R^{s\times n}$, $D\in R^{n\times t}$, $E\in R^{p\times q}$, $F\in R^{s\times t}$, and $\bar{X}\in R^{n\times n}$, find $\widehat{X}\in\Omega$ such that
$\|\widehat{X}-\bar{X}\|=\min_{X\in\Omega}\|X-\bar{X}\|,$ (3)
where
$\Omega=\{X\in R^{n\times n}\mid AXB=E,\ CXD=F\}.$ (4)
Obviously, $\Omega$ is the solution set of the matrix equations
$AXB=E,\quad CXD=F,$ (5)
and $\widehat{X}$ is the optimal approximate solution of (5) to the given matrix $\bar{X}$. In particular, if $\bar{X}=0$, then the solution $\widehat{X}$ of Problem 1 is just the least Frobenius norm solution of the matrix equations (5). It is easy to verify that $\Omega$ is a convex set, so the solution of Problem 1 is unique.

The matrix equations (5) and the associated matrix nearness problem have been studied extensively for more than 40 years. Wang [15] and Navarra et al. [16] gave some conditions for the existence of a solution and some representations of the general common solution to (5). Using the projection theorem and matrix decompositions, Liao et al. [6] gave an analytical expression for the optimal approximate least squares symmetric solution of (5). However, these direct methods may be less efficient for large sparse coefficient matrices because of the limited storage and speed of computers. Therefore, iterative methods for solving the matrix equations (5) have attracted much interest recently. Peng et al. [11] and Chen et al. [7] proposed iterative methods to compute the symmetric solutions and the optimal approximate symmetric solution of (5). An efficient iterative method for solving the matrix nearness Problem 1 associated with the matrix equations (5) was presented in [13]. Ding et al. [17] considered the unique solution of the matrix equations (5) and used a gradient-based iterative algorithm to compute it. The (least squares) solutions and the optimal approximate (least squares) solutions of (5) constrained to be bisymmetric, reflexive, generalized reflexive, or generalized centrosymmetric were studied in [7–10, 12].

The alternating projection algorithm dates back to von Neumann [18], who treated the problem of finding the projection of a given point onto the intersection of two closed subspaces. Later, Bauschke and Borwein [1] extended the analysis of von Neumann's alternating projection scheme to the case of two closed affine subspaces. There are many variations and extensions of the alternating projection algorithm, which can be used to find the projection of a given point onto the intersection of $k$ ($k\geq 2$) closed subspaces [19] or $k$ ($k\geq 2$) closed convex sets [20, 21]. For a complete discussion of the alternating projection algorithm, see Deutsch [2].

In this paper, we propose a new algorithm to solve Problem 1. We state Problem 1 as the minimization of a convex quadratic function over the intersection of two closed affine subspaces in the vector space $R^{n\times n}$. From this point of view, Problem 1 can be solved by the alternating projection algorithm. If we choose the initial iterative matrix $X_0=0$, the least Frobenius norm solution of the matrix equations $AXB=E$, $CXD=F$ is obtained. Finally, we use some numerical examples to show that the new algorithm is faster and has lower computational cost per step than the algorithm proposed by Sheng and Chen [13] for solving Problem 1. In particular, the CPU time and number of iteration steps of our algorithm grow slowly as the dimension of the matrix increases, so our algorithm is suitable for large-scale problems.

2. Alternating Projection Algorithm for Solving Problem 1

In this section, we apply the alternating projection algorithm to solve Problem 1. We begin with three lemmas.

Lemma 1 (see [1, Theorem 4.1]). Let $M_1=a+\mathcal{M}_1$ and $M_2=b+\mathcal{M}_2$ be closed affine subspaces in a Hilbert space $H$ and $u$ be a point in $H$. Here, $\mathcal{M}_1$ and $\mathcal{M}_2$ are closed subspaces and $a,b\in H$. If $M_1\cap M_2\neq\emptyset$, then the sequences $\{x_k\}$ and $\{y_k\}$ generated by the alternating projection algorithm
$x_0=u,\quad y_{k+1}=P_{M_1}(x_k),\quad x_{k+1}=P_{M_2}(y_{k+1}),\quad k=0,1,2,\dots,$ (6)
both converge to the projection of the point $u$ onto the intersection of $M_1$ and $M_2$, that is,
$x_k\to P_{M_1\cap M_2}(u),\quad y_k\to P_{M_1\cap M_2}(u),\quad k\to+\infty.$ (7)

Lemma 2 (see [22, Theorem 9.3.1]). Let $A\in R^{p\times n}$, $B\in R^{n\times q}$, and $E\in R^{p\times q}$ be known matrices. Then the matrix equation $AXB=E$ has a solution if and only if
$AA^{+}EB^{+}B=E,$ (8)
in which case the general solution is
$X=A^{+}EB^{+}+Y-A^{+}AYBB^{+},$ (9)
where $Y\in R^{n\times n}$ is arbitrary.
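In MATLAB, Lemma 2 translates directly into a few lines built on pinv. This is a minimal sketch, assuming $A$, $B$, $E$ are already in the workspace; the tolerance tol is our own illustrative choice:

    % Check the solvability condition (8), then form one solution from (9)
    tol = 1e-12;
    solvable = norm(A*pinv(A)*E*pinv(B)*B - E, 'fro') <= tol;
    if solvable
        X = pinv(A)*E*pinv(B);   % the choice Y = 0 in (9)
    end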

Lemma 3 (see [22, Theorem 9.3.2]). Given $Z\in R^{n\times n}$, set
$\mathcal{L}=\{X\in R^{n\times n}\mid AXB=E\},\quad A\in R^{p\times n},\ B\in R^{n\times q},\ E\in R^{p\times q};$ (10)
then the solution $\widehat{X}$ of the following problem
$\min_{X\in\mathcal{L}}\|X-Z\|$ (11)
is
$\widehat{X}=Z+A^{+}(E-AZB)B^{+},$ (12)
that is,
$\|\widehat{X}-Z\|=\min_{X\in\mathcal{L}}\|X-Z\|.$ (13)

Now we begin to use the alternating projection algorithm (6) to solve Problem 1. Firstly, we define two sets
$\Omega_1=\{X\in R^{n\times n}\mid AXB=E\},\quad \Omega_2=\{X\in R^{n\times n}\mid CXD=F\}.$ (14)
It is easy to see that $\Omega=\Omega_1\cap\Omega_2$, and if the set $\Omega$ is nonempty, then
$\Omega_1\cap\Omega_2\neq\emptyset.$ (15)
By Lemma 2, the sets $\Omega_1$ and $\Omega_2$ can be equivalently written as
$\Omega_1=\{X\mid AXB=E\}=\{A^{+}EB^{+}+Y-A^{+}AYBB^{+}\mid Y\in R^{n\times n}\}=A^{+}EB^{+}+\{Y-A^{+}AYBB^{+}\mid Y\in R^{n\times n}\},$
$\Omega_2=\{X\mid CXD=F\}=\{C^{+}FD^{+}+Y-C^{+}CYDD^{+}\mid Y\in R^{n\times n}\}=C^{+}FD^{+}+\{Y-C^{+}CYDD^{+}\mid Y\in R^{n\times n}\}.$ (16)
Hence, $\Omega_1$ and $\Omega_2$ are closed affine subspaces.

After defining the sets $\Omega_1$ and $\Omega_2$, Problem 1 can be rewritten as finding $\widehat{X}\in\Omega=\Omega_1\cap\Omega_2$ such that
$\|\widehat{X}-\bar{X}\|=\min_{X\in\Omega_1\cap\Omega_2}\|X-\bar{X}\|.$ (17)
Comparing (17) with (2), it is easy to see that
$\widehat{X}=P_{\Omega_1\cap\Omega_2}(\bar{X}).$ (18)
Therefore, Problem 1 can be converted equivalently into finding the projection $P_{\Omega_1\cap\Omega_2}(\bar{X})$ of the given matrix $\bar{X}$ onto the intersection $\Omega_1\cap\Omega_2$. We will use the alternating projection algorithm (6) to compute this projection and thereby obtain the solution $\widehat{X}$ of Problem 1.

By (6) we can see that the key to implementing the alternating projection algorithm is computing the projections $P_{\Omega_1}(Z)$ and $P_{\Omega_2}(Z)$ of a matrix $Z$ onto $\Omega_1$ and $\Omega_2$, respectively. These projections are given explicitly in the following theorems.

Theorem 1. Suppose that the set $\Omega_1$ is nonempty. For a given $n\times n$ matrix $Z$, we have
$P_{\Omega_1}(Z)=Z+A^{+}(E-AZB)B^{+}.$ (19)

Proof. By (1) and (2), we know that the projection $P_{\Omega_1}(Z)$ is the solution of the minimization problem
$\min_{X\in\Omega_1}\|X-Z\|,$ (20)
and according to Lemma 3 the solution of (20) is $Z+A^{+}(E-AZB)B^{+}$. Hence,
$P_{\Omega_1}(Z)=Z+A^{+}(E-AZB)B^{+}.$ (21)

Theorem 2. Suppose that the set $\Omega_2$ is nonempty. For a given $n\times n$ matrix $Z$, we have
$P_{\Omega_2}(Z)=Z+C^{+}(F-CZD)D^{+}.$ (22)

Proof. The proof is similar to that of Theorem 1 and is omitted here.
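The two projection formulas (19) and (22) are one-liners in MATLAB. This is a minimal sketch, assuming $A$, $B$, $C$, $D$, $E$, $F$ are in the workspace; the handle names P1 and P2 are our own:

    % Projections onto Omega_1 and Omega_2 (Theorems 1 and 2)
    P1 = @(Z) Z + pinv(A)*(E - A*Z*B)*pinv(B);
    P2 = @(Z) Z + pinv(C)*(F - C*Z*D)*pinv(D);

In practice the pseudo-inverses should be computed once and reused, as Algorithm 1 below does.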

By the alternating projection algorithm (6) and Theorems 1 and 2, we obtain a new algorithm for solving Problem 1, which can be stated as follows.

Algorithm 1. One has
(1) set $\tilde{A}=A^{+}$, $\tilde{B}=B^{+}$, $\tilde{C}=C^{+}$, $\tilde{D}=D^{+}$;
(2) set $X_0=\bar{X}$;
(3) for $k=0,1,2,3,\dots$
$Y_{k+1}=P_{\Omega_1}(X_k)=X_k+\tilde{A}(E-AX_kB)\tilde{B},$
$X_{k+1}=P_{\Omega_2}(Y_{k+1})=Y_{k+1}+\tilde{C}(F-CY_{k+1}D)\tilde{D},$ (23)
end.
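A compact MATLAB implementation of Algorithm 1 might look as follows. This is a sketch; the function name, iteration cap, and the residual stopping rule (anticipating the Error criterion of Section 3) are our own choices:

    function X = apa_solve(A, B, C, D, E, F, Xbar, tol, maxit)
    % Alternating projection algorithm (Algorithm 1) for Problem 1
    At = pinv(A);  Bt = pinv(B);    % pseudo-inverses computed once (step 1)
    Ct = pinv(C);  Dt = pinv(D);
    X = Xbar;                       % initial iterate X_0 = Xbar (step 2)
    for k = 1:maxit                 % step 3: alternate the two projections
        Y = X + At*(E - A*X*B)*Bt;        % Y_{k+1} = P_{Omega_1}(X_k)
        X = Y + Ct*(F - C*Y*D)*Dt;        % X_{k+1} = P_{Omega_2}(Y_{k+1})
        err = norm(E - A*X*B, 'fro') + norm(F - C*X*D, 'fro');
        if err <= tol, break; end         % residual stopping criterion
    end
    end

Note that each iteration costs only a handful of matrix multiplications, since the pseudo-inverses are formed once outside the loop.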
By Lemma 1, together with (15) and (16), we get the convergence theorem for Algorithm 1.

Theorem 3. If the set $\Omega$ is nonempty, then the matrix sequences $\{X_k\}$ and $\{Y_k\}$ generated by Algorithm 1 both converge to the projection $P_{\Omega_1\cap\Omega_2}(\bar{X})$ of $\bar{X}$ onto the intersection of $\Omega_1$ and $\Omega_2$, that is,
$X_k\to P_{\Omega_1\cap\Omega_2}(\bar{X}),\quad Y_k\to P_{\Omega_1\cap\Omega_2}(\bar{X}),\quad k\to+\infty.$ (24)

Proof. If the set $\Omega$ is nonempty, by (15) we have
$\Omega_1\cap\Omega_2\neq\emptyset.$ (25)
Noting (16), we know that the sets $\Omega_1$ and $\Omega_2$ are closed affine subspaces in the Hilbert space $R^{n\times n}$. Hence, by Lemma 1 we conclude that the matrix sequences $\{X_k\}$ and $\{Y_k\}$ generated by Algorithm 1 both converge to the projection $P_{\Omega_1\cap\Omega_2}(\bar{X})$ of $\bar{X}$ onto the intersection of $\Omega_1$ and $\Omega_2$, that is,
$X_k\to P_{\Omega_1\cap\Omega_2}(\bar{X}),\quad Y_k\to P_{\Omega_1\cap\Omega_2}(\bar{X}),\quad k\to+\infty.$ (26)

Combining Theorem 3 with the equalities (17) and (18), we have the following result.

Theorem 4. If the set $\Omega$ is nonempty, then the matrix sequences $\{X_k\}$ and $\{Y_k\}$ generated by Algorithm 1 both converge to the unique solution of Problem 1. Moreover, if the initial matrix $X_0=\bar{X}=0$, then the sequences $\{X_k\}$ and $\{Y_k\}$ both converge to the least Frobenius norm solution of the matrix equations $AXB=E$, $CXD=F$.

3. Numerical Experiments

In this section, we give some numerical examples to illustrate that the new algorithm is feasible and effective for solving Problem 1. All programs are written in MATLAB 7.8. We denote
$\mathrm{Error}=\|E-AX_kB\|+\|F-CX_kD\|,$ (27)
and use the practical stopping criterion $\mathrm{Error}\leq 1.0\times 10^{-10}$.

Example 1. Consider Problem 1 with the given matrices $A$, $B$, $E$, $C$, $D$, and $F$ of (28). Here we use ones($n$) and zeros($n$) to stand for the $n\times n$ matrices of ones and zeros. It is easy to verify that $X=\mathrm{ones}(5)$ is a solution of the matrix equations (5); that is to say, the set $\Omega$ is nonempty. Therefore we can use Algorithm 1 to solve Problem 1.

Let $X_0=\bar{X}=\mathrm{zeros}(5)$. After 5 iterations of Algorithm 1, we get the optimal approximate solution
$\widehat{X}\approx X_5=\begin{pmatrix} 0.6817 & 0.7813 & 0.5503 & 0.9637 & 0.7808\\ 0.0152 & 0.8185 & 0.2866 & 0.9699 & 0.8181\\ 0.0113 & 0.8178 & 0.2917 & 0.9698 & 0.8173\\ 0.6091 & 0.9280 & 0.4893 & 0.9980 & 0.9278\\ 0.9497 & 0.9907 & 0.9342 & 0.9985 & 0.9907 \end{pmatrix},$ (29)
which is also the least Frobenius norm solution of the matrix equations (5), and its residual error is
$\mathrm{Error}=\|E-AX_5B\|+\|F-CX_5D\|=2.78\times 10^{-11}.$ (30)
By concrete computations, we know that the distance from $\bar{X}$ to the solution set $\Omega$ is
$\min_{X\in\Omega}\|X-\bar{X}\|=\|\widehat{X}-\bar{X}\|\approx\|X_5-\bar{X}\|=3.9057.$ (31)

Let $X_0=\bar{X}=\mathrm{ones}(5)$. After 6 iterations of Algorithm 1, we get the optimal approximate solution
$\widehat{X}\approx X_6=\begin{pmatrix} 5.7467 & 1.8747 & 7.2013 & 1.1452 & 1.8770\\ 4.9392 & 1.7259 & 6.1463 & 1.1205 & 1.7278\\ 4.9548 & 1.7288 & 6.1668 & 1.1209 & 1.7307\\ 2.5636 & 1.2881 & 3.0427 & 1.478 & 1.2889\\ 1.2013 & 1.0371 & 1.2630 & 1.0062 & 1.372 \end{pmatrix},$ (32)
and its residual error is
$\mathrm{Error}=\|E-AX_6B\|+\|F-CX_6D\|=3.89\times 10^{-11}.$ (33)
By concrete computations, we know that the distance from $\bar{X}$ to the solution set $\Omega$ is
$\min_{X\in\Omega}\|X-\bar{X}\|=\|\widehat{X}-\bar{X}\|\approx\|X_6-\bar{X}\|=16.0902.$ (34)

Example 1 shows that Algorithm 1 is feasible for solving Problem 1.

Example 2. Consider Problem 1 with
$A=\mathrm{rand}(100,n),\quad B=\mathrm{rand}(n,150),\quad E=A\,\mathrm{ones}(n)\,B,\quad C=\mathrm{rand}(70,n),\quad D=\mathrm{rand}(n,120),\quad F=C\,\mathrm{ones}(n)\,D,$ (35)
where rand($s,t$) stands for an $s\times t$ random matrix. Let the given matrix $\bar{X}=\mathrm{zeros}(n)$. It is easy to verify that $X=\mathrm{ones}(n)$ is a solution of the matrix equations (5); that is, the set $\Omega$ is nonempty. Therefore, we can use Algorithm 1 and the following algorithm proposed by Sheng and Chen [13] to solve Problem 1.
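In MATLAB, the test data (35) can be generated as follows. This sketch reuses the hypothetical apa_solve function from Section 2; the values of n, tol, and maxit are illustrative choices:

    n = 200;                          % problem size (illustrative)
    A = rand(100,n);  B = rand(n,150);  E = A*ones(n)*B;
    C = rand(70,n);   D = rand(n,120);  F = C*ones(n)*D;
    Xbar = zeros(n);                  % given matrix
    X = apa_solve(A, B, C, D, E, F, Xbar, 1e-10, 1000);
    disp(norm(E - A*X*B, 'fro') + norm(F - C*X*D, 'fro'))   % residual Error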

Algorithm 2. One has
(1) input $A$, $B$, $C$, $D$, $E$, $F$, and $X_0$;
(2) calculate
$R_0=E-AX_0B,\quad r_0=F-CX_0D,\quad P_0=A^TR_0B^T,\quad Q_0=C^Tr_0D^T,\quad k:=0;$ (36)
(3) if $R_k=0$ and $r_k=0$, then stop; else, $k:=k+1$;
(4) calculate
$X_k=X_{k-1}+\frac{\|R_{k-1}\|^2+\|r_{k-1}\|^2}{\|P_{k-1}+Q_{k-1}\|^2}\,(P_{k-1}+Q_{k-1}),$
$R_k=E-AX_kB,\quad r_k=F-CX_kD,$
$P_k=A^TR_kB^T+\frac{\|R_k\|^2+\|r_k\|^2}{\|R_{k-1}\|^2+\|r_{k-1}\|^2}\,P_{k-1},$
$Q_k=C^Tr_kD^T+\frac{\|R_k\|^2+\|r_k\|^2}{\|R_{k-1}\|^2+\|r_{k-1}\|^2}\,Q_{k-1};$ (37)
(5) go to step (3).
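For the comparison, a possible MATLAB sketch of Algorithm 2 is given below. The function name and the numerical stopping test (replacing the exact test $R_k=0$, $r_k=0$) are our own choices, and the trailing $P_{k-1}$ factor in the update of $P_k$ follows the conjugate-gradient-style structure of (37):

    function X = sheng_chen(A, B, C, D, E, F, X0, tol, maxit)
    % Finite iterative method of Sheng and Chen [13] (Algorithm 2)
    X = X0;
    R = E - A*X*B;   r = F - C*X*D;    % initial residuals (36)
    P = A'*R*B';     Q = C'*r*D';      % initial search directions
    for k = 1:maxit
        res2 = norm(R,'fro')^2 + norm(r,'fro')^2;
        if sqrt(res2) <= tol, break; end    % numerical version of step (3)
        S = P + Q;
        X = X + (res2 / norm(S,'fro')^2) * S;   % update X_k
        R = E - A*X*B;   r = F - C*X*D;         % new residuals
        beta = (norm(R,'fro')^2 + norm(r,'fro')^2) / res2;
        P = A'*R*B' + beta*P;                   % update directions (37)
        Q = C'*r*D' + beta*Q;
    end
    end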

It is easy to see that Algorithm 1 has lower computational cost per step than Algorithm 2. Experiments show that both Algorithm 1 and Algorithm 2 are feasible for solving Problem 1. We list the iteration steps (denoted by IT), CPU time (denoted by CPU), residual error (denoted by ERR), and the distance $\|X_{\mathrm{IT}}-\bar{X}\|$ (denoted by DIS) in Table 1.

Table 1

From Table 1, we can see that Algorithm 1 outperforms Algorithm 2 in both iteration steps and CPU time. Therefore our algorithm is faster than the algorithm proposed by Sheng and Chen [13]. In particular, the CPU time and number of iteration steps of our algorithm grow slowly as the dimension $n$ increases, so our algorithm is suitable for large-scale problems.

4. Conclusion

The alternating projection algorithm dates back to von Neumann [18], who treated the problem of finding the projection of a given point onto the intersection of two closed subspaces. In this paper, we apply the alternating projection algorithm to solve Problem 1, which occurs frequently in experimental design [23]. If we choose the initial matrix $X_0=0$, the least Frobenius norm solution of the matrix equations $AXB=E$, $CXD=F$ can be obtained. Numerical examples show that the new algorithm is faster and has lower computational cost per step than the algorithm proposed by Sheng and Chen [13] for solving Problem 1. In particular, the CPU time and number of iteration steps of the new algorithm grow slowly as the matrix dimension increases, so the alternating projection algorithm is suitable for large-scale matrix nearness problems.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (11101100, 10861005) and the Natural Science Foundation of Guangxi Province (0991238, 2011GXNSFA018138).

References

  1. H. H. Bauschke and J. M. Borwein, "Dykstra's alternating projection algorithm for two sets," Journal of Approximation Theory, vol. 79, no. 3, pp. 418–443, 1994.
  2. F. Deutsch, Best Approximation in Inner Product Spaces, Springer, New York, NY, USA, 2001.
  3. N. Higham, Matrix Nearness Problems and Applications, Oxford University Press, London, UK, 1989.
  4. H. Dai and P. Lancaster, "Linear matrix equations from an inverse problem of vibration theory," Linear Algebra and Its Applications, vol. 246, pp. 31–47, 1996.
  5. M. Friswell and J. Mottershead, Finite Element Model Updating in Structural Dynamics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995.
  6. A. P. Liao, Y. Lei, and S. F. Yuan, "The matrix nearness problem for symmetric matrices associated with the matrix equation [A^T X A, B^T X B] = [C, D]," Linear Algebra and Its Applications, vol. 418, no. 2-3, pp. 939–954, 2006.
  7. Y. Chen, Z. Peng, and T. Zhou, "LSQR iterative common symmetric solutions to matrix equations AXB = E and CXD = F," Applied Mathematics and Computation, vol. 217, no. 1, pp. 230–236, 2010.
  8. J. Cai and G. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2," Mathematical and Computer Modelling, vol. 50, no. 7-8, pp. 1237–1244, 2009.
  9. M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations AXB = E and CXD = F over generalized centro-symmetric matrices," Computers and Mathematics with Applications, vol. 56, no. 12, pp. 3246–3260, 2008.
  10. L. Fanliang, H. Xiyan, and Z. Lei, "The generalized reflexive solution for a class of matrix equations (AX = B, XC = D)," Acta Mathematica Scientia, vol. 28, no. 1, pp. 185–193, 2008.
  11. Y. X. Peng, X. Y. Hu, and L. Zhang, "An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2," Applied Mathematics and Computation, vol. 183, no. 2, pp. 1127–1137, 2006.
  12. Z. H. Peng, X. Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation A_1XB_1 = C_1, A_2XB_2 = C_2," Applied Mathematics and Computation, vol. 181, no. 2, pp. 988–999, 2006.
  13. X. Sheng and G. Chen, "A finite iterative method for solving a pair of linear matrix equations (AXB, CXD) = (E, F)," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1350–1358, 2007.
  14. S. Yuan, A. Liao, and G. Yao, "The matrix nearness problem associated with the quaternion matrix equation A^TXA + B^TXB = C," Journal of Applied Mathematics and Computing, vol. 37, no. 1-2, pp. 133–144, 2011.
  15. Q. W. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity," Linear Algebra and Its Applications, vol. 384, no. 1-3, pp. 43–54, 2004.
  16. A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2 with applications," Computers and Mathematics with Applications, vol. 41, no. 7-8, pp. 929–935, 2001.
  17. J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form A_iXB_i = F_i," Computers and Mathematics with Applications, vol. 59, no. 11, pp. 3500–3507, 2010.
  18. J. von Neumann, Functional Operators, vol. II, Princeton University Press, Princeton, NJ, USA, 1950.
  19. I. Halperin, "The product of projection operators," Acta Scientiarum Mathematicarum, vol. 23, pp. 96–99, 1962.
  20. R. L. Dykstra, "An algorithm for restricted least-squares regression," Journal of the American Statistical Association, vol. 78, pp. 837–842, 1983.
  21. R. Escalante and M. Raydan, "Dykstra's algorithm for a constrained least-squares matrix problem," Numerical Linear Algebra with Applications, vol. 3, no. 6, pp. 459–471, 1996.
  22. H. Dai, Matrix Theory, Science Press, Beijing, China, 2001.
  23. T. Meng, "Experimental design and decision support," in Expert Systems: The Technology of Knowledge Management and Decision Making for the 21st Century, C. Leondes, Ed., vol. 1, Academic Press, New York, NY, USA, 2001.