Abstract

Based on the alternating projection algorithm, which was proposed by von Neumann to treat the problem of finding the projection of a given point onto the intersection of two closed subspaces, we propose a new iterative algorithm to solve the matrix nearness problem associated with the matrix equations $AXB=E$, $CXD=F$, which arises frequently in experimental design. If we choose the initial iterative matrix $X_0=0$, the algorithm yields the least Frobenius norm solution of these matrix equations. Numerical examples show that the new algorithm is feasible and effective.

1. Introduction

Denote by $R^{m\times n}$ the set of $m\times n$ real matrices, and by $A^T$ and $A^+$ the transpose and the Moore-Penrose generalized inverse of a matrix $A$, respectively. For $A, B \in R^{m\times n}$, $\langle A, B\rangle = \operatorname{trace}(B^T A)$ denotes the inner product of the matrices $A$ and $B$. The induced norm is the so-called Frobenius norm, that is, $\|A\| = \langle A, A\rangle^{1/2}$; then $R^{m\times n}$ is a real Hilbert space. Let $M$ be a closed convex subset of a real Hilbert space $H$ and let $u$ be a point in $H$; the point in $M$ nearest to $u$ is called the projection of $u$ onto $M$ and is denoted by $P_M(u)$; that is to say, $P_M(u)$ is the solution of the following minimization problem (see [1, 2]):
$$\min_{x\in M}\|x-u\|, \tag{1}$$
that is,
$$\|P_M(u)-u\| = \min_{x\in M}\|x-u\|. \tag{2}$$
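These definitions are easy to check numerically. The following MATLAB lines (with illustrative $4\times 3$ random matrices) are a minimal sketch verifying that the induced norm coincides with MATLAB's built-in Frobenius norm:

```matlab
% Frobenius inner product <A,B> = trace(B^T A) and induced norm.
A = rand(4,3);  B = rand(4,3);       % illustrative sizes
ip  = trace(B'*A);                   % inner product <A,B>
nrm = sqrt(trace(A'*A));             % ||A|| = <A,A>^(1/2)
disp(abs(nrm - norm(A,'fro')))       % zero up to roundoff
```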

The problem of finding the matrix $\widehat X$ in a constraint matrix set nearest to a given matrix $\overline X$ is called the matrix nearness problem. Because the preliminary estimate $\overline X$ is frequently obtained from experiments, it may not satisfy the given restrictions. Hence it is necessary to find the nearest matrix $\widehat X$ in this constraint set to replace the estimate $\overline X$ [3]. In structural design, finite element model updating, control theory, and so forth, the constraint set is typically the (constrained) solution set or the least squares (constrained) solution set of some matrix equations [4–6]. Thus, the problem mentioned above is also called the matrix nearness problem associated with a matrix equation. Recently, there have been many studies of matrix nearness problems associated with matrix equations; see, for instance, [4, 6–14].

In this paper, we consider the following problem.

Problem 1. Given matrices $A\in R^{p\times n}$, $B\in R^{n\times q}$, $C\in R^{s\times n}$, $D\in R^{n\times t}$, $E\in R^{p\times q}$, $F\in R^{s\times t}$, and $\overline X\in R^{n\times n}$, find $\widehat X\in\Omega$ such that
$$\|\widehat X-\overline X\| = \min_{X\in\Omega}\|X-\overline X\|, \tag{3}$$
where
$$\Omega = \{X\in R^{n\times n}\mid AXB=E,\ CXD=F\}. \tag{4}$$
Obviously, $\Omega$ is the solution set of the matrix equations
$$AXB=E,\qquad CXD=F, \tag{5}$$
and $\widehat X$ is the optimal approximate solution of (5) to the given matrix $\overline X$. In particular, if $\overline X=0$, then the solution $\widehat X$ of Problem 1 is just the least Frobenius norm solution of the matrix equations (5). It is easy to verify that $\Omega$ is a closed convex set, so the solution of Problem 1 is unique whenever $\Omega$ is nonempty.

The matrix equations (5) and the associated matrix nearness problem have been studied extensively for the past 40 years or more. Wang [15] and Navarra et al. [16] gave conditions for the existence of a solution and representations of the general common solution to (5). By the projection theorem and matrix decompositions, Liao et al. [6] gave an analytical expression for the optimal approximate least squares symmetric solution of (5). However, these direct methods may be inefficient for large sparse coefficient matrices because of the limited storage and speed of computers. Therefore, iterative methods for solving the matrix equations (5) have attracted much interest recently. Peng et al. [11] and Chen et al. [7] proposed iterative methods to compute the symmetric solutions and the optimal approximate symmetric solution of (5). An efficient iterative method for solving the matrix nearness Problem 1 associated with the matrix equations (5) was presented in [13]. Ding et al. [17] considered the unique solution of the matrix equations (5) and used a gradient-based iterative algorithm to compute it. The (least squares) solutions and the optimal approximate (least squares) solutions of (5), constrained to be bisymmetric, reflexive, generalized reflexive, or generalized centrosymmetric, were studied in [7–10, 12].

The alternating projection algorithm dates back to von Neumann [18], who treated the problem of finding the projection of a given point onto the intersection of two closed subspaces. Later, Bauschke and Borwein [1] extended the analysis of von Neumann's alternating projection scheme to the case of two closed affine subspaces. There are many variations and extensions of the alternating projection algorithm, which can be used to find the projection of a given point onto the intersection of $k$ ($k\geq 2$) closed subspaces [19] or $k$ ($k\geq 2$) closed convex sets [20, 21]. For a complete discussion of the alternating projection algorithm, see Deutsch [2].

In this paper, we propose a new algorithm to solve Problem 1. We state Problem 1 as the minimization of a convex quadratic function over the intersection of two closed affine subspaces in the vector space $R^{n\times n}$. From this point of view, Problem 1 can be solved by the alternating projection algorithm. If we choose the initial iterative matrix $X_0=0$, the least Frobenius norm solution of the matrix equations $AXB=E$, $CXD=F$ is obtained. Finally, we use numerical examples to show that the new algorithm is faster and has lower computational cost per step than the algorithm proposed by Sheng and Chen [13] for solving Problem 1. In particular, the CPU time and iteration count of our algorithm increase slowly as the dimension of the matrix increases, so our algorithm is suitable for large-scale problems.

2. Alternating Projection Algorithm for Solving Problem 1

In this section, we apply the alternating projection algorithm to solve Problem 1. We begin with three lemmas.

Lemma 1 (see [1, Theorem 4.1]). Let $M_1 = a + \widetilde M_1$ and $M_2 = b + \widetilde M_2$ be closed affine subspaces in a Hilbert space $H$ and let $u$ be a point in $H$. Here, $\widetilde M_1$ and $\widetilde M_2$ are closed subspaces and $a, b\in H$. If $M_1\cap M_2\neq\emptyset$, then the sequences $\{x_k\}$ and $\{y_k\}$ generated by the alternating projection algorithm
$$x_0 = u,\qquad y_{k+1} = P_{M_1}(x_k),\qquad x_{k+1} = P_{M_2}(y_{k+1}),\qquad k=0,1,2,\ldots \tag{6}$$
both converge to the projection of the point $u$ onto the intersection of $M_1$ and $M_2$; that is,
$$x_k\to P_{M_1\cap M_2}(u),\qquad y_k\to P_{M_1\cap M_2}(u),\qquad k\to+\infty. \tag{7}$$

Lemma 2 (see [22, Theorem 9.3.1]). Let $A\in R^{p\times n}$, $B\in R^{n\times q}$, and $E\in R^{p\times q}$ be known matrices. Then the matrix equation $AXB=E$ has a solution if and only if
$$AA^+EB^+B = E, \tag{8}$$
and the representation of the solution is
$$X = A^+EB^+ + Y - A^+AYBB^+, \tag{9}$$
where $Y\in R^{n\times n}$ is arbitrary.
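In MATLAB, condition (8) and the representation (9) can be checked directly with the pseudoinverse pinv. The following minimal sketch assumes $A$, $B$, $E$, and $n$ are already defined and replaces exact equality by a roundoff tolerance:

```matlab
% Solvability test (8) and general solution (9) for A*X*B = E.
Ap = pinv(A);  Bp = pinv(B);
if norm(A*Ap*E*Bp*B - E, 'fro') < 1e-10   % condition (8), up to roundoff
    Y = randn(n);                         % arbitrary Y in (9)
    X = Ap*E*Bp + Y - Ap*A*Y*B*Bp;        % a solution of A*X*B = E
    disp(norm(A*X*B - E, 'fro'))          % residual at roundoff level
end
```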

Lemma 3 (see [22, Theorem 9.3.2]). Given $Z\in R^{n\times n}$, set
$$\Re = \{X\in R^{n\times n}\mid AXB=E,\ A\in R^{p\times n},\ B\in R^{n\times q},\ E\in R^{p\times q}\}; \tag{10}$$
then the solution $\widehat X$ of the following problem
$$\min_{X\in\Re}\|X-Z\| \tag{11}$$
is
$$\widehat X = Z + A^+(E-AZB)B^+; \tag{12}$$
that is,
$$\|\widehat X-Z\| = \min_{X\in\Re}\|X-Z\|. \tag{13}$$
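Formula (12) translates into a one-line MATLAB projector. In the sketch below, proj_AXB is our own helper name, and the constraint set $\Re$ is assumed nonempty:

```matlab
% Projection of Z onto {X : A*X*B = E}, by formula (12) of Lemma 3.
proj_AXB = @(Z, A, B, E) Z + pinv(A)*(E - A*Z*B)*pinv(B);
```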

Now we begin to use the alternating projection algorithm (6) to solve Problem 1. First, we define the two sets
$$\Omega_1 = \{X\in R^{n\times n}\mid AXB=E\},\qquad \Omega_2 = \{X\in R^{n\times n}\mid CXD=F\}. \tag{14}$$
It is easy to see that $\Omega = \Omega_1\cap\Omega_2$, so if the set $\Omega$ is nonempty, then
$$\Omega = \Omega_1\cap\Omega_2\neq\emptyset. \tag{15}$$
By Lemma 2, the sets $\Omega_1$ and $\Omega_2$ can be written equivalently as
$$\Omega_1 = \{X\mid AXB=E\} = \{A^+EB^+ + Y - A^+AYBB^+\mid Y\in R^{n\times n}\} = A^+EB^+ + \{Y - A^+AYBB^+\mid Y\in R^{n\times n}\},$$
$$\Omega_2 = \{X\mid CXD=F\} = \{C^+FD^+ + Y - C^+CYDD^+\mid Y\in R^{n\times n}\} = C^+FD^+ + \{Y - C^+CYDD^+\mid Y\in R^{n\times n}\}. \tag{16}$$
Hence, $\Omega_1$ and $\Omega_2$ are closed affine subspaces.

After defining the sets $\Omega_1$ and $\Omega_2$, Problem 1 can be restated as finding $\widehat X\in\Omega=\Omega_1\cap\Omega_2$ such that
$$\|\widehat X-\overline X\| = \min_{X\in\Omega_1\cap\Omega_2}\|X-\overline X\|. \tag{17}$$
Comparing (17) with (2), it is easy to see that
$$\widehat X = P_{\Omega_1\cap\Omega_2}(\overline X). \tag{18}$$
Therefore, Problem 1 can be converted equivalently into finding the projection $P_{\Omega_1\cap\Omega_2}(\overline X)$ of the given matrix $\overline X$ onto the intersection $\Omega_1\cap\Omega_2$. We now use the alternating projection algorithm (6) to compute the projection $P_{\Omega_1\cap\Omega_2}(\overline X)$ and thereby obtain the solution $\widehat X$ of Problem 1.

By (6) we can see that the key to realizing the alternating projection algorithm is computing the projections $P_{\Omega_1}(Z)$ and $P_{\Omega_2}(Z)$ of a matrix $Z$ onto $\Omega_1$ and $\Omega_2$, respectively. These projections are given explicitly in the following theorems.

Theorem 1. Suppose that the set $\Omega_1$ is nonempty. For a given $n\times n$ matrix $Z$, we have
$$P_{\Omega_1}(Z) = Z + A^+(E-AZB)B^+. \tag{19}$$

Proof. By (1) and (2), the projection $P_{\Omega_1}(Z)$ is the solution of the minimization problem
$$\min_{X\in\Omega_1}\|X-Z\|, \tag{20}$$
and according to Lemma 3 the solution of the minimization problem (20) is $Z + A^+(E-AZB)B^+$. Hence,
$$P_{\Omega_1}(Z) = Z + A^+(E-AZB)B^+. \tag{21}$$

Theorem 2. Suppose that the set $\Omega_2$ is nonempty. For a given $n\times n$ matrix $Z$, we have
$$P_{\Omega_2}(Z) = Z + C^+(F-CZD)D^+. \tag{22}$$

Proof. The proof is similar to that of Theorem 1 and is omitted here.

By the alternating projection algorithm (6) and Theorems 1 and 2, we obtain a new algorithm for solving Problem 1, which can be stated as follows.

Algorithm 1.
(1) Set $\widetilde A = A^+$, $\widetilde B = B^+$, $\widetilde C = C^+$, $\widetilde D = D^+$;
(2) set $X_0 = \overline X$;
(3) for $k = 0, 1, 2, 3, \ldots$
$$Y_{k+1} = P_{\Omega_1}(X_k) = X_k + \widetilde A\,(E - AX_kB)\,\widetilde B,$$
$$X_{k+1} = P_{\Omega_2}(Y_{k+1}) = Y_{k+1} + \widetilde C\,(F - CY_{k+1}D)\,\widetilde D, \tag{23}$$
end.
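A minimal MATLAB realization of Algorithm 1 might look as follows; the stopping rule anticipates the criterion (27) of Section 3, and the names alt_proj, Xbar, maxit, and tol are our own choices:

```matlab
function X = alt_proj(A, B, C, D, E, F, Xbar, maxit, tol)
% Alternating projection (Algorithm 1) for A*X*B = E, C*X*D = F.
Ap = pinv(A);  Bp = pinv(B);  Cp = pinv(C);  Dp = pinv(D);  % step (1)
X = Xbar;                                                   % step (2)
for k = 1:maxit                                             % step (3)
    Y = X + Ap*(E - A*X*B)*Bp;         % Y_{k+1} = P_{Omega_1}(X_k)
    X = Y + Cp*(F - C*Y*D)*Dp;         % X_{k+1} = P_{Omega_2}(Y_{k+1})
    if norm(E - A*X*B,'fro') + norm(F - C*X*D,'fro') <= tol
        break;                         % e.g. tol = 1e-10, as in Section 3
    end
end
end
```

By Theorem 4 below, calling this sketch with Xbar = zeros(n) returns an approximation to the least Frobenius norm solution of (5).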
By Lemma 1, (15), and (16), we obtain the convergence theorem for Algorithm 1.

Theorem 3. If the set $\Omega$ is nonempty, then the matrix sequences $\{X_k\}$ and $\{Y_k\}$ generated by Algorithm 1 both converge to the projection $P_{\Omega_1\cap\Omega_2}(\overline X)$ of $\overline X$ onto the intersection of $\Omega_1$ and $\Omega_2$; that is,
$$X_k\to P_{\Omega_1\cap\Omega_2}(\overline X),\qquad Y_k\to P_{\Omega_1\cap\Omega_2}(\overline X),\qquad k\to+\infty. \tag{24}$$

Proof. If the set $\Omega$ is nonempty, then by (15) we have
$$\Omega_1\cap\Omega_2\neq\emptyset. \tag{25}$$
Noting (16), the sets $\Omega_1$ and $\Omega_2$ are closed affine subspaces in the Hilbert space $R^{n\times n}$. Hence, by Lemma 1, the matrix sequences $\{X_k\}$ and $\{Y_k\}$ generated by Algorithm 1 both converge to the projection $P_{\Omega_1\cap\Omega_2}(\overline X)$ of $\overline X$ onto the intersection of $\Omega_1$ and $\Omega_2$; that is,
$$X_k\to P_{\Omega_1\cap\Omega_2}(\overline X),\qquad Y_k\to P_{\Omega_1\cap\Omega_2}(\overline X),\qquad k\to+\infty. \tag{26}$$

Combining Theorem 3 and the equalities (18) and (17), we have the following.

Theorem 4. If the set $\Omega$ is nonempty, then the matrix sequences $\{X_k\}$ and $\{Y_k\}$ generated by Algorithm 1 both converge to the unique solution of Problem 1. Moreover, if the initial matrix $X_0 = \overline X = 0$, then the sequences $\{X_k\}$ and $\{Y_k\}$ both converge to the least Frobenius norm solution of the matrix equations $AXB=E$, $CXD=F$.

3. Numerical Experiments

In this section, we give some numerical examples to illustrate that the new algorithm is feasible and effective for solving Problem 1. All programs are written in MATLAB 7.8. We denote
$$\mathrm{Error} = \|E-AXB\| + \|F-CXD\| \tag{27}$$
and use the practical stopping criterion $\mathrm{Error}\leq 1.0\times 10^{-10}$.

Example 1. Consider Problem 1 with the given matrices $A$, $B$, $E$, $C$, $D$, and $F$ of (28). Here we use ones($n$) and zeros($n$) to stand for the $n\times n$ matrix of ones and the $n\times n$ zero matrix, respectively. It is easy to verify that $X = \mathrm{ones}(5)$ is a solution of the matrix equations (5); that is to say, the set $\Omega$ is nonempty. Therefore we can use Algorithm 1 to solve Problem 1.

Let $X_0 = \overline X = \mathrm{zeros}(5)$. After 5 iterations of Algorithm 1, we get the optimal approximate solution
$$\widehat X\approx X_5 = \begin{pmatrix} -0.6817 & 0.7813 & -0.5503 & 0.9637 & 0.7808\\ 0.0152 & 0.8185 & -0.2866 & 0.9699 & 0.8181\\ 0.0113 & 0.8178 & -0.2917 & 0.9698 & 0.8173\\ 0.6091 & 0.9280 & 0.4893 & 0.9980 & 0.9278\\ 0.9497 & 0.9907 & 0.9342 & 0.9985 & 0.9907 \end{pmatrix}, \tag{29}$$
which is also the least Frobenius norm solution of the matrix equations (5), and its residual error is
$$\mathrm{Error}\approx\|E-AX_5B\| + \|F-CX_5D\| = 2.78\times 10^{-11}. \tag{30}$$
By concrete computation, the distance from $\overline X$ to the solution set $\Omega$ is
$$\min_{X\in\Omega}\|X-\overline X\| = \|\widehat X-\overline X\|\approx\|X_5-\overline X\| = 3.9057. \tag{31}$$

Let $X_0 = \overline X = \mathrm{ones}(5)$. After 6 iterations of Algorithm 1, we get the optimal approximate solution
$$\widehat X\approx X_6 = \begin{pmatrix} 5.7467 & 1.8747 & 7.2013 & 1.1452 & 1.8770\\ 4.9392 & 1.7259 & 6.1463 & 1.1205 & 1.7278\\ 4.9548 & 1.7288 & 6.1668 & 1.1209 & 1.7307\\ 2.5636 & 1.2881 & 3.0427 & 1.478 & 1.2889\\ 1.2013 & 1.0371 & 1.2630 & 1.0062 & 1.372 \end{pmatrix}, \tag{32}$$
and its residual error is
$$\mathrm{Error}\approx\|E-AX_6B\| + \|F-CX_6D\| = 3.89\times 10^{-11}. \tag{33}$$
By concrete computation, the distance from $\overline X$ to the solution set $\Omega$ is
$$\min_{X\in\Omega}\|X-\overline X\| = \|\widehat X-\overline X\|\approx\|X_6-\overline X\| = 16.0902. \tag{34}$$

Example 1 shows that Algorithm 1 is feasible for solving Problem 1.

Example 2. Consider Problem 1 with
$$A = \mathrm{rand}(100,n),\quad B = \mathrm{rand}(n,150),\quad E = A*\mathrm{ones}(n)*B,$$
$$C = \mathrm{rand}(70,n),\quad D = \mathrm{rand}(n,120),\quad F = C*\mathrm{ones}(n)*D, \tag{35}$$
where $\mathrm{rand}(s,t)$ stands for an $s\times t$ random matrix. Let the given matrix $\overline X = \mathrm{zeros}(n)$. It is easy to verify that $X = \mathrm{ones}(n)$ is a solution of the matrix equations (5); that is, the set $\Omega$ is nonempty. Therefore, we can use Algorithm 1 and the following algorithm proposed by Sheng and Chen [13] to solve Problem 1.

Algorithm 2.
(1) Input $A$, $B$, $C$, $D$, $E$, $F$, and $X_0$;
(2) calculate
$$R_0 = E - AX_0B,\quad r_0 = F - CX_0D,\quad P_0 = A^TR_0B^T,\quad Q_0 = C^Tr_0D^T; \tag{36}$$
(3) if $R_k = 0$ and $r_k = 0$, then stop; else, $k := k+1$;
(4) calculate
$$X_k = X_{k-1} + \frac{\|R_{k-1}\|^2 + \|r_{k-1}\|^2}{\|P_{k-1}+Q_{k-1}\|^2}\,(P_{k-1}+Q_{k-1}),\qquad R_k = E - AX_kB,\qquad r_k = F - CX_kD,$$
$$P_k = A^TR_kB^T + \frac{\|R_k\|^2 + \|r_k\|^2}{\|R_{k-1}\|^2 + \|r_{k-1}\|^2}\,P_{k-1},\qquad Q_k = C^Tr_kD^T + \frac{\|R_k\|^2 + \|r_k\|^2}{\|R_{k-1}\|^2 + \|r_{k-1}\|^2}\,Q_{k-1}; \tag{37}$$
(5) go to step (3).
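For the comparison we also coded Algorithm 2. A minimal MATLAB sketch, with our own loop bound maxit and tolerance tol replacing the exact zero-residual test of step (3), is as follows:

```matlab
function X = sheng_chen(A, B, C, D, E, F, X0, maxit, tol)
% Algorithm 2 (Sheng and Chen [13]), a minimal sketch.
X = X0;
R = E - A*X*B;   r = F - C*X*D;                 % residuals, as in (36)
P = A'*R*B';     Q = C'*r*D';                   % directions, as in (36)
rho = norm(R,'fro')^2 + norm(r,'fro')^2;
for k = 1:maxit
    if sqrt(rho) <= tol, break; end             % step (3), up to roundoff
    X = X + rho/norm(P + Q,'fro')^2 * (P + Q);  % update X_k in (37)
    R = E - A*X*B;   r = F - C*X*D;
    rho_new = norm(R,'fro')^2 + norm(r,'fro')^2;
    P = A'*R*B' + (rho_new/rho)*P;              % update P_k in (37)
    Q = C'*r*D' + (rho_new/rho)*Q;              % update Q_k in (37)
    rho = rho_new;
end
end
```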

It is easy to see that Algorithm 1 has lower computational cost per step than Algorithm 2. Experiments show that both Algorithm 1 and Algorithm 2 are feasible for solving Problem 1. We list the iteration steps (denoted by IT), CPU time (denoted by CPU), residual error (denoted by ERR), and the distance $\|X_{\mathrm{IT}}-\overline X\|$ (denoted by DIS) in Table 1.

From Table 1, we can see that Algorithm 1 outperforms Algorithm 2 in both iteration count and CPU time; therefore our algorithm is faster than the algorithm proposed by Sheng and Chen [13]. In particular, the CPU time and iteration count of our algorithm increase slowly as the dimension $n$ increases, so our algorithm is suitable for large-scale problems.

4. Conclusion

The alternating projection algorithm dates back to von Neumann [18], who treated the problem of finding the projection of a given point onto the intersection of two closed subspaces. In this paper, we first apply the alternating projection algorithm to solve Problem 1, which occurs frequently in experimental design [23]. If we choose the initial matrix $X_0=0$, the least Frobenius norm solution of the matrix equations $AXB=E$, $CXD=F$ can be obtained. Numerical examples show that the new algorithm is faster and has lower computational cost per step than the algorithm proposed by Sheng and Chen [13] for solving Problem 1. In particular, the CPU time and iteration count of the new algorithm increase slowly as the matrix dimension increases, so the alternating projection algorithm is suitable for large-scale matrix nearness problems.

Acknowledgments

The work was supported in part by National Science Foundation of China (11101100; 10861005) and Natural Science Foundation of Guangxi Province (0991238; 2011GXNSFA018138).