Abstract

The least-squares solutions of two matrix equations for symmetric arrowhead matrices are discussed. By using the Kronecker product and the stretching function of matrices, explicit representations of the general solutions are given. It is also shown that the best approximation solution is unique, and an explicit expression for it is derived.

1. Introduction

An $n \times n$ matrix is called an arrowhead matrix if it has the following form: If , then is said to be a symmetric arrowhead matrix. We denote the set of all real-valued symmetric arrowhead matrices by . Such matrices arise in the description of radiationless transitions in isolated molecules [1], of oscillators vibrationally coupled with a Fermi liquid [2], and in quantum optics [3], among other applications. Numerically efficient algorithms for computing eigenvalues and eigenvectors of arrowhead matrices were discussed in [4–8]. The inverse problem of constructing a symmetric arrowhead matrix from spectral data has been investigated by Xu [9], Peng et al. [10], and Borges et al. [11]. In this paper, we further consider the least-squares solutions of matrix equations for symmetric arrowhead matrices and the associated approximation problems, which can be described as follows.
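For concreteness, one common convention in the literature places the "arrow" along the first row, first column, and main diagonal; under this convention (an assumption about the orientation used here), a symmetric arrowhead matrix of order $n$ has the form

$$
A = \begin{pmatrix} \alpha & b^{T} \\ b & D \end{pmatrix},
\qquad b = (b_{1}, \dots, b_{n-1})^{T} \in \mathbb{R}^{n-1},
\qquad D = \operatorname{diag}(d_{1}, \dots, d_{n-1}),
$$

so that $A$ is determined by $2n-1$ real parameters.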

Problem 1. Given , and , find nontrivial real-valued symmetric arrowhead matrices and such that

Problem 2. Given real-valued symmetric arrowhead matrices , find such that (3) holds, where is the solution set of Problem 1.

Problem 3. Given , and , find a nontrivial real-valued symmetric arrowhead matrix such that

Problem 4. Given a real-valued symmetric arrowhead matrix , find such that , where is the solution set of Problem 3.

Recently, Li et al. [12] considered the least-squares solutions of the matrix equation for symmetric arrowhead matrices. By using Moore-Penrose inverses and the Kronecker product, they obtained the minimum-norm least-squares solution of that equation. However, the method used in [12] involves complicated computations with Moore-Penrose generalized inverses of partitioned matrices, and the expression obtained for the minimum-norm least-squares solution is not explicit. Compared with the approach proposed in [12], the method in this paper is more concise and easier to apply.

The paper is organized as follows. In Section 2, using the Kronecker product and the stretching function of matrices, we give an explicit representation of the solution set of Problem 1; furthermore, we show that Problem 2 has a unique solution and present an expression for it. In Section 3, we provide an explicit representation of the solution set of Problem 3 and present an expression for the unique solution of Problem 4. In Section 4, a numerical algorithm for computing the optimal approximation solution of Problem 2 in the Frobenius norm is described, and a numerical example is provided. Some concluding remarks are given in Section 5.

Throughout this paper, we denote the space of real $m \times n$ matrices by $\mathbb{R}^{m \times n}$, and the transpose and the Moore-Penrose generalized inverse of a real matrix $A$ by $A^{T}$ and $A^{\dagger}$, respectively. $I_{n}$ represents the identity matrix of order $n$. For $A, B \in \mathbb{R}^{m \times n}$, an inner product in $\mathbb{R}^{m \times n}$ is defined by $\langle A, B \rangle = \operatorname{trace}(B^{T}A)$; then $\mathbb{R}^{m \times n}$ is a Hilbert space. The matrix norm induced by this inner product is the Frobenius norm. Given two matrices $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ and $B$, the Kronecker product of $A$ and $B$ is defined by $A \otimes B = (a_{ij}B)$. Also, for an $m \times n$ matrix $A = (a_{1}, a_{2}, \dots, a_{n})$, where $a_{j}$ is the $j$th column vector of $A$, the stretching function is defined by $\operatorname{vec}(A) = (a_{1}^{T}, a_{2}^{T}, \dots, a_{n}^{T})^{T}$.
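As a quick numerical illustration of these two operations (a minimal sketch in NumPy; the column-major ordering of the stretching function matches the column-stacking definition above):

```python
import numpy as np

def vec(M):
    """Stretching function: stack the columns of M into a single vector."""
    return M.flatten(order="F")  # column-major (Fortran) order

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)

print(np.kron(A, B))  # Kronecker product: each entry a_ij is replaced by a_ij * B
print(vec(A))         # [1. 3. 2. 4.]
```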

2. The Solutions of Problems 1 and 2

To begin with, we introduce two lemmas.

Lemma 5 (see [13]). If , then the general solution of can be expressed as , where is an arbitrary vector.
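Lemma 5 appears to be the standard Moore-Penrose characterization (an assumption based on how the lemma is applied below): if $AA^{\dagger}b = b$, that is, if the system $Ax = b$ is consistent, then its general solution is

$$
x = A^{\dagger}b + \left(I - A^{\dagger}A\right)y,
$$

where $y$ is an arbitrary vector of appropriate dimension; the term $(I - A^{\dagger}A)y$ ranges over the null space of $A$, and the same formula parameterizes all least-squares solutions of $\min_{x}\|Ax - b\|_{2}$.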

Lemma 6 (see [14]). Let . Then
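Lemma 6 is used below to convert matrix equations into ordinary linear systems; assuming it is the familiar identity $\operatorname{vec}(AXB) = (B^{T} \otimes A)\operatorname{vec}(X)$, a minimal numerical check reads:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.flatten(order="F")

# vec(A X B) == (B^T kron A) vec(X)
print(np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X)))  # True
```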

Let and . It is easily seen that and . Define the matrices in (7) and (8), where $e_{i}$ is the $i$th column vector of the identity matrix $I_{n}$. It is easy to verify that and form orthonormal bases of the subspaces and , respectively; that is, (9) holds. Now, if and , then and can be expressed as in (10), where the real numbers , and , are yet to be determined.
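A short sketch constructing such an orthonormal basis (assuming the first-row/first-column arrow convention from Section 1; the factor $1/\sqrt{2}$ makes each off-diagonal basis matrix a unit vector in the Frobenius inner product):

```python
import numpy as np

def arrowhead_basis(n):
    """Orthonormal basis (Frobenius inner product) of the symmetric
    arrowhead matrices of order n, assuming the arrow sits in the
    first row/column and on the diagonal: 2n - 1 basis matrices."""
    basis = []
    for i in range(n):                      # diagonal part: e_i e_i^T
        E = np.zeros((n, n))
        E[i, i] = 1.0
        basis.append(E)
    for j in range(1, n):                   # arrow part: (e_1 e_j^T + e_j e_1^T) / sqrt(2)
        E = np.zeros((n, n))
        E[0, j] = E[j, 0] = 1.0 / np.sqrt(2.0)
        basis.append(E)
    return basis

# Sanity check: the basis matrices are pairwise Frobenius-orthonormal.
B = arrowhead_basis(4)
G = np.array([[np.sum(P * Q) for Q in B] for P in B])
print(np.allclose(G, np.eye(len(B))))  # True
```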

It follows from (10) that the relation of (2) can be equivalently written as (11). When setting , by Lemma 6 we see that the relation of (11) is equivalent to . We note that (17) holds, where . It then follows from Lemma 5 and (17) that the minimum is attained if and only if (18) and (19) hold, where , and are arbitrary vectors.

Substituting (19) into (18), we obtain (20), where .
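Since Problem 1 involves two unknown matrices, the vectorized system stacks the coefficient columns of both unknowns side by side. A sketch of this joint reduction under the illustrative model $\min \|AXB + CYD - E\|_{F}$ (the actual equation of Problem 1 is not reproduced above, so this model, and the helper name, are assumptions; `basisX` and `basisY` can be built with `arrowhead_basis` from the previous sketch):

```python
import numpy as np

def two_unknown_lstsq(A, B, C, D, E, basisX, basisY):
    """Joint least-squares solution for two structured unknowns X, Y
    in the illustrative model A X B + C Y D ~ E. Columns of M hold
    vec(A P B) for the X-basis followed by vec(C Q D) for the Y-basis."""
    vec = lambda M: M.flatten(order="F")
    M = np.column_stack([vec(A @ P @ B) for P in basisX]
                        + [vec(C @ Q @ D) for Q in basisY])
    z, *_ = np.linalg.lstsq(M, vec(E), rcond=None)
    x, y = z[:len(basisX)], z[len(basisX):]
    X = sum(xk * P for xk, P in zip(x, basisX))
    Y = sum(yk * Q for yk, Q in zip(y, basisY))
    return X, Y
```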

Summarizing the above discussion, we have proved the following result.

Theorem 7. Suppose that , and . Let be given as in (7), (8), (13), (14), and (15), respectively. Write , and . Then the solution set of Problem 1 can be expressed as (21), where are given by (20) and (19), respectively, with being arbitrary vectors.

From (17), we can easily obtain the following corollary.

Corollary 8. Under the same assumptions as in Theorem 7, the matrix equation (24) has a solution if and only if . In this case, the solution set of (24) is given by (21).

It follows from Theorem 7 that the solution set is always nonempty, and it is easy to verify that is a closed convex subset of . From the best approximation theorem [15], we know that there exists a unique solution in such that (3) holds.
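For reference, the best approximation theorem invoked here states that every nonempty closed convex subset $\mathcal{S}$ of a real Hilbert space $H$ contains a unique nearest point to any given element:

$$
\forall\, \tilde{X} \in H \ \ \exists!\, \hat{X} \in \mathcal{S}: \qquad
\| \tilde{X} - \hat{X} \| = \min_{X \in \mathcal{S}} \| \tilde{X} - X \|.
$$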

We now focus our attention on seeking the unique solution in . For the real-valued symmetric arrowhead matrices and , it is easily seen that they can be expressed as linear combinations of the orthonormal bases and ; that is, (26) holds, where the real numbers , and , are uniquely determined by the elements of and . Let be defined as in (27). Then, for any pair of matrices in (21), by the relations of (9) and (26), we see that . Substituting (19) and (20) into the function of , we have . Therefore, . Clearly, the minimum is attained if and only if , which yields (32). Upon substituting (32) into (19) and (20), we obtain (33) and (34).
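In coefficient space, this paragraph computes the projection of the target onto the affine set of least-squares solutions delivered by Lemma 5. A generic sketch of that projection step (the parameterization $x_{p} + (I - W^{\dagger}W)z$ and the function name are illustrative assumptions, not the paper's exact formulas):

```python
import numpy as np

def project_onto_solution_set(W, c, x_target):
    """Among all least-squares solutions x = W^+ c + (I - W^+ W) z of
    W x ~ c, return the one closest to x_target. Since P = I - W^+ W
    is an orthogonal projector, the minimizer is x_p + P (x_target - x_p)."""
    Wp = np.linalg.pinv(W)
    x_p = Wp @ c
    P = np.eye(W.shape[1]) - Wp @ W   # orthogonal projector onto null(W)
    return x_p + P @ (x_target - x_p)
```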

By now, we have proved the following result.

Theorem 9. Let the real-valued symmetric arrowhead matrices and be given. Then Problem 2 has a unique solution, which can be expressed as (35), where are given by (33) and (34), respectively.

3. The Solutions of Problems 3 and 4

It follows from (10) that the minimization problem of (4) can be equivalently written as (36). Using Lemma 6, we see that the relation of (36) is equivalent to , where . It follows from Lemma 5 that the general solution with respect to can be expressed as (38), where and is an arbitrary vector.
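A minimal sketch of this reduction for a single unknown (for illustration we take the equation $AXB \approx C$ treated in [12]; whichever equation Problem 3 actually involves, the recipe is the same: expand $X$ in the orthonormal basis, vectorize, and solve an ordinary least-squares problem):

```python
import numpy as np

def arrowhead_lstsq(A, B, C, basis):
    """Least-squares symmetric arrowhead solution of A X B ~ C.
    `basis` is a Frobenius-orthonormal basis of the arrowhead subspace
    (e.g. arrowhead_basis above); the specific equation A X B = C is
    an assumption chosen for illustration."""
    vec = lambda M: M.flatten(order="F")
    # Column k of M is vec(A P_k B), so ||A X B - C||_F = ||M x - vec(C)||_2.
    M = np.column_stack([vec(A @ P @ B) for P in basis])
    x, *_ = np.linalg.lstsq(M, vec(C), rcond=None)
    return sum(xk * P for xk, P in zip(x, basis))
```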

To summarize, we have obtained the following result.

Theorem 10. Suppose that , and . Let be given as in (7), (13), and (15), respectively. Write , and . Then the solution set of Problem 3 can be expressed as (39), where and are given by (22) and (38), respectively, with being arbitrary vectors.

Similarly, for the real-valued symmetric arrowhead matrix , it is easily seen that it can be expressed as a linear combination of the orthonormal basis ; that is, , where , are uniquely determined by the elements of . Then, for any matrix in (39), by the relation of (9), we have , where .

In order to solve Problem 4, we need the following lemma [16].

Lemma 11. Suppose that , and , where and . Then if and only if ; in this case, .

It follows from Lemma 11 and that if and only if ; that is, (43) holds. Substituting (43) into (38), we obtain (44).

By now, we have proved the following result.

Theorem 12. Let the real-valued symmetric arrowhead matrix be given. Then Problem 4 has a unique solution, which can be expressed as , where , and is given by (44).

4. A Numerical Example

Based on Theorems 7 and 9, we can state the following algorithm.

Algorithm 13 (an algorithm for computing the optimal approximation solution of Problem 2). Consider the following.
(1) Input .
(2) Form the orthonormal bases and by (7) and (8), respectively.
(3) Compute according to (13), (14), and (15), respectively.
(4) Compute , , , .
(5) Form the vectors , by (26) and (27).
(6) Compute by (22) and (23), respectively.
(7) Compute by (33) and (34), respectively.
(8) Compute the unique optimal approximation solution of Problem 2 by (35).
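An end-to-end coefficient-space sketch of how these steps fit together, reusing `arrowhead_basis` and `project_onto_solution_set` from the sketches above (the single-equation model $AXB \approx C$ and the random data are illustrative assumptions, not the paper's exact formulas):

```python
import numpy as np

# Illustrative data of order n = 4, plus a target arrowhead matrix X_tilde.
rng = np.random.default_rng(1)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
basis = arrowhead_basis(n)
vec = lambda M: M.flatten(order="F")

# Coefficient matrix of the vectorized least-squares problem.
M = np.column_stack([vec(A @ P @ B) for P in basis])

# Target arrowhead matrix and its coefficients in the orthonormal basis.
X_tilde = sum(rng.standard_normal() * P for P in basis)
t = np.array([np.sum(X_tilde * P) for P in basis])

# Nearest least-squares solution to X_tilde, mapped back to a matrix.
x_hat = project_onto_solution_set(M, vec(C), t)
X_hat = sum(xk * P for xk, P in zip(x_hat, basis))
print(np.round(X_hat, 3))
```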

Example 14. Given . According to Algorithm 13, we compute .

5. Concluding Remarks

Symmetric arrowhead matrices arise in many important practical applications. In this paper, the least-squares solutions of two matrix equations for symmetric arrowhead matrices are obtained by using the Kronecker product and the stretching function of matrices, and explicit representations of the general solutions are given. The best approximation solutions to the given matrices are also derived. A simple recipe for constructing the optimal approximation solution of Problem 2 is described, which can serve as the basis for numerical computation. The approach is demonstrated by a numerical example, which produces reasonable results.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author would like to express his heartfelt thanks to the anonymous reviewers for their valuable comments and suggestions which helped to improve the presentation of this paper.