Research Article | Open Access
Least-Squares Solutions of the Matrix Equations and for Symmetric Arrowhead Matrices and Associated Approximation Problems
The least-squares solutions of the matrix equations and for symmetric arrowhead matrices are discussed. By using the Kronecker product and stretching function of matrices, the explicit representations of the general solution are given. Also, it is shown that the best approximation solution is unique and an explicit expression of the solution is derived.
A square matrix is called an arrowhead matrix if it has the following form: If, in addition, it is symmetric, then it is said to be a symmetric arrowhead matrix. We denote the set of all real-valued symmetric arrowhead matrices by . Such matrices arise in the description of radiationless transitions in isolated molecules, of oscillators vibrationally coupled with a Fermi liquid, and in quantum optics, among other applications. Numerically efficient algorithms for computing eigenvalues and eigenvectors of arrowhead matrices were discussed in [4–8]. The inverse problem of constructing a symmetric arrowhead matrix from spectral data has been investigated by Xu, Peng et al., and Borges et al. In this paper, we further consider the least-squares solutions of the matrix equations for symmetric arrowhead matrices and the associated approximation problems, which can be described as follows.
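The defining structure is simple to realize numerically. The sketch below (Python/NumPy; the names `d`, `z`, `alpha` are illustrative, since the paper's notation is not preserved in this extraction, and the arrow is placed in the last row and column by convention here) builds a symmetric arrowhead matrix and checks its structure:

```python
import numpy as np

def symmetric_arrowhead(d, z, alpha):
    """Build the (n+1) x (n+1) symmetric arrowhead matrix
        [ diag(d)   z   ]
        [   z^T   alpha ]
    with diagonal part d, arrow shaft z, and arrow tip alpha."""
    d = np.asarray(d, dtype=float)
    z = np.asarray(z, dtype=float)
    n = d.size
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = np.diag(d)   # diagonal block
    A[:n, n] = z             # last column
    A[n, :n] = z             # last row (same entries -> symmetric)
    A[n, n] = alpha
    return A

A = symmetric_arrowhead([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], 7.0)
assert np.allclose(A, A.T)                               # symmetric
assert np.allclose(A[:3, :3], np.diag(np.diag(A)[:3]))   # leading block diagonal
```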
Problem 1. Given , and , find nontrivial real-valued symmetric arrowhead matrices and such that
Problem 2. Given real-valued symmetric arrowhead matrices , find such that where is the solution set of Problem 1.
Problem 3. Given , and , find a nontrivial real-valued symmetric arrowhead matrix such that
Problem 4. Given a real-valued symmetric arrowhead matrix , find such that where is the solution set of Problem 3.
Recently, Li et al. considered the least-squares solutions of the matrix equation for symmetric arrowhead matrices. By using Moore-Penrose inverses and the Kronecker product, the minimum-norm least-squares solution to the matrix equation for symmetric arrowhead matrices was provided. However, the method used there involves complicated computations of Moore-Penrose generalized inverses of partitioned matrices, and the expression of the minimum-norm least-squares solution is not explicit. Compared with that approach, the method proposed in this paper is more concise and easier to apply.
The paper is organized as follows. In Section 2, using the Kronecker product and stretching function of matrices, we give an explicit representation of the solution set of Problem 1. Furthermore, we show that there exists a unique solution in Problem 2 and present the expression of the unique solution of Problem 2. In Section 3, we provide an explicit representation of the solution set of Problem 3 and present the expression of the unique solution of Problem 4. In Section 4, a numerical algorithm to acquire the optimal approximation solution for Problem 2 under the Frobenius norm sense is described and a numerical example is provided. Some concluding remarks are given in Section 5.
Throughout this paper, we denote the real matrix space by and the transpose and the Moore-Penrose generalized inverse of a real matrix by and , respectively. represents the identity matrix of size . For , an inner product in is defined by ; then is a Hilbert space. The matrix norm induced by the inner product is the Frobenius norm. Given two matrices and , the Kronecker product of and is defined by . Also, for an matrix , where , is the th column vector of , the stretching function is defined by .
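The two tools just introduced, the Kronecker product and the stretching function, can be checked numerically. The sketch below (Python/NumPy; column-major reshaping plays the role of the stretching function, often written vec) verifies the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X), which underlies the linearization used throughout the paper, together with the vec form of the Frobenius inner product:

```python
import numpy as np

def vec(M):
    # Stretching function: stack the columns of M into one long vector.
    # NumPy reshapes row-major by default, so request Fortran (column) order.
    return M.reshape(-1, order="F")

A = np.arange(6.0).reshape(2, 3)
X = np.arange(12.0).reshape(3, 4)
B = np.arange(20.0).reshape(4, 5)

# Standard identity: vec(A X B) = (B^T kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)

# Frobenius inner product <A1, A2> = tr(A1^T A2) = vec(A1)^T vec(A2).
A2 = np.ones((2, 3))
assert np.isclose(np.trace(A.T @ A2), vec(A) @ vec(A2))
```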
To begin with, we introduce two lemmas.
Lemma 5 (see ). If , then the general solution of can be expressed as , where is an arbitrary vector.
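The statement of Lemma 5 is elided in this extraction; in its standard form, the general least-squares solution of Ax = b is x = A⁺b + (I − A⁺A)y with y an arbitrary vector, and the lemma is presumably of this type. A quick numerical check that every member of this family attains the same minimal residual:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]   # make A rank-deficient, so solutions are non-unique
b = rng.standard_normal(6)

Ap = np.linalg.pinv(A)        # Moore-Penrose generalized inverse
x0 = Ap @ b                   # minimum-norm least-squares solution
P = np.eye(4) - Ap @ A        # orthogonal projector onto the null space of A

# Every x = x0 + P y attains the same (minimal) residual norm,
# because A P = A - A Ap A = 0.
y = rng.standard_normal(4)
x = x0 + P @ y
assert np.isclose(np.linalg.norm(A @ x - b), np.linalg.norm(A @ x0 - b))
```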
Lemma 6 (see ). Let . Then
Let and . It is easily seen that and . Define where is the th column vector of the identity matrix . It is easy to verify that and form orthonormal bases of the subspaces and , respectively. That is, Now, if and , then and can be expressed as where the real numbers , and , are yet to be determined.
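The construction of orthonormal bases from columns of the identity can be illustrated as follows. The paper's exact basis matrices are not recoverable from this extraction; the sketch uses one natural orthonormal basis (under the Frobenius inner product) of the symmetric arrowhead matrices, with the arrow in the last row and column:

```python
import numpy as np

def arrowhead_basis(n):
    """One orthonormal Frobenius basis of the n x n real symmetric
    arrowhead matrices (arrow in the last row/column): the n diagonal
    units e_i e_i^T, plus the normalized shaft pairs
    (e_i e_n^T + e_n e_i^T) / sqrt(2)."""
    E = np.eye(n)
    basis = [np.outer(E[:, i], E[:, i]) for i in range(n)]
    s = 1.0 / np.sqrt(2.0)
    basis += [s * (np.outer(E[:, i], E[:, n - 1]) +
                   np.outer(E[:, n - 1], E[:, i])) for i in range(n - 1)]
    return basis

B = arrowhead_basis(4)
assert len(B) == 2 * 4 - 1    # dimension of the subspace is 2n - 1
# Orthonormality: the Gram matrix of Frobenius inner products is the identity.
G = np.array([[np.trace(Ki.T @ Kj) for Kj in B] for Ki in B])
assert np.allclose(G, np.eye(len(B)))
```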
It follows from (10) that the relation of (2) can be equivalently written as Setting and applying Lemma 6, we see that the relation of (11) is equivalent to We note that where . It follows from Lemma 5 and (17) that if and only if where , and are arbitrary vectors.
Summarizing the above discussion, we have proved the following result.
Theorem 7. Suppose that , and . Let be given as in (7), (8), (13), (14), and (15), respectively. Write , and . Then the solution set of Problem 1 can be expressed as where are, respectively, given by (20) and (19) with being arbitrary vectors.
From (17), we can easily obtain the following corollary.
It follows from Theorem 7 that the solution set is always nonempty. It is easy to verify that is a closed convex subset of . From the best approximation theorem, we know that there exists a unique solution in such that (3) holds.
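The mechanism behind this uniqueness claim can be illustrated on a concrete affine solution set: the nearest point to a given target is its orthogonal projection onto the set, and no other member is closer. A sketch under illustrative data (Python/NumPy; the solution set of a consistent linear system stands in for the set above):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
b = A @ rng.standard_normal(5)               # consistent right-hand side
Ap = np.linalg.pinv(A)
x0 = Ap @ b                                  # one particular solution
P = np.eye(5) - Ap @ A                       # projector onto null(A)
# Solution set: { x0 + P y : y arbitrary }, a closed convex (affine) set.

t = rng.standard_normal(5)                   # the given target point
x_star = x0 + P @ (t - x0)                   # orthogonal projection onto the set

# Optimality: x_star is no farther from t than any other member of the set.
for _ in range(100):
    y = rng.standard_normal(5)
    other = x0 + P @ y
    assert np.linalg.norm(x_star - t) <= np.linalg.norm(other - t) + 1e-12
```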
We now focus our attention on seeking the unique solution in . For the real-valued symmetric arrowhead matrices and , it is easily seen that can be expressed as the linear combinations of the orthonormal bases and ; that is, where , and , are uniquely determined by the elements of and . Let Then, for any pair of matrices in (21), by the relations of (9) and (26), we see that Substituting (19) and (20) into the function of , we have Therefore, Clearly, if and only if which yields Upon substituting (32) into (19) and (20), we obtain
We have thus proved the following result.
Theorem 9. Let the real-valued symmetric arrowhead matrices and be given. Then Problem 2 has a unique solution and the unique solution of Problem 2 can be expressed as where are given by (33) and (34), respectively.
It follows from (10) that the minimization problem of (4) can be equivalently written as Using Lemma 6, we see that the relation of (36) is equivalent to where . It follows from Lemma 5 that the general solution of with respect to can be expressed as where and is an arbitrary vector.
To summarize, we have obtained the following result.
Theorem 10. Suppose that , and . Let be given as in (7), (13), and (15), respectively. Write , and . Then the solution set of Problem 3 can be expressed as where and are given by (22) and (38) with being arbitrary vectors.
Similarly, for the real-valued symmetric arrowhead matrix , it is easily seen that can be expressed as the linear combination of the orthonormal basis ; that is, , where , are uniquely determined by the elements of . Then, for any matrix in (39), by the relation of (9), we have where .
Lemma 11. Suppose that , and where and . Then if and only if , in which case, .
We have thus proved the following result.
4. A Numerical Example
Based on Theorems 7 and 9, we can state the following algorithm.
Algorithm 13 (an algorithm for solving the optimal approximation solution of Problem 2).
(1) Input .
(2) Form the orthonormal bases and by (7) and (8), respectively.
(3) Compute according to (13), (14), and (15), respectively.
(4) Compute , , , .
(5) Form the vectors , by (26) and (27).
(6) Compute by (22) and (23), respectively.
(7) Compute by (33) and (34), respectively.
(8) Compute the unique optimal approximation solution of (2) by (35).
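Since the displayed equations are not preserved in this extraction, the following sketch illustrates the overall vec/Kronecker strategy that Algorithm 13 relies on, applied to an assumed model problem min ‖AX − B‖_F over symmetric arrowhead X. The equation form, the basis convention, and the function names are assumptions for illustration, not the paper's exact setting:

```python
import numpy as np

def vec(M):
    # Stretching function: stack columns (Fortran order).
    return M.reshape(-1, order="F")

def arrowhead_basis(n):
    # Orthonormal Frobenius basis of symmetric arrowhead matrices
    # (arrow in the last row/column; convention assumed).
    E, s = np.eye(n), 1.0 / np.sqrt(2.0)
    basis = [np.outer(E[:, i], E[:, i]) for i in range(n)]
    basis += [s * (np.outer(E[:, i], E[:, n - 1]) +
                   np.outer(E[:, n - 1], E[:, i])) for i in range(n - 1)]
    return basis

def arrowhead_lsq(A, B):
    """Least-squares solution of A X ~ B over symmetric arrowhead X:
    expand X = sum_i c_i K_i over the orthonormal basis, linearize to
    [vec(A K_1) | ... | vec(A K_m)] c = vec(B), and solve with the
    Moore-Penrose inverse (minimum-norm coefficient vector)."""
    basis = arrowhead_basis(A.shape[1])
    M = np.column_stack([vec(A @ K) for K in basis])
    c = np.linalg.pinv(M) @ vec(B)
    return sum(ci * Ki for ci, Ki in zip(c, basis))

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))
B = rng.standard_normal((5, 4))
X = arrowhead_lsq(A, B)
assert np.allclose(X, X.T)
# Optimality check: the residual is Frobenius-orthogonal to the image of
# the structured subspace, as required for a least-squares minimizer.
for K in arrowhead_basis(4):
    assert abs(np.trace((A @ X - B).T @ (A @ K))) < 1e-8
```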
Example 14. Given the input data, we apply Algorithm 13 to compute the unique optimal approximation solution.
5. Concluding Remarks
Symmetric arrowhead matrices arise in many important practical applications. In this paper, the least-squares solutions of the matrix equations for symmetric arrowhead matrices are obtained by using the Kronecker product and the stretching function of matrices. Explicit representations of the general solution are given, and the best approximation solution to the given matrices is derived. A simple recipe for constructing the optimal approximation solution of Problem 2 is described, which can serve as the basis for numerical computation. The approach is demonstrated by a numerical example and produces reasonable results.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The author would like to express his heartfelt thanks to the anonymous reviewers for their valuable comments and suggestions, which helped to improve the presentation of this paper.
- M. Bixon and J. Jortner, “Intramolecular radiationless transitions,” The Journal of Chemical Physics, vol. 48, no. 2, pp. 715–726, 1968.
- J. W. Gadzuk, “Localized vibrational modes in Fermi liquids. General theory,” Physical Review B, vol. 24, no. 4, pp. 1651–1663, 1981.
- D. Mogilevtsev, A. Maloshtan, S. Kilin, L. E. Oliveira, and S. B. Cavalcanti, “Spontaneous emission and qubit transfer in spin-1/2 chains,” Journal of Physics B: Atomic, Molecular and Optical Physics, vol. 43, no. 9, Article ID 095506, 2010.
- A. M. Lietuofu, The Stability of the Nonlinear Adjustment Systems, Science Press, 1959, (Chinese).
- G. W. Bing, Introduction to the Nonlinear Control Systems, Science Press, 1988, (Chinese).
- D. P. O'Leary and G. W. Stewart, “Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices,” Journal of Computational Physics, vol. 90, no. 2, pp. 497–505, 1990.
- O. Walter, L. S. Cederbaum, and J. Schirmer, “The eigenvalue problem for “arrow” matrices,” Journal of Mathematical Physics, vol. 25, no. 4, pp. 729–737, 1984.
- H. Pickmann, J. Egaña, and R. L. Soto, “Extremal inverse eigenvalue problem for bordered diagonal matrices,” Linear Algebra and Its Applications, vol. 427, no. 2-3, pp. 256–271, 2007.
- Y. F. Xu, “An inverse eigenvalue problem for a special kind of matrices,” Mathematica Applicata, vol. 6, no. 1, pp. 68–75, 1996 (Chinese).
- J. Peng, X. Y. Hu, and L. Zhang, “Two inverse eigenvalue problems for a special kind of matrices,” Linear Algebra and Its Applications, vol. 416, no. 2-3, pp. 336–347, 2006.
- C. F. Borges, R. Frezza, and W. B. Gragg, “Some inverse eigenproblems for Jacobi and arrow matrices,” Numerical Linear Algebra with Applications, vol. 2, no. 3, pp. 195–203, 1995.
- H. Li, Z. Gao, and D. Zhao, “Least squares solutions of the matrix equation with the least norm for symmetric arrowhead matrices,” Applied Mathematics and Computation, vol. 226, pp. 719–724, 2014.
- A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, NY, USA, 1974.
- P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press, New York, NY, USA, 2nd edition, 1985.
- J. P. Aubin, Applied Functional Analysis, John Wiley & Sons, New York, NY, USA, 1979.
- W. F. Trench, “Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry,” Linear Algebra and Its Applications, vol. 380, pp. 199–211, 2004.
Copyright © 2014 Yongxin Yuan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.