Journal of Applied Mathematics


Research Article | Open Access

Volume 2014 | Article ID 709356 | 7 pages | https://doi.org/10.1155/2014/709356

Least-Squares Solutions of the Matrix Equations and for Symmetric Arrowhead Matrices and Associated Approximation Problems

Academic Editor: Shan Zhao
Received: 07 Mar 2014
Accepted: 10 Jun 2014
Published: 30 Jun 2014

Abstract

The least-squares solutions of two matrix equations for symmetric arrowhead matrices are discussed. By using the Kronecker product and the stretching function of matrices, explicit representations of the general solutions are given. It is also shown that the best approximation solution is unique, and an explicit expression for it is derived.

1. Introduction

A square matrix is called an arrowhead matrix if its nonzero entries are confined to the main diagonal, the first row, and the first column (the standard arrowhead pattern is recalled after Problem 4 below). If such a matrix is in addition symmetric, it is said to be a symmetric arrowhead matrix; the set of all real-valued symmetric arrowhead matrices of a given order is denoted by . Such matrices arise in the description of radiationless transitions in isolated molecules [1], of oscillators vibrationally coupled with a Fermi liquid [2], in quantum optics [3], and so forth. Numerically efficient algorithms for computing eigenvalues and eigenvectors of arrowhead matrices were discussed in [4–8]. The inverse problem of constructing a symmetric arrowhead matrix from spectral data has been investigated by Xu [9], Peng et al. [10], and Borges et al. [11]. In this paper, we further consider the least-squares solutions of the matrix equations for symmetric arrowhead matrices and the associated approximation problems, which can be described as follows.

Problem 1. Given , and , find nontrivial real-valued symmetric arrowhead matrices and such that

Problem 2. Given real-valued symmetric arrowhead matrices , find such that where is the solution set of Problem 1.

Problem 3. Given , and , find a nontrivial real-valued symmetric arrowhead matrix such that

Problem 4. Given a real-valued symmetric arrowhead matrix , find such that where is the solution set of Problem 3.
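For reference, the displayed arrowhead form referred to in the introduction did not survive extraction; the standard pattern, written here with illustrative entry names that may differ from the paper's own notation, is

```latex
A = \begin{bmatrix}
a_1    & b_2 & b_3 & \cdots & b_n \\
c_2    & a_2 &     &        &     \\
c_3    &     & a_3 &        &     \\
\vdots &     &     & \ddots &     \\
c_n    &     &     &        & a_n
\end{bmatrix},
```

with all unmarked entries equal to zero. In the symmetric case $b_i = c_i$ for $i = 2, \dots, n$, so an $n \times n$ symmetric arrowhead matrix is determined by $2n - 1$ real parameters; this count reappears in the orthonormal bases constructed in Section 2.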

Recently, Li et al. [12] considered the least-squares solutions of the matrix equation AXB + CYD = E for symmetric arrowhead matrices. By using Moore-Penrose inverses and the Kronecker product, they provided the minimum-norm least-squares solution of that equation. However, the method used in [12] involves complicated computations with Moore-Penrose generalized inverses of partitioned matrices, and the expression obtained for the minimum-norm least-squares solution is not explicit. Compared with the approach proposed in [12], the method in this paper is more concise and easier to apply.

The paper is organized as follows. In Section 2, using the Kronecker product and the stretching function of matrices, we give an explicit representation of the solution set of Problem 1; we then show that Problem 2 has a unique solution and derive its explicit expression. In Section 3, we provide an explicit representation of the solution set of Problem 3 and the expression of the unique solution of Problem 4. In Section 4, a numerical algorithm for computing the optimal approximation solution of Problem 2 in the Frobenius norm is described and a numerical example is given. Some concluding remarks are made in Section 5.

Throughout this paper, we denote the real matrix space by and the transpose and the Moore-Penrose generalized inverse of a real matrix by and , respectively. represents the identity matrix of size . For , an inner product in is defined by ; then is a Hilbert space. The matrix norm induced by the inner product is the Frobenius norm. Given two matrices and , the Kronecker product of and is defined by . Also, for an matrix , where , is the th column vector of , the stretching function is defined by .
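As a quick numerical illustration of these definitions (a minimal NumPy sketch; A and B below are arbitrary test matrices, and the trace form of the inner product is the standard one that the dropped formula presumably stated):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# Trace inner product <A, B> = tr(B^T A) and the Frobenius norm it induces.
assert np.isclose(np.trace(B.T @ A), np.sum(A * B))
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, 'fro'))

# Kronecker product and the stretching function vec(.), which stacks columns.
K = np.kron(A, B)                      # 9 x 16 block matrix [a_ij * B]
vecA = A.reshape(-1, order='F')        # vec(A): the columns of A stacked into R^12
print(K.shape, vecA.shape)             # (9, 16) (12,)
```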

2. The Solutions of Problems 1 and 2

To begin with, we introduce two lemmas.

Lemma 5 (see [13]). If , then the general solution of can be expressed as , where is an arbitrary vector.
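The hypothesis and formula of Lemma 5 were lost in extraction; a standard result of this type from [13] states that every vector of the form x = A⁺b + (I − A⁺A)y is a (least-squares) solution of Ax = b, with A⁺b the one of minimum norm. A minimal numerical check of that reading, using NumPy's pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4)) @ np.diag([1.0, 1.0, 1.0, 0.0])   # deliberately rank deficient
b = rng.standard_normal(6)

Apinv = np.linalg.pinv(A)
x0 = Apinv @ b                    # minimum-norm least-squares solution
P = np.eye(4) - Apinv @ A         # orthogonal projector onto the null space of A

for _ in range(3):
    y = rng.standard_normal(4)
    x = x0 + P @ y                # another member of the solution set
    assert np.isclose(np.linalg.norm(A @ x - b), np.linalg.norm(A @ x0 - b))
    assert np.linalg.norm(x) >= np.linalg.norm(x0) - 1e-12
```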

Lemma 6 (see [14]). Let . Then
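Lemma 6 is quoted from [14]; its displayed statement is missing here, but the relation used repeatedly below is presumably the standard vec-Kronecker identity vec(AXB) = (Bᵀ ⊗ A) vec(X), which turns a linear matrix equation into an ordinary linear system. A quick check:

```python
import numpy as np

rng = np.random.default_rng(2)
A, X, B = (rng.standard_normal(s) for s in [(3, 4), (4, 5), (5, 2)])

vec = lambda M: M.reshape(-1, order='F')   # stack columns

# vec(A X B) = (B^T kron A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```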

Let and . It is easily seen that and . Define where is the th column vector of the identity matrix . It is easy to verify that and form orthonormal bases of the subspaces and , respectively. That is, Now, if and , then and can be expressed as where the real numbers , and , are yet to be determined.
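The displayed basis matrices did not survive extraction; one natural orthonormal basis (with respect to the trace inner product) for the real symmetric arrowhead matrices of order n consists of the diagonal units together with the symmetrized first-row units scaled by 1/√2. A sketch of this construction (the helper name arrowhead_basis is ours, not the paper's):

```python
import numpy as np

def arrowhead_basis(n):
    """Orthonormal basis (Frobenius inner product) of the n x n real
    symmetric arrowhead matrices; the subspace has dimension 2n - 1."""
    I = np.eye(n)
    basis = [np.outer(I[:, i], I[:, i]) for i in range(n)]              # diagonal units
    basis += [(np.outer(I[:, 0], I[:, i]) + np.outer(I[:, i], I[:, 0])) / np.sqrt(2)
              for i in range(1, n)]                                     # first row/column
    return basis

basis = arrowhead_basis(4)
G = np.array([[np.sum(P * Q) for Q in basis] for P in basis])           # Gram matrix
assert np.allclose(G, np.eye(len(basis)))                               # orthonormality

# Any symmetric arrowhead matrix is the combination of these basis matrices
# with coefficients <X, P_k>, which is what the expansions below exploit.
coeffs = np.arange(1.0, 8.0)
X = sum(c * P for c, P in zip(coeffs, basis))
assert np.allclose([np.sum(X * P) for P in basis], coeffs)
```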

It follows from (10) that the relation of (2) can be equivalently written as When setting By Lemma 6, we see that the relation of (11) is equivalent to We note that where . It follows from Lemma 5 and (17) that if and only if where , and are arbitrary vectors.

Substituting (19) into (18), we obtain where .

In summary of the above discussion, we have proved the following result.

Theorem 7. Suppose that , and . Let be given as in (7), (8), (13), (14), and (15), respectively. Write , and . Then the solution set of Problem 1 can be expressed as where are, respectively, given by (20) and (19) with being arbitrary vectors.
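The concrete matrix equation behind Problem 1 is lost to extraction. Assuming, purely for illustration, the form AXB + CYD ≈ E studied in [12], the vec/Kronecker reduction behind Theorem 7 can be sketched as follows; all helper names are ours, and the full solution set would add the null-space term of Lemma 5 to the minimum-norm coefficients computed here.

```python
import numpy as np

def arrowhead_basis(n):
    """Orthonormal basis of the n x n real symmetric arrowhead matrices."""
    I = np.eye(n)
    return ([np.outer(I[:, i], I[:, i]) for i in range(n)] +
            [(np.outer(I[:, 0], I[:, i]) + np.outer(I[:, i], I[:, 0])) / np.sqrt(2)
             for i in range(1, n)])

vec = lambda M: M.reshape(-1, order='F')

def lstsq_arrowhead_pair(A, B, C, D, E):
    """Sketch of min || A X B + C Y D - E ||_F over symmetric arrowhead X, Y
    (the equation form is an assumption modeled on [12]).  Returns the
    minimum-norm pair (X, Y) and the reduced coefficient matrix M."""
    PX, PY = arrowhead_basis(A.shape[1]), arrowhead_basis(C.shape[1])
    # Column k of MX is vec(A P_k B): the residual is linear in the coefficients.
    MX = np.column_stack([vec(A @ P @ B) for P in PX])
    MY = np.column_stack([vec(C @ Q @ D) for Q in PY])
    M = np.hstack([MX, MY])
    k = np.linalg.pinv(M) @ vec(E)          # minimum-norm least-squares coefficients
    kX, kY = k[:len(PX)], k[len(PX):]
    X = sum(c * P for c, P in zip(kX, PX))
    Y = sum(c * Q for c, Q in zip(kY, PY))
    return X, Y, M

# Tiny smoke test with random data (purely illustrative).
rng = np.random.default_rng(3)
A, B = rng.standard_normal((5, 4)), rng.standard_normal((4, 3))
C, D = rng.standard_normal((5, 4)), rng.standard_normal((4, 3))
E = rng.standard_normal((5, 3))
X, Y, _ = lstsq_arrowhead_pair(A, B, C, D, E)
print(np.linalg.norm(A @ X @ B + C @ Y @ D - E))
```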

From (17), we can easily obtain the following corollary.

Corollary 8. Under the same assumptions as in Theorem 7, the matrix equation has a solution if and only if In this case, the solution set of (24) is given by (21).

It follows from Theorem 7 that the solution set is always nonempty. It is easy to verify that is a closed convex subset of . From the best approximation theorem [15], we know there exists a unique solution in such that (3) holds.

We now focus our attention on seeking the unique solution in . For the real-valued symmetric arrowhead matrices and , it is easily seen that can be expressed as the linear combinations of the orthonormal bases and ; that is, where , and , are uniquely determined by the elements of and . Let Then, for any pair of matrices in (21), by the relations of (9) and (26), we see that Substituting (19) and (20) into the function of , we have Therefore, Clearly, if and only if which yields Upon substituting (32) into (19) and (20), we obtain

By now, we have proved the following result.

Theorem 9. Let the real-valued symmetric arrowhead matrices and be given. Then Problem 2 has a unique solution , which can be expressed as where are given by (33) and (34), respectively.
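The closed-form expressions (33)-(35) are missing here, but the computation they encode, namely orthogonally projecting the target's coefficient vector onto the affine least-squares solution set, can be sketched generically. Below, M and f play the roles of the reduced coefficient matrix and right-hand side from the previous sketch, and t holds the coefficients of the given target arrowhead matrices in the same orthonormal bases (all names are ours):

```python
import numpy as np

def project_to_solution_set(M, f, t):
    """Best approximation within the least-squares solution set of ||M k - f||:
    by Lemma 5 the set is k0 + null(M), a closed affine (hence convex) set, so
    the nearest point to a target coefficient vector t is its orthogonal
    projection onto that set.  Because the arrowhead bases are orthonormal,
    distances between coefficient vectors equal Frobenius distances between
    the corresponding matrix pairs."""
    Mpinv = np.linalg.pinv(M)
    k0 = Mpinv @ f                        # minimum-norm least-squares solution
    P_null = np.eye(M.shape[1]) - Mpinv @ M
    return k0 + P_null @ (t - k0)         # unique minimizer of ||k - t||

# Illustrative use with random, rank-deficient data.
rng = np.random.default_rng(4)
M = rng.standard_normal((15, 14)) @ np.diag(np.r_[np.ones(10), np.zeros(4)])
f = rng.standard_normal(15)
t = rng.standard_normal(14)
k_star = project_to_solution_set(M, f, t)
# k_star is still a least-squares solution of M k = f.
assert np.isclose(np.linalg.norm(M @ k_star - f),
                  np.linalg.norm(M @ (np.linalg.pinv(M) @ f) - f))
```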

3. The Solutions of Problems 3 and 4

It follows from (10) that the minimization problem of (4) can be equivalently written as Using Lemma 6, we see that the relation of (36) is equivalent to where . It follows from Lemma 5 that the general solution of with respect to can be expressed as where and is an arbitrary vector.

To summarize, we have obtained the following result.

Theorem 10. Suppose that , and . Let be given as in (7), (13), and (15), respectively. Write , and . Then the solution set of Problem 3 can be expressed as where and are given by (22) and (38) with being arbitrary vectors.
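As with Problem 1, the displayed equation for Problem 3 is missing; assuming for illustration a single equation of the form AXB ≈ C with one symmetric arrowhead unknown, the reduction behind Theorem 10 mirrors the earlier sketch, now with 2n − 1 unknown coefficients (helper names are ours):

```python
import numpy as np

def arrowhead_basis(n):
    I = np.eye(n)
    return ([np.outer(I[:, i], I[:, i]) for i in range(n)] +
            [(np.outer(I[:, 0], I[:, i]) + np.outer(I[:, i], I[:, 0])) / np.sqrt(2)
             for i in range(1, n)])

def lstsq_arrowhead(A, B, C):
    """Minimum-norm least-squares solution of A X B ~= C over symmetric
    arrowhead X (the equation form is an illustrative assumption)."""
    vec = lambda M: M.reshape(-1, order='F')
    basis = arrowhead_basis(A.shape[1])
    M = np.column_stack([vec(A @ P @ B) for P in basis])
    k, *_ = np.linalg.lstsq(M, vec(C), rcond=None)  # Lemma 5 adds (I - M^+ M) z for the full set
    return sum(c * P for c, P in zip(k, basis))

rng = np.random.default_rng(5)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((4, 3))
C = rng.standard_normal((6, 3))
X = lstsq_arrowhead(A, B, C)
print(np.linalg.norm(A @ X @ B - C))   # residual of the least-squares fit
```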

Similarly, for the real-valued symmetric arrowhead matrix , it is easily seen that can be expressed as the linear combination of the orthonormal basis ; that is, , where , are uniquely determined by the elements of . Then, for any matrix in (39), by the relation of (9), we have where .

In order to solve Problem 4, we need the following lemma [16].

Lemma 11. Suppose that , and where and . Then if and only if , in which case, .

It follows from Lemma 11 and that if and only if ; that is, Substituting (43) into (38), we obtain

By now, we have proved the following result.

Theorem 12. Let the real-valued symmetric arrowhead matrix be given. Then Problem 4 has a unique solution , which can be expressed as where , and is given by (44).

4. A Numerical Example

Based on Theorems 7 and 9, we can state the following algorithm.

Algorithm 13 (an algorithm for solving the optimal approximation solution of Problem 2). Consider the following.
(1) Input .
(2) Form the orthonormal bases and by (7) and (8), respectively.
(3) Compute according to (13), (14), and (15), respectively.
(4) Compute , , , .
(5) Form the vectors , by (26) and (27).
(6) Compute by (22) and (23), respectively.
(7) Compute by (33) and (34), respectively.
(8) Compute the unique optimal approximation solution of (2) by (35).

Example 14. Given . According to Algorithm 13, we can compute

5. Concluding Remarks

Symmetric arrowhead matrices arise in many important practical applications. In this paper, the least-squares solutions of two matrix equations for symmetric arrowhead matrices are obtained by using the Kronecker product and the stretching function of matrices. Explicit representations of the general solutions are given, and the best approximation solutions to the given matrices are derived. A simple procedure for constructing the optimal approximation solution of Problem 2 is described, which can serve as the basis for numerical computation. The approach is demonstrated by a numerical example, and reasonable results are produced.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author would like to express his heartfelt thanks to the anonymous reviewers for their valuable comments and suggestions which helped to improve the presentation of this paper.

References

  1. M. Bixon and J. Jortner, “Intramolecular radiationless transitions,” The Journal of Chemical Physics, vol. 48, no. 2, pp. 715–726, 1968.
  2. J. W. Gadzuk, “Localized vibrational modes in Fermi liquids. General theory,” Physical Review B, vol. 24, no. 4, pp. 1651–1663, 1981.
  3. D. Mogilevtsev, A. Maloshtan, S. Kilin, L. E. Oliveira, and S. B. Cavalcanti, “Spontaneous emission and qubit transfer in spin-1/2 chains,” Journal of Physics B: Atomic, Molecular and Optical Physics, vol. 43, no. 9, Article ID 095506, 2010.
  4. A. M. Lietuofu, The Stability of the Nonlinear Adjustment Systems, Science Press, 1959 (Chinese).
  5. G. W. Bing, Introduction to the Nonlinear Control Systems, Science Press, 1988 (Chinese).
  6. D. P. O'Leary and G. W. Stewart, “Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices,” Journal of Computational Physics, vol. 90, no. 2, pp. 497–505, 1990.
  7. O. Walter, L. S. Cederbaum, and J. Schirmer, “The eigenvalue problem for ‘arrow’ matrices,” Journal of Mathematical Physics, vol. 25, no. 4, pp. 729–737, 1984.
  8. H. Pickmann, J. Egaña, and R. L. Soto, “Extremal inverse eigenvalue problem for bordered diagonal matrices,” Linear Algebra and Its Applications, vol. 427, no. 2-3, pp. 256–271, 2007.
  9. Y. F. Xu, “An inverse eigenvalue problem for a special kind of matrices,” Mathematica Applicata, vol. 6, no. 1, pp. 68–75, 1996 (Chinese).
  10. J. Peng, X. Y. Hu, and L. Zhang, “Two inverse eigenvalue problems for a special kind of matrices,” Linear Algebra and Its Applications, vol. 416, no. 2-3, pp. 336–347, 2006.
  11. C. F. Borges, R. Frezza, and W. B. Gragg, “Some inverse eigenproblems for Jacobi and arrow matrices,” Numerical Linear Algebra with Applications, vol. 2, no. 3, pp. 195–203, 1995.
  12. H. Li, Z. Gao, and D. Zhao, “Least squares solutions of the matrix equation AXB+CYD=E with the least norm for symmetric arrowhead matrices,” Applied Mathematics and Computation, vol. 226, pp. 719–724, 2014.
  13. A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, NY, USA, 1974.
  14. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press, New York, NY, USA, 2nd edition, 1985.
  15. J. P. Aubin, Applied Functional Analysis, John Wiley & Sons, New York, NY, USA, 1979.
  16. W. F. Trench, “Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry,” Linear Algebra and Its Applications, vol. 380, pp. 199–211, 2004.

Copyright © 2014 Yongxin Yuan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
