Journal of Applied Mathematics
Volume 2013, Article ID 597628, 13 pages
http://dx.doi.org/10.1155/2013/597628
Research Article

Neural-Network-Based Approach for Extracting Eigenvectors and Eigenvalues of Real Normal Matrices and Some Extension to Real Matrices

¹School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
²School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
³School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China

Received 30 October 2012; Accepted 16 January 2013

Academic Editor: Nicola Mastronardi

Copyright © 2013 Xiongfei Zou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Linked References

  1. S. Attallah and K. Abed-Meraim, “A fast adaptive algorithm for the generalized symmetric eigenvalue problem,” IEEE Signal Processing Letters, vol. 15, pp. 797–800, 2008.
  2. T. Laudadio, N. Mastronardi, and M. Van Barel, “Computing a lower bound of the smallest eigenvalue of a symmetric positive-definite Toeplitz matrix,” IEEE Transactions on Information Theory, vol. 54, no. 10, pp. 4726–4731, 2008.
  3. J. Shawe-Taylor, C. K. I. Williams, N. Cristianini, and J. Kandola, “On the eigenspectrum of the Gram matrix and the generalization error of kernel-PCA,” IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2510–2522, 2005.
  4. D. Q. Wang and M. Zhang, “A new approach to multiple class pattern classification with random matrices,” Journal of Applied Mathematics and Decision Sciences, no. 3, pp. 165–175, 2005.
  5. M. R. Bastian, J. H. Gunther, and T. K. Moon, “A simplified natural gradient learning algorithm,” Advances in Artificial Neural Systems, vol. 2011, Article ID 407497, 9 pages, 2011.
  6. T. H. Le, “Applying artificial neural networks for face recognition,” Advances in Artificial Neural Systems, vol. 2011, Article ID 673016, 16 pages, 2011.
  7. Y. Xia, “An extended projection neural network for constrained optimization,” Neural Computation, vol. 16, no. 4, pp. 863–883, 2004.
  8. Y. Xia and J. Wang, “A recurrent neural network for solving nonlinear convex programs subject to linear constraints,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
  9. T. Voegtlin, “Recursive principal components analysis,” Neural Networks, vol. 18, no. 8, pp. 1051–1063, 2005.
  10. J. Qiu, H. Wang, J. Lu, B. Zhang, and K.-L. Du, “Neural network implementations for PCA and its extensions,” ISRN Artificial Intelligence, vol. 2012, Article ID 847305, 19 pages, 2012.
  11. H. Liu and J. Wang, “Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market,” Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
  12. F. L. Luo, R. Unbehauen, and A. Cichocki, “A minor component analysis algorithm,” Neural Networks, vol. 10, no. 2, pp. 291–297, 1997.
  13. G. H. Golub and C. F. Van Loan, Matrix Computations, vol. 3 of Johns Hopkins Series in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 1983.
  14. C. Chatterjee, V. P. Roychowdhury, J. Ramos, and M. D. Zoltowski, “Self-organizing algorithms for generalized eigen-decomposition,” IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1518–1530, 1997.
  15. H. Kakeya and T. Kindo, “Eigenspace separation of autocorrelation memory matrices for capacity expansion,” Neural Networks, vol. 10, no. 5, pp. 833–843, 1997.
  16. Y. Liu, Z. You, and L. Cao, “A functional neural network for computing the largest modulus eigenvalues and their corresponding eigenvectors of an anti-symmetric matrix,” Neurocomputing, vol. 67, no. 1–4, pp. 384–397, 2005.
  17. Y. Liu, Z. You, and L. Cao, “A functional neural network computing some eigenvalues and eigenvectors of a special real matrix,” Neural Networks, vol. 18, no. 10, pp. 1293–1300, 2005.
  18. Y. Liu, Z. You, and L. Cao, “A recurrent neural network computing the largest imaginary or real part of eigenvalues of real matrices,” Computers & Mathematics with Applications, vol. 53, no. 1, pp. 41–53, 2007.
  19. E. Oja, “Principal components, minor components, and linear neural networks,” Neural Networks, vol. 5, no. 6, pp. 927–935, 1992.
  20. R. Perfetti and E. Massarelli, “Training spatially homogeneous fully recurrent neural networks in eigenvalue space,” Neural Networks, vol. 10, no. 1, pp. 125–137, 1997.
  21. L. Xu, E. Oja, and C. Y. Suen, “Modified Hebbian learning for curve and surface fitting,” Neural Networks, vol. 5, no. 3, pp. 441–457, 1992.
  22. Q. Zhang and Y. W. Leung, “A class of learning algorithms for principal component analysis and minor component analysis,” IEEE Transactions on Neural Networks, vol. 11, no. 2, pp. 529–533, 2000.
  23. Y. Zhang, Y. Fu, and H. J. Tang, “Neural networks based approach for computing eigenvectors and eigenvalues of symmetric matrix,” Computers & Mathematics with Applications, vol. 47, no. 8-9, pp. 1155–1164, 2004.
  24. L. Mirsky, “On the minimization of matrix norms,” The American Mathematical Monthly, vol. 65, pp. 106–107, 1958.
  25. C. P. Huang and R. T. Gregory, A Norm-Reducing Jacobi-Like Algorithm for the Eigenvalues of Non-Normal Matrices, Colloquia Mathematica Societatis Janos Bolyai, Keszthely, Hungary, 1977.
  26. I. Kiessling and A. Paulik, “A norm-reducing Jacobi-like algorithm for the eigenvalues of non-normal matrices,” Journal of Computational and Applied Mathematics, vol. 8, no. 3, pp. 203–207, 1982.
  27. R. Sacks-Davis, “A real norm-reducing Jacobi-type eigenvalue algorithm,” The Australian Computer Journal, vol. 7, no. 2, pp. 65–69, 1975.