ISRN Artificial Intelligence
Volume 2012 (2012), Article ID 847305, 19 pages
http://dx.doi.org/10.5402/2012/847305
Review Article

Neural Network Implementations for PCA and Its Extensions

1Enjoyor Labs, Enjoyor Inc., Hangzhou 310030, China
2Faculty of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China
3Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada H3G 1M8

Received 8 April 2012; Accepted 14 June 2012

Academic Editors: C. Kotropoulos and B. Schuller

Copyright © 2012 Jialin Qiu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. G. H. Golub and C. F. van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 2nd edition, 1989.
  2. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, New York, NY, USA, 1973.
  3. K. R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf, “An introduction to kernel-based learning algorithms,” IEEE Transactions on Neural Networks, vol. 12, no. 2, pp. 181–201, 2001.
  4. M. Loeve, Probability Theory, Van Nostrand, New York, NY, USA, 3rd edition, 1963.
  5. J. Yang, D. Zhang, A. F. Frangi, and J. Y. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
  6. L. Ljung, “Analysis of recursive stochastic algorithms,” IEEE Transactions on Automatic Control, vol. 22, no. 4, pp. 551–575, 1977.
  7. E. Oja and J. Karhunen, “On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix,” Journal of Mathematical Analysis and Applications, vol. 106, no. 1, pp. 69–84, 1985.
  8. T. D. Sanger, “Optimal unsupervised learning in a single-layer linear feedforward neural network,” Neural Networks, vol. 2, no. 6, pp. 459–473, 1989.
  9. D. O. Hebb, The Organization of Behavior, John Wiley & Sons, New York, NY, USA, 1949.
  10. K. L. Du and M. N. S. Swamy, Neural Networks in a Softcomputing Framework, Springer, London, UK, 2006.
  11. M. H. Hassoun, Fundamentals of Artificial Neural Networks, MIT Press, Cambridge, Mass, USA, 1995.
  12. C. von der Malsburg, “Self organization of orientation sensitive cells in the striate cortex,” Kybernetik, vol. 14, no. 2, pp. 85–100, 1973.
  13. J. Rubner and P. Tavan, “A self-organizing network for principal-component analysis,” Europhysics Letters, vol. 10, pp. 693–698, 1989.
  14. E. Oja, “A simplified neuron model as a principal component analyzer,” Journal of Mathematical Biology, vol. 15, no. 3, pp. 267–273, 1982.
  15. A. L. Yuille, D. M. Kammen, and D. S. Cohen, “Quadrature and the development of orientation selective cortical cells by Hebb rules,” Biological Cybernetics, vol. 61, no. 3, pp. 183–194, 1989.
  16. R. Linsker, “From basic network principles to neural architecture: emergence of orientation-selective cells,” Proceedings of the National Academy of Sciences of the United States of America, vol. 83, no. 21, pp. 8390–8394, 1986.
  17. P. J. Zufiria, “On the discrete-time dynamics of the basic Hebbian neural-network node,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1342–1352, 2002.
  18. Z. Yi, M. Ye, J. C. Lv, and K. K. Tan, “Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm,” IEEE Transactions on Neural Networks, vol. 16, no. 6, pp. 1318–1328, 2005.
  19. A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley & Sons, New York, NY, USA, 1992.
  20. L. H. Chen and S. Chang, “Adaptive learning algorithm for principal component analysis,” IEEE Transactions on Neural Networks, vol. 6, no. 5, pp. 1255–1263, 1995.
  21. A. Krogh and J. A. Hertz, “Hebbian learning of principal components,” in Parallel Processing in Neural Systems and Computers, R. Eckmiller, G. Hartmann, and G. Hauske, Eds., pp. 183–186, North-Holland, Amsterdam, The Netherlands, 1990.
  22. E. Oja, “Principal components, minor components, and linear neural networks,” Neural Networks, vol. 5, no. 6, pp. 927–935, 1992.
  23. E. Oja, H. Ogawa, and J. Wangviwattana, “Principal component analysis by homogeneous neural networks,” IEICE Transactions on Information and Systems, vol. E75-D, no. 3, pp. 366–382, 1992.
  24. M. Jankovic and H. Ogawa, “Time-oriented hierarchical method for computation of principal components using subspace learning algorithm,” International Journal of Neural Systems, vol. 14, no. 5, pp. 313–323, 2004.
  25. A. Weingessel and K. Hornik, “A robust subspace algorithm for principal component analysis,” International Journal of Neural Systems, vol. 13, no. 5, pp. 307–313, 2003.
  26. H. Chen and R. W. Liu, “On-line unsupervised learning machine for adaptive feature extraction,” IEEE Transactions on Circuits and Systems II, vol. 41, no. 2, pp. 87–98, 1994.
  27. F. Peper and H. Noda, “A symmetric linear neural network that learns principal components and their variances,” IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 1042–1047, 1996.
  28. F. L. Luo, R. Unbehauen, and Y. D. Li, “A principal component analysis algorithm with invariant norm,” Neurocomputing, vol. 8, no. 2, pp. 213–221, 1995.
  29. L. Xu, “Least mean square error reconstruction principle for self-organizing neural-nets,” Neural Networks, vol. 6, no. 5, pp. 627–648, 1993.
  30. B. Yang, “Projection approximation subspace tracking,” IEEE Transactions on Signal Processing, vol. 43, no. 1, pp. 95–107, 1995.
  31. S. Bannour and M. R. Azimi-Sadjadi, “Principal component extraction using recursive least squares learning,” IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 457–469, 1995.
  32. Y. Miao and Y. Hua, “Fast subspace tracking and neural network learning by a novel information criterion,” IEEE Transactions on Signal Processing, vol. 46, no. 7, pp. 1967–1979, 1998.
  33. S. Ouyang, Z. Bao, and G. S. Liao, “Robust recursive least squares learning algorithm for principal component analysis,” IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 215–221, 2000.
  34. S. Ouyang and Z. Bao, “Fast principal component extraction by a weighted information criterion,” IEEE Transactions on Signal Processing, vol. 50, no. 8, pp. 1994–2002, 2002.
  35. S. Y. Kung, K. I. Diamantaras, and J. S. Taur, “Adaptive principal component extraction (APEX) and applications,” IEEE Transactions on Signal Processing, vol. 42, no. 5, pp. 1202–1271, 1994.
  36. A. C. S. Leung, K. W. Wong, and A. C. Tsoi, “Recursive algorithms for principal component extraction,” Network, vol. 8, no. 3, pp. 323–334, 1997.
  37. C. Chatterjee, V. P. Roychowdhury, and E. K. P. Chong, “On relative convergence properties of principal component analysis algorithms,” IEEE Transactions on Neural Networks, vol. 9, no. 2, pp. 319–329, 1998.
  38. B. Yang, “An extension of the PASTd algorithm to both rank and subspace tracking,” IEEE Signal Processing Letters, vol. 2, no. 9, pp. 179–182, 1995.
  39. Y. Chauvin, “Principal component analysis by gradient descent on a constrained linear Hebbian cell,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '89), pp. 373–380, Washington, DC, USA, June 1989.
  40. B. A. Pearlmutter and G. E. Hinton, “G-maximization: an unsupervised learning procedure for discovering regularities,” in Proceedings of Neural Networks for Computing, J. S. Denker, Ed., vol. 151, pp. 333–338, American Institute of Physics, Snowbird, Utah, USA, 1986.
  41. Z. Fu and E. M. Dowling, “Conjugate gradient eigenstructure tracking for adaptive spectral estimation,” IEEE Transactions on Signal Processing, vol. 43, no. 5, pp. 1151–1160, 1995.
  42. Z. Kang, C. Chatterjee, and V. P. Roychowdhury, “An adaptive quasi-Newton algorithm for eigensubspace estimation,” IEEE Transactions on Signal Processing, vol. 48, no. 12, pp. 3328–3333, 2000.
  43. S. Ouyang, P. C. Ching, and T. Lee, “Robust adaptive quasi-Newton algorithms for eigensubspace estimation,” IEE Proceedings, vol. 150, no. 5, pp. 321–330, 2003.
  44. C. Chatterjee, Z. Kang, and V. P. Roychowdhury, “Algorithms for accelerated convergence of adaptive PCA,” IEEE Transactions on Neural Networks, vol. 11, no. 2, pp. 338–355, 2000.
  45. M. D. Plumbley, “Lyapunov functions for convergence of principal component algorithms,” Neural Networks, vol. 8, no. 1, pp. 11–23, 1995.
  46. R. Möller and A. Könies, “Coupled principal component analysis,” IEEE Transactions on Neural Networks, vol. 15, no. 1, pp. 214–222, 2004.
  47. R. Möller, “First-order approximation of Gram-Schmidt orthonormalization beats deflation in coupled PCA learning rules,” Neurocomputing, vol. 69, no. 13–15, pp. 1582–1590, 2006.
  48. P. Foldiak, “Adaptive network for optimal linear feature extraction,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '89), pp. 401–405, Washington, DC, USA, June 1989.
  49. J. Rubner and K. Schulten, “Development of feature detectors by self-organization,” Biological Cybernetics, vol. 62, no. 3, pp. 193–199, 1990.
  50. S. Y. Kung and K. I. Diamantaras, “A neural network learning algorithm for adaptive principal component extraction (APEX),” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '90), pp. 861–864, Albuquerque, NM, USA, April 1990.
  51. S. Fiori and F. Piazza, “A general class of ψ-APEX PCA neural algorithms,” IEEE Transactions on Circuits and Systems I, vol. 47, no. 9, pp. 1394–1397, 2000.
  52. S. Fiori, “Experimental comparison of three PCA neural networks,” Neural Processing Letters, vol. 11, no. 3, pp. 209–218, 2000.
  53. M. Collins, S. Dasgupta, and R. E. Schapire, “A generalization of principal component analysis to the exponential family,” in Advances in Neural Information Processing Systems, T. G. Dietterich, S. Becker, and Z. Ghahramani, Eds., vol. 14, pp. 617–624, MIT Press, Cambridge, Mass, USA, 2002.
  54. B. Schölkopf, A. Smola, and K. R. Müller, “Nonlinear component analysis as a kernel eigenvalue problem,” Neural Computation, vol. 10, no. 5, pp. 1299–1319, 1998.
  55. L. Xu and A. L. Yuille, “Robust principal component analysis by self-organizing rules based on statistical physics approach,” IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 131–143, 1995.
  56. P. J. Huber, Robust Statistics, John Wiley & Sons, New York, NY, USA, 1981.
  57. J. Karhunen and J. Joutsensalo, “Generalizations of principal component analysis, optimization problems, and neural networks,” Neural Networks, vol. 8, no. 4, pp. 549–562, 1995.
  58. E. Oja, H. Ogawa, and J. Wangviwattana, “Learning in non-linear constrained Hebbian networks,” in Proceedings of the International Conference on Artificial Neural Networks (ICANN '91), T. Kohonen, K. Makisara, O. Simula, and J. Kangas, Eds., pp. 385–390, North-Holland, Amsterdam, The Netherlands, 1991.
  59. T. D. Sanger, “An optimality principle for unsupervised learning,” in Advances in Neural Information Processing Systems, D. S. Touretzky, Ed., vol. 1, pp. 11–19, Morgan Kaufmann, San Mateo, Calif, USA, 1989.
  60. H. Bourlard and Y. Kamp, “Auto-association by multilayer perceptrons and singular value decomposition,” Biological Cybernetics, vol. 59, no. 4-5, pp. 291–294, 1988.
  61. P. Baldi and K. Hornik, “Neural networks and principal component analysis: learning from examples without local minima,” Neural Networks, vol. 2, no. 1, pp. 53–58, 1989.
  62. M. A. Kramer, “Nonlinear principal component analysis using autoassociative neural networks,” AIChE Journal, vol. 37, no. 2, pp. 233–243, 1991.
  63. C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, New York, NY, USA, 1995.
  64. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, D. E. Rumelhart and J. L. McClelland, Eds., vol. 1, pp. 318–362, MIT Press, Cambridge, Mass, USA, 1986.
  65. N. Kambhatla and T. K. Leen, “Fast non-linear dimension reduction,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 3, pp. 1213–1218, San Francisco, Calif, USA, 1993.
  66. R. Saegusa, H. Sakano, and S. Hashimoto, “Nonlinear principal component analysis to preserve the order of principal components,” Neurocomputing, vol. 61, no. 1–4, pp. 57–70, 2004.
  67. J. A. Catalan, J. S. Jin, and T. Gedeon, “Reducing the dimensions of texture features for image retrieval using multilayer neural networks,” Pattern Analysis and Applications, vol. 2, no. 2, pp. 196–203, 1999.
  68. T. Kohonen, “Self-organized formation of topologically correct feature maps,” Biological Cybernetics, vol. 43, no. 1, pp. 59–69, 1982.
  69. H. Ritter, “Self-organizing feature maps: Kohonen maps,” in The Handbook of Brain Theory and Neural Networks, M. A. Arbib, Ed., pp. 846–851, MIT Press, Cambridge, Mass, USA, 1995.
  70. T. Kohonen, “Emergence of invariant-feature detectors in the adaptive-subspace self-organizing map,” Biological Cybernetics, vol. 75, no. 4, pp. 281–291, 1996.
  71. T. Kohonen, E. Oja, O. Simula, A. Visa, and J. Kangas, “Engineering applications of the self-organizing map,” Proceedings of the IEEE, vol. 84, no. 10, pp. 1358–1384, 1996.
  72. L. Xu, A. Krzyzak, and E. Oja, “Rival penalized competitive learning for clustering analysis, RBF net, and curve detection,” IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 636–649, 1993.
  73. K. Gao, M. O. Ahmad, and M. N. S. Swamy, “Constrained anti-Hebbian learning algorithm for total least-squares estimation with applications to adaptive FIR and IIR filtering,” IEEE Transactions on Circuits and Systems II, vol. 41, no. 11, pp. 718–729, 1994.
  74. D. Z. Feng, Z. Bao, and L. C. Jiao, “Total least mean squares algorithm,” IEEE Transactions on Signal Processing, vol. 46, no. 8, pp. 2122–2130, 1998.
  75. L. Xu, E. Oja, and C. Y. Suen, “Modified Hebbian learning for curve and surface fitting,” Neural Networks, vol. 5, no. 3, pp. 441–457, 1992.
  76. A. Taleb and G. Cirrincione, “Against the convergence of the minor component analysis neurons,” IEEE Transactions on Neural Networks, vol. 10, no. 1, pp. 207–210, 1999.
  77. K. Gao, M. O. Ahmad, and M. N. S. Swamy, “A modified Hebbian rule for total least-squares estimation with complex valued arguments,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '92), pp. 1231–1234, San Diego, Calif, USA, 1992.
  78. B. Widrow and M. E. Hoff, “Adaptive switching circuits,” in IRE WESCON Convention Record, vol. 4, pp. 96–104, 1960.
  79. S. Ouyang, Z. Bao, and G. Liao, “Adaptive step-size minor component extraction algorithm,” Electronics Letters, vol. 35, no. 6, pp. 443–444, 1999.
  80. S. C. Douglas, S. Y. Kung, and S. I. Amari, “A self-stabilized minor subspace rule,” IEEE Signal Processing Letters, vol. 5, no. 12, pp. 328–330, 1998.
  81. S. Attallah and K. Abed-Meraim, “Fast algorithms for subspace tracking,” IEEE Signal Processing Letters, vol. 8, no. 7, pp. 203–206, 2001.
  82. T. Chen, “Modified Oja's algorithms for principal subspace and minor subspace extraction,” Neural Processing Letters, vol. 5, no. 2, pp. 105–110, 1997.
  83. T. Chen, S. I. Amari, and Q. Lin, “A unified algorithm for principal and minor components extraction,” Neural Networks, vol. 11, no. 3, pp. 385–390, 1998.
  84. K. Abed-Meraim, S. Attallah, A. Chkeif, and Y. Hua, “Orthogonal Oja algorithm,” IEEE Signal Processing Letters, vol. 7, no. 5, pp. 116–119, 2000.
  85. F. L. Luo, R. Unbehauen, and A. Cichocki, “A minor component analysis algorithm,” Neural Networks, vol. 10, no. 2, pp. 291–297, 1997.
  86. T. Chen, S. I. Amari, and N. Murata, “Sequential extraction of minor components,” Neural Processing Letters, vol. 13, no. 3, pp. 195–201, 2001.
  87. Q. Zhang and Y. W. Leung, “A class of learning algorithms for principal component analysis and minor component analysis,” IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 200–204, 2000.
  88. G. Mathew, V. U. Reddy, and S. Dasgupta, “Adaptive estimation of eigensubspace,” IEEE Transactions on Signal Processing, vol. 43, no. 2, pp. 401–411, 1995.
  89. A. Weingessel, H. Bischof, K. Hornik, and F. Leisch, “Adaptive combination of PCA and VQ networks,” IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1208–1211, 1997.
  90. R. Möller and H. Hoffmann, “An extension of neural gas to local PCA,” Neurocomputing, vol. 62, no. 1–4, pp. 305–326, 2004.
  91. T. M. Martinetz, S. G. Berkovich, and K. J. Schulten, “'Neural-gas' network for vector quantization and its application to time-series prediction,” IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 558–569, 1993.
  92. J. Karhunen and S. Malaroiu, “Locally linear independent component analysis,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '99), pp. 882–887, Washington, DC, USA, July 1999.
  93. T. Kohonen, Self-Organizing Maps, Springer, Berlin, Germany, 1997.
  94. E. Oja and K. Valkealahti, “Local independent component analysis by the self-organizing map,” in Proceedings of the International Conference on Artificial Neural Networks, pp. 553–558, Lausanne, Switzerland, 1997.
  95. J. D. Horel, “Complex principal component analysis: theory and examples,” Journal of Climate and Applied Meteorology, vol. 23, no. 12, pp. 1660–1673, 1984.
  96. S. S. P. Rattan and W. W. Hsieh, “Complex-valued neural networks for nonlinear complex principal component analysis,” Neural Networks, vol. 18, no. 1, pp. 61–69, 2005.
  97. T. Kim and T. Adali, “Fully complex multi-layer perceptron network for nonlinear signal processing,” Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, vol. 32, no. 1-2, pp. 29–43, 2002.
  98. M. C. F. De Castro, F. C. C. De Castro, J. N. Amaral, and P. R. G. Franco, “Complex valued Hebbian learning algorithm,” in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '98), pp. 1235–1238, Anchorage, Alaska, USA, May 1998.
  99. Y. Chen and C. Hou, “High resolution adaptive bearing estimation using a complex-weighted neural network,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '92), vol. 2, pp. 317–320, San Francisco, Calif, USA, 1992.
  100. A. Cichocki, R. W. Swiniarski, and R. E. Bogner, “Hierarchical neural network for robust PCA computation of complex valued signals,” in Proceedings of the World Congress on Neural Networks, pp. 818–821, San Diego, Calif, USA, 1996.
  101. E. Bingham and A. Hyvärinen, “ICA of complex valued signals: a fast and robust deflationary algorithm,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '00), pp. 357–362, Como, Italy, July 2000.
  102. S. Fiori, “Blind separation of circularly distributed sources by neural extended APEX algorithm,” Neurocomputing, vol. 34, pp. 239–252, 2000.
  103. S. Fiori, “Extended Hebbian learning for blind separation of complex-valued sources,” IEEE Transactions on Circuits and Systems II, vol. 50, no. 4, pp. 195–202, 2003.
  104. Z. Yi, Y. Fu, and H. J. Tang, “Neural networks based approach for computing eigenvectors and eigenvalues of symmetric matrix,” Computers and Mathematics with Applications, vol. 47, no. 8-9, pp. 1155–1164, 2004.
  105. Y. Liu, Z. You, and L. Cao, “A simple functional neural network for computing the largest and smallest eigenvalues and corresponding eigenvectors of a real symmetric matrix,” Neurocomputing, vol. 67, no. 1–4, pp. 369–383, 2005.
  106. S. Chen and T. Sun, “Class-information-incorporated principal component analysis,” Neurocomputing, vol. 69, no. 1–3, pp. 216–223, 2005.
  107. M. S. Park and J. Y. Choi, “Theoretical analysis on feature extraction capability of class-augmented PCA,” Pattern Recognition, vol. 42, no. 11, pp. 2353–2362, 2009.
  108. A. D'Aspremont, F. Bach, and L. El Ghaoui, “Optimal solutions for sparse principal component analysis,” Journal of Machine Learning Research, vol. 9, pp. 1269–1294, 2008.
  109. S. Y. Kung, “Constrained principal component analysis via an orthogonal learning network,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '90), pp. 719–722, New Orleans, La, USA, May 1990.
  110. A. Valizadeh and M. Karimi, “Fast subspace tracking algorithm based on the constrained projection approximation,” EURASIP Journal on Advances in Signal Processing, vol. 2009, Article ID 576972, 16 pages, 2009.
  111. J. Mao and A. K. Jain, “Artificial neural networks for feature extraction and multivariate data projection,” IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 296–317, 1995.
  112. G. K. Demir and K. Ozmehmet, “Online local learning algorithms for linear discriminant analysis,” Pattern Recognition Letters, vol. 26, no. 4, pp. 421–431, 2005.
  113. C. Chatterjee, V. P. Roychowdhury, J. Ramos, and M. D. Zoltowski, “Self-organizing algorithms for generalized eigen-decomposition,” IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1518–1530, 1997.
  114. D. Xu, J. C. Principe, and H. C. Wu, “Generalized eigendecomposition with an on-line local algorithm,” IEEE Signal Processing Letters, vol. 5, no. 11, pp. 298–301, 1998.
  115. G. Mathew and V. U. Reddy, “A quasi-Newton adaptive algorithm for generalized symmetric eigenvalue problem,” IEEE Transactions on Signal Processing, vol. 44, no. 10, pp. 2413–2422, 1996.
  116. Y. N. Rao, J. C. Principe, and T. F. Wong, “Fast RLS-like algorithm for generalized eigendecomposition and its applications,” Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, vol. 37, no. 2-3, pp. 333–344, 2004.
  117. Y. Tang and J. Li, “Notes on ‘Recurrent neural network model for computing largest and smallest generalized eigenvalue’,” Neurocomputing, vol. 73, no. 4–6, pp. 1006–1012, 2010.
  118. J. Yang, Y. Zhao, and H. Xi, “Weighted rule based adaptive algorithm for simultaneously extracting generalized eigenvectors,” IEEE Transactions on Neural Networks, vol. 22, no. 5, pp. 800–806, 2011.
  119. D. Zhang, Z. H. Zhou, and S. Chen, “Diagonal principal component analysis for face recognition,” Pattern Recognition, vol. 39, no. 1, pp. 140–142, 2006.
  120. X. Li, Y. Pang, and Y. Yuan, “L1-norm-based 2DPCA,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 40, no. 4, pp. 1170–1175, 2010.
  121. W. Zuo, D. Zhang, and K. Wang, “Bidirectional PCA with assembled matrix distance metric for image recognition,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 36, no. 4, pp. 863–872, 2006.
  122. C. X. Ren and D. Q. Dai, “Incremental learning of bidirectional principal components for face recognition,” Pattern Recognition, vol. 43, no. 1, pp. 318–330, 2010.
  123. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning,” IEEE Transactions on Neural Networks, vol. 20, no. 11, pp. 1820–1836, 2009.
  124. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: multilinear principal component analysis of tensor objects,” IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
  125. K. I. Diamantaras and S. Y. Kung, “Cross-correlation neural network models,” IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 3218–3223, 1994.
  126. K. L. Du and M. N. S. Swamy, “Simple and practical cyclostationary beamforming algorithms,” IEE Proceedings, vol. 151, no. 3, pp. 175–179, 2004.
  127. D. Z. Feng, Z. Bao, and W. X. Shi, “Cross-correlation neural network models for the smallest singular component of general matrix,” Signal Processing, vol. 64, no. 3, pp. 333–346, 1998.
  128. R. Badeau, G. Richard, and B. David, “Sliding window adaptive SVD algorithms,” IEEE Transactions on Signal Processing, vol. 52, no. 1, pp. 1–10, 2004.
  129. A. Kaiser, W. Schenck, and R. Möller, “Coupled singular value decomposition of a cross-covariance matrix,” International Journal of Neural Systems, vol. 20, no. 4, pp. 293–318, 2010.
  130. L. Tucker, Implication of Factor Analysis of Three-Way Matrices for Measurement of Change, University of Wisconsin Press, Madison, Wis, USA, 1963.
  131. R. Costantini, L. Sbaiz, and S. Süsstrunk, “Higher order SVD analysis for dynamic texture synthesis,” IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 42–52, 2008.
  132. H. Hotelling, “Relations between two sets of variates,” Biometrika, vol. 28, pp. 321–377, 1936.
  133. M. S. Bartlett, “Further aspects of the theory of multiple regression,” in Proceedings of the Cambridge Philosophical Society, vol. 34, pp. 33–40, 1938.
  134. T. Hastie, A. Buja, and R. Tibshirani, “Penalized discriminant analysis,” Annals of Statistics, vol. 23, no. 1, pp. 73–102, 1995.
  135. J. R. Kettenring, “Canonical analysis of several sets of variables,” Biometrika, vol. 58, no. 3, pp. 433–451, 1971.
  136. L. Sun, S. Ji, and J. Ye, “Canonical correlation analysis for multilabel classification: a least-squares formulation, extensions, and analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 194–200, 2011.
  137. O. Kursun, E. Alpaydin, and O. V. Favorov, “Canonical correlation analysis using within-class coupling,” Pattern Recognition Letters, vol. 32, no. 2, pp. 134–144, 2011.
  138. T. Sun and S. Chen, “Locality preserving CCA with applications to data visualization and pose estimation,” Image and Vision Computing, vol. 25, no. 5, pp. 531–543, 2007.
  139. H. Wang, “Local two-dimensional canonical correlation analysis,” IEEE Signal Processing Letters, vol. 17, no. 11, pp. 921–924, 2010.
  140. G. Kukharev and E. Kamenskaya, “Application of two-dimensional canonical correlation analysis for face image processing and recognition,” Pattern Recognition and Image Analysis, vol. 20, no. 2, pp. 210–219, 2010.
  141. K. L. Du, A. K. Y. Lai, K. K. M. Cheng, and M. N. S. Swamy, “Neural methods for antenna array signal processing: a review,” Signal Processing, vol. 82, no. 4, pp. 547–561, 2002.
  142. G. W. Cottrell, P. Munro, and D. Zipser, “Learning internal representations from gray-scale images: an example of extensional programming,” in Proceedings of the 9th Conference of the Cognitive Science Society, pp. 462–473, Seattle, Wash, USA, 1987.
  143. W. Chen, M. J. Er, and S. Wu, “PCA and LDA in DCT domain,” Pattern Recognition Letters, vol. 26, no. 15, pp. 2474–2482, 2005.
  144. P. Comon, “Independent component analysis, a new concept?” Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
  145. C. Jutten and J. Herault, “Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture,” Signal Processing, vol. 24, no. 1, pp. 1–10, 1991.
  146. J. Karhunen, P. Pajunen, and E. Oja, “The nonlinear PCA criterion in blind source separation: relations with other approaches,” Neurocomputing, vol. 22, no. 1–3, pp. 5–20, 1998.
  147. A. Hyvärinen and E. Oja, “A fast fixed-point algorithm for independent component analysis,” Neural Computation, vol. 9, no. 7, pp. 1483–1492, 1997.
  148. S. Choi, A. Cichocki, and S. Amari, “Equivariant nonstationary source separation,” Neural Networks, vol. 15, no. 1, pp. 121–130, 2002.
  149. A. Hyvärinen and P. Pajunen, “Nonlinear independent component analysis: existence and uniqueness results,” Neural Networks, vol. 12, no. 3, pp. 429–439, 1999.
  150. M. D. Plumbley, “Algorithms for nonnegative independent component analysis,” IEEE Transactions on Neural Networks, vol. 14, no. 3, pp. 534–543, 2003.
  151. W. Lu and J. C. Rajapakse, “Approach and applications of constrained ICA,” IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 203–212, 2005.
  152. C. S. Cruz and J. R. Dorronsoro, “A nonlinear discriminant algorithm for feature extraction and data classification,” IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1370–1376, 1998.
  153. P. Howland and H. Park, “Generalizing discriminant analysis using the generalized singular value decomposition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 8, pp. 995–1006, 2004.
  154. S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K. R. Müller, “Fisher discriminant analysis with kernels,” in Proceedings of the 9th IEEE Workshop on Neural Networks for Signal Processing (NNSP '99), pp. 41–48, Piscataway, NJ, USA, August 1999.
  155. M. Li and B. Yuan, “2D-LDA: a statistical linear discriminant analysis for image matrix,” Pattern Recognition Letters, vol. 26, no. 5, pp. 527–532, 2005.
  156. H. Kong, L. Wang, E. K. Teoh, J. G. Wang, and R. Venkateswarlu, “A framework of 2D Fisher discriminant analysis: application to face recognition with small number of training samples,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 1083–1088, San Diego, Calif, USA, June 2005.