Advances in Artificial Neural Systems
Volume 2015, Article ID 931379, 16 pages
http://dx.doi.org/10.1155/2015/931379
Research Article

Stochastic Search Algorithms for Identification, Optimization, and Training of Artificial Neural Networks

Kostantin P. Nikolić
Faculty of Management, 21000 Novi Sad, Serbia

Received 6 July 2014; Revised 19 November 2014; Accepted 19 November 2014

Academic Editor: Ozgur Kisi

Copyright © 2015 Kostantin P. Nikolic. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
