Complexity
Volume 2018, Article ID 1947250, 8 pages
https://doi.org/10.1155/2018/1947250
Research Article

The Spiral Discovery Network as an Automated General-Purpose Optimization Tool

Adam B. Csapo

Department of Informatics, Széchenyi István University, Győr, Hungary

Correspondence should be addressed to Adam B. Csapo; csapo.adam@sze.hu

Received 29 September 2017; Accepted 22 January 2018; Published 12 March 2018

Academic Editor: Kevin Wong

Copyright © 2018 Adam B. Csapo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
