Complexity
Volume 2018, Article ID 9327536, 10 pages
https://doi.org/10.1155/2018/9327536
Research Article

Using Deep Learning to Predict Complex Systems: A Case Study in Wind Farm Generation

Department of Computer and Systems Engineering, Universidad de La Laguna, 38200 La Laguna, Tenerife, Spain

Correspondence should be addressed to R. M. Aguilar; raguilar@ull.edu.es

Received 30 November 2017; Accepted 25 February 2018; Published 3 April 2018

Academic Editor: José Manuel Andújar

Copyright © 2018 J. M. Torres and R. M. Aguilar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Linked References

  1. M. Abadi, A. Agarwal, P. Barham et al., TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015.
  2. H. Yañez-Badillo, R. Tapia-Olvera, O. Aguilar-Mejía, and F. Beltran-Carbajal, “On Line Adaptive Neurocontroller for Regulating Angular Position and Trajectory of Quadrotor System,” RIAI - Revista Iberoamericana de Automática e Informática Industrial, vol. 14, no. 2, pp. 141–151, 2017.
  3. I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton, “On the importance of initialization and momentum in deep learning,” in Proceedings of the 30th International Conference on Machine Learning, vol. 28, pp. 1139–1147, PMLR, 2013.
  4. J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research (JMLR), vol. 12, pp. 2121–2159, 2011.
  5. C. J. Willmott and K. Matsuura, “Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance,” Climate Research, vol. 30, no. 1, pp. 79–82, 2005.
  6. R. J. Hyndman and A. B. Koehler, “Another look at measures of forecast accuracy,” International Journal of Forecasting, vol. 22, no. 4, pp. 679–688, 2006.
  7. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  8. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS’10), Society for Artificial Intelligence and Statistics, 2010.
  9. G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-Normalizing Neural Networks,” 2017, http://arxiv.org/abs/1706.02515.
  10. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2323, 1998.
  11. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  12. K. Cho, B. van Merrienboer, C. Gulcehre et al., “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” 2014, http://arxiv.org/abs/1406.1078.
  13. D. L. Marino, K. Amarasinghe, and M. Manic, “Simultaneous generation-classification using LSTM,” in Proceedings of the IEEE Symposium Series on Computational Intelligence, SSCI '16, Greece, December 2016.
  14. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” 2014, http://arxiv.org/abs/1412.3555.
  15. J. Quan, X. Feng, G. Su, and G. Chen, Chinese Journal of Rock Mechanics and Engineering, vol. 26, article 2654, 2007 (in Chinese).