Computational Intelligence and Neuroscience
Volume 2015, Article ID 818243, 13 pages
http://dx.doi.org/10.1155/2015/818243
Review Article

On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

Department of Engineering, Roma Tre University, Via Vito Volterra 62, 00146 Rome, Italy

Received 7 May 2015; Revised 16 August 2015; Accepted 17 August 2015

Academic Editor: Saeid Sanei

Copyright © 2015 Antonino Laudani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The problem of choosing a suitable activation function for the hidden layer of a feed forward neural network is comprehensively reviewed. Since the nonlinear component of a neural network is the main contributor to the network's mapping capabilities, the different choices that may lead to enhanced performance, in terms of training, generalization, or computational cost, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a network configuration between different activation functions without altering the network mapping capabilities is presented.
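As an illustration of the kind of conversion mentioned in the last sentence of the abstract, the sketch below shows how a single-hidden-layer network with tanh hidden units can be rewritten as an equivalent network with logistic-sigmoid hidden units by rescaling weights and biases, using the identity tanh(z) = 2·sigmoid(2z) − 1. This is only a minimal, hedged example of the general idea; the specific pair of activation functions, the network sizes, and all variable names are assumptions for illustration and are not taken from the paper's own procedure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical single-hidden-layer network with tanh hidden units:
#   y = W2 @ tanh(W1 @ x + b1) + b2
n_in, n_hid, n_out = 3, 5, 2
W1 = rng.standard_normal((n_hid, n_in))
b1 = rng.standard_normal(n_hid)
W2 = rng.standard_normal((n_out, n_hid))
b2 = rng.standard_normal(n_out)

def net_tanh(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Equivalent network with logistic-sigmoid hidden units.
# From tanh(z) = 2*sigmoid(2*z) - 1: double the hidden-layer weights
# and biases, rescale the output layer, and absorb the constant "-1"
# term into the output bias, so the overall mapping is unchanged.
W1s, b1s = 2.0 * W1, 2.0 * b1
W2s = 2.0 * W2
b2s = b2 - W2 @ np.ones(n_hid)

def net_sigmoid(x):
    return W2s @ sigmoid(W1s @ x + b1s) + b2s

x = rng.standard_normal(n_in)
print(np.allclose(net_tanh(x), net_sigmoid(x)))  # True: identical mapping
```

The same algebraic trick works for any pair of activation functions related by an affine transformation of input and output; for functions not related in this way, the mapping can generally only be approximated, which is part of what makes the choice of activation function consequential.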