Computational Intelligence and Neuroscience

Using Neuroevolution to Design Neural Networks


Publishing date
01 May 2020
Status
Published
Submission deadline
13 Dec 2019

Lead Editor

1 Nova Information Management School (NOVA IMS), Lisbon, Portugal

2 Università di Trieste, Trieste, Italy

3 Instituto Tecnologico de Tijuana, Tijuana, Mexico

4 University of Trieste, Trieste, Italy


Description

Deep learning (DL) has gained popularity in the field of machine learning, and DL-based models are now used to address complex problems across many different domains. In DL, a neural network (NN) is trained with variants of the stochastic gradient descent algorithm.

A different and not yet fully explored approach comes from the field of neuroevolution, which uses evolutionary algorithms to optimize NNs. Neuroevolution has the potential to outperform DL-based models because it can optimize not only the weights but also the entire NN architecture, its hyperparameters, and the learning algorithm itself. Independence from gradient descent allows neuroevolution to discover NN configurations that are novel and diverse, two properties deemed important in the search for optimal solutions. Moreover, the growing availability of computational resources gives researchers the possibility to fully exploit the potential of neuroevolution and create NN architectures tailored to the particular problems to be solved. Developments in neuroevolution also allow researchers to better understand the relationship between the human brain and the computational intelligence models inspired by its functioning, which can lead to the development of a new generation of NNs.
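As a minimal illustration of the idea (not taken from the call itself), the weights of a tiny network can be optimized with a simple (1+λ) evolution strategy instead of gradient descent; the network shape, dataset, and all parameter values below are illustrative choices:

```python
import math
import random

random.seed(0)

# Toy dataset: XOR, a classic non-linearly-separable problem.
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

N_WEIGHTS = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    """Tiny 2-2-1 network with tanh hidden units and a linear output."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h0 + w[7] * h1 + w[8]

def loss(w):
    """Mean squared error over the toy dataset."""
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(generations=300, offspring=20, sigma=0.3):
    """(1+lambda) evolution strategy: mutate the parent with Gaussian
    noise and keep a child only if it improves the loss (elitism)."""
    parent = [random.gauss(0.0, 1.0) for _ in range(N_WEIGHTS)]
    parent_loss = loss(parent)
    for _ in range(generations):
        for _ in range(offspring):
            child = [wi + random.gauss(0.0, sigma) for wi in parent]
            child_loss = loss(child)
            if child_loss < parent_loss:
                parent, parent_loss = child, child_loss
    return parent, parent_loss

best, best_loss = evolve()
print(f"best mean squared error on XOR: {best_loss:.4f}")
```

Because selection is elitist, the loss never increases across generations; no gradient information is used at any point, which is what makes this family of methods applicable to non-differentiable architectures and hyperparameters.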

This special issue invites original research articles that focus on the design and implementation of new neuroevolution methods and techniques and consider their applications for solving complex optimization problems. The aim of this special issue is to contribute to the development of new methods and advances in the field of neuroevolution, both in the theoretical sense and in the application of these results. Research that discusses new generations of NNs, such as fuzzy NNs, attention-based NNs, and modular NNs, is particularly encouraged. Review articles that summarize the state of the art in neuroevolution are also welcome.

Potential topics include but are not limited to the following:

  • New methods and algorithms for learning the weights of a NN which overcome the limitations of the traditional gradient descent approach
  • New methods and techniques in evolutionary computation that can improve the design process of a NN in terms of architecture and hyperparameters
  • Definitions of methods that can efficiently explore the (infinite) search space of NN architectures
  • Applications of newly-defined neuroevolution techniques for addressing complex real-world optimization problems
  • Methods that overcome the limitations of existing neuroevolution methods, providing a significant step towards the automatic design of a NN architecture
  • Hybrid methods that combine evolutionary computation techniques with optimization paradigms
  • Analysis of the complexity of NN evolutionary algorithms (EAs)
  • Run-time analysis of EAs for NN optimization
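The architecture-search topics above can be sketched with a toy evolutionary loop over network encodings; here a genome is simply a tuple of hidden-layer widths, and the fitness function is a stand-in (in practice it would train each candidate network and return its validation accuracy); all names and values are illustrative:

```python
import random

random.seed(1)

def mutate(arch, max_width=64):
    """Randomly grow, shrink, or resize one hidden layer."""
    arch = list(arch)
    op = random.choice(["add", "remove", "resize"])
    if op == "add" and len(arch) < 4:
        arch.insert(random.randrange(len(arch) + 1), random.randint(2, max_width))
    elif op == "remove" and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))
    else:
        i = random.randrange(len(arch))
        arch[i] = max(2, min(max_width, arch[i] + random.randint(-8, 8)))
    return tuple(arch)

def fitness(arch):
    """Placeholder fitness: a real system would train the candidate
    network and score it on held-out data. Here we simply reward
    architectures whose total width is near a fictitious optimum,
    with a small penalty per layer."""
    return -abs(sum(arch) - 40) - 0.5 * len(arch)

def evolve(pop_size=20, generations=50):
    """Truncation selection: keep the best half, refill with mutants."""
    pop = [tuple(random.randint(2, 64) for _ in range(random.randint(1, 3)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print("best architecture (hidden-layer widths):", best)
```

The point of the sketch is that the search space of architectures is discrete and non-differentiable, so mutation-and-selection can explore it directly where gradient descent cannot.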
