Applied Computational Intelligence and Soft Computing

Recent Developments in Artificial Neural Network Structure Optimization and Parameter Initialization


Publishing date: 01 Jan 2025
Status: Open
Submission deadline: 27 Sep 2024

Lead Editors

1. National Institute of Technology Kurukshetra, Kurukshetra, India
2. Netaji Subhas University of Technology, New Delhi, India
3. Universidad Internacional De La Rioja, Logroño, Spain




Call for papers

This Issue is now open for submissions.

Papers are published upon acceptance, regardless of the Special Issue publication date.


Description

In recent years, significant progress has been made in both the theory and applications of artificial neural networks (ANNs) and their hybrid variants. They have been successfully applied to real-world problems such as modeling, adaptive control, and classification, largely thanks to techniques that tune the parameters of these neural network-based models. ANN models and their variants often require a large number of parameters and mathematical operations to make accurate predictions; designing optimal architectures for these networks therefore speeds up their learning process and improves their generalization capability. Another important factor affecting the accuracy and training time of these models is the initial setting of their parameter values: parameters that are initialized well reduce training time and improve model performance.

Neural network pruning simplifies a network by eliminating some of its components, such as weights, activations, or entire layers. The goal of pruning is to reduce the number of parameters, operations, and memory required by the network without compromising its functionality or accuracy. In some instances, the network may begin small and then grow to meet the requirements of the problem; parameters can thus be eliminated or added to improve the performance of the network.
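To make the pruning idea above concrete, below is a minimal sketch of magnitude-based weight pruning, one common member of this family. It assumes a single dense layer stored as a NumPy matrix; the layer shape, the 0.8 pruning ratio, and the magnitude_prune helper are illustrative choices, not a prescribed method.

```python
# Minimal sketch: magnitude-based weight pruning of one dense layer.
# Assumptions: weights stored as a NumPy matrix; the shape and pruning
# ratio below are illustrative only.
import numpy as np

def magnitude_prune(weights: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `prune_ratio` of weights."""
    flat = np.abs(weights).ravel()
    k = int(prune_ratio * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))        # hypothetical layer weights
W_pruned = magnitude_prune(W, 0.8)   # keep roughly the largest 20%
print(f"nonzero before: {np.count_nonzero(W)}, after: {np.count_nonzero(W_pruned)}")
```

In practice the resulting sparsity mask is typically reapplied during fine-tuning so that pruned weights stay at zero while the surviving weights recover accuracy.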

Pruning and growing a neural network, and initializing its parameters, are not easy tasks and involve challenges and trade-offs. For example, finding the pruning ratio that best balances complexity and functionality can be difficult and may require trial-and-error or adaptive methods. Additionally, pruning and growing affect not only the efficiency of the network but also its quality and reliability. Evaluating the impact of pruning may therefore require rigorous testing and validation, or benchmarks that capture the trade-offs between speed, size, accuracy, and robustness, as illustrated in the toy sketch below.
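As a toy illustration of that evaluation problem, the sketch below sweeps several pruning ratios over a single random linear layer and reports how far the pruned outputs drift from the dense ones. The random layer, inputs, and drift metric are stand-ins: a real benchmark would measure task accuracy, latency, and memory on held-out data.

```python
# Minimal sketch: sweeping pruning ratios to expose the size/accuracy
# trade-off. Everything here (layer, inputs, drift metric) is a stand-in
# for a real model and validation benchmark.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))    # hypothetical dense layer weights
X = rng.normal(size=(100, 64))   # stand-in validation inputs
dense_out = X @ W

for ratio in (0.0, 0.25, 0.5, 0.75, 0.9):
    # Keep only weights at or above the ratio-th magnitude quantile.
    mask = np.abs(W) >= np.quantile(np.abs(W), ratio)
    W_pruned = W * mask
    drift = np.linalg.norm(X @ W_pruned - dense_out) / np.linalg.norm(dense_out)
    print(f"ratio={ratio:.2f}  nonzero={np.count_nonzero(W_pruned):4d}  "
          f"relative output drift={drift:.3f}")
```

Plotting nonzero parameter count against the chosen quality metric over such a sweep is one simple way to pick an operating point on the complexity/functionality curve.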

The goal of this Special Issue is to gather papers that offer novel insights, innovative approaches, and advances in methods for optimizing the structure of neural models, as well as frameworks for initializing their parameters to reduce training time. We welcome both original research and review articles.

Potential topics include but are not limited to the following:

  • Neural network-based model pruning techniques for efficient model compression using evolutionary and nature-inspired algorithms
  • Detailed methodological approaches in evolutionary and nature-inspired algorithms for model compression
  • Magnitude and structured pruning methods and their novel combinations
  • Application of structurally optimized neural models in modeling, adaptive control, and time-series forecasting
  • Development of novel methods based on zero, random, and Xavier initialization for setting model parameters (see the sketch after this list)
  • Structure optimization and initialization methods for neural-based models
  • Comparisons of various structural optimization strategies for neural-based models
  • Structurally optimized recurrent neural models such as long short-term memory (LSTM), as well as locally or fully connected neural network models
  • Application of constructive algorithms to synthesize arbitrarily connected neural networks
  • Combining novel weight initialization methods and constructive-covering algorithms to configure a single feedforward neural network
  • Self-organizing neural-based methods and their stability analysis
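For the topic above on zero, random, and Xavier initialization, the following minimal sketch contrasts the three baseline schemes; the layer sizes and the fixed 0.01 scale of the plain random initializer are illustrative assumptions. Xavier (Glorot) uniform initialization draws weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)), which keeps activation variance roughly constant across layers.

```python
# Minimal sketch: zero, plain random, and Xavier (Glorot) uniform
# initialization for one dense layer. Layer sizes and the 0.01 random
# scale are illustrative assumptions.
import numpy as np

def init_weights(fan_in: int, fan_out: int, scheme: str,
                 rng: np.random.Generator) -> np.ndarray:
    if scheme == "zero":
        # All-zero init: every hidden unit computes the same function,
        # so gradient descent cannot break the symmetry.
        return np.zeros((fan_in, fan_out))
    if scheme == "random":
        # Fixed-scale Gaussian init, independent of layer size.
        return rng.normal(scale=0.01, size=(fan_in, fan_out))
    if scheme == "xavier":
        # Scale shrinks with layer width, stabilizing activation variance.
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))
    raise ValueError(f"unknown scheme: {scheme}")

rng = np.random.default_rng(42)
for scheme in ("zero", "random", "xavier"):
    W = init_weights(256, 128, scheme, rng)
    print(f"{scheme:>6}: std = {W.std():.4f}")
```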
Applied Computational Intelligence and Soft Computing
Journal metrics

Acceptance rate: 8%
Submission to final decision: 125 days
Acceptance to publication: 14 days
CiteScore: 3.400
Journal Citation Indicator: 0.460
Impact Factor: 2.9
