Computational Intelligence and Neuroscience

Computational Overhead vs. Learning Speed and Accuracy of Deep Networks


Publishing date: 01 Jan 2023
Status: Published
Submission deadline: 19 Aug 2022

Lead Editor

1. National University of Computer and Emerging Sciences, Faisalabad, Pakistan
2. Innopolis University, Innopolis, Russia
3. University of Messina, Messina, Italy



Description

Traditional machine learning has been widely used for several complex real-world problems such as image classification, especially for images that contain nested and overlapping regions. Classifying such images is challenging because of the non-linear properties of, and relations among, their pixels. Traditional methods primarily rely on hand-crafted features, which are then fed into a machine learning model for classification. For instance, a Support Vector Machine (SVM) with a non-linear kernel function is widely used, especially when the number of training examples is limited. However, the performance of SVMs and similar non-linear methods is often unsatisfactory because the relationship between the captured intensity values and the corresponding object is non-linear, which makes classification more challenging for such methods. On the other hand, hand-crafted features can effectively represent the various attributes of an image and hence work well with the data they were designed for. However, these features may be inadequate for real data, so it is difficult to balance robustness against discriminability, as the set of optimal features varies considerably across datasets. Furthermore, human involvement in feature design considerably affects the classification process, since designing hand-crafted features requires a high level of domain expertise.
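The snippet below is a minimal, illustrative sketch of such a hand-crafted-feature pipeline, assuming scikit-learn and NumPy are available; the intensity-histogram descriptor and the synthetic data are placeholders, not a prescribed method.

```python
# Hedged sketch: hand-crafted features fed into a non-linear (RBF-kernel) SVM.
# scikit-learn and NumPy are assumed; the histogram descriptor is illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def histogram_features(images, bins=16):
    """Hand-crafted feature: a normalized intensity histogram per image."""
    return np.stack([np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)[0]
                     for img in images])

# Synthetic stand-in data: 200 grayscale 32x32 "images" with two classes.
rng = np.random.default_rng(0)
images = rng.random((200, 32, 32))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)

X = histogram_features(images)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# Non-linear SVM fitted on the hand-crafted features.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```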

To mitigate the limitations of hand-crafted feature design, Deep Learning (DL) has been proposed in recent years and has shown great success for image classification, outperforming traditional methods thanks to its ability to automatically learn both high- and low-level features. More specifically, DL architectures can learn the behavior of any data without prior knowledge of the statistical distribution of the input, and can extract both linear and non-linear features without any pre-specified information. For example, Convolutional Neural Networks (CNNs) extract feature maps that are invariant to local changes in their input. However, due to their large depth, most DL methods suffer from the vanishing or exploding gradient problem.
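As an illustrative sketch (assuming PyTorch; the architecture and the random data are placeholders), the snippet below shows a small CNN whose convolution-and-pooling stages produce learned feature maps without any hand-crafted descriptors.

```python
# Minimal sketch of automatic feature learning with a CNN (PyTorch assumed).
# Convolution + pooling yield feature maps that are largely invariant to small
# local shifts in the input; no hand-crafted features are specified.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, num_classes)

    def forward(self, x):
        maps = self.features(x)                 # learned feature maps
        return self.classifier(maps.flatten(1))

model = TinyCNN()
x = torch.randn(4, 1, 32, 32)                   # a batch of 32x32 grayscale images
logits = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (4,)))
loss.backward()

# Very deep stacks of such layers can suffer from vanishing/exploding gradients;
# inspecting per-layer gradient norms makes this visible.
for name, p in model.named_parameters():
    if p.grad is not None and name.endswith("weight"):
        print(name, float(p.grad.norm()))
```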

In short, the following are some of the main challenges encountered when DL is applied:

- Complex training process: Training and optimizing Deep Networks is, in general, an NP-hard, non-convex problem, and convergence of the optimization process is not guaranteed.

- Limited availability of training data: Deep Networks usually require many training examples; otherwise, their tendency to overfit increases significantly.

- Model’s interpretability: The training procedure is difficult to understand and interpret. The black-box nature of Deep Networks is considered a potential weakness that may affect design decisions for the optimization process.

- High computational burden: Since Deep Networks deal with large amounts of data, training them involves high memory bandwidth, high computational cost, and large storage consumption.

- Training accuracy degradation: It is often assumed that deeper networks extract richer information; however, higher accuracy cannot be achieved simply by adding more layers. As network depth increases, exploding or vanishing gradients become more prominent and hinder the convergence of the model (a minimal numerical illustration of this effect is sketched after this list).
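The following minimal numerical sketch (assuming PyTorch; the depth, width, and sigmoid activation are arbitrary choices for illustration) shows how gradients of the earliest layers shrink as depth grows, which is the degradation mechanism described in the last item above.

```python
# Hedged sketch of the vanishing-gradient effect (PyTorch assumed): a deep stack
# of sigmoid layers shrinks the gradients reaching the earliest layers.
import torch
import torch.nn as nn

depth = 30
layers = []
for _ in range(depth):
    layers += [nn.Linear(32, 32), nn.Sigmoid()]
net = nn.Sequential(*layers, nn.Linear(32, 1))

x = torch.randn(16, 32)
loss = net(x).pow(2).mean()
loss.backward()

# Gradient norm of the first vs. the last hidden layer: the first is typically
# orders of magnitude smaller, which slows or stalls its learning.
first = float(net[0].weight.grad.norm())
last = float(net[2 * depth - 2].weight.grad.norm())
print(f"first-layer grad norm: {first:.3e}, last-layer grad norm: {last:.3e}")
```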

This Special Issue welcomes original research and review articles that focus on the main challenges of traditional machine learning and demonstrate how Deep Learning can address these challenges and the related problems mentioned above.

Potential topics include but are not limited to the following:

  • Domain randomization and adaptation, for example, training a model on one type of data and then applying it to another, or widening the range of training parameters to make the model generalize better
  • Transformers, for example, deep networks that adopt the self-attention mechanism, differentially weighting the significance of each part of the input data (a minimal sketch of self-attention follows this list)
  • Different learning strategies, for example, iteratively enlarging the training set by systematically selecting training data so as to train a deep model effectively
  • Multi-level and multi-scale feature fusion, for example, gradually reducing one type of feature size while increasing another for fusion
  • Multi-level cost functions for alleviating the vanishing gradient problem
  • Traditional/LiDAR/multispectral/hyperspectral images for feature learning and classification
  • Fuzzy intelligence for tuning parameters of computer vision and pattern recognition problems
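As a minimal sketch of the self-attention mechanism referred to in the Transformers topic above (assuming PyTorch; the single-head, unbatched formulation and the random weights are simplifications for illustration):

```python
# Hedged sketch of scaled dot-product self-attention (PyTorch assumed).
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model). Each position attends to every position,
    weighting them by a softmax over query-key similarity."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / math.sqrt(k.shape[-1])   # pairwise similarity
    weights = torch.softmax(scores, dim=-1)       # how much each part matters
    return weights @ v                            # weighted mix of the values

d_model = 16
x = torch.randn(10, d_model)                      # 10 input tokens/patches
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                                  # torch.Size([10, 16])
```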

Articles

All of the following articles appear in this Special Issue of Computational Intelligence and Neuroscience, Volume 2023.

  • Retracted: Emotion Analysis Model of Microblog Comment Text Based on CNN-BiLSTM (Retraction, Article ID 9828465)
  • Retracted: Artificial Intelligence-Based Feature Analysis of Ultrasound Images of Liver Fibrosis (Retraction, Article ID 9897287)
  • Retracted: Real-Time Twitter Spam Detection and Sentiment Analysis using Machine Learning and Deep Learning Techniques (Retraction, Article ID 9810910)
  • Retracted: Magnetic Tile Surface Defect Detection Methodology Based on Self-Attention and Self-Supervised Learning (Retraction, Article ID 9890718)
  • Retracted: Machine Learning for Diagnosis of Systemic Lupus Erythematosus: A Systematic Review and Meta-Analysis (Retraction, Article ID 9871068)
  • Retracted: Study on Vertical Joint Performance of Single-Faced Superposed Shear Wall Based on Finite Element Analysis and Calculation (Retraction, Article ID 9818249)
  • Retracted: IoT-Based Hybrid Ensemble Machine Learning Model for Efficient Diabetes Mellitus Prediction (Retraction, Article ID 9867270)
  • Retracted: Design of IoT Gateway for Crop Growth Environmental Monitoring Based on Edge-Computing Technology (Retraction, Article ID 9870153)
  • Retracted: Comparative Analysis of Aesthetic Emotion of Dance Movement: A Deep Learning Based Approach (Retraction, Article ID 9860329)
  • Impact of Tree Cover Loss on Carbon Emission: A Learning-Based Analysis, by Abdul Haleem Butt, Muhammad Ali Jamshed, ..., Masoor Ur-Rehman (Research Article, Article ID 8585839)
