Computational Intelligence and Neuroscience

Compression of Deep Learning Models for Resource-Constrained Devices


Publishing date: 01 Mar 2022
Status: Closed
Submission deadline: 05 Nov 2021

  • Bennett University, Greater Noida, India
  • Brunel University London, London, UK

This issue is now closed for submissions.

Description

In recent years, deep learning has become popular in research due to its applicability across many industries. For instance, it is used in healthcare, security surveillance, self-driving cars, human activity recognition, recommender systems, image quality enhancement, transportation, prediction, and forecasting. Before the introduction of deep learning, prediction and decision-making were typically achieved using statistical methods and classical machine learning. Deep learning algorithms have successfully solved complex real-time problems that were previously out of reach for traditional machine learning and computer vision methods.

Deep learning has also gained popularity because it automatically extracts the important features from training data, and these features help the model make appropriate decisions. However, deep learning still faces challenges, most notably its demand for high computational power and resources. Deep learning models are computationally intensive and require large amounts of storage, so they are not well suited for edge devices. Users cannot access high-end computational resources in real-time settings, from remote locations, or while mobile. Hence, these models require significant improvement: there is a need for lightweight deep learning models with better inference time so that they can run on resource-constrained devices. Recent research has shown significant improvements in compression techniques such as pruning, lossy weight encoding, parameter sharing, multilayer pruning, and low-rank factorization. Two broad approaches exist for compressing deep learning models: compression during training and compression of an already trained model. In addition, various optimization techniques can be applied to model compression for resource-constrained devices, including genetic algorithms, swarm optimization, swarm intelligence, nature-inspired optimization, game-theoretic approaches, chemical reaction optimization, and differential evolution.
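
As an illustration of the post-training route, the following is a minimal sketch of magnitude-based weight pruning in NumPy; the layer shape and the 90% sparsity target are arbitrary assumptions for the example, not values drawn from this issue.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity` of them are removed."""
    k = int(weights.size * sparsity)               # number of weights to prune
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only weights above the threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))                    # stand-in for one dense layer's weights
w_pruned = magnitude_prune(w, sparsity=0.9)        # target: remove ~90% of the weights
print(f"Non-zero fraction after pruning: {np.count_nonzero(w_pruned) / w.size:.2f}")
```

In practice, such pruning is usually followed by a short fine-tuning step and by storing the sparse weights in a compressed format so that the storage saving is actually realized.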

The aim of this Special Issue is to bring together original research and review articles on the compression of deep learning models for resource-constrained devices. We welcome submissions from researchers working on the development and deployment of deep learning models on edge devices (e.g., Raspberry Pi, Google Edge Tensor Processing Unit (Google TPU), NVIDIA Jetson Nano Developer Kit, Android devices, etc.). This Special Issue invites original research discussing innovative architectures and training methods for effective and efficient compression.
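
For concreteness, the snippet below is a minimal sketch, assuming TensorFlow is installed, of exporting a toy Keras model with post-training dynamic-range quantization to the TensorFlow Lite format commonly used on devices such as the Raspberry Pi or Android phones; deployment to an Edge TPU would additionally require full integer quantization and the Edge TPU compiler. The toy model and the output file name are placeholders, not artifacts from this issue.

```python
import tensorflow as tf

# Toy classifier standing in for a trained network.
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Post-training dynamic-range quantization: weights are stored as 8-bit integers.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_dynamic_int8.tflite", "wb") as f:   # hypothetical output path
    f.write(tflite_model)
print(f"TFLite model size: {len(tflite_model) / 1024:.1f} KiB")
```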

Potential topics include but are not limited to the following:

  • Model optimization and compression for medical applications
  • Model optimization and compression for Internet of Things (IoT) and edge applications
  • Model optimization and compression for deep learning algorithms in security analysis applications
  • New architectures for model compression, including pruning, quantization, knowledge distillation, neural architecture search (NAS), etc. (see the distillation sketch after this list)
  • Generalized lightweight architectures for deep learning problems
  • Compression approaches for deep reinforcement learning
  • Efficient use of computation resources for executing deep learning models
  • Architectures and models that work with less training data on remote applications
  • Compressed deep learning models for explainable artificial intelligence
  • Compressed and accelerated versions of popular pre-trained architectures (e.g., AlexNet, OxfordNet (VGG16), residual neural networks (ResNet), etc.)
  • Compression and acceleration of object detection models such as "You Only Look Once" (YOLO) and the single-shot detector (SSD)
  • Acceleration of the UNet and VNet architectures
  • Methodologies and frameworks for handling storage constraints in deep learning models
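
As referenced in the topic on pruning, quantization, and knowledge distillation above, the following is a minimal sketch of a standard knowledge-distillation loss, written with PyTorch as an illustrative assumption; the temperature, loss weighting, and random tensors are placeholders rather than settings from any submitted work.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft teacher targets with the ordinary cross-entropy on hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                    # rescale so gradients stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors standing in for a batch of model outputs.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```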

Articles

  • Retracted: Online Course Model of Social and Political Education Using Deep Learning. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9879576 (Retraction).
  • Retracted: Application of Multilayer Perceptron Genetic Algorithm Neural Network in Chinese-English Parallel Corpus Noise Processing. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9796358 (Retraction).
  • Retracted: Legal Text Recognition Using LSTM-CRF Deep Learning Model. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9808526 (Retraction).
  • Retracted: Internet of Things and Edge Computing Model Optimization in Physical Training Status and Countermeasures in College Sports Basketball Optional Course. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9835268 (Retraction).
  • Retracted: Optimization Research on Deep Learning and Temporal Segmentation Algorithm of Video Shot in Basketball Games. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9785062 (Retraction).
  • Retracted: Application of Unsupervised Migration Method Based on Deep Learning Model in Basketball Training. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9817853 (Retraction).
  • Retracted: Optimization of College English Classroom Teaching Efficiency by Deep Learning SDD Algorithm. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9831325 (Retraction).
  • Retracted: Application of Neural Network Model Based on Multispecies Evolutionary Genetic Algorithm to Planning and Design of Diverse Plant Landscape. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9812745 (Retraction).
  • Retracted: Application of the Deep Learning Algorithm and Similarity Calculation Model in Optimization of Personalized Online Teaching System of English Course. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9868135 (Retraction).
  • Retracted: Construction of Financial Management Early Warning Model Based on Improved Ant Colony Neural Network. Computational Intelligence and Neuroscience, Special Issue, Volume 2023, Article ID 9863014 (Retraction).

Journal metrics

  • Acceptance rate: 35%
  • Submission to final decision: 51 days
  • Acceptance to publication: 27 days