Computational Intelligence and Neuroscience

Compression of Deep Learning Models for Resource-Constrained Devices


Publishing date: 01 Mar 2022
Status: Published
Submission deadline: 05 Nov 2021

1Bennett University, Greater Noida, India

2Brunel University London, London, UK



Description

In recent years, deep learning has become popular in research because of its applicability across many industries, including healthcare, security surveillance, self-driving cars, human activity recognition, recommender systems, image quality enhancement, transportation, prediction, and forecasting. Before the introduction of deep learning, prediction and decision-making were achieved using statistical methods and machine learning. Deep learning algorithms have successfully solved complex real-time problems that were previously out of reach for classical machine learning and computer vision methods.

Deep learning has also gained popularity because it automatically extracts the important features from training data, and these features help to make appropriate decisions. However, deep learning faces challenges such as the demand for high computational power and resources. Deep learning models are computationally intensive and require large amounts of storage, so they are not well suited to edge devices. Users cannot access high-computation resources in real-time settings from remote locations or while mobile. Hence, these models require significant improvement: they must become lightweight, with better inference times, so that they are compatible with resource-constrained devices. Recent research has shown significant improvements in compression techniques such as pruning, lossy weight encoding, parameter sharing, multilayer pruning, and low-rank factorization. Two broad approaches exist for compressing deep learning models: compression during training and compression of an already-trained model. In addition, various optimization techniques are available for model optimization and compression on resource-constrained devices, including genetic algorithms, swarm optimization and other swarm intelligence methods, nature-inspired optimization, game-theoretic approaches, chemical reaction optimization, and differential evolution.
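As an illustration of the pruning family of techniques mentioned above, the sketch below applies global magnitude pruning to a single weight matrix: the smallest-magnitude fraction of weights is zeroed so the tensor can be stored sparsely. This is a minimal NumPy example, not any specific published method; the function name and the 90% sparsity target are illustrative choices.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in for a trained layer
w_pruned = magnitude_prune(w, sparsity=0.9)
achieved = 1.0 - np.count_nonzero(w_pruned) / w_pruned.size  # close to 0.9
```

In practice such pruning is usually followed by fine-tuning to recover accuracy, and the surviving weights are stored in a sparse format to realize the memory savings.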

The aim of this Special Issue is to bring together original research and review articles discussing the compression of deep learning models for resource-constrained devices. We welcome submissions from researchers working on the development and deployment of deep learning models on edge devices (e.g., Raspberry Pi, Google Edge Tensor Processing Unit (Edge TPU), NVIDIA Jetson Nano Developer Kit, and Android devices). This Special Issue invites original research discussing innovative architectures and training methods for effective and efficient compression.

Potential topics include but are not limited to the following:

  • Model optimization and compression for medical applications
  • Model optimization and compression for Internet of Things (IoT) and edge applications
  • Model optimization and compression for deep learning algorithms in security analysis applications
  • New architectures for model compression, including pruning, quantization, knowledge distillation, neural architecture search (NAS), etc.
  • Generalized lightweight architectures for deep learning problems
  • Compression approaches for deep reinforcement learning
  • Efficient use of computation resources for executing deep learning models
  • Architectures and models that work with less training data on remote applications
  • Compressed deep learning models for explainable artificial intelligence
  • Compressed and accelerated versions of well-known pre-trained architectures (e.g., AlexNet, OxfordNet (VGG16), residual neural networks (ResNet), etc.)
  • Compression and acceleration of object detection models such as "You Only Look Once" (YOLO) and the single-shot detector (SSD)
  • Acceleration of the U-Net and V-Net architectures
  • Methodologies and frameworks for developing deep learning models under storage constraints
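Post-training quantization, one of the compression techniques listed above, can be sketched in a few lines: weights trained in float32 are mapped to int8 with a single per-tensor scale, cutting storage by 4x at the cost of a small, bounded rounding error. This is a generic symmetric-quantization sketch under illustrative assumptions (per-tensor scaling, random weights), not the method of any particular framework.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor post-training quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # assumes a nonzero tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor for inference."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((128, 128)).astype(np.float32)  # stand-in for a trained layer
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is 4x smaller; the worst-case rounding error is scale / 2
max_err = np.abs(w - w_hat).max()
```

Real deployments typically refine this with per-channel scales and a calibration set for activations, but the storage/accuracy trade-off shown here is the core idea.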

Articles

  • Retracted: Efficient Approach towards Detection and Identification of Copy Move and Image Splicing Forgeries Using Mask R-CNN with MobileNet V1 (Special Issue, Volume 2023, Article ID 9786963)
  • Retracted: Study on the Influence of Compression Ratio on the Rail Contact Fatigue Resistance Property and Its Mechanism (Special Issue, Volume 2023, Article ID 9842141)
  • Retracted: Vision Sensor-Based Real-Time Fire Detection in Resource-Constrained IoT Environments (Special Issue, Volume 2023, Article ID 9846578)
  • Retracted: Deep Learning Model for the Automatic Classification of White Blood Cells (Special Issue, Volume 2023, Article ID 9873421)
  • Retracted: An Ensemble Deep Learning Model for Automatic Modulation Classification in 5G and Beyond IoT Networks (Special Issue, Volume 2023, Article ID 9836046)
  • Retracted: Telemetry Data Compression Algorithm Using Balanced Recurrent Neural Network and Deep Learning (Special Issue, Volume 2023, Article ID 9830360)
  • Retracted: Machine Translation System Using Deep Learning for English to Urdu (Special Issue, Volume 2023, Article ID 9792304)
  • Retracted: Machine Learning and IoT-Based Waste Management Model (Special Issue, Volume 2023, Article ID 9858797)
  • Retracted: A Multiperson Pose Estimation Method Using Depthwise Separable Convolutions and Feature Pyramid Network (Special Issue, Volume 2023, Article ID 9876154)
  • Retracted: A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms (Special Issue, Volume 2023, Article ID 9865750)

All articles above were published in Computational Intelligence and Neuroscience.
