Computational Intelligence and Neuroscience

Explainable and Reliable Machine Learning by Exploiting Large-Scale and Heterogeneous Data


Publishing date
01 Jan 2021
Status
Published
Submission deadline
14 Aug 2020

Lead Editor

1. University of the District of Columbia, Washington, USA
2. Southeast University, Nanjing, China
3. University of Central Florida, Orlando, USA


Description

The exponentially growing availability of data such as images, videos, and speech from myriad sources, including social media and the Internet of Things, is driving the demand for high-performance data analysis algorithms. Deep learning is currently an extremely active research area in machine learning and pattern recognition. It builds computational models from multiple layers of nonlinear processing units that learn to represent data at increasing levels of abstraction. Deep neural networks can implicitly capture the intricate structure of large-scale data and can be deployed on cloud computing and high-performance computing platforms.
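The idea of stacked nonlinear layers producing increasingly abstract representations can be sketched in a few lines. The following is a minimal illustration (not a method from this issue), using randomly initialized weights and a ReLU nonlinearity; all dimensions are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass a batch of inputs through stacked nonlinear layers."""
    h = x
    for W in weights:
        h = relu(h @ W)
    return h

# Layer widths are illustrative: an 8-dim input mapped through
# progressively transformed representations down to 4 features.
dims = [8, 16, 8, 4]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]

x = rng.standard_normal((5, 8))   # batch of 5 inputs
out = forward(x, weights)
print(out.shape)                  # batch size preserved, 4 output features
```

Each layer composes a linear map with a nonlinearity; depth comes from repeating this composition, which is what allows such models to capture intricate structure in large-scale data.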

The deep learning approach has demonstrated remarkable performance across a range of applications, including computer vision, image classification, face/speech recognition, and medical communications. However, deep neural networks yield 'black-box' input-output mappings that can be challenging to explain to users. In the medical, military, and legal fields especially, black-box machine learning techniques can be unacceptable, since decisions that lack interpretability may have a profound impact on people's lives. In addition, many other open problems and challenges remain, such as computational and time costs, repeatability of results, convergence, and the ability to learn from very small amounts of data and to evolve dynamically.
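One common family of post-hoc explanation techniques probes a black-box model by measuring how sensitive its output is to each input feature. The sketch below is purely illustrative (the toy model, feature weights, and finite-difference approach are assumptions, not methods from this call): it treats the model as an opaque function and ranks features by local influence.

```python
import numpy as np

def model(x):
    # Toy "black-box" model: the explainer only sees inputs and outputs.
    w = np.array([0.0, 2.0, -1.0, 0.5])
    return float(np.tanh(x @ w))

def saliency(f, x, eps=1e-4):
    """Estimate each feature's local influence via finite differences."""
    base = f(x)
    scores = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps            # perturb one feature at a time
        scores[i] = (f(xp) - base) / eps
    return np.abs(scores)       # magnitude of local sensitivity

x = np.array([0.3, -0.2, 0.1, 0.4])
s = saliency(model, x)
print(s.argmax())               # index of the most influential feature
```

Gradient- and perturbation-based attributions like this are a simple starting point; quantifying and visualizing interpretability more rigorously is among the topics this issue invites.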

This Special Issue aims to present the latest theoretical and technical advances in machine and deep learning models and algorithms with improved computational efficiency and scalability. We hope that the published papers will improve the understanding and explainability of deep neural networks, strengthen their mathematical foundations, and increase the computational efficiency and stability of the training process through new algorithms that scale.

Potential topics include but are not limited to the following:

  • Supervised, unsupervised, and reinforcement learning
  • Classification, clustering, and optimization for big data analytics
  • Extracting understanding from large-scale and heterogeneous data
  • Dimensionality reduction and analysis of large-scale and complex data
  • Quantifying or visualizing the interpretability of deep neural networks
  • Stability improvement of deep neural network optimization
  • Time series prediction and water flow forecasting using machine and deep learning
  • Novel machine and deep learning approaches in applications such as image/signal processing, business intelligence, games, healthcare, bioinformatics, and security
