Computational Intelligence and Neuroscience

Interpretable Methods of Artificial Intelligence Algorithms


Publishing date: 01 Jan 2023
Status: Published
Submission deadline: 02 Sep 2022

Lead Editor and Guest Editors

1. Beijing Jiaotong University, Beijing, China
2. East China Jiaotong University, Nanchang, China
3. Kookmin University, Seoul, Republic of Korea



Description

The application of artificial intelligence technology, represented by deep learning, has greatly improved the efficiency with which information is used and mined for value, and has profoundly reshaped business practices across many fields. Among the widely debated issues surrounding AI, including ethics, algorithmic discrimination, algorithmic correctness, and security, the interpretability of AI algorithms, and of deep learning algorithms in particular, has come to the fore.

The development of human rationality has led us to believe that if a judgment or decision is interpretable, it is easier to understand its strengths and weaknesses, assess its risks, know to what extent and in which contexts it can be trusted, and identify ways to improve it continuously, so as to maximize consensus, minimize risk, and advance the corresponding field. The paradigm of reasoning and symbolic thinking, which originated before the age of artificial intelligence and has evolved alongside it, can help in developing new interpretable methods for machine learning that reveal how these models arrive at their answers. Such methods are crucial for building robust, evidence-based, explainable models and for determining their performance and reliability.

Applying interpretability techniques to deep learning models built from a variety of neural network architectures provides explanations and empirical guidelines on how a model reaches each of its decisions, shedding light on the neural network "black box" and on how predictions are generated. Interpretability techniques vary broadly: they may be model-specific or model-agnostic, and local or global in scope. In all their varieties, these techniques are crucial because they guard against embedded bias and reduce the need for intensive debugging. They play an increasingly important role in science, engineering, and society by providing precise answers about how machine learning algorithms reach their decisions.
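To make these distinctions concrete, here is a minimal sketch of permutation feature importance, a well-known model-agnostic, global interpretability technique: it measures how much a model's score drops when each feature column is shuffled, and it needs nothing from the model beyond its predictions. The helper names, the toy ThresholdModel, and the synthetic data are hypothetical stand-ins for illustration, not taken from any article in this issue.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic, global importance: mean score drop when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical black box: its prediction depends only on feature 0.
class ThresholdModel:
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
accuracy = lambda y_true, y_pred: float(np.mean(y_true == y_pred))

print(permutation_importance(ThresholdModel(), X, y, accuracy))
# Expect a large score drop for feature 0 and near-zero drops for features 1 and 2.
```

A local, model-agnostic counterpart (for example, LIME-style perturbation of a single instance) applies the same idea to one prediction rather than the whole dataset, which is exactly the global/local distinction listed among the topics below.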

This Special Issue covers several of the most common interpretability methods, their relative advantages and disadvantages, their taxonomy, and their application in various fields. We welcome original research and review articles.

Potential topics include but are not limited to the following:

  • Interpretability methods for explaining complex black-box machine learning models
  • Interpretability methods for creating white-box machine learning models
  • Interpretability methods for promoting fairness
  • Interpretability methods for analyzing the sensitivity of machine learning model predictions
  • Model-agnostic interpretability methods
  • Model-specific interpretability methods
  • Global and local interpretability methods
  • Transparency techniques for interpretability methods
  • Explainability techniques for interpretability methods
  • Retrospective approaches to interpretability methods
  • Prospective approaches to interpretability methods
  • Applications of interpretable machine learning methods (e.g., evidence-based healthcare, logistics)

Articles

[Retracted] A New Decision-Making GMDH Neural Network: Effective for Limited and Fuzzy Data
Xiaofeng Hong | Yonghui Zhao | ... | Nasr Al Din Ide
Research Article | Special Issue | Volume 2022 | Article ID 2133712

An Analysis of the Operation Factors of Three PSO-GA-ED Meta-Heuristic Search Methods for Solving a Single-Objective Optimization Problem
Ali Fozooni | Osman Kamari | ... | Amin Valizadeh
Research Article | Special Issue | Volume 2022 | Article ID 2748215

A Meta-Path-Based Evaluation Method for Enterprise Credit Risk
Marui Du | Yue Ma | Zuoquan Zhang
Research Article | Special Issue | Volume 2022 | Article ID 1783445

Multi-Task Assignment Method of the Cloud Computing Platform Based on Artificial Intelligence
Yongchang Zhang | Panpan Geng
Research Article | Special Issue | Volume 2022 | Article ID 1789490

[Retracted] A Crop Growth Prediction Model Using Energy Data Based on Machine Learning in Smart Farms
Saravanakumar Venkatesan | Jonghyun Lim | Yongyun Cho
Research Article | Special Issue | Volume 2022 | Article ID 2648695

Seismic Facies Analysis Using the Multiattribute SOM-K-Means Clustering
Zhaolin Zhu | Xin Chen | ... | Rui Du
Research Article | Special Issue | Volume 2022 | Article ID 1688233

A Novel Semiautomatic Interpretation Model for Impulse Neutron Oxygen Activation Time Spectrum Data
Yong Dong | Mengxia Li | Ruiquan Liao
Research Article | Special Issue | Volume 2022 | Article ID 7395529

Applications of the Multiattribute Decision-Making for the Development of the Tourism Industry Using Complex Intuitionistic Fuzzy Hamy Mean Operators
Abrar Hussain | Kifayat Ullah | ... | Haolun Wang
Research Article | Special Issue | Volume 2022 | Article ID 8562390

[Retracted] Method for Quantum Denoisers Using Convolutional Neural Network
Bong-Hyun Kim | S. Madhavi
Research Article | Special Issue | Volume 2022 | Article ID 4885897

Gravitational Wave-Signal Recognition Model Based on Fourier Transform and Convolutional Neural Network
Hao Zhang | Zhijun Zhu | ... | Zaher Mundher Yaseen
Research Article | Special Issue | Volume 2022 | Article ID 5892188
