Computational Intelligence and Neuroscience

Safe and Fair Machine Learning for Neuroscience


Publishing date: 01 Dec 2022
Status: Published
Submission deadline: 22 Jul 2022

Lead Editor

1. Instituto de Telecomunicações, Aveiro, Portugal
2. University of Fortaleza, Fortaleza, Brazil
3. University of Essex, Essex, UK



Description

As machine learning (ML) moves from purely theoretical models to practical applications, the safety and fairness of ML systems become increasingly relevant. ML has shown growing proficiency in neuroscience, even outperforming humans on certain tasks. By leveraging the cornucopia of neuroscience data becoming available in the big data era, ML is poised to achieve even greater efficacy across many tasks.

However, alongside these capabilities, the safety and fairness issues inherent in ML have profound implications. ML models are algorithms trained on existing data, and as such they often inherit the biases of prior practice. Without vigilance and monitoring, even the most well-intentioned ML model can propagate the biases present in its training data. Moreover, vulnerabilities in ML models can be exploited by attackers, deliberately or inadvertently, to produce pernicious results or to exacerbate power imbalances. Neuroscience explores how the brain executes various perceptual, cognitive, and motor tasks. ML-based artificial intelligence techniques allow big data to be processed by intelligent computers, opening new possibilities for neuroscience, such as understanding how thousands of neurons and nodes communicate to handle massive amounts of information and how the brain generates and controls behavior. Yet researchers often overlook the risks that the safety and fairness issues of ML models pose to this datafication of neuroscience. Attention has been drawn to the deeper, more fundamental problem that safety and fairness issues in ML models exacerbate imbalances and risk, and the consequences are especially severe for neuroscience. Ensuring safety and fairness in ML models for neuroscience is therefore a crucial component of advancing ML in the field.
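To make the adversarial-vulnerability point concrete, the following minimal sketch (in Python, using only NumPy) applies a fast gradient sign method (FGSM) style perturbation to a toy logistic-regression classifier. The weights, input, and label here are entirely hypothetical stand-ins, not taken from any specific neuroscience system; the sketch only illustrates the general mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "model": fixed random weights stand in
# for a trained classifier (hypothetical, for illustration only).
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

# A clean input with ground-truth label y = 1.
x = rng.normal(size=8)
y = 1.0

# For logistic regression with binary cross-entropy loss,
# the gradient of the loss w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM: take a small step in the direction of the gradient's sign,
# which increases the loss while changing each feature only slightly.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

Because the perturbation follows the sign of the loss gradient, the model's confidence in the correct label is guaranteed to drop even though each feature moves by at most epsilon.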

This Special Issue welcomes original research and review articles discussing the fairness and safety implications of using ML in real-world neuroscience systems; proposing methods to detect, prevent, and/or alleviate undesired fairness and safety issues that ML-based systems might exhibit; analyzing the vulnerability of neuroscience ML systems to adversarial attacks and the possible defense mechanisms; and, more generally, any paper that stimulates progress on fair and safe ML in neuroscience. In this way, we hope to ensure that as new technologies such as ML are applied across the continuum of care, technology aids humanity and not the reverse.
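As one hedged illustration of what detecting such fairness issues can look like in practice, the sketch below computes two common group-fairness diagnostics, demographic parity difference and equal opportunity difference, in Python with NumPy. The group labels, ground truth, and the deliberately biased predictor are all synthetic and hypothetical; no method from any submitted article is implied.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between the two groups."""
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return abs(tpr(0) - tpr(1))

# Synthetic audit data: 1000 subjects from two hypothetical groups.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)    # 0/1 group membership
y_true = rng.integers(0, 2, size=1000)   # ground-truth labels
# A deliberately biased predictor: more positives for group 1.
y_pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(int)

print(f"demographic parity diff: {demographic_parity_difference(y_pred, group):.3f}")
print(f"equal opportunity diff:  {equal_opportunity_difference(y_true, y_pred, group):.3f}")
```

Values near zero indicate parity between the groups; an audit along these lines is only a starting point, since the choice of metric is itself part of the measurement and mismeasurement question raised in the topic list below.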

Potential topics include but are not limited to the following:

  • Fairness and bias of ML in neuroscience
  • Application of transparency to safety and fairness of ML in neuroscience
  • Measurement and mismeasurement of safety and fairness
  • Understanding disparities in the predicted outcome
  • Construction of unbiased ML models for neuroscience
  • Bias removal of ML use cases for neuroscience
  • Interpretability of ML in neuroscience
  • Recourse and contestability of biased ML results
  • Emphasis on learning from underrepresented groups
  • Safe reinforcement learning in neuroscience
  • Safe neuroscience robotic control
  • Ethical and legal consequences of using ML in real-world systems
  • Safety of deploying reinforcement learning in neuroscience
  • Any novelties for enabling the safety and fairness of ML in neuroscience

Articles

Special Issue - Volume 2022 - Article ID 7468286 - Research Article
An Improved Math Word Problem (MWP) Model Using Unified Pretrained Language Model (UniLM) for Pretraining
Dongqiu Zhang | Wenkui Li

Special Issue - Volume 2022 - Article ID 7592258 - Research Article
A Generative Adversarial Network Based a Rolling Bearing Data Generation Method Towards Fault Diagnosis
Lin Huo | Huanchao Qi | ... | Ji Li

Special Issue - Volume 2022 - Article ID 4298235 - Research Article
Prediction of Retail Price of Sporting Goods Based on LSTM Network
Hui Ding

Special Issue - Volume 2022 - Article ID 7512289 - Research Article
Application of Bayesian Algorithm in Risk Quantification for Network Security
Lei Wei

Special Issue - Volume 2022 - Article ID 4790736 - Research Article
The Prediction of Sinter Drums Strength Using Hybrid Machine Learning Algorithms
Xinying Ren | Bing Yang | ... | Aimin Yang

Special Issue - Volume 2022 - Article ID 7973446 - Research Article
Research on E-Commerce Database Marketing Based on Machine Learning Algorithm
Nie Chen

Special Issue - Volume 2022 - Article ID 6810649 - Research Article
Intelligent Optimization of Tower Crane Location and Layout Based on Firefly Algorithm
Cong Liu | Fangqing Zhang | ... | Tianyue Zhang

Special Issue - Volume 2022 - Article ID 2174910 - Research Article
Consumer Group Identification Algorithm for Ice and Snow Sports
Ting Zhang | Wei Wang

Special Issue - Volume 2022 - Article ID 1540820 - Research Article
Service Sharing Decisions between Channels considering Bidirectional Free Riding Based on a Dual-Equilibrium Linkage Algorithm
Jing Zheng | Qi Xu

Special Issue - Volume 2022 - Article ID 9241670 - Research Article
A Malicious Domain Detection Model Based on Improved Deep Learning
XiangDong Huang | Hao Li | ... | Tao Xue
