Computational Intelligence and Neuroscience

Safe and Fair Machine Learning for Neuroscience


Publishing date: 01 Dec 2022
Status: Published
Submission deadline: 22 Jul 2022

Lead Editor

1Instituto de Telecomunicações, Aveiro, Portugal

2University of Fortaleza, Fortaleza, Brazil

3University of Essex, Essex, UK


Description

As machine learning (ML) moves from purely theoretical models to practical applications, safety and fairness become increasingly relevant concerns. ML has shown growing proficiency in neuroscience, even outperforming humans in certain tasks. Together with the cornucopia of neuroscience data becoming available in the big data era, ML is poised to achieve even greater efficacy in many tasks by leveraging that data.

However, alongside these capabilities, the inherent safety and fairness issues associated with ML have profound implications. ML models are algorithms trained on existing data, and as such they often carry the biases of prior practice. Without vigilance and surveillance methods, even the most well-intentioned ML model can propagate the biases present in its training data. Moreover, vulnerabilities in ML models can be exploited by attackers, deliberately or inadvertently, to produce pernicious results or exacerbate power imbalances. Neuroscience explores how the brain executes various perceptual, cognitive, and motor tasks. ML-based artificial intelligence techniques allow big data to be processed by intelligent computers, opening new possibilities for neuroscience, such as understanding how thousands of neurons and nodes communicate to handle massive amounts of information and how the brain generates and controls behavior. Researchers often overlook the risks that the safety and fairness issues of ML models pose to datafied neuroscience. Attention has been drawn to a deeper and more fundamental problem: safety and fairness issues in ML models exacerbate imbalances and risk, and the consequences are especially severe for neuroscience. Ensuring safety and fairness in ML models for neuroscience is thus a crucial component of the advancement of ML in neuroscience.
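The vulnerability of ML models to adversarial exploitation mentioned above can be made concrete with a minimal sketch of the well-known fast gradient sign method (FGSM). The model, weights, and inputs below are hypothetical, chosen only to illustrate how a small, targeted perturbation reduces a classifier's confidence in the true label; this is our illustrative example, not a method from the call itself.

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step fast gradient sign attack on logistic-regression loss.

    x: input vector; w, b: model parameters; y: true label (0 or 1);
    eps: perturbation budget (max change per input dimension).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # gradient of log-loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Hypothetical trained model and a correctly classified input (y = 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.3)

# The adversarial input stays within eps of x per dimension, yet the
# model's confidence in the true class drops.
print(sigmoid(w @ x + b))      # confidence on clean input  (~0.82)
print(sigmoid(w @ x_adv + b))  # confidence on attacked input (~0.65)
```

The same one-step construction applies to deeper models; only the input gradient computation changes, which is why such attacks are cheap to mount and why defense mechanisms are a topic of this issue.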

This Special Issue welcomes original research and review articles that discuss the fairness and safety implications of using ML in real-world neuroscience systems; propose methods to detect, prevent, and/or alleviate undesired fairness and safety issues that ML-based systems might exhibit; analyze the vulnerability of neuroscience ML systems to adversarial attacks and possible defense mechanisms; and, more generally, stimulate progress on topics related to fair and safe ML in neuroscience. We hope to ensure that, as new technologies such as ML are applied across the continuum of care, technology aids humanity and not the reverse.

Potential topics include but are not limited to the following:

  • Fairness and bias of ML in neuroscience
  • Application of transparency to safety and fairness of ML in neuroscience
  • Measurement and mismeasurement of safety and fairness
  • Understanding disparities in the predicted outcome
  • Construction of unbiased ML models for neuroscience
  • Bias removal of ML use cases for neuroscience
  • Interpretability of ML in neuroscience
  • Recourse and contestability of biased ML results
  • Emphasis on learning from underrepresented groups
  • Safe reinforcement learning in neuroscience
  • Safe neuroscience robotic control
  • Ethical and legal consequences of using ML in real-world systems
  • Safety of deploying reinforcement learning in neuroscience
  • Any novelties for enabling the safety and fairness of ML in neuroscience
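Several of the topics above concern measuring fairness. As a minimal illustrative sketch (the metric choice, function name, and data are our own and not prescribed by the call), one common group-fairness quantity is the demographic parity difference: the gap in positive-prediction rates between protected groups.

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels, same length as predictions
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical example: group A receives positive predictions 3/4 of
# the time, group B only 1/4, so the parity gap is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near 0 indicates similar treatment of the groups; how close to 0 is acceptable, and whether demographic parity is even the right criterion for a given neuroscience application, is exactly the kind of question this issue invites.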

Articles

  • An Intelligent Prediction for Sports Industry Scale Based on Time Series Algorithm and Deep Learning, by Hui Liang (Research Article, Volume 2022, Article ID 9649825)
  • Study of Multidimensional and High-Precision Height Model of Youth Based on Multilayer Perceptron, by Lijian Chen, Xinben Fan, ..., Ahmedin M. Ahmed (Research Article, Volume 2022, Article ID 7843455)
  • Study on Sustainable Agricultural Structure Optimization Method Based on Multiobjective Optimization Algorithm, by Dingkang Duan (Research Article, Volume 2022, Article ID 5850684)
  • Design of Growth Trend Map of Children and Adolescents Based on Bone Age, by Kaiyan Chen, Weiyuan Shi, ..., Kai Fang (Research Article, Volume 2022, Article ID 1325061)
  • Design of Sports Rehabilitation Training System Based on EEMD Algorithm, by Kaiwei Wang, Zhenghui Wang, ..., Chunsheng Yang (Research Article, Volume 2022, Article ID 9987313)
  • An Animation Model Generation Method Based on Gaussian Mutation Genetic Algorithm to Optimize Neural Network, by Jing Liu, Qixing Chen, ..., Xiaoying Tian (Research Article, Volume 2022, Article ID 5106942)
  • Design and Implementation of Multiple Music System Based on Internet of Things, by Mengchen Xu (Research Article, Volume 2022, Article ID 3908188)
  • Modeling and Simulation in an Aircraft Safety Design Based on a Hybrid AHP and FCA Algorithm, by Miaosen Wang, Yuan Xue, Kang Wang (Research Article, Volume 2022, Article ID 6424057)
  • Experimental and Computational Study on Conductors Bearing Capacity in Offshore Drilling, by Nanding Hu, Jin Yang, ..., Chen Yu (Research Article, Volume 2022, Article ID 2372575)
  • Analysis of Human Information Recognition Model in Sports Based on Radial Basis Fuzzy Neural Network, by Tong Li, Longfei Ren, ..., Zijun Dang (Research Article, Volume 2022, Article ID 5625006)