Computational Intelligence and Neuroscience

Safe and Fair Machine Learning for Neuroscience


Publishing date
01 Dec 2022
Status
Published
Submission deadline
22 Jul 2022

Lead Editor

1Instituto de Telecomunicações, Aveiro, Portugal

2University of Fortaleza, Fortaleza, Brazil

3University of Essex, Essex, UK



Description

As machine learning (ML) moves from purely theoretical models to practical applications, safety and fairness in ML become increasingly relevant. ML has shown growing proficiency in neuroscience, even outperforming humans in certain tasks. Together with the cornucopia of neuroscience data becoming available in the big data era, ML is poised to achieve even greater efficacy across many tasks.

However, alongside these capabilities, the safety and fairness issues inherent in ML have profound implications. ML models are algorithms trained on existing data, and as such they often carry the biases of prior practice. Without vigilance and monitoring methods, even the most well-intentioned ML model can propagate the biases present in its training data. Moreover, vulnerabilities in ML models can be exploited by attackers, deliberately or inadvertently, to achieve pernicious results or exacerbate power imbalances. Neuroscience explores how the brain executes perceptual, cognitive, and motor tasks. ML-based artificial intelligence techniques allow big data to be processed by intelligent computers, opening new possibilities for neuroscience, such as understanding how thousands of neurons and nodes communicate to handle massive amounts of information and how the brain generates and controls behavior. Yet researchers often overlook the risks that the safety and fairness issues of ML models pose as neuroscience becomes increasingly datafied. Attention has been drawn to the deeper and more fundamental problem that these issues exacerbate imbalances and risk, with consequences that are especially severe for neuroscience. Ensuring safety and fairness in ML models therefore becomes a crucial component of the advancement of ML in neuroscience.
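As an illustration of the kind of bias auditing this concerns, below is a minimal sketch (function name, toy data, and group labels are all hypothetical, not taken from any paper in this issue) of measuring the demographic parity difference between two groups, one of the simplest fairness criteria:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array of 0/1 model predictions
    group  : array of 0/1 group-membership labels
    A value near 0 means the model flags both groups at similar
    rates on this (narrow) criterion; it does not, by itself,
    establish that the model is fair.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: a classifier that flags group-1 subjects far more often.
preds  = [1, 0, 1, 0, 1, 1, 1, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A large gap like this is the kind of signal the "detect, prevent, and/or alleviate" methods solicited below would act on.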

This Special Issue welcomes original research and review articles that discuss the fairness and safety implications of using ML in real-world neuroscience systems; propose methods to detect, prevent, and/or alleviate undesired fairness and safety issues that ML-based systems might exhibit; analyze the vulnerability of neuroscience ML systems to adversarial attacks and possible defense mechanisms; and, more generally, any paper that stimulates progress on topics related to fair and safe ML in neuroscience. We hope to ensure that, as new technologies such as ML are applied across the continuum of care, technology aids humanity and not the reverse.
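To make the adversarial-vulnerability theme concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a plain logistic model; the weights, inputs, and epsilon are invented for illustration and are not from any submitted work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM for a logistic model p = sigmoid(w @ x).

    Shifts x by eps in the sign of the input gradient of the
    cross-entropy loss, the direction that most increases the loss
    and hence tends to flip the prediction.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w           # d(cross-entropy)/dx for true label y
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])           # w @ x = 1.5  -> p ~ 0.82, class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, eps=0.9)
# x_adv = [0.1, 1.4], w @ x_adv = -1.2 -> p ~ 0.23, prediction flipped
print(sigmoid(w @ x), sigmoid(w @ x_adv))
```

Even this two-parameter toy shows why defense mechanisms matter: a bounded, structured perturbation of the input is enough to flip the decision of an otherwise accurate model.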

Potential topics include but are not limited to the following:

  • Fairness and bias of ML in neuroscience
  • Application of transparency to safety and fairness of ML in neuroscience
  • Measurement and mismeasurement of safety and fairness
  • Understanding disparities in the predicted outcome
  • Construction of unbiased ML models for neuroscience
  • Bias removal of ML use cases for neuroscience
  • Interpretability of ML in neuroscience
  • Recourse and contestability of biased ML results
  • Emphasis on learning from underrepresented groups
  • Safe reinforcement learning in neuroscience
  • Safe neuroscience robotic control
  • Ethical and legal consequences of using ML in real-world systems
  • Safety of deploying reinforcement learning in neuroscience
  • Any novelties for enabling the safety and fairness of ML in neuroscience

Articles

Special Issue | Volume 2022 | Article ID 9400742 | Research Article

A Neural Network Approach for Chinese Sports Tourism Demand Based on Knowledge Discovery

Libin Qi | Yaohan Tang
Special Issue | Volume 2022 | Article ID 7584675 | Research Article

An Intelligent Clinical Psychological Assessment Method Based on AHP-LSSVR Model

Junli Su | Dongyang Wang
Special Issue | Volume 2022 | Article ID 6513776 | Research Article

Enhanced Super4PCS Algorithm by Comparing Transformed Normals at Corresponding Points

Hai Liu | Shulin Wang | Donghong Zhao
Special Issue | Volume 2022 | Article ID 2309317 | Research Article

Real-Time Tracking of Object Melting Based on Enhanced DeepLab Network

Tian-yu Jiang | Feng-lan Ju | ... | Zun-Qian Zhang
Special Issue | Volume 2022 | Article ID 6467086 | Research Article

Particle Swarm Algorithm and Its Application in Tourism Route Design and Optimization

Bing Lu | Chunlei Zhou
Special Issue | Volume 2022 | Article ID 7897669 | Research Article

A Novel and Effective Brain Tumor Classification Model Using Deep Feature Fusion and Famous Machine Learning Classifiers

Hareem Kibriya | Rashid Amin | ... | Abdullah Alshehri
Special Issue | Volume 2022 | Article ID 4844993 | Research Article

Optimal Path of Internet of Things Service in Supply Chain Management Based on Machine Learning Algorithms

Jing Li | Ruifeng Zhang | ... | Haiyan Zhang
Special Issue | Volume 2022 | Article ID 7643006 | Research Article

Application of Machine Learning and Internet of Things Techniques in Evaluation of English Teaching Effect in Colleges

Wen-lan Shang
Special Issue | Volume 2022 | Article ID 2076591 | Research Article

Research on Intelligent Warehousing and Logistics Management System of Electronic Market Based on Machine Learning

Ruifeng Zhang | Xiaoyan Zhou | ... | Jing Li
Special Issue | Volume 2022 | Article ID 5818180 | Research Article

Private Face Image Generation Method Based on Deidentification in Low Light

Beibei Dong | Zhenyu Wang | ... | Jingjing Yang
