Computational Intelligence and Neuroscience

Safe and Fair Machine Learning for Neuroscience


Publishing date: 01 Dec 2022
Status: Closed
Submission deadline: 22 Jul 2022

Lead Editor

Instituto de Telecomunicações, Aveiro, Portugal

University of Fortaleza, Fortaleza, Brazil

University of Essex, Essex, UK

This issue is now closed for submissions.

Description

As machine learning (ML) moves from purely theoretical models to practical applications, questions of safety and fairness become increasingly relevant. ML has shown growing proficiency in neuroscience, even outperforming humans in certain tasks. Combined with the cornucopia of neuroscience data becoming available in the big data era, ML is poised to achieve even greater efficacy across many tasks.

However, alongside these capabilities, the safety and fairness issues inherent to ML have profound implications. ML models are algorithms trained on existing data and, as such, often carry the biases of prior practice. Without vigilance and surveillance methods, even the most well-intentioned ML model can propagate the biases present in its training data. Moreover, vulnerabilities in ML models can be exploited by attackers, deliberately or inadvertently, to produce pernicious results or exacerbate power imbalances.

Neuroscience explores how the brain executes perceptual, cognitive, and motor tasks. ML-based artificial intelligence techniques allow big data to be processed by intelligent computers, opening new possibilities for neuroscience, such as understanding how thousands of neurons and nodes communicate to handle massive amounts of information and how the brain generates and controls behavior. Yet researchers often overlook the risks that the safety and fairness issues of ML models introduce when neuroscience is datafied. Attention has therefore been drawn to the deeper, more fundamental problem that these issues exacerbate imbalance and risk, with consequences that are especially severe for neuroscience. Ensuring safety and fairness in ML models is thus a crucial component of the advancement of ML in neuroscience.
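
To make the bias-propagation point above concrete, the following is a minimal, hedged sketch of a fairness audit: it fits a classifier on synthetic data whose historical labels are correlated with a sensitive attribute and then reports a demographic parity difference. The dataset, the `group` attribute, and the logistic-regression model are hypothetical placeholders, not methods prescribed by this call.

```python
# Minimal illustrative sketch (hypothetical data and model): auditing a
# trained classifier for group-level bias via demographic parity difference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical cohort: 1000 subjects, 5 features, a binary sensitive group,
# and labels whose historical assignment is correlated with the group.
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)  # sensitive attribute
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.c_[X, group], y)
pred = model.predict(np.c_[X, group])

# Demographic parity difference: gap in positive-prediction rates between groups.
rate_g0 = pred[group == 0].mean()
rate_g1 = pred[group == 1].mean()
print(f"P(pred=1 | group=0) = {rate_g0:.2f}")
print(f"P(pred=1 | group=1) = {rate_g1:.2f}")
print(f"Demographic parity difference = {abs(rate_g1 - rate_g0):.2f}")
```

A large gap indicates that the model reproduces the bias encoded in the historical labels, which is exactly the kind of behavior the surveillance methods mentioned above aim to detect.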

This Special Issue welcomes original research and review articles discussing the fairness and safety implications of using ML in real-world neuroscience systems; proposing methods to detect, prevent, and/or alleviate the undesired fairness and safety issues that ML-based systems might exhibit; analyzing the vulnerability of neuroscience ML systems to adversarial attacks and possible defense mechanisms; and, more generally, any paper that stimulates progress on topics related to fair and safe ML in neuroscience. In this way, we hope to ensure that as new technologies such as ML are applied across the continuum of care, technology aids humanity rather than the reverse.
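
As one concrete illustration of the adversarial-vulnerability theme mentioned above, the sketch below applies a fast gradient sign method (FGSM) style perturbation to a toy linear classifier. The model weights, the input sample (imagined as, e.g., EEG band-power features), and the perturbation budget are all hypothetical assumptions, not an attack taken from any of the cited works.

```python
# Minimal illustrative sketch (hypothetical model and data): an FGSM-style
# perturbation flips the decision of a toy linear classifier, standing in
# for an attack on, e.g., an EEG- or imaging-based model.
import numpy as np

# Hypothetical trained linear model: score = w.x + b, predict 1 if score > 0.
w = np.array([0.9, -1.2, 0.4, 0.7])
b = -0.1

x = np.array([0.3, 0.1, 0.2, 0.25])  # clean input (e.g., band-power features)
eps = 0.2                            # attacker's L-infinity perturbation budget

score = w @ x + b
label = int(score > 0)

# FGSM-style step: move each feature by eps in the direction that pushes
# the score away from the currently predicted class.
x_adv = x - eps * np.sign(w) if label == 1 else x + eps * np.sign(w)
adv_score = w @ x_adv + b

print(f"clean score = {score:+.3f}, prediction = {label}")
print(f"adversarial score = {adv_score:+.3f}, prediction = {int(adv_score > 0)}")
```

Even this small, bounded perturbation flips the toy model's decision, which is why analyses of adversarial robustness and defense mechanisms are within the scope of this issue.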

Potential topics include but are not limited to the following:

  • Fairness and bias of ML in neuroscience
  • Application of transparency to safety and fairness of ML in neuroscience
  • Measurement & mismeasurement of safety and fairness
  • Understanding disparities in the predicted outcome
  • Construction of unbiased ML models for neuroscience
  • Bias removal of ML use cases for neuroscience
  • Interpretability of ML in neuroscience
  • Recourse and contestability of biased ML results
  • Emphasis on learning from underrepresented groups
  • Safe reinforcement learning in neuroscience
  • Safe neuroscience robotic control
  • Ethical and legal consequences of using ML in real-world systems
  • Safety of deploying reinforcement learning in neuroscience
  • Other novel approaches for enabling the safety and fairness of ML in neuroscience

Articles

Performance Evaluation and Identification of Key Influencing Factors for Student Achievement Based on the Entropy-Weighted TOPSIS Model
Wei Liu | Lei Zhang
Research Article | Special Issue | Volume 2023 | Article ID 1253824

A Fuzzy Neural Network-Based Evaluation Method for Physical Education Teaching Management in Colleges
Bo Zhao | Yanjin Liu
Research Article | Special Issue | Volume 2022 | Article ID 2365320

Load Balancing Algorithms for Hadoop Cluster in Unbalanced Environment
Weiyu Fu | Lixia Wang
Research Article | Special Issue | Volume 2022 | Article ID 1545024

Deep Reinforcement Learning-Based Trading Strategy for Load Aggregators on Price-Responsive Demand
Guang Yang | Songhuai Du | ... | Juan Su
Research Article | Special Issue | Volume 2022 | Article ID 6884956

Computational Intelligence Powered Experimental Test on Energy Consumption Characteristics of Cold-Water Phase-Change Energy Heat Pump System
Ronghua Wu | Hao Yu | Ying Xu
Research Article | Special Issue | Volume 2022 | Article ID 1941855

Facial Expression Recognition Based on LDA Feature Space Optimization
Fanchen Zheng
Research Article | Special Issue | Volume 2022 | Article ID 9521329

Detection Scheme for Tampering Behavior on Distributed Controller of Electric-Thermal Integrated Energy System Based on Relation Network
Chaoqun Zhu | Jie Li | ... | Yafei Li
Research Article | Special Issue | Volume 2022 | Article ID 9594267

Investigation on the Distribution Characteristics of Chinese Continuing Education Based on the Community Detection Algorithm in Complex Networks
Yuping Lai | Qin Yuan | Qinming Yu
Research Article | Special Issue | Volume 2022 | Article ID 8149395

Fair Transmission of Individual Signals and Formation of Mainstream Information: Evidence from Herd Behaviours in Emergencies
Xintong Wu
Research Article | Special Issue | Volume 2022 | Article ID 8229956

RBFNN-Enabled Adaptive Parameters Identification for Robot Servo System Based on Improved Sliding Mode Observer
Ye Li | Dazhi Wang | ... | Yanming Li
Research Article | Special Issue | Volume 2022 | Article ID 8151132
 Journal metrics
Acceptance rate: 42%
Submission to final decision: 47 days
Acceptance to publication: 27 days
CiteScore: 3.900
Journal Citation Indicator: -
Impact Factor: -