Explainable Artificial Intelligence for Medical Applications
1Jordan University of Science and Technology, Irbid, Jordan
2Rathinam College of Engineering, Coimbatore, India
3University of Cauca, Popayan, Colombia
Description
Medical products and services are built around trust and high ethical standards. To meet these requirements, next-generation medical support software must address the issues that arise from using deep neural networks. Deep neural networks are expected to transform the healthcare sector: they are adaptable, can be updated continuously, and are immune to inter- and intra-observer variability. Most importantly, they promise cost-effective solutions.
However, these software frameworks are man-made and therefore not perfect. Issues arise from the data that underpins the training process; for example, the training data may be biased and thus fail to reflect the measurements encountered in clinical practice. Deep learning algorithms, and the frameworks that use them, are also susceptible to software and hardware errors. In addition, a deep learning model constitutes a single decision point where it is impossible, with reasonable effort, to trace through the network structure and establish the cause of a particular decision.
Big data and the associated processing methods are needed to address public health problems such as cardiovascular disease, fever, obesity, and diabetes. The data come from physiological signals and medical images, and the fundamental assumption is that they contain valuable information that can be used during the diagnosis process. Deep learning techniques, such as convolutional neural networks (CNN), long short-term memory (LSTM) networks, autoencoders, deep generative models, and deep belief networks, have been used to provide medical decision support. The application of such novel methods to medical data can help clinicians make accurate and fast diagnoses.
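To illustrate the kind of post-hoc explanation this Special Issue is concerned with, the minimal sketch below computes a gradient-based saliency map for a toy image classifier in PyTorch. The network, input size, and two-class setup are placeholders chosen purely for illustration and are not tied to any particular medical system or method mentioned above.

import torch
import torch.nn as nn

# Placeholder CNN classifier; a real medical-imaging model would be far larger.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g. two hypothetical classes: healthy vs. pathological
)
model.eval()

# Dummy single-channel image standing in for a medical scan (batch, channel, H, W).
image = torch.rand(1, 1, 64, 64, requires_grad=True)

# Forward pass and score of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
score = logits[0, predicted_class]

# Backpropagate the class score to the input pixels.
score.backward()

# Saliency map: the gradient magnitude per pixel indicates how strongly each
# pixel influences the predicted class score, i.e. which regions drive the decision.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])

Saliency maps of this kind are only one of many explanation techniques (others include SHAP, LIME, and attention-based methods); contributions to the Special Issue may address any of them.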
The aim of this Special Issue is to bring together original research and review articles on the methods and software frameworks needed to build trust in artificial intelligence (AI) for healthcare applications.
Potential topics include but are not limited to the following:
- Explainable AI methods for precision medicine
- Explainable AI and Internet of Medical Things for medical devices
- Explainable AI for targeted drug delivery
- Explainable AI for medical image segmentation
- Explainable knowledge maintenance and evolution in health care technologies
- Context-aware systems and their applications in healthcare
- Explainable AI-based analytics for patient-specific health care
- AI-assisted decision-making in healthcare
- Explainable AI for robotic-assisted surgery
- Case studies of machine learning and health informatics with explainable AI