Wireless Communications and Mobile Computing

Explainable Artificial Intelligence for Next Generation Internet of Things

Publishing date: 01 May 2023
Submission deadline: 06 Jan 2023

1Aristotle University of Thessaloniki, Thessaloniki, Greece

2Wireless Systems Laboratory, Huawei Technologies, Stockholm, Sweden

3North South University, Dhaka, Bangladesh

4Zhongnan University of Economics and Law, Wuhan, China



The advent of the next-generation Internet of Things (NG-IoT) brings a new set of research challenges and priorities. These priorities span several layers of the IoT stack and therefore relate to 6G, distributed ledgers, big data, artificial intelligence, cybersecurity, and cloud computing.

Moreover, NG-IoT technology has been actively applied in the automotive industry to meet new market demands while continuing to serve established business goals. NG-IoT paves the way for a better understanding of the manufacturing process, thereby enabling efficient and sustainable production. The enormous number of IoT devices generates vast volumes of data that require intelligent analysis and processing methods, such as Artificial Intelligence (AI). Notably, AI algorithms applied in the NG-IoT can enable applications such as smart cities, smart manufacturing, smart transportation, and smart grids.

AI has recently been highly successful in a variety of application domains, including computer vision, natural language processing, and big data analysis. However, massive parameter counts and complicated processing mechanisms make it difficult for humans to trace and understand the reasoning process. This seriously hinders users' understanding of the model's learning mechanism and prediction results, and impairs the large-scale adoption of black-box AI, especially in autonomous systems and smart healthcare. We must therefore deepen our understanding of complex models and their outputs, and improve model interpretability.

Explainable Artificial Intelligence (XAI) is an emerging field of AI that aims to address how AI systems make decisions. XAI thus concerns AI methods and techniques that produce human-comprehensible solutions. The main objectives of XAI are to improve human understanding, to determine the justifiability of decisions made by the machine, and to build trust and reduce bias.
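To make the notion of "human-comprehensible solutions" concrete, the following is a minimal sketch of one widely used model-agnostic XAI technique, permutation feature importance: a feature matters to a black-box model if shuffling its values degrades predictive accuracy. The function, toy model, and data below are illustrative assumptions, not drawn from this call for papers.

```python
# Illustrative sketch of permutation feature importance, a simple
# model-agnostic XAI technique. All names and data are hypothetical.
import random

def permutation_importance(predict, X, y, seed=0):
    """Accuracy drop when each feature column is shuffled; a larger
    drop suggests the model relies more on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the association between feature j and y
        shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(baseline - accuracy(shuffled))
    return importances

# Toy "black box": predicts 1 exactly when the first feature is positive,
# so feature 0 should matter while feature 1 is ignored entirely.
black_box = lambda row: 1 if row[0] > 0 else 0
X = [[1, 0], [2, 9], [-1, 4], [-3, 7], [5, 2], [-2, 8]]
y = [1, 1, 0, 0, 1, 0]
importances = permutation_importance(black_box, X, y)
print(importances)  # the model ignores feature 1, so its score is exactly 0.0
```

Because the technique only queries the model's predictions, it applies to any black box, which is why papers on explainable deep learning often report it as a baseline explanation method.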

The aim of this Special Issue is to collate original research and review articles describing advances in this field.

Potential topics include but are not limited to the following:

  • Theory, models, and evaluation metrics of explainable deep learning
  • Explainable deep learning for multimedia processing in NG-IoT
  • Explainable deep learning methodologies to context prediction in NG-IoT
  • Explainable deep learning for human computer interaction
  • Explainable intelligent visual question answering and reasoning
  • Interpretation of the decision-making process
  • Visual analysis for explainable deep learning
  • Cloud/fog/edge computing for multimedia processing in NG-IoT
  • Multimedia data mining and representation learning in NG-IoT
  • Real-time multimedia data modeling in NG-IoT
  • Cross-modality big data fusion with deep learning
  • Multimedia data sharing and collaborative management in NG-IoT
  • The multimedia edge computing system for data storage, retrieval, detection, and recognition
  • Novel explainable deep learning approaches in the applications of smart health, smart cities, and the Internet of Vehicles (IoV)