Mobile Information Systems

Distributed AI at Edge Nodes for Mobile Edge Computing


Publishing date
01 Feb 2023
Status
Published
Submission deadline
07 Oct 2022

Lead Editor

1Shenzhen University, Shenzhen, China

2Northeastern University, Shenyang, China

3University of Electronic Science and Technology of China, Chengdu, China

4University of Hertfordshire, Hatfield, UK

5University of Navarra, Pamplona, Spain



Description

The advent of smartphones, tablets, and wearable devices has made mobile computing applications increasingly common in daily life. The growing number of users has driven a corresponding increase in the computing requirements of mobile applications, such as virtual reality and real-time online games. Mobile edge computing (MEC) is a promising solution that provides ample computing and storage capacity close to mobile users. The primary goal of previous generations of wireless technologies was to provide sufficient wireless speeds to enable the transition from voice to multimedia. The tasks of 5G/B5G networks are very different and far more complex: supporting communication, computing, control, and content delivery. Furthermore, ultra-dense edge devices such as wireless access points and small-cell base stations are being deployed, each providing computing and storage capabilities at the network edge. Unmanned aerial vehicles (UAVs) have also received great attention for providing air-ground collaborative MEC frameworks, thanks to their flexible management, flexible deployment, and low cost.

Given such diverse resource availability and the large volumes of data generated by computationally intensive applications on a large number of devices, powerful tools and techniques are needed to allocate communication and computational resources among users. Artificial intelligence (AI) solutions, especially deep learning networks, are well suited to addressing these obstacles and supporting intelligent resource management for efficient MEC in real-time, dynamic scenarios. Recently, the concept of distributed machine learning (DML) has been defined, in which MEC interacts with AI: learning models, parameters, and data are distributed across multiple edge servers, and AI models are trained from the distributed data. Federated learning is one example of such a distributed learning model: a coordinator node aggregates locally derived parameters from the learners and returns globally updated parameters to them. This interaction between MEC and AI paves the way for intelligent pipelines, enabling communication networks to become intelligent and self-driving.
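The coordinator-learner loop described above can be sketched with federated averaging (FedAvg), the canonical aggregation rule in federated learning. This is a minimal illustrative sketch, not part of the call for papers: the toy linear-regression task and the names `local_update` and `fed_avg` are assumptions chosen for clarity. Each simulated edge node trains on its own local data, and the coordinator averages the resulting parameters weighted by local sample counts.

```python
# Illustrative FedAvg sketch (hypothetical example, not from the CFP):
# edge nodes fit a local linear model on private data; a coordinator
# aggregates their parameters without ever seeing the raw data.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """One learner's local training: gradient descent on squared error."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(w_global, clients, rounds=10):
    """Coordinator loop: broadcast the global model, collect local
    updates, and average them weighted by each node's data size."""
    for _ in range(rounds):
        updates = [(local_update(w_global, X, y), len(y)) for X, y in clients]
        total = sum(n for _, n in updates)
        w_global = sum(w * n for w, n in updates) / total
    return w_global

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three edge nodes, each holding its own dataset
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = fed_avg(np.zeros(2), clients)
print(np.round(w, 2))  # converges close to the true weights [2.0, -1.0]
```

In a real MEC deployment the averaging step would run on an edge server or UAV acting as coordinator, and the per-node updates would traverse the wireless backhaul, which is why the resource-allocation and synchronization topics below matter.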

This Special Issue will focus on DML for Mobile Edge Computing at edge nodes. Original research and review articles are welcome.

Potential topics include but are not limited to the following:

  • DML for new mobile network architectures, protocols, resource allocation, synchronization, signaling, and optimization in MEC
  • DML for mobile backhaul traffic management, including latency, bandwidth, jitter, power consumption, and routing optimization in MEC
  • DML for mobility, handoff, and interference management and control in MEC
  • DML for new wireless applications and technologies (e.g., smart city, drone control) in MEC
  • DML for load balancing schemes and energy saving techniques in MEC
  • DML for content caching, deployment, updating, pushing, and recommendation in MEC
  • DML for wireless network measurements, implementations, and demos in MEC
  • DML for intrusion and threat detection in MEC
  • DML for wireless sensing and data processing in MEC
  • DML-based open standards and application programming interfaces in MEC
  • DML-based performance evaluation, including scalability, robustness, adaptability, etc.
  • DML-based testbeds and experiments over wide-scale deployment environments
