Deep Learning in Mobile Computing: Architecture, Applications, and Future Challenges
1Shanghai Polytechnic University, Shanghai, China
2Edinburgh Napier University, Edinburgh, UK
3DAMO Academy, Hangzhou, China
Description
With the emergence of big data and efficient computing resources, deep learning has made major breakthroughs in many areas of artificial intelligence. However, as tasks grow more complex, both the training data and the models have grown correspondingly large. For example, the labeled image data used to train an image classifier can run to millions, or even tens of millions, of examples. Such large-scale training data provides the material basis for training large models, and many large-scale machine learning models have emerged in recent years. As training data increases, a deep learning model may reach tens of billions or even hundreds of billions of parameters without any pruning.
To improve the training efficiency of deep learning models and reduce training time, distributed techniques should be used: multiple worker nodes train a high-performing neural network model in a distributed and efficient manner. Distributed technology is an accelerator for deep learning; it can significantly improve training efficiency and further broaden the range of applications. Mobile networks are themselves distributed architectures, in which all participating computers and devices share the network's workloads. These characteristics make mobile networks well suited to distributed tasks, and therefore to distributed deep learning.
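As a concrete illustration of the synchronous data-parallel scheme sketched above, the following minimal example (all function and variable names are our own, not from any specific framework) shows worker nodes computing gradients on local data shards, averaging them as a single all-reduce would, and applying one shared parameter update:

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model on one worker's shard."""
    pred = X @ w
    return 2 * X.T @ (pred - y) / len(y)

def distributed_sgd_step(w, shards, lr=0.1):
    """One synchronous data-parallel step: each worker computes a gradient
    on its local shard; the gradients are averaged (the role of an all-reduce)
    and every worker applies the same update to its parameter copy."""
    grads = [local_gradient(w, X, y) for X, y in shards]
    avg_grad = np.mean(grads, axis=0)
    return w - lr * avg_grad

# Toy setup: split a noiseless linear-regression dataset across 4 workers.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(3)
for _ in range(200):
    w = distributed_sgd_step(w, shards)
```

In a real mobile or edge deployment the gradient averaging would be a communication step between devices rather than a local loop, and the cost of that step is precisely what the resource-allocation and scheduling topics below address.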
This Special Issue focuses on recent advances in architecture, algorithms, optimization, and models of mobile computing for deep learning tasks. Original research and review articles reflecting various aspects of mobile computing for deep learning are encouraged.
Potential topics include but are not limited to the following:
- Mobile networking framework for deep learning
- Fault tolerance in mobile computing systems
- Algorithms, schemes, and techniques in mobile computing systems for deep learning
- Parallel computing in mobile computing systems for deep learning
- Optimization and distributed control
- Distributed infrastructures, parallelization of deep learning training
- Resource allocation and scheduling of deep learning training
- Data management of deep learning training
- Security and privacy in mobile computing systems
- FPGA-based mobile deep learning
- Edge cloud computing for distributed deep learning
- Fog computing for distributed deep learning
- Software platforms and infrastructures for distributed deep learning
- Application examples and success stories of distributed deep learning
- Surveys of distributed deep learning in mobile networks