Abstract
The advent of the 5G mobile network has brought many benefits. However, it has also raised new challenges for the 5G network in cybersecurity defense, resource management, energy, caching, and mobile networking, rendering the existing approaches inadequate for tackling them. As a result, research studies were conducted to investigate deep learning approaches for solving problems in the 5G network and 5G-powered Internet of Vehicles (IoVs). In this article, we present a survey on the applications of deep learning algorithms for solving problems in the 5G mobile network and 5G-powered IoV. The survey points out recent advances in the adoption of deep learning variants for addressing the challenges of the 5G mobile network and 5G-powered IoV. Deep learning solutions for security, energy, resource management, 5G-enabled IoV, and mobile networking in 5G communication systems are presented, along with several other applications. New comprehensive taxonomies were created, analysed, and presented. The challenges of the approaches discussed in the literature are identified, and new perspectives for solving these challenges are outlined and discussed. We believe that this article can stimulate new interest in practical applications of deep learning in the 5G network and provide expert researchers with clear directions for developing novel approaches.
1. Introduction
The unprecedented growth of mobile traffic opens the way for emerging mobile communication systems [1, 2]. The emerging wireless mobile networks are projected to provide sufficient support for high data transfer rates [3] and applications that need an innovative wireless radio technology paradigm [4]. In addition, they provide satellite communication for delivering enhanced broadband [5]. The diverse requirements of the emerging wireless mobile networks can be satisfied through radios capable of intelligent adaptive learning and decision making [4]. The 5G wireless mobile network [6, 7] is expected to stimulate interest in new fundamental innovations [8, 9] that have an impact on video surveillance, monitoring services for processing streams at reliably high speed, high bandwidth, highly secured network connectivity [10], and Internet of Vehicles (IoVs) forming the 5G-enabled IoV [11]. The emerging 5G mobile wireless network has the potential for ultrahigh bandwidth and ultralow communication latency [12]. The 5G mobile wireless network aims to provide reliable connectivity ubiquitously [13]. The 5G wireless mobile network offers a 1000-fold increase in Internet traffic and is expected to support industry and Internet of Things technology. The 5G wireless mobile network is more complicated in design than the existing mobile communication technologies, and its applications are diverse [14]. Therefore, it requires advanced artificial intelligence techniques to solve problems in 5G wireless mobile networks.
As pointed out in [2], a lot of research and development on 5G was conducted before its commercialization in 2020. The resurfacing of artificial intelligence in full force can bring alternative methodologies for solving 5G problems, with likely better performance than the traditional methods [14]. The increased complexity of the cellular mobile network indicates that machine learning, a subset of artificial intelligence, has the potential to effectively improve the technologies of 5G wireless mobile networks [15]. In machine learning, the new generation of artificial neural networks, deep learning algorithms, has been applied in different domains and found to produce remarkable output comparable to that of human experts [16]. Xu et al. [17] argued that in the 5G era, the large-scale data generated by the activities emanating from mobile tasks require deep learning algorithms for data processing, especially in the areas of speech recognition and computer vision. Klautau et al. [15] pointed out that the performance of deep learning algorithms increases as the amount of data scales up. This characteristic makes deep learning algorithms fit to solve large-scale problems in 5G wireless mobile networks.
The dissemination of data in traditional networks is susceptible to limitations such as high latency, significant packet drops, and network congestion as a result of the increasing number of connected vehicles on the road. Thus, the combination of intelligent transportation systems and the Internet of Things motivated the development of the IoV, which basically allows vehicles to exchange data with their surrounding environment: vehicle-to-vehicle, vehicle-to-infrastructure, vehicle-to-roadside units, vehicle-to-sensors, and vehicle-to-personal devices via wireless communication networks [18]. This could collectively be called vehicle-to-everything communication, and it supports autonomous vehicle applications [19].
Different architectures of the deep learning algorithms [20], such as the convolutional neural network (CNN), generative adversarial network (GAN), dense deep neural network (DDNN), deep reinforcement learning (DRL), long short-term memory (LSTM), autoencoder (AE), and deep recurrent neural network (DRNN), were applied in 5G to solve problems in the cybersecurity defense system, resource management, energy, mobile networks, and 5G-enabled IoV.
This paper intends to conduct an in-depth literature review on the progress made by deep learning algorithms in developing solutions to different aspects of 5G wireless mobile networks and 5G-enabled IoV.
The paper intends to answer the following research questions:
(i) What are the deep learning architectures applied for solving problems in the 5G mobile network?
(ii) What is the publication trend for the applications of deep learning algorithms in 5G networks?
(iii) What taxonomies can be created for the deep learning algorithms in 5G networks?
(iv) What is the extent of the application of deep learning algorithms in the 5G-enabled IoV?
(v) What challenges have been identified in the existing approaches to solving problems in 5G wireless mobile networks?
(vi) What are the promising directions, as new perspectives, for solving the identified challenges?
The other sections of the paper are structured as follows. Section 2 presents the previous reviews conducted and outlines their differences from the current review. Section 3 provides basic information about the deep learning algorithms frequently applied in 5G wireless mobile networks. Section 4 presents the 5G wireless mobile network domains and the classification of papers accordingly. Section 5 presents the meta-data analysis. Section 6 points out challenges and future directions for research. Conclusions are presented in Section 7.
2. Previous Surveys and Motivation
In the literature, there are a number of reviews on the applications of deep learning in the 5G wireless mobile network. As such, this section presents those reviews and points out their differences from the current review. For example, Aldweesh et al. [21] conducted a survey on the applications of deep learning algorithms in anomaly detection. It mainly focused on the cybersecurity defense system for the 5G wireless mobile network. In another survey, Restuccia and Melodia [22] were motivated by the fact that 5G wireless mobile networks rely heavily on millimeter wave (mmWave) and ultrawideband communications. Therefore, it focuses on the physical layer of the wireless mobile networks and discusses the significance of real-time deep learning algorithms at the physical layer. Similarly, Huang et al. [23] presented a survey focusing on deep learning algorithm-based physical layers, mainly on nonorthogonal multiple access (NOMA), massive MIMO, and mmWave. The existing surveys mainly focus on the physical layer and the cybersecurity defense system, and a lot of topics remain unexplored in them. In addition, the earlier surveys each focus on a particular aspect of the 5G wireless mobile network, preventing readers from gaining a broad view of the deep learning solutions in 5G wireless mobile networks. A comprehensive taxonomy connecting different deep learning architectures with different tasks in the 5G wireless mobile network is missing from the already published surveys, as is a comprehensive taxonomy of the 5G wireless mobile network domains. In view of these limitations, the current review covers all aspects of deep learning algorithm-based 5G wireless mobile network solutions to give the reader a broad view of the research area on the applicability of the different deep learning architectures, and it proposes a comprehensive taxonomy showing different deep learning architectures and tasks in 5G wireless mobile networks. Zhang et al. [24] presented a survey on the applications of deep learning algorithms in the general area of mobile and wireless networks, unlike our survey, which focuses mainly on 5G wireless mobile networks.
2.1. The Adoption of Deep Learning Architecture in 5G Wireless Mobile Network
In this section, for unification of the research area, a taxonomy on the adoption of deep learning architectures for performing different tasks in 5G wireless mobile networks is proposed, as shown in Figure 1. The taxonomy classifies papers that applied deep learning algorithms to solve machine learning problems in 5G wireless mobile networks. The deep learning architectures, together with the task associated with each architecture, were extracted from the different papers that used deep learning in 5G wireless mobile networks. The taxonomy can serve as a basis for the creation of a holistic deep learning-based framework to be applied in 5G wireless mobile networks. It is used as the foundation for extracting and classifying the different deep learning architectures available in the literature for a particular task in 5G wireless mobile networks. The deep learning architectures found to be applied in 5G wireless mobile networks include the CNN, DDNN, AE, GAN, LSTM, DRNN, hybrid deep learning, and DRL. The basic theory of each deep learning architecture is presented before summarizing the papers that applied it in 5G wireless mobile networks. The basic theories are presented to give readers an understanding of how the different deep learning architectures operate to achieve a desired goal. This makes the paper self-contained, especially for new readers intending to start a research career in this field.

2.2. Convolutional Neural Network
This section presents background information about the CNN and the studies that applied it in the 5G wireless mobile network to develop solutions. The CNN is a type of feedforward neural network mostly used in image processing and pattern recognition. It is characterized by a simple structure, adaptability, and few training parameters [25]. The structure of a CNN consists of different layers, including the input layer, convolution layer, pooling layer, and output layer. The convolution layer receives an input image and performs the convolution process by applying a filter to extract feature maps. The pooling layer receives feature maps from the convolution layer and downsamples them. During the pooling process, neighbouring pixels are merged into a single pixel by adding a bias $b$, applying a scalar weight $w$, and applying an activation function, producing a narrower feature map. A major advantage of the CNN is its parallel learning ability, which helps in reducing the network's complexity. In addition, improved robustness and scaling can be achieved by applying the subsampling process. The processing of the output at the layers of the CNN can be expressed by the following equations [25]. At convolution layer $C_\ell$, the output of the neuron at feature pattern $k$, row $r$, and column $c$ is

$$y^{C_\ell}_{k,(r,c)} = f\Big(b_{k} + \sum_{m=1}^{K} \sum_{(u,v)} w_{k,m,(u,v)}\, y^{C_{\ell-1}}_{m,(r+u,\,c+v)}\Big),$$

where $K$ denotes the number of convolution cores in a given feature pattern. At the subsampling stage, the output of the neuron at subsampling layer $S_\ell$, feature pattern $k$, row $r$, and column $c$ is expressed as follows:

$$y^{S_\ell}_{k,(r,c)} = f\Big(b_{k} + w_{k} \sum_{(u,v)\in N(r,c)} y^{C_\ell}_{k,(u,v)}\Big),$$

where $N(r,c)$ denotes the pooled neighbourhood.
At the hidden layer $H$, the output of neuron $j$ is given as follows:

$$y^{H}_{j} = f\Big(b_{j} + \sum_{k=1}^{P} \sum_{(r,c)} w_{j,k,(r,c)}\, y^{S_\ell}_{k,(r,c)}\Big),$$

where $P$ denotes the number of feature patterns in the subsampling layer.
At the output layer, the output of neuron $i$ is expressed by the following equation:

$$y^{O}_{i} = f\Big(b_{i} + \sum_{j} w_{i,j}\, y^{H}_{j}\Big).$$
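To make the layered processing above concrete, the following minimal sketch expresses a CNN with convolution, pooling, hidden, and output layers in Python (PyTorch is assumed; the channel counts, input shape, and class count are illustrative choices, not settings taken from any surveyed paper).

```python
# Minimal CNN sketch mirroring the structure described above:
# convolution -> subsampling (pooling) -> hidden layer H -> output layer.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolution layer: K filters extract feature maps from the input image.
        self.conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
        # Pooling layer: downsamples each feature map (neighbouring pixels merged).
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.act = nn.ReLU()
        # Hidden layer H and output layer operate on the flattened feature maps.
        self.hidden = nn.Linear(8 * 14 * 14, 64)
        self.out = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.conv(x))    # convolution layer output
        x = self.pool(x)              # subsampling layer output
        x = x.flatten(start_dim=1)
        x = self.act(self.hidden(x))  # hidden layer H output
        return self.out(x)            # output layer

# Example: a batch of four 28x28 single-channel images.
logits = SimpleCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```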
2.2.1. The Studies That Applied Convolutional Neural Network in 5G Wireless Mobile Network
Bega et al. [26] developed DeepCog, based on a 3D CNN, for resource management in 5G mobile networks. In the 5G technology, the network infrastructure is divided into slices. DeepCog is designed to allocate to each slice its own required resources. DeepCog was evaluated in a real-world scenario and found to be effective. Gante et al. [27] proposed a temporal CNN for outdoor positioning of mmWave in 5G mobile wireless networks. The temporal CNN achieved baseline accuracy for non-line-of-sight mmWave outdoor positions with an average error of 1.78 meters while maintaining moderate bandwidth, binary data samples, and a single anchor. Huang et al. [28] presented deep learning for the allocation of co-operative resources based on channel conditions in 5G mobile wireless networks. The study generated a CNN by applying channel information and the resource allocation intended for optimization. The generated CNN can assist in making full-scale channel information available in place of traditional optimal resource utilization, especially in a dynamic channel environment. The method is found to be effective in reducing the complexity of the optimization, reducing computational time, and producing satisfactory performance.
He et al. [29] proposed a CNN to capture the characteristics of an interfering signal in order to suppress it. The proposed CNN-based multiuser multiple-input multiple-output (MU-MIMO) for 5G can be applied to suppress the influence of correlated interference with reduced computational complexity and to improve the performance of the CNN-based MU-MIMO. Hussain et al. [30] proposed a CNN-based framework to detect distributed denial-of-service attacks launched over the 5G network by botnets that control malicious devices. These attacks mainly target cyber-physical systems. The framework is found to have an accuracy of over 90% in detecting attacks.
Doan and Zhang [34] proposed CNN for anomaly detection in 5G mobile wireless networks. The CNN is found to be a good algorithm for the detection of intrusion while reducing the impact of latency.
Ahmed et al. [35] applied the CNN to solve the spectrum access problem for the 5G/B5G cognitive radio network of IoT. The intelligent CNN-based model learns to locate spectrum holes for users with over 90% accuracy. Cheng et al. [36] proposed an enhanced CNN with attention for modeling mmWave for 5G network communications. The convolution performs locality feature extraction on the captured image data, while the attention enhances the use of global information. The proposed scheme was found to be better than the classical methods. Guan et al. [37] proposed CNN transfer learning for the classification of network traffic in a dataset-constrained scenario in 5G IoT. The model is trained by weight transferring and ANN fine-tuning. The CNN transfer learning was able to predict the network traffic with performance comparable to the classical methods. Xu et al. [38] proposed an RGB stream and a spatial rich model noise stream for differentiating between adversarial and clean examples. The CNN is used to detect adversarial images, and it achieved a detection rate of over 90% accuracy.
2.3. Deep Reinforcement Learning
Reinforcement learning (RL) is among the leading real-time methods for decision making. It learns by interacting with the environment through actions and observations of the environment's state [39]. At every stage of interaction, depending on the environment's state, the agent selects an action that adjusts the state of the environment. A reward or punishment is given to the agent for every action taken, depending on whether the action is beneficial or not.
The concept of RL is expressed as a Markov Decision Process (MDP) tuple $(S, A, R, P)$, with $S$ as the environment's state space, $A$ as the action space, $R$ as the reward, and $P$ as the state transition probability. The aim of RL is always to learn the best policy and maximize the sum of discounted rewards over states, expressed as follows [39]:

$$\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right],$$

where $\pi^{*}$ and $\pi$ denote the optimal policy and a policy, respectively, $\mathbb{E}_{\pi}[\cdot]$ denotes the expectation based on the policy $\pi$ and the transition probabilities, and $\gamma$ is the discount factor in the range $[0, 1]$. The agent becomes opportunistic about the present reward when $\gamma \to 0$ and strives for long-term reward when $\gamma \to 1$. $r_{t}$ denotes the reward at time $t$.
The achievable return for executing an action in a state $s$ is represented by the value function $V(s)$. This can be updated for each state until the highest change in the value falls below a given threshold:

$$V(s) = \max_{a \in A} \sum_{s' \in S} P_{ss'}^{a}\left[R_{ss'}^{a} + \gamma V(s')\right],$$

where $P_{ss'}^{a}$ denotes the transition probability from state $s$ to state $s'$ when action $a$ has been executed, and the reward is denoted by $R_{ss'}^{a}$. Following the convergence of the algorithm, the optimal policy is achieved by taking a greedy action in each state $s$. This is expressed as follows:

$$\pi^{*}(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{ss'}^{a}\left[R_{ss'}^{a} + \gamma V(s')\right].$$

In situations where the system does not have prior knowledge about the environment, optimal policies can be achieved by a variant of the RL algorithm known as Q-learning. Let $\alpha \in [0, 1]$ be the learning rate, such that when $\alpha = 0$, the agent becomes incapable of learning, and when $\alpha = 1$, the agent only considers the most recent information. The updating rule of Q-learning is given as follows:

$$Q(s_{t}, a_{t}) \leftarrow Q(s_{t}, a_{t}) + \alpha\left[r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_{t}, a_{t})\right].$$

This implies that at time step $t$, state $s_{t}$ is observed by the agent and an action $a_{t}$ is chosen. Reward $r_{t+1}$ is received by the agent for executing the action $a_{t}$. Q-learning always tries to choose the optimal action by considering the state-action pair with the best Q-value. RL algorithms are very good at solving various problems, especially problems relating to messaging and mobile networks [39].
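As a minimal illustration, the Q-learning update rule above can be instantiated directly in a few lines of Python; the environment interface (reset/step) used here is a hypothetical stand-in, not one from the surveyed works.

```python
# Tabular Q-learning sketch: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = env.reset()                       # hypothetical env API
        done = False
        while not done:
            # epsilon-greedy action selection
            a = int(rng.integers(n_actions)) if rng.random() < epsilon \
                else int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)     # hypothetical env API
            # the Q-learning update rule given above
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q
```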
2.3.1. The Applications of the Deep Reinforcement Learning in 5G Wireless Mobile Network
The studies that applied DRL in solving machine learning problems are discussed in this section. Dai et al. [40] applied deep reinforcement learning (DRL) to develop a caching scheme for the 5G mobile network and beyond. Numerical results indicated that the DRL caching scheme is effective in maximizing caching resource utility. Dong et al. [41] proposed DRL to minimize normalized energy consumption for hybrid 5G mobile network technology in edge computing systems. A digital twin of the real network environment is used for training the DRL offline at a central server. It was found that the proposed approach minimizes normalized energy consumption with less computational complexity than the existing approaches.
Pradhan and Das [48] proposed RL for resource reservation in ultrareliable low-latency communication for the 5G network. It is found that the RL performs better than the baseline method in terms of packet drop probability and resource utilization. Zhao et al. [49] proposed the application of RL for a dynamic scheme of network slice resources to improve quality of service in the 5G network-enabled smart grid. The algorithm is able to respond to changing network demand at a fast rate when processing resource allocation. Ho et al. [50] proposed the application of a DQN-based 5G-V2X scheme for the optimization of 5G base station allocation for platooning vehicles. It attempted to provide a solution to the base station allocation problem. Xie et al. [51] applied DQN to develop an adaptive decision scheme for the initial window in 5G MEC. The scheme is able to optimize flow completion while minimizing congestion. Comparison with baseline algorithms shows that the proposed scheme converges fast with stability. Supervised learning is introduced to improve the responsiveness and efficiency of the initial window decision.
Li and Zhang [52] applied DRL in the 5G network to optimize the tradeoff between enhanced mobile broadband and ultrareliable low-latency communications while maintaining quality of service. It is found that the quality of service is achieved with the tradeoff between enhanced mobile broadband and ultrareliable low-latency communications. Yu et al. [53] proposed DRL for cloud radio access networks to maximize energy efficiency, service quality, and the connectivity of remote radio heads. The proposed algorithm is found to effectively meet user requirements and handle cell outage compensation. Mismar et al. [54] used a greedy-natured DQN for the estimation of voice bearers and data bearers in the sub-6 GHz and mmWave bands. The performance of the signal-to-interference-plus-noise ratio and the sum-rate capacity was improved. Saeidian et al. [55] proposed DQN-based downlink power control in 5G. It is found that the proposed power control approach [55] improved the data rate at the edge and reduced transmitted power compared with the baseline approaches. Abiko et al. [56] proposed DRL for the allocation of radio resources in 5G that satisfies service requirements regardless of the number of slices.
Giannopoulos et al. [57] applied DQN for the improvement of energy efficiency in multichannel transmission for 5G cognitive networks in decentralized, centralized, and transfer learning settings. Results indicated that the DQN model can enhance network energy efficiency. Gu et al. [58] developed a knowledge-assisted DRL algorithm for the design of wireless schedulers for 5G networks with time-sensitive traffic. The proposal improved quality of service and reduced convergence time. Yu et al. [53] designed a two-timescale DRL consisting of fast and slow learning processes for optimizing resource allocation, computation offloading, and caching placement. To protect the privacy of edge device data, federated learning is applied to train the DRL in a distributed fashion. Experiments show that the proposed approach reduces convergence time by over 30%. Dinh et al. [59] applied DQN for the self-optimization of access point selection based on local network state knowledge. It was found that the proposed scheme enhances throughput and improves quality of service compared with the classical methods.
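Many of the works above build on the DQN pattern, in which a neural network approximates the Q-function and is trained on replayed transitions against a periodically synchronized target network. A hedged sketch of that pattern is given below in Python/PyTorch; the state dimension, action count, and hyperparameters are illustrative assumptions rather than values from any cited study.

```python
# DQN sketch: neural Q-function, experience replay, and a target network.
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)

state_dim, n_actions, gamma = 8, 4, 0.99
policy, target = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
target.load_state_dict(policy.state_dict())     # synchronize target network
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
# replay buffer of (s, a, r, s_next, done) tuples stored as tensors
replay = deque(maxlen=10_000)

def train_step(batch_size: int = 32):
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(list(replay), batch_size)))
    q = policy(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bellman target: r + gamma * max_a' Q_target(s', a') for non-terminal s'
        y = r + gamma * (1 - done) * target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
```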
2.4. Autoencoder Architecture
The AEs are unsupervised learning algorithms capable of learning automatically from input data. The algorithms use simple learning circuits to convert input to output with as little alteration as possible. The unsupervised training of an AE proceeds in a bottom-up fashion, after which a supervised learning stage is employed for training the top layer and fine-tuning the whole architecture [60].
The framework of an $n/p/n$ autoencoder is expressed as a tuple $(n, p, m, \mathbb{F}, \mathbb{G}, \mathcal{A}, \mathcal{B}, \mathcal{X}, \Delta)$. In this case, $\mathbb{F}$ and $\mathbb{G}$ are sets; $n$ and $p$ are positive integers with $0 < p < n$; $\mathcal{B}$ denotes a class of functions from $\mathbb{F}^{n}$ to $\mathbb{G}^{p}$, and $\mathcal{A}$ denotes a class of functions from $\mathbb{G}^{p}$ to $\mathbb{F}^{n}$. $\mathcal{X} = \{x_{1}, \ldots, x_{m}\}$ is a set of $m$ training vectors in $\mathbb{F}^{n}$. In the cases where external targets exist, the equivalent set of target vectors $\{y_{1}, \ldots, y_{m}\}$ in $\mathbb{F}^{n}$ is given. $\Delta$ signifies the distortion (alteration) function over $\mathbb{F}^{n}$. For each $A \in \mathcal{A}$ and any $B \in \mathcal{B}$, the autoencoder transforms a received vector $x \in \mathbb{F}^{n}$ into an output vector $A(B(x)) \in \mathbb{F}^{n}$. The aim is to find $A$ and $B$ that minimize the overall distortion function given by this equation:

$$\min_{A, B} E(A, B) = \sum_{t=1}^{m} \Delta\big(A(B(x_{t})), x_{t}\big).$$

In nonautoassociative situations where external targets are given, the distortion minimization is defined as follows:

$$\min_{A, B} E(A, B) = \sum_{t=1}^{m} \Delta\big(A(B(x_{t})), y_{t}\big).$$

The case $p < n$ occurs where the autoencoder tries to apply a kind of feature extraction or compression. Different combinations of the transformation classes $\mathcal{A}$ and $\mathcal{B}$, the sets $\mathbb{F}$ and $\mathbb{G}$, and the distortion function $\Delta$, along with some additional constraints, can be used to produce various autoencoder architectures.
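A minimal $n/p/n$ autoencoder can be sketched as follows in Python (PyTorch is assumed): $B$ compresses the $n$-dimensional input to a $p$-dimensional code, $A$ reconstructs it, and training minimizes the squared-error distortion $E(A, B)$ defined above. The dimensions and data are illustrative stand-ins.

```python
# n/p/n autoencoder sketch: minimize sum_t Delta(A(B(x_t)), x_t).
import torch
import torch.nn as nn

n, p = 64, 8  # input dimension n, bottleneck dimension p, with p < n
B = nn.Sequential(nn.Linear(n, p), nn.ReLU())  # encoder: F^n -> G^p
A = nn.Linear(p, n)                            # decoder: G^p -> F^n
opt = torch.optim.Adam(list(A.parameters()) + list(B.parameters()), lr=1e-3)

X = torch.randn(256, n)  # stand-in for the training vectors x_1, ..., x_m
for _ in range(100):
    x_hat = A(B(X))                    # reconstruction A(B(x))
    loss = ((x_hat - X) ** 2).mean()   # squared-error distortion Delta
    opt.zero_grad(); loss.backward(); opt.step()
```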
2.4.1. The Autoencoder Architecture Applications in 5G Wireless Mobile Network
The papers that applied the AE in solving problems in the 5G wireless mobile network are discussed in this section. For example, Lei et al. [61] proposed a caching strategy based on the Stacked Sparse AE (SSAE) in the Evolved Packet Core of 5G mobile wireless networks. Network functions virtualization (NFV) and software-defined networking (SDN) are used for the development of the virtual distributed deep learning on the SSAE. Subsequently, the SSAE predicted the content popularity. The caching strategy is generated by the SDN controller based on the predicted result and then synchronized to each cache node via the flow table for strategy execution. The deep learning-based strategy is found to improve cache performance over the baseline methods. Kim et al. [62] presented deep autoencoder sparse code multiple access (Deep-SCMA) for 5G mobile wireless networks. The Deep-SCMA codebook reduces the bit error rate through adaptive construction and deep autoencoder-based decoding and encoding. Results indicated that the Deep-SCMA scheme achieved a lower bit error rate and faster computation than the conventional scheme.
2.5. Hybrid Deep Learning Algorithm
The purpose of hybridizing intelligent algorithms is to exploit the strengths of the individual algorithms [63, 64]. Some of the studies combined different deep learning architectures to form hybrid algorithms. Before delving into the hybrid architectures, a brief discussion of hybrid algorithms is presented. A hybrid intelligent algorithm is typically robust and efficient because it combines complementary features to offset the weaknesses of the constituent algorithms. Algorithms are hybridized to improve performance, support multitask applications, and achieve multiple functions. The degree of interaction between the modules in hybrid models varies; it can be loosely coupled, tightly coupled, fully coupled, or transformational [65]. Hybridization gives synergetic effects to the algorithms, as the limitations of the individual algorithms are overcome. The hybridization of intelligent algorithms has brought about many new intelligent algorithm designs [66].
2.5.1. Applications of Hybrid Deep Learning Architecture in 5G Wireless Mobile Network
Some of the studies combined different deep learning architectures to form a hybrid architecture, while others combined deep learning and shallow algorithms to form the hybrid. This section presents the papers that designed hybrid deep learning for solving machine learning problems in the 5G wireless mobile network. Luo et al. [67] employed a hybrid of CNN and LSTM (CNN-LSTM) to predict channel state information in a 5G wireless mobile network. Two outdoor and two indoor scenarios were used for the evaluation of the proposed scheme. The result indicated that the CNN-LSTM predicts channel state information in the 5G network with an average difference ratio of 2.650%–3.457% within a very fast convergence time. Similarly, Huang et al. [68] proposed a combination of CNN and LSTM (CNN-LSTM)-based multitasking for the prediction of 5G mobile network traffic loads. The CNN-LSTM model is found to successfully extract geographical and temporal features. The CNN-LSTM can predict the minimum, maximum, and average traffic loads in the 5G mobile network, and it performs better than the baseline algorithms. Luo et al. [69] proposed a combination of CNN and deep Q-learning (CNN-DQL) for dynamic transmission power control to improve the performance of non-line-of-sight transmission in the 5G network. The CNN is used to predict the Q-function offline before conducting online deep Q-learning to search for the control strategy. The approach is found to maximize power transmission and quality of service.
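As a hedged sketch of the CNN-LSTM pattern used in works such as [67, 68], the following Python/PyTorch model applies a CNN to each time step to extract spatial features and an LSTM to model their temporal evolution; all shapes, sizes, and the scalar prediction head are illustrative assumptions, not the architectures of the cited papers.

```python
# CNN-LSTM hybrid sketch: per-frame spatial features feed a temporal LSTM.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))          # spatial features per frame
        self.lstm = nn.LSTM(4 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # e.g., a predicted traffic load

    def forward(self, x):                     # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)  # (b*t, 64)
        out, _ = self.lstm(feats.view(b, t, -1))      # temporal modelling
        return self.head(out[:, -1])                  # predict from last step

pred = CNNLSTM()(torch.randn(2, 10, 1, 16, 16))
print(pred.shape)  # torch.Size([2, 1])
```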
Sedik et al. [75] used a combination of CNN and LSTM for the detection of fake biometrics in 5G-based smart cities. Simulation results show that the proposed CNN-LSTM detected the altered biometrics.
Maimó et al. [76] proposed a DBN and LSTM-based anomaly detection scheme that inspects the network traffic flow in real time. The first level in the scheme executes the DBN on each RAN very fast and detects anomalous symptoms in the network traffic flow. The collected anomalous symptoms serve as inputs to the LSTM, which identifies the patterns of the cyber-attacks. The work has been extended in [77] with more extensive and comprehensive results.
2.6. Long Short-Term Memory
The LSTM solves a major problem of vanishing or exploding gradients associated with recurrent neural networks (RNNs). The error posed by the vanishing gradient problem prevents RNNs from learning in situations where the time lag between the input events and the target signals is above 5–10 discrete time steps. The LSTM, on the other hand, is capable of bridging time lags of up to 1000 discrete time steps. It does this through special units termed cells, which comprise constant error carousels (CECs) that enforce constant error flow [80]. Access to the cells is granted by multiplicative gate units.
The hidden layer of a standard LSTM network consists of memory blocks. A memory block has some memory cells and a pair of multiplicative gate units that control the flow of input and output to and from all the cells in the given block. A memory cell contains the CEC, which handles the vanishing gradient problem by keeping its local error backflow constant (without vanishing or exploding) when the cell is not receiving new input or error signals. The pair of gating units, the input and output gates, shield the CEC from forward and backward error flow, respectively. The activation of the CEC determines the state of the cell. The activation $y^{\mathrm{in}_{j}}$ of the input gate and the activation $y^{\mathrm{out}_{j}}$ of the output gate at discrete time step $t$ can be computed by the following [80]:

$$y^{\mathrm{in}_{j}}(t) = f\Big(\sum_{u} w_{\mathrm{in}_{j}u}\, y^{u}(t-1)\Big), \qquad y^{\mathrm{out}_{j}}(t) = f\Big(\sum_{u} w_{\mathrm{out}_{j}u}\, y^{u}(t-1)\Big),$$

where $j$ denotes the memory block, $f$ denotes the logistic sigmoid in the range $[0, 1]$, and $w_{lu}$ denotes the connection weight from unit $u$ to unit $l$.
To compute the internal state $s_{c_{j}^{v}}$ of a given memory cell, the squashed gated input at $t > 0$, where $s_{c_{j}^{v}}(0) = 0$, can be added as follows:

$$s_{c_{j}^{v}}(t) = s_{c_{j}^{v}}(t-1) + y^{\mathrm{in}_{j}}(t)\, g\Big(\sum_{u} w_{c_{j}^{v}u}\, y^{u}(t-1)\Big),$$

where $c_{j}^{v}$ denotes cell $v$ of memory block $j$ and squashing of the cell input is done by $g$. To determine the output $y^{c_{j}^{v}}$ of a cell, the internal state $s_{c_{j}^{v}}$ is squashed using an output squashing function $h$ and gated with the activation of the output gate, expressed as follows:

$$y^{c_{j}^{v}}(t) = y^{\mathrm{out}_{j}}(t)\, h\big(s_{c_{j}^{v}}(t)\big),$$

where $h$ denotes a centered sigmoid in the range $[-1, 1]$.
The output units $k$ of a network with a layered topology consisting of a hidden layer with memory blocks and standard input and output layers can be defined by the following equation:

$$y^{k}(t) = f\Big(\sum_{u} w_{ku}\, y^{u}(t)\Big),$$

where $f$ denotes the squashing function (logistic sigmoid in the range $[0, 1]$) and $u$ ranges over all input units and the cells in the hidden layer. The LSTM is capable of solving tasks with complex long time lags that could never be solved by the RNN.
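The gating equations above can be rendered directly in a few lines of NumPy for a single memory cell and time step; the weights and source activations below are random stand-ins used purely for illustration.

```python
# One LSTM memory-cell step following the equations above.
import numpy as np

def sigmoid(z):                      # logistic sigmoid f in [0, 1]
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
u = rng.standard_normal(6)           # source unit activations y^u(t-1)
w_in, w_out, w_c = (rng.standard_normal(6) for _ in range(3))

y_in = sigmoid(w_in @ u)             # input gate activation
y_out = sigmoid(w_out @ u)           # output gate activation
g = 4.0 * sigmoid(w_c @ u) - 2.0     # cell input squashed by g into [-2, 2]

s_prev = 0.0                         # CEC internal state, s(0) = 0
s = s_prev + y_in * g                # constant-error-carousel update
h = 2.0 * sigmoid(s) - 1.0           # output squashing h into [-1, 1]
y_cell = y_out * h                   # gated cell output
print(y_cell)
```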
2.6.1. Exploring Long Short-Term Memory in 5G Wireless Mobile Network
The LSTM has been explored in finding solutions to machine learning problems in the 5G wireless mobile network.
Yu et al. [9] studied resource allocation of TV multimedia services for the 5G cloud radio access network. The study proposes a deep learning framework for resource allocation. The DRL is integrated with the users' bandwidth and power resource allocation. Subsequently, the LSTM is applied for the construction of the traffic multicast service, and it improves energy efficiency. Liu et al. [82] proposed LSTM to predict hotspots for the potential formation of virtual small cells in the 5G wireless network. The LSTM is found to predict the hotspots with accuracy and low latency and to improve energy efficiency compared with the traditional approaches. Chen et al. [83] applied LSTM for the prediction of traffic flow in the 5G mobile wireless network. The LSTM is combined with the broad learning system to improve its performance. Subsequently, the LSTM is used to predict the traffic flow in the 5G wireless network, and it predicts accurately while maintaining low complexity and convergence time. Memon et al. [84] predicted the next packet time based on traffic traces using LSTM. The LSTM predicts the dynamic sleep time in discontinuous reception in 5G wireless mobile networks. It is found to improve power savings.
Gumaei et al. [85] combined blockchain and DRNN for drone identification in 5G-enabled edge computing. A dataset was collected from raw radio frequency signals taken from many drones under several flight modes for the training of the DRNN model. The DRNN is applied to detect drones from radio frequency signals. Ullah et al. [86] used a hybrid of the control flow graph and DRNN for securing smart services rendered by 5G-enabled IoT. The DRNN is applied to predict clone applications. Results show that the approach recorded over 90% accuracy in predicting cloned applications from Android application stores.
2.7. Generative Adversarial Networks
The GANs are deep learning models consisting of both supervised and unsupervised learning methods [87]. A GAN basically uses two models, namely, the generative and discriminative models, as shown in Figure 2. The generative model acts as an image synthesizer and is capable of forging images that are analogous to real images. The discriminative model, on the other hand, serves as an expert that isolates real images from forged images. The discriminative and generative models compete against each other and are trained in parallel [88]. The discriminator gets access to forged images by interacting with the generator. An error signal guides the discriminator in identifying forged images, and the generator uses the same error signal to forge more realistic images. The two models are deployed in the form of multilayer networks with convolutional and fully connected layers.

Let $x$ be a natural image obtained from a certain distribution $p_{\mathrm{data}}$, and let $z$ be a random vector in $\mathbb{R}^{d}$ that comes from a uniform distribution in the range $[-1, 1]$; however, other distributions like the multivariate normal distribution can also be applied. Let the generative and discriminative models be denoted by $G$ and $D$, respectively. The generative model receives $z$ as an input and forges an image $x' = G(z)$ as output having the same form as $x$. Let $p_{g}$ denote the distribution of $x'$. The probability that an input image is obtained from $p_{\mathrm{data}}$ is calculated by the discriminative model. Note that, ideally, $D(x) = 1$ if $x \sim p_{\mathrm{data}}$ and $D(x) = 0$ if $x \sim p_{g}$. The generative and the discriminative models of the GAN can be simultaneously trained by the following equation [87]:

$$\min_{G} \max_{D}\; v(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big].$$

The equation given can be solved by alternating the two steps of updating the gradient given as follows:

$$\theta_{D}^{(k+1)} = \theta_{D}^{(k)} + \eta^{(k)} \nabla_{\theta_{D}} v\big(\theta_{D}^{(k)}, \theta_{G}^{(k)}\big), \qquad \theta_{G}^{(k+1)} = \theta_{G}^{(k)} - \eta^{(k)} \nabla_{\theta_{G}} v\big(\theta_{D}^{(k+1)}, \theta_{G}^{(k)}\big),$$

where $\theta_{D}$ and $\theta_{G}$ denote the parameters of $D$ and $G$, respectively, $k$ denotes the iteration number, and $\eta$ denotes the learning rate.
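The alternating updates can be sketched as a short training loop in Python/PyTorch; the generator and discriminator sizes, the stand-in data, and the learning rates are illustrative assumptions, and the generator uses the common non-saturating form of its loss.

```python
# GAN training-loop sketch implementing the alternating D/G updates above.
import torch
import torch.nn as nn

d_z, d_x = 16, 64
G = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(), nn.Linear(32, d_x))
D = nn.Sequential(nn.Linear(d_x, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(128, d_x)  # stand-in for samples from p_data
for _ in range(100):
    z = torch.rand(128, d_z) * 2 - 1  # z ~ Uniform[-1, 1]
    fake = G(z)
    # D step: minimizing this BCE ascends log D(x) + log(1 - D(G(z))).
    loss_d = bce(D(real), torch.ones(128, 1)) + \
             bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # G step: non-saturating generator loss, -log D(G(z)).
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```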
2.8. Dense Deep Neural Network
The DDNN, sometimes referred to as the multilayer perceptron (MLP) or simply a feedforward neural network with multiple hidden layers, is an essential deep learning model with the goal of approximating a function $f^{*}$. For instance, a classifier $y = f^{*}(x)$ maps an input $x$ to a category $y$; a feedforward network works by defining a mapping $y = f(x; \theta)$ and at the same time learning which values of $\theta$ give the best function approximation. The models are termed feedforward because data pass from $x$ through the function under evaluation via the intermediate computations that define $f$ and lastly to the output $y$. Feedforward neural networks are called networks because they are composed of multiple different functions. The functions are arranged in the form of a directed acyclic graph. For instance, functions $f^{(1)}$, $f^{(2)}$, and $f^{(3)}$ linked together form a chain:

$$f(x) = f^{(3)}\big(f^{(2)}\big(f^{(1)}(x)\big)\big).$$

Generally, these forms of chain structures are the most applied structures for neural networks. $f^{(1)}$ is known as the first layer, $f^{(2)}$ is the second layer of the network, and the other layers follow the same pattern. The depth of a model is defined by the entire length of the chain. The last layer of the network is the output layer. At the training stage, $f(x)$ is driven to match $f^{*}(x)$. Noisy, approximate examples of $f^{*}(x)$ evaluated at varying training points are provided by the training data. This implies that every example $x$ has a label $y \approx f^{*}(x)$ attached to it. The essence of the training examples is to specify directly that at each point $x$, the output layer is expected to produce a value close to $y$. However, the training data do not specify the behavior of the other layers; instead, the learning algorithm decides how to exploit those layers to arrive at $y$. These layers are called hidden layers. Each hidden layer is a form of vector, and the dimension of the layers defines the model's width [91].
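The chain structure can be written out almost literally; the sketch below composes three layers in Python/PyTorch, with widths chosen purely for illustration.

```python
# The chain f(x) = f3(f2(f1(x))) as a three-layer feedforward network.
import torch
import torch.nn as nn

f1 = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # first (hidden) layer
f2 = nn.Sequential(nn.Linear(32, 32), nn.ReLU())  # second (hidden) layer
f3 = nn.Linear(32, 3)                             # output layer

def f(x: torch.Tensor) -> torch.Tensor:
    return f3(f2(f1(x)))  # depth = length of the chain

y = f(torch.randn(5, 10))
print(y.shape)  # torch.Size([5, 3])
```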
2.8.1. Exploring Dense Deep Neural Network in 5G Wireless Mobile Network
Butt et al. [95] proposed a DNN-based RF fingerprinting method for user equipment positioning in the 5G mobile network. The DNN framework is able to predict the user equipment position in the 5G network. Sim et al. [96] proposed a DDNN for the selection of beams compatible with 5G New Radio. The DDNN is able to select the mmWave beam, and it reduced the beam sweeping overhead. El Boudani et al. [97] proposed a deep learning-based co-operative architecture in the 5G network for 3D indoor positioning. The proposed approach is applied to predict the 3D location of a mobile station, and it is found to perform better than the baseline algorithms. Thantharate et al. [98] used a DDNN to detect and eliminate security threats before they attack the 5G core wireless network. The proposed model has the ability to sell a network slice as a service, serving different services on a single infrastructure that is reliable and highly secured. Chergui and Verikoukis [99] proposed deep learning for slicing resource allocation based on the service level agreement for 5G network reliability and end-to-end slicing. A gated RNN is used for the prediction of slice traffic, while at every virtual network the DDNN is used to estimate the needed resources.
Ali et al. [100] proposed a DNN for resource allocation to meet the requirements of the 5G network. The DNN achieved solutions for the resource allocation and remote radio head problems in the C-RAN. Rathore et al. [101] proposed a DNN with blockchain to empower the security scheme for intelligent 5G IoT. The framework operates across the four layers of the emerging cloud computing architectures. The simulation of the proposed framework demonstrated efficiency and effectiveness in securing 5G-enabled IoT systems.
2.9. The 5G Wireless Mobile Network Technology and 5G Powered Internet of Vehicle
This section presents the 5G technologies mostly targeted by researchers for deep learning-based solutions. The discussion includes the applicability of the 5G technologies in the IoV. Few papers were found to apply deep learning algorithms to solve machine learning problems in the 5G-powered IoV. Chiroma et al. [102] argued that deep learning algorithms are anticipated to drive data analytics in the IoV for better understanding and improvement of the IoV, because large-scale data are expected to be collected from the IoV as a result of vehicle mobility in the IoV environment.
Network slicing logically isolates the network functions and resources that are meant for the vertical market on a common network infrastructure. Network slicing can span all the 5G network domains across the core network and radio access network segments [103]. The mmWave communications in 5G significantly improve the amount of bandwidth [27]. Sparse code multiple access is a code-based nonorthogonal multiple access scheme that improves spectral efficiency and connectivity to meet the standards of the 5G wireless mobile network [62]. Other 5G technologies are presented in Table 1. The 5G technologies in Table 1 are extracted from the papers analyzed in Section 3.
The 5G wireless mobile network was predicted to eliminate the challenges of the IoV by providing fast connections, low latency, and reliable connections for IoV applications [105]. For example, 5G network slicing can cope with varied use cases and the different demands of many tenants over the 5G infrastructure in the vehicle-to-everything communications ecosystem [103]. The 5G wireless mobile network can be used to guarantee security in the IoV [19].
Insufficient spectrum motivated interest in enabling 5G vehicular communications in the mmWave band. The mmWave band is the technology that offers a very rich spectrum to support the flow of very large volumes of data at high speed. It is crucial especially for the development of vehicular applications, in view of the fact that modern vehicles are embedded with many sensors and thus generate large-scale data [106]. A 5G-based IoV embedded with SDN, cloud, and fog is developed by Benalia et al. [18]; the cloud and the fog enhance the processing and computing capability for controlling traffic. On the other hand, flexibility, scalability, ease of programming, and global knowledge of the network are provided by the SDN. It uses 5G MIMO and beamforming to achieve high-speed communication. The 5G-based IoV has the capacity to disseminate data efficiently and flexibly. The 5G slice for vehicular infotainment applications is anticipated to apply multiple radio access technologies for the purpose of achieving high throughput as well as accessing cloud content remotely or near a node. The diagnosis and management of vehicles are performed remotely via a slice configured to support the bidirectional flow of small amounts of data with low frequency between vehicles and remote servers outside the core network [107].
2.9.1. Deep Learning Algorithms in 5G Wireless Mobile Network Powered Internet of Vehicles
In the 5G-based IoV, network selection is performed by the Fuzzy CNN (FCNN). The vehicle-to-vehicle pairs were selected using the jellyfish optimization algorithm. Dynamic Q-learning and the FCNN are applied to develop a vertical handover decision that combines 5G mmWave, LTE, and DSRC in the IoV. The performance of the FCNN is evaluated using the following metrics: handover failure, handover success probability, redundant handover, throughput, packet loss, and delay [108]. The scarcity of intensive studies on data security and privacy preservation prompted the investigation of vehicular crowd sensing. A blockchain-enabled vehicular crowd sensing scheme based on DRL is applied for the protection of user privacy and security in the 5G-powered IoV. The DRL is used for the selection of active miners and transactions, thus minimizing latency and maximizing the security of the blockchain. The nonorthogonal multiple access subchannels are allocated by a two-sided matching algorithm. The scheme is found to protect against common attacks, provide maximum security, and preserve privacy and integrity [109]. Similarly, the privacy risk of centralized model training motivated the application of federated learning to develop a federated learning-based scheme in the 5G-powered IoV for license plate recognition. The data for the modeling were harvested on individual mobile phones instead of the server. It was found that the federated learning scheme preserved privacy and achieved high accuracy as well as effective communication cost (see Kong et al. [110]).
3. The 5G Wireless Mobile Network Domain
Figure 3 presents the taxonomy of the domains of application in 5G wireless mobile technology. The taxonomy clearly indicates that many domains in 5G have witnessed the adoption of deep learning architectures for solving problems. The domains of application include the cybersecurity defense system, resource management, signal, mobility, energy, networking, 5G-enabled vehicular network, and mobile network. The taxonomy reveals that different aspects of resource management and the mobile network received tremendous attention from researchers.

3.1. Resource Management
This section provides the applications of deep learning algorithms in resource allocation in the 5G technology. Bega et al. [26] developed DeepCog, based on a 3D CNN, for resource management in the 5G mobile network. In the 5G technology, the network infrastructure is divided into slices. DeepCog is designed to allocate to each slice its own needed resources. DeepCog was evaluated in a real-world scenario and found to be very effective. Huang et al. [28] presented deep learning for the allocation of co-operative resources based on channel conditions in the 5G mobile wireless network. The study generated a CNN by applying channel information and the resource allocation intended for optimization. The generated CNN can assist in making full-scale channel information available in place of traditional optimal resource utilization, especially in a dynamic channel environment. The method is found to be effective in reducing the complexity of the optimization, reducing computational time, and producing satisfactory performance.
Chergui and Verikoukis [99] proposed deep learning for slicing resource allocation based on the service level agreement for 5G network reliability and end-to-end slicing. A gated recurrent neural network is used for the prediction of slice traffic, while at every virtual network the DDNN is used to estimate the needed resources. Abbas et al. [90] proposed a network slicing scheme that can slice and effectively manage radio access network and core network resources. Subsequently, a GAN is deployed to manage the network resources, and it is found to perform better in terms of bandwidth and latency. 5G-enabled TV multimedia resource allocation was studied by Yu et al. [9], and a deep learning framework was proposed. The DRL is integrated with the users' bandwidth and power resource allocation. Subsequently, the LSTM is applied for the construction of the traffic multicast service, and it improves energy efficiency.
Abiko et al. [56] proposed DRL for the allocation of radio resources in 5G that satisfies service requirements regardless of the number of slices. Ho et al. [50] proposed the application of a DQN-based 5G-V2X scheme for the optimization of 5G base station allocation for platooning vehicles. It attempts to provide a solution to the base station allocation problem. Zhao et al. [49] proposed the application of RL for a dynamic scheme of network slice resources to improve quality of service in the 5G network-enabled smart grid. The algorithm is able to respond to changing network demand at a fast rate when processing resource allocation. Pradhan and Das [48] proposed RL for resource reservation in ultrareliable low-latency communication for the 5G network. It is found that RL performs better than the baseline method in terms of packet drop probability and resource utilization.
Tang et al. [47] proposed DQN-based uplink/downlink resource allocation for the 5G heterogeneous network. The features of the complex network information were extracted using a deep belief network. The Q-value based on the DQN with experience replay is applied to change the time division duplex uplink/downlink ratio based on the reward mechanism. The proposed DQN-based time division duplex is able to improve network performance in terms of throughput and packet loss rate compared with resource allocation based on the traditional time division duplex. Li et al. [44] applied an adaptive DQN for on-demand service function chaining mapping strategies in 5G. In the proposed approach, an agent makes decisions using low-complexity heuristic service function chaining mapping algorithms to meet users' needs. The proposal is found to use the entire system resources efficiently by scheduling the two heuristics effectively after learning from the episodes.
Abidi et al. [70] hybridized the glowworm swarm optimization and deer hunting optimization algorithms to optimize the structure of a hybrid DBN and ANN for 5G network slicing. The proposed model is found to accurately provide 5G network slicing. Ahmed et al. [35] applied the CNN to solve the spectrum access problem for the 5G/B5G cognitive radio network of IoT. The intelligent CNN-based model learns to locate spectrum holes for users with over 90% accuracy. Ali et al. [100] proposed a DNN for resource allocation to meet the requirements of the 5G network. The DNN achieved solutions for the resource allocation and remote radio head problems in the C-RAN.
3.2. Energy/Power Transmission
Energy/power transmission [111] is an issue in the 5G wireless mobile network; as such, many researchers have applied deep learning algorithms to solve the problem of energy efficiency [112] in the 5G network. Dong et al. [41] proposed DRL to minimize normalized energy consumption for hybrid 5G mobile network technology in edge computing systems. A digital twin of the real network environment is used for training the DRL offline at a central server. It was found that the proposed approach minimizes normalized energy consumption with less computational complexity than the existing approaches. Luo et al. [69] proposed a combination of CNN and deep Q-learning (CNN-DQL) for dynamic transmission power control to improve the performance of non-line-of-sight transmission in the 5G network. The CNN is used to predict the Q-function offline before conducting online deep Q-learning to search for the control strategy. The approach is found to maximize power transmission and quality of service. Saetan et al. [93] addressed the impact of imperfect successive interference cancellation from the fairness perspective for downlink nonorthogonal multiple access. The DDNN is applied to predict the power allocation factor. Results indicate that the performance of the DDNN is comparable to the exhaustive search. A similar study was carried out by Saetan and Thipchaksurat [94], but the focus was on sum-rate maximization. Liu et al. [82] proposed LSTM to predict hotspots for the potential formation of virtual small cells in the 5G wireless network. The LSTM is found to predict the hotspots with accuracy and low latency and to improve energy efficiency compared with the traditional approaches.
Saeidian et al. [55] proposed DQN-based downlink power control in 5G. It is found that the proposed power control approach [55] improves the data rate at the edge and reduces transmitted power compared with the baseline approaches. Yu et al. [46] proposed DRL for cloud radio access networks to maximize energy efficiency, service quality, and the connectivity of remote radio heads. The proposed algorithm is found to effectively meet user requirements and handle cell outage compensation. Xia et al. [45] proposed a DQN-based offloading algorithm for 5G multicell MEC to obtain the optimal offloading policy for mobile phone users. It is found that the proposed algorithm is able to outperform the baseline algorithm by significantly reducing the energy cost of the mobile devices and the delay experienced by their users. Giannopoulos et al. [57] applied DQN for the improvement of energy efficiency in multichannel transmission for 5G cognitive networks in decentralized, centralized, and transfer learning settings. Results indicated that the DQN model can enhance network energy efficiency.
3.3. Cybersecurity Defense Systems
The 5G wireless mobile network requires protection from cyber-attacks [113]. Therefore, mechanisms and protocols as a basis for the protection of the 5G network are needed to address the security challenges [114]. Ravi [115] argued that it is highly necessary to proffer effective and efficient security breach detection mechanisms using intelligent systems. Maimó et al. [76] proposed a deep learning anomaly detection scheme for network flows to effectively and efficiently search for attacks in the 5G mobile wireless network. The DBN and LSTM-based anomaly detection scheme inspects the network traffic flow in real time. The first level in the scheme executes the DBN on each RAN very fast and detects anomalous symptoms in the network traffic flow. The collected anomalous symptoms serve as inputs to the LSTM, which identifies the patterns of the cyber-attacks. The work has been extended in [77] with more extensive and comprehensive results. Maimó et al. [78] extended the work in [77] by integrating a mobile edge computing (MEC) architecture in the management of 5G wireless network anomaly detection autonomously in real time based on policies. The policies provide effective, efficient, and dynamic management of the computing resources used during the process of anomaly detection in the 5G network traffic flow. Sundqvist et al. [79] proposed an Adaboosted ensemble LSTM for the detection of anomalies in the 5G radio access network. The Adaboosted ensemble LSTM is able to detect the anomalies in the radio access network very fast and with reliability.
Thantharate et al. [98] used a DDNN to detect and eliminate security threats before they attack the 5G core wireless network. The proposed model has the ability to sell a network slice as a service, serving different services on a single infrastructure that is reliable and highly secured. Doan and Zhang [34] proposed the CNN for anomaly detection in the 5G mobile wireless network. The CNN is found to be a good algorithm for the detection of intrusion while reducing the impact of latency. Hussain et al. [30] proposed a CNN-based framework to detect distributed denial-of-service attacks launched over the 5G network by botnets that control malicious devices. These attacks mainly target cyber-physical systems. The framework is found to have an accuracy of over 90% in detecting attacks.
Liu et al. [116] proposed a framework for securing federated learning in the 5G wireless network. Blockchain is embedded to protect the system against poisoning attacks. Performance analysis of the proposed framework indicated that the 5G-enabled federated learning framework is promising and robust. Ahmed et al. [10] proposed a framework that uses 5G infrastructure with pretrained CNN variant models and transfer learning for tracking multiple people. The detection is performed by YOLOv3, and the tracking is performed by the Deep SORT algorithm. Experimental results indicated that it improves the detection and tracking accuracy for multiple people. Rathore et al. [101] proposed a DNN with blockchain to empower the security scheme for intelligent 5G IoT. The framework operates across the four layers of the emerging cloud computing architectures. The simulation of the proposed framework demonstrated efficiency and effectiveness in securing 5G-enabled IoT systems. Ullah et al. [86] used a hybrid of the control flow graph and DRNN for securing smart services rendered by 5G-enabled IoT. The DRNN is applied to predict clone applications. Results show that the approach recorded over 90% accuracy in predicting cloned applications from Android application stores. Xu et al. [38] proposed an RGB stream and a spatial rich model noise stream for differentiating between adversarial and clean examples. The CNN is used to detect adversarial images, and it achieved a detection rate of over 90% accuracy. Sedik et al. [75] used a combination of CNN and LSTM for the detection of fake biometrics in 5G-based smart cities. The CNN-LSTM computes the 3-tier probability for the tampered biometric. Simulation results show that the proposed CNN-LSTM detects the altered biometrics.
3.4. Mobile Network
Ning et al. [42] developed an intelligent offloading framework based on DRL for 5G-enabled vehicular networks that uses a combination of licensed-spectrum and unlicensed-spectrum channels. A distributed DRL-based approach is developed to significantly improve the communication between macrocells and vehicles. It was found to minimize offloading cost and maintain user latency constraints simultaneously. Lastly, the approach greatly simplifies distributed offloading of traffic. Klautau et al. [15] proposed a deep Q-learning (DQN) algorithm for beam selection based on 5G mobile network MIMO data. The channel realization with transceivers and objects that represent the 5G scenario is generated by combining vehicle traffic and ray tracing simulators. The mobility and the channel were modeled. Gante et al. [27] proposed a temporal CNN for outdoor positioning of millimeter wave in the 5G mobile network. The temporal CNN achieved baseline accuracy for the non-line-of-sight millimeter wave outdoor positions with an average error of 1.78 meters while maintaining moderate bandwidth, binary data samples, and a single anchor. Dai et al. [40] applied deep reinforcement learning (DRL) to develop a caching scheme for the 5G mobile network and beyond. Numerical results indicated that the deep reinforcement learning caching scheme is effective in maximizing caching resource utility. Shahriari et al. [39] proposed a generic online learning system based on DRL for the 5G cloud radio access network load balancer. The proposed approach is subsequently deployed for load balancing in the 5G cloud radio access network. It was found that the communication load and cache misses are reduced with limited system overhead.
Kim et al. [62] presented deep autoencoder sparse code multiple access (Deep-SCMA) for the 5G mobile wireless network. The Deep-SCMA codebook reduces the bit error rate through adaptive construction and deep autoencoder-based decoding and encoding. Results indicated that the Deep-SCMA scheme achieved a lower bit error rate and faster computation than the conventional scheme. Luo et al. [67] employed a hybrid of CNN and LSTM (CNN-LSTM) to predict channel state information in a 5G wireless mobile network. Two outdoor and two indoor scenarios were used for the evaluation of the proposed scheme. The result indicated that the CNN-LSTM predicts channel state information in the 5G network with an average difference ratio of 2.650%–3.457% within a very fast convergence time.
Huang et al. [68] proposed a multitask CNN-LSTM for the prediction of 5G mobile network traffic loads. The CNN-LSTM model successfully extracts geographical and temporal features. It can predict the minimum, maximum, and average traffic loads in the 5G mobile network, and it performs better than the baseline algorithms. Ozturk et al. [81] proposed a stacked LSTM model for holistic handover cost evaluation that combines signaling overhead, latency, call dropping, and radio resource wastage, focusing on a control/data separation architecture. Analysis of the framework indicated that the stacked LSTM has the potential for holistic handover management in the 5G wireless network.
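A minimal sketch of the multitask pattern in [68] is given below: a shared Conv1D+LSTM trunk feeds three small regression heads, one each for the minimum, maximum, and average load, so the tasks share temporal features. The window length, feature count, and layer sizes are assumptions.

```python
# Hedged multitask sketch: a shared Conv1D+LSTM trunk with three regression
# heads for minimum, maximum, and average load; window length, feature count,
# and layer sizes are assumptions.
from tensorflow.keras import layers, models

WINDOW, FEATURES = 24, 4  # e.g., 24 time steps of per-cell traffic features

inp = layers.Input(shape=(WINDOW, FEATURES))
x = layers.Conv1D(32, 3, activation="relu")(inp)  # local temporal patterns
x = layers.LSTM(32)(x)                            # longer-range dependencies
outputs = [layers.Dense(1, name=name)(x) for name in ("min", "max", "avg")]

model = models.Model(inp, outputs)
model.compile(optimizer="adam", loss="mse")  # one MSE loss per head
```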
El Boudani et al. [97] proposed a deep learning-based cooperative architecture for 3D indoor positioning in the 5G network. The proposed approach is applied to predict the 3D location of a mobile station and is found to perform better than the baseline algorithms. Sim et al. [96] proposed a DDNN for beam selection compatible with 5G New Radio. The DDNN is able to select the mmWave beam, and it reduced the beam sweeping overhead. Butt et al. [95] proposed a DNN-based RF fingerprinting method for user equipment positioning in the 5G mobile network. The DNN framework is able to predict the user equipment position in the 5G network. Memon et al. [84] predicted the next packet time from traffic traces using an LSTM. The LSTM predicts the dynamic sleep time in discontinuous reception in 5G networks and is found to improve power savings. Chen et al. [83] applied an LSTM for the prediction of traffic flow in the 5G mobile wireless network. The LSTM is combined with a broad learning system to improve its performance; it predicts the traffic flow accurately while maintaining low complexity and fast convergence. Mismar et al. [54] used a greedy DQN for the estimation of voice bearers and data bearers in the sub-6 GHz and mmWave bands. The approach improved the signal-to-interference-plus-noise ratio and sum-rate capacity performance.
Li and Zhang [52] applied DRL in the 5G network to achieve a quality-of-service tradeoff between enhanced mobile broadband and ultra-reliable low-latency communications, and the desired quality of service is indeed achieved under this tradeoff. Xie et al. [51] applied a DQN to develop an adaptive decision scheme for the initial window in 5G MEC. The scheme is able to optimize flow completion while minimizing congestion. Comparison with baseline algorithms shows that the proposed scheme converges quickly and stably. Supervised learning is introduced to improve the responsiveness and efficiency of the initial window decision.
Yu et al. [53] proposed a DQN for 3D aerial base station placement to handle sudden traffic in the 5G mmWave wireless network. Findings indicate that the DQN placement scheme can search for the optimal deployment locations with low convergence time. Alhazmi et al. [33] proposed LeNet-5, a CNN variant, for the identification of signals in a cellular system environment. LeNet-5 is able to successfully identify 5G, 3G, and long-term evolution signals in the environment. Klus et al. [32] proposed a CNN for the prediction of user location. The model is first trained to set the weights; in the second stage, it reduces the number of unnecessary handovers while sustaining a high-quality connection. The CNN model predicts user location and reduces the number of handovers without affecting system throughput. Godala et al. [31] proposed a CNN for the estimation of channel state information in 5G New Radio. It is found that the proposed framework enhances spectral efficiency compared with the traditional methods.
Cheng et al. [36] proposed an attention-enhanced CNN for mmWave modeling in 5G network communications. Image data capture and local feature extraction are performed by the convolutions, while the attention mechanism enhances the use of global information. The proposed scheme was found to be better than the classical methods. Clement et al. [71] combined CNN, DDNN, and LSTM into a hybrid deep architecture for modulation classification in 5G and beyond wireless communications. Principal component analysis is used for dimensionality reduction. The proposed framework classified the modulations, and results show that it outperforms its constituent algorithms. Gu et al. [58] developed a knowledge-assisted DRL algorithm for the design of wireless schedulers in 5G networks with time-sensitive traffic. The proposal improved quality of service and reduced convergence time. Guan et al. [37] proposed CNN transfer learning for the classification of network traffic in a dataset-constrained 5G IoT scenario. The model is trained by weight transfer and fine-tuning, and it was able to classify the network traffic with performance comparable to the classical methods. Gumaei et al. [85] combined blockchain and a DRNN for edge computing-based 5G-enabled drone identification. A dataset of raw radio frequency signals taken from many drones under several flight modes was collected for training the DRNN model, which is then applied to detect drones from radio frequency signals. Dinh et al. [59] applied a DQN for self-optimization of access point selection based on local network state knowledge. It was found that the proposed scheme enhances throughput and improves quality of service compared with the classical methods.
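The weight-transfer-and-fine-tune recipe of the kind used in [37] can be illustrated as below: pretrained convolutional weights are frozen and only a new classification head is trained, after which a few top layers may be unfrozen at a low learning rate. Representing traffic records as 96x96 images, the choice of MobileNetV2, and the number of classes are assumptions for the sketch.

```python
# Illustrative weight-transfer-and-fine-tune recipe; treating traffic records
# as 96x96 RGB "images", the MobileNetV2 backbone, and the class count are
# assumptions for the sketch, not the setup of [37].
import tensorflow as tf
from tensorflow.keras import layers, models

N_CLASSES = 5  # assumed number of traffic classes

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # weight transfer: keep pretrained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# After the new head converges, a few top blocks of `base` can be unfrozen
# and fine-tuned at a lower learning rate.
```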
Khan et al. [72] proposed a hybrid of LSTM and SVM as a reliable and efficient congestion control mechanism in 5G/6G wireless networks. The simulation was conducted on data covering multiple unknown devices, slice failures, and overloading. Results show improvement in congestion control in the 5G/6G network. Kaya and Viswanathan [73] applied an LSTM with an AE (LSTM-AE) for beam prediction in 5G mmWave. The proposed approach reduces blockage and handover by switching users to new beams or cells, reducing overhead and enhancing the signal-to-noise ratio. Zhang et al. [74] proposed a DRL-LSTM for resource allocation in the mmWave 5G network based on column generation. The DRL-LSTM addresses routing and link scheduling for the 5G mmWave network.
3.5. Caching
Pang et al. [12] applied MEC to improve caching in the 5G mobile network using an LSTM framework that enables intelligent caching, in place of the commonly used frequency- and time-based replacement strategies, together with cooperation among the base stations. The LSTM-based intelligent cache framework detects the request pattern of each base station and takes caching decisions accordingly. It reduces transmission delay by at least 14% and saves up to 23% of backhaul data traffic. Sadeghi et al. [43] proposed a DRL-based caching scheme that employs Q-learning to implement the best policy in an online manner, enabling the cache control unit of the base station to learn, monitor, and adapt to the environment dynamics. To make the algorithm scalable, linear function approximation of the Q-function is introduced, which provides fast computation and reduces complexity and memory requirements. Lei et al. [61] proposed a caching strategy based on a stacked sparse autoencoder (SSAE) in the Evolved Packet Core of the 5G mobile wireless network. Network function virtualization (NFV) and software-defined networking (SDN) are used for the deployment of the virtual distributed deep learning function based on the SSAE. The SSAE predicts the content popularity; the SDN controller then generates the caching strategy from the predicted result and synchronizes it to each cache node via the flow table for execution. The deep learning-based strategy is found to improve cache performance better than the baseline methods.
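The scalable Q-learning idea in [43], a linear Q-function per action instead of a table, can be sketched in a few lines; the feature map (e.g., content popularity and cache-occupancy indicators), the action set, and the reward are placeholder assumptions.

```python
# Minimal Q-learning with linear function approximation for cache control;
# the feature map, action set, and reward are placeholder assumptions.
import numpy as np

N_FEATURES, N_ACTIONS = 8, 4     # actions: e.g., which content to cache/evict
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1
w = np.zeros((N_ACTIONS, N_FEATURES))  # one linear Q-function per action

def q_values(phi):
    return w @ phi  # Q(s, a) = w_a . phi(s)

def choose_action(phi):
    """Epsilon-greedy over the linear Q-values."""
    if np.random.rand() < EPS:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(q_values(phi)))

def update(phi, action, reward, phi_next):
    """Standard Q-learning TD update; only the chosen action's weights move."""
    td_error = reward + GAMMA * np.max(q_values(phi_next)) - q_values(phi)[action]
    w[action] += ALPHA * td_error * phi
```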
3.6. Multiple-Input Multiple-Output
Kim et al. [92] developed a DDNN-based pilot allocation scheme for 5G massive multiple-input multiple-output (MIMO), which uses a large number of antennas to serve a multitude of end users. The proposed approach improved the performance of the 5G network, recording 99.38% accuracy with low complexity and convergence time. He et al. [29] proposed a CNN that captures the characteristics of interfering signals in order to suppress them. The proposed CNN-based multiuser MIMO (MU-MIMO) scheme for 5G suppresses the influence of correlated interference with reduced computational complexity and improved performance.
3.7. Other Domains
Razaak et al. [89] applied a GAN to precision farming, developing a GAN-based image analysis framework for the 5G wireless mobile network. The GAN-based unmanned aerial vehicle image processing framework indicated that precision farming can significantly benefit from the combination of 5G wireless network technology, unmanned aerial vehicles, and intelligent algorithms. It was demonstrated that the intelligent framework has the potential to apply drones integrated with 5G and cameras for monitoring farmlands with reduced human intervention. The deep learning algorithms applied in the different 5G domains are presented in Table 2, which shows the domain, deep learning architecture, and corresponding references for each architecture. Yu et al. [53] designed a two-timescale DRL process, consisting of fast and slow learning timescales, for optimizing resource allocation, computation offloading, and caching placement. To protect edge device data privacy, federated learning is applied to train the DRL in a distributed manner. Experiments show that the proposed approach reduces convergence time by over 30%.
The main contributions of each study, the type of deep learning architecture adopted, and the mobility level are summarized in Table 3 for an easy overview of the studies.
The deep learning architectures were extracted to show the suitability of each architecture as well as its applicability in the 5G wireless mobile network. The different deep learning architectures found in the literature on the 5G wireless mobile network are summarized in Table 4, indicating the corresponding applications in 5G.
4. Discussion on the Deep Learning in 5G Network and 5G-Enabled IoV
4.1. General Overview
The deep learning algorithms found to be frequently used in the 5G wireless network and 5G-enabled IoV are GAN, DRL, CNN, LSTM, DRNN, DDNN, and hybrids of these algorithms. The basic theory of each architecture is presented so that readers can understand how it operates to achieve its goal.
The review has indicated that it is possible to apply deep learning algorithms to solve machine learning problems in the 5G wireless mobile network and 5G-powered IoV. The deep learning algorithms have been shown to perform better than shallow machine learning techniques as well as conventional approaches. The applications of the different deep learning architectures in the 5G wireless mobile network and 5G-powered IoV are presented in the review. Most of the works concern applications of deep learning in 5G; by contrast, applications of deep learning in 5G-powered IoV are limited, with only a few recent papers.
Different taxonomies were created based on the 5G papers analyzed, covering the 5G domains and the deep learning algorithms, and connecting the 5G machine learning tasks with the learning paradigms. The taxonomies clearly show the existing gaps, the areas that attracted a lot of attention, and those that received little attention in the 5G wireless mobile network. The DRL architecture received the highest number of applications in the 5G wireless mobile network. Regarding mobility, the mobility level is mostly outdoor. Resource management and the mobile network received tremendous attention from the research community. Among learning paradigms, reinforcement learning has the highest number of applications in 5G. Federated learning has limited applications: very few studies applied it to solve machine learning problems in the 5G wireless mobile network.
The review indicated considerable interest in synergy between the communications engineering and artificial intelligence research communities, fostering collaboration between the two. It is hoped that this collaboration will continue into the future because of the interest it has generated. In addition, the race to 6G has already begun, though at an early stage, as evident in Wu et al. [117]. We therefore believe that the collaboration between the two communities will continue, because the advent of 6G wireless mobile communication will open up new challenges requiring new machine learning solutions for improving the entire 6G wireless communication system.
Deep learning has been applied to different technological aspects of 5G. These technologies are tabulated in Table 1 with the corresponding references in which each 5G technology was considered. The 5G technology that received the most remarkable attention from the research community is 5G network slicing.
The 5G technologies were found to be used to develop the 5G-enabled IoV and improve the performance of the IoV. The 5G technologies used for this improvement are 5G network slicing, 5G mmWave, 5G MIMO, and beamforming. In conjunction with deep learning algorithms, the 5G-enabled IoV is improved in the areas of security and privacy and network selection. The deep learning algorithms established to solve problems in the 5G-enabled IoV include CNN, DRL, and federated learning, leaving the greater percentage of deep learning algorithms unexplored in this setting.
4.2. Publication Trend
Figure 4 presents the publications that applied deep learning algorithms to develop cyber defense systems for the 5G wireless mobile network. Publications appear from 2017 to 2021, and the trend shows a growing number of papers over this period, clearly reflecting growing interest in developing cyber defense systems for the 5G wireless mobile network.

The publications that applied deep learning algorithms to improve energy efficiency in the 5G wireless mobile network are presented in Figure 5. The publications in this domain grow up to 2020 before dropping in 2021. Similarly, in the cybersecurity defense system domain, publications rose through 2020 and dropped in 2021; even after the drop, they remain much higher in number than in the 5G energy domain, meaning there is more interest in cybersecurity defense systems than in energy. The deep learning models are found to be effective, efficient, and robust in managing energy in the 5G mobile network, though not universally: Falkenberg et al. [118] found that a random forest outperforms the deep learning model in predicting the transmission power used for data transmission in the 5G wireless mobile network.

Figure 6 depicts the publication trend for applications of deep learning algorithms in the mobile network domain. The publications show rising interest in the mobile network domain of 5G wireless communications, as the papers keep increasing from 2017 to 2021. The number of publications exceeds those of the energy/power transmission and cybersecurity defense system domains, indicating greater interest in the mobile network.

Figure 7 shows the applications of deep learning algorithms to machine learning problems in 5G resource management. It shows a growing number of publications on a yearly basis; the longest bar corresponds to recent publications, signifying that in recent times there has been a lot of interest in resource management of the 5G wireless mobile network. However, the bar diminishes in 2021, indicating a drop in the number of publications.

Figure 8 shows the overall publications on the application of deep learning algorithms in 5G wireless communications. The papers steadily increase from 2017 to 2020 before suddenly dropping in 2021. In general, interest in the adoption of deep learning in 5G wireless communications has increased markedly based on the papers published in the last three years, as shown in Figure 8.

4.3. Learning Paradigm
Figure 9 is a taxonomy created from the data collected from the papers analyzed; only data on the applications of deep learning in 5G wireless communications were used. The learning paradigms for 5G wireless communications are shown in Figure 9, forming a taxonomy of the learning paradigms and the deep learning architectures associated with each paradigm. Variants of deep learning architecture, such as CNN, GAN, AE, LSTM, DRL, hybrid deep learning, and DDNN, are used across the different learning paradigms.

Supervised learning has the highest number of deep learning variants applied to machine learning problems in 5G wireless communications, showing that researchers depend heavily on supervised learning with different deep learning architectures. This is likely because of the availability of 5G wireless communications data with labeled inputs and outputs. Multitask learning received the lowest attention from researchers, likely because of the limited need for it in 5G wireless communications. Though multitask learning is not needed for 5G [119], it is needed in 6G-enabled MEC [120]. Intensive work on multitask learning in 5G wireless communications is still required, given the limited number of publications found in the literature.
5. Challenges and Future Perspective
Despite the tremendous successes achieved in adopting deep learning architecture to solve problems in different domains of 5G wireless mobile network, yet, there are unresolved challenges. In this section, the challenges are discussed and alternative approaches to solve the challenges in the future are suggested.
5.1. Lack of Freely Available Large-Scale Data
In machine learning, especially data analytics involving deep learning algorithms, a large-scale dataset is a key component. The performance of deep learning algorithms in data analytics heavily depends on the scale of the available datasets. However, at present, there are no publicly available large-scale 5G data for benchmarking deep learning algorithms in 5G [15]. This is a challenge to the research community, as it limits the number and scope of studies that can be conducted due to the lack of sufficient data. The lack of a benchmark dataset for 5G can hinder the basic foundation of proposing and evaluating new deep learning approaches to large-scale learning problems in the 5G wireless mobile network domain. In [30, 121], the limited amount of data hindered large-scale study. Therefore, we propose research efforts to develop a reliable repository of large-scale 5G datasets, freely available to researchers.
5.2. Complexity Constrain
In practice, machine learning mainly involves two stages, namely, training and testing. High complexity typically arises in the training phase rather than the testing phase. Mobile terminals face both energy constraints and computational complexity constraints. Therefore, deploying both the training and testing phases to the mobile terminal would result in high complexity and, compounded by the terminal's energy constraint, worsen the challenge. As such, it is recommended that researchers deploy only the testing (inference) phase of the machine learning system to shirt-pocket-sized mobile terminals [4].
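One common realization of this split, shown as a hedged sketch below, is to train a model server-side and ship only a compact inference artifact to the device, for example via TensorFlow Lite conversion; the toy model and file name are placeholders.

```python
# Sketch of the train-in-the-cloud, infer-on-device split: a model trained
# server-side is converted to a compact TensorFlow Lite artifact so that the
# handset runs inference only. The toy model and file name are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
# ... training happens server-side ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize to shrink size
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # ship only this artifact to the mobile terminal
```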
5.3. Comparative Study
Researchers mainly used a single deep learning framework for implementing their solutions, thereby limiting each study to the performance of one framework without a comparative performance evaluation across the available deep learning frameworks. For instance, Maimó et al. [76] evaluated only TensorFlow, without a comparative study of different deep learning frameworks. To determine the best-suited deep learning framework, a comparative study is required to identify the framework with the highest processing performance [76].
5.4. Obsolete Security Protection Mechanism
The cybersecurity defense system of 5G technology faces new challenges, as the current approaches used to protect mobile technology will become obsolete because of the new advanced features that 5G introduces. Typically, 5G adds new technological features, and the intrusion patterns of hackers will change as a result of this technological development. It is recommended that the existing cybersecurity defenses be adapted to accommodate the new features of 5G technology to keep protecting the system [76].
5.5. High Volume of Data
One of the main components of 5G technology is mmWave frequency communication. It delivers large-scale data at high rates, accommodating uncompressed 4K UHD video as well as various consumer electronic devices that require high throughput [122]. The data rates in 5G are 1000 times those of 4G [123]. The high volume of data means increasing demands on data storage, processing, analytics, and mining, which in turn increases hardware costs. This has rendered shallow machine learning algorithms obsolete. It is therefore suggested that deep learning algorithms with improved training speed be considered in the future for processing the data generated by 5G technology.
5.6. 5G Cybersecurity Defense System Is Vulnerable
The cybersecurity defense system for the 5G wireless mobile network relies heavily on machine learning. However, machine learning models themselves are at risk: a cyber defense system built on machine learning models is susceptible to vulnerabilities. The vulnerabilities of learning systems outlined by Pitropakis et al. [124] are as follows: the training data are prone to poisoning, for example, by inserting a Trojan horse into the machine learning model; the learning system is prone to evasion attacks at the testing phase, producing erroneous security alarms; and the defense mechanism can be bypassed to increase false negatives, or to increase both false positives and false negatives. Ftaimi and Mazri [125] pointed out that deep learning models are vulnerable to adversarial attacks. We suggest that future cybersecurity defense systems for the 5G wireless mobile network be made robust by integrating mechanisms to detect possible backdoor attacks on the learning systems and adversarial attacks on the cybersecurity defense system. The activation clustering methodology proposed by Chen et al. [126] can be applied to remove backdoors in the cybersecurity learning system for the 5G wireless mobile network.
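A hedged sketch of the activation clustering idea of [126] is given below: penultimate-layer activations of the training samples for one class are projected to a lower dimension and split into two clusters, and a suspiciously small cluster flags potentially poisoned (backdoored) samples. The projection dimension, cluster-size threshold, and stand-in activations are assumptions.

```python
# Hedged sketch of activation clustering in the spirit of [126]; the projection
# dimension, cluster-size threshold, and random stand-in activations are
# assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def flag_backdoor(activations, small_cluster_frac=0.15):
    """activations: (n_samples, n_units) penultimate-layer outputs for one class."""
    reduced = PCA(n_components=10).fit_transform(activations)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    frac = np.bincount(labels, minlength=2).min() / len(labels)
    # Clean classes tend to split roughly evenly; a tiny cluster is a red flag.
    return frac < small_cluster_frac, labels

suspicious, labels = flag_backdoor(np.random.randn(500, 128))
```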
5.7. Restriction in 5G-Enabled Application
The development of 5G-enabled applications such as the vehicular network is restricted by the cellular spectrum and energy constraints in vehicular networks [42]. Massive and rigorous research studies are required to remove these restrictions on 5G-enabled applications.
5.8. 5G Ecosystem
The directional accuracy for a secured 5G ecosystem has yet to reach its optimum. As such, room for improvement still exists, requiring further research to close the gap by extending secured 5G into mobile edge computing, core slicing, and the radio access network. It is recommended to include traffic behavior learning when training the learning system in real time, based on reinforcement learning and recurrent learning [98]. This work should also be extended to 6G [127].
5.9. Privacy
Collecting system logs from a live 5G wireless communication system can be difficult because of privacy issues, network disturbance, and the need for repeated scenarios (see Sundqvist et al. [79]). Live 5G wireless mobile network data contain confidential user data, which not all users may want exposed to third parties. A systematic deep learning approach that preserves data confidentiality while processing live 5G wireless network data remains an open challenge. We therefore pose this research question: what is a systematic deep learning approach that preserves data confidentiality before, during, and after modeling?
5.10. Complexity
The complexity cost of anticipatory resource orchestration increases as it moves toward the edge. A ratio of 3 to 1 has been quantified between the operational expenses at the cloud radio access network and those at the core [26].
6. Conclusions
In this paper, we presented a review of the adoption of deep learning architectures to solve machine learning problems in 5G wireless communications and the 5G-enabled IoV. The advances made in the applications of deep learning in the 5G network were presented in concise form. The different deep learning architectures used in solving problems in 5G wireless communications were unveiled, including CNN, LSTM, DRL, GAN, DDNN, and hybrid deep learning. The publication trend shows that the research area is attracting attention, with the highest number of publications in the last three years. Taxonomies were created for deep learning in the 5G network and its domains of application. Deep learning algorithms, especially federated learning, have started making inroads into the 5G-enabled IoV. The challenges of the existing deep learning approaches to 5G problems, together with promising directions as a new perspective for solving the identified challenges, were presented in the article. New researchers can use this article as initial reading material, and established researchers can use it to easily identify areas that require further development, leading to real-world practical applications of deep learning solutions for the 5G wireless network [128–130].
Data Availability
No data were used to support this study.
Disclosure
The lead guest editor of the special issue is a recent collaborator of the author.
Conflicts of Interest
The author declares that there are no conflicts of interest.