Abstract

Wireless technologies are growing at an unprecedented pace with the advent and increasing popularity of wireless services worldwide. As technology advances, sophisticated techniques can potentially improve the performance of wireless networks. In addition, advances in artificial intelligence (AI) enable systems to make intelligent decisions and provide automation, data analysis, insights, predictive capabilities, learning, and adaptation. Next-generation wireless networks will require sophisticated AI to automate information delivery between smart applications simultaneously. AI technologies, such as machine learning and deep learning techniques, have attained tremendous success in many applications in recent years. Hence, researchers in academia and industry have turned their attention to the advanced development of AI-enabled wireless networks. This paper comprehensively surveys AI technologies for different wireless networks with various applications. Moreover, we present various AI-enabled applications that exploit the power of AI to enable the desired evolution of wireless networks. We also discuss in detail the unsolved research challenges in this area, which represent future research trends for AI-enabled wireless networks. We provide several suggestions and solutions that can help make wireless networks more intelligent and sophisticated in handling complicated problems. In summary, this paper helps researchers quickly gain a deep understanding of up-to-date AI-based wireless network designs and identify interesting unsolved issues to pursue in their research.

1. Introduction

Wireless technologies are growing fast due to the potential increase in applications and the rapid expansion of communication infrastructures. The next generation of sophisticated communication networks integrates a larger number of connected devices coupled with sensors requiring massive data with low latency. The number of connected wireless devices, an indicator of massive global mobile traffic growth, was expected to reach 11.5 billion by the end of 2020 and 31.6 billion by 2023 [1, 2]. The recent fifth-generation (5G) cellular networks provide higher data rates and low end-to-end latency, allowing more real-time access from various technologies. However, 5G cellular networks practically consume more power to optimize and analyze the huge data volumes arising from massively connected devices. Consequently, it is essential to address this issue to automate and manage the complexity of 5G cellular networks. From the very beginning, 5G cellular networks were designed to offer three types of services according to ITU-R: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC) [3]. First, eMBB strives to achieve exceptionally rapid data rates, addressing the burgeoning demand for high-speed data access inherent in emerging services. Second, mMTC focuses on achieving heightened connection density, coupled with low data rates and minimal power consumption, thereby catering to the requirements of sensor networks in smart cities, the Internet of things (IoT), and wearable device networks. Third, the URLLC realm is dedicated to furnishing wireless services marked by unparalleled reliability and minimal latency (<1 ms), primarily tailored for control networks encompassing domains such as high-speed train management, smart meters, remote medical surgery, transport safety control, and industrial robotic control.

In addition to 5G cellular networks, sixth-generation (6G) wireless communication is currently being researched and developed to surpass the capabilities of 5G technology. It aims to provide unprecedented levels of connectivity and performance, revolutionizing the way we communicate and interact with our devices. The key focus areas of 6G cellular networks include faster data speeds, ultralow latency, massive device connectivity, enhanced energy efficiency, and intelligent network management [4]. With data rates expected to reach terabits per second, 6G promises to enable transformative technologies such as holographic communication, advanced virtual reality (VR), augmented reality (AR), and immersive gaming experiences. Additionally, 6G aims to leverage new frequency bands, such as terahertz (THz) frequencies, to accommodate the ever-increasing demand for bandwidth and enable innovative applications [5]. As the future of wireless communication, 6G holds the potential to shape various industries, from healthcare and transportation to education and entertainment, paving the way for a hyper-connected and intelligent society [6, 7]. In addition, 6G-enabled edge AI for the metaverse is making efforts to establish connections among billions of users, fostering a unified environment where the boundaries between the virtual and real worlds blur [8].

Artificial intelligence (AI) offers mobile operators the potential to operate their networks more organically and cost-efficiently. Deploying AI can make next-generation systems more robust, higher performing, and less complex [9]. Besides, integrating AI into networks is a way to address network complexity: about 53% of service providers were expected to fully integrate AI into their networks by the end of 2020 [10]. AI has been used to perform sophisticated tasks such as optimization, classification, and clustering, which are implemented in numerous fields and industries, including transportation, education, healthcare, and more. Effective AI mechanisms must be developed to collect, analyze, and make decisions on enormous data volumes. AI integration will be fundamentally more efficient at optimizing network performance than approaches that focus only on network management and scheduling. Besides, AI technology reduces manual intervention in network traffic management and aims to enhance the customer experience, improve customer service, and offer personalized services. Moreover, machine learning (ML) and deep learning (DL) stand as two sophisticated AI technologies that have garnered significant attention for their potential to address the complexities associated with managing 5G cellular network traffic [11, 12]. In this context, recent advancements in ML and AI are among the most robust solutions in terms of privacy, security, and performance gains for wireless systems [13, 14]. Besides, DL is recognized as a promising tool for handling the complex dynamics in communication networks and for its potential to optimize wireless systems. Furthermore, deep AI and ML are expected to make wireless systems more resilient to new, sophisticated threats and attacks with dynamic characteristics [15].

To our knowledge, no comprehensive survey covers AI technologies, techniques, and applications while also highlighting the open issues in wireless networks. Therefore, this survey paper comprehensively focuses on AI technology and its benefits in the next generation of wireless networks. This survey provides researchers with guidelines and an excellent platform to further their research in AI for wireless networks. It provides a detailed discussion of their implementation and improvement of network performance. Finally, we highlight the challenges and opportunities of AI-enabled wireless networks and then remark on several points for future research directions. The contributions of this survey paper can be summarized as follows:
(i) We provide an overview of several AI technologies that will facilitate readers' understanding of AI and the fundamental ideas of AI technology.
(ii) We present various AI-enabled wireless networks that offer assistance and enhancements in overall system performance.
(iii) We present the 5G/6G emerging wireless technologies that are currently at the forefront of technological progress.
(iv) We highlight several research challenges and potential future directions of AI in 5G and 6G wireless networks, contributing to the advancement of this research field.

The organization of this article follows a hierarchical approach, depicted in Figure 1. We begin by presenting several related works using various types of publications: surveys, magazines, and research papers (Section 2). Section 3 provides background on AI classifications. In Section 4, various types of AI-enabled wireless networks are discussed. Section 5 presents 6G emerging technologies. Several AI applications in mobile and wireless communications are discussed in Section 6. The challenges that face the considered architecture and open issues that should be investigated in the future are discussed in Section 7. Finally, Section 8 brings the article to a close with conclusive remarks.

2. Related Works

Several notable studies have explored the relationship between AI approaches and 5G wireless communications, offering valuable insights into their interplay and implications. Various AI types and wireless technologies have been discussed in several works. Ma et al. [16] discussed AI approaches related to autonomous vehicles (AVs) and their primary applications. They provided insights into potential opportunities for AI to be used with other emerging technologies, such as 5G communication for connected AVs, enhanced simulation platforms for AR/VR, big data, high-definition maps, and high-performance computing. Sheraz et al. [17] focused primarily on AI-based caching techniques in wireless networks using ML algorithms such as supervised, unsupervised, reinforcement, and transfer learning (TL). They outlined the existing challenges that must be addressed in future generations. Tong et al. [18] presented an emerging paradigm of AI for vehicle-to-everything (V2X). They provided details on AI techniques such as logical AI, swarm intelligence, expert systems, heuristic techniques, fuzzy logic, and language processing. Another work [19] reviewed related works on the deep integration between AI and fog computing (FC) technologies within future V2X networks. Besides, the authors presented AI-enabled, fog-assisted V2X use cases that accommodate necessary FC capabilities and exploit AI to enable the desired evolution of vehicular networks. Lin and Zhao [20] provided a survey on the role of AI-based resource management and presented challenges and open issues of deploying AI in future wireless networks.

Chen et al. [21] presented a survey of AI-empowered path selection based on ant colony optimization for static and mobile wireless sensor networks. In [22], the authors designed a framework to effectively identify attacks in wireless sensor networks within IoT networks using a whale-optimized gated recurrent unit. The framework utilizes the whale algorithm to optimize deep long short-term memory (LSTM) hyperparameters, achieving low computational overhead and strong performance. Mao et al. [23] described DL applications in wireless networks across different layers: the physical layer, data link layer, network layer, and upper layers. Besides, they discussed methods of DL implementation that have been applied in wireless networks. In terms of privacy and security, the authors in [24] surveyed ML and privacy, focusing on aspects such as privacy violation and privacy protection in incoming 6G networks. They also highlighted several applications of ML that can both protect and violate privacy. The work in [25] presented a systematic review of IoT security protection based on AI algorithms (i.e., ML and DL) that can provide new, powerful capabilities to meet IoT's security requirements. In [26], the authors provided an overview of AI-driven intelligent security for wireless networks, specifically focusing on 5G and beyond. They thoroughly examined various research studies that explore integrating AI capabilities into wireless networks to enhance security measures and address potential vulnerabilities arising from future technologies at both the physical and network levels. Moreover, they strove to uncover potential avenues for future research that can contribute to developing robust security and privacy frameworks for upcoming 6G networks. Using AI techniques in an end-to-end security design is crucial in 6G networks. Siriwardhana et al. [27] focused on exploring the role of AI in enhancing the security framework for 6G networks. They discussed numerous opportunities and challenges associated with integrating intelligent security and privacy provisions into the role of AI within 6G systems. Similarly, Bandi and Yalamarthi [28] presented a taxonomy of various security and privacy concerns associated with AI- and ML-enabled applications in the context of 6G networks.

In [29], the authors explored the potential effects of AI on the design and standardization of the air interface in wireless communication systems. They examined AI-enabled network architecture and provided a detailed analysis of the impact of AI capabilities on the design of higher-layer protocols, physical layer configurations, and cross-layer optimizations. Li et al. [30] examined AI applications and blockchain technology within 6G wireless networks. They first introduced the 6G architecture, which integrates space, air, ground, and underwater components into a four-tier network. Subsequently, two specific AI applications, namely, indoor positioning and AVs, were extensively explored. A detailed analysis was also conducted to highlight the growing significance of data security in AI applications, supported by a comprehensive case study. Letaief et al. [31] presented a vision for scalable and reliable edge AI systems that incorporate integrated designs of wireless communication strategies and decentralized ML models. They also outlined novel design principles for wireless networks, optimization methods for service-driven resource allocation, and a comprehensive end-to-end system architecture to support edge AI. In [32], the authors examined the prospective technologies that can facilitate mobile AI applications in the context of 6G. Additionally, they explored AI-enabled methodologies for 6G network design and optimization and delved into the key trends driving the evolution towards 6G. In [33], the authors highlighted and categorized nine challenges that require attention from the interdisciplinary fields of AI/ML and wireless communications, specifically concerning 6G wireless networks. The challenges encompass computation in AI, learning, distributed neural networks, and semantic communications. In [34], a network slicing architecture designed specifically for 6G networks is introduced. This architecture focuses on AI integration, allowing for the seamless combination of AI and network slicing, with the aim of enabling intelligent network management and providing the necessary support for emerging AI services within the network.

In [35], the authors focused on the intelligence aspects, security and privacy concerns, and energy efficiency challenges encountered by swarms of unmanned aerial vehicles (UAVs) operating within the context of 6G mobile networks. Through a comprehensive review, the article presented an integrated approach that combines blockchain technology and AI/ML techniques within UAV networks, leveraging the capabilities of the 6G ecosystem. The work in [36] emphasized enhancements to the multilevel architecture through the integration of AI in URLLC, offering a novel approach to wireless network design. Additionally, that paper discussed existing multilevel architectures and provided further ideas on several research gaps in using DL in 6G networks. The authors in [37] introduced the concept of "zero-touch management," which refers to a network management solution that operates autonomously with minimal human supervision. They focused on the connection point between zero-touch management and research on mobile and wireless networks, addressing a gap in the existing literature between these two domains.

Li et al. [38] highlighted the most fundamental features among the intelligence techniques in 5G cellular networks in terms of mobility management, radio resource management, and provisioning management. Besides, they discussed some open issues and challenges regarding exploiting AI to turn conventional 5G cellular networks into intelligent ones. Kamble and Shaikh [39] briefly discussed a range of resource allocation methodologies and algorithms that utilize DL techniques such as convolutional neural networks (CNN), deep neural networks (DNN), Q-learning, deep Q-learning, reinforcement learning (RL), and actor-critic methods. The goal is to dynamically optimize the allocation of resources in real time, leading to enhanced overall system performance. Sangeetha and Dhaya [40] explored the background of 6G wireless communication and examined the significant role of DL in advancing 6G wireless technologies. They also highlighted potential avenues for future research in DL-driven wireless technologies. Besides, the authors in [41] presented a comprehensive survey covering a range of ML techniques that can be applied to 6G wireless networks. Additionally, they identified and listed open research challenges that need timely solutions. In [42], the authors discussed several aspects of 6G vehicular intelligence: communications, networking, intelligence, and computing. Throughout that paper, AI technology pervades and forms the basis of vehicular intelligence. This integration of AI has a beneficial influence on the realization of diverse network functionalities, ultimately enhancing the network's proactivity and intelligence to a significant extent.

Fu et al. [43] investigated the new characteristics of 5G cellular network traffic and discussed the challenges they pose for 5G traffic management. Meanwhile, in [44], the researchers aimed to enable the imminent and future demands of 5G and beyond by presenting a cross-layer AI-based framework. They also demonstrated some AI-enabled 5G use cases to support and accommodate the capabilities of 5G cellular networks. In [45], the authors discussed the challenges and perspectives of AI paradigms for customer experience management (CEM) in 5G cellular networks. CEM's challenges were elaborated with respect to business requirements and network operators, and autonomous CEM framework guidelines were provided for next-generation networks. Additionally, AI enables mobile network operators to improve network performance while saving time and operating expenses [46]. In this regard, Shafin et al. [46] presented a possible roadmap to utilize AI-enabled cellular networks in the next generations, overcoming technical barriers in performance, complexity, and robustness. The authors in [47] demonstrated several case studies for optimizing the wireless physical and MAC layers based on explainable AI algorithms to simultaneously automate information delivery for human-machine interfacing and targeted healthcare. The summary of existing surveys and magazine papers is presented in Table 1.

3. Background on AI

This section provides a general overview of AI, focusing on its fundamentals in terms of evolution, components, and algorithms. AI imitates and improves upon human behavior and carries out tasks more effectively. The AI approach is widely used in various fields and has demonstrated its powerful abilities in diverse networks and systems. Researchers aim to improve some essential specifications in next-generation wireless networks, such as connectivity, capacity, and speed. Improving these parameters is a key challenge and requires denser base stations (BSs) with wideband frequencies. Wireless communication systems pave the way for various systems such as IoT, robots, and self-driving vehicles. These wireless systems and mobile networks face various challenges, such as ultra-low-latency requirements and big data. The importance of AI stems from its flexibility and its ability to be embedded in the overall system loop. AI has successfully tackled various types of problems in different systems, such as military defense systems and natural language processing.

Due to complex network performance and environments, traditional methods are no longer suitable; hence, designing and optimizing wireless communication systems have become more challenging, and advanced methods and algorithms are required to solve complex problems. One of the most successful and robust approaches widely adopted in recent years is AI, which learns from its surroundings and from the massive datasets generated by systems. Given the big data and unprecedented traffic being generated, researchers believe that AI methods can adapt to meet users' requirements. These algorithms can adapt the network protocols and resource management of new-generation systems so that predefined goals can be met successfully. They can solve intractable problems in wireless communication systems and improve the performance of future 5G and 6G wireless networks, which are expected to influence the service requirements of diverse aspects of our daily lives. AI methods have proven practical in many application areas, such as resource management, wireless signal processing, and channel modeling, serving physical layer design optimization, network management, and resource optimization roles in networks.

Five aspects have been discussed in [9], as shown in Figure 2, that bring AI technologies beyond 5G wireless networks: physical-layer research, network management and optimization, channel measurements, AI algorithms and applications, and standard developments. Hence, AI and ML can potentially revolutionize the future beyond 5G wireless networks by addressing complex and unstructured challenges. Their ability to adapt, learn from data, and make real-time decisions will help address the complex and evolving challenges that arise in these advanced networks, ultimately leading to more robust, efficient, and user-centric communication systems.

AI methods are important for the following reasons. First, their ability to predict and detect performance can facilitate network scheduling. Second, AI approaches are intelligent methods that can model systems more accurately than conventional methods. Finally, they can provide new possibilities for updating models as traffic patterns change [49]. These methods are generally used to optimize given design specifications and find the best solutions quickly. In manufacturing, they are used to ensure safe processes and reduce costs while increasing revenue.

3.1. Classification of AI

AI, along with its pivotal components ML and DL, has garnered significant interest within wireless communications, where it is used as a data-driven approach for addressing wireless communication issues [50]. ML techniques are mainly classified into supervised, unsupervised, and RL approaches. The first category is supervised learning, which is further divided into two subcategories (classification and regression). It uses a labeled dataset with the aim of mapping each input to one of the labeled sets. However, labeled datasets are often hard to obtain in real-world applications, which makes supervised learning inapplicable to some problems. As a result, the second category, unsupervised learning, aims to figure out data patterns and their hidden structures by learning from an unlabeled dataset. Unsupervised learning often relies on the widely used Bayesian learning method, typically employed in clustering and dimension reduction tasks. The third category is the RL approach, which empowers an agent to discover optimal actions through interactions with the environment. RL aims to maximize the reward using trial-and-error interactions instead of determining latent structure. Besides, AI approaches include deep reinforcement learning (DRL) methods that are applied to improve the performance of wireless communication systems in terms of latency, reliability, power consumption, and coverage.
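As a concrete illustration of the RL trial-and-error loop described above, the following minimal Python sketch trains a tabular Q-learning agent on a toy 5-state chain environment. The environment, reward scheme, and all hyperparameters here are illustrative assumptions for demonstration only, not taken from the surveyed works.

```python
import random

# Toy 5-state chain: agent starts at state 0 and earns a reward of +1
# only on reaching the terminal state 4; action 1 moves right, action 0
# moves left. All names and constants are illustrative.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 500

def step(state, action):
    """Environment transition: returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else state + 1
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def train(seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(EPISODES):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore at random, otherwise act greedily
            # (ties broken randomly so the untrained agent still explores).
            if random.random() < EPSILON or q[s][0] == q[s][1]:
                a = random.choice(ACTIONS)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = step(s, a)
            # Temporal-difference update toward the Bellman target.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The learned greedy policy should move right in every nonterminal state.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)
```

The same update rule, with the Q-table replaced by a neural network, is the core of the DRL methods mentioned above.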

3.1.1. Neural Network (NN)

The architecture of an NN is derived from the structure and functionality of biological neural networks. Like the neurons in the human brain, an NN comprises neurons organized into different layers. A prevalent type is the feed-forward NN, which involves an input layer for receiving external data, an output layer for providing solutions to problems, and one or more hidden layers that act as intermediaries between them. Connections between neurons in adjacent layers span from the input layer to the output layer, forming acyclic arcs. During training, the NN employs an algorithm to adjust neuron weights based on the discrepancy between desired and actual outputs. Generally, the backpropagation algorithm is utilized as the training method to learn from datasets. There are several common NN types: feedforward neural networks (FNN), CNN, recurrent neural networks (RNN), LSTM networks, gated recurrent units (GRU), and generative adversarial networks (GAN). Table 2 provides a brief description of these types.

NNs are trained through a process that entails feeding input data into the network and iteratively adjusting its internal parameters (weights and biases) to minimize the difference between predicted and actual outputs. This can be accomplished by using a loss function that quantifies the model’s prediction error. The backpropagation algorithm is then employed to calculate gradients and update weights, making the model’s predictions gradually converge toward the actual values. The network’s architecture, including the number of layers and neurons, is determined based on the problem’s complexity and the data’s characteristics.
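The training loop described above can be sketched in miniature. The following illustrative Python example trains a single sigmoid neuron by gradient descent on an MSE loss; the data, learning rate, and epoch count are toy assumptions, and a full multilayer backpropagation implementation would extend the same chain-rule idea through every layer.

```python
import math

# Minimal training-loop sketch: forward pass, MSE loss, gradient
# computation (the one-dimensional analogue of backpropagation), and
# iterative weight updates. All data and constants are illustrative.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: label is 1 when the input is positive.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

w, b, lr = 0.0, 0.0, 0.5
for epoch in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)       # forward pass
        err = p - y                  # derivative of MSE w.r.t. p (up to a constant)
        grad = err * p * (1.0 - p)   # chain rule through the sigmoid
        w -= lr * grad * x           # gradient-descent weight update
        b -= lr * grad

# Prediction error shrinks as the weights converge.
mse = sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)
print(round(mse, 4))
```

The loss steadily decreases because each update moves the weights a small step down the error gradient, exactly the convergence behavior described above.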

Hyperparameters are crucial settings that govern the learning process of an NN. Achieving optimal hyperparameters involves experimentation and validation. Techniques like grid search or random search are used to explore different combinations of hyperparameter values, such as learning rates, batch sizes, and regularization strengths. These values are evaluated using validation data to identify the configuration that yields the best performance. Hyperparameter tuning aims to strike a balance between model complexity and generalization ability.
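As a hedged sketch of the grid-search idea, the toy example below trains a single-neuron model for every combination of learning rate and epoch count and selects the combination with the lowest validation error. The model, the grid values, and the datasets are all illustrative assumptions.

```python
import math, itertools

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative training and validation splits.
train_set = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
val_set = [(-1.5, 0.0), (1.5, 1.0)]

def fit(lr, epochs):
    """Train a single sigmoid neuron with the given hyperparameters."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in train_set:
            p = sigmoid(w * x + b)
            g = (p - y) * p * (1.0 - p)
            w, b = w - lr * g * x, b - lr * g
    return w, b

def val_mse(w, b):
    """Evaluate a candidate configuration on held-out validation data."""
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in val_set) / len(val_set)

# Grid search: try every (learning rate, epochs) combination.
grid = {"lr": [0.01, 0.1, 1.0], "epochs": [50, 500]}
results = {cfg: val_mse(*fit(*cfg))
           for cfg in itertools.product(grid["lr"], grid["epochs"])}
best = min(results, key=results.get)
print(best, round(results[best], 4))
```

Random search follows the same pattern but samples configurations from the grid instead of enumerating them, which often scales better when many hyperparameters are involved.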

The accuracy of an NN is assessed using various metrics, particularly when dealing with classification tasks. Common metrics include accuracy, precision, recall, F1-score, the confusion matrix, mean square error (MSE) or mean absolute error (MAE), the receiver operating characteristic (ROC), and the area under the curve (AUC).
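For the binary case, several of the metrics listed above follow directly from the confusion-matrix counts, as in this illustrative sketch (the counts themselves are made up).

```python
# Standard textbook definitions of accuracy, precision, recall, and
# F1-score computed from binary confusion-matrix counts.
def binary_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts: 40 true positives, 10 false positives,
# 20 false negatives, 30 true negatives.
m = binary_metrics(tp=40, fp=10, fn=20, tn=30)
print({k: round(v, 3) for k, v in m.items()})
```

Note the trade-off visible even in this toy case: precision (0.8) exceeds recall (about 0.667), and the F1-score balances the two.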

Training and constructing neural networks involve iteratively adjusting weights to minimize prediction errors, while hyperparameters are optimized through experimentation. Accuracy is measured using a range of metrics, each providing specific insights into the model's performance.

3.1.2. Distributed AI

Distributed AI (DAI) refers to the concept of employing AI techniques and algorithms across a network of interconnected devices or nodes. In this paradigm, AI tasks are distributed and processed locally on individual devices rather than relying on a centralized system, allowing for collaborative decision-making and resource sharing. This approach enables efficient processing, real-time responses, and the ability to handle large-scale data across interconnected devices, making it particularly useful for applications in edge computing, IoT, and decentralized networks. In [51], the authors explored the state of the art in DAI, outlining the opportunities and challenges associated with offering DAI as a service.

Several of the mentioned types of DAI are utilized in wireless communications to enhance efficiency, adaptability, and decision-making. In [52], the authors conducted a comprehensive survey of newly introduced distributed ML techniques, thoroughly analyzing their distinctive attributes and potential advantages. The primary emphasis was placed on scrutinizing the most influential papers within this domain. The authors in [53] also critically examined various distributed ML architectures and designs, emphasizing communication optimization, resource allocation, and computation. Some of the types most relevant to wireless communications include the following.

3.1.3. Federated Learning (FL)

FL, also known as collaborative learning, is an ML technique that brings AI models to the data source and constructs an algorithm over multiple decentralized edge devices. In this type of learning, the training data are not transmitted across the different sites; only model updates are exchanged. The objective of this approach is to afford users the advantage of accessing an extensive pool of data without the necessity of central storage. Numerous applications incorporating intelligent functionalities within the mobile domain, such as image categorization, language models, and speech recognition, exemplify these qualities. Indeed, users remain susceptible to risks even when transmitting anonymized data to a central data repository. Conversely, in the context of FL, only the essential information required for model enhancement is communicated, mitigating these risks effectively. Figure 3 shows the FL framework with the federated averaging (FedAvg) process [52]. The main FL process elements are FL communication, local FL device processing, and the FedAvg process at a centralized FL server.
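A minimal sketch of the FedAvg aggregation step is shown below: the server averages per-client parameter vectors weighted by local dataset size, so the raw data never leave the clients. The client weights and dataset sizes are illustrative assumptions.

```python
# FedAvg server-side aggregation sketch: the global model is the
# dataset-size-weighted average of the clients' local parameters.
def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Three illustrative clients with differing amounts of local data;
# each "model" is just a 2-parameter vector for clarity.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
global_model = fedavg(weights, sizes)
print(global_model)
```

In a full FL round, the server would broadcast this averaged model back to the clients, each client would train locally for a few epochs, and the cycle would repeat.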

3.1.4. Multiagent Reinforcement Learning (MARL)

MARL is a subfield of AI and ML that focuses on training multiple agents to make decisions in interactive environments. In MARL, each agent learns through trial and error to maximize a cumulative reward by taking actions based on its own observations and the actions of other agents. The agents interact with each other and the environment, and their individual actions can affect both their own rewards and the rewards of other agents. MARL algorithms aim to find optimal strategies for each agent, considering both cooperative and competitive interactions. Cooperative MARL involves agents working together to achieve a shared goal, while competitive MARL focuses on agents competing against each other to maximize their individual rewards. The challenge lies in striking a balance between cooperation and competition to achieve desirable outcomes. Figure 4 demonstrates the difference between single-agent RL and MARL. In MARL, multiple agents interact with a common environment, where each agent has its own three-tuple of parameters: state s, reward r, and action a. The combination of these individual actions constitutes the joint action.

Numerous challenges within wireless communication can be effectively addressed by employing RL techniques, as they can model sequential decision-making scenarios. Given the prevalence of multiagent environments in wireless setups, where agents’ interactions and decisions influence one another, MARL emerges as a promising solution for a wide array of problems. Recent advancements in DL-based approximations, operations research, and multiagent systems have fueled the surge of interest in MARL over the last decade [54]. MARL configurations can be categorized into three primary groups based on agents’ interactions [55]. In fully cooperative settings, agents collaborate harmoniously to optimize shared goals or reward signals, often yielding similar rewards. Alternatively, agents vie against each other in fully competitive scenarios, prioritizing individual reward maximization, potentially resulting in a net reward sum of zero. Hybrid MARL systems also exist, encompassing both cooperative and competitive agents to accommodate varied dynamics.
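As a toy illustration of the fully cooperative setting described above, the sketch below runs two independent Q-learners in a one-shot coordination game where both agents receive a shared reward of +1 only when their actions match. The game and all parameters are illustrative assumptions.

```python
import random

# Two independent Q-learners in a fully cooperative one-shot matrix
# game: the shared reward is +1 only when both agents pick the same
# action, so learning should converge to a coordinated joint action.
random.seed(1)
ACTIONS = (0, 1)
ALPHA, EPSILON, ROUNDS = 0.2, 0.2, 3000

def shared_reward(a1, a2):
    return 1.0 if a1 == a2 else 0.0

q1 = {a: 0.0 for a in ACTIONS}
q2 = {a: 0.0 for a in ACTIONS}

def choose(q):
    """Epsilon-greedy action selection over a one-state Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

for _ in range(ROUNDS):
    a1, a2 = choose(q1), choose(q2)
    r = shared_reward(a1, a2)        # cooperative: identical reward for both
    q1[a1] += ALPHA * (r - q1[a1])   # each agent updates independently,
    q2[a2] += ALPHA * (r - q2[a2])   # treating the other as part of the environment

best1 = max(ACTIONS, key=lambda a: q1[a])
best2 = max(ACTIONS, key=lambda a: q2[a])
print(best1, best2)
```

A fully competitive variant would simply negate one agent's reward (a zero-sum game), and a hybrid system would mix both reward structures across agents.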

4. AI-Enabled Wireless Communications

4.1. Probabilistic ML and Bayesian Inference

ML techniques are categorized into three distinct groups: supervised learning, unsupervised learning, and RL techniques. Supervised learning excels at capturing the intricate connections between input and output data by refining the weights of a cost function over labeled examples, so the trained model can label new data at the output. In contrast to supervised learning, unsupervised learning does not predict output labels; instead, it uncovers hidden structure in the input data. RL, unlike supervised and unsupervised learning, relies on a feedback mechanism: an agent learns the relationship between inputs and outputs through rewards received from its environment. In cellular networks, URLLC is a crucial communication service in 5G and 6G, serving various mission-critical applications. To address the distinct quality of service (QoS) requirements of these URLLC applications, ML solutions show great potential and promise for use in future 6G networks [56]. In [57], the authors focused on ten essential ML roles within joint sensing and communication, communication-aided sensing, and sensing-aided communication systems. That work elaborates on the reasons and methods behind leveraging ML in these areas and identifies crucial avenues for further research.
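The Bayesian inference mentioned in this subsection's title can be illustrated with a minimal conjugate-update sketch. The scenario (estimating a link's packet-success probability) and the Beta(1, 1) prior are assumptions chosen for the example, not drawn from the surveyed works:

```python
# Minimal Bayesian inference sketch: estimating a link's packet-success
# probability with a Beta-Bernoulli model. A prior Beta(a, b) combined
# with Bernoulli observations gives the closed-form posterior
# Beta(a + successes, b + failures).

def posterior(a, b, observations):
    """Return Beta posterior parameters after Bernoulli observations."""
    successes = sum(observations)
    failures = len(observations) - successes
    return a + successes, b + failures

def posterior_mean(a, b):
    return a / (a + b)

# Uniform prior Beta(1, 1); 8 successes and 2 failures observed.
a, b = posterior(1, 1, [1, 1, 1, 0, 1, 1, 1, 0, 1, 1])
print(posterior_mean(a, b))  # 9/12 = 0.75
```

The posterior mean shrinks the raw success rate (8/10) toward the prior, which is the hedged, uncertainty-aware behavior that makes probabilistic ML attractive for noisy wireless measurements.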

4.2. Deep Learning

Today, communication systems generate large amounts of data traffic, and advanced ML methods are required for designing and managing the communication components. Given the availability of large datasets in communication systems, DNNs capable of handling this amount of data are required to solve complex tasks. DL, like ML, can be divided into three groups: supervised learning, unsupervised learning, and RL. DNNs include many hidden layers (more than two) representing the relationships between the input and output layers. Each layer consists of neurons whose activation function may be a sigmoid, rectified linear unit (ReLU), threshold, or softmax function. DNNs combine two core computational procedures: the feed-forward pass, which propagates inputs to outputs, and back-propagation, which updates the weights from the prediction error. Owing to their strong predictive power and ability to analyze data, AI methods, including DNNs, tackle the problems of conventional methods [58].
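The feed-forward pass through hidden layers and activation functions described above can be sketched as follows; the layer sizes, random weights, and input vector are illustrative (a real DNN would learn its weights via back-propagation):

```python
import numpy as np

# Sketch of a feed-forward pass: two hidden layers with ReLU activations
# and a softmax output producing class probabilities.

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

# Random weights for a 4 -> 8 -> 8 -> 3 network.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(3, 8)), np.zeros(3)

def forward(x):
    h1 = relu(W1 @ x + b1)         # first hidden layer
    h2 = relu(W2 @ h1 + b2)        # second hidden layer
    return softmax(W3 @ h2 + b3)   # output probabilities

probs = forward(np.array([0.5, -1.0, 2.0, 0.1]))
print(probs.sum())                 # softmax outputs sum to 1
```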

Researchers are increasingly facing challenges with real-time optimization in 6G cellular networks due to the complexity of hybrid beamforming. In [59], the authors examined different DNN structures to address beamforming challenges in the THz band, specifically for ultramassive multiple-input multiple-output (UM-MIMO) systems, and also explored the mathematical modeling context of these DNN architectures. Also, in [60], a DL-based sphere decoding algorithm is designed to address the detection problem in multiple-input multiple-output (MIMO) receivers.

The work in [61] introduced a hybrid DL-based congestion control mechanism that combines LSTM and support vector machine (SVM). The goal is to resolve challenges related to load balancing, network slice failure, and the provision of alternative slices when failures or overloading occur. To validate its effectiveness, the proposed model is tested through simulations over a week, incorporating multiple unknown devices and various slice failure and overloading conditions. A hybrid quantum DL model is suggested in [62], combining the functionalities of CNN and RNN. Within this model, the CNN handles tasks such as resource distribution, network reconfiguration, and slice collection, while the RNN is employed to manage error proportion, load balancing, and other relevant operations. The study in [63] explored important concerns and potential remedies concerning DL-based wireless channel estimation and channel state information (CSI) feedback in the context of 6G. This includes topics such as DL model selection, acquiring training data, and designing neural networks for improved performance. In [64], the authors presented a novel approach that leverages a feedback algorithm called argute distributed uplink beamforming which is combined with an offline-trained DL model to achieve efficient and dynamic distributed uplink beamforming for 6G-enabled Internet of vehicles applications.

4.3. Reinforcement Learning

The advent of 6G technologies has led to more intelligent and sophisticated networks. RL employs multiple agents, which collaborate with service stations in the cellular network to learn the best programming parameters and improve the quality of service. This learning approach can be seen as a balance between supervised and unsupervised learning, with previous knowledge providing indirect control over the system’s optimal performance. The agent’s objective is to maximize the long-term accumulated reward. Many wireless challenges, including resource allocation, can be formulated as RL problems. Utilizing various DRL architectures can help resolve multiple wireless network issues, leading to the development of advanced networking systems in 6G.
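To make the RL formulation concrete, the following toy sketch casts channel selection as a bandit-style RL problem; the three channels, their hidden success probabilities, and the hyperparameters are assumptions for illustration only:

```python
import random

# Toy RL formulation of a resource-allocation task: an agent repeatedly
# selects one of three channels; each channel succeeds with a fixed but
# unknown probability, and incremental Q-learning identifies the best one.

random.seed(1)
SUCCESS_PROB = [0.2, 0.8, 0.5]     # hidden channel qualities (assumed)
ALPHA, EPS = 0.05, 0.1
q = [0.0, 0.0, 0.0]

for _ in range(20000):
    if random.random() < EPS:                      # explore
        a = random.randrange(3)
    else:                                          # exploit
        a = max(range(3), key=lambda c: q[c])
    reward = 1.0 if random.random() < SUCCESS_PROB[a] else 0.0
    q[a] += ALPHA * (reward - q[a])                # incremental value update

print(max(range(3), key=lambda c: q[c]))
```

The learned Q-values approximate each channel's expected reward, so the greedy policy converges to the channel with the highest success probability; DRL replaces the Q-table with a DNN when the state-action space is too large to enumerate.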

In [65], the authors introduced a two-level RAN slicing approach based on an open radio access network (O-RAN) to allocate communication and computation RAN resources to URLLC end devices. For each level of RAN slicing, the resource allocation problem is formulated as a single-agent Markov decision process, and then, a DRL algorithm is utilized to address it. In addition, the authors in [66] conducted an extensive experiment to assess the effectiveness of employing transfer learning (TL) to expedite the convergence of RL-based RAN slicing in the given scenario. They also introduced a novel predictive approach to further enhance the TL-based acceleration by identifying and reusing the most optimal saved policy.

In [67], the authors presented an extensive overview of research endeavors that have integrated RL and DRL algorithms for managing vehicular networks, focusing particularly on vehicular telecommunications matters. Vehicular networks have garnered significant attention in research due to their unique characteristics and applications, encompassing standardization, effective traffic management, road safety, and infotainment. The work in [68] offered a thorough survey of RL-enabled mobile edge computing (MEC) and valuable perspectives for advancing this field. Moreover, it identifies the MEC challenges related to free mobility, dynamic channels, and distributed services that various RL algorithms can effectively address.

Regional satellite networks play a crucial role in the 6G communication system, providing denser coverage and more reliable communications in the target area. To optimize resource utilization, virtual network embedding (VNE) enables various virtual network requests (VNRs) to share the same substrate network resources. In [69], the authors introduced a DRL-assisted load-balanced VNE algorithm (DRL-LBVNE) tailored for regional satellite networks. Initially, they construct a cost-effective regional satellite network scenario and establish its multifold coverage constraints. Besides, the rate-splitting multiple access (RSMA) technique is used to handle extreme interference caused by nonorthogonal transmission, making it highly effective in addressing spectrum scarcity in the future 6G low earth orbit (LEO) satellite communication system. The authors in [70] focused on the power allocation problem in LEO satellite networks using the RSMA mechanism and applied a DRL technique to tackle this challenge. Additionally, the authors in [71] suggested using UAVs as aerial backhauling and relay mediums in a marine communication network complemented by satellites and coastal BSs. The research focused on examining the power allocation strategy for multisatellites in a 6G network context. Due to the nonconvex nature of the power allocation problem, the authors employ DRL as an alternative to conventional optimization techniques to solve it.

4.4. Federated Learning

FL brings applications such as edge computing and on-device learning to 5G wireless networks; however, these applications are vulnerable to poisoning and membership inference attacks, which are key threats [72]. For instance, in [73], the authors proposed a dedicated FL blockchain to ensure secure FL and create a marketplace for solving federated learning problems. The study in [74] integrated FL into the 3GPP 5G data analytics architecture to achieve much lower communication overhead. In [75], the authors explored combining an intelligent reflecting surface (IRS) and a UAV to form an aerial IRS system, providing comprehensive 360-degree panoramic full-angle reflection and flexible deployment of the IRS system. To address the challenges associated with providing high-quality, widespread network coverage while meeting data privacy and latency constraints, the authors propose an innovative solution known as FL network via over-the-air computation for IRS-assisted UAV communications. In [76], the authors examined the significance of UAVs connected to 6G networks as aerial users, utilizing ML algorithms for advanced applications such as object detection and video tracking. Traditionally, ML model training takes place at the BS, creating high communication overhead and possible privacy concerns. Distributed learning algorithms such as FL and split learning were introduced as solutions to overcome these hurdles by training ML models using shared model parameters exclusively.
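The basic FL loop underlying these works can be sketched as a minimal FedAvg-style procedure; the linear model, the client data, and the hyperparameters below are assumptions chosen for illustration:

```python
import numpy as np

# Minimal federated-averaging sketch: each client runs a few local SGD
# steps on its private data for a shared linear model y ~ w * x, and the
# server averages the resulting weights. Raw data never leaves a client;
# only model parameters are exchanged.

rng = np.random.default_rng(0)
TRUE_W = 3.0

def local_update(w, x, y, lr=0.1, steps=20):
    """A few local SGD steps on squared error for y ~ w * x."""
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)
        w -= lr * grad
    return w

# Three clients, each with private data drawn around the same model.
clients = []
for _ in range(3):
    x = rng.normal(size=50)
    y = TRUE_W * x + 0.1 * rng.normal(size=50)
    clients.append((x, y))

w_global = 0.0
for _round in range(5):                        # communication rounds
    local_ws = [local_update(w_global, x, y) for x, y in clients]
    w_global = float(np.mean(local_ws))        # server-side averaging

print(round(w_global, 1))
```

The global weight converges close to the underlying model even though no client shares its samples, which is the privacy property that motivates FL in the cited works.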

The authors in [77] proposed an innovative solution to accelerate training processes in FL environments. This scheme focused on optimizing training efficiency by formulating a problem that takes into account training loss, resource consumption, and device heterogeneity through convergence analysis. Additionally, to address the straggler effect resulting from diversity and resource constraints in edge devices, the authors introduced the IFBA searching algorithm. This algorithm seeks to find an optimal inexactness of local models and frequency band allocation for edge devices, thus improving overall FL performance. The study in [78] analyzed the influence of imperfections in the uplink and downlink links on FL. These researchers focused on a multiuser massive multi-input-multioutput (m-MIMO) 6G network and explored the weight estimation errors in each round using zero-forcing and minimum mean squared error techniques. The authors in [79] introduced an innovative FL framework with an incentive mechanism based on one-sided matching theory. This mechanism's primary goal is to encourage and select users who will actively take part in FL, with the ultimate aim of shortening FL convergence times while increasing profit for participating users.

The researchers in [80] addressed improving federated learning within wireless mesh networks by taking into account the wide variety of communication and computing resources available to routers and clients. To do this, a novel framework is proposed in which each intermediate router conducts in-network model aggregation before transmitting data to its next hop. This approach seeks to minimize outgoing data traffic, enabling the aggregation of more models despite limited communication resources. In [81], the authors provided a study on using distributed FL techniques to improve road user/object classification using LiDAR data. The authors presented a novel decentralized approach to FL called consensus-driven FL, designed specifically to work with PointNet-compliant deep ML architectures and enable efficient LiDAR point cloud processing for road actor classification. In [82], the authors showcased their work involving the development of three distinct case studies, one of which is a smart airport scenario. These case studies encompass eight different scenarios, exemplified by the concept of federated learning, and these scenarios, in turn, involve nine distinct applications and AI delivery models, such as smart surveillance. Additionally, the study encompasses a substantial array of 50 sensor and software modules, one instance being the object tracker module.

5. 6G Emerging Technologies

6G wireless communications will bring about many innovative technologies that push the limits of connectivity, data rates, latency, and applications [83]. Collectively, these emerging technologies shape the landscape of 6G wireless communications, though their exact form and eventual deployment remain to be determined. Although 6G remains in its conceptual/early research stage, several emerging technologies have already been proposed as potential inclusions in its ecosystem:

5.1. Terahertz (THz) Communication

Terahertz frequencies offer impressive data rates due to their extensive bandwidth. THz communication could even allow multi-terabit-per-second connections, making this technology well suited to bandwidth-intensive applications such as HD video streaming or AR. THz ultramassive MIMO has the capability of creating highly amplified, highly focused beams using advanced beamforming technologies [84]. THz communication stands out among various potential solutions as an exceptionally potent technology to facilitate 6G and subsequent generations. With its ability to enable terabit-per-second transmissions for emerging applications, its significance cannot be denied. In [85], the authors delved deeply into the pivotal areas necessary for developing comprehensive THz communication systems, specifically the physical, link, and network layers.
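The highly focused beams mentioned above can be illustrated with a minimal uniform-linear-array beamforming sketch; the array size, element spacing, and angles are assumed for the example and are far smaller than real THz UM-MIMO arrays:

```python
import numpy as np

# Illustrative analog beamforming with a uniform linear array (ULA):
# matched beamforming weights steered toward a target angle yield the
# full array gain N there, while other directions receive much less.

N = 64                                   # number of antenna elements
d_over_lambda = 0.5                      # half-wavelength element spacing

def steering_vector(theta_rad):
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(theta_rad))

target = np.deg2rad(20.0)
w = steering_vector(target) / np.sqrt(N)          # matched beamforming weights

def beam_gain(theta_rad):
    return np.abs(np.vdot(w, steering_vector(theta_rad))) ** 2

print(beam_gain(target))                 # equals N at the steered angle
print(beam_gain(np.deg2rad(-40.0)) < beam_gain(target))
```

The narrowness of such beams grows with N, which is why THz UM-MIMO arrays with hundreds or thousands of elements can concentrate power toward a single user but also require very accurate beam alignment.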

Understanding the fundamental characteristics of THz wireless propagation channels serves as the cornerstone for developing robust THz communication systems and applications [86]. In [87], the authors thoroughly investigated AI integration within cutting-edge THz communications, considering the challenges, prospects, and limitations associated with this integration. Furthermore, they also explored existing platforms for THz communications, from commercial options and testbeds to publicly available simulators. The researchers in [88] presented an innovative endeavor designed to facilitate adaptable and secure THz communications, marking an experimental investigation into modulation and bandwidth classification at THz frequencies using DL techniques. In [89], the authors conducted an in-depth exploration of recent studies pertaining to THz-band communication for UAVs, providing insight into its various facets and distinguishing features. Furthermore, this investigation explores challenges and prospects related to the physical layer of THz-UAV communication, providing greater knowledge of its complex nature.

5.2. Holographic Beamforming

Advanced beamforming techniques using holography could allow for more precise and efficient beamforming, providing highly targeted adaptive signal transmission and improving both spectrum efficiency and network coverage. Real-time holographic video communications provide opportunities for immersive encounters within advanced video services in the coming metaverse era. However, creating high-fidelity holographic videos requires significant bandwidth and computational power, surpassing current 5G network capabilities in terms of transmission capacity [90]. In [91], the authors introduced an innovative LSTM-based scheme that can accurately ascertain both zenith and azimuth angles, enabling accurate localization of present user locations. These users include those directly served by holographic MIMO (HMIMO) systems and those taking advantage of reflective and refractive channels emanating from intelligent omnisurface technology. The authors in [92] implemented HMIMO communications using stacked intelligent metasurfaces to eliminate the radio-frequency chains that would otherwise be required. Creating an effective channel model is one of the primary research challenges associated with wireless systems employing multiuser holographic MIMO (MU-HMIMO) technology; this challenge is made more formidable by the complex interactions arising from numerous closely spaced patch antennas [93]. In [94], the authors explored the sum-rate analysis of MU-HMIMO systems and recent advances in holographic beamforming techniques.

5.3. Quantum Communication

Quantum communication provides unparalleled security through quantum key distribution (QKD) methods that protect data transmission from eavesdropping. Integrating quantum principles into 6G networks could bring additional privacy and data protection measures into play. Quantum technology integration improves system performance while increasing security and dependability, yet its potential remains largely untapped in future communication systems. Fundamental concepts related to quantum communication, information processing, design goals, visions, and protocols have been presented in [95]. Also, in [96], the researchers provided a visionary and technology-focused account of how quantum information technology may be leveraged for the advancement of future 6G wireless networks. In [97], the authors provided an in-depth examination, analysis, and prospective outlook of quantum communications and networking compared to the conventional Internet. Discussion topics on quantum networks cover fundamental concepts, technological innovations, and associated challenges. The authors in [98] also introduced an innovative examination of quantum communication network (QCN) performance, using a physics-oriented approach derived from the quantum physical principles governing various QCN components. The necessity of this physics-based approach is examined for its importance to practical designs within various realms of ongoing research.

5.4. Reconfigurable Intelligent Surfaces (RIS)

RIS technology involves employing arrays of small, programmable reflectors that manipulate radio waves in real time for enhanced signal quality, coverage extension, and interference mitigation. RISs are expected to transform the propagation environment, creating an intelligent radio environment with dynamic capabilities for 6G wireless communications applications. This transformation could radically reshape wireless communications landscapes and enable unprecedented capabilities and functionalities. In [99], the authors conducted a systematic exploration of this emerging technology, covering its fundamental components as well as nine pivotal issues related to it, providing comparisons between massive MIMO and RISs and outlining one crucial challenge. This comprehensive overview offers an in-depth examination of RIS technology, covering its key principles, critical considerations, and areas of significance. In [100], the authors presented an in-depth examination and innovative proposal for the beyond-diagonal RIS model. This model breaks free from conventional diagonal phase shift matrices to provide a unifying framework that harmonizes diverse RIS modes and architectures. In [101], the authors explored an engaging scenario related to RIS-assisted communication systems. Their study focused on situations in which complete phase error elimination is beyond the capabilities of the RIS and user locations have an uncertain distribution. At present, ML algorithms deployed within RIS systems have seen considerable uptake, as have DL-based algorithms aimed at improving the constrained channel estimation performance of these communication-aiding systems [102]. Additionally, such implementations could potentially yield substantial cost savings [103, 104].
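The way programmable reflectors improve signal quality can be sketched with a simple phase-alignment example; the random channels and the RIS size are assumptions for illustration:

```python
import numpy as np

# Illustrative RIS phase configuration: with cascaded per-element channels
# h (BS -> RIS) and g (RIS -> user), choosing each reflector's phase to
# cancel the cascaded channel phase makes all N reflections add coherently,
# maximizing the received signal power.

rng = np.random.default_rng(0)
N = 32                                           # RIS elements
h = rng.normal(size=N) + 1j * rng.normal(size=N) # BS-to-RIS channel
g = rng.normal(size=N) + 1j * rng.normal(size=N) # RIS-to-user channel

def received_power(theta):
    """|sum_k h_k * e^(j theta_k) * g_k|^2 for RIS phase vector theta."""
    return np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

random_phases = rng.uniform(0, 2 * np.pi, size=N)
aligned_phases = -np.angle(h * g)                # co-phase every element

print(received_power(aligned_phases) > received_power(random_phases))
```

In practice, the cascaded channels are unknown and must be estimated, which is exactly where the DL-based channel estimation methods cited above come into play.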

5.5. Integrated Satellite and Terrestrial Networks (ISTN)

Seamless integration of satellite and terrestrial networks can provide global coverage and reliable connectivity, bridging the digital divide in remote and underserved areas. The concept of an ISTN holds great promise in delivering worldwide broadband access to users of all kinds. This notion has garnered significant interest from both academia and industry stakeholders. Several articles, such as [105, 106], provided a review of ISTN in terms of architectures and key techniques and highlighted its charming potential in the 6G era. In [107], the authors outlined an intelligent strategy that utilizes IRS to augment the capabilities of uplink transmission, aiming to enhance both coverage and efficiency within the ISTN context. In [108], the authors introduced an exploratory analysis of a semigrant-free transmission approach. This strategy is designed to offer versatile connectivity options for diverse user categories within the framework of ISTN. In terms of AI, RL was utilized to enable intelligence in nonterrestrial-based communications [109], DL with differential privacy for integrated terrestrial and nonterrestrial networks (ITNTs) [110], and utilizing a multiagent approach in DRL for the purpose of user association and resource allocation is explored within the context of ITNTs [111].

5.6. Hyperconnected Edge Computing

Edge computing infrastructure combined with URLLC will facilitate real-time processing of data at the network edge, which is critical for applications such as autonomous vehicles (AVs), AR, and industrial automation systems. Hyperconnected edge computing could transform 6G cellular networks: at the core of the system lie AI-powered edge devices that not only process data but also drive transformative applications [112–114]. This approach holds promise for overcoming the latency and network congestion issues associated with IoT and AR applications, among other requirements. By processing data closer to its source, AI-powered edge devices serve as local data processing hubs that significantly decrease latency and relieve strain on core networks, an architectural shift that is especially essential for real-time experiences and time-sensitive IoT applications. AI and edge computing work hand in hand to provide cutting-edge analytics and decision-making at the edge: AI algorithms can rapidly analyze streaming data in real time to extract meaningful insights that lead to intelligent local actions. In this way, hyperconnected edge computing encapsulates AI within the network architecture as an innovative means of enhancing efficiency and supporting mission-critical applications. 6G networks can leverage AI's rapid processing abilities to support an ever-more-connected world and unleash the full potential of IoT, AR, and other emerging applications while offering users seamless experiences that respond instantly and smoothly.

6. AI Applications in Wireless Networks

This section provides a comprehensive overview of AI applications in the era of 5G wireless communications. AI technology is necessary for achieving cognitive resource management in wireless communication systems [44, 115]. AI methods can generate channel models autonomously without the need for theoretical analysis [116]. Besides, with the aid of AI techniques such as DNNs, more reliable channel information can be provided by predicting future channel conditions from past measurements [117]. In [118], the DNN method is applied to dimmable optical wireless communication systems to tackle the problems that arise from signal-dependent optical channels. Also, in [119], the DRL method is applied to capture and estimate system dynamics, solving the difficulties of resource allocation and scheduling in the backhaul of millimeter wave (mm-wave) networks. AI methods are also applied to multiaccess edge computing (MEC) [120]; for example, in [121], a stochastic online learning method is presented for MEC. With the ubiquity of smart mobile gadgets and the revival of artificial intelligence, various AI-empowered mobile applications are emerging. In the following subsections, we present various use cases in wireless communication systems that apply AI-enabled methodologies and show how AI applications strengthen future wireless networks.

6.1. Big Data Analytics

Leveraging big data analytics within 6G wireless networks involves harnessing its immense power for insights, performance optimization, and improving various aspects of communication and network administration. 6G technology promises significant advancements in data speed, capacity, latency, and connectivity. These advancements produce vast quantities of data from various sources such as user devices, IoT devices, sensors, and network infrastructure [122]. Big data analytics entails collecting, processing, and examining large volumes of information to discover useful patterns, trends, and correlations. As urban areas continue to expand and technology progresses, smart cities have emerged to address resource management, urbanization, and environmental sustainability. Integrating advanced wireless networks and big data analytics plays a pivotal role in realizing smart environments and sustainable cities [123]. In [124], the authors introduced an approach utilizing reliable mobile FL classifications tailored for mobile devices within an intelligent source distribution system that integrates big data analytics and AI techniques. This system's primary goal is to distribute data efficiently using the available resources.

6.2. Data Caching

Data caching has grown increasingly valuable with the rising need for fast, low-latency services in 5G/6G wireless networks. Caching refers to storing frequently requested content nearer to end users at various network locations so as to meet this demand for fast services. Caching mechanisms within these networks often employ sophisticated algorithms and AI and ML techniques to anticipate user demand patterns and determine what content should be stored in the caches. The researchers in [125] categorized mobile edge caching solutions using RL techniques, providing invaluable insight into the ecosystem of mobile edge caching while also revealing innovative caching strategies utilizing RL with enhanced potential. Besides, an in-depth examination of cutting-edge intelligent data caching methodologies powered by learning mechanisms is presented in [125]. This review offered an invaluable roadmap through the complex world of AI-powered data caching, providing strategies that effectively balance storage needs with network efficiency. In [126], the authors focused specifically on caching at the small base station (SBS) level in an attempt to minimize data access delays. They proposed an innovative, intelligence-powered data caching solution that utilizes an RL framework to significantly shorten retrieval times in small cell networks while improving overall efficiency. In [127], the researchers presented an innovative caching and computation offloading scheme to enhance system performance in smart home environments. By employing the deep Q-network algorithm, this study generated optimal offloading and caching decisions designed to minimize system latency. Furthermore, another work presented in [128] investigated content distribution within hotspot regions.
The main contribution of this work is the deployment of several cache-enabled UAVs designed to reduce congestion within dense cellular networks, thus providing promising paths towards optimizing content delivery and network performance through dynamically incorporating UAVs.
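The popularity-driven caching idea behind these learning-based schemes can be sketched with a simple frequency-based stand-in; the Zipf-like request distribution, catalog size, and cache size are assumptions for illustration:

```python
from collections import Counter
import random

# Illustrative edge-caching sketch: learned policies try to keep the most
# popular content at the edge. Here a simple frequency-based stand-in
# caches the top-C items seen so far, and we measure the cache hit rate
# under a skewed (Zipf-like) request pattern.

random.seed(0)
CATALOG, CACHE_SIZE = 100, 10
weights = [1.0 / (i + 1) for i in range(CATALOG)]   # Zipf-like popularity

counts = Counter()
hits = requests = 0
for _ in range(5000):
    item = random.choices(range(CATALOG), weights=weights)[0]
    cache = {c for c, _ in counts.most_common(CACHE_SIZE)}
    requests += 1
    if item in cache:
        hits += 1
    counts[item] += 1                               # update popularity stats

hit_rate = hits / requests
print(hit_rate > CACHE_SIZE / CATALOG)   # beats caching a random 10% of items
```

Because requests are heavily skewed toward a few items, caching only 10% of the catalog captures well over half the traffic; RL-based schemes extend this idea to nonstationary and location-dependent popularity.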

6.3. Mobility Management

Mobility management is the set of functions in communication networks that enables mobile systems to maintain connectivity while moving. In other words, mobility management creates mobility options that enhance the efficiency and affordability of various systems. Long-term evolution (LTE) stands as a fourth-generation (4G) wireless standard that delivers enhanced network capacity and speed for mobile phones and other cellular devices, surpassing the capabilities of third-generation (3G) technology. In new radio nonstandalone deployments, the LTE carrier handles mobility management tasks. Hence, mobility management has important effects on system performance, and intelligent methods are required for controlling its various responsibilities. Several works on 5G mobility management have presented various techniques to improve network performance [129–133]. However, integrated AI-based methods and optimizations will pave the way for 5G/6G performance requirements by decreasing the latency of new emerging applications operating at both low- and high-frequency bands. Most importantly, using AI methods improves continuous connectivity to mobile user equipment. In [134], the authors introduced a range of prevalent ML types and mobility management techniques aimed at enhancing network performance. In [135], the authors presented a novel strategy that combines centralized and MARL approaches to achieve optimal performance levels. Also, the authors in [136] introduced a handover technique for satellite-ground integrated networks that leverages the adaptive learning rate with momentum framework within the deep Q-network. This approach not only enhances the precision of decision-making but also elevates the effectiveness of the learning process.
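A basic handover decision of the kind these learning methods refine can be sketched with a hysteresis rule; this is modeled loosely on an A3-style measurement event, and the threshold value and function names are illustrative, not taken from any standard or surveyed work:

```python
# Illustrative handover decision with hysteresis: trigger a handover only
# when a neighbor cell's signal exceeds the serving cell's by a margin,
# which suppresses back-and-forth "ping-pong" handovers at cell borders.

HYSTERESIS_DB = 3.0

def handover_target(serving_rsrp, neighbor_rsrps):
    """Return the index of the neighbor to hand over to, or None."""
    best = max(range(len(neighbor_rsrps)), key=lambda i: neighbor_rsrps[i])
    if neighbor_rsrps[best] > serving_rsrp + HYSTERESIS_DB:
        return best
    return None

print(handover_target(-95.0, [-96.0, -91.0]))   # neighbor 1 wins by 4 dB
print(handover_target(-95.0, [-94.0, -93.5]))   # within hysteresis: None
```

AI-based mobility management effectively learns when and how far to relax such fixed thresholds from observed user trajectories and signal dynamics.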

6.4. Intelligent Resource Management

Intelligent resource management represents a remarkable development for wireless networks, optimizing resource allocation in order to meet the growing connectivity demands in today’s fast-changing global environment. AI algorithms play a pivotal role in wireless network efficiency and adaptability, providing seamless connectivity while opening up potential avenues of innovation across industries. This development showcases AI’s transformative potential within connectivity. Researchers published evidence in [137] supporting the potential value of using an AI engine with multiple AI algorithms for comprehensive life cycle management of network slices, showing its ability to maximize both slice performance and efficiency by employing various techniques from AI. In [138], the authors explored resource management through an optimization policy, DL, and ensemble learning techniques designed to simultaneously optimize resource element reflection coefficients, transmit power allocation for BS, wideband THz resource block allocations, and allow coexistence between URLLC and eMBB systems. The authors in [139] provided an in-depth exploration into network slicing resource management, underscoring its importance, especially with tenants requesting multiple slices at once. This research explored key stages in resource management and evaluated RL/DRL algorithms at each phase for autonomous behavior to increase network slicing efficiency. Also, in [140], the authors have devised an adaptive learning framework tailored for resource and load prediction within data-driven beyond 5G/6G wireless networks using insights gained through transfer learning (TL), creating an innovative network slicing architecture that promises to redefine advanced wireless systems for years to come.

6.5. Massive MIMO and Beamforming

One of the features of the next generation is MIMO, which can multiply the capacity of a radio link by exploiting multiple transmit and receive antennas [141, 142]. AI methods can be applied for accurate channel estimation [143], mapping channels in space and frequency [144], and allocating power in massive MIMO [145]. In communication systems, the base station requires valid and accurate CSI for precoding [146, 147] and scheduling operations in massive MIMO systems. In this case, in order to improve spectrum efficiency, DL is used in [148] to tackle the problems of limited device memory at the user equipment. Channel estimation with received signal-to-noise ratio (SNR) feedback is another important factor in MIMO systems. The study in [149] utilized received SNR feedback to estimate the coefficients of the MIMO channel at the transmitter side. Additionally, the downlink channel reconstruction scheme can be optimized using DL for massive MIMO systems, where high-accuracy parameters can be estimated [150]. In the presented method, neural networks are trained on the channel model parameters instead of the channel matrix. Therefore, various benefits of AI methods have been demonstrated: compared with traditional methods, more accurate outcomes and satisfactory performance can be achieved, for example, in estimating channels and reducing the number of pilots in communication systems.
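Classical pilot-based channel estimation, which the AI methods above aim to improve upon, can be sketched with a least-squares estimator; the antenna counts, pilot length, and noise level are assumptions for illustration:

```python
import numpy as np

# Illustrative least-squares (LS) channel estimation for a MIMO link:
# known pilot symbols X are sent, Y = H X + noise is received, and the
# channel matrix is estimated as H_hat = Y X^H (X X^H)^{-1}.

rng = np.random.default_rng(0)
NR, NT, NPILOT = 4, 4, 64                 # rx antennas, tx antennas, pilots

H = rng.normal(size=(NR, NT)) + 1j * rng.normal(size=(NR, NT))
X = rng.normal(size=(NT, NPILOT)) + 1j * rng.normal(size=(NT, NPILOT))
noise = 0.01 * (rng.normal(size=(NR, NPILOT))
                + 1j * rng.normal(size=(NR, NPILOT)))
Y = H @ X + noise

Xh = X.conj().T
H_hat = Y @ Xh @ np.linalg.inv(X @ Xh)    # LS estimate of the channel

rel_err = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
print(rel_err < 0.01)                     # accurate at this pilot length/SNR
```

The LS estimate needs many pilots to stay accurate at low SNR, which is the overhead that DL-based estimators try to cut by learning channel structure from data.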

6.6. Channel Coding

In wireless communication systems, channel coding is one of the principal methods for improving overall system performance. Parameters considered in a channel coding scheme include the bit error rate, packet error rate, and computational complexity, all of which affect overall performance. Channel coding can be applied for detecting errors or for correcting them. Error detection coding is known as automatic repeat request, where the receiver can request retransmission over a two-way link. Error correction coding is known as forward error correction, in which the receiver detects errors and corrects them directly without requiring retransmission. AI methods have also been applied to the channel coding (encoder and decoder) of wireless communications. These methods are more powerful in mapping nonlinearities across various aspects, such as computational complexity, processing latency, coding performance, and power consumption [151]. In [152], the authors surveyed current trends in DL-aided channel coding. In [153], an AI method is used for designing error correction codes, and improved outcomes are demonstrated for list decoding of polar codes by learning the parameters of an optimal code. Figure 5 shows the logic of error correction using the AI method.
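As a concrete, minimal example of forward error correction (a classical baseline, not the learned polar codes of [153]), the (7,4) Hamming code encodes 4 data bits into 7 and corrects any single-bit error by using the syndrome to locate the flipped bit:

```python
import numpy as np

# Generator and parity-check matrices of the systematic (7,4) Hamming code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits):
    return (np.array(bits) @ G) % 2

def decode(word):
    syndrome = (H @ word) % 2
    if syndrome.any():                       # syndrome equals the column of H
        err = next(i for i in range(7)       # at the erroneous position
                   if np.array_equal(H[:, i], syndrome))
        word = word.copy()
        word[err] ^= 1
    return word[:4]                          # systematic code: data bits first

data = [1, 0, 1, 1]
cw = encode(data)
cw[2] ^= 1                                   # channel flips one bit
print(decode(cw).tolist())                   # -> [1, 0, 1, 1]
```

DL-aided decoders replace the hand-designed syndrome logic with a trained network, which pays off for longer codes where optimal decoding is computationally hard.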

The researchers in [154] introduced an innovative communication system in which a neural network forms the foundation. Within this framework, both the channel coding and modulation components are represented as neural networks. This architecture combines the turbo autoencoder approach with feed-forward neural networks dedicated to modulation tasks. In another work, a channel estimation approach utilizing a residual deep NN is developed in [155], exhibiting a gain of over 2 dB compared to conventional minimum-MSE channel estimation. Also, several works have discussed deep joint source-channel coding using DL for CSI feedback [156], wireless image transmission [157], low delay [158], and wireless multipath fading channels [159].

6.7. Network Management

Networks across hybrid environments are growing in complexity, and their management faces increasingly diverse challenges. Networking is a promising technology that can provide efficient computational resources for optimization methods. In the current era of integration and analytics, network innovation and evolution are becoming difficult from both theoretical and industrial perspectives. A clear example of this development is the Internet, where network operators work continuously in both wired and wireless domains on applications such as network security. Because network applications differ in their required specifications, advanced methods such as AI approaches are needed to support modern network operations. AI methods can be applied to interference and spectrum management, link adaptation, and traffic congestion. Network management can consolidate input from multiple management platforms [160]; thus, a framework utilizing ML algorithms can help create an autonomous network management system. The primary goal of such a framework is to transition from traditional human-centered network management to a new paradigm where machines take a more central role in managing and optimizing network operations [161]. The concept presented in [162] suggests an architectural integration of the network data analytics function and intent-based networking, synergizing them with ML techniques for the analysis of monitoring data. This approach culminates in a system capable of seamlessly offering automated validation and enhancement functionalities for 6G core networks. In [35], the authors explored the integration of network management, UAVs, and emerging technologies such as blockchain and AI/ML in the context of 6G networks, emphasizing the challenges faced by UAV swarms in terms of security, privacy, intelligence, and energy efficiency within the 6G mobile network.

7.1. Challenges and Complexities
7.1.1. Massive Data Handling

6G networks will generate and process massive volumes of information, creating storage, processing, and bandwidth demands that must be managed efficiently. 6G mobile technology could revolutionize data handling as more devices contribute information, and AI plays an invaluable role in managing this data influx [7, 163, 164]. High-speed 6G networks can transmit large volumes of IoT device and vehicle-generated information quickly and reliably. AI-driven storage solutions play an integral part in prioritizing and classifying this data to reduce latency, with edge computing providing local processing support that further optimizes latency management. AI analytics help sift through this vast amount of data for insights that enable real-time decision-making, such as optimizing traffic or resource allocation, as well as real-time threat detection and protection of personal information. Network management stands to gain greatly from AI's predictive abilities, and the combination of AI and 6G could be transformative across industries and daily life by efficiently meeting volume, speed, and security requirements.

7.1.2. AI Algorithm Complexity

Implementing sophisticated AI algorithms across 6G networks in real time requires considerable computational power and hardware efficiency; their complexity presents a formidable obstacle. Advanced AI algorithms must be designed and deployed seamlessly across these networks in real time, and increasingly complex AI algorithms will emerge as 6G networks expand to handle massive amounts of data. These algorithms, ranging from DL-NN to RL models, demand significant processing power and must be carefully designed and optimized to guarantee seamless real-time performance. To achieve this goal, hardware must also be tailored to these algorithms to deliver maximum real-time efficiency. Given the demanding nature of AI computations, efficient hardware becomes critical to their success. High-performance graphics processing units (GPUs), dedicated AI accelerators, and even quantum computing technologies may all help address intricate computations efficiently. Hardware's ability to process parallel tasks and execute complex matrix operations efficiently has a direct bearing on an algorithm's effectiveness, and power efficiency is also a concern. Due to the mobility of 6G networks, energy-efficient hardware becomes crucial to prevent excessive battery drain in user devices and network infrastructure. Because AI algorithm complexity is prevalent within 6G networks, hardware design and optimization must also be approached strategically. The real-time computation demands of AI algorithms require hardware that balances performance, power consumption, and scalability if their full potential in next-generation mobile networks is to be unlocked.

7.1.3. Interference Management

Interference management poses one of the greatest obstacles to 5G networks and beyond. Due to an explosion of devices and technologies, interference issues have grown increasingly complex over time. HetNets and femtocells can provide effective solutions, yet cotier interference remains an obstacle [165]. Furthermore, HetNets offer better spatial spectrum reuse and QoS performance compared with homogeneous networks. Effective resource management is crucial in HetNets to prevent interference and enable spectrum sharing [166]. The new network design incorporates various technologies, such as IoT, device-to-device (D2D) communication, mm-wave, beamforming, M-MIMO, and relay nodes, that need to be compatible with traditional networks. However, the simultaneous use of different technologies has led to signal interference issues [167]. Addressing such complexity requires the integration of advanced AI techniques that promote effective coordination and optimization. 6G networks present unique challenges because many devices share one frequency spectrum simultaneously, from smartphones and IoT sensors to AVs and industrial equipment, all coexisting on one network and often causing interference that compromises network performance and user experience. Traditional methods for managing interference between devices may prove inadequate given this scale and heterogeneity; ML algorithms can instead learn dynamic interference patterns in real time and adapt accordingly. By analyzing large volumes of data, these algorithms can anticipate and mitigate interference by dynamically adjusting parameters such as transmit power and resource allocation. AI also facilitates intelligent scheduling to ensure devices with diverging communication needs are coordinated efficiently.
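The learned power adaptation described above can be sketched as a toy contextual bandit. All gains, power levels, and the interferer's on/off pattern below are illustrative assumptions, not a real interference model: the learner observes the interference state and learns to back off when interference is high and to transmit harder otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)
powers = np.array([0.2, 0.5, 1.0])        # candidate transmit powers (W)
gain_s, gain_i, noise = 1.0, 0.4, 0.05    # toy link and interference gains

def utility(p, p_interf):
    """Shannon rate under interference, minus a power cost."""
    sinr = gain_s * p / (noise + gain_i * p_interf)
    return np.log2(1 + sinr) - 2.0 * p

# The interferer follows a pattern: high power on even slots, low on odd.
# A contextual bandit learns the best transmit power for each state.
Q = np.zeros((2, 3))
visits = np.zeros((2, 3))
for t in range(2000):
    state = t % 2
    p_interf = 1.0 if state == 0 else 0.2
    if rng.random() < 0.2:
        a = int(rng.integers(3))           # explore
    else:
        a = int(np.argmax(Q[state]))       # exploit
    r = utility(powers[a], p_interf)
    visits[state, a] += 1
    Q[state, a] += (r - Q[state, a]) / visits[state, a]

# Learned policy: back off under heavy interference, push harder otherwise
print(powers[np.argmax(Q[0])], powers[np.argmax(Q[1])])
```

Real systems would use richer state (channel measurements, neighbor activity) and deep RL rather than a small Q-table, but the feedback loop is the same.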

7.1.4. Energy Efficiency

Energy efficiency remains a top challenge, particularly when integrating power-hungry AI algorithms into wireless devices and infrastructure. As smart devices become ever more common and the IoT takes shape, wireless communication has emerged as a significant force driving social transformation. Edge intelligence can provide an important solution for improving user experiences with limited resources. However, effectively managing independent yet interconnected edge nodes to maximize decentralized learning approaches can present daunting challenges [168]. Energy-efficient computing in today's 6G networks involves increased resource use with reduced energy consumption, shifting away from the traditional perception of networks as mere transmission conduits. To address this challenge, an energy-efficient in-network computing paradigm for 6G mobile HetNets is being created by incorporating network functions onto a universal computing platform. Computing loads are alleviated as transmission overhead decreases, while data center energy consumption declines [169]. Furthermore, efficient network management plays an equally critical role in meeting stringent QoS requirements, especially within the complex and densely packed mobile HetNet framework, for tasks related to various ML approaches [170]. The drive to conserve energy has motivated multiple studies that promote eco-friendly communication practices. Even with significant technological progress, energy efficiency remains a formidable obstacle, particularly when integrating power-intensive AI algorithms into mobile networks such as 6G. Combining advanced AI with the energy constraints of wireless devices and infrastructure presents an intricate dilemma. 6G networks, with their expanded connectivity, offer immense promise to support an array of applications from AR to autonomous systems. However, implementing sophisticated AI algorithms that require substantial computational power may increase energy use, and efficiency is of the utmost importance due to the mobility factors inherent in 6G networks.

7.1.5. Privacy and Security

AI and ML play an invaluable role in shaping the 6G landscape, providing it with the means to learn from unpredictable, ever-evolving environments. However, the collaboration of AI with 6G is a double-edged sword. On the one hand, AI offers great promise to advance the privacy and security aspects of 6G; on the other, it introduces risks related to security breaches that pose significant threats to its future [171]. As AI-enabled networks open a new era of connectivity, they raise legitimate concerns regarding data privacy, security vulnerabilities, and the ethics of the AI insights generated. Concerns also persist surrounding 6G applications driven by AI/ML [28]. These concerns include protecting sensitive data, restricting unwarranted access, and mitigating potential vulnerabilities that might compromise AI-powered systems in 5G environments and beyond. Although these networks hold promises of transformational benefits, potential risks must also be carefully considered to create a responsible and secure digital landscape. Innovation must coexist with protection against risks to ensure the greatest gains can be realized without endangering the privacy, security, or well-being of individuals and societies alike.

Data privacy has become a top concern as AI algorithms collect and process increasingly large volumes of user data; any failure to manage personal information properly could result in breaches. Securing privacy through strong encryption, user consent, and transparent data handling practices is vital to protecting individual identity. Security breaches pose another significant danger: combined, 6G and AI offer both security- and privacy-enhancing capabilities and potential safety concerns. Underlying the end-to-end automation of future networks are proactive threat detection, the implementation of intelligent mitigation techniques, and the realization of self-sustaining 6G networks [27]. As part of these activities, ethical guidelines, regulations, and safeguards against the misuse of AI-generated insights are imperative for maintaining the trust of users and stakeholders alike.

7.1.6. Regulatory and Ethical Issues

Integrating AI technologies into wireless networks requires new regulatory frameworks that address safety, privacy, and ethics. In this regard, 6G networks offer unprecedented connectivity and innovation potential, but such progress requires robust regulatory and ethical frameworks that protect safety, privacy, and other moral considerations. As AI continues its rise as an influential force within 6G networks, new regulations must be put in place to ensure its safe usage; such regulations must address data privacy, user consent, and transparency. Maintaining a balance between using user data for improved services and upholding privacy rights will be crucial to building trust in AI implementation projects, alongside the ethical considerations involved. AI algorithms may cause bias, discrimination, or unexpected results, which require careful examination by regulatory bodies and technology developers alike; both must establish guidelines to mitigate risks and promote equitable outcomes for everyone involved. Artificial intelligence and wireless networks require close cooperation among legal, technological, and ethical specialists to be used effectively as engines of progress. By weighing ethical considerations while following legal regulations, 6G networks can harness AI for positive effects, benefiting individuals, industries, and society at large.

7.2. Future Trends
7.2.1. Intelligent Spectrum Management

Intelligent spectrum management will play a pivotal role in 6G networks, using AI-powered services to revolutionize how spectrum resources are allocated and utilized [7, 172]. Faced with rising connectivity requirements and an oversaturated radio frequency environment, AI's role in dynamically optimizing network performance and mitigating interference becomes all the more crucial. Spectrum allocation has historically been static, leading to inefficiency and underutilization. AI transforms networks by making real-time adjustments based on fluctuating demand and interference patterns. AI algorithms can process various sources of data relating to user behavior, device types, and environmental conditions to make informed spectrum allocation decisions. AI-powered systems can use smart frequency assignment to ensure efficient spectrum use, thus limiting congestion and increasing data throughput. AI can also detect interference sources and dynamically adjust transmission power and frequency parameters to reduce their impact on network performance. Intelligent spectrum management enables coexistence among various wireless technologies and the deployment of applications that rely on seamless and reliable connectivity, and it ensures optimal use of this scarce resource so that 6G networks reach their full potential and deliver an improved wireless experience for users and devices.
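A minimal illustration of learning-based spectrum access (the per-channel idle probabilities below are made up) is a UCB1 bandit that discovers the least-congested channel while still periodically probing the others:

```python
import math
import random

random.seed(7)
idle_prob = [0.3, 0.8, 0.5]      # hypothetical per-channel idle probabilities

def sense(ch):
    """Return 1 if the channel is idle this slot (successful access)."""
    return 1 if random.random() < idle_prob[ch] else 0

# UCB1: balance exploring channels with exploiting the least-congested one
n = [0] * 3          # times each channel was tried
mean = [0.0] * 3     # running estimate of each channel's idle rate
for t in range(1, 5001):
    ucb = [mean[c] + math.sqrt(2 * math.log(t) / n[c]) if n[c] else float("inf")
           for c in range(3)]
    c = ucb.index(max(ucb))
    r = sense(c)
    n[c] += 1
    mean[c] += (r - mean[c]) / n[c]

print(mean)   # estimated idle rates; channel 1 ends up used the most
```

The confidence bonus shrinks as a channel accumulates samples, so congested channels are revisited just often enough to notice if their load changes, which is the behavior dynamic spectrum access needs.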

7.2.2. AI-Driven Beamforming and Antenna Arrays

Advanced AI beamforming techniques will maximize signal transmission, increase coverage, and address the specific challenges presented by mm-wave frequencies. With the 5G specifications close to finalization, explorations into various strategies for 6G wireless networks have begun in earnest. Among the candidate technologies for 6G networks, RISs stand out. These surfaces allow a system to shape wireless channels with unprecedented degrees of freedom, making it possible to customize each channel's characteristics as required. However, fully understanding the radiation pattern attributes requires in-depth knowledge of how a metasurface behaves across all possible operational situations [173]. Both analytical models and exhaustive wave simulations can be used to gain more insight into radiation pattern attributes; each has limitations in certain situations while demanding significant computational resources.
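For readers unfamiliar with the baseline that AI-driven schemes improve upon, classical beamforming toward a known direction can be computed in closed form for a uniform linear array (the array size and angles below are illustrative; learned approaches would instead infer such weights from data when the geometry or channel is unknown):

```python
import numpy as np

N = 16                       # uniform linear array, half-wavelength spacing
target = np.deg2rad(30)      # steer the main lobe toward 30 degrees

def steering(theta):
    """Array response (phase across elements) for arrival angle theta."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

w = steering(target) / np.sqrt(N)      # matched (conjugate) beamforming weights

def gain_db(theta):
    return 20 * np.log10(abs(np.vdot(w, steering(theta))))

print(round(gain_db(np.deg2rad(30)), 2))   # -> 12.04 dB toward the target
print(gain_db(np.deg2rad(-10)) < 0)        # heavily attenuated off-target
```

The on-target gain is 10 log10(N) = 12.04 dB for N = 16; an RIS plays an analogous role by applying per-element phase shifts, but with a reflective surface whose behavior must be modeled or learned as the text describes.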

7.2.3. Autonomous Networks

6G networks will shift towards greater autonomy with AI-powered self-healing, self-organizing, and self-optimizing features. Mobile networks have come under immense strain due to new applications and services coming online rapidly and widespread mobile device use. Managing these demands can be extremely complex given the growing heterogeneity of networks, but embracing innovative network automation solutions may prove effective here, with zero-touch management being one approach [37]. Blockchain-based smart systems can be utilized as part of an architecture for zero-touch pervasive AI (PAI) as a service in 6G networks, creating a platform architecture aimed at streamlining deployment across application and infrastructure domains, relieving users of concerns about costs, security, or resource allocation while meeting 6G's stringent performance criteria [174]. Such a platform would need standardized PAI at every level and a unified interface to facilitate service deployment across application and infrastructure domains while simultaneously meeting the stricter performance criteria of 6G networks.

As many 6G services are mission-critical applications or directly impact human lives, ethical responsibility and trustworthiness of autonomous systems are both necessary for their successful deployment and integral parts of ensuring their long-term viability [175]. Multiple factors should be taken into consideration to assess ethical responsibility including automatic testing and monitoring, ethical principles, moral theory, blockchain implementation, and explainability to create an ethically accountable autonomous system in 6G.

7.2.4. Quantum-Assisted AI

Quantum-assisted AI promises to open new horizons for optimization, cryptography, and solving intricate wireless network challenges. Quantum computing and AI represent a powerful combination with enormous potential to revolutionize wireless communications [96, 176]. Quantum computing's ability to perform complex calculations at rapid speeds makes it the perfect partner for AI's data processing abilities. Synergies between these technologies promise breakthroughs in solving optimization problems, providing more efficient network management, resource allocation, and spectrum utilization across 6G networks. Quantum-assisted AI could add another level of protection against cyber threats by strengthening encryption processes. Quantum algorithms have the potential to strengthen encryption protocols, making them more resilient against attacks from powerful quantum computers and preserving data privacy and security within an increasingly connected 6G landscape. Besides, quantum-assisted AI can address difficult wireless communication challenges that conventional computing methods cannot, from mitigating interference and optimizing signal processing to simulating complex network scenarios.

As quantum-assisted AI continues to advance, it will challenge conventional assumptions regarding 6G networks and what can be accomplished within them. This partnership could revolutionize not only wireless communications but also industries dependent upon an unobstructed flow of information and intelligence. Emerging applications that rely heavily on data, including tactile Internet, immersive experiences (virtual or AR), autonomous mobility, and industrial automation, pose unprecedented difficulties when it comes to reaching URLLC within 6G networks. Employing various machine intelligence techniques such as DL, RL, and FL assists in developing new approaches that maintain robust 6G URLLC capabilities amidst continuous streams of training data [177]. However, their inherent constraints pose difficulties in meeting the demanding 6G URLLC criteria.

7.2.5. AI-Enabled Satellite Networks

6G networks will integrate AI into satellite communication systems to increase global coverage, disaster response times, and cross-network integration [178–180]. One key advantage is enhanced global connectivity: utilizing AI for optimization and adaptive resource allocation, satellites can adaptively alter their coverage areas and frequencies depending on real-time demand, providing uninterrupted communication in remote or underserved regions. AI-powered satellite networks also provide swift disaster response capabilities in times of crisis. AI can quickly evaluate and process data to detect areas affected by disasters, optimizing communication resources and supporting emergency services and relief activities in real time. AI can likewise play an instrumental role in more efficient disaster management and network integration: satellites may seamlessly blend in with terrestrial 6G networks, IoT devices, and edge computing systems to form an interconnected system in which AI-assisted analysis enables data-based decision-making from satellite observations. Consequently, AI technology integrated into satellite networks for 6G would significantly expand worldwide connectivity, revolutionize disaster response processes, and facilitate convergence across diverse network infrastructures, with broad societal benefits.

7.2.6. AI for Environmental Sustainability

AI will play an increasingly prominent role in 6G networks to maximize resource usage, lower energy consumption, and support green communication solutions. As demand for energy-efficient communication networks escalates, faster data rates and expanded capacity are required to support ever-increasing traffic [181]. Increased attention is being focused on the negative outcomes associated with climate change, the scarcity of raw materials, rising energy costs, and the unequal and biased utilization of technology. Therefore, in addition to traditional performance indicators, it is crucial to account for ecological impact assessments, energy usage analysis, and resource allocation, as well as inclusiveness and impartiality, when planning future strategies and performance evaluation. Consequently, AI integration in 6G networks presents an opportunity to address concerns related to inclusiveness and impartiality. This could be accomplished by developing intelligent algorithms that take account of users' varied requirements while eliminating bias in resource distribution. Furthermore, 6G networks also promise to reduce the digital divide by providing equal access to communication services. Besides, AI algorithms assist in adjusting network configuration according to energy availability and environmental conditions, resulting in eco-conscious communication infrastructures.

7.2.7. AI-Enhanced Network Slicing

AI-powered network slicing will deliver customizable network slices tailored to various industries and applications, and it will likely play a pivotal role in 6G networks [34, 182, 183]. Network slicing refers to the practice of partitioning one physical network infrastructure into multiple virtual ones, each catering to a specific use case. Based on AI technology, this approach can rapidly adapt and improve in response to users' evolving requirements. AI-powered network slicing enables precise customization of networks to suit a range of applications, such as AVs, smart cities, and industrial IoT services. AI algorithms analyze traffic patterns, user behavior, and real-time network demands to allocate resources intelligently for optimal performance, increasing resource utilization while improving user experience. An AR network slice may place greater focus on minimizing latency, while an IoT slice may prioritize conserving energy. AI integration within network slicing is central to 6G's ability to meet various use cases and accommodate complex usage situations. AI-powered network slicing could revolutionize network provisioning and management, allowing companies to meet customer demands quickly, flexibly, and accurately while providing the customized, high-performance connectivity that enables an intelligent future.
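A toy version of slice-aware resource allocation (the slice names, weights, and demands below are hypothetical) is a weighted proportional split of link capacity with per-slice demand caps; an AI-driven slicer would instead learn the weights and demands from observed traffic:

```python
def allocate(capacity, weights, caps):
    """Split capacity in proportion to slice weights, capping each slice at
    its demand and redistributing leftovers among unsaturated slices."""
    alloc = {s: 0.0 for s in weights}
    active = set(weights)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[s] for s in active)
        saturated = set()
        for s in active:
            share = remaining * weights[s] / total_w
            if alloc[s] + share >= caps[s]:
                saturated.add(s)
        if not saturated:               # everyone fits: hand out shares, done
            for s in active:
                alloc[s] += remaining * weights[s] / total_w
            break
        for s in saturated:             # pin saturated slices at their demand
            remaining -= caps[s] - alloc[s]
            alloc[s] = caps[s]
        active -= saturated
    return alloc

weights = {"urllc": 3.0, "embb": 2.0, "mmtc": 1.0}    # slice priorities
caps = {"urllc": 20.0, "embb": 100.0, "mmtc": 10.0}   # per-slice demand (Mbps)
print(allocate(100.0, weights, caps))   # urllc and mmtc capped; embb gets rest
```

With 100 Mbps of capacity, URLLC and mMTC are satisfied at their demands (20 and 10 Mbps) and eMBB absorbs the remaining 70 Mbps, mirroring the per-slice priority trade-offs described above.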

7.2.8. Distributed ML for Communications

Future wireless networks require large-scale distributed ML to power mobile AI applications, with communication being the major bottleneck to scaling training and inference across cloud services, network edges, and end-user devices. Exploring the technologies that enable mobile AI applications within 6G networks is therefore necessary. Cloud-based solutions have proven ineffective due to significant time delays, power consumption concerns, and the security and privacy risks of interconnecting wireless devices that generate large volumes of data at network edges [184]. Integrating edge computing and AI offers the chance to strategically deploy efficient computing servers near network edges. This configuration enhances advanced AI applications, such as video and audio surveillance and personalized recommendation systems, by providing intelligent decision-making at the precise moment data are generated. Distributed ML offers another benefit: by avoiding the transmission of large datasets to a central server, it reduces the privacy risks inherent to cloud-based centralized learning. The integration of AI techniques for end-to-end communications is also receiving increasing attention due to its potential to improve overall performance. Thus, the intricate relationship between AI and wireless communications has produced what are known as native AI wireless networks.
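The privacy-preserving flavor of distributed ML described above can be sketched in a few lines of FedAvg-style training: each client fits a local model on private data and only the model weights travel to the server for averaging (the model, dimensions, and learning rates below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])      # ground-truth linear model

def make_client(n):
    """Private local dataset: noisy observations of the same linear model."""
    X = rng.standard_normal((n, 2))
    y = X @ true_w + 0.1 * rng.standard_normal(n)
    return X, y

clients = [make_client(50) for _ in range(5)]   # data stays on each device

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local gradient descent starting from the global model."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# FedAvg: only model weights travel; raw data never leaves the edge
w = np.zeros(2)
for _ in range(20):
    updates = [local_step(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)                 # server aggregates

print(np.round(w, 2))   # converges close to the true weights [2, -1]
```

The communication cost per round is the size of the model, not the size of the data, which is exactly why this pattern suits bandwidth-constrained network edges.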

8. Conclusion

AI has recently proven its advanced benefits for wireless technologies; hence, these methods have gained the attention of engineers for use in their designs. The evolution of AI, its classification, and the components required to develop AI-based wireless communication systems are discussed in detail in the present study. Technical details of various AI-enabled technologies for future wireless communication have been presented. AI-enabled applications addressing different aspects of 6G mobile communication, including intelligent mobility and network management, channel coding, massive MIMO, and beamforming, have been discussed. Enabled by AI techniques, the 6G system can automatically control network structure and various resources, including slices, computing, caching, energy, and communication, to fulfil changing demands. With the application of AI in future wireless systems, network optimization can be automated. AI can enhance the intelligence of 6G networks to achieve self-management, self-protection, self-healing, and self-optimization. Finally, we highlighted several challenges and promising future research directions for AI-enabled 6G networks. AI-enabled 6G wireless communications face challenges including massive data handling, algorithm complexity, interference management, energy efficiency, privacy, and regulatory concerns. Future trends include intelligent spectrum management, AI-driven beamforming, autonomous networks, quantum-assisted AI, AI-enabled satellite networks, environmental sustainability, AI-enhanced network slicing, and distributed ML for communications, promising transformative advancements in connectivity, efficiency, and network customization. Addressing these challenges and embracing these trends will shape a more intelligent, connected, and sustainable era for 6G wireless communications, propelling the evolution of networks towards unprecedented capabilities and enhanced user experiences. As these technologies mature, the digital landscape of the future is expected to undergo a profound transformation.

9. Abbreviations List

Table 3 lists the abbreviations and acronyms used throughout this article, providing readers with a quick reference for the shortened forms.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research study was sponsored by the Universiti Teknologi Malaysia through the Professional Development Research University Grant no. Q.K130000.21A2.07E08 and by the Higher Institution Centre of Excellence (HICOE) program, Ministry of Higher Education (MOHE) Malaysia, conducted at the Universiti Teknologi Malaysia under the HiCOE Research Grant no. R.J130000.7823.4J637. Also, this work was supported in part by the Ministry of Higher Education, Research and Innovation (MoHERI) in the Sultanate of Oman under the Block Funding Program with agreement no. MoHERI/BFP/ASU/2022. Additionally, this research was also partially sponsored by the Internal Research Grant (IRG) Program of A’Sharqiyah University (ASU), with project no. ASU/IRG/22/23/03.