Abstract

Cognitive radio (CR) enables unlicensed users to exploit underutilized portions of the licensed spectrum whilst minimizing interference to licensed users. Reinforcement learning (RL), which is an artificial intelligence approach, has been applied to enable each unlicensed user to observe and carry out optimal actions for performance enhancement in a wide range of schemes in CR, such as dynamic channel selection and channel sensing. This paper presents new discussions of RL in the context of CR networks. It provides an extensive review on how most schemes have been approached using traditional and enhanced RL algorithms through state, action, and reward representations. Examples of the enhancements on RL, which do not appear in the traditional RL approach, are rules and cooperative learning. This paper also reviews performance enhancements brought about by the RL algorithms and open issues. This paper aims to establish a foundation in order to spark new research interests in this area. Our discussion has been presented in a tutorial manner so that it is comprehensible to readers outside the specialty of RL and CR.

1. Introduction

Cognitive radio (CR) [1] is the next generation wireless communication system that enables unlicensed or Secondary Users (SUs) to explore and use underutilized licensed spectrum (or white spaces) owned by the licensed or Primary Users (PUs) in order to improve the overall spectrum utilization. The CR technology improves the availability of bandwidth at each SU, and so it enhances the SU network performance. Reinforcement learning (RL) has been applied in CR so that the SUs can observe, learn, and take optimal actions on their respective local operating environment. For example, a SU observes its spectrum to identify white spaces, learns the best possible channels for data transmissions, and takes actions such as transmitting data in the best possible channel. Examples of schemes in which RL has been applied are dynamic channel selection [2], channel sensing [3], and routing [4]. To the best of our knowledge, the discussion on the application of RL in CR networks is new, despite the importance of RL in achieving the fundamental concept of CR, namely, the cognition cycle (see Section 2.2.1). This paper provides an extensive review on various aspects of the application of RL in CR networks, particularly, the components, features, and enhancements of RL. Most importantly, we present how the traditional and enhanced RL algorithms have been applied to approach most schemes in CR networks. Specifically, for each new RL model and algorithm, which is our focus, we present the purpose(s) of a CR scheme, followed by an in-depth discussion on its associated RL model (i.e., state, action, and reward representations) which characterizes the purposes, and finally the RL algorithm which aims to achieve the purpose. Hence, this paper serves as a solid foundation for further research in this area, particularly, for the enhancement of RL in various schemes in the context of CR, which can be achieved using new extensions in existing schemes, and for the application of RL in new schemes.

The rest of this paper is organized as follows. Section 2 presents RL and CR networks. Section 3 presents various components, features, and enhancements of RL in the context of CR networks. Section 4 presents various RL algorithms in the context of CR networks. Section 5 presents performance enhancements brought about by the RL algorithms in various schemes in CR networks. Section 6 presents open issues. Section 7 presents conclusions.

2. Reinforcement Learning and Cognitive Radio Networks

This section presents an overview of RL and CR networks.

2.1. Reinforcement Learning

Reinforcement learning is an unsupervised and online artificial intelligence technique that improves system performance using simple modeling [5]. Through unsupervised learning, there is no external teacher or critic to oversee the learning process, and so, an agent learns knowledge about the operating environment by itself. Through online learning, an agent learns knowledge on the fly while carrying out its normal operation, rather than using empirical data or experimental results from the laboratory.

Figure 1 shows a simplified version of a RL model. At a particular time instant, a learning agent or a decision maker observes state and reward from its operating environment, learns, decides, and carries out its action. The important representations in the RL model for an agent are as follows.
(i) State represents the decision-making factors, which affect the reward (or network performance), observed by an agent from the operating environment. Examples of states are the channel utilization level by PUs and channel quality.
(ii) Action represents an agent's action, which may change or affect the state (or operating environment) and reward (or network performance), and so the agent learns to take optimal actions most of the time.
(iii) Reward represents the positive or negative effects of an agent's action on its operating environment in the previous time instant. In other words, it is the consequence of the previous action on the operating environment in the form of network performance (e.g., throughput).

At any time instant, an agent observes its state and carries out a proper action so that the state and reward, which are the consequences of the action, improve in the next time instant. Generally speaking, RL estimates the reward of each state-action pair, and this constitutes knowledge. The most important component in Figure 1 is the learning engine that provides knowledge to the agent. We briefly describe how an agent learns. At any time instant, an agent’s action may affect the state and reward for better or for worse or maintain the status quo; and this in turn affects the agent’s next choice of action. As time progresses, the agent learns to carry out a proper action given a particular state. As an example of the application of the RL model in CR networks, the learning mechanism is used to learn channel conditions in a dynamic channel selection scheme. The state represents the channel utilization level by PUs and channel quality. The action represents a channel selection. Based on an application, the reward represents distinctive performance metrics such as throughput and successful data packet transmission rate. Lower channel utilization level by PUs and higher channel quality indicate better communication link, and hence the agent may achieve better throughput performance (reward). Therefore, maximizing reward provides network performance enhancement.

Q-learning [5] is a popular technique in RL, and it has been applied in CR networks. Denote decision epochs by t ∈ T = {1, 2, 3, ...}; the knowledge possessed by agent i for a particular state-action pair at time t is represented by the Q-function as follows:

Q_{t+1}^i(s_t^i, a_t^i) ← (1 − α) Q_t^i(s_t^i, a_t^i) + α [ r_{t+1}^i(s_{t+1}^i) + γ max_{a∈A} Q_t^i(s_{t+1}^i, a) ],   (1)

where
(i) s_t^i ∈ S represents the state,
(ii) a_t^i ∈ A represents the action,
(iii) r_{t+1}^i(s_{t+1}^i) represents the delayed reward, which is received at time t + 1 for an action taken at time t,
(iv) 0 ≤ γ ≤ 1 represents the discount factor. The higher the value of γ, the greater the agent relies on the discounted future reward γ max_{a∈A} Q_t^i(s_{t+1}^i, a) compared to the delayed reward r_{t+1}^i(s_{t+1}^i),
(v) 0 ≤ α ≤ 1 represents the learning rate. The higher the value of α, the greater the agent relies on the delayed reward r_{t+1}^i(s_{t+1}^i) and the discounted future reward γ max_{a∈A} Q_t^i(s_{t+1}^i, a), compared to the Q-value at time t.

At decision epoch t, agent i observes its operating environment to determine its current state s_t^i. Based on the Q-values Q_t^i(s_t^i, a), the agent chooses an action a_t^i. Next, at decision epoch t + 1, the state changes to s_{t+1}^i as a consequence of the action a_t^i, and the agent receives the delayed reward r_{t+1}^i(s_{t+1}^i). Subsequently, the Q-value is updated using (1). Note that, in the remaining decision epochs t + 2, t + 3, ..., the agent is expected to take optimal actions with regard to the states; hence, the Q-value is updated using the maximized discounted future reward γ max_{a∈A} Q_t^i(s_{t+1}^i, a). As this procedure evolves through time, agent i receives a sequence of rewards and the Q-value converges. Q-learning searches for an optimal policy at all time instants through maximizing the value function as shown below:

V_t^i(s_t^i) = max_{a∈A} Q_t^i(s_t^i, a).   (2)

Hence, the policy (or action selection) for agent i is as follows:

π_t^i(s_t^i) = argmax_{a∈A} Q_t^i(s_t^i, a).   (3)

The update of the Q-value in (1) does not cater for the actions that are never chosen. Exploitation chooses the best-known action, or the greedy action, at all time instants for performance enhancement. Exploration chooses the other nonoptimal actions once in a while to improve the estimates of all Q-values in order to discover better actions. While Figure 1 shows a single agent, the presence of multiple agents is feasible. In the context of CR networks, a rigorous proof of the convergence of the Q-values in the presence of multiple SUs has been shown in [6].
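As an illustration, the Q-learning update in (1) combined with a greedy-with-occasional-exploration policy can be sketched in Python as follows; the channel model, reward values, and parameter settings are illustrative assumptions rather than settings taken from any particular scheme surveyed here.

import random

# Illustrative parameters (assumptions): 5 candidate channels, alpha, gamma, epsilon.
NUM_CHANNELS = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.5, 0.1

# Q[state][action]; here the state is simply the current operating channel.
Q = [[0.0] * NUM_CHANNELS for _ in range(NUM_CHANNELS)]

def choose_action(state):
    """Exploit the best-known channel most of the time; explore occasionally."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CHANNELS)                        # exploration
    return max(range(NUM_CHANNELS), key=lambda a: Q[state][a])       # exploitation

def update(state, action, reward, next_state):
    """Q-learning update (1): blend the delayed reward and the discounted future reward."""
    best_next = max(Q[next_state])
    Q[state][action] = ((1 - ALPHA) * Q[state][action]
                        + ALPHA * (reward + GAMMA * best_next))

In a dynamic channel selection setting, the reward could be the throughput observed after transmitting on the chosen channel, so that channels with low channel utilization by PUs and high quality accumulate larger Q-values over time.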

The advantages of RL are as follows:
(i) instead of tackling every single factor that affects the system performance, RL models the system performance itself (e.g., throughput), which implicitly covers a wide range of factors affecting the throughput performance, including the channel utilization level by PUs and channel quality; hence, its modeling approach is simple;
(ii) prior knowledge of the operating environment is not necessary, and so a SU can learn the operating environment (e.g., channel quality) as time goes by.

2.2. Cognitive Radio Networks

Traditionally, spectrum allocation policy has partitioned the radio spectrum into smaller ranges of licensed and unlicensed frequency bands (also called channels). The licensed channels provide exclusive channel access to licensed users or PUs. Unlicensed users or SUs, such as users of the popular IEEE 802.11 wireless communication systems, access unlicensed channels without incurring any monetary cost, and they are forbidden to access any of the licensed channels. Examples of unlicensed channels are the Industrial, Scientific, and Medical (ISM) and Unlicensed National Information Infrastructure (UNII) bands. While the licensed channels have been underutilized, the opposite phenomenon has been observed among the unlicensed channels.

Cognitive radio enables SUs to explore radio spectrum and use white spaces whilst minimizing interference to PUs. The purpose is to improve the availability of bandwidth at each SU, hence improving the overall utilization of radio spectrum. CR helps the SUs to establish a “friendly” environment, in which the PUs and SUs coexist without causing interference with each other as shown in Figure 2. In Figure 2, a SU switches its operating channel across various channels from time to time in order to utilize white spaces in the licensed channels. Note that each SU may observe different white spaces, which are location dependent. The SUs must sense the channels and detect the PUs’ activities whenever they reappear in white spaces. Subsequently, the SUs must vacate and switch their respective operating channel immediately in order to minimize interference to PUs. For a successful communication, a particular white space must be available at both SUs in a communication node pair.

The rest of this subsection is organized as follows. Section 2.2.1 presents the cognition cycle, which is an essential component in CR. Section 2.2.2 presents various application schemes in which RL has been applied to provide performance enhancement.

2.2.1. Cognition Cycle

Cognition cycle [7], which is a well-known concept in CR, is embedded in each SU to achieve context awareness and intelligence in CR networks. Context awareness enables a SU to sense and be aware of its operating environment; while intelligence enables the SU to observe, learn, and use the white spaces opportunistically so that a static predefined policy is not required while providing network performance enhancement.

The cognition cycle can be represented by a RL model as shown in Figure 1. The RL model can be tailored to fit well with a wide range of applications in CR networks. A SU can be modeled as a learning agent. At a particular time instant, the SU agent observes state and reward from its operating environment, learns, decides, and carries out action on the operating environment in order to maximize network performance. Further description on RL-based cognition cycle is presented in Section 2.1.

2.2.2. Application Schemes

Reinforcement learning has been applied in a wide range of schemes in CR networks for SU performance enhancements, whilst minimizing interference to PUs. The schemes are listed as follows, and the nomenclatures (e.g., (A1) and (A2)) are used to represent the respective application schemes throughout the paper.
(A1) Dynamic Channel Selection (DCS). The DCS scheme selects operating channel(s) with white spaces for data transmission whilst minimizing interference to PUs. Yau et al. [8, 9] propose a DCS scheme that enables SUs to learn and select channels with low packet error rate and low level of channel utilization by PUs in order to enhance QoS, particularly throughput and delay performances.
(A2) Channel Sensing. Channel sensing senses for white spaces and detects the presence of PU activities. In [10], the SU reduces the number of sensing channels and may even turn off the channel sensing function if its operating channel has achieved the required successful transmission rate in order to enhance throughput performance. In [11], the SU determines the durations of channel sensing, channel switching, and data transmission, respectively, in order to enhance QoS, particularly throughput, delay, and packet delivery rate performances. Both [10, 11] incorporate DCS (A1) into channel sensing in order to select operating channels. Due to the environmental factors that can deteriorate transmissions (e.g., multipath fading and shadowing), Lo and Akyildiz [3] propose a cooperative channel sensing scheme, which combines sensing outcomes from cooperating one-hop SUs, to improve the accuracy of PU detection.
(A3) Security Enhancement. The security enhancement scheme [12] aims to ameliorate the effects of attacks from malicious SUs. Vucevic et al. [13] propose a security enhancement scheme to minimize the effect of inaccurate sensing outcomes received from neighboring SUs in channel sensing (A2). A SU becomes malicious whenever it sends inaccurate sensing outcomes, intentionally (e.g., Byzantine attacks) or unintentionally (e.g., unreliable devices). Wang et al. [14] propose an antijamming scheme to minimize the effects of jamming attacks from malicious SUs, which constantly transmit packets to keep the channels busy at all times so that SUs are deprived of any opportunities to transmit.
(A4) Energy Efficiency Enhancement. The energy efficiency enhancement scheme aims to minimize energy consumption. Zheng and Li [15] propose an energy-efficient channel sensing scheme to minimize energy consumption in channel sensing. Energy consumption varies with activities, and it increases from sleep, to idle, to channel sensing. The scheme takes into account the PU and SU traffic patterns and determines whether a SU should enter the sleep, idle, or channel sensing mode. Switching between modes should be minimized because each transition between modes incurs time delays.
(A5) Channel Auction. Channel auction provides a bidding platform for SUs to compete for white spaces. Chen and Qiu [16] propose a channel auction scheme that enables the SUs to learn the policy (or action selection) of their respective SU competitors and place bids for white spaces. This helps to allocate white spaces among the SUs efficiently and fairly.
(A6) Medium Access Control (MAC). A MAC protocol aims to minimize packet collision and maximize channel utilization in CR networks. Li et al. [17] propose a collision reduction scheme that reduces the probability of packet collision among PUs and SUs, and it has been shown to increase throughput and to decrease packet loss rate among the SUs. Li et al. [18] propose a retransmission policy that enables a SU to determine how long it should wait before transmission in order to minimize channel contention.
(A7) Routing. Routing enables each SU source or intermediate node to select its next hop for transmission in order to search for the best route(s), which normally incurs the least cost or provides the highest amount of reward, to the SU destination node. Each link within a route has different types and levels of costs, such as queuing delay, available bandwidth or congestion level, packet loss rate, energy consumption level, and link reliability, as well as changes in network topology as a result of irregular node movement speeds and directions.
(A8) Power Control. Yao and Feng [19] propose a power selection scheme that selects an available channel and a power level for data transmission. The purpose is to improve the Signal-to-Noise Ratio (SNR) in order to improve the packet delivery rate.

3. Reinforcement Learning in the Context of Cognitive Radio Networks: Components, Features, and Enhancements

This section presents the components of RL, namely, state, action, reward, discounted reward, and Q-function, as well as the features of RL, namely, exploration and exploitation, updates of the learning rate, rules, and cooperative learning. The components and features of RL (see Section 2.1) are presented in the context of CR. For each component and feature, we show the traditional approach and subsequently the alternative or enhanced approaches with regard to modeling, representing, and applying them in CR networks. This section serves as a foundation for further research in this area, particularly, the application of existing features and enhancements in current schemes in RL models for either existing or new schemes.

Note that, for improved readability, the notations used in this paper represent the same meaning throughout the entire paper, although different references in the literature may use different notations for the same purpose.

3.1. State

Traditionally, each state is comprised of a single type of information. For instance, in [11], each state represents a single channel out of the set of channels available for data transmission. The state may be omitted in some cases. For instance, in [10], the state and action representations are similar, so the state is not represented. The traditional state representation can be enhanced in the context of CR as described next.

Each state can be comprised of several types of information. For instance, Yao and Feng [19] propose a joint DCS (A1) and power allocation (A8) scheme in which each state is comprised of three-tuple information; specifically, s_t = (s_t^1, s_t^2, s_t^3). The substate s_t^1 represents the number of SU agents, s_t^2 represents the number of communicating SU agents, and s_t^3 represents the received power on each channel.

The value of a state may deteriorate as time goes by. For instance, Lundén et al. [20] propose a channel sensing (A2) scheme in which each state represents SU agent i's belief (or probability) that channel k is idle (or the absence of PU activity). Note that the belief value of channel k deteriorates whenever the channel has not been sensed recently, and this indicates the diminishing confidence in the belief that channel k remains idle. Denote a small step size by ε_s (i.e., 0 < ε_s < 1); the state value of channel k is decreased using the step size ε_s at each time instant in which it is not updated (i.e., in which channel k is not sensed).
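As a simple illustration, the deterioration of unsensed channels' belief values can be sketched in Python as follows; the subtractive decay rule, the step size, and the initial belief values are assumptions for illustration, not the exact rule used in [20].

# Decay the belief of every channel that was not sensed at this time instant.
# The subtractive form and the step size are illustrative assumptions.
STEP = 0.05

def decay_beliefs(belief, sensed_channel):
    """Reduce the belief of each unsensed channel, keeping it nonnegative."""
    for ch in belief:
        if ch != sensed_channel:
            belief[ch] = max(0.0, belief[ch] - STEP)
    return belief

beliefs = {0: 0.9, 1: 0.7, 2: 0.8}
beliefs = decay_beliefs(beliefs, sensed_channel=1)   # channels 0 and 2 deteriorate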

3.2. Action

Traditionally, each action a_t represents a single action out of a set of possible actions A. For instance, in [10], each action represents a single channel out of the channels available for data transmission. The traditional action representation can be enhanced in the context of CR as described next.

Each action can be further divided into various levels. As an example, Yao and Feng [19] propose a joint DCS (A1) and power allocation (A8) scheme in which each action is comprised of a channel selection and a power level allocation, with the power level chosen out of a set of available power levels. As another example, Zheng and Li [15] propose an energy efficiency enhancement (A4) scheme in which there are four kinds of actions, namely, transmit, idle, sleep, and sense channel. The sleep action further represents a sleep level chosen out of a set of available sleep levels. Note that different sleep levels incur different amounts of energy consumption.

3.3. Delayed Reward

Traditionally, each delayed reward r_{t+1}(s_{t+1}) represents the amount of performance enhancement achieved by a state-action pair. A single reward computation approach is applicable to all state-action pairs. As an example, in [2], the delayed reward represents the reward and cost values of 1 and −1 for each successful and unsuccessful transmission, respectively. As another example, in [8], the delayed reward represents the amount of throughput achieved within a time window. The traditional reward representation can be enhanced in the context of CR as described next.

The delayed reward can be computed differently for distinctive actions. As an example, in a joint DCS (A1) and channel sensing (A2) scheme, Di Felice et al. [21] compute the delayed rewards in two different ways based on the type of action taken: channel sensing or data transmission. Firstly, a SU agent i calculates a sensing reward at time instant t + 1, which indicates the likelihood of the existence of PU activities in channel k whenever the channel sensing action is taken; it is computed from the binary reports on the existence of PU activities received from the N neighboring SU agents. Secondly, a SU agent i calculates a transmission reward at time instant t + 1, which indicates the successful transmission rate and takes into account the aggregated effect of interference from PU activities whenever the data transmission action is taken; it is computed from the number of data packets sent by SU agent i, the number of acknowledgment packets received by SU agent i, and the number of data packets being transmitted by SU agent i.

Jouini et al. [22] apply an Upper Confidence Bound (UCB) algorithm to compute delayed rewards in a dynamic and uncertain operating environment (e.g., an operating environment with inaccurate sensing outcomes), and it has been shown to improve throughput performance in DCS (A1). The main objective of this algorithm is to determine the upper confidence bounds for all rewards and subsequently use them to make decisions on action selection. The rewards are uncertain, and the uncertainty is caused by the dynamicity and uncertainty of the operating environment. Let T_t(a) represent the number of times an action a has been taken on the operating environment up to time t; an agent calculates the upper confidence bounds of all delayed rewards as follows:

B_t(a) = r̄_t(a) + A_t(a),

where r̄_t(a) is the mean reward of action a and A_t(a) is the upper confidence bias being added to the mean. Note that T_{t+1}(a) = T_t(a) if a is not chosen at time instant t. The bias A_t(a) is calculated as follows:

A_t(a) = √( c · ln t / T_t(a) ),

where the exploration coefficient c is a constant empirical factor (e.g., the values adopted in [22, 23]).

The UCB algorithm selects actions with the highest upper confidence bounds, and so the policy in (3) is rewritten as follows:

π_t = argmax_{a∈A} B_t(a).
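A minimal Python sketch of UCB-style action selection of this kind is given below; the exploration coefficient, the number of actions, and the incremental mean-reward update are illustrative assumptions.

import math

NUM_ACTIONS = 4
C = 2.0                              # exploration coefficient (assumed value)
counts = [0] * NUM_ACTIONS           # T_t(a): times each action has been taken
mean_reward = [0.0] * NUM_ACTIONS    # empirical mean reward of each action

def ucb_select(t):
    """Return the action with the largest upper confidence bound at time t (t >= 1)."""
    for a in range(NUM_ACTIONS):
        if counts[a] == 0:
            return a                                  # try every action once first
    def bound(a):
        bias = math.sqrt(C * math.log(t) / counts[a])
        return mean_reward[a] + bias
    return max(range(NUM_ACTIONS), key=bound)

def ucb_update(action, reward):
    """Incrementally update the empirical mean reward of the chosen action."""
    counts[action] += 1
    mean_reward[action] += (reward - mean_reward[action]) / counts[action]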

3.4. Discounted Reward

Traditionally, the discounted reward has been applied to indicate the dependency of the Q-value on future rewards. Depending on the application, the discounted reward may be omitted by setting γ = 0 to show the lack of dependency on future rewards, and this approach is generally called the myopic approach. As an example, Li [6] and Chen et al. [24] apply Q-learning in DCS (A1), and the Q-function in (1) is rewritten as follows:

Q_{t+1}(s_t, a_t) ← (1 − α) Q_t(s_t, a_t) + α r_{t+1}(s_{t+1}).

3.5. Q-Function

The traditional Q-function (see (1)) has been widely applied to update Q-values in CR networks. The traditional Q-function can be enhanced in the context of CR as described next.

Lundén et al. [20] apply a linear function approximation-based approach to reduce the dimensionality of the large state-action space (or to reduce the number of state-action pairs) in a collaborative channel sensing (A2) scheme. A linear function f(s, a) provides a matching value for a state-action pair. The matching value, which shows the appropriateness of a state-action pair, is subsequently applied in the Q-value computation. The linear function is normally fixed (or hard-coded), and various kinds of linear functions are possible to indicate the appropriateness of a state-action pair based on prior knowledge. For instance, f(s, a) yields a value that represents the level of desirability of a certain number of SU agents sensing a particular channel [20]. A higher value indicates that the number of SU agents sensing a particular channel is closer to a desirable number. Using a fixed linear function f(s, a), the learning problem is transformed into learning a weight parameter θ so that the Q-value is approximated as follows:

Q_t(s_t, a_t) ≈ θ_t f(s_t, a_t).

The parameter θ is updated as follows:

θ_{t+1} = θ_t + α [ r_{t+1}(s_{t+1}) + γ max_{a∈A} Q_t(s_{t+1}, a) − Q_t(s_t, a_t) ] f(s_t, a_t).
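The following Python sketch shows linear Q-function approximation of this kind: the Q-value is a weight multiplied by a hand-crafted feature, and learning updates the weight instead of a full Q-table. The particular feature (how close the number of sensing SUs is to a desired value), the constants, and the update form are illustrative assumptions.

import numpy as np

GAMMA, ALPHA = 0.5, 0.1
theta = np.zeros(1)                    # single weight for a single feature

def feature(num_sensing_sus, desired=3):
    """Higher value when the number of SUs sensing the channel is near the target."""
    return np.array([1.0 / (1.0 + abs(num_sensing_sus - desired))])

def q_value(num_sensing_sus):
    """Approximate Q-value as the weight times the feature."""
    return float(theta @ feature(num_sensing_sus))

def update(num_now, reward, num_next_best):
    """Move theta toward the temporal-difference target."""
    global theta
    td_error = reward + GAMMA * q_value(num_next_best) - q_value(num_now)
    theta = theta + ALPHA * td_error * feature(num_now)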

3.6. Exploration and Exploitation

Traditionally, there are two popular approaches to achieve a balanced trade-off between exploration and exploitation, namely, softmax and ε-greedy [5]. For instance, Yau et al. [8] use the ε-greedy approach in which an agent explores with a small probability ε and exploits with probability 1 − ε. Essentially, these approaches aim to control the frequency of exploration so that the best-known action is taken most of the time. The traditional exploration and exploitation approach can be enhanced in the context of CR as described next.

In [3, 25], using the softmax approach, an agent selects actions based on a Boltzmann distribution; specifically, the probability of selecting action a in state s_t is as follows:

P(a | s_t) = exp(Q_t(s_t, a) / τ) / Σ_{a′∈A} exp(Q_t(s_t, a′) / τ),

where τ is a time-varying parameter called the temperature. A higher temperature value indicates more exploration, while a smaller temperature value indicates more exploitation. Denote the time duration during which exploration actions are being chosen by T_exp; the temperature is decreased from an initial value τ_init toward a final value τ_final over the duration T_exp so that the agent performs more exploitation as time goes by. Note that, due to the dynamicity of the operating environment, exploration is necessary at all times, and so τ_final > 0.

In [21], using the ε-greedy approach, an agent uses a simple approach to decrease the exploration probability as time goes by as follows:

ε_{t+1} = max(δ_ε · ε_t, ε_min),

where δ_ε is a discount factor and ε_min is the minimum exploration probability.
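A short Python sketch of these two exploration-control mechanisms is given below: Boltzmann (softmax) action selection with a cooling temperature, and a geometrically decaying ε with a floor. The schedules and constants are illustrative assumptions.

import math, random

def softmax_select(q_values, temperature):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    weights = [math.exp(q / temperature) for q in q_values]
    r, acc = random.random() * sum(weights), 0.0
    for action, w in enumerate(weights):
        acc += w
        if r <= acc:
            return action
    return len(q_values) - 1

def temperature(t, t_explore=1000, tau_init=10.0, tau_final=0.5):
    """Cool linearly from tau_init to tau_final over the exploration window."""
    frac = min(t / t_explore, 1.0)
    return tau_init - frac * (tau_init - tau_final)

def epsilon(t, eps_init=0.3, decay=0.999, eps_min=0.02):
    """Geometrically decaying exploration probability with a nonzero floor."""
    return max(eps_init * (decay ** t), eps_min)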

3.7. Other Features and Enhancements

This section presents other features and enhancements on the traditional RL approach found in various schemes for CR networks, including updates of learning rate, rules, and cooperative learning.

3.7.1. Updates of Learning Rate

Traditionally, the learning rate α is a constant value [16]. The learning rate may be adjusted as time goes by because a higher value of α may compromise the RL algorithm's ability to converge to a correct action in a finite number of steps [26]. In [27], the learning rate is reduced as time goes by using a small value that provides a smooth transition between steps. In [14], the learning rate is updated in a similar time-varying manner.

3.7.2. Rules

Rules determine a feasible set of actions for each state. The traditional RL algorithm does not apply rules, although they are an important component in CR networks. For instance, in order to minimize interference with PUs, the SUs must comply with the timing requirements set by the PUs, such as the time interval within which a SU must vacate its operating channel after any detection of PU activities.

As an example, Zheng and Li [15] propose an energy efficiency enhancement scheme in which there are four kinds of actions, namely, transmit, idle, sleep, and sense channel. Rules are applied so that the feasible set of actions is comprised of idle and sleep whenever the state indicates that there is no packet in the buffer. As another example, Peng et al. [4] propose a routing scheme, specifically, a next hop selection scheme in which the action represents the selection of a next hop out of a set of SU next hops. Rules are applied so that the feasible set of actions is limited to SU next hops with a certain level of SNR, as well as with shorter distance between next hop and the hop after next. The purposes of the rules are to reduce transmission delays and to ensure high-quality reception. Further description about [4, 15] is found in Table 1.

3.7.3. Cooperative Learning

Cooperative learning enables neighbor agents to share information among themselves in order to expedite the learning process. The exchanged information can be applied in the computation of the Q-function. The traditional RL algorithm does not apply cooperative learning, although it has been investigated in multiagent reinforcement learning (MARL) [28].

Di Felice et al. [11] propose a cooperative learning approach to reduce exploration. A value estimate is exchanged among the SU agents, and it is used in the Q-function computation to update the Q-value. Each SU agent i keeps track of its own Q-value Q_t^i(s_t, a_t), which is updated in a way similar to [6] (see Section 3.4). At any time instant, agent i may receive a value estimate from a neighbor agent j, and it keeps a vector of the estimates received from its neighbors. Whenever such an estimate is received, the Q-value is updated as a weighted combination of the agent's own estimate and the neighbor's estimate, where a weight w_{i,j} defines the weight assigned to cooperation with neighbor agent j. A similar approach has been applied in [25], in which the Q-value is updated as a weighted combination of the agent's own Q-value and the Q-values received from its neighbors, based on the weights.
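A minimal Python sketch of this kind of weighted combination is shown below; the trust weights, their normalization, and the example values are illustrative assumptions rather than the exact rules of [11] or [25].

def cooperative_q(own_q, neighbor_qs, neighbor_weights):
    """Blend the agent's own Q-value with the Q-values reported by its neighbors.

    neighbor_qs and neighbor_weights are dicts keyed by neighbor id; the weights are
    assumed to lie in [0, 1] and, together with the own-value weight, to sum to 1.
    """
    own_weight = 1.0 - sum(neighbor_weights.values())
    blended = own_weight * own_q
    for j, q_j in neighbor_qs.items():
        blended += neighbor_weights.get(j, 0.0) * q_j
    return blended

# Example: two neighbors; the more trusted (e.g., closer) neighbor gets a larger weight.
q_new = cooperative_q(own_q=0.6, neighbor_qs={1: 0.8, 2: 0.4},
                      neighbor_weights={1: 0.2, 2: 0.1})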

In [11], the weight depends on how much a neighbor agent can contribute to the accurate estimation of the value function, such as the physical distance between agents i and j. In [25], the weight depends on the accuracy of the exchanged Q-value (or expert value as described next) and the physical distance between agents i and j.

In [25], an agent exchanges its Q-value with its neighboring agents only if the expert value associated with the Q-value is greater than a particular threshold. The expert value indicates the accuracy of the Q-value. For instance, in [25], the Q-value indicates the availability of white spaces in channel k, and so a greater deviation in the signal strengths reduces the expert value. By reducing the exchanges of Q-values with low accuracy, this approach reduces control overhead, and hence it reduces interference to PUs.

Application of cooperative learning in the CR context has been very limited. More description on cooperative learning is found in Section 4.8. Further research could be pursued to investigate how to improve network performance using this approach in existing and new schemes.

4. Reinforcement Learning in the Context of Cognitive Radio Networks: Models and Algorithms

Direct application of the traditional RL approach (see Section 2.1) has been shown to provide performance enhancement in CR networks. Reddy [29] presents a preliminary investigation in the application of RL to detect PU signals in channel sensing (A2). Table 1 presents a summary of the schemes that apply the traditional RL approach. For each scheme, we present the purpose(s) of the CR scheme, followed by its associated RL model.

Most importantly, this section presents a number of new additions to the RL algorithms, which have been applied to various schemes in CR networks. A summary of the new algorithms, their purposes, and references is shown in Table 2. Each new algorithm has been designed to suit and to achieve the objectives of the respective schemes. For instance, the collaborative model (see Table 2) aims to achieve an optimal global reward in the presence of multiple agents, while the traditional RL approach achieves an optimal local reward in the presence of a single agent only. The following subsections (i.e., Sections 4.1–4.9) provide further details on each new algorithm, including the purpose(s) of the CR scheme(s), followed by its associated RL model (i.e., state, action, and reward representations) which characterizes the purposes, and finally the enhanced algorithm which aims to achieve the purpose. Hence, these subsections serve as a foundation for further research in this area, particularly, the application of existing RL models and algorithms found in current schemes, either to apply them in new schemes or to extend the RL models in existing schemes to further enhance network performance.

4.1. Model 1: Model with γ = 0 in the Q-Function

This is a myopic RL-based approach (see Section 3.4) that uses γ = 0 so that there is no dependency on future rewards, and it has been applied in [10, 17, 18]. Li et al. [10] propose a joint DCS (A1) and channel sensing (A2) scheme, and it has been shown to increase throughput, as well as to decrease the number of sensing channels (see performance metric (P4) in Section 5) and the packet retransmission rate. The purposes of this scheme are to select operating channels with a successful transmission rate greater than a certain threshold into a sensing channel set and subsequently to select a single operating channel for data transmission.

Table 3 shows the RL model for the scheme. The action is to select whether to remain at the current operating channel or to switch to another operating channel with a higher successful transmission rate. A preferred channel set is composed of actions with Q-values greater than a fixed threshold [10]. Since the state and action are similar in this model, the state representation is not shown in Table 3, and we represent the Q-value as a function of the action only. Note that the chosen action remains unchanged if there is no channel switch. The reward represents different kinds of events; specifically, it takes a positive value in the case of successful transmission and a negative value in the case of unsuccessful transmission or when the channel is sensed busy. The RL model is embedded in a centralized entity such as a base station.

Algorithm 1 presents the RL algorithm for the scheme. The action is chosen from a preferred channel set. The update of the Q-value is self-explanatory. A similar approach has been applied in DCS (A1) [30, 31].

Repeat
(a) Choose action a_t from the preferred channel set
(b) Update Q-value:
    Q_{t+1}(a_t) ← (1 − α) Q_t(a_t) + α r_{t+1}(a_t)
(c) Update preferred channel set:
    the set of actions a whose Q-value Q_{t+1}(a) is greater than the fixed threshold

Li et al. [18] propose a MAC protocol, which includes both DCS (A1) and a retransmission policy (A6), to minimize channel contention. The DCS scheme enables the SU agents to minimize their possibilities of operating in the same channel. This scheme uses the RL algorithm in Algorithm 1, and the reward representation is extended to more than a single performance enhancement. Specifically, the reward represents both the successful transmission rate and the transmission delay. A higher reward indicates a higher successful transmission rate and a lower transmission delay, and vice versa. To accommodate both the transmission rate and the transmission delay in the Q-function, the reward representation becomes the combination of a transmission-rate reward and a transmission-delay reward, and the Q-function is updated using this combined reward. The retransmission policy determines the probability that a SU agent transmits at time t. The transmission-delay reward is positive, zero, or negative if the transmission delay at time t is smaller than, equal to, or greater than the average transmission delay, respectively. The transmission-rate reward represents different kinds of events; specifically, it is positive, zero, or negative in the case of successful transmission, idle transmission, and unsuccessful transmission, respectively; note that idle indicates that the channel is sensed busy, and so there is no transmission.

Li et al. [17] propose a MAC protocol (A6) to reduce the probability of packet collision among PUs and SUs, and it has been shown to increase throughput and to decrease packet loss rate. Since both the successful transmission rate and the presence of idle channels are important factors, the scheme keeps track of separate Q-functions for channel sensing and for transmission using the RL algorithm in Algorithm 1. Hence, similar to Algorithm 2 in Section 4.2, there is a set of two Q-functions. The action is to select whether to remain at the current operating channel or to switch to another operating channel. The sensing reward is positive and negative if the channel is sensed idle and busy, respectively. The transmission reward is positive and negative if the transmission is successful and unsuccessful, respectively. Action selection is based on the maximum average of the two Q-values.

Repeat
(a) Choose action a_t using policy π_t
(b) Update the eligibility trace e_t(s_t) and the Q-values for channel sensing and data transmission
(c) Update the value V_t(s_t), which keeps track of the best-known channel
(d) Update policy π_{t+1} using the modified Boltzmann distribution

4.2. Model 2: Model with a Set of Q-Functions

A set of distinctive Q-functions can be applied to keep track of the Q-values of different actions, and it has been applied in [11, 21]. Di Felice et al. [11] propose a joint DCS (A1) and channel sensing (A2) scheme, and it has been shown to increase goodput and packet delivery rate, as well as to decrease end-to-end delay and the interference level to PUs. The purposes of this scheme are threefold:
(i) firstly, it selects an operating channel that has the lowest channel utilization level by PUs;
(ii) secondly, it achieves a balanced trade-off between the time durations for data transmission and channel sensing;
(iii) thirdly, it reduces the exploration probability using a knowledge sharing mechanism.

Table 4 shows the RL model for the scheme. The state represents a channel for data transmission. The actions are to sense channel, to transmit data, or to switch its operating channel. The reward represents the difference between two types of delays, namely, the maximum allowable single-hop transmission delay and a successful single-hop transmission delay. A single-hop transmission delay covers four kinds of delays including backoff, packet transmission, packet retransmission, and propagation delays. Higher reward level indicates shorter delay incurred by a successful single-hop transmission. The RL model is embedded in a centralized entity such as a base station.

Algorithm 2 presents the RL algorithm for the scheme. Denote the learning rate by α and the eligibility trace by e_t(s_t); the SU agent also keeps track of the amount of time during which it is involved in successful transmissions or is idle (i.e., has no packets to transmit), as well as the temporal differences for the channel sensing and data transmission Q-functions. A single type of Q-function is chosen to update the Q-value based on the current action being taken. The temporal difference indicates the difference between the actual outcome and the estimated Q-value.

In step (b), the eligibility trace e_t(s_t) represents the temporal validity of state s_t. Specifically, in [11], the eligibility trace represents the existence of PU activities in channel s_t, and so it is only updated when the channel sensing operation is taken. A higher eligibility trace indicates a greater presence of PU activities, and vice versa. Hence, the eligibility trace contributes positively to the update of the channel sensing Q-value and negatively to the update of the data transmission Q-value in Algorithm 2. Therefore, a higher eligibility trace results in a higher channel sensing Q-value and a lower data transmission Q-value, and this indicates more channel sensing tasks and less data transmission in channels with a greater presence of PU activities. The channel switching action changes the channel from state s_t to state s_{t+1}, and the ε-greedy approach is applied to choose the next channel s_{t+1}. In [21], the eligibility trace e_t(s_t), which represents the temporal validity or freshness of the sensing outcome, is only updated when the channel sensing operation a_sense is taken, as shown in Algorithm 2. The eligibility trace is discounted whenever a_sense is not chosen, as follows:

e_{t+1}(s_t) = 1 if a_t = a_sense, and e_{t+1}(s_t) = λ e_t(s_t) otherwise,   (15)

where λ is a discount factor for the eligibility trace. Equation (15) shows that the eligibility trace of a state is set to the maximum value of 1 whenever the channel sensing action is taken; otherwise, it is decreased by a factor of λ.
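The reset-and-decay rule in (15) can be sketched in Python as follows; the decay factor and the example values are illustrative assumptions.

TRACE_DECAY = 0.9   # lambda in (15); assumed value

def update_traces(traces, sensed_state):
    """Reset the trace of the state just sensed; decay every other trace."""
    for s in traces:
        traces[s] = 1.0 if s == sensed_state else TRACE_DECAY * traces[s]
    return traces

traces = {0: 0.5, 1: 0.2, 2: 0.8}
traces = update_traces(traces, sensed_state=1)   # state 1 -> 1.0, others decay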

In step (c), the value V_t(s_t) keeps track of the channel that provides the best-known lowest estimated average transmission delay. In other words, the channel must provide the maximum amount of reward that can be achieved considering the cost of a channel switch. Hence, V_t(s_t) keeps track of a channel that provides the best-known state value the SU agent can receive, compared to the average state value, by switching its current operating channel to that channel. Note that the state value is exchanged among the SU agents to reduce exploration through cooperative learning (see Section 3.7.3).

In step (d), the policy is applied at the next time instant. The policy provides probability distributions over the three possible types of actions using a modified Boltzmann distribution (see Section 3.6). Next, the policy is applied to select the next action in step (a).

4.3. Model 3: Dual Q-Function Model

The dual Q-function model has been applied to expedite the learning process [32]. The traditional Q-function (see (1)) updates a single Q-value at a time, whereas the dual Q-function updates two Q-values simultaneously. For instance, in [33], the traditional Q-function updates the Q-value for the next state only (e.g., toward the SU destination node), whereas the dual Q-function updates the Q-values for the next and previous states (e.g., toward the SU destination and source nodes, respectively). The dual Q-function model updates a SU agent's Q-values in both directions (i.e., towards the source and destination nodes) and speeds up the learning process in order to make more accurate decisions on action selection; however, this comes at the expense of higher network overhead incurred by more Q-value exchanges among the SU neighbor nodes.

Xia et al. [33] propose a routing (A7) scheme, and it has been shown to reduce SU end-to-end delay. Generally speaking, the availability of channels in CR networks is dynamic, and it is dependent on the channel utilization level by PUs. The purpose of this scheme is to enable a SU node to select a next-hop SU node with higher number of available channels. The higher number of available channels reduces the time incurred in seeking for an available common channel for data transmission among a SU node pair, and hence it reduces the MAC layer delay.

Table 5 shows the RL model for the scheme. The state represents a SU destination node. The action represents the selection of a next-hop SU neighbor node. The reward represents the number of available common channels between the SU node and the selected next-hop SU neighbor node. The RL model is embedded in each SU agent.

This scheme applies the traditional Q-function (see (1)). Hence, the Q-function at SU node i is rewritten for the routing context as follows:

Q_{t+1}^i(d, j) ← (1 − α) Q_t^i(d, j) + α [ r_{t+1}^i(j) + γ max_{k∈N(j)} Q_t^j(d, k) ],

where d is the SU destination node, j is the selected next-hop SU neighbor node, and k is an upstream node of SU neighbor node j, so node j must estimate and send information on max_{k∈N(j)} Q_t^j(d, k) to SU node i.

The dual Q-function model in this scheme is applied to update the Q-values for the SU source and destination nodes. While the traditional Q-function enables a SU intermediate node to update the Q-value for the SU destination node only (or next state), which is called forward exploration, the dual Q-function model enables an intermediate SU node to achieve backward exploration as well by updating the Q-value for the SU source node (or previous state). Forward exploration is achieved by updating the Q-value at SU node i for the SU destination node whenever it receives an estimate from SU node j, while backward exploration is achieved by updating the Q-value at SU node i for the SU source node whenever it receives a data packet from node j. Note that, in the backward exploration case, node j's packets are piggybacked with its Q-value so that node i is able to update the Q-value for the respective SU source node. Although the dual Q-function approach increases the network overhead, it expedites the learning process since the SU nodes along a route update the Q-values of the route in both directions.
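A simplified Python sketch of forward and backward updates of this kind is shown below; the per-hop reward, the learning rate, and the exact update rule are illustrative assumptions rather than the precise scheme of [33].

ALPHA = 0.2

# q[(end_node, neighbor)] -> estimated route quality toward end_node via that neighbor
q = {}

def forward_update(dest, next_hop, reward, neighbor_best_q):
    """Forward exploration: refine the estimate toward the destination using the
    estimate advertised by the next-hop neighbor."""
    key = (dest, next_hop)
    q[key] = (1 - ALPHA) * q.get(key, 0.0) + ALPHA * (reward + neighbor_best_q)

def backward_update(src, prev_hop, reward, piggybacked_q):
    """Backward exploration: refine the estimate toward the source using the Q-value
    piggybacked on the received data packet."""
    key = (src, prev_hop)
    q[key] = (1 - ALPHA) * q.get(key, 0.0) + ALPHA * (reward + piggybacked_q)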

4.4. Model 4: Partial Observable Model

The partial observable model has been applied in a dynamic and uncertain operating environment. The uniqueness of the partial observable model is that the SU agents are uncertain about their respective states, and so each of them computes a belief b(s), which is the probability that the environment is operating in state s.

Bkassiny et al. [34] propose a joint DCS (A1) and channel sensing (A2) scheme, and it has been shown to improve the overall spectrum utilization. The purpose of this scheme is to enable the SU agents to select their respective operating channels for sensing and data transmission in which the collisions among the SUs and PUs must be minimized.

Table 6 shows the RL model for the scheme. The state represents the availability of a set of channels for data transmission. The action represents a single channel chosen out of the available channels for data transmission. The reward represents fixed positive (negative) values to be rewarded (punished) for successful (unsuccessful) transmissions. The RL model is embedded in each SU agent so that it can make decisions in a distributed manner.

Algorithm 3 presents the RL algorithm for the scheme. The action is chosen from a preferred channel set. The chosen action has the maximum belief-weighted Q-value, which is calculated using the belief vector b_t as a weighting factor. The belief vector b_t gives the probability of each possible set of channels being idle at time t. Upon receiving the reward r_{t+1}, the SU agent updates the entire set of belief vectors using Bayes' formula [34]. Next, the SU agent updates the Q-value. Note that the beliefs over all possible states sum to one.

Repeat
(a) Choose action with the maximum belief-weighted Q-value:
    a_t = argmax_{a∈A} Σ_{s∈S} b_t(s) Q_t(s, a)
(b) Receive delayed reward r_{t+1}
(c) Update belief vector b_{t+1} using Bayes' formula
(d) Update Q-value Q_{t+1}(s_t, a_t)
It shall be noted that Bkassiny et al. [34] apply the belief vector as a weighting vector in the computation of the Q-value, while most of the other approaches, such as [20], use the belief vector as the actual state, specifically, s_t = b_t. This approach has been shown to achieve a near-optimal solution with very low complexity in [35].
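The belief-weighted action selection in step (a) of Algorithm 3 can be sketched in Python as follows; the beliefs, Q-values, and channel labels are illustrative assumptions.

def best_action(belief, q_table, actions):
    """belief[s] is the probability of state s; q_table[(s, a)] is the Q-value."""
    def weighted_q(a):
        return sum(belief[s] * q_table.get((s, a), 0.0) for s in belief)
    return max(actions, key=weighted_q)

belief = {"ch0_idle": 0.7, "ch1_idle": 0.3}
q_table = {("ch0_idle", 0): 1.0, ("ch1_idle", 0): -0.5,
           ("ch0_idle", 1): -0.2, ("ch1_idle", 1): 0.8}
action = best_action(belief, q_table, actions=[0, 1])   # channel 0 wins here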

4.5. Model 5: Actor-Critic Model

Traditionally, the delayed reward has been applied directly to update the Q-value. The actor-critic model adjusts the delayed reward value using reward corrections, and this approach has been shown to expedite the learning process. In this model, an actor selects actions using suitability values, while a critic keeps track of the temporal difference, which takes into account reward corrections in delayed rewards.

Vucevic et al. [13] propose a collaborative channel sensing (A2) scheme, and it has been shown to minimize the detection error probability in the presence of inaccurate sensing outcomes. The purpose of this scheme is to select neighboring SU agents that provide accurate channel sensing outcomes for security enhancement purposes (A3). Table 7 shows the RL model for the scheme. The state is not represented. An action represents a neighboring SU chosen by SU agent i for channel sensing purposes. The reward represents fixed positive (negative) values to be rewarded (punished) for correct (incorrect) sensing outcomes compared to the final decision, which is the fusion of the sensing outcomes. The RL model is embedded in each SU agent.

The critic keeps track of a suitability value for each action, which is adjusted by the temporal difference scaled by a constant. In [13], the temporal difference depends on the difference between the delayed reward and the long-term delayed reward, the number of incorrect sensing outcomes, and the suitability value. Next, the actor selects actions using the suitability values given by the critic, and the probability of selecting an action is based on the suitability value of that action.
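A minimal Python sketch of an actor-critic step of this general form is shown below: the critic computes a temporal-difference (reward-correction) term and adjusts the suitability value of the chosen action, and the actor selects actions with probability proportional to an exponential of the suitability values. The step sizes, the running-average reward estimate, and the softmax actor are illustrative assumptions rather than the exact rules of [13].

import math, random

BETA = 0.1                          # critic step size (assumed)
suitability = [0.0, 0.0, 0.0]       # one value per candidate neighbor (action)
avg_reward = 0.0                    # running estimate of the long-term reward

def critic_update(action, reward):
    """Temporal difference = delayed reward minus the long-term reward estimate;
    nudge the suitability of the chosen action and slowly track the long-term reward."""
    global avg_reward
    td = reward - avg_reward
    suitability[action] += BETA * td
    avg_reward += 0.05 * td
    return td

def actor_select():
    """Choose a neighbor with probability proportional to exp(suitability)."""
    weights = [math.exp(v) for v in suitability]
    r, acc = random.random() * sum(weights), 0.0
    for a, w in enumerate(weights):
        acc += w
        if r <= acc:
            return a
    return len(weights) - 1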

4.6. Model 6: Auction Model

The auction model has been applied in centralized CR networks. In the auction model, a centralized entity, such as a base station, conducts auctions and allows SU hosts to place bids so that the winning SU hosts receive rewards. The centralized entity may perform simple tasks, such as allocating white spaces to SU hosts with winning bids [16], or it may learn using RL to maximize its utility [36]. The RL model may be embedded in each SU host in a centralized network [16, 36–38], or in the centralized entity only [36].

Chen and Qiu [16] propose a channel auction scheme (A5), and it has been shown to allocate white spaces among SU hosts (or agents) efficiently and fairly. The purpose of this scheme is to enable the SU agents to select the amount of bids during an auction, which is conducted by centralized entity, for white spaces. The SU agents place the right amount of bids in order to secure white spaces for data transmission, while saving their credits, respectively. The RL model is embedded in each SU host.

Table 8 shows the RL model for the scheme. The state indicates a SU agent's information, specifically, the amount of data for transmission in its buffer and the amount of credits (or "wealth") it owns. The action is the amount of a bid for white spaces. The reward indicates the amount of data sent. This scheme applies the traditional Q-learning approach (see (1)) to update the Q-values.

Jayaweera et al. [36] propose another channel auction scheme (A5) that allocates white spaces among SUs, and it has been shown to increase the transmission rates of the SUs and to reduce the energy consumption of the PUs. In [36], the PUs adjust the amount of white spaces and allocate them to the SUs with winning bids. The winning SUs transmit their packets, as well as relaying the PUs' packets using the white spaces, so that the PUs can reduce their energy consumption. In other words, the SUs use their power as currency to buy the bandwidth. Two different kinds of RL models are embedded in the PUs and SUs, respectively, so that the PUs can learn to adjust the amount of white spaces to be allocated to the SUs, and the SUs can learn to select the amount of bids during an auction for white spaces.

The state is not represented, and we show the action and reward representations of the scheme. Table 9 shows the reward representation of the RL model. The reward indicates a constant positive value in the case of a successful bid and a constant negative value in the case of an unsuccessful bid. The reward representation is embedded in both PUs and SUs. The actions for the PUs and SUs are different. Each SU selects the amount of its bid during an auction for white spaces in a channel, while each PU adjusts the amount of white spaces to be offered for auction in its own channel. A higher amount of white spaces encourages the SUs to participate in auctions.

This scheme applies the Q-function with γ = 0 (see Section 4.1) at both PUs and SUs. The SUs' Q-function indicates the appropriate amount of bids for white spaces, while the PUs' Q-function indicates the appropriate amount of white spaces to be offered for auction.

Fu and Van der Schaar [37] propose a channel auction scheme (A5) that improves the bidding policy of SUs, and it has been shown to reduce SUs’ packet loss rate. The purpose of this scheme is to enable SU agents to learn and adapt the amount of bids during an auction for time-varying white spaces in dynamic wireless networks with environmental disturbance and SU-SU disturbance. Examples of environmental disturbance are dynamic level of channel utilization by PUs, channel condition (i.e., SNR), and SU traffic rate, while an example of SU-SU disturbance is the effect from other competing SUs, who are noncollaborative and autonomous in nature. Compared to traditional centralized auction schemes, SUs compute their bids based on their knowledge and observation of the operating environment with limited information received from other SUs and the centralized base station. Note that the joint bidding actions of SUs affect the allocation of white spaces and bidding policies of the other SUs, and so the proposed learning algorithm improves the bidding policy of SUs based on the observed white space allocations and rewards.

Table 10 shows the RL model for the scheme. The state indicates SU agent’s information, specifically, its buffer state, as well as the states of the available channels in terms of SNR. The action is the amount of bids for white spaces. The reward represents the sum of the number of lost packets and the channel cost that SU must pay for using the channel. Note that the channel cost represents network congestion, and hence higher cost indicates higher congestion level. The RL model is embedded in each SU host.

Algorithm 4 presents the RL algorithm for the scheme. In step (a), SU agent i observes its current state and the available channels (or white spaces) advertised by the centralized base station. In step (b), it decides and submits its bid to the base station, and the bid is estimated based on SU i's state and the other SUs' representative (or estimated) state. Note that, since SU i would need to know all the states and transition probabilities of the other SUs, which may not be feasible, it estimates the representative state based on its previous knowledge of channel allocation and channel cost (or network congestion). In step (c), SU i receives its channel allocation decision and the required channel cost from the base station. In step (d), the representative state and transition probabilities of the other SUs are updated based on the newly received channel allocation decision and the required channel cost information. In step (e), SU i computes its estimated Q-value, which is inspired by the traditional Q-function approach, and this approach explicitly takes into account the effects of the bidding actions of the other SUs based on their estimated representative state and transition probabilities. Note that the representative state also underlies a Markov-based policy profile that represents the bidding policies of all the other SUs. In step (f), the Q-table is updated if there are changes in the SU states and channel availability.

Repeat
(a) Observe the current state and available channels
(b) Choose an action (bid) and submit it to the base station
(c) Receive the channel allocation decision and the required channel cost
(d) Estimate the representative state and update the state transition probabilities of the other SUs
(e) Compute the estimated Q-value based on the representative state and transition probabilities of the other SUs
(f) Update the Q-table using the learning rate

Xiao et al. [38] propose a power control scheme (A8), and it has been shown to increase the transmission rates and payoffs of SUs. There are two main differences compared to the traditional auction schemes, which have been applied to centralized networks. Firstly, the interactions among all nodes, including PUs and SUs, are coordinated in a distributed manner. A SU source node transmits its packets to the SU destination node using either single-hop transmission or multihop relaying. In multihop relaying, a SU source node must pay the upstream node, which helps to relay the packets. Secondly, the PUs treat each SU equally, and so there is a lack of competitiveness in auctions. Each SU may accumulate credits through relaying. Game theory is applied to model the network in which SUs pay credits to PUs for using licensed channels and to other SUs for relaying their packets. The purpose of this scheme is to enable a SU node to choose efficient actions in order to improve its payoff, as well as to collect credits through relaying, and to minimize the credits paid to PUs and other SU relays. A RL model is embedded in each SU.

The state is not represented, and we show the action and reward representations of the scheme. Table 11 shows the RL model for the scheme. The action represents the transmission of SU i's packets using either single-hop transmission or multihop relaying. The reward r_i indicates the revenue (or profit) received by SU node i for providing relaying services to other SUs, and so a higher reward indicates a higher transmission rate and increased transmission power of SU node i. Denote the payoff of SU i by u_i, as shown in (17). The payoff indicates the difference between SU i's revenue and its costs. There are two types of costs, represented by c_i^relay and c_i^PU. The c_i^relay represents the cost charged by the upstream SU node for relaying SU node i's packets, and the c_i^PU represents the cost charged by all PUs for using the white spaces in the licensed channels. The c_i^PU increases with SU i's interference power in the respective channel. The payoff is as follows:

u_i = r_i − c_i^relay − c_i^PU.   (17)

This scheme applies a Q-function that indicates the average payoff: the Q-value of an action is updated toward the observed payoff using a constant step size, weighted by the probability of SU i choosing that action, which is computed according to the Boltzmann distribution (see Section 3.6).
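The following short Python sketch illustrates an average-payoff update of this general form; the step size, the weighting by the action probability, and the example values are illustrative assumptions rather than the exact rule of [38].

STEP = 0.1   # constant step size (assumed)

def update_average_payoff(q, action, payoff, prob_action):
    """Move the Q-value (average payoff) of the chosen action toward the observed payoff,
    weighted by the probability with which the action is chosen."""
    q[action] += STEP * prob_action * (payoff - q[action])
    return q

q = {"single_hop": 0.0, "multi_hop": 0.0}
q = update_average_payoff(q, "multi_hop", payoff=1.5, prob_action=0.6)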

4.7. Model 7: Internal Self-Learning Model

The internal self-learning model has been applied to expedite the learning process. The uniqueness of the internal self-learning model lies in the learning approach, in which the learning mechanism continuously interacts with a simulated internal environment within the SU agent itself. The learning mechanism continuously exchanges its actions with rewards generated by the simulated internal environment so that the SU agent learns the optimal actions for various settings of the operating environment, and this helps the Q-values and the optimal action to converge.

Bernardo et al. [27] propose a DCS (A1) scheme, and it has been shown to improve the overall spectrum utilization and throughput performances. Note that, unlike the previous schemes in which the RL models are embedded in the SU agents, the RL model is embedded in each PU base station (or agent) in this scheme, and it is applied to make medium-term decisions (i.e., from tens of seconds to tens of minutes). The purpose of this scheme is to enable a PU agent to select its operating channels for transmission in its own cell. In order to improve the overall spectrum utilization, the PU agent preserves its own QoS while generating white spaces and sells them off to SU agents.

Table 12 shows the RL model for the scheme. The action is a set of chosen available channels for the entire cell. The reward has a zero value if the estimated throughput of an action selection is less than a throughput threshold; otherwise, the reward is a weighted sum of the spectrum efficiency and the amount of white spaces, which may be sold off to SU agents, where both weights are constant factors.

Figure 3 shows the internal self-learning model. The learning mechanism, namely, RL-DCS, continuously interacts with a simulated internal environment, namely, the Environment Characterization Entity (ECE). Based on the information observed from the real operating environment (i.e., the number of PU hosts and the average throughput per PU host), which is provided by the status observer, the ECE implements a model of the real operating environment (i.e., the spectrum efficiency and the amount of white spaces) and computes the reward. Hence, the ECE evaluates the suitability of an action in its simulated internal model of the operating environment. By exchanging actions and rewards between RL-DCS and the ECE, RL-DCS learns an optimal action at a faster rate compared to the conventional learning approach, and this process stops when the optimal action converges.

Algorithm 5 presents the RL algorithm for the scheme. Each action is chosen using a Bernoulli random variable [27]. The PU agent receives the reward computed by the ECE and computes the average reward for each subaction using an exponential moving average [27]. The current overall unused spectrum is the ratio of the unused bandwidth to the total bandwidth of a cell. Upon receiving the reward, the PU agent updates the Q-value for each subaction and, finally, updates the probability of taking each action, subject to an exploration probability. A minimal code sketch of this loop is given after the pseudocode below.

Repeat
(a) Choose action
(b) Receive delayed reward from ECE
(c) Update Q-value for each subaction
(d) Update the probability of taking each action
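A minimal Python sketch of this internal self-learning loop follows. The ECE's internal model and all parameter names (alpha, epsilon, the 0.5 weights) are assumptions for illustration only and are not taken from [27].

import random

class ECE:
    """Hypothetical Environment Characterization Entity: simulates the
    cell from observed statistics (number of PU hosts, average
    throughput per host) and evaluates an action internally."""

    def __init__(self, num_pu_hosts, avg_throughput_per_host, total_channels):
        self.num_pu_hosts = num_pu_hosts
        self.avg_throughput = avg_throughput_per_host
        self.total_channels = total_channels

    def compute_reward(self, channel_set):
        # Crude internal model: cell throughput grows with the number of
        # selected channels; unselected channels become white spaces.
        est_throughput = self.avg_throughput * max(len(channel_set), 1)
        if est_throughput < self.num_pu_hosts * self.avg_throughput:
            return 0.0
        efficiency = est_throughput / max(len(channel_set), 1)
        white_spaces = self.total_channels - len(channel_set)
        return 0.5 * efficiency + 0.5 * white_spaces


def rl_dcs(ece, total_channels, episodes=1000, alpha=0.1, epsilon=0.1):
    """Minimal sketch of Algorithm 5: each channel is a subaction chosen
    via a Bernoulli trial, Q-values are exponential moving averages of
    the ECE reward, and selection probabilities track the Q-values while
    keeping an exploration floor of epsilon."""
    q = [0.0] * total_channels        # average reward per subaction (channel)
    p = [0.5] * total_channels        # probability of selecting each channel
    for _ in range(episodes):
        # (a) Choose action: include channel c with probability p[c]
        action = [c for c in range(total_channels) if random.random() < p[c]]
        # (b) Receive the delayed reward from the simulated internal environment
        reward = ece.compute_reward(action)
        # (c) Update the Q-value of every subaction in the chosen action
        for c in action:
            q[c] = (1 - alpha) * q[c] + alpha * reward
        # (d) Update selection probabilities, bounded away from 0 and 1
        q_max = max(q) or 1.0
        for c in range(total_channels):
            p[c] = min(1.0 - epsilon, max(epsilon, q[c] / q_max))
    return q, p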

4.8. Model 8: Collaborative Model

The collaborative model enables a SU agent to collaborate with its SU neighbor agents and subsequently make local decisions independently in distributed CR networks. It enables the agents to learn and achieve an optimal joint action. A joint action is defined as the set of actions taken by all the agents throughout the entire network, and an optimal joint action is the joint action that provides the best network-wide performance. Hence, the collaborative model reduces the selfishness of each agent by taking other agents' actions or strategies into account. The collaboration may take the form of exchanging local information, including knowledge (Q-values), observations, and decisions, among the SU agents.

Lundén et al. [20] propose a collaborative channel sensing (A2) scheme, and it has been shown to maximize the amount of white spaces found. The purposes of this scheme are twofold: (i) firstly, it selects channels with more white spaces for channel sensing purposes; (ii) secondly, it selects channels so that the SU agents diversify their sensing channels, that is, the SU agents perform channel sensing in various channels.

Table 13 shows the RL model for the scheme. The state represents the belief about the availability of a set of channels for data transmission. An action, which is part of the joint action comprising all the actions taken by a SU agent and its SU neighbor agents, represents a single channel chosen by the SU agent for channel sensing purposes. The reward represents the number of channels identified as idle (or free) by the SU agent at a given time. The RL model is embedded in each SU agent.

Algorithm 6 presents the RL algorithm for the scheme, and it is comprised of two rounds of collaboration message exchanges. After taking an action, a SU agent exchanges collaboration messages with its SU neighbor agents. Each collaboration message is comprised of two-tuple information, namely, the SU agent's action and its sensing outcomes. The SU agent determines its delayed reward based on the messages received in this first round. Next, the SU agent exchanges a second round of collaboration messages with its SU neighbor agents, and upon receiving its neighbors' messages it chooses its action for the next time instance. Note that the SU agent transmission order affects the action selection. This is because a SU agent may receive and use information obtained from its preceding agents, and so it can make decisions using more updated information in the second round. Since one of the main purposes is to enable the SU agents to diversify their sensing channels, the SU agents choose actions from a preferred channel set, which is comprised of sensing channels that are yet to be chosen by the preceding SU agents. The SU agent chooses the channel with the maximum Q-value from the preferred channel set. Finally, the SU agent updates its Q-value and the parameter in (9) (see Section 3.5). A minimal code sketch of this selection and update follows the pseudocode below.

Repeat
(a) Take action
(b) Exchange collaboration messages with SU neighbor agents // First round of collaboration
(c) Determine delayed reward
(d) Exchange collaboration messages with SU neighbor agents // Second round of collaboration
(e) Choose action
(f) Update the parameter in (9)
(g) Update Q-value
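The action selection and Q-value update of this scheme can be pictured with the following minimal Python sketch; the dictionary-based Q-table, the epsilon parameter, and the update form are assumptions for illustration, not the exact formulas of [20].

import random

def choose_sensing_channel(q_values, preceding_choices, epsilon=0.1):
    """Choose a sensing channel from the preferred channel set, i.e.,
    channels not yet chosen by preceding SU agents, taking the channel
    with the maximum Q-value and exploring with probability epsilon.
    q_values: dict mapping channel id -> Q-value (assumed structure)."""
    preferred = [c for c in q_values if c not in preceding_choices]
    if not preferred:                 # all channels already taken: fall back
        preferred = list(q_values)
    if random.random() < epsilon:     # exploration
        return random.choice(preferred)
    return max(preferred, key=lambda c: q_values[c])   # exploitation


def update_q(q_values, channel, reward, alpha=0.1):
    """Step (g): simple running-average style Q-value update for the
    sensed channel, where the reward is the number of idle channels
    found (assumed update form)."""
    q_values[channel] = (1 - alpha) * q_values[channel] + alpha * reward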

Liu et al. [39] propose a collaborative DCS (A1) scheme that applies the collaborative model, and it has been shown to achieve near-optimal throughput performance. The purpose of this scheme is to enable each SU link to maximize its individual delayed rewards, specifically, the SNR level. Note that this collaboration approach assumes that an agent has full observation of the actions and policies adopted by all the other SU links at any time instance. Hence, (1) is rewritten as (19), in which an agent's action is replaced by the joint action, that is, the agent's own action together with the actions taken by all the other SU agents throughout the entire CR network. Therefore, (19) is similar to the traditional RL approach except that an action becomes a joint action (or a set of actions). To take into account the actions taken by the other agents, each agent updates an average Q-value, which is the average of its Q-values in a given state when it takes its own action while the other agents take their respective joint action; the update is normalized by the number of agents.

Next, the average Q-value is applied in action selection using the Boltzmann equation (see Section 3.6), as sketched below. Further research can be pursued to reduce communication overheads and to enable indirect coordination among the agents.
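For completeness, Boltzmann (softmax) action selection over the averaged Q-values could look like the following sketch, where the temperature parameter and the dictionary layout are assumptions for illustration.

import math
import random

def boltzmann_select(avg_q, temperature=0.5):
    """Boltzmann action selection (Section 3.6) over average Q-values.
    avg_q: dict mapping action -> averaged Q-value (assumed structure)."""
    q_max = max(avg_q.values())       # subtract max for numerical stability
    weights = {a: math.exp((q - q_max) / temperature)
               for a, q in avg_q.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for action, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return action
    return action                     # floating-point fallback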

4.9. Model 9: Competitive Model

The competitive model enables a SU agent to compete with its SU neighbor agents and subsequently make local decisions independently in CR networks. It enables an agent to take optimal actions in worst-case scenarios in the presence of competitor agents, which attempt to minimize the agent's accumulated rewards. Note that a competitor agent may also possess the capability to observe, learn, and carry out optimal actions in order to degrade the agent's accumulated rewards.

Wang et al. [14] propose an antijamming (A3) scheme called channel hopping, and it applies minimax-Q learning to implement the competitive model. This approach has been shown to maximize the accumulated rewards (e.g., throughput) in the presence of jamming attacks. Equipped with a limited number of transceivers, the malicious SUs aim to minimize the accumulated rewards of the SU agents through constant packet transmission in a number of channels in order to prevent spectrum utilization by the SU agents. The purposes of the channel hopping scheme are twofold: (i) firstly, it introduces randomness in channel selection so that the malicious SUs do not jam the selected channels for data transmission; (ii) secondly, it selects a proper number of control and data channels in a single frequency band for control and data packet transmissions. Note that each frequency band consists of a number of channels. Due to the criticality of the control channel, duplicate control packets may be transmitted in multiple channels to minimize the effects of jamming, and so a proper number of control channels is necessary.

Note that, as competitors, the malicious SUs aim to minimize the accumulated rewards of the SU agents. Table 14 shows the RL model for the scheme. Each state is comprised of four-tuple information with respect to a frequency band: a substate representing the presence of PU activities, a substate representing the gain, and two substates representing the numbers of control and data channels that are jammed, respectively. An action represents channel selections within a single frequency band for control and data packet transmission purposes, and the selected channels may or may not have been jammed in the previous time slot. The reward represents the gain (e.g., throughput) of using channels that are not jammed. Note that the reward is dependent on the malicious SU's (or competitor's) action. The RL model is embedded in each SU agent.

Algorithm 7 presents the RL algorithm for the scheme. In step (b), the Q-function depends on the competitor's action, which is the set of channels chosen by the malicious SUs for jamming purposes. In step (c), the agent determines its optimal policy, in which the competitor is assumed to take the action that minimizes the Q-value, hence the minimization over the competitor's actions; nevertheless, in this worst-case scenario, the agent chooses its optimal action, hence the maximization over its own strategy. In step (d), the agent updates its value function, which is applied to update the Q-value in step (b) at the next time instant. Using the optimal policy obtained in step (c), the agent calculates its value function, which is an approximation of the discounted future reward; again, the competitor is assumed to take the action that minimizes the agent's Q-value, hence the minimization in this step as well. A minimal minimax-Q sketch follows the pseudocode below.

Repeat
(a) Choose action
(b) Update Q-value
(c) Update optimal strategy
(d) Update value function
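The following Python sketch illustrates the flavor of minimax-Q learning used in this scheme. For simplicity it uses a pure (deterministic) maximin policy; the full minimax-Q algorithm instead solves a small linear program to obtain a mixed strategy in steps (c) and (d). The class and parameter names are assumptions for illustration.

from collections import defaultdict

class MinimaxQAgent:
    """Minimal sketch of minimax-Q learning for the antijamming scheme.
    The Q-table is indexed by (state, agent action, competitor action)."""

    def __init__(self, actions, opponent_actions, alpha=0.1, gamma=0.9):
        self.actions = actions               # agent's channel selections
        self.opp_actions = opponent_actions  # jammer's channel selections
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)          # (s, a, o) -> Q-value
        self.v = defaultdict(float)          # s -> value function

    def choose_action(self, state):
        # (c) Maximin choice: best action under the worst-case jammer action.
        return max(self.actions,
                   key=lambda a: min(self.q[(state, a, o)]
                                     for o in self.opp_actions))

    def update(self, state, action, opp_action, reward, next_state):
        # (b) Q-value update using the value of the next state.
        target = reward + self.gamma * self.v[next_state]
        key = (state, action, opp_action)
        self.q[key] = (1 - self.alpha) * self.q[key] + self.alpha * target
        # (d) Value function update under the worst-case jammer action.
        self.v[state] = max(min(self.q[(state, a, o)]
                                for o in self.opp_actions)
                            for a in self.actions)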

5. Performance Enhancements

Table 15 shows the performance enhancements brought about by the application of the traditional and enhanced RL algorithms in various schemes in CR networks. The RL approach has been shown to achieve the following performance enhancements.
(P1) Higher Throughput/Goodput. Higher throughput (or goodput) indicates a higher packet delivery rate, a higher successful packet transmission rate, and a lower packet loss rate.
(P2) Lower End-to-End Delay/Link Delay. Lower end-to-end delay, which is the summation of link delays along a route, indicates a shorter time duration for packets to traverse from a source node to its destination node.
(P3) Lower Level of Interference to PUs. A lower level of interference to PUs indicates a lower number of collisions with PU activities.
(P4) Lower Number of Sensing Channels. A lower number of sensing channels indicates lower sensing overheads (i.e., delays and energy consumption).
(P5) Higher Overall Spectrum Utilization. In order to increase the overall spectrum utilization, Chen et al. [24] increase channel access time, while Jiang et al. [30, 31] reduce the blocking and dropping probabilities of calls, respectively.
(P6) Lower Number of Channel Switches. Chen et al. [24] reduce the number of channel switches in order to reduce channel switching time.
(P7) Lower Energy Consumption. Lower energy consumption indicates a longer network lifetime and a higher number of surviving nodes.
(P8) Lower Probability of False Alarm. Lo and Akyildiz [3] reduce false alarms, which occur when a PU is mistakenly considered present in an available channel, in channel sensing (A2).
(P9) Higher Probability of PU Detection. Lo and Akyildiz [3] increase the probability of PU detection in order to reduce miss detection in channel sensing (A2). Miss detection occurs whenever a PU is mistakenly considered absent in a channel with PU activities.
(P10) Higher Number of Channels Being Sensed Idle. Lundén et al. [20] increase the number of channels sensed as idle, which contain more white spaces.
(P11) Higher Accumulated Rewards. Wang et al. [14] increase the accumulated rewards, which represent gains such as throughput performance. Xiao et al. [38] improve the SU's total payoff, which is the difference between the gained rewards (or revenue) and the total cost incurred.

6. Open Issues

This section discusses open issues that can be pursued in this research area.

6.1. Enhanced Exploration Approaches

While a larger value of the exploration probability may be necessary if the dynamicity of the operating environment is high, the opposite holds whenever the operating environment is rather stable. Generally speaking, exploration helps to increase the convergence rate of an RL scheme. Nevertheless, a higher exploration rate may cause fluctuations in performance (e.g., end-to-end delay and packet loss) due to the selection of nonoptimal actions. For instance, in a dynamic channel selection scheme (A1), the performance may fluctuate due to the frequent exploration of nonoptimal channels. Similarly, in a routing scheme (A7), the performance may fluctuate due to the frequent exploration of nonoptimal routes. Further research could be pursued to investigate the possibility of achieving exploration without compromising the application performance. Additionally, further research could be pursued to investigate how to achieve an optimal trade-off between exploration and exploitation in a diverse range of operating environments. For instance, through simulation, Li [6] found that, with a higher learning rate and a lower temperature, the convergence rate of the Q-value is faster. A simple illustration of adaptive exploration schedules is sketched below.
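As a purely illustrative sketch (the decay schedules and parameter names are assumptions, not taken from the surveyed schemes), an adaptive trade-off can be realized by decaying the exploration probability, or the Boltzmann temperature, as the Q-value estimates stabilize:

def decayed_epsilon(t, eps_max=0.3, eps_min=0.01, decay=0.995):
    """Exploration probability that starts high and decays toward a
    floor, so early exploration does not persist once Q-values settle."""
    return max(eps_min, eps_max * (decay ** t))


def decayed_temperature(t, tau_max=1.0, tau_min=0.05, decay=0.99):
    """Boltzmann temperature schedule: high temperature (more random
    action selection) early on, approaching greedy selection later."""
    return max(tau_min, tau_max * (decay ** t))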

6.2. Fully Decentralized Channel Auction Models

To the best of our knowledge, most of the existing RL-based channel auction models (see Section 4.6) have been applied in centralized CR networks, in which a centralized entity (e.g., a base station) allocates white spaces to SU hosts with winning bids. The centralized entity may perform simple tasks, such as allocating white spaces to SU hosts with winning bids [16], or it may learn using RL to maximize its utility [36]. The main advantage of the centralized entity is that it simplifies the management of the auction process and the interaction among nodes. Nevertheless, it introduces implementation challenges due to the additional cost and the feasibility of deploying a centralized entity in all scenarios. While there have been increasing efforts to enhance the performance of the RL-based auction models, further research is necessary to investigate fully decentralized RL-based auction models, which do not rely on a centralized entity, along with their requirements and challenges. For instance, by incorporating the cooperative learning feature (see Section 3.7.3) into the RL auction model, SUs can exchange auction information with PUs and other SUs in a decentralized manner, which may enable them to make bidding decisions without the need for a centralized entity. However, this may introduce other concerns, such as security and node selfishness, which can be interesting directions for further research.

6.3. Enhancement on the Efficiency of RL Algorithm

The application of RL in various application schemes in CR networks may introduce complexities, and so the efficiency of the RL algorithm should be further improved. As an example, the collaborative model (see Section 4.8) requires explicit coordination in which neighboring agents exchange information among themselves in order to expedite convergence to an optimal joint action. This enhances the network performance at the expense of a higher amount of control overhead. Hence, further research is necessary to investigate the possibility of indirect coordination. Moreover, the network performance may further improve with reduced overhead incurred by RL. As another example, while RL has been applied to address security issues in CR networks (see application (A3)), the introduction of RL into CR schemes may introduce more vulnerabilities into the system. This is because malicious SUs or attackers may affect the operating environment or manipulate the exchanged information so that the honest SUs' knowledge is adversely affected.

6.4. Application of RL in New Application Schemes

The wide range of enhanced RL algorithms, including the dual Q-function, partially observable, actor-critic, auction, internal self-learning, collaborative, and competitive models (see Sections 4.3-4.9), can be extended to other applications in CR networks, including emerging networks such as cognitive maritime wireless ad hoc networks and cognitive radio sensor networks [40], in order to achieve context awareness and intelligence, which are the important characteristics of the cognition cycle (see Section 2.2.1). For instance, the collaborative model (see Section 4.8) enables an agent to collaborate with its neighbor agents in order to make decisions on action selection, which is part of an optimal joint action. This model is suitable for most application schemes that require collaborative efforts, such as trust and reputation systems [41] and cooperative communications, although the application of RL in those schemes is yet to be explored. In trust and reputation management, SUs make a collaborative effort to detect malicious SUs, such that malicious SUs are assigned low trust and reputation values. Additionally, Section 3 presents new features of each component of RL, which can be applied to enhance the performance of existing RL-based application schemes in CR networks. Further research could also be pursued to (i) apply new RL approaches, such as the two-layered multiagent RL model [42], to CR network applications, (ii) investigate RL models and algorithms applied to other kinds of networks, such as cellular radio access networks [43] and sensor networks [44], which may be leveraged to provide performance enhancements in CR networks, and (iii) apply or integrate the RL features and enhancements (e.g., state, action, and reward representations) into other learning-based approaches, such as the neural network-based approach [45].

6.5. Lack of Real Implementation of RL in CR Testbed

Most of the existing RL-based schemes have been evaluated using simulations, in which they have been shown to achieve performance enhancements. Nevertheless, to the best of our knowledge, there is a lack of implementations of RL-based schemes on CR platforms. Real implementation of the RL algorithms is important to validate their correctness and performance in real CR environments, and it may also allow further refinement of these algorithms. To this end, further research is necessary to investigate the implementation of RL-based schemes on CR platforms and the associated challenges.

7. Conclusions

Reinforcement learning (RL) has been applied in cognitive radio (CR) networks to achieve context awareness and intelligence. Examples of schemes are dynamic channel selection, channel sensing, security enhancement mechanisms, energy efficiency enhancement mechanisms, channel auction mechanisms, medium access control, routing, and power control mechanisms. To apply the RL approach, several representations may be necessary, including state and action, as well as delayed and discounted rewards. Based on the CR context, this paper presents an extensive review of the enhancements of these representations, as well as of other features including the Q-function, the trade-off between exploration and exploitation, updates of the learning rate, rules, and cooperative learning. Most importantly, this paper presents an extensive review of a wide range of enhanced RL algorithms in the CR context. Examples of the enhanced RL models are the dual Q-function, partially observable, actor-critic, auction, internal self-learning, collaborative, and competitive models. The enhanced algorithms provide insights on how various schemes in CR networks can be approached using RL. Performance enhancements achieved by the traditional and enhanced RL algorithms in CR networks are also presented. Certainly, there is a great deal of future work in the use of RL, and we have raised open issues in this paper.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Malaysian Ministry of Science, Technology and Innovation (MOSTI) under Science Fund 01-02-16-SF0027.