Abstract

We investigate the problem of resource allocation in a cognitive long-term evolution (LTE) network, where the available bandwidth resources are shared among the primary (licensed) users (PUs) and secondary (unlicensed) users (SUs). Under such spectrum sharing conditions, the transmission of the SUs should have minimal impact on the quality of service (QoS) and operating conditions of the PUs. To achieve this goal, we propose to assign the network resources based on the buffer sizes of the PUs and SUs in the uplink (UL) and downlink (DL) directions. To ensure that the QoS requirements of the PUs are satisfied, we enforce an upper bound on the size of their buffers, considering two network usage scenarios. In the first scenario, the PUs pay full price for accessing the spectrum and get full QoS protection, whereas the SUs access the network for free and are served on a best-effort basis. In the second scenario, the PUs pay less in exchange for sharing the bandwidth and receive reduced QoS guarantees, whereas the SUs pay some price for their access without any QoS guarantees. The performance of the algorithms proposed in the paper is evaluated using simulations in the OPNET environment. The algorithms show superior performance when compared with other relevant techniques.

1. Introduction

The traditional fixed spectrum allocation policy has been characterized by very ineffective spectrum utilization, resulting in an artificial scarcity of network resources [1], which stimulated a surge of interest in an alternative spectrum usage concept known as cognitive radio (CR) [2]. In a CR network (CRN), the available bandwidth resources can be shared among the PUs (paying some price for accessing the spectrum) and the SUs (who can get wireless access for free). Needless to say, under such spectrum sharing conditions the transmission of the SUs should have minimal impact on the QoS and operating conditions of the PUs [2].

Among existing wireless standards considered for deployment in CRNs, LTE is considered the most favourable due to such appealing features as spectrum flexibility, fast adaptation to time-varying channel conditions, high spectral efficiency, and robustness against interference [3]. A detailed description of the LTE radio interface can be found, for instance, in [4]. In short, LTE builds on universal terrestrial radio access (UTRA) and high-speed downlink packet access (HSDPA). In the DL, LTE uses orthogonal frequency division multiple access (OFDMA), which has high spectral efficiency and robustness against interference. Single-carrier frequency division multiple access (SC-FDMA) is applied in the UL direction, due to its lower (compared to OFDM) peak-to-average power ratio (PAPR) [5]. The numerology of LTE includes a subcarrier spacing of 15 kHz, support for scalable bandwidth of up to 20 MHz, and a resource allocation granularity of 180 kHz × 1 ms (called a resource block). Available transmission resources are distributed among the users by the medium access control (MAC) schedulers located in enhanced NodeBs (eNBs). Depending on the implementation, the scheduling can be done based on queuing delay, instantaneous channel conditions, fairness, and so forth [6, 7].

Due to the existence of two types of users (primary and secondary) with different QoS requirements, the problems of dynamic spectrum access (DSA) and resource allocation for CRNs are more complex than those considered in traditional wireless networks. The majority of the works on resource allocation in LTE-based CRNs focus on various lower layer techniques for spectrum sensing and spectrum mobility (see, e.g., [8–10]). These techniques are very effective in identifying and reducing the interference in the physical channels but do not improve the overall user-perceived QoS, which is mainly expressed in terms of the packet end-to-end delay and loss for the network users. Consequently, the results of these works are applicable only in combination with techniques designed for the MAC and higher layers [11].

The QoS protection for the PUs and the admissibility of the SUs have been studied using theoretical analysis of user behaviour. For instance, in [12] the authors propose statistical traffic control for LTE-based CRNs. To satisfy the timing constraints of all packets belonging to different streams (with diverse QoS characteristics and requirements) and achieve statistical QoS guarantees, the authors deploy admission control and coordinated transmissions of the constant-bit-rate (CBR) and variable-bit-rate (VBR) streams. Dynamic channel selection for autonomous wireless users transmitting delay-sensitive multimedia applications over a CRN has been studied in [13]. Unlike prior works, where the application-layer requirements of the users have not been considered, here the rate and delay requirements of heterogeneous multimedia users are taken into account. The authors propose a novel priority-based virtual queue interface to efficiently manage the available spectrum resources. This interface is used to (i) determine the required information exchanges, (ii) evaluate the expected delays experienced by traffic of various priorities, (iii) design a dynamic strategy learning (DSL) algorithm deployed at each user that exploits the expected delay, and (iv) adapt the channel selection strategies to maximize the user’s utility function.

The authors of [5] present a delay-power control (DPC) scheme exploiting the trade-off between the transmission delay and transmission power in wireless networks. In the proposed resource allocation procedure, each wireless link autonomously updates its power based on the interference observed at its receiver (without any crosslink communication). The DPC scheme has been proved to converge to a unique equilibrium point and contrasted with the well-known Foschini-Miljanic (FM) formulation. Some theoretical underpinnings of DPC and their practical implications for efficient protocol design have also been established. In [14], the problem of allocating the network resources (channels and transmission power) in a multihop CRN is modeled as a multicommodity flow problem with dynamic link capacity resulting from dynamic resource allocation. Based on such queue balancing, the authors propose a distributed scheme for optimal resource allocation without exchanging the spectrum dynamics between remote nodes. Considering the power masks, each node makes resource allocation decisions based on current or past local information from neighboring nodes to satisfy the throughput requirement of each flow and maintain network stability.

In this paper, we suggest an alternative strategy for resource allocation in an LTE-based CRN and propose to assign the network resources (bandwidth and transmission power) to the UL and DL of the LTE system based on the buffer sizes of user equipment (UE) pieces. Note that, in many previously proposed algorithms (e.g., [8–13]), the bandwidth and transmission power are assigned without considering the buffer occupancy of UE pieces, which may lead to a rather unfair resource allocation (when users with lower demands are allocated larger bandwidth than users with higher demands).

To ensure that the QoS requirements of the PUs are satisfied, we put constraints on the sizes of their buffers. Two network usage scenarios are considered. In the first scenario, the PUs pay full price for accessing the spectrum and get full QoS protection, whereas the SUs access the network for free and are served on a best-effort basis. In the second scenario, the PUs pay less in exchange for sharing the bandwidth and receive reduced QoS guarantees; the SUs pay some price for their usage without any QoS guarantees. The proposed resource allocation algorithm is derived based on a discrete spectrum assumption, with the spectrum resources counted in terms of LTE resource blocks (RBs). Note that a continuous spectrum assumption (used in all past works) is not applicable for a practical LTE realization, since the number of RBs comprising the bandwidth is relatively small (6, 15, 25, 50, and 100 RBs corresponding to 1.4, 3, 5, 10, and 20 MHz bandwidths, resp. [15]). Unlike existing research contributions, the transmission rates of the PUs and SUs are calculated using the modified Shannon expression, which accounts for the adaptive modulation and coding (AMC) used in LTE.

The rest of the paper is organized as follows. In Section 2 we describe the network model and formulate the resource allocation problems in two network usage scenarios. In Section 3 we provide the solution methodology and discuss the implementation of the presented resource allocation approach in a real LTE system. In Section 4 we outline a simulation model and evaluate the performance of the proposed resource allocation algorithms in two network usage scenarios. The paper closes with conclusions.

2. Problem Statement

2.1. Network Model, Assumptions, and Notation

In this work, the problem of joint power and bandwidth allocation for an LTE-based CRN is investigated for both the UL and the DL directions. Unless stated otherwise, the discussion throughout the rest of the paper applies to either direction.

Consider a basic LTE-based CRN architecture consisting of one eNB providing wireless access to PUs numbered and SUs numbered . Two spectrum usage scenarios are considered in this paper. In the first scenario, the PUs are the licensed network users, who pay some price for accessing the spectrum and therefore must be provided with certain guaranteed QoS levels; the SUs are the unlicensed network users, who can access the spectrum for free and are therefore served on a best-effort basis. In the second scenario, the PUs pay less in exchange for sharing the spectrum, while the SUs have to pay some price for the shared spectrum to compensate for the income losses of the service provider.

The considered network operates on a slotted-time basis with the time axis partitioned into mutually disjoint time intervals (slots) , , with denoting the slot length and being the slot index. The number of active PUs and SUs can be tracked using an LTE random access channel (RACH) procedure [15], which is used for initial access to the network (i.e., for originating, terminating, or registration calls).

In the UL direction, the user-generated traffic (in bits per slot) is enqueued in the buffers of UE pieces and then transmitted to the eNB using a standard packet scheduling procedure [15]. In this procedure, information about the amount of data (in bits) enqueued in the buffers of UE pieces is constantly transmitted to the eNB, so that the eNB “knows” the exact amount of data generated by the users at any slot . This information is then used by the eNB to allocate the UL transmission resources to the UE pieces. In the DL direction, the transmission resources are allocated based on the amount of data transmitted by the eNB to the users.

In an LTE system, the transmission resources are allocated to the users in terms of RBs. Each user can be allocated only an integer number of RBs, and these RBs need not be adjacent to each other [7, 15]. Another important feature of both SC-FDMA (applied in the UL direction) and OFDMA (applied in the DL direction) is the orthogonality of resource allocation, which allows achieving a minimal level of cochannel interference between different transmitter-receiver pairs located within one cell (i.e., when no frequency reuse is considered).

It is assumed that the bandwidth of the eNB is fixed and equal to RBs. For any in and any in , consider the following:
(i) The channel between the eNB and is characterized by the noise and interference coefficient , which is known to the eNB and . The channel between the eNB and is characterized by the noise and interference coefficient , which is known to the eNB and . The values of and in the UL and DL directions can be obtained from the channel state information (CSI) through the use of the LTE reference signals (RSs) [16, 17].
(ii) The sizes of the buffers and the arrival rates (in bits) of and can be observed at any slot in the UL and DL directions.

The following notation is used throughout the paper:
(i) and denote the number of RBs allocated at slot to and , respectively;
(ii) and denote the transmission power allocated at slot to and , respectively;
(iii) and denote the arrival rate (in bps) at slot in and , respectively;
(iv) and denote the buffer size (in bits) at slot of and , respectively;
(v) and denote the transmission rate (in bps) at slot in the channels of and , respectively;
(vi) , , and denote the maximal transmission power of the eNB, , and , respectively.

2.2. Scenario 1

The objective of this paper is to devise a sustainable algorithm for joint bandwidth and transmission power allocation, based on two goals:
(1) QoS Protection for the PUs. PUs should maintain their target QoS, irrespective of how many SUs enter the network and how much load they generate.
(2) Admissibility of the SUs. SUs should be able to utilize the spare capacity of the eNB (if any is left) to maximize their QoS.

These goals, however, cannot be achieved without quantified measures of the user-perceived QoS. Note that, because of the orthogonality of the RB allocation, the need for interference mitigation in the considered CRN is eliminated. In this case, the use of such physical layer characteristics as the signal-to-noise ratio (SNR) is not well justified. More practical QoS measures can be obtained from higher-layer network information. For instance, metrics such as round-trip latency and loss are traditionally used for performance evaluation in wireless networks [5]. Unfortunately, in an LTE system the direct estimation of delay and loss is rather complex. For instance, the end-to-end latency consists of various delay components, including transmission and queuing delays, propagation and processing delays, the UL delay due to scheduling, and the delay due to hybrid automatic repeat request (HARQ) [18]. Accurate analysis of these delay components requires knowledge of many system parameters, which may not be available during resource allocation.

Consequently, in this paper we argue in favour of using the buffer size of UE pieces as a QoS measure. The buffer size of PUs and SUs can be easily estimated from the known parameters , and , using the well-known Lindley equation [19]: where , whereas the transmission rates , depend on the unknown bandwidth and transmission power allocation vectors given by (the relationship between the transmission rate, transmission power, and bandwidth will be established in the next section)
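For concreteness, the per-slot buffer evolution described by the Lindley equation can be sketched as follows (a minimal Python sketch; the function and argument names are ours, and the 1 ms slot length is the LTE value assumed throughout the paper):

```python
def lindley_update(q, a, r, tau=1e-3):
    """One-slot buffer update via the Lindley equation.

    q   -- buffer size at slot t (bits)
    a   -- arrival during slot t (bits)
    r   -- allocated transmission rate during slot t (bps)
    tau -- slot length (s); the 1 ms LTE slot is assumed

    Returns the buffer size at slot t+1: the backlog plus new
    arrivals minus what was served, floored at zero.
    """
    return max(q + a - r * tau, 0.0)
```

For example, a 1000-bit backlog with 500 new bits drains completely in one slot at 2 Mbps, but leaves 500 bits behind at 1 Mbps.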

Note that the average round-trip latency of any primary UE can be kept below some upper bound by restricting its buffer size to be less than a certain target buffer size. The only information necessary in this case is the desired upper bound on delay and the average arrival rate in the PUs during some interval . Then, we can find the target buffer size by applying Little’s formula [20], where is the upper bound on delay for ; is the target buffer size for ; and is the average arrival in during the time interval given by
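The target-buffer computation via Little's formula can be sketched as follows (names are ours; the average arrival rate is estimated from a window of per-slot arrivals, as the paper describes):

```python
def target_buffer_size(d_max, arrivals, tau=1e-3):
    """Target buffer size via Little's formula: B_target = d_max * lambda.

    d_max    -- desired upper bound on delay (s)
    arrivals -- per-slot arrivals over the observation window (bits)
    tau      -- slot length (s)
    """
    # Average arrival rate over the window, in bps.
    lam = sum(arrivals) / (len(arrivals) * tau)
    return d_max * lam
```

With 1000 bits arriving per 1 ms slot (i.e., 1 Mbps) and a 50 ms delay bound, the target buffer size comes out to 50 kbit.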

To maintain the desired upper bound on the end-to-end delay, the buffer size of the PUs should be constrained as follows: or, equivalently, using (1a),

If added to an optimization problem, constraint (5) can guarantee the QoS protection of the PUs. Except for some (very rare) cases when constraint (5) is not feasible, the buffer size will always stay below the target level, irrespective of how many SUs are in the network and how much load they generate. The setting of the parameters , as well as the cases when constraint (5) is infeasible, is discussed in Section 3.

Note that, at any slot , the number of RBs allocated for data transmissions of the PUs and SUs in the UL and DL directions should not exceed RBs. That is, where represents the set of all nonnegative integers.
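The integrality and total-bandwidth conditions on the RB allocation can be illustrated with a simple validity check (a sketch; the function name and argument layout are ours):

```python
def rb_allocation_valid(n_pu, n_su, total_rb=50):
    """Check the RB constraint: every per-user allocation is a
    nonnegative integer, and together the allocations do not exceed
    the eNB bandwidth (in RBs; 50 RBs = 10 MHz, as in the paper's
    simulation setup).
    """
    alloc = list(n_pu) + list(n_su)
    return (all(isinstance(n, int) and n >= 0 for n in alloc)
            and sum(alloc) <= total_rb)
```

For example, giving two PUs 10 and 5 RBs and one SU 3 RBs is valid under a 50-RB budget, whereas fractional or over-budget allocations are not.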

Additionally, the transmission power allocated to the users at any slot should be nonnegative and cannot exceed the maximal transmission power levels. Hence, for the UL direction, and for the DL direction, where

We are now ready to show how to attain the second goal of resource allocation. As in the case of the QoS protection of the PUs, we use the buffer sizes of the SUs as a measure of the SUs’ QoS. We say that the admissibility of the SUs in the network is preserved if they are allocated the bandwidth that is not used by the PUs. This can be done, for instance, by minimizing the sum (or the logarithmic sum) of the buffer sizes of the SUs, or by minimizing the maximal buffer size of the SUs, subject to the constraints listed above.

For the UL direction, this gives us the following problem (to simplify notation, we skip the index t below): For the DL direction, constraint (8e) should be replaced by

In the above formulation, each PU is guaranteed a certain target QoS level, and the optimal allocations for the PUs are such that

The spare bandwidth (if any is left) is distributed among the SUs to minimize the sizes of their buffers, which means that the two goals of resource allocation are achieved. However, this formulation works well only if the aggregated traffic demand of the SUs is greater than that of the PUs. Otherwise, there might be a situation when the optimal solution for the SUs is such that , which is apparently unfair, since the PUs then have larger buffer sizes than the SUs and will therefore experience longer delays.

To avoid such a situation, we propose to modify problem (8a)–(8g) by replacing its objective (8a) with

Objective (11) guarantees that PUs are served with maximal QoS, even if the aggregated traffic demand of SUs is greater than that of the PUs.

2.3. Scenario 2

We now consider a different scenario, in which the PUs benefit (by paying less) from sharing their bandwidth with the SUs. Let us assume that, in the network without price benefits (i.e., in Scenario 1), the price for accessing the licensed spectrum of the PUs equals 1. In this case, all the PUs get full QoS protection (i.e., a guaranteed target buffer size and average end-to-end latency ). Suppose that each PU can define the portion of the allocated spectrum that it is willing to share with the SUs. The delay in turn is related to .

Let us denote by the price for the portion of spectrum that is willing to share with the SUs. Thus, represents the willingness of to trade off its QoS for money. Subsequently, the guaranteed QoS level of will be reduced, and instead of the QoS protection constraint (8b) we will have

Now, to compensate for the income losses of a service provider, the SUs will have to pay the price for the shared spectrum. Let denote the price that each SU will pay for accessing the shared spectrum. The amount that pays in this case is , whereas the revenue of the service provider is

Assuming that the service provider does not incur a loss by allowing the secondary users to use the spectrum, we have to incorporate the constraint where is the profit that the service provider would like to make by admitting the SUs into the network.
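The revenue condition can be illustrated with a toy check. All names and the exact form of the inequality below are our assumptions (the paper's pricing symbols are not reproduced here): SU income must cover the discounts granted to the PUs plus a minimum profit.

```python
def revenue_constraint_ok(pu_discounts, su_price, n_sus, profit_min=0.0):
    """One plausible reading of the provider's revenue constraint:
    income collected from the SUs must cover the discounts granted
    to the PUs plus a minimum profit.

    pu_discounts -- per-PU price reduction for sharing spectrum
    su_price     -- price each SU pays for the shared spectrum
    n_sus        -- number of admitted SUs
    profit_min   -- minimum profit the provider requires
    """
    su_income = su_price * n_sus
    pu_loss = sum(pu_discounts)
    return su_income >= pu_loss + profit_min
```

For instance, two SUs paying 0.5 each cover PU discounts totalling 0.5, but a single SU cannot cover discounts of 1.2 plus a 0.2 profit target.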

Note that, in such a formulation, the SUs get no QoS guarantees, and therefore we have to minimize the price that the SUs pay for their usage. Thus, the optimization problem is for the UL direction. For the DL direction, constraint (15e) is replaced by constraint (8g). The value of obtained by solving (15a)–(15g) is then broadcast by the service provider to the SUs, which will have to pay the determined price.

We note, in passing, that there are many other ways in which the pricing problem could be addressed. For instance, the numbers may be negotiated between the PUs and a service provider. One can also think of a scenario where different SUs may be able to express their willingness to pay via bidding the prices . The options are numerous, but they are beyond the scope of this paper.

3. Implementation Issues

3.1. Rate as a Function of Bandwidth and Power

Note that the transmission rates in the UL and DL channels of an LTE system can be found using the modified Shannon equation [21], expressed as where and are the SNRs in the channels of and , respectively; and are the system bandwidth efficiencies for and , respectively; and are the SNR efficiencies for and , respectively; and is the bandwidth of one RB ( = 180 kHz) [21].

In real LTE networks, the system bandwidth efficiency is strictly less than 1 due to the overheads on the link and system levels. The bandwidth efficiency is fully determined by the design and internal settings of the system and does not depend on the physical characteristics of the wireless channels between the user and the eNB [21, 22]. Hence, it is reasonable to assume that

The SNR efficiency is mainly limited by the maximum efficiency of the supported modulation and coding scheme (MCS) [16]. In LTE, the MCS is chosen using AMC to maximize the data rate by adjusting the transmission parameters to the current channel conditions. AMC is one realization of dynamic link adaptation. In the AMC algorithm, the appropriate MCSs for packet transmissions are assigned periodically (within a short fixed time interval, usually equal to one slot) by the eNB, based on the instantaneous channel conditions reported by the users. Higher MCS values are allocated to high-quality channels to achieve a higher transmission rate. Lower MCS values are assigned to channels of poor quality to decrease the transmission rate and, consequently, ensure the transmission quality [16, 17].

The method for choosing the MCS can be expressed as follows. Based on the instantaneous radio channel conditions, the SNR is calculated for the DL wireless channels between the eNB and the users. A supported MCS index is chosen using the expression [16, 17], where are the SNR thresholds corresponding to a −10 dB bit error ratio (BER) given by the Additive White Gaussian Noise (AWGN) curves for each MCS (the standard AWGN curves can be found in [17]). The LTE standard allows = 15 MCS levels (a description of the MCS levels, their code rates, and efficiencies is given in Table 1) [17].
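The threshold-based MCS selection can be sketched as follows (a hedged sketch: the threshold values used in the example are illustrative placeholders, not the AWGN-curve values of [17]):

```python
def select_mcs(snr_db, thresholds_db):
    """Pick the highest supported MCS index m such that the channel
    SNR meets the m-th threshold (per-MCS SNR thresholds taken from
    the AWGN curves at the target BER). Returns 0 when even the
    lowest MCS is not supported.

    snr_db        -- instantaneous channel SNR (dB)
    thresholds_db -- ascending per-MCS SNR thresholds (dB)
    """
    mcs = 0
    for m, gamma in enumerate(thresholds_db, start=1):
        if snr_db >= gamma:
            mcs = m
    return mcs
```

With illustrative thresholds [-6, -4, -2, 0, 2] dB, an SNR of 1 dB supports MCS 4, while an SNR of -10 dB supports no MCS at all.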

Based on the above, the SNR efficiency depends on the SNR of the DL wireless channel and can therefore be represented by a function , where are the values of the SNR efficiency corresponding to the supported MCSs.

Using (17) and (19), the transmission rates of the users can be expressed as
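Under these definitions, the rate computation via the modified Shannon formula of [21] can be sketched as follows. The efficiency values below are illustrative defaults, not the paper's calibrated parameters:

```python
import math

def rate_bps(n_rb, snr_linear, bw_eff=0.9, snr_eff=1.25, w_rb=180e3):
    """Modified Shannon rate: r = eta_bw * n * W * log2(1 + SNR/eta_snr).

    n_rb       -- number of allocated resource blocks
    snr_linear -- linear (not dB) SNR in the channel
    bw_eff     -- system bandwidth efficiency (< 1; illustrative value)
    snr_eff    -- SNR efficiency implied by the chosen MCS (illustrative)
    w_rb       -- bandwidth of one RB, 180 kHz in LTE
    """
    return bw_eff * n_rb * w_rb * math.log2(1.0 + snr_linear / snr_eff)
```

As a sanity check, one RB with SNR equal to the SNR efficiency and unit bandwidth efficiency yields exactly 180 kbps (log2 of 2 is 1), and zero RBs yield zero rate.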

3.2. Solution Methodology

In this subsection, we describe a solution methodology for the resource allocation problem in Scenario 1 in the UL direction (with the objective given by (11) and the constraints defined in (8b)–(8f)). A resource allocation problem for the DL direction as well as problem (15a)–(15g) for Scenario 2 can be solved analogously.

Note that, in Scenario 1, a resource allocation problem in the UL direction is equivalent to

In the above problem, some of the optimization variables (namely, the components of and ) can take only integer values, whereas the other variables (the components of and ) are real-valued. In addition, constraints (21g)–(21j) are represented by nonsmooth nonlinear functions of (, ) and (, ). Hence, (21a)–(21j) is a mixed integer nonlinear programming (MINLP) problem, which is NP-hard. For an immediate proof of NP-hardness, note that MINLP includes the mixed integer linear programming (MILP) problem, which is itself NP-hard [23].

Before applying any MINLP method to solve (21a)–(21j), the nonsmooth function in (19), included in the expressions for and , should be replaced by a smooth approximation. To construct a smooth approximation of , note that this function is equivalent to a sum of shifted and scaled versions of the Heaviside step function [24]. That is, where .

Recall that a smooth approximation of a step function is given by the logistic sigmoid function [25]: where ; is in the range of real numbers from to . If we take , then a larger corresponds to a sharper transition at ; that is,

The above holds because, for , we have ; for , ; and for ,

Consequently, an approximation of the shifted Heaviside function is given by a shifted logistic function defined for , with real in the range from to .

Based on (28), we can construct a smooth approximation for as

Then, it is rather straightforward to verify that and the approximations for and will take the form
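The construction above, a staircase replaced by a sum of shifted and scaled logistic steps, can be sketched in a few lines (names and the steepness parameter are ours; the threshold and efficiency values in the example are illustrative):

```python
import math

def sigmoid(x, k=50.0):
    """Logistic approximation to the Heaviside step: tends to 0 for
    x << 0 and to 1 for x >> 0; larger k gives a sharper transition.
    Split into two branches to avoid overflow in math.exp.
    """
    if k * x >= 0:
        return 1.0 / (1.0 + math.exp(-k * x))
    z = math.exp(k * x)
    return z / (1.0 + z)

def smooth_snr_efficiency(snr_db, thresholds_db, etas):
    """Smooth stand-in for the staircase SNR-efficiency function:
    the base efficiency plus one scaled logistic step per MCS
    transition. etas[m] is the efficiency of level m (etas[0] applies
    below the lowest threshold); len(etas) == len(thresholds_db) + 1.
    """
    val = etas[0]
    for gamma, lo, hi in zip(thresholds_db, etas, etas[1:]):
        val += (hi - lo) * sigmoid(snr_db - gamma)
    return val
```

Far from a threshold the approximation matches the staircase almost exactly; at the threshold itself it passes through the midpoint of the jump.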

With the approximations given by (31a) and (31b), problem (21a)–(21j) can be solved using a sequential optimization approach as

The above problem is a smooth MINLP, which can be solved using some fast and effective MINLP method. Note that most MINLP techniques involve the construction of the following relaxations of the considered problem: the nonlinear programming (NLP) relaxation (the original problem without integer restrictions) and the MILP relaxation (the original problem where the nonlinearities are replaced by supporting hyperplanes). In our case, a smooth MILP relaxation of (32a)–(32j) at a given point is given by

The NLP relaxation to (32a)–(32j) is

Note that MINLP problems can be solved using either deterministic techniques or heuristics. A typical exact method for solving MINLPs is the well-known branch-and-bound algorithm and its various modifications [26]; examples of heuristic approaches include local branching [27], large neighborhood search [28], and the feasibility pump [29], to name a few. Since we are interested in a reasonably simple and fast algorithm, it is more convenient to use heuristics. In this paper, a Feasibility Pump (FP) heuristic [29, 30] is applied to solve (32a)–(32j). FP is perhaps the simplest and most effective method for producing more and better solutions in a shorter average running time. For problems with nonbinary integer variables, the complexity of FP is exponential in the size of the problem [31].

A fundamental idea of the FP algorithm is to decompose the problem into two parts: integer feasibility and constraint feasibility. The former is achieved by rounding (solving a MILP relaxation of the original problem) and the latter by projection (solving a NLP relaxation). Consequently, two sequences of points are generated. The first sequence , , contains the integral points that may violate the nonlinear constraints. The second sequence contains the points which are feasible for a continuous relaxation of the original problem but might not be integral.

More specifically, with the input being a solution to the NLP relaxation given by (34a)–(34j), the algorithm generates two sequences by solving the following problems for : where and are the norm and norm, respectively. The rounding step is carried out by solving (35a)–(35j), whereas the projection is the solution of (36a)–(36j). The suggested FP algorithm alternates between the rounding and projection steps until = (which implies feasibility) or the number of iterations reaches the predefined limit . The workflow of the corresponding FP algorithm is given below.

FP Algorithm for Non-Convex MINLP
(0) INITIALIZATION: input ; set ; solve (34a)–(34j) to obtain ;
(1) WHILE ( ) do:
(2) ROUNDING: solve (35a)–(35j) to obtain ;
(3) IF THEN go to step (6);
(4) PROJECTION: solve (36a)–(36j) to obtain ;
(5) Set ;
(6) OUTPUT: solution .
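The alternating structure of the algorithm can be sketched generically as follows. This is a hedged sketch, not the paper's exact method: the MILP rounding step (35a)–(35j) is stood in for by component-wise rounding, and the NLP projection step (36a)–(36j) is abstracted as a user-supplied callback:

```python
def feasibility_pump(x0, project, round_=lambda x: [round(v) for v in x],
                     max_iter=200):
    """Generic feasibility-pump loop: alternate a rounding step
    (integer feasibility) and a projection step (constraint
    feasibility) until the two sequences coincide or the iteration
    cap is reached.

    x0      -- solution of the continuous (NLP) relaxation
    project -- projection onto the relaxed feasible set (callback
               standing in for the NLP subproblem)
    round_  -- rounding step (component-wise rounding stands in for
               the exact MILP subproblem)
    """
    x = list(x0)
    for _ in range(max_iter):
        x_int = round_(x)                      # rounding step
        if all(abs(a - b) < 1e-9 for a, b in zip(x_int, x)):
            return x_int                       # integral AND feasible
        x = project(x_int)                     # projection step
    return None                                # iteration cap reached
```

As a toy example, projecting onto the box [0, 3.4] starting from 3.4 converges to the integer point 3 in two iterations; a projection that never lands on an integral point makes the loop exhaust its budget and return None.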

Note that, in order to retain the convergence of the algorithm, both problems (35a)–(35j) and (36a)–(36j) need to be solved exactly. Problem (36a)–(36j) (and consequently problem (34a)–(34j)) can be solved using any standard NLP method. In this paper, an interior point algorithm (described, e.g., in [32]) with polynomial complexity is applied to solve (36a)–(36j) and (34a)–(34j). The MILP problem (35a)–(35j) is relatively simple and can therefore be solved to optimality by any technique from the family of branch-and-bound methods (e.g., [26]).

3.3. Target Buffer Size for PUs and Feasibility Issues

Recall that represents the target buffer size for , which indicates that the QoS requirements of are satisfied. By limiting the buffer size of PUs, we restrict the QoS of PUs (measured in terms of their buffer size) to be higher (or at least not smaller) than the required minimum. From this point of view, constraints (5) and (12) guarantee the QoS protection of PUs in Scenario 1 and Scenario 2, respectively. Apparently, one can always restrict the target buffer size of PUs to be equal to zero, so that after each subsequent data arrival the buffer of each PU is fully cleared. This approach represents a “greedy” policy which ensures that the PUs are always served with the best possible QoS but does not guarantee that some spare capacity will be left to serve the SUs.

A “less greedy” strategy would be to establish some lower bound on the QoS of the PUs, which provides an “opportunity” for the SUs to be served. To establish this lower bound, recall that growing delay and loss indicate that the buffer of the corresponding node is congested, whereas in an uncongested node the packet delay and loss are always minimal [33]. On the other hand, it has been reported in [33, 34] that in an uncongested node the buffer size increases very slowly, that is, , whereas in a congested node the buffer size grows very fast, so that . Hence, to serve the PUs with minimal delay and loss, it is enough to prevent the growth of their buffers; that is, for each PU set where is some small value such that

A more sophisticated approach to finding the lower QoS bound can be deployed if information about the past buffer sizes is available at any time slot . In this case, one can set the target buffer size based on the past T observations as or where .
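The three target-setting rules above, slow-growth, minimum-of-history, and average-of-history, can be sketched as follows (function names follow the NBC/MBC/ABC labels the paper introduces in Section 4; the epsilon value is illustrative):

```python
def nbc_target(q_now, eps=1.0):
    """Nongreedy rule (cf. (37a)-(37b)): permit only slow buffer
    growth, target = q(t) + eps for some small eps (illustrative)."""
    return q_now + eps

def mbc_target(history):
    """Minimal-buffer rule (cf. (38a)): target = minimum buffer size
    over the past T observed slots."""
    return min(history)

def abc_target(history):
    """Average-buffer rule (cf. (38b)): target = average buffer size
    over the past T observed slots."""
    return sum(history) / len(history)
```

The minimum rule is the most protective of the PUs (smallest target), while the average rule leaves more slack for the SUs.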

In some (very rare) cases (when the service capacity of the eNB is not sufficient to guarantee that the buffer sizes of all PUs stay below the target buffer size), constraints (5) and (12) in Scenarios 1 and 2 may not be feasible. In the following, we show how to identify such a situation and what the optimal resource allocation strategy should be in this case.

Note that the service capacity of an LTE system greatly depends on the physical characteristics (e.g., SINR and MCS) of the wireless channels between the eNB and the users. Hence, at the moment preceding the bandwidth/power allocation, it is rather difficult to estimate the service capacity of the system. Nevertheless, it is possible to find the average service capacity and to determine the upper and lower bounds on the service capacity of the eNB in the UL and DL directions, such that where , , and are the lower bound, upper bound, and average service capacity of the eNB, respectively.

In our case, the upper bound on the service capacity of the eNB is given by where is the maximal possible SNR value.

The lower bound on the service capacity of the eNB is where is the minimal possible SNR value.

The average service capacity of the eNB equals where is the average SNR value.

If SNR information is available, then the values , , and can be set as Otherwise (i.e., if SNR information is not available), these values are set arbitrarily.

Now we can identify the situation when constraints (5) and (12) cannot be satisfied. For this, we perform the following test. For Scenario 1, if then constraint (5) is feasible. Otherwise, condition (5) cannot be satisfied. Hence, all the SUs receive zero allocations, and we assign the bandwidth/power resources only to the PUs by solving the following optimization problem: for the UL direction. For the DL direction, constraint (43e) should be replaced by

For Scenario 2, if then constraint (12) is feasible. Otherwise, inequality (12) cannot be satisfied, and the bandwidth/power resources are assigned only to the PUs (according to (43a)–(43g)), whereas the SUs receive zero allocations.
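The feasibility test can be illustrated with a rough sketch: the rate needed to keep every PU buffer at or below its target is compared against a (lower-bound) estimate of the eNB service capacity. This is one plausible reading of the test, with our own names, not the paper's exact inequality:

```python
def pu_constraints_feasible(q, a, q_target, capacity_bps, tau=1e-3):
    """Rough feasibility test for the PU buffer constraints.

    q            -- current PU buffer sizes (bits)
    a            -- PU arrivals in the current slot (bits)
    q_target     -- per-PU target buffer sizes (bits)
    capacity_bps -- (lower-bound) service capacity of the eNB (bps)
    tau          -- slot length (s)

    Feasible when the total rate needed to bring every PU buffer
    down to its target within one slot fits in the capacity.
    """
    need = sum(max(qi + ai - ti, 0.0)
               for qi, ai, ti in zip(q, a, q_target))
    return need / tau <= capacity_bps
```

When the test fails, the paper's fallback applies: the SUs receive zero allocations and all resources go to the PUs.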

4. Performance Evaluation

4.1. Simulation Model

A simulation model of the network, consisting of one eNB, PUs, and SUs randomly located in a service area of 3 km radius, has been developed on top of a standard LTE platform using the OPNET package [35]. The bandwidth of the eNB equals = 50 RBs (which is equivalent to 10 MHz). The slot duration in the algorithm equals = 1 ms. The maximal number of iterations in the FP algorithm is set to = 200. The maximal transmission power of the eNB equals = 46 dBm, and the maximal transmission power of each PU/SU equals  dBm. Other simulation settings of the model are listed in Table 2 (all the parameters are based on the LTE specifications [36]). The radio model of the network corresponds to the requirements of ITU-R Recommendation M.1225.

The algorithms proposed in the paper are denoted as follows:
(i) GBC (Greedy Buffer Control), where the target buffer size equals 0 for all the PUs;
(ii) NBC (Nongreedy Buffer Control), with the target buffer size calculated according to (37a) and (37b);
(iii) MBC (Minimal Buffer Control), with an averaging window of 10 slots and the target buffer size given by (38a);
(iv) ABC (Average Buffer Control), with an averaging window of 10 slots and the target buffer size equaling (38b).

Two previously proposed schemes are used to benchmark the performance of the algorithms proposed in this paper:
(i) UBA (Utility-Based Allocation), presented in [11], where the network resources (bandwidth and transmission power) are allocated to the UL channels to maximize the total transmission rate of the users subject to the capacity and transmission power constraints;
(ii) DPC (Delay Power Control), derived in [5], which is used to balance the transmission delay against the transmission power in the wireless channels between the eNB and the users.

We also gather simulation results for the proposed resource allocation approach in Scenario 2 with different settings of its parameters and benchmark its performance against that of the algorithm in Scenario 1.

All algorithms are simulated using identical settings. The data traffic in simulations is represented by voice over Internet Protocol (VoIP), video, and Hypertext Transfer Protocol (HTTP) applications mixed in proportion 2 : 3 : 5 using the models defined in [37].
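The 2 : 3 : 5 traffic mix can be reproduced, for instance, by weighted sampling of the application type per session; the function name and seed below are illustrative, not part of the simulation model:

```python
import random

def draw_application(rng):
    """Draw the application type of a new session according to the
    VoIP : video : HTTP = 2 : 3 : 5 traffic mix used in the simulations."""
    return rng.choices(["VoIP", "video", "HTTP"], weights=[2, 3, 5], k=1)[0]

rng = random.Random(2016)
sessions = [draw_application(rng) for _ in range(10000)]
```

Over a long run, roughly 20% of sessions are VoIP, 30% video, and 50% HTTP.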

4.2. Complexity and Fairness Issues

Let us first evaluate the complexity and the fairness of resource allocation in the different algorithms. Figure 1 shows the average number of iterations before convergence for UBA, DPC, and the proposed algorithms in Scenario 1 (denoted as MBC) and Scenario 2, gathered in simulations with 100 PUs and an average SNR of 0 dB. Note that the target buffer size for the PUs is calculated according to (38a) with an averaging window of 10 slots (in Scenarios 1 and 2) and with fixed settings of the Scenario 2 parameters. Results demonstrate that the complexity of the FP method deployed for resource allocation in Scenarios 1 and 2 is exponential in the size of the problem (which corresponds to the findings reported in [31]) and is comparable to the complexities of the other techniques (UBA and DPC).

Figure 2 shows the fairness of the different methods calculated according to Jain's equation [38], J = (sum_i x_i)^2 / (n * sum_i x_i^2), where x_i is the mean throughput of user i and n is the number of users; Jain's index of fairness is computed separately for the PUs, for the SUs, and for all of the users. Results have been gathered for 100 PUs, 100 SUs, an average SNR of 0 dB, and the target buffer size for the PUs calculated according to (38a) (Scenarios 1 and 2) with different settings of the parameters in Scenario 2.
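Jain's index can be computed directly from the mean throughputs; this minimal sketch uses generic names rather than the paper's notation:

```python
def jain_index(throughputs):
    """Jain's fairness index J = (sum x_i)^2 / (n * sum x_i^2).
    J = 1 for a perfectly equal allocation; J = 1/n when a single user
    receives all of the throughput."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq) if sum_sq > 0 else 0.0
```

Applying it to the PU throughputs, the SU throughputs, and the combined set yields the three fairness indices plotted in Figure 2.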

It follows from these graphs that the fairness index among all of the users is higher in Scenario 2 with the parameter set to 1 and lower in Scenario 1 and in Scenario 2 with the parameter set to 0. Such results are to be expected: in Scenario 1 and in Scenario 2 with the parameter set to 0, the SUs are served for free on a best-effort basis, whereas the PUs enjoy full QoS guarantees. Hence, the network resources are shared unequally between the PUs and the SUs, with most of the RBs allocated to the PUs and the rest of the bandwidth distributed among the SUs. On the other hand, the fairness indices for the PUs and for the SUs are rather high in both scenarios under different parameter settings, which indicates that users of the same type are treated equally during resource allocation.

4.3. Simulation Results in Scenario 1

Let us evaluate the performance of the resource allocation algorithm in Scenario 1 in terms of mean throughput, packet end-to-end delay, and loss for the different user types. Figures 3–6 show results obtained in simulations with 100 PUs and an average SNR of 0 dB. Results demonstrate that the UBA scheme has the largest packet delay and loss and the lowest throughput for both the PUs and the SUs. The poor performance of this scheme is explained as follows:
(i) QoS protection of the PUs is not considered in UBA. Hence, the throughput, delay, and loss are the same for PUs and SUs.
(ii) The buffer sizes of individual users are not considered in resource allocation, and therefore the QoS for different users varies significantly (since different users have different traffic demands).
(iii) Although widely used, the choice of transmission rate as an individual user utility is not very rational: some users may have very low demand and a small buffer size, in which case there is no need to maximize their transmission rate and spend the network resources on them.

The performance of the DPC scheme is slightly better than that of UBA. Note that DPC takes into account the transmission delay, which is balanced against the transmission power in the wireless channels, and therefore the performance of DPC for the PUs and SUs is better than that of UBA. However, the queuing delay related to the users' buffer sizes and the QoS requirements of the PUs are not considered, and therefore the delay and loss for the PUs are much greater in DPC than in GBC, NBC, MBC, and ABC.

GBC, NBC, MBC, and ABC show very consistent performance for the users. All of these techniques take into account the users’ buffer size, which makes it possible to minimize the average delay and loss in the network for both the PUs and the SUs. Additionally, the QoS for the PUs is guaranteed by restricting the buffer size of each PU to stay below the target buffer size level, which ensures that the packet delays for PUs do not exceed the average target delays.

The best performances for the PUs and the SUs are achieved by GBC and NBC, respectively. Such results provide a good demonstration of the trade-off between the delay and loss for the SUs and the target buffer size of the PUs. In particular, the "greedy" strategy shows good throughput, delay, and loss results for the PUs but poor performance for the SUs. On the other hand, it is possible to achieve a compromise between holding the QoS guarantees of the PUs and providing reasonable performance for the SUs by using less strict target buffer size settings (e.g., NBC, ABC, and MBC).

4.4. Simulation Results in Scenario 2

We now present simulation results in Scenario 2 with different settings of the parameters and with the target buffer size of the PUs calculated according to (38a) for an averaging window of 10 slots, and benchmark these results against the performance of the MBC algorithm in Scenario 1. Figures 7 and 8 show the mean throughput, packet end-to-end delay, and loss in simulations with 100 PUs, 50 SUs, and an average SNR of 0 dB.

Results demonstrate that the QoS of the PUs and the SUs depends to a great extent on the parameter settings. The buffer size parameter has a negative impact on the service performance for the PUs and a positive impact on the performance for the SUs: the target buffer size of the PUs increases with this parameter, which leads to decreased throughput and increased delay and loss for these users.

We also observe that the service provider's revenue requirement has a negative impact on the QoS of the PUs and a positive impact on the performance of the SUs. Such results are due to the stricter constraints on network usage for the different types of users: the cases when the target QoS of the PUs cannot be satisfied (and problem (15a)–(15g) is infeasible) happen more frequently, and the QoS for these users degrades. On the other hand, the SUs utilize more bandwidth in response to the revenue constraints, and therefore their QoS improves.

4.5. Comparison between Scenario 1 and Scenario 2

The results below show the performance of the algorithms in Scenario 1 and Scenario 2 with the target buffer size for the PUs calculated according to (38a) with an averaging window of 10 slots (in Scenarios 1 and 2) and different settings of the parameters in Scenario 2. Figures 9 and 10 show the mean throughput, packet end-to-end delay, and loss in simulations with 100 PUs, 50 SUs, and an average SNR of 0 dB.

Results demonstrate that the QoS of the PUs is maximal in Scenario 1 and in Scenario 2 with the parameter set to 0, because in this case the PUs are provided with full QoS guarantees, whereas the SUs are served on a best-effort basis. On the other hand, the service performance for the SUs is better in Scenario 2 with the parameter set to 1, since the target buffer size for the PUs is increased, leading to degraded throughput and increased delay and loss for these users and improved performance for the SUs, which pay for the shared resources.

5. Conclusion

The problem considered in this paper is that of transmission power and bandwidth allocation in a cognitive LTE network where the bandwidth is shared among the PUs and SUs. To guarantee that the QoS requirements of the PUs are satisfied, we enforce an upper bound on their buffer sizes. Unlike previously proposed resource allocation techniques, our algorithm is derived under realistic assumptions, such as discrete spectrum resources and consideration of the AMC scheme deployed in LTE systems. The performance of the algorithm is evaluated using simulations in the OPNET environment. Results show that the proposed resource allocation strategy performs significantly better than other relevant resource allocation techniques.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.