Abstract

Among the goals of 3GPP LTE networks are higher user bit rates, lower delays, increased spectrum efficiency, support for diverse QoS requirements, reduced cost, and operational simplicity. Resource scheduling and interference mitigation are two functions which are key to achieving these goals. This paper provides a survey of related techniques which have been proposed and shown to be promising. A brief discussion of the challenges for LTE-Advanced, the next step in the evolution, is also provided.

1. Introduction

The demand for cellular communication services is expected to continue its rapid growth in the next decade, fuelled by new applications such as mobile web-browsing, video downloading, on-line gaming, and social networking. The commercial deployment of 3G cellular network technologies began with 3GPP UMTS/WCDMA in 2001 and has evolved into current UMTS/HSPA networks. To maintain the competitiveness of 3GPP UMTS networks, a well-planned and graceful evolution to 4G networks [1] is considered essential. LTE is an important step in this evolution, with technology demonstrations beginning in 2006. Commercial LTE network services started in Scandinavia in December 2009 and it is expected that carriers worldwide will shortly be starting their upgrades.

The main design goals behind LTE are higher user bit rates, lower delays, increased spectrum efficiency, reduced cost, and operational simplicity. The first version of LTE, 3GPP Release 8, lists the following requirements [2]: (1) peak rates of 100 Mbps (downlink) and 50 Mbps (uplink); (2) increased cell-edge bit rates; (3) a radio-access network latency of less than 10 ms; (4) two to four times the spectrum efficiency of 3GPP Release 6 (WCDMA/HSPA); (5) support of scalable bandwidths of 1.25, 2.5, 5, 10, 15, and 20 MHz; (6) support for FDD and TDD modes; (7) smooth operation with, and an economically viable transition from, existing networks. In order to meet these demanding requirements, LTE makes use of multiantenna techniques and intercell interference coordination.

In this paper, we provide a survey of radio resource scheduling and interference mitigation in LTE. Both are widely recognized as areas which can greatly affect the performance and spectrum efficiency of an LTE network. The paper is organized as follows. In Section 2, a brief overview of some aspects of LTE, necessary to discuss scheduling in a meaningful way, is given. Radio resource scheduling methods proposed are discussed in Section 3. Methods based on interference mitigation to improve the QoS of cell-edge users are described in Section 4. Section 5 provides a brief discussion of LTE-Advanced, as the next step towards a 4G network.

2. Overview of LTE PHY

A brief description of LTE Physical Layer features, necessary to discuss scheduling algorithms, is now provided. It should be noted that scheduling decisions are made at the base station (eNB in 3GPP parlance) for both downlink and uplink radio transmissions. More details can be found in [3–5].

2.1. Downlink

LTE radio transmission on the downlink is based on OFDM, a modulation scheme that is used in a variety of wireless communication standards. OFDMA, a variant of OFDM which allows several users to simultaneously share the OFDM subcarriers, is employed in LTE in order to take advantage of multiuser diversity and to provide greater flexibility in allocating (scheduling) radio resources. Specifically, the OFDM subcarriers are spaced 15 kHz apart and each individual subcarrier is modulated using QPSK, 16-QAM, or 64-QAM following turbo coding. AMC is used to allow the optimal MCS to be chosen, based on current channel conditions.

A major difference between packet scheduling in LTE and that in earlier radio access technologies, such as HSDPA, is that LTE schedules resources for users in both TD and FD, whereas HSDPA only involves TD. This additional flexibility has been shown to provide substantial throughput and coverage gains. In order to make good scheduling decisions, a scheduler requires knowledge of channel conditions. Ideally, at each scheduling time, the scheduler should know the channel gain for each sub-carrier and each user. However, due to limited signalling channel resources, subcarriers are grouped into RBs, each consisting of 12 adjacent subcarriers. Each RB has a time slot duration of 0.5 ms, which corresponds to 6 or 7 OFDM symbols depending on whether an extended or normal cyclic prefix is used. The smallest resource unit that a scheduler can allocate to a user is an SB, which consists of two consecutive RBs, spanning a subframe time duration or TTI of 1 ms and a bandwidth of 180 kHz (see Figure 1).
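As a quick numerical illustration of the resource-grid dimensions just described, the Python snippet below derives the SB size from the figures above. It is a minimal sketch: the 50-RB configuration assumed for a 10 MHz carrier is an illustrative example and is not part of the description above.

```python
# Illustrative only: derive LTE scheduling-block (SB) dimensions from the
# parameters quoted in the text (12 subcarriers of 15 kHz per RB, 0.5 ms slots).
SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12
RB_SLOT_DURATION_MS = 0.5
RBS_PER_SB = 2  # an SB spans two consecutive RBs in time

sb_bandwidth_hz = SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_HZ  # 180 kHz
sb_duration_ms = RBS_PER_SB * RB_SLOT_DURATION_MS             # 1 ms (one TTI)

# Assumed example: a 10 MHz carrier configured with 50 RBs across frequency,
# so the scheduler has 50 SBs to allocate every TTI.
rbs_in_frequency = 50
print(f"SB = {sb_bandwidth_hz / 1e3:.0f} kHz x {sb_duration_ms} ms, "
      f"{rbs_in_frequency} SBs schedulable per TTI")
```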

2.2. Uplink

Except for the use of SC-FDMA, a precoded version of OFDM, in place of OFDMA, the PHY for the uplink is similar to that for the downlink, especially for the FDD mode. SC-FDMA is preferred over OFDMA since SC-FDMA has a smaller PAPR which allows a reduced power consumption. This is an important consideration for the UE (user/mobile terminal) and results in improved coverage and cell-edge performance. Another advantage is a simpler amplifier design, which translates to a lower-cost UE.

3. Scheduling

Due to the central role of the scheduler in determining the overall system performance, there have been many published studies on LTE scheduling [6–41]. Simply stated, the scheduling problem is to determine the allocation of SBs to a subset of UEs in order to maximize some objective function, for example, overall system throughput or a fairness-sensitive metric. The identities of the assigned SBs and the MCSs are then conveyed to the UEs via a downlink control channel.

To reduce complexity, most schedulers operate in two phases: TDPS followed by FDPS. The TD PS creates an SCS, a list of users which may be allocated resources in the current scheduling period, and the FD PS determines the actual allocation of SBs to users in the SCS. Since the TD PS does not concern itself with the actual allocation of SBs, it typically uses only the average full-band CQI and not individual subcarrier CQI. The TD and FD packet schedulers can be designed using possibly different metrics, depending on the characteristics desired of the scheduler, for example, high throughput, fairness, or a low packet drop rate.
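The two-phase structure can be sketched as follows. This is a simplified illustration rather than a standardized algorithm: the SCS size and the use of a proportional-fair (PF) metric in both stages are assumptions chosen for concreteness, and real schedulers may use different TD and FD metrics, as noted above.

```python
# Minimal two-phase packet-scheduler sketch: the TD stage builds the scheduling
# candidate set (SCS) from wideband CQI, and the FD stage assigns each SB to
# the SCS user with the largest per-SB proportional-fair metric.
def td_select_candidates(users, scs_size):
    # users: dict uid -> {"wideband_rate": estimated full-band rate,
    #                     "avg_rate": past average throughput}
    ranked = sorted(
        users,
        key=lambda u: users[u]["wideband_rate"] / max(users[u]["avg_rate"], 1e-6),
        reverse=True)
    return ranked[:scs_size]

def fd_allocate(scs, per_sb_rate, avg_rate, num_sbs):
    # per_sb_rate[u][k]: achievable rate of user u on SB k (derived from CQI)
    # avg_rate[u]:       past average throughput of user u
    return {k: max(scs, key=lambda u: per_sb_rate[u][k] / max(avg_rate[u], 1e-6))
            for k in range(num_sbs)}
```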

We discuss scheduling on the downlink in Section 3.1 and scheduling on the uplink in Section 3.2.

3.1. Downlink Scheduling

By taking advantage of the time and frequency channel quality variations of the SBs associated with different users, a scheduler can greatly improve average system throughput compared to a round-robin scheduler which is blind to these variations. The basic idea is to allocate each SB to the user who can best make use of the SB according to some utility function. Scheduling with a single-carrier-based downlink access technology, such as 3GPP Release 6 HSDPA, can only be time-opportunistic. In contrast, OFDMA schedulers can be both time- and frequency-opportunistic.

The performance of different aspects of FDPS in a variety of scenarios has been studied in [6–14]. Results in [7], obtained using a detailed link simulator, indicate that a PF FD PS can provide average system throughput and cell-edge user bit rate gains of about 40% compared to a time-opportunistic scheduler which does not use subcarrier CQI information. As is to be expected, the magnitudes of the gains depend on many factors, including CQI accuracy, CQI frequency resolution, and the number of active users. A CQI error standard deviation of 1 dB, a frequency resolution similar to the channel coherence bandwidth, and an SCS size of at least 5 are found to be adequate.

In [9], the spectral efficiencies and cell-edge user throughputs for three different packet scheduler combinations, namely, TD-BET/FD-TTA, TD-PF/FD-PF, and TD-MT/FD-MT, are compared with those for a reference round-robin scheduler with one user scheduled per scheduling period. The benefits of multiuser diversity and channel-dependent scheduling are illustrated. The throughput performances of an optimal and a suboptimal FD PS are studied as a function of subcarrier correlation in [10]. An interesting observation is that the throughput tends to increase with correlation. A scheduler which takes into account the status of UE buffers, in addition to channel conditions, is proposed in [11] to reduce the PLR due to buffer overflow while maintaining a high system throughput and good fairness among users. Significant improvements are reported relative to the RR, PO, and PF schedulers. The design of a low-complexity queue- and channel-aware scheduler which can support QoS for different types of traffic is presented in [20]. The use of decoupled TD and FD metrics for controlling throughput fairness among users is investigated in [18]. Simulation results show that a large gain in coverage can be obtained at the cost of a small throughput decrease. The authors suggest the use of FD-PF in LTE, complemented by TD-PSS or FD metric weighting depending on the number of active UEs in the cell. In [12], the impact of downlink CC signaling overhead and erroneous decoding of CC information by UEs on scheduler performance is examined.

MBMS provides a means for distributing information from the eNB to multiple UEs. A group of MBMS subscribers listen to a common channel, sharing the same SBs. In order to meet the BLER target, the MCS has to be selected based on the UE with the poorest SB channel quality in the group. A MBMS FD PS is proposed in [17] which attempts to allocate SBs so as to improve the channel quality of the poorest quality UE in the group. Simulation results are presented to illustrate the effectiveness of the proposed scheduler.

There are a few studies on schedulers designed to operate in combination with MIMO techniques [21, 22]. A MIMO-aware FDPS algorithm, based on the PF metric, is presented in [21]. Both single-user and multiuser MIMO cases are treated. In the single-user case, only one user can be assigned to a given time-frequency resource element, whereas in the multiuser case, multiple users can be assigned to different streams on the same time-frequency element. A finite-buffer best-effort traffic model is assumed and a 1×2 maximal ratio combining case is used as the reference. Simulation results show that in a macrocell scenario, the average cell throughput for MIMO without precoding does not improve relative to the reference case; with precoding, a gain of 20% is observed. The throughput gains in a microcell scenario are significantly larger due to the larger SINR dynamic range.

3.2. Uplink Scheduling

Similar to OFDMA schedulers used on the DL, SC-FDMA schedulers for the UL can be both time- and frequency-opportunistic. An important difference between DL and UL scheduling is that SB CQI reporting is not needed for UL since the scheduler is located at the eNB which can measure UL channel quality through SRS.

Scheduling algorithms for the LTE UL have been discussed by many authors. In [24], a heuristic localized gradient algorithm is used to maximize system throughput given the constraint that subcarriers assigned to a specific user must be contiguous. This work is extended to dynamic traffic models and finite UE buffers in [25]. A variety of channel-aware schedulers with differing optimization metrics and computational complexities are described in [26–30, 32, 34, 35].
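A hedged sketch of the contiguity constraint follows. It is not the algorithm of [24] itself, only a brute-force illustration of giving each uplink user a contiguous run of SBs that maximizes a summed per-SB metric; the fixed window size and the greedy user order are assumptions.

```python
# Illustration of the SC-FDMA contiguity constraint: each uplink user must get
# a contiguous run of SBs. This greedy sketch gives each user, in turn, the
# best still-free contiguous window of a fixed size.
def contiguous_uplink_allocation(metric, num_sbs, users, window=4):
    # metric[u]: list of per-SB utilities for user u (e.g., estimated rates)
    free = [True] * num_sbs
    allocation = {}
    for u in users:
        best_start, best_val = None, float("-inf")
        for start in range(num_sbs - window + 1):
            if all(free[start:start + window]):
                val = sum(metric[u][start:start + window])
                if val > best_val:
                    best_start, best_val = start, val
        if best_start is not None:
            allocation[u] = list(range(best_start, best_start + window))
            for k in allocation[u]:
                free[k] = False
    return allocation
```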

A scheduler which minimizes transmit power subject to a bound on average delay is treated in [31]. The scheduler chooses the power and transmission rate based on channel and UE buffer states. It is observed that substantial reductions in transmit power can be achieved by allowing small increases in average delay.

To provide the QoS differentiation needed to support diverse services such as VoIP, web browsing, gaming, and e-mail, a combined AC/PS approach is proposed in [36]. Both AC and PS are assumed to be QoS- and channel-aware. The AC decides to accept or deny a new service request based on its channel condition and whether its QoS can be satisfied without compromising the required QoS of ongoing sessions. The PS dynamically allocates resources to ongoing sessions to meet their required QoS. Simulation results are presented to illustrate the effectiveness of the AC/PS in meeting the QoS requirements of different types of users in a mixed traffic scenario.

3.3. VoIP

Even though the major growth will be in data services, telephony will continue to represent a significant portion of the traffic carried in LTE networks. Voice applications will be supported using VoIP. Scheduling for VoIP is discussed in [37–41].

The nature of the carried traffic plays an important role in how scheduling should be done. VoIP users are active only half of the time and VoIP traffic is characterized by the frequent and regular arrival of short packets. VoIP service also has tight packet delay and PLR requirements. While dynamic scheduling based on frequent downlink transmit format signalling and uplink CQI feedback can exploit user channel diversity in both frequency and time domains, it requires a large signalling overhead. This overhead consumes time-frequency resources, thereby reducing the system capacity.

In order to reduce signalling overhead for VoIP-type traffic, persistent scheduling has been proposed. The idea behind persistent scheduling is to preallocate a sequence of frequency-time resources with a fixed MCS to a VoIP user at the beginning of a specified period. This allocation remains valid until the user receives another allocation due to a change in channel quality or an expiration of a timer. A big disadvantage of such a scheme is the lack of flexibility in the time domain which may result in a problematic difference between allocated and actually needed resources. To reduce the wastage when a VoIP call does not make full use of its allocated resources, a scheduling scheme is proposed in [40] which dynamically allows two “paired” VoIP users to share resources.
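The bookkeeping behind persistent scheduling can be sketched as follows. This is a minimal illustration under assumed parameters: the timer length, the CQI drift threshold, and the class interface are not taken from the standard or the cited works.

```python
# Minimal sketch of persistent-scheduling state for one VoIP user: the
# allocation (SBs + MCS) is fixed at setup and only refreshed when a timer
# expires or the reported channel quality drifts too far from the value used
# at grant time. Timer length and CQI margin are illustrative assumptions.
class PersistentGrant:
    def __init__(self, sbs, mcs, cqi_at_grant, timer_ttis=200, cqi_margin=3):
        self.sbs, self.mcs = sbs, mcs
        self.cqi_at_grant = cqi_at_grant
        self.remaining_ttis = timer_ttis
        self.cqi_margin = cqi_margin

    def needs_reallocation(self, current_cqi):
        # Called once per TTI; returns True when a fresh allocation is needed.
        self.remaining_ttis -= 1
        timer_expired = self.remaining_ttis <= 0
        channel_changed = abs(current_cqi - self.cqi_at_grant) > self.cqi_margin
        return timer_expired or channel_changed
```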

In [37], it is noted that meeting the QoS requirements of VoIP packets by giving these packets absolute priority may negatively impact other multimedia services and degrade the overall system performance. A scheduling scheme which dynamically activates a VoIP packet priority mode (to satisfy voice QoS requirements) and adjusts its duration based on VoIP PLRs (to minimize system performance degradation) is proposed. Simulation results are provided to demonstrate the effectiveness of the scheme.

Semipersistent scheduling, which represents a compromise between rigid persistent scheduling on the one hand, and fully flexible dynamic scheduling on the other, has been proposed by several authors. In [38], initial transmissions are persistently scheduled so as to reduce signalling overhead and retransmissions are dynamically scheduled so as to provide adaptability. Another semipersistent scheduling scheme based on that in [40] is studied in [41].

3.4. Channel Quality Indicator

It is well documented that an accurate knowledge of channel quality is important for high-performance scheduler operation. From the perspective of downlink scheduling, the channel quality is reported back to the eNB by the UE (over the uplink) using a CQI value. If a single “average” CQI value is used to describe the channel quality for a large group of SBs, the scheduler will be unable to take advantage of the quality variations among the subcarriers. This may lead to an unacceptable performance degradation for frequency-selective channels. On the other hand, if a CQI value is used for each SB, many CQI values will need to be reported back, resulting in a high signalling overhead.

Several CQI reporting schemes and associated trade-offs are discussed in [10, 15, 16]. Two broad classes of reporting schemes can be identified. Schemes in the first class report a compressed version of the CQI information for all SBs to the eNB. Schemes in the second class report only a limited number of the SBs with the highest CQI values.

The dependence of the system throughput on the number of SBs whose CQI values are reported by UEs is discussed in [10]. In [15], a "distributed-Haar" CQI reporting scheme is proposed and simulation results are reported which show that it can provide a better trade-off between throughput performance and CQI feedback signaling overhead, compared to a number of other CQI-compression schemes. The effect of varying FD and TD granularity in CQI reporting is discussed in [16]. It is concluded that a CQI measurement interval of 2 ms and a frequency resolution of 2 SBs per CQI are adequate in most cases. The spectral efficiencies for four different CQI reporting schemes are also compared in [16] and it is suggested that the Best-M average and Threshold-based schemes provide a good trade-off between system performance and UL signaling overhead.
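One of the reporting schemes mentioned above, Best-M average, can be sketched as follows. The exact LTE quantization and signalling format are not modelled, and the value of M is an assumption for illustration.

```python
# Sketch of Best-M average CQI reporting: the UE reports the indices of its M
# best SBs plus a single CQI value averaged over those SBs, instead of one CQI
# value per SB, which reduces uplink signalling overhead.
def best_m_average_report(per_sb_cqi, m=5):
    ranked = sorted(range(len(per_sb_cqi)), key=lambda k: per_sb_cqi[k], reverse=True)
    best = ranked[:m]
    avg_cqi = sum(per_sb_cqi[k] for k in best) / m
    return {"best_sbs": sorted(best), "avg_cqi": avg_cqi}
```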

4. Interference Mitigation

ICI is a major problem in LTE-based systems. As the cell-edge performance is particularly susceptible to ICI, improving the cell-edge performance is an important aspect of LTE systems design.

As a UE moves away from the serving eNB, the degradation in its SINR can be attributed to two factors. On the one hand, the received desired signal strength decreases. On the other hand, ICI increases as the UE moves closer to a neighbouring eNB, as illustrated in Figure 2.

One approach for reducing ICI from neighbouring eNBs is to use enhanced frequency reuse techniques. To motivate such techniques, we first take a look at conditions under which conventional frequency reuse can be beneficial.

Let $\gamma^{(n)}_{i,j}$ be the SINR of UE $i$, which is served by eNB $j$, under an FRF $n$ configuration. The resulting spectral efficiency is given by

$$C_n = \frac{1}{n}\log_2\left(1+\gamma^{(n)}_{i,j}\right). \quad (1)$$

To assess the benefits of conventional frequency reuse, it is convenient to define the spectral efficiency improvement factor as

$$\Delta_n \triangleq \frac{C_n}{C_1}. \quad (2)$$

Thus, $\Delta_n$ is a measure of the spectral efficiency improvement obtainable with an FRF $n$ configuration compared to full reuse, that is, $n=1$. For the FRF $n$ configuration to have a higher spectral efficiency than the full reuse case, we require that $\Delta_n>1$. Using (1) and (2), this requirement can be shown to be equivalent to

$$\gamma^{(n)}_{i,j} > n\,\gamma^{(1)}_{i,j} + \binom{n}{2}\left(\gamma^{(1)}_{i,j}\right)^{2} + \cdots + \left(\gamma^{(1)}_{i,j}\right)^{n}, \quad (3)$$

where the right-hand side is the binomial expansion of $\bigl(1+\gamma^{(1)}_{i,j}\bigr)^{n}-1$. In the case of a UE which is close to its serving eNB, hereafter referred to as a cell-center UE, the SINR value is usually high, that is, $\gamma^{(1)}_{i,j}\gg 1$, and (3) can be approximated as

$$\gamma^{(n)}_{i,j} > \left(\gamma^{(1)}_{i,j}\right)^{n}. \quad (4)$$

Thus, for cell-center UEs, an SINR increase on the order of the $n$th power of $\gamma^{(1)}_{i,j}$ must be achieved in order to justify an FRF $n$ scheme. On the other hand, for cell-edge UEs, for which $\gamma^{(1)}_{i,j}\ll 1$, (3) can be approximated by

$$\gamma^{(n)}_{i,j} > n\,\gamma^{(1)}_{i,j}. \quad (5)$$

While it is possible to obtain a large SINR improvement for cell-edge UEs by eliminating ICI, the SINR improvement will be smaller for cell-center UEs, as ICI is not as dominant for these UEs. Also, since a cell-center UE SINR is typically much higher than unity, the spectral efficiency increases only logarithmically with SINR. However, for a cell-edge UE with SINR value much less than unity, the spectral efficiency increases almost linearly with SINR. From (4) and (5), we can conclude that, in general, only cell-edge UE spectral efficiency will improve from using a high FRF value.
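To make the comparison concrete, the short Python computation below evaluates the improvement factor $\Delta_n$ of (2) for one assumed cell-edge and one assumed cell-center SINR pair. The numerical values are illustrative assumptions, not results from the cited works.

```python
import math

# Illustrative check of (1)-(2): spectral efficiency C_n = (1/n) log2(1 + SINR)
# and improvement factor Delta_n = C_n / C_1, for assumed SINR values.
def spectral_efficiency(sinr, n):
    return (1.0 / n) * math.log2(1.0 + sinr)

def improvement_factor(sinr_full_reuse, sinr_reuse_n, n):
    return spectral_efficiency(sinr_reuse_n, n) / spectral_efficiency(sinr_full_reuse, 1)

# Assumed cell-edge UE: SINR 0.2 with full reuse, 1.5 with FRF 3 (ICI removed).
print(improvement_factor(0.2, 1.5, 3))    # > 1: reuse 3 pays off at the cell edge
# Assumed cell-center UE: SINR 10 with full reuse, 25 with FRF 3.
print(improvement_factor(10.0, 25.0, 3))  # < 1: reuse 3 wastes bandwidth here
```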

4.1. Fractional Frequency Reuse

The above discussion indicates that it is not bandwidth-efficient to use the same FRF value for the entire cell. One way to improve the cell-edge SINR while maintaining a good spectral efficiency is to use an FRF greater than unity for the cell-edge regions and an FRF of unity for the cell-center regions [42]. Such an FFR scheme is illustrated in Figure 3. In this example, the entire bandwidth is divided into four segments, $\{f_1, f_2, f_3, f_4\}$. The idea is to apply a unity FRF scheme using $f_1$ for the cell-center regions, whereas $f_2$, $f_3$, and $f_4$ are used in an FRF 3 configuration for the cell-edge regions.
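A minimal sketch of this partition, assuming a simple cell-index-based rotation of the edge segments, is given below; the cell indexing is an assumption, not part of the scheme in [42].

```python
# Sketch of the FFR partition of Figure 3: one common segment f1 is reused with
# factor 1 in every cell center, while f2/f3/f4 are rotated among neighbouring
# cells for their edge regions (reuse 3).
def ffr_bands(cell_id):
    edge_segments = ["f2", "f3", "f4"]
    return {"center": "f1", "edge": edge_segments[cell_id % 3]}

# Example: three neighbouring cells receive disjoint edge bands.
print([ffr_bands(c) for c in range(3)])
```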

4.2. Soft Frequency Reuse

In FFR, adjacent cell-edge regions do not share the same frequency segments, thereby resulting in a lower ICI. However, this strict no-sharing policy may under-utilize available frequency resources in certain situations. In order to avoid the high ICI levels associated with a unity FRF configuration, while providing more flexibility to the FFR scheme, an SFR scheme has been proposed, in which the entire bandwidth can be utilized [42]. As in FFR, the available bandwidth is divided into orthogonal segments, and each neighboring cell is assigned a cell-edge band. In contrast to FFR, a higher power is allowed on the selected cell-edge band, while the cell-center UEs can still have access to the cell-edge bands selected by the neighbouring cells, but at a reduced power level. In this way, each cell can utilize the entire bandwidth while reducing the interference to the neighbours. An example SFR scheme is illustrated in Figure 4. Note that this scheme can improve the SINR of the cell-edge UEs using a greater than unity FRF, while degrading the SINR of the cell-center UEs. This degradation is due to the overlap in frequency resources between the cell-edge band of the neighbouring cells, and the cell-center band of the serving cell. However, as mentioned earlier, the cell-edge performance improvement is almost linear while the degradation to the cell-center UEs is logarithmic. In SFR, the power ratio between the cell-edge band and the cell-center band can be an operator-defined parameter, thereby increasing the flexibility in system tuning. The parameters for both FFR and SFR can be varied semistatically, depending on the traffic loads and user channel qualities.
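The SFR power allocation can be sketched as a per-segment power mask, as below. The number of segments, the cell-index-based rotation, and the power ratio value are assumptions; in practice the power ratio is the operator-tunable parameter mentioned above.

```python
# Sketch of an SFR power mask: the cell transmits over the whole band, but at
# full power only on its designated cell-edge segment and at a reduced power
# (ratio alpha) on the remaining segments used by cell-center UEs.
def sfr_power_mask(cell_id, num_segments=3, alpha=0.25):
    edge_segment = cell_id % num_segments
    # Power weight per segment: 1.0 on the edge band, alpha elsewhere.
    return [1.0 if s == edge_segment else alpha for s in range(num_segments)]

print(sfr_power_mask(0))  # e.g. [1.0, 0.25, 0.25]
print(sfr_power_mask(1))  # e.g. [0.25, 1.0, 0.25]
```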

Variations can be made to the above reuse schemes. Examples are given in [43–45]. Instead of using a fraction of the bandwidth as shown in Figure 4, the full bandwidth can be utilized for the cell-center UEs, at the expense of higher ICI to the cell-edge UEs. An example of a reuse 3/7 scheme [46, 47] is shown in Figure 5, where frequency reuse is applied to the cell-edge UEs of the respective sectors.

4.3. Adaptive Frequency Reuse Schemes

One of the main assumptions behind the FFR and SFR schemes is that the traffic load within each cell remains stable throughout the life-time of the deployment. However, in practice, this is not always the case, and further improvements may result in better resource utilization and system performance. In [48], simulation results suggest that a fixed FFR or SFR may not provide an improvement over the full reuse scheme, and that a more dynamic reuse scheme may be beneficial.

In [49], the entire bandwidth is divided into $N$ subbands; $X$ subbands are used for the cell-edge UEs, and the remaining $N-3X$ subbands are allocated to the cell-center UEs. The $X$ subbands of neighbouring cell-edge regions are orthogonal, while the $N-3X$ cell-center subbands are the same for all cells. The value of $X$ can be adjusted based on the traffic load ratio between the cell-edge and cell-center regions. In [50], an improvement on the above scheme is made which takes into account not only the traffic ratio between the edge and center regions of the serving cell, but also that of the neighbouring cells. For example, when a cell detects that heavy traffic is present at its cell edge, but not at the edges of a subset $\mathcal{C}_n$ of the neighbouring cells, more frequency resources can be allocated to its edge UEs by borrowing some edge resources from the cells in $\mathcal{C}_n$. For this scheme, some form of intercell communication may be required in order for a cell to assess the edge traffic loads of its neighbouring cells.

Recently, messages for inter-eNB communication have been standardized in LTE Release 8 to allow different eNBs to exchange interference-related information. For example, an uplink HII message is defined per PRB to indicate that the sending eNB regards that PRB as highly interference-sensitive [51]. The receiving eNB(s) would then try to avoid allocating these PRBs to their cell-edge UEs. In [52], the cell-edge bandwidth is allocated dynamically according to the traffic load. In this scheme, each cell maintains two disjoint sets of PRBs, one for the cell-edge region and one for the cell-center region. When the cell-edge load is high, the cell compares its own reserved cell-edge PRBs with those included in the neighbouring HII(s) in order to determine the "borrowable" cell-edge PRBs, and selects the PRBs that would minimize ICI. In order to avoid intracell interference, the borrowed PRBs are not assigned to the cell-center UEs.
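A hedged sketch of this HII-assisted borrowing rule is given below. The data structures (PRB index sets) and the simple first-fit selection are assumptions for illustration; the actual scheme in [52] also ranks candidates by expected ICI.

```python
# Sketch of HII-assisted PRB borrowing: when the cell-edge load is high, the
# cell looks for PRBs outside its own edge set that are not flagged as
# high-interference-sensitive by any neighbour's HII, and borrows those for
# its edge UEs (never re-using them for center UEs, to avoid intracell
# conflicts, as described above).
def borrowable_edge_prbs(all_prbs, own_edge_prbs, neighbour_hii_sets, needed):
    candidates = [p for p in all_prbs
                  if p not in own_edge_prbs
                  and all(p not in hii for hii in neighbour_hii_sets)]
    return candidates[:needed]
```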

In [53], a flexible frequency reuse scheme is proposed, in which the entire bandwidth is divided into the cell-center band and the cell-edge band(s). In this scheme, the cell-center band can “borrow” a fraction of the cell-edge bands, thereby providing a way to adapt the bandwidth according to the nonhomogeneous nature of the traffic load within a cell. A similar load-based frequency reuse scheme is discussed in [54].

In [55], a decentralized, game-theoretic approach is proposed, whereby each eNB iteratively selects a set of PRBs that minimizes its own perceived interference from its neighbours. At each allocation instance, a new set of PRBs is selected with a certain probability if the perceived interference can be reduced by a specified threshold.
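The reselection rule just described can be sketched as follows. The switching probability, the threshold, and the interference function are assumptions for illustration, not the exact formulation of [55].

```python
import random

# Sketch of the decentralized reselection rule: at each allocation instance an
# eNB evaluates a candidate PRB set and, with some probability, switches to it
# if its perceived interference drops by more than a threshold.
def maybe_reselect(current_set, candidate_set, interference, p_switch=0.5, threshold=1.0):
    # interference(prb_set) -> perceived interference level for that PRB set
    if interference(candidate_set) < interference(current_set) - threshold:
        if random.random() < p_switch:
            return candidate_set
    return current_set
```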

4.4. Graph-Based Approach

The problem of interference mitigation can be formulated as an interference graph in which the UEs correspond to the nodes and relevant interference relations between UEs correspond to the respective edges. To minimize interference, connected UEs should not be allocated the same set of resources. Such a problem is directly related to the graph coloring problem in which each color corresponds to a disjoint set of frequency resources. The goal is for each node in a graph to be assigned a color in such a way that no connected nodes are assigned the same color. In [56], a centralized graph coloring approach is proposed, where a “generalized” frequency reuse pattern is assigned to UEs at the cell edge by a centralized coordinator. In order to generate the interference graph, UEs are required to measure the interference and path losses, and report back to their respective eNBs. Subsequently, every eNB sends the necessary information to the central coordinator, which then generates the interference graph and performs the necessary optimization. Note that the definition of the edges of a graph is model- and problem-specific. Other graph-based approaches can be found in [57, 58].
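To illustrate the connection to graph coloring, the snippet below applies a standard greedy coloring heuristic to a small interference graph. It is not the centralized scheme of [56]; the node ordering and the toy graph are assumptions.

```python
# Greedy colouring of an interference graph: UEs are nodes, edges connect UEs
# that should not share frequency resources, and each colour stands for a
# disjoint resource set.
def greedy_colouring(adjacency):
    # adjacency: dict node -> set of neighbouring nodes
    colours = {}
    for node in sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True):
        taken = {colours[nb] for nb in adjacency[node] if nb in colours}
        colour = 0
        while colour in taken:
            colour += 1
        colours[node] = colour
    return colours

# Example: UEs a-b and b-c interfere; a and c may share a resource set.
print(greedy_colouring({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
```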

4.5. Interactions with Scheduler

It is important to note that the potential benefits of the FFR or SFR schemes may not always be realized [45, 48, 59, 60]. On the one hand, ICIC can provide interference reduction for cell-edge users, thereby improving the cell-edge user bit rate. On the other hand, in a unity FRF system, the reduced SINR due to ICI can be compensated for by allocating more bandwidth to these users. By compensating for the lower SINR with more bandwidth, that is, more resource blocks, allocated to suitable users by an intelligent scheduler, good performance can still be obtained even without ICIC. In this case, the benefit of ICIC may not justify the added complexity of intercell communications [59]. However, it is pointed out in [60] that if a limit is imposed on the peak rate that can be used for a particular bearer, the benefit of bandwidth compensation is not fully realized. It is found that ICIC is useful mostly for low-to-moderate traffic loads. A similar observation is made in [61].

The frequency of CQI feedback involves a trade-off between the temporal diversity gain and the effective interference reduction achieved by a frequency-selective scheduler. More frequent CQI feedback provides good temporal adaptation to channel variations, thereby improving the spectral efficiency. However, the usefulness of frequent CQI feedback also depends on the short-term interference caused by the instantaneous scheduling decisions of the neighbouring cells. Since this interference is weakly correlated with that in subsequent subframes, the value of the CQI reports, especially those from cell-edge users, is greatly reduced. Thus, from the point of view of interference mitigation, less frequent scheduling decisions can be beneficial. It is suggested in [62] that CQI feedback filtering should be performed in order to average out the temporal variations of the ICI, thereby providing a more stable CQI value for the cell-edge users.
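A minimal sketch of such CQI filtering, assuming a simple exponential moving average, is shown below; the forgetting factor is an assumed value and the exact filter in [62] may differ.

```python
# Sketch of CQI feedback filtering: an exponential moving average smooths out
# fast ICI fluctuations so the scheduler sees a more stable CQI value for
# cell-edge users.
def filtered_cqi(reports, beta=0.1):
    avg = reports[0]
    for cqi in reports[1:]:
        avg = (1 - beta) * avg + beta * cqi  # slow adaptation averages out ICI bursts
    return avg
```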

5. Conclusion

A high-level survey of work on resource scheduling and interference mitigation in 3GPP LTE was presented. These two functions will be key to the success of LTE. The next step in the evolution of LTE is LTE-A, a 4G system which promises peak data rates in the Gbps range and improved cell-edge performance. Important scheduling- and interference mitigation-related technical issues which require further exploration include [63]: (1) the use of relaying techniques, which can provide a relatively inexpensive way of increasing spectral efficiency, system capacity, and area coverage; preliminary studies can be found in [23, 33, 64]; (2) DL and UL coordinated multipoint transmission/reception to improve high data rate coverage and cell-edge throughput; for the DL, this refers to coordinated scheduling of transmissions from multiple geographically separated transmission points, while for the UL, it involves different types of coordinated reception at multiple geographically separated points; (3) support for UL spatial multiplexing of up to four layers and DL spatial multiplexing of up to eight layers to increase bit rates.

Another general area deserving attention is the design of low-complexity scheduling/interference mitigation schemes which provide near optimal performance.

List of Acronyms

3GPP: Third Generation Partnership Project
AC: Admission control
AMC: Adaptive modulation and coding
BLER: Block error rate
CQI: Channel quality indicator
CC: Control channel
eNB: Enhanced NodeB (3GPP term for base station)
E-UTRA: Evolved universal terrestrial radio access
FD: Frequency domain
FDD: Frequency division duplex
FDPS: Frequency domain packet scheduling
FFR: Fractional frequency reuse
FRF: Frequency reuse factor
HII: High-interference indicator
HSDPA: High-speed downlink packet access
HSUPA: High-speed uplink packet access
HSPA: Refers to HSDPA and/or HSUPA
ICI: Intercell interference
ICIC: Intercell interference coordination
PRB: Physical resource block, same as RB
LTE: Long-term evolution
LTE-A: Long-term evolution-Advanced
MBMS: Multimedia broadcast/multicast service
MCS: Modulation and coding scheme
OFDM: Orthogonal frequency division multiplexing
OFDMA: Orthogonal frequency division multiple access
PAPR: Peak-to-average power ratio
PF: Proportional fair
PLR: Packet loss rate
PO: Pure opportunistic
PS: Packet scheduler
PSS: Priority set scheduling
QoS: Quality of service
RB: Resource block, same as PRB
RR: Round robin
SB: Scheduling block
SC-FDMA: Single-carrier frequency division multiple access
SCS: Scheduling candidate set
SFR: Soft frequency reuse
SINR: Signal to interference and noise ratio
SRS: Sounding reference signal
TD: Time domain
TDD: Time division duplex
TDPS: Time domain packet scheduling
TTI: Transmission time interval
UE: User equipment (3GPP term for mobile terminal)
UMTS: Universal Mobile Telecommunications System
UTRA: UMTS terrestrial radio access
UTRAN: UMTS terrestrial radio access network
VoIP: Voice over Internet Protocol
W-CDMA: Wideband code division multiple access.

Acknowledgments

This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada under Grant OGP0001731 and by the UBC PMC-Sierra Professorship in Networking and Communications.