Abstract

Network quality of experience (QoE) metrics are proposed in order to capture the overall performance of radio resource management (RRM) algorithms in terms of video quality perceived by the end users. Metrics corresponding to average, geometric mean, and minimum QoE in the network are measured when Max C/I, proportional fair, and Max-Min RRM algorithms are implemented in the network. The objective is to ensure a fair QoE for all users in the network. In our study, we investigate both the uplink (UL) and downlink (DL) directions, and we consider the use of distributed antenna systems (DASs) to enhance the performance. The performance of the various RRM methods in terms of the proposed network QoE metrics is studied in scenarios with and without DAS deployments. Results show that a combination of DAS and fair RRM algorithms can lead to significant and fair QoE enhancements for all the users in the network.

1. Introduction

With the increase of video traffic in state-of-the-art cellular networks, it is imperative to enhance the quality of service (QoS) of video transmissions, usually represented by the video peak signal to noise ratio (PSNR). On the other hand, video quality of experience (QoE) is gaining significant interest as a method to quantify the multimedia experience of mobile users; for example, see [1]. It can be considered a “perceived QoS” and reflects, better than QoS, the quality of the video as seen by the mobile users. QoE measures are based on subjective assessment of video quality by the users. Mean opinion scores (MOS) are then collected and analyzed in order to derive an objective QoE metric translated into a mathematical formula, similarly to QoS.

Most of the QoE investigations in the literature, for example, [1–4], consider link level QoE, that is, the QoE perceived by a given user in the network. The novelty in this work is in proposing metrics for assessing the QoE performance over the whole network, taking into account fairness constraints in the QoE perceived by different users. Furthermore, we study the impact of different radio resource management (RRM) algorithms on optimizing the network QoE performance and ensuring fairness towards the various users in the network.

The investigation is performed under the framework of the long term evolution (LTE) system. In LTE, orthogonal frequency division multiple access (OFDMA) is the access scheme for the downlink (DL), that is, the direction of transmission from the base station (BS) to the users. In the LTE uplink (UL), that is, the direction of transmission from the users to the BS, single carrier frequency division multiple access (SCFDMA), a modified form of OFDMA, is used [5]. In LTE, the available spectrum is divided into resource blocks (RBs), each consisting of 12 adjacent subcarriers. The assignment of an RB takes place every 1 ms, agreed to be the duration of one transmission time interval (TTI), or the duration of two 0.5 ms slots [6]. The LTE standard imposes the UL constraint that the RBs allocated to a single user should be consecutive, with equal power allocation over the RBs [5–7].

An important method to boost performance in cellular systems is the deployment of distributed antenna systems (DASs). DASs are used to increase the coverage and capacity of wireless networks in a cost effective way. Although they were already used in previous generations of wireless systems, DASs are receiving significant research attention for their deployment in LTE; for example, see [8]. The impact of DAS on LTE uplink scheduling was studied in [9], and the uplink and downlink performance of DASs was studied in [10] in the context of public safety networks.

Generally, a DAS consists of a single central BS connected to several remote antenna heads (RAHs) distributed throughout the cell area. The BS controlling the RAHs could be colocated with any of the RAHs or placed in a separate location. The BS could be connected to the RAHs via wired links (e.g., fiber optic cables). Connection topologies include star, chain, tree, and ring topologies [11]. Since an RAH is composed of a remote antenna connected to the BS, centralized control can be performed by the BS as in the conventional case, while the RAHs allow extended coverage and/or more user capacity. In addition, for fixed coverage and user capacity, the RAHs provide the users with better QoS, since the distance from a user to the nearest RAH will be smaller than the distance to the central BS antenna in the conventional case, which leads to a higher signal to noise ratio (SNR).

Another novelty of this work is the investigation of the role of LTE DAS systems in enhancing network QoE performance, both in the UL and DL directions. LTE RRM algorithms to ensure fair QoE optimization are investigated and compared in scenarios with and without DAS deployments.

The paper is organized as follows. Video transmission and QoS/QoE metrics are overviewed in Section 2. The proposed network QoE metrics are derived in Section 3. LTE resource allocation is described in Section 4. Section 5 analyzes radio resource management in LTE with different utility functions. The simulation results are presented and analyzed in Section 6. Finally, conclusions are drawn in Section 7.

2. Video Transmission over Wireless Channels

We consider that video sequences are encoded into groups of pictures (GOPs) according to the H.264 standard using the joint scalable video model (JSVM) software. Each GOP of $N$ frames is considered to consist of one I-frame and $N-1$ P-frames. When a GOP is available for real-time transmission, it should be transmitted within a duration $T_{\mathrm{GOP}}$. When $T_{\mathrm{GOP}}$ has elapsed, all the frames that are not received are assumed lost, and the transmission of a new GOP begins. Due to the interdependencies of the video frames, the loss of a frame in a GOP leads to the loss of all subsequent frames in the GOP, until the next I-frame is received. The loss of an I-frame leads to the loss of all the frames in the GOP [12].

2.1. Video QoS Evaluation

To measure QoS in video transmission, one of the most widely used metrics is the mean-squared error distortion. Two types of distortion affect a video sequence: source distortion and loss distortion. Source distortion depends on the compression method at the source and is beyond the scope of this paper. Loss distortion corresponds to the distortion caused by frames lost during transmission over the wireless channels. Hence, in this paper, loss distortion is considered. The distortion for replacing a frame $F_n$, of dimensions $W \times H$ pixels, with an estimated frame $\hat{F}_n$ can be computed as follows [12]:

$$D(F_n, \hat{F}_n) = \frac{1}{W \cdot H} \sum_{x=1}^{W} \sum_{y=1}^{H} \left[ F_n(x, y) - \hat{F}_n(x, y) \right]^2, \qquad (1)$$

where $F_n(x, y)$ indicates the pixel value of frame $F_n$ at position $(x, y)$. The peak signal to noise ratio (PSNR) in this case can be expressed as

$$\mathrm{PSNR} = 10 \log_{10} \frac{\left(2^b - 1\right)^2}{D}, \qquad (2)$$

with $b$ the number of bits used to encode a single pixel in the picture frame.
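For illustration, (1) and (2) can be computed as in the following minimal NumPy sketch; the frame sizes and the 8-bit pixel depth are assumptions made for the example:

```python
import numpy as np

def mse_distortion(frame, estimate):
    """Mean-squared error distortion of (1) between a frame and its estimate."""
    diff = frame.astype(np.float64) - estimate.astype(np.float64)
    return np.mean(diff ** 2)

def psnr_db(distortion, bits_per_pixel=8):
    """PSNR of (2) in dB, with bits_per_pixel the number of bits encoding one pixel."""
    peak = (2 ** bits_per_pixel - 1) ** 2
    return 10.0 * np.log10(peak / distortion)

# Illustrative usage with two random QCIF-sized (176x144) luminance frames.
original = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
concealed = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
d = mse_distortion(original, concealed)
print(f"distortion = {d:.1f}, PSNR = {psnr_db(d):.1f} dB")
```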

The total (or cumulative) loss distortion depends on the frame type (I-frame, P-frame) and position (in the same GOP, the loss of P-frame $i$ leads to more distortion than the loss of P-frame $j$ when $i < j$). It also depends on the coding method and the error concealment method used. The most common approach for error concealment is known as the previous frame concealment method. It consists of repeating the last correctly received frame until the next I-frame is received [12]. An example is shown in Figure 1, assuming frame number 5 is lost. The notation in Figure 1 is used to indicate which frame is used for error concealment to replace the lost frame at a given position. In the shown example, frame number 4 would be used to replace all the remaining frames in the GOP. The total loss distortion when frame $k$ is lost in a GOP of $N$ frames with one I-frame and $N-1$ P-frames, using the previous frame concealment method, can be obtained from (1) as follows:

$$D_{\mathrm{loss}}(k) = \sum_{n=k}^{N} D(F_n, F_{k-1}). \qquad (3)$$

The expression in (3) is obtained by replacing all lost frames in the GOP from frame $k$ onwards by the last correctly received frame $F_{k-1}$. When an I-frame is lost, it is replaced by the last correctly received frame from the previous GOP.
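A sketch of the total loss distortion in (3) under previous frame concealment follows, reusing the mse_distortion helper above; the list-of-frames representation and the indexing convention (frame 0 being the I-frame) are illustrative assumptions:

```python
def gop_loss_distortion(gop_frames, lost_index, last_frame_prev_gop):
    """
    Total loss distortion of (3) when frame `lost_index` is lost in a GOP
    (index 0 is the I-frame): every frame from `lost_index` to the end of the
    GOP is replaced by the last correctly received frame.
    """
    if lost_index == 0:
        # Lost I-frame: conceal with the last received frame of the previous GOP.
        concealment = last_frame_prev_gop
    else:
        concealment = gop_frames[lost_index - 1]
    return sum(mse_distortion(gop_frames[n], concealment)
               for n in range(lost_index, len(gop_frames)))
```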

2.2. Video QoE Metric

QoE is gaining significant interest as a method to quantify the multimedia experience of mobile users; for example, see [1–4]. A survey of QoE techniques is presented in [1]. QoE tries to measure the QoS as it is finally perceived by the end user. For example, following [2], the QoE can be related to the PSNR through the mapping in (4), where $a$ and $b$ are parameters that depend on the video characteristics and the PSNR is expressed in dB. In (4), the maximum value indicates the best quality and the minimum value indicates the worst quality. The relation of QoE and QoS to PSNR and distortion is an ongoing research activity; for example, see [13, 14]. In [13], subjective quality assessment of video is performed in order to determine novel QoE metrics. In [3], an empirical QoE metric taking into account PSNR, spatial resolution, and frame rate, in addition to spatial and temporal variances, was derived. A QoE metric derived in [4], based on the metric in [2], is expressed in (5), where $c$ is another parameter that depends on the video characteristics, $Q_{\max}$ is a constant corresponding to maximum quality, $f$ is the frame rate at which the video is displayed, and $f_{\max}$ is the maximum frame rate. These QoE derivations take into account user experience while playing the video and are not inherently designed to assess the transmission over wireless channels. Using the previous frame error concealment method, the same frame rate can be maintained after error concealment (i.e., $f = f_{\max}$), and hence (5) can be simplified to the form in (6). Hence, in this paper, we use (6) with $Q_{\max} = 100$, thus displaying QoE on a scale from 0 to 100.
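Since the exact parametric forms of (4)–(6) are not reproduced here, the following sketch uses a generic logistic PSNR-to-QoE mapping on a 0–100 scale purely for illustration; the sigmoid shape and the values of a and b are assumptions, not the video-dependent parameters of [2, 4]:

```python
import numpy as np

def qoe_from_psnr(psnr_db_value, a=0.5, b=27.0, q_max=100.0):
    """
    Illustrative sigmoid mapping from PSNR (dB) to a QoE score in [0, q_max].
    `a` (steepness) and `b` (PSNR giving q_max/2) are placeholder values, not
    the video-dependent parameters of (4)-(6).
    """
    return q_max / (1.0 + np.exp(-a * (psnr_db_value - b)))

# QoE saturates at very low/high PSNR and is most sensitive in between.
for p in (20, 25, 30, 35, 40):
    print(p, round(qoe_from_psnr(p), 1))
```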

In practice, during the streaming of a stored video at the streaming server or BS, the video characteristics can be extracted offline and used to determine the parameters $a$, $b$, and $c$ in the QoE metrics. Although these parameters are difficult to extract during the streaming of a live video, an approach for their dynamic real-time estimation is presented in [4].

3. Network QoE

The QoE metric of (6) is an “individual” metric reflecting the QoE experience of a particular user. The objective of this paper is to investigate radio resource management (RRM) algorithms that ensure a fair QoE satisfaction for all users in the network. Hence, “network” QoE metrics reflecting the overall performance of RRM algorithms in terms of enhancing QoE for all users in the network need to be derived. This section presents novel network QoE metrics that could help assess the fairness of RRM algorithms in ensuring QoE satisfaction.

3.1. Proposed Network QoE Metrics

The first metric is the average QoE, given by

$$\overline{\mathrm{QoE}} = \frac{1}{K} \sum_{k=1}^{K} \mathrm{QoE}_k, \qquad (7)$$

where $\mathrm{QoE}_k$ is the QoE metric of user $k$, expressed as in (6) for example, and $K$ is the total number of users in the network. The metric in (7) reflects the average performance in the network. However, in some instances, $\overline{\mathrm{QoE}}$ could be relatively high when some users have very high QoE while others have a relatively low QoE. Consequently, this could mask the unfairness towards users with low QoE.

A possible solution to this problem is to derive RRM algorithms maximizing the minimum QoE in the network, given by

$$\mathrm{QoE}_{\min} = \min_{k = 1, \ldots, K} \mathrm{QoE}_k. \qquad (8)$$

This allows enhancing the worst case performance. However, this could come at the expense of users with good channel conditions (who could achieve high QoE) that will be unfavored by the RRM algorithms in order to increase the QoE of worst case users.

A tradeoff between the metrics in (7) and (8) could be the use of the geometric mean QoE, given by

$$\mathrm{QoE}_{\mathrm{GM}} = \left( \prod_{k=1}^{K} \mathrm{QoE}_k \right)^{1/K}. \qquad (9)$$

The metric in (9) is fair, since a user with a QoE close to zero will make the whole product in (9) go to zero. Hence, any RRM algorithm maximizing $\mathrm{QoE}_{\mathrm{GM}}$ would avoid having any user with very low QoE. In addition, the metric in (9) will reasonably favor users with good wireless channels (capable of achieving high QoE), since a high QoE will contribute to increasing the product in (9).
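For illustration, the three network metrics in (7)–(9) can be computed from the vector of individual user QoE values as in the following minimal NumPy sketch:

```python
import numpy as np

def network_qoe_metrics(user_qoe):
    """Average (7), minimum (8), and geometric mean (9) of the per-user QoE values."""
    q = np.asarray(user_qoe, dtype=np.float64)
    avg = q.mean()                              # (7)
    minimum = q.min()                           # (8)
    geo_mean = np.exp(np.log(q).mean())         # (9): a near-zero QoE drives this to zero
    return avg, minimum, geo_mean

print(network_qoe_metrics([90.0, 85.0, 10.0, 95.0]))
```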

3.2. QoE Optimization

Using the metrics derived in (7)–(9), the objective is to maximize a network QoE metric as follows:

$$\max \ \mathrm{QoE}_{\mathrm{net}}, \qquad (10)$$

subject to

$$P_{k,j}^{\mathrm{UL}} \le P_{k,j,\max}^{\mathrm{UL}}, \quad \forall k, j, \qquad (11)$$

$$P_{j}^{\mathrm{DL}} \le P_{j,\max}^{\mathrm{DL}}, \quad \forall j, \qquad (12)$$

$$\sum_{k=1}^{K_j} \alpha_{i,k,j}^{\mathrm{UL}} \le 1, \quad \forall i, j, \qquad (13)$$

$$\sum_{k=1}^{K_j} \alpha_{i,k,j}^{\mathrm{DL}} \le 1, \quad \forall i, j, \qquad (14)$$

where $\mathrm{QoE}_{\mathrm{net}}$ is one of the metrics in (7)–(9). In addition, $K_j$ is the number of users in cell $j$, $N_{\mathrm{BS}}$ is the number of base stations (BSs), $P_{k,j}^{\mathrm{UL}}$ is the transmit power of user $k$ in cell $j$ in the uplink (UL), $P_{k,j,\max}^{\mathrm{UL}}$ is its maximum transmit power, $P_{j}^{\mathrm{DL}}$ is the transmit power of BS $j$ in the downlink (DL), and $P_{j,\max}^{\mathrm{DL}}$ is its maximum transmit power. Furthermore, $N_{\mathrm{UL}}$ and $N_{\mathrm{DL}}$ are the numbers of OFDMA subcarriers in the UL and DL, respectively. Finally, $\alpha_{i,k,j}^{\mathrm{UL}}$ and $\alpha_{i,k,j}^{\mathrm{DL}}$ are indicator variables for the UL and DL, respectively, with $k \in \{1, \ldots, K_j\}$ and $j \in \{1, \ldots, N_{\mathrm{BS}}\}$. They are set to one if subcarrier $i$ is allocated to user $k$ in cell $j$ and set to zero otherwise.

The constraints in (11) and (12) indicate that the transmit power cannot exceed the maximum power for the UL and DL, respectively. The constraints in (13) and (14) correspond to the exclusivity of subcarrier allocations in each cell for the UL and DL, respectively, since, in each cell, a subcarrier can be allocated to at most one user at a given scheduling instant.
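As a sketch, a candidate allocation can be checked against (11)–(14) as follows; the array-based representation of the indicator variables and powers is an illustrative choice, not a prescribed implementation:

```python
import numpy as np

def allocation_is_feasible(alpha_ul, p_ul, p_ul_max, alpha_dl, p_dl_total, p_bs_max):
    """
    alpha_ul: (K, N_UL) 0/1 indicators, 1 if UL subcarrier i is allocated to user k.
    p_ul:     (K, N_UL) per-subcarrier UL transmit powers.
    p_ul_max: (K,) per-user maximum UL transmit powers.
    alpha_dl: (K, N_DL) 0/1 DL indicators; p_dl_total, p_bs_max: DL power and its limit.
    """
    ul_power_ok = np.all((alpha_ul * p_ul).sum(axis=1) <= p_ul_max)   # (11): per-user UL power
    dl_power_ok = p_dl_total <= p_bs_max                              # (12): BS power limit
    ul_exclusive = np.all(alpha_ul.sum(axis=0) <= 1)                  # (13): one user per UL subcarrier
    dl_exclusive = np.all(alpha_dl.sum(axis=0) <= 1)                  # (14): one user per DL subcarrier
    return bool(ul_power_ok and dl_power_ok and ul_exclusive and dl_exclusive)
```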

It should be noted that, if individual QoS metrics are used instead of individual QoE metrics in (7)–(9), these equations become novel definitions of network QoS metrics instead of network QoE metrics. Indeed, Section 6.2 presents a performance comparison when network QoS (considered to be the video PSNR in this paper) and network QoE metrics are used.

4. LTE Resource Allocation

In this section, we describe the approach used in this paper for LTE resource allocation.

4.1. System Model

The system model is displayed in Figure 2, where Scenario (a) shows a traditional BS at the cell center, whereas Scenario (b) shows a DAS deployment with six additional RAHs deployed throughout the cell area. Consequently, Scenario (b) consists of seven RAHs: one located at the cell center and six located at a fixed distance from the cell center, with 60 degrees of angular separation between them.

In this work, we consider a single cell, and we compare the LTE performance in the presence and absence of DASs, for the scenarios presented in Figure 2. In the comparisons, we consider the same coverage area and the same number of users in the cell in the case of a single centralized BS (Scenario (a) in Figure 2) and in the case of DASs (Scenario (b) in Figure 2). We also consider the same number of subcarriers in both scenarios.

In the DL, users stream video files from the BS in real time. We also consider video transmission in the UL, which could correspond, in practice, to the wireless transmission of videos captured by surveillance cameras, or to police and other public safety teams sending real-time videos to a command center during an incident or pursuit, among other possible applications.

In the DAS scenario, the presence of RAHs is transparent to the users, who act as if there was only a single central BS in the cell. The communication between the central BS and the RAHs is via fiber optics (or microwave links having a nonoverlapping spectrum with LTE) and consequently does not consume any LTE radio resources. The presence of RAHs contributes to enhancing the channel states of the different users by providing each user with an antenna that is closer to it than the central BS antenna. Hence, the channel gain of user $k$ over subcarrier $i$ on the link with RAH $r$ can be expressed as follows:

$$H_{k,i,r} = \kappa \, d_{k,r}^{-\nu} \cdot 10^{\xi_{k,r}/10} \cdot a_{k,i,r}^2, \qquad (15)$$

where the first factor, $\kappa \, d_{k,r}^{-\nu}$, captures propagation loss, with $\kappa$ the path loss constant, $d_{k,r}$ the distance in km from mobile $k$ to RAH $r$, and $\nu$ the path loss exponent. The second factor, $10^{\xi_{k,r}/10}$, captures log-normal shadowing with zero mean and a standard deviation $\sigma_\xi$, whereas the last factor, $a_{k,i,r}^2$, corresponds to Rayleigh fading power with a Rayleigh parameter $a$ such that $E[a^2] = 1$.

In the traditional scenario (Scenario (a)), denoting by $r_0$ the radio head colocated with the BS at the cell center, the channel gain between the BS and user $k$ over subcarrier $i$ is expressed as $H_{k,i,r_0}$. With DASs, it will transparently “appear” to user $k$ that its channel gain with the BS over subcarrier $i$ is

$$H_{k,i} = \max_{r} H_{k,i,r}. \qquad (16)$$

It should be noted that the above analysis applies to both the UL and DL, depending on whether $i$ is an UL or DL subcarrier, respectively.
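A minimal sketch of the channel model in (15) and (16) is given below; the path loss constant, path loss exponent, shadowing standard deviation, and RAH distances are placeholder values rather than the parameters of Table 1, and the best-RAH selection reflects the interpretation assumed in (16):

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_gain(distance_km, kappa=1e-4, nu=3.5, sigma_xi_db=8.0):
    """Gain of (15): path loss * log-normal shadowing * unit-mean Rayleigh fading power."""
    path_loss = kappa * distance_km ** (-nu)
    shadowing = 10.0 ** (rng.normal(0.0, sigma_xi_db) / 10.0)
    fading_power = rng.exponential(1.0)          # |a|^2 with E[|a|^2] = 1
    return path_loss * shadowing * fading_power

def das_effective_gain(distances_km):
    """Gain 'seen' by the user with DAS, taken here as the best gain over all RAHs, as in (16)."""
    return max(channel_gain(d) for d in distances_km)

# Illustrative comparison: central BS at 0.9 km versus a DAS with a nearer RAH at 0.2 km.
print(channel_gain(0.9), das_effective_gain([0.9, 0.2, 0.75]))
```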

4.2. Throughput Calculations in the Uplink

Let $P_{k,i}$ be the power transmitted by user $k$ over subcarrier $i$, $P_{k,\max}$ be the maximum transmission power of user $k$, and $R_k^{\mathrm{UL}}$ be its achievable throughput in the UL. Then, the SCFDMA throughput of user $k$ is given by

$$R_k^{\mathrm{UL}}(\mathcal{N}_k, \mathbf{P}_k) = |\mathcal{N}_k| \cdot B_{\mathrm{sub}}^{\mathrm{UL}} \cdot \log_2 \left( 1 + \frac{\gamma_{k,\mathrm{eff}}}{\Gamma} \right), \qquad (17)$$

where $B_{\mathrm{sub}}^{\mathrm{UL}}$ is the UL subcarrier bandwidth, $\mathcal{N}_k$ is the set of subcarriers allocated to user $k$, $|\mathcal{N}_k|$ is the cardinality of $\mathcal{N}_k$, $N_{\mathrm{UL}}$ is the number of UL subcarriers, and $\mathbf{P}_k = [P_{k,1}, \ldots, P_{k,N_{\mathrm{UL}}}]$ represents a vector of the transmitted power on each subcarrier. $\Gamma$ is called the SNR gap. It indicates the difference between the SNR needed to achieve a certain data transmission rate for a practical M-QAM system and the theoretical limit (Shannon capacity) [15]. It is given by

$$\Gamma = \frac{-\ln(5 \cdot \mathrm{BER})}{1.5}, \qquad (18)$$

where BER denotes the bit error rate. Finally, $\gamma_{k,\mathrm{eff}}$ is the SNR of user $k$ after minimum mean squared error (MMSE) frequency domain equalization at the receiver [5]:

$$\gamma_{k,\mathrm{eff}} = \left[ \left( \frac{1}{|\mathcal{N}_k|} \sum_{i \in \mathcal{N}_k} \frac{\gamma_{k,i}^{\mathrm{UL}}}{\gamma_{k,i}^{\mathrm{UL}} + 1} \right)^{-1} - 1 \right]^{-1}. \qquad (19)$$

In (19), $\gamma_{k,i}^{\mathrm{UL}}$ is the UL SNR of user $k$ over subcarrier $i$. It is given by

$$\gamma_{k,i}^{\mathrm{UL}} = \frac{P_{k,i} H_{k,i}^{\mathrm{UL}}}{\sigma^2}, \qquad (20)$$

where $H_{k,i}^{\mathrm{UL}}$ is the channel gain over UL subcarrier $i$ allocated to user $k$ and $\sigma^2$ is the noise power at the receiver of the BS (i.e., the receiver of the RAH that is nearest to user $k$).

The LTE standard imposes the constraint that the RBs allocated to a single user should be consecutive, with equal power allocation over the subcarriers of those RBs [5–7]. The contiguous RB constraint is enforced by Step 4 of the algorithm in Section 4.4. To ensure equal power allocation, we set

$$P_{k,i} = \frac{P_{k,\max}}{|\mathcal{N}_k|}, \quad \forall i \in \mathcal{N}_k. \qquad (21)$$
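The UL rate computation in (17)–(21) can be sketched as follows, combining the SNR gap, the MMSE-equalized effective SNR, and equal power allocation over the allocated subcarriers; the 15 kHz subcarrier bandwidth and the target BER are placeholder values, and the closed-form effective SNR is the standard SC-FDMA expression assumed in (19):

```python
import numpy as np

def snr_gap(ber=1e-6):
    """SNR gap of (18) for a target bit error rate."""
    return -np.log(5.0 * ber) / 1.5

def scfdma_ul_rate(channel_gains, p_max, noise_power, subcarrier_bw=15e3, ber=1e-6):
    """
    UL throughput of one user following (17)-(21): equal power split over the
    allocated subcarriers, per-subcarrier SNRs, and the MMSE frequency-domain
    equalized effective SNR.
    """
    gains = np.asarray(channel_gains, dtype=np.float64)
    p_per_sc = p_max / gains.size                     # equal power allocation, (21)
    gamma = p_per_sc * gains / noise_power            # per-subcarrier SNR, (20)
    mmse_avg = np.mean(gamma / (gamma + 1.0))
    gamma_eff = mmse_avg / (1.0 - mmse_avg)           # effective SNR of (19)
    return gains.size * subcarrier_bw * np.log2(1.0 + gamma_eff / snr_gap(ber))

# Example: one RB (12 subcarriers) with equal gains.
print(scfdma_ul_rate([1e-9] * 12, p_max=0.2, noise_power=1e-13))
```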

4.3. Throughput Calculations in the Downlink

The DL achievable throughput of user $k$ over RB $b$ is given by

$$R_{k,b}^{\mathrm{DL}} = \sum_{i \in b} B_{\mathrm{sub}}^{\mathrm{DL}} \log_2 \left( 1 + \frac{\gamma_{k,i}^{\mathrm{DL}}}{\Gamma} \right), \qquad (22)$$

where $B_{\mathrm{sub}}^{\mathrm{DL}}$ is the subcarrier bandwidth. In (22), the summation is taken over the consecutive subcarriers that constitute RB $b$. We consider that the BS transmits at the maximum power $P_{\mathrm{BS},\max}$, and the power is assumed to be subdivided equally among all the subcarriers. Hence, the DL SNR of user $k$ over a single subcarrier $i$, $\gamma_{k,i}^{\mathrm{DL}}$, is given by

$$\gamma_{k,i}^{\mathrm{DL}} = \frac{\left(P_{\mathrm{BS},\max} / N_{\mathrm{DL}}\right) H_{k,i}^{\mathrm{DL}}}{\sigma_k^2}, \qquad (23)$$

where $N_{\mathrm{DL}}$ is the total number of DL subcarriers, $H_{k,i}^{\mathrm{DL}}$ is the channel gain over DL subcarrier $i$ allocated to user $k$, and $\sigma_k^2$ is the noise power at the receiver of user $k$.
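Similarly, the DL per-RB rate of (22) and (23) can be sketched as follows, with the BS power split equally over all DL subcarriers (reusing NumPy and the snr_gap helper from the UL sketch; numerical values are placeholders):

```python
def dl_rb_rate(rb_gains, p_bs_max, n_dl_subcarriers, noise_power,
               subcarrier_bw=15e3, ber=1e-6):
    """DL achievable throughput of one user over one RB, following (22) and (23)."""
    p_per_sc = p_bs_max / n_dl_subcarriers            # equal power over all DL subcarriers
    rate = 0.0
    for g in rb_gains:                                # the 12 consecutive subcarriers of the RB
        gamma = p_per_sc * g / noise_power            # (23)
        rate += subcarrier_bw * np.log2(1.0 + gamma / snr_gap(ber))
    return rate
```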

4.4. LTE Resource Allocation

The resource allocation algorithm presented in this section was proposed by the authors in [16], where it was used to enhance individual QoEs. It is repeated here for completeness of the analysis. Furthermore, it is used here with various utilities in order to maximize the various novel network QoE metrics presented in Section 3. The algorithm is applicable to both UL and DL. Hence, we will drop the superscripts UL and DL to avoid repetition. We denote by $\mathcal{N}_k$ the set of subcarriers allocated to user $k$, $\mathcal{R}_k$ the set of RBs allocated to user $k$, $N_{\mathrm{RB}}$ the total number of RBs, $K$ the number of users, and $R_k$ the achievable throughput of user $k$. We define $U_k(R_k(\mathcal{R}_k))$ as the utility of user $k$ as a function of the throughput given the allocation $\mathcal{R}_k$.

The resource allocation algorithm presented below consists of allocating RB $b$ to user $k$ in a way to maximize the difference

$$\Delta U_{k,b} = U_k\left(R_k(\mathcal{R}_k \cup \{b\})\right) - U_k\left(R_k(\mathcal{R}_k)\right), \qquad (24)$$

where the marginal utility $\Delta U_{k,b}$ represents the gain in the utility function when RB $b$ is allocated to user $k$, compared to the utility of user $k$ before the allocation of $b$. The algorithm is described as follows.

(i) Consider the set of available RBs $\mathcal{B}$ and the set of available users $\mathcal{U}$. At the start of the algorithm, $\mathcal{B} = \{1, \ldots, N_{\mathrm{RB}}\}$ and $\mathcal{U} = \{1, \ldots, K\}$.

Step 1. Find the user that has the highest marginal utility, defined in (24), among all available users when the first available RB in $\mathcal{B}$ is allocated to it. In other words, for RB $b$, find the user $k^*$ such that

$$k^* = \arg\max_{k \in \mathcal{U}} \Delta U_{k,b}. \qquad (25)$$

Step 2. Allocate RB $b$ to user $k^*$: $\mathcal{R}_{k^*} \leftarrow \mathcal{R}_{k^*} \cup \{b\}$.

Step 3. Delete RB $b$ from the set of available RBs:

$$\mathcal{B} \leftarrow \mathcal{B} \setminus \{b\}. \qquad (26)$$

Step 4. This step is only for the UL direction, in order to guarantee the contiguity of subcarrier allocations. It is not needed for the DL. In the UL, if $k^*$ is the same user to which the previous RB $b-1$ was allocated, that is, $k^*_b = k^*_{b-1}$, then keep $k^*_{b-1}$ in $\mathcal{U}$. Otherwise, delete user $k^*_{b-1}$ from the set of available users:

$$\mathcal{U} \leftarrow \mathcal{U} \setminus \{k^*_{b-1}\}. \qquad (27)$$

(ii) Repeat Steps 1, 2, 3, and 4, until there are no available RBs or no available users.
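A compact sketch of this greedy procedure is given below; the marginal utility is left as an abstract callback so that any of the utilities of Section 5 can be plugged in, and the function and variable names are illustrative:

```python
def allocate_rbs(n_rbs, users, marginal_utility, uplink=True):
    """
    Greedy RB allocation of Section 4.4. RBs are scanned in order; each RB goes
    to the available user with the largest marginal utility (Steps 1-3). In the
    UL, once the allocation switches to a new user, the previously served user
    is removed so that each user's RBs stay contiguous (Step 4).
    """
    available_rbs = list(range(n_rbs))
    available_users = set(users)
    allocation = {u: [] for u in users}
    prev_user = None
    while available_rbs and available_users:
        rb = available_rbs.pop(0)
        best = max(available_users,
                   key=lambda u: marginal_utility(u, allocation[u], rb))
        allocation[best].append(rb)
        if uplink and prev_user is not None and prev_user != best:
            available_users.discard(prev_user)
        prev_user = best
    return allocation
```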

The utility function depends on the data rate and can be changed depending on the different services and QoS/QoE requirements. Different utility functions that can be used with the proposed algorithm are presented in Section 5.

5. RRM Utility Selection for QoE Maximization

To perform the maximization in (10), we use the utility maximization algorithm of Section 4.4, which is applicable to both the UL and DL. The proposed algorithm can be applied with a wide range of utility functions and can thus achieve various objectives, each objective being represented by a certain utility function.

To explicitly implement RRM algorithms maximizing QoE metrics, real-time feedback is needed from mobile terminals about the QoE achieved by each user, which requires modifications to the standards. Furthermore, this feedback would depend on each video sequence being streamed. In this paper, we propose to perform RRM without any QoE feedback, using standard compliant algorithms. With the utility maximization algorithm of Section 4.4, we use utility functions depending on the users’ data rates. We investigate Max C/I, proportional fair, and Max-Min utilities for data rates and study the impact of their implementation on the average, geometric mean, and minimum QoE network metrics.

5.1. Max C/I Utility

Letting the utility $U_k(R_k)$ equal the data rate $R_k$, the algorithm of Section 4.4 leads to a maximization of the sum rate of the cell (and hence of the average data rate in the cell). However, in this case, users close to the BS will be allocated most of the resources and will hence have the highest QoE, whereas edge users will generally suffer from starvation and will have very low data rates and consequently very low QoE.

5.2. Max-Min Utility

In this section, we discuss utilities corresponding to the problem of rate maximization with fairness constraints, by attempting to maximize the minimum data rate in the network; see, for example, [17, 18]. A vector of user data rates $\mathbf{R} = [R_1, \ldots, R_K]$ is Max-Min fair if and only if, for each $k$, an increase in $R_k$ leads to a decrease in $R_l$ for some $l$ with $R_l \le R_k$ [17]. Max-Min utilities lead to more fairness by increasing the priority of users having lower rates [18]. It was shown that Max-Min fairness can be achieved by utilities of the form [18]

$$U_k(R_k) = \frac{R_k^{1-\alpha}}{1-\alpha}, \qquad (28)$$

where the parameter $\alpha$ determines the degree of fairness. Max-Min fairness is attained when $\alpha \to \infty$ [18]. A large value of $\alpha$ is used in this paper to approach Max-Min fairness.
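A sketch of this utility family follows; the handling of α = 1 as the logarithmic (proportional fair) case and the placeholder treatment of nonpositive rates are implementation choices made for the example:

```python
import math

def alpha_fair_utility(rate, alpha):
    """
    Utility family of (28): alpha = 0 reduces to Max C/I (utility = rate),
    alpha -> 1 gives the logarithmic proportional fair utility, and large
    alpha approaches Max-Min fairness.
    """
    if rate <= 0.0:
        return float("-inf")          # a user with no rate has the lowest possible utility
    if abs(alpha - 1.0) < 1e-9:
        return math.log(rate)
    return rate ** (1.0 - alpha) / (1.0 - alpha)
```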

5.3. Proportional Fair Utility

In this section, in order to ensure a more fair allocation of wireless resources, we model the problem as a bargaining game. We consider that each user is a player who wants to maximize its payoff, considered to be its data rate. Consequently, players should share the resources in an optimal way, that is, a way they cannot jointly improve on. The resources to be shared are the OFDMA subcarriers. Allocating the shared resources in a way to maximize the players’ payoffs is equivalent to allocating the subcarriers to the users in a way to maximize each user’s data rate, given the shares of subcarriers allocated to the other users. With each user wanting to selfishly maximize its data rate, the users engage in a “bargaining” process. It is a well-known result in game theory that the solution to the bargaining problem maximizes the Nash product [19]:

$$\max \ \prod_{k=1}^{K} R_k \ \Longleftrightarrow \ \max \ \sum_{k=1}^{K} \ln(R_k). \qquad (29)$$

Interestingly, the algorithmic implementation of (29) can be handled by the algorithm of Section 4.4, by using, in that algorithm, $U_k(R_k) = \ln(R_k)$ as the utility of user $k$, where $\ln$ represents the natural logarithm. Maximizing the sum of logarithms in (29) is equivalent to maximizing the product and is easier to implement numerically. This approach represents proportional fair (PF) scheduling, a well-known resource allocation approach in wireless communication systems. PF scheduling is known to correspond to maximizing the sum of the logarithms of the user rates and represents the Nash bargaining solution [20]. Hence, letting $U_k(R_k) = \ln(R_k)$ provides proportional fairness. Using, in the logarithm, the achievable data rate at the current scheduling instant achieves proportional fairness in frequency (PFF), whereas including the previous scheduling instants by using the cumulative data rate (since the start of the video transmission to/by the user) achieves proportional fairness in time and frequency (PFTF) [9]. In this paper, PFTF is used, since it gives a fair allocation for all users to transmit/receive their videos on the UL/DL. It can easily be shown from (29) that PF RRM algorithms maximize the geometric mean of the user data rates.
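As a sketch, the PFTF marginal utility used with the algorithm of Section 4.4 can be computed on cumulative rates as follows; the bookkeeping of the cumulative rate and the small floor avoiding log(0) are illustrative choices:

```python
import math

def pftf_marginal_utility(cumulative_rate, rate_with_rb, rate_without_rb):
    """
    Proportional fair (time and frequency) marginal utility: gain in ln(rate)
    when the candidate RB is added, with the rate accumulated since the start
    of the video transmission to/by the user.
    """
    eps = 1e-12                                      # avoids log(0) before any allocation
    before = max(cumulative_rate + rate_without_rb, eps)
    after = max(cumulative_rate + rate_with_rb, eps)
    return math.log(after) - math.log(before)
```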

6. Results and Discussion

This section presents the simulation results obtained by comparing the scenarios of Figure 2 using RRM with the utilities of Section 5 to maximize the network QoS/QoE metrics of Section 3.

6.1. Simulation Model

The simulation model consists of a single cell with a BS equipped with an omnidirectional antenna or several RAHs each having an omnidirectional antenna, as shown in Figure 2. The simulation parameters are shown in Table 1. LTE parameters are obtained from [7, 21], and channel parameters are obtained from [22]. Users are considered to be uniformly distributed in the cell area.

To simulate the video transmission in both directions, UL and DL, the football sequence, encoded in QCIF format, is used, with GOPs consisting of 15 frames, one I-frame and 14 P-frames, and a GOP duration of $T_{\mathrm{GOP}}$ seconds. The results are averaged over 2500 iterations, where, in each iteration, a video sequence has to be transmitted from the BS to each mobile user in the DL, or from each mobile terminal to the BS in the UL.

6.2. Video QoS/QoE Results without DAS Deployment

Figures 3 and 4 show the network QoS and QoE results, respectively, for the DL and UL. Comparing the DL and UL performance in both Figures 3 and 4, it can be noted that the QoS/QoE performance in the DL is slightly better than in the UL, due to the higher transmission power available at the BS.

It should be noted that although the same video sequence was used for simulation purposes, the scenario considered corresponds in practice to a unicasting scenario where different videos are transmitted to (by) each user in the DL (UL). Otherwise, it would be better to perform multicasting by the BS in the DL and collaborative transmission by the users in the UL, which represent interesting topics for future research.

It can be seen that PF scheduling maximizes the average network QoS/QoE both in the DL and UL. It also leads to more fairness in the UL, since the best results for the geometric mean QoS/QoE are achieved with PF scheduling. The same is achieved in the DL as the number of users increases, with PF maximizing the geometric mean QoS/QoE when the number of users is above 30. Max-Min scheduling is shown to maximize the minimum QoS/QoE in the downlink and to outperform proportional fair scheduling in terms of geometric mean QoS/QoE when the number of users in the DL is relatively low (below 30). However, its performance in the UL is not as good, due to the limited transmit power of the mobile terminals. In fact, in the UL, it is outperformed by PF for all three network QoS/QoE metrics and by Max C/I scheduling for the average and geometric mean QoS/QoE metrics. It is also outperformed by Max C/I for the average QoS/QoE metric when the number of users is below 20.

RRM using Max C/I is extremely unfair in the DL, as shown in Figures 3(a) and 4(a) for all three QoS/QoE metrics. This is due to the fact that it allocates the subcarriers and power available at the BS to users that are relatively close to the BS, which deprives users that are further away of wireless resources. Due to error propagation in video sequences caused by lost frames, this leads to very low QoS/QoE results, especially for the metrics involving a certain notion of fairness: the min QoS/QoE and the geometric mean QoS/QoE. In the UL, Max C/I performs better, mainly because the power is now distributed: each user in the UL has its own transmit power, contrary to the DL, where the power is concentrated at a single entity, the BS. However, its performance degrades quickly as the number of users increases, especially for the min QoS/QoE in the network.

Comparing Figure 3 to Figure 4, it can be seen that all combinations of {RRM algorithm, network quality metric} show the same performance trends in both figures. In other words, the same conclusions can be reached in terms of the superiority of one method over another for both QoS and QoE. However, the comparison shows that the performance degradation is faster in the QoE case. For example, the performance gap between the average network metric and the geometric mean metric appears reduced in the QoS case (Figure 3), whereas it is significant in the QoE case (Figure 4). As another example, the average QoS metric is nonzero (although low) with Max C/I scheduling (Figure 3), whereas the average QoE metric goes to zero when the number of users increases with Max C/I (Figure 4). This explains the motivation behind using QoE metrics instead of QoS metrics, since they correspond to a more accurate representation of the quality perceived by the end users. In fact, Figure 5 shows the plot of QoE versus QoS (PSNR) for the football video sequence used in the simulations. It can be seen that when the PSNR is very high or very low, the variation in QoE is barely noticeable. However, the sensitivity increases in the intermediate range. For example, a drop of 5 dB in PSNR, from 30 dB to 25 dB, leads to a dramatic fall in QoE from 80% to 40%.

Therefore, in the next section, we consider the QoE network metrics and propose the use of DAS to enhance the performance and ensure more fairness to the users in the network.

6.3. Video QoE Results with DAS Deployment

In this section, we compare the performance of the two scenarios of Figure 2, using the QoE network metrics of Section 3 along with the RRM utilities described in Section 5. The DL results are presented in Figure 6, whereas the UL results are shown in Figure 7. The deployment of DAS leads to significant enhancements for all the investigated scenarios, except for the DL case with Max C/I scheduling, as shown in Figure 6(a), where the enhancement is minor. As explained in the previous section, Max C/I allocates the subcarriers and power available at the BS to users that are relatively close to the BS, which deprives users that are further away from wireless resources. This leads to very low QoE results for most users, especially due to error propagation in video sequences caused by lost frames.

A major difference between the results of Figure 6 and those of Figure 4(a) is that the Max-Min scheduler leads to the best results in the DL in the presence of DAS for all three network QoE metrics. In fact, Figure 6(c) shows that the performance for the three metrics is almost perfect: the horizontal curves indicate that the maximum QoE (88.5%) is reached for all users. The only reason for not having 100% QoE is the source distortion caused by the lossy compression of the video sequence, not loss distortion due to packet losses over the wireless channels. The min QoE decreases only slightly when the number of users exceeds 40, as shown in Figure 6(c). This quasi-ideal performance was achieved with a DAS deployment using six RAHs throughout the cell.

In the UL, the best performance was achieved by PF scheduling, as shown in Figure 7 (particularly Figure 7(b)). The PF scheduler was also the best UL scheduler in the absence of DAS (Figure 4(b)), although DAS has led to a large performance enhancement for all network QoE metrics. This difference between the best DL (Max-Min) and UL (PF) schedulers is explained by the difference in power distribution between the DL and UL. In the UL, the power is distributed over all users, with each user having an individual limited power. The deployment of DAS helps enhance the UL network QoE, but the limited transmit power still prevents worst case users from achieving near-optimal performance, which is captured by the min QoE network metric. On the other hand, the BS is the single source of power in the DL. The significantly larger transmit power at the BS provides better flexibility in power and subcarrier allocations and allows the enhancement of the QoE of the worst case users. Furthermore, the DAS deployment enhances the channel conditions for the users in the cell by providing transmit antennas closer to cell edge users. This allows the achievement of better individual QoE results with a lower transmit power for all users, including the worst case users favored by Max-Min scheduling, which enhances the overall performance in the network.

Finally, it should be noted that the enhancements reached with DAS could also be reached with other solutions. For example, dense heterogeneous network deployments, where small cell BSs are deployed in large numbers within the coverage area of large macrocell BSs, would lead to the same effects by providing a complete (small) BS closer to the users instead of an RAH in a DAS deployment. Nevertheless, the DAS deployment, when possible, is a more cost effective solution. Furthermore, performance enhancements for indoor users can be reached by deploying femtocell access points (FAPs). A FAP provides an indoor BS close to the end user, which allows the provision of high QoE by overcoming the penetration losses of the macrocell signal coming from an outdoor BS located further away. The in-depth investigation of network QoE optimization in these specific scenarios is an interesting topic for further research.

7. Conclusions

Network QoE metrics were proposed in order to capture the overall performance of radio resource management algorithms in terms of the video quality perceived by the end users. Metrics corresponding to the average, geometric mean, and minimum QoE in the network were measured when Max C/I, proportional fair, and Max-Min radio resource management algorithms were implemented in the network. Both the uplink and downlink directions were studied. Furthermore, the use of distributed antenna systems to enhance the performance was considered. In the absence of distributed antennas, results showed that proportional fair scheduling maximizes the average network QoE in both the uplink and downlink. By maximizing the geometric mean QoE, it also leads to more fairness in the uplink and, as the number of users increases, in the downlink. When distributed antennas are deployed, proportional fair scheduling maximized the uplink performance in terms of network QoE, whereas Max-Min scheduling led to excellent results in the downlink for all the investigated network QoE metrics.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was made possible by NPRP Grant no. 4-347-2-127 from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the authors.