Abstract

Radio channels distort the radiation pattern of beamforming systems, broadening the main beam, raising the sidelobe level, and filling the nulls. If these effects are ignored, the system performance is overestimated. This paper proposes the simple concept of an effective radiation pattern (ERP), calculated by optimally fitting the ERP to the "real-world" radiation pattern. The proposed ERP method is incorporated into a multicell bad urban 4G LTE operational scenario which employs beamforming at both the base stations (BSs) and the relay nodes (RNs). The performed simulations provide evidence that using the ideal instead of the real radiation pattern overestimates the SIR and capacity by almost 3 dB and 13 Mbps, respectively, for the reference scenario without RNs. They also show that the ERP method produces almost identical performance results to the real radiation pattern, and hence it is a simple and viable option for realistic performance analysis. Finally, the network performance is studied as a function of the number of RNs with the help of the ERP method. Results show that a beamforming LTE network with RNs that also employ beamforming gains 3 dB in SIR with the addition of 1 RN per cell and 15 dB with 4 RNs per cell.

1. Introduction

Beamforming and relays are two key techniques used by broadband wireless systems, such as the fourth-generation (4G) LTE system, to confront the interference, multipath, and poor signal strength that limit capacity, and hence to deliver the promised high data rates to users. Relays are used to extend coverage, fill in dead spots, and increase capacity for cell-edge users and for wireless backhauling. Beamforming leverages the spatial dimension offered by multiple antenna elements by focusing directional beams towards the desired users and reducing interference to and from unwanted users. The combination of beamforming with base stations and relay nodes has been studied extensively, and its benefits have been highlighted in several publications [1–4].

Nevertheless, the majority of the analyses performed for beamforming systems ignore or underestimate the significant effects on the produced "real" radiation pattern of factors such as multipath propagation, mutual coupling among the array elements, and other implementation mismatches (calibration, power amplifier linearity, etc.). However, it is known that in several cases (e.g., [5–9]) these effects dominate the radiation pattern characteristics, since they can broaden the main lobe, raise the sidelobe level, and fill up the nulls. As a result, when these effects are overlooked, the estimated system performance is overoptimistic.

There are several techniques discussed in the open literature that mitigate both the mutual coupling effects and the various implementation mismatches (see [10–14]), albeit at increased complexity and cost. On the other hand, it is more difficult to take into account the effects of the radio channel on the real radiation pattern, due to its volatility. This paper focuses on this issue and proposes the simple concept of the effective radiation pattern (ERP), in order to simplify the analysis and yet produce realistic performance estimations. The real radiation pattern, that is, the radiation pattern produced once the effect of the radio channel is taken into account, is approximated by a simple step function whose parameters are the effective main lobe beamwidth (BW) and the effective average sidelobe level (SLL), that is, by a "flat-topped" beam pattern. The word effective is used here to reflect the modified beamwidth and average sidelobe level of an ideal radiation pattern when multipath is taken into consideration, both calculated via a cost function minimization that best fits the ERP to the real radiation pattern. It must be mentioned here that the ERP concept generally assumes beamforming capability in a real environment, that is, an appropriate array geometry, the necessary number of antenna array elements, and adaptive algorithms that produce radiation patterns with the desired characteristics in terms of beamwidth and average sidelobe level.

Based on the ERP concept, [15] provides a closed-form expression for the SIR cumulative distribution function (CDF) of a multitier OFDMA cellular network. That work is extended in this paper in order to show that wireless system performance is overestimated when the effects of the radio channel on the radiation pattern of beamforming systems are ignored, and that the simple ERP concept can resolve this issue. For this reason, comparative simulations between systems employing the ideal, real, and ERP radiation patterns are provided. Hence, in the following analysis, the proposed ERP method is incorporated into a multicell, multicarrier beamforming 4G LTE system which employs Relay Nodes (RNs), in order to demonstrate the system level effects (SIR, capacity) of the radio channel, as well as the validity of the proposed method.

The paper is organised as follows: Section 2 provides the analysis on the ERP, Section 3 presents the deployment configuration of the network considered in our study, Section 4 outlines the simulations performed along with the results of our work, and finally Section 5 concludes the paper.

2. Effective Radiation Pattern Concept

According to [6], the measured azimuth ($\varphi$) radiation pattern $G_{\text{real}}(\varphi)$ results from spreading the ideal antenna pattern $G_{\text{ideal}}(\varphi)$ over the environment power azimuth pattern $P(\varphi)$. In other words, the real radiation pattern is determined by the convolution of the ideal radiation pattern with the environment power azimuth pattern:

$$G_{\text{real}}(\varphi) = \int G_{\text{ideal}}(\psi)\, P(\varphi - \psi)\, d\psi. \quad (1)$$

In [16], it was shown that the best way to model the power azimuth spectrum (PAS) around the base station for both urban and rural environments is the Laplacian distribution [17]. Replacing the power azimuth pattern in (1) with a Laplacian distribution leads to

$$G_{\text{real}}(\varphi) = \int G_{\text{ideal}}(\psi)\, \frac{1}{\sqrt{2}\,\sigma} \exp\!\left(-\frac{\sqrt{2}\,\lvert\varphi - \psi\rvert}{\sigma}\right) d\psi, \quad (2)$$

where $\sigma$ is the angular spread (AS). Let us assume now the array factor of an ideal $N$-element linear antenna array with an element distance of $d$ and bore-sight radiation:

$$G_{\text{ideal}}(\varphi) = \left\lvert \frac{\sin\!\left(N\pi\frac{d}{\lambda}\sin\varphi\right)}{N \sin\!\left(\pi\frac{d}{\lambda}\sin\varphi\right)} \right\rvert^{2}. \quad (3)$$

The impact of the environment azimuth power profile on $G_{\text{ideal}}(\varphi)$ can be calculated by combining (1)–(3):

$$G_{\text{real}}(\varphi) = \int \left\lvert \frac{\sin\!\left(N\pi\frac{d}{\lambda}\sin\psi\right)}{N \sin\!\left(\pi\frac{d}{\lambda}\sin\psi\right)} \right\rvert^{2} \frac{1}{\sqrt{2}\,\sigma} \exp\!\left(-\frac{\sqrt{2}\,\lvert\varphi - \psi\rvert}{\sigma}\right) d\psi. \quad (4)$$

Note that, for bore-sight radiation, the ideal antenna array pattern is symmetrical. Hence, without loss of generality, for the analysis in this section we consider only the "upper part" of the antenna array pattern, that is, $\varphi \geq 0^\circ$.
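Where helpful, (1)–(4) can be evaluated numerically. The sketch below is a minimal Python illustration, not the authors' code; the N = 8 element count, the half-wavelength spacing, and the rectangular-rule convolution are all assumptions made for the example.

```python
# A minimal numerical sketch of (1)-(4); not the authors' implementation.
# The element count and lambda/2 spacing are illustrative assumptions.
import numpy as np

def ideal_pattern(phi_deg, n_elements=8, d_over_lambda=0.5):
    """Normalized power pattern of a bore-sight uniform linear array, cf. (3)."""
    u = np.pi * d_over_lambda * np.sin(np.radians(phi_deg))
    num = np.sin(n_elements * u)
    den = n_elements * np.sin(u)
    with np.errstate(invalid="ignore", divide="ignore"):
        af = np.where(np.abs(den) < 1e-12, 1.0, num / den)
    return af ** 2

def laplacian_pas(phi_deg, as_deg):
    """Laplacian power azimuth spectrum with angular spread as_deg, cf. (2)."""
    return np.exp(-np.sqrt(2.0) * np.abs(phi_deg) / as_deg) / (np.sqrt(2.0) * as_deg)

def real_pattern(phi_grid_deg, as_deg, n_elements=8):
    """Real pattern: convolution of the ideal pattern with the PAS, cf. (1) and (4)."""
    d_phi = phi_grid_deg[1] - phi_grid_deg[0]
    g_ideal = ideal_pattern(phi_grid_deg, n_elements)
    pas = laplacian_pas(phi_grid_deg, as_deg)
    g_real = np.convolve(g_ideal, pas, mode="same") * d_phi
    return g_real / g_real.max()          # normalize the peak to 0 dB
```

Plotting the output of real_pattern against the ideal pattern reproduces, qualitatively, the main lobe broadening and null filling discussed later in this section.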

Our goal is to model the $G_{\text{real}}(\varphi)$ diagram with an ERP given by a simple step function whose parameters are the effective beamwidth (BW) and sidelobe level (SLL):

$$G_{\text{ERP}}(\varphi) = \begin{cases} 1, & \lvert \varphi - \varphi_0 \rvert \leq \mathrm{BW}/2, \\ \mathrm{SLL}, & \text{otherwise}, \end{cases} \quad (5)$$

where $\varphi_0$ is the pointing angle of the main lobe. A graphic representation of the ERP is depicted in Figure 1.

In order to provide the best fit between $G_{\text{real}}(\varphi)$ and $G_{\text{ERP}}(\varphi)$, the BW and SLL parameters must be defined so that the cost function given in (6) is minimized:

$$\left(\mathrm{BW_{opt}}, \mathrm{SLL_{opt}}\right) = \arg\min_{\mathrm{BW},\; \mathrm{SLL} \geq \mathrm{SLL_{min}}} \int \left\lvert G_{\text{real}}(\varphi) - G_{\text{ERP}}(\varphi) \right\rvert^{2} d\varphi. \quad (6)$$

In (6), $\mathrm{SLL_{min}}$ is the minimum value of the sidelobe level, used to define the search area for the effective SLL ($\mathrm{SLL_{opt}}$). The limiting case for the minimum sidelobe level occurs when the angular spread is very low (almost zero) and the number of array elements is very large. As an example, when the angular spread is zero and the number of linear array elements is 750, the corresponding effective sidelobe level ($\mathrm{SLL_{opt}}$) for the ERP is −30.5 dB. As a result, an $\mathrm{SLL_{min}}$ value close to this level can be chosen in order to cover all practical scenarios of cellular antenna systems, as well as cases of very large antenna array systems.
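A brute-force grid search is one straightforward way to carry out the minimization in (6). The sketch below is an assumed implementation, not the paper's code: the search strategy and the SLL_MIN_DB bound of −30 dB are choices made for the example, in line with the limiting case discussed above.

```python
# Sketch of the minimization in (6) via a simple grid search; the search
# strategy and the SLL_MIN_DB bound are assumptions, not taken from the paper.
import numpy as np

SLL_MIN_DB = -30.0

def erp_pattern(phi_deg, bw_deg, sll_db, phi0_deg=0.0):
    """Flat-topped ERP of (5): unity gain inside the main lobe, SLL outside."""
    in_main_lobe = np.abs(phi_deg - phi0_deg) <= bw_deg / 2.0
    return np.where(in_main_lobe, 1.0, 10.0 ** (sll_db / 10.0))

def fit_erp(phi_deg, g_real, bw_candidates_deg, sll_candidates_db):
    """Return (BW_opt, SLL_opt) minimizing the squared error between g_real and the ERP."""
    best_bw, best_sll, best_err = None, None, np.inf
    for bw in bw_candidates_deg:
        for sll in sll_candidates_db:
            err = np.sum((g_real - erp_pattern(phi_deg, bw, sll)) ** 2)
            if err < best_err:
                best_bw, best_sll, best_err = bw, sll, err
    return best_bw, best_sll
```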

In order to demonstrate the validity of the proposed method, we focus on the outdoor macro and RN operational environments and channel models in the context of a 4G LTE wireless system. Nevertheless, it must be mentioned that the ERP method is valid for any channel model and system. Following the conclusions derived in [18], the angular spread at the BS of a bad urban macrocell is typically 17°, while the AS at a bad urban microcell RN is typically 33°. Therefore, we employ these values for the AS of the two network nodes in the Laplacian distribution and carry out the analysis of (1)–(6) in order to find the appropriate $\mathrm{BW_{opt}}$ and $\mathrm{SLL_{opt}}$ that model the real radiation patterns of the $N$-element array.

Figures 2 and 3 show the ideal (3), the real (4), and the ERP (5) radiation patterns for AS = 17° and AS = 33°, respectively. The ERP parameters (for example, the $\mathrm{BW_{opt}}$ and $\mathrm{SLL_{opt}}$ values noted in Figure 2) were calculated with (6). Note that, since the focus of our analysis is on the effects of the propagation environment on the ideal radiation pattern and its modeling with an ERP, a fixed value of −20 dB is considered outside the azimuth area of interest for all the radiation patterns (important for the following system performance analysis). It can be seen from these figures that the environment induces considerable distortion on the ideal antenna diagram. Firstly, the nulls of the original radiation pattern (blue dotted line) are filled up (black solid line), raising the energy allocated to the sidelobes and hence the interference to/from unwanted users. Secondly, the main beam is broadened; for example, the original 12° 3 dB beamwidth is almost doubled in the case shown in Figure 2. Both observations provide insight into why the effective radiation pattern fit resulted in the $\mathrm{BW_{opt}}$ and $\mathrm{SLL_{opt}}$ values shown in Figures 2 and 3.
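For completeness, the two sketches given earlier could be combined as follows to obtain ERP parameters for the BS and RN angular spreads considered here. The N = 8 element count and the search grids are illustrative assumptions, so the resulting values need not coincide with those reported in Figures 2 and 3.

```python
# Example driver combining the earlier sketches for AS = 17 deg (BS) and
# AS = 33 deg (RN); element count and search grids are illustrative only.
import numpy as np

phi = np.linspace(-90.0, 90.0, 1801)                 # 0.1-degree azimuth grid
bw_grid = np.arange(2.0, 90.5, 0.5)                  # candidate beamwidths (deg)
sll_grid = np.arange(-30.0, 0.0, 0.25)               # candidate sidelobe levels (dB)

for label, as_deg in (("BS", 17.0), ("RN", 33.0)):
    g_real = real_pattern(phi, as_deg, n_elements=8)
    bw_opt, sll_opt = fit_erp(phi, g_real, bw_grid, sll_grid)
    print(f"{label}: AS = {as_deg:.0f} deg -> BW_opt = {bw_opt:.1f} deg, SLL_opt = {sll_opt:.2f} dB")
```

Under these assumed settings, the larger angular spread yields a broader fitted beamwidth and a higher effective sidelobe floor, consistent with the qualitative behavior shown in Figures 2 and 3.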

3. Network Configuration

In the following analysis, we incorporate the ERP concept into the downlink of a multicell, multicarrier beamforming 4G LTE system which employs RNs, in order to demonstrate the system level effects (SIR, capacity) of the channel AS and the validity of the proposed method. Users are uniformly distributed in the network area, and their serving node (SN) is either a BS or an RN, selected according to the channel gain between the user and the respective network node. The same number of RNs is deployed in every cell at fixed, predefined locations (an acceptable assumption given the uniform user distribution; in a real network deployment, the RN locations would be optimized according to the user distribution).

Whenever a network node acts as the SN of a user, it employs appropriate adaptive algorithms to perform optimum beam steering towards that user. A snapshot of the network layout is depicted in Figure 4 for a network configuration with 1 RN per BS (configurations with up to 4 RNs per BS are studied in the following).

The wanted user is located in the central cell at a given distance from his server, expressed as a fraction of the cell radius R. The interference that the wanted user experiences (on each radio channel) depends on the number of downlink cochannel users and on the steering angles of the interfering nodes' radiation patterns. As Figure 4 suggests, the wanted user can be either inside or outside the main lobe of an interfering node's antenna, receiving higher or lower interference, respectively. A fully loaded network is considered, where each radio channel is reused in every BS (reuse factor 1). Under the fully loaded scenario, the wanted user receives six interfering signals from the six interfering nodes (fixed BS or RN positions), and hence the total interference depends only on the radiation patterns.
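To make the interference geometry concrete, the following sketch computes the wanted user's downlink SIR on one channel when every node is modeled by the flat-topped ERP of (5). It is only an illustration of the described setup; the dictionary-based node description and its field names are assumptions, not the paper's data structures.

```python
# Illustrative downlink SIR on one radio channel under the fully loaded scenario:
# one serving node plus six interfering nodes, all modeled by the ERP of (5).
# The node representation (plain dicts) is an assumption for illustration only.
import numpy as np

def erp_gain(user_angle_deg, steering_angle_deg, bw_deg, sll_db):
    """ERP gain of a node toward the wanted user, given where its beam is steered."""
    off_axis = abs(user_angle_deg - steering_angle_deg)
    return 1.0 if off_axis <= bw_deg / 2.0 else 10.0 ** (sll_db / 10.0)

def downlink_sir_db(serving, interferers, bw_deg, sll_db):
    """Each node dict holds: channel_gain (linear), angle_to_wanted_user_deg, steering_deg."""
    signal = serving["channel_gain"] * erp_gain(
        serving["angle_to_wanted_user_deg"], serving["steering_deg"], bw_deg, sll_db)
    interference = sum(
        n["channel_gain"] * erp_gain(
            n["angle_to_wanted_user_deg"], n["steering_deg"], bw_deg, sll_db)
        for n in interferers)
    return 10.0 * np.log10(signal / interference)
```

Whether the wanted user falls inside or outside an interferer's main lobe is captured entirely by erp_gain, which is why, under full load with fixed node positions, the total interference depends only on the radiation patterns.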

4. Simulations

The network configuration described above is analyzed via Monte Carlo simulations (10^4 iterations) in order to produce the empirical cumulative distribution function (ECDF) of the wanted user's SIR and achieved data rate.

4.1. Simulation Description

A simulation flow chart for each Monte Carlo iteration is given in Figure 5, while the simulation parameters are shown in Table 1.

Each simulation run starts with the arrival of a new user. The new user's location is randomly selected in the network area, and the list of eligible serving nodes is determined. More specifically, all the network BSs, along with the RNs in line of sight (LOS) with the new user, are included in this list. For the purposes of this work, the LOS distance is set to 150 m; that is, the new user's list of eligible SNs includes all the BSs and those RNs that are less than 150 m away from the new user. This way, we avoid unrealistic situations where a user located close to a BS is served by an RN.

In step 3 of the simulation flow chart, the channel gains between the new user and the eligible network nodes are calculated based on the channel models and parameters proposed in [18], and the eligible SN is determined (step 4). If the eligible SN belongs to the central cell or to a cell that already serves a user, the new user is dropped; since we study a system operating at the maximum 100% load with a full-buffer traffic model, each radio channel carries one user per cell. Otherwise, the new user is accepted, and his SN steers its radiation pattern (any of $G_{\text{ideal}}$, $G_{\text{real}}$, or $G_{\text{ERP}}$) towards the new user's location. Finally, the total interference received by the wanted user at the central cell is calculated taking into account the individual radiation patterns.

Each simulation run ends when six users are served by six BSs or RNs. At this point, system level parameters such as the wanted user's SIR and data rate are calculated and saved. For the rate calculation, we have considered a 4G LTE system with 20 MHz bandwidth, and hence Transport Block (TB) sizes for 100 Resource Blocks are taken into account. The data rate calculation methodology is briefly outlined in the flow chart of Figure 6; its detailed description is out of the scope of this paper, and the reader is referred to [19].
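The flow of one Monte Carlo iteration (Figure 5) can be summarized by the following skeleton. It is a deliberately simplified stand-in rather than the paper's simulator: channel_gain is a toy path-gain model replacing the Winner II models of [18], the node lists are hypothetical, and the central BS is assumed to be index 0; only the drop/accept logic and the 150 m LOS rule follow the text.

```python
# Compressed sketch of one Monte Carlo drop (cf. Figure 5); a simplified stand-in,
# not the paper's simulator. channel_gain() is a toy model replacing Winner II [18].
import numpy as np

rng = np.random.default_rng(0)
LOS_RANGE_M = 150.0

def channel_gain(distance_m, exponent=3.5):
    """Toy path-gain stand-in for the Winner II channel models of [18]."""
    return max(distance_m, 1.0) ** (-exponent)

def run_drop(bs_xy, rn_xy, area_half_width_m):
    """Place cochannel users until six non-central nodes each serve one user."""
    nodes = [("BS", i, np.asarray(p, float)) for i, p in enumerate(bs_xy)] + \
            [("RN", i, np.asarray(p, float)) for i, p in enumerate(rn_xy)]
    served = {}                                            # (kind, index) -> user position
    while len(served) < 6:
        user = rng.uniform(-area_half_width_m, area_half_width_m, size=2)
        # Eligible SNs: all BSs plus any RN within the 150 m LOS range of the user.
        eligible = [n for n in nodes
                    if n[0] == "BS" or np.linalg.norm(n[2] - user) < LOS_RANGE_M]
        kind, idx, _ = max(eligible,
                           key=lambda n: channel_gain(np.linalg.norm(n[2] - user)))
        if (kind, idx) == ("BS", 0) or (kind, idx) in served:
            continue                                       # central cell or busy node: drop the user
        served[(kind, idx)] = user                         # accept; the SN steers its beam to this user
    return served
```

Once the six interferers are placed, the wanted user's SIR follows from the pattern-weighted gains as in the earlier SIR sketch, and the data rate can then be obtained from the SIR via the TB size mapping of [19].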

4.2. Simulation Results

Each of the three radiation patterns described in Section 2 ($G_{\text{ideal}}$, $G_{\text{real}}$, and $G_{\text{ERP}}$) was separately employed at the network nodes, and the corresponding SIR ECDFs were derived for every network configuration. Our goal is first to show that employing the ideal radiation pattern ($G_{\text{ideal}}$ in (3)) instead of the real one ($G_{\text{real}}$ in (4)) in 4G networks like the one considered here produces overoptimistic results for the SIR and the achieved capacity. Then, we want to show that the simple ERP concept produces the same performance results as the real radiation pattern, while offering much greater flexibility in the analysis of such networks, for example, in [15].

Table 1 gives the basic simulation setup parameters, while the Winner II channel models with the corresponding large-scale parameters found in [18] were used for calculating the losses and channel gains between the network nodes and the users. It should be pointed out that a cell radius of 0.5 km was employed in the simulations in order to be in accordance with the Winner II recommendations: as mentioned in [18], the AS median values depend on the cell radius, and the recommended values for bad urban scenarios were computed under the assumption of a 0.5 km cell radius.

First, a reference network scenario with no RNs was simulated, and Figure 7 presents the ECDF of the wanted user's SIR. The "ideal" line shows the SIR ECDF when $G_{\text{ideal}}$ of (3) is used at the BSs. The "real" line shows the achieved SIR when the radiation patterns of the BSs also account for the environment, that is, $G_{\text{real}}$ of (4) with AS = 17°, and the "ERP" line is the SIR ECDF when the effective radiation pattern is employed with the $\mathrm{BW_{opt}}$ and $\mathrm{SLL_{opt}}$ of Figure 2. As shown in Figure 8, employing the ideal radiation pattern instead of the real one leads to ~3 dB higher SIR (50% outage), which in terms of data rate corresponds to a ~13 Mbps higher throughput estimate. At the same time, using the ERP method leads to practically the same SIR and data rate results as the real radiation pattern, as Figures 7 and 8 show.

For instance, in Figure 7, the difference between the "real" and the "ERP" radiation patterns at the 50% outage SIR is less than 0.4 dB. The ERP was further evaluated via simulations with different system characteristics (1 km cell radius), and the results led to the same observations as those for the 0.5 km radius presented herein; for example, for the reference scenario with no RNs, the difference between the "real" and the "ERP" radiation patterns at the 50% outage SIR is ~0.3 dB (versus the 0.4 dB for the 0.5 km radius mentioned above). Hence, the requirement that a proper system analysis account for the effect of the operational environment on the radiation pattern characteristics can be met through the simple ERP method.

It should be pointed out that the choice of 10^4 Monte Carlo iterations was validated for the reference scenario. Increasing the number of iterations from 10^4 to 10^5 for all the radiation patterns shown in Figure 7 led to less than 0.3 dB difference at the 50% SIR outage. This difference is particularly low in the context of our problem, and hence 10^4 Monte Carlo iterations were employed in all simulations.
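For reference, the ECDFs and outage points quoted throughout this section can be obtained from the saved per-iteration SIR samples with a few lines of post-processing; the helper below is an assumed utility, not part of the paper.

```python
# Assumed post-processing helpers: empirical CDF and outage-point extraction
# from the saved per-iteration SIR (or rate) samples.
import numpy as np

def ecdf(samples):
    """Return sorted sample values and their empirical CDF."""
    x = np.sort(np.asarray(samples, float))
    f = np.arange(1, x.size + 1) / x.size
    return x, f

def outage_value(samples, outage=0.5):
    """Value below which a fraction `outage` of the samples fall (e.g., 50% outage SIR)."""
    return float(np.quantile(np.asarray(samples, float), outage))
```

With the saved samples from the ideal and real runs (hypothetical arrays sir_ideal_db and sir_real_db), outage_value(sir_ideal_db) - outage_value(sir_real_db) would yield the ~3 dB median gap discussed above, and outage=0.1 gives the 10% outage points used later in Figure 12.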

Next, a network configuration with 1 RN per BS is simulated in order to validate the ERP method in mixed operational scenarios (different AS values for the BSs and the RNs). Figure 9 presents the SIR ECDF for this network configuration. The "ideal" line shows the SIR ECDF when $G_{\text{ideal}}$ of (3) is used at both the BSs and the RNs. The "real" line shows the achieved SIR when the radiation patterns of the network nodes account for the environment, that is, $G_{\text{real}}$ of (4) with AS = 17° for the BSs and AS = 33° for the RNs. The "ERP" line is the SIR ECDF when the ERPs are used at the network nodes (the $\mathrm{BW_{opt}}$ and $\mathrm{SLL_{opt}}$ values of Figures 2 and 3 for the BSs and the RNs, resp.). The same observations made for Figure 7 are confirmed here: using the ERP method at both the RNs and the BSs produces the same SIR results as the real radiation patterns.

Figure 10 considers network configurations with 1, 2, and 4 RNs per BS. The "real" lines show the achieved SIR when the radiation patterns of the network nodes account for the environment, that is, $G_{\text{real}}$ of (4) with AS = 17° for the BSs and AS = 33° for the RNs. The "ERP" lines show the SIR when the ERPs are used at the network nodes (the $\mathrm{BW_{opt}}$ and $\mathrm{SLL_{opt}}$ values of Figures 2 and 3 for the BSs and the RNs, resp.). It can be easily seen that the ERP method can be used instead of the real radiation pattern with practically no divergence.

Figure 11 shows the achieved throughput in a 4G LTE network for different numbers of RNs per BS. These results have been produced with the ERPs employed at the network nodes. The figure shows the ECDFs of the rate that the wanted user in the central cell can reach. Note that the ECDFs have the same peak value, since the maximum TB size for transmission mode 1 in LTE limits the peak rate to ~75 Mbps (see the TB size tables in [19]). There is an obvious throughput boost as the number of RNs per cell increases.

This is seen more clearly in Figure 12, where the 10% outage rate and SIR are depicted for all network configurations. Specifically, there is a 70% improvement in the wanted user's rate with the addition of just one RN per BS, and a 500% improvement (a six-fold increase of the wanted user's throughput) for the scenario with 4 RNs per cell. In the same manner, one RN per BS adds 3 dB to the SIR, while four RNs per BS lead to more than a 15 dB SIR increase.

Comparable results for the performance gain from the deployment of RNs in 4G networks can be found, for instance, in [20], where the average cell throughput gain (5% outage) from one and four omnidirectional RNs per macrocell is shown to be 15% and 40%, respectively, and in [21], where 2, 3, and 6 omnidirectional RNs per cell are considered and the corresponding SIR ECDFs are provided. Specifically, in [21], the SIR gain at 10% outage with 2 omnidirectional RNs is almost 4 dB, whereas the corresponding gain for our network, which also has 2 RNs per BS but employs beamforming, is more than 6 dB. It is worth noting that we made the same observation as the authors of [21] concerning the network with 3 RNs per cell: its performance was only slightly better than that with 2 RNs per cell.

5. Conclusions

This paper proposes the simple concept of an effective radiation pattern, whose parameters are the effective main lobe beamwidth and the effective average sidelobe level, for realistic performance analysis of wireless systems. The proposed ERP method is described and then incorporated into a 4G LTE beamforming system, applied at both the base stations and the relay nodes, and simulation analysis demonstrates the system level effects (SIR, capacity) of the spatial characteristics of the radio channel, as well as the validity of the proposed method.

The performed simulations provide evidence that using the ideal instead of the real radiation pattern overestimates the SIR (by almost 3 dB for the reference network with no RNs) and consequently leads to a significant overestimation of the achieved data rates (over 13 Mbps for the reference network with no RNs). It is also shown that the ERP method produces almost identical performance results to the real radiation pattern, and hence it is a simple and viable option for designers and researchers aiming at realistic performance analyses of wireless systems.

Finally, employing the ERP method, the network capacity is studied as a function of the number of RNs. The results show that, in a beamforming LTE network with RNs that also employ beamforming, the use of only one RN per cell leads to a 70% throughput improvement, while increasing the number of RNs to four per cell yields a throughput gain of 500%.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research has been cofinanced by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF), Research Funding Program THALIS NTUA, Novel Transmit and Design Techniques for Broadband Wireless Networks (MIS 379489).