Abstract

This paper focuses on the problem of optimizing the aggregate throughput of the distributed coordination function (DCF) employing the basic access mechanism at the data link layer of IEEE 802.11 protocols. We consider general operating conditions accounting for both nonsaturated and saturated traffic in the presence of transmission channel errors, as exemplified by the packet error rate Pe. The main idea of this work stems from the relation that links the aggregate throughput of the network to the packet rate λ of the contending stations. In particular, we show that the aggregate throughput presents two clearly distinct operating regions that depend on the actual value of the packet rate with respect to a critical value λc, theoretically derived in this work. The behavior of λc paves the way to a cross-layer optimization algorithm, which proved to be effective for maximizing the aggregate throughput in a variety of network operating conditions. A nice consequence of the proposed optimization framework is that the aggregate throughput can be predicted quite accurately with a simple, yet effective, closed-form expression. Finally, theoretical and simulation results are presented in order to unveil, as well as verify, the key ideas.

1. Introduction

DCF represents the main access mechanism at the medium access control (MAC) layer [1] of the IEEE 802.11 series of standards, and it is based on carrier sense multiple access with collision avoidance (CSMA/CA).

Many papers, following the seminal work by Bianchi [2], have addressed the problem of modeling, as well as optimizing, the DCF in a variety of traffic load models and transmission channel conditions.

Let us provide a survey of the recent literature related to the problem addressed in this paper. Papers [3–7] model the influence of real channel conditions on the throughput of the DCF operating in saturated traffic conditions. Paper [3] investigates the saturation throughput of IEEE 802.11 in the presence of a nonideal transmission channel and capture effects.

The behavior of the DCF of IEEE 802.11 WLANs in unsaturated traffic conditions has been analyzed in [8–13], whereby the authors proposed various bidimensional Markov models for unsaturated traffic conditions, extending the basic bidimensional model proposed by Bianchi [2]. In [14], the authors look at the impact of channel-induced errors and of the received signal-to-noise ratio (SNR) on the achievable throughput in a system with rate adaptation, whereby the transmission rate of the terminal is modified depending on either direct or indirect measurements of the link quality.

The effect of the contention window size on the performance of the DCF has been investigated in [15–22] upon assuming a variety of transmission scenarios. We invite the interested reader to refer to the works [15–22] and references therein.

In [15], the authors proposed a framework to derive the average size of the contention window that maximizes the network throughput in saturated traffic conditions. Moreover, a distributed algorithm aimed at tuning the backoff algorithm of each contending station was proposed. In [16] (see also [17]), the authors investigated by simulation four schemes to reduce the contention window size of the contending stations in a saturated network. Moreover, the service differentiation mechanism proposed in the IEEE 802.11e standard was used to enhance the offered QoS. Paper [18] provided a theoretical framework to optimize the single-user DCF saturated throughput by selecting the transmitted bit rate and payload size as a function of some specific fading channels. However, no packet error constraint was imposed on the transmitted packets. In [19], the authors focused on the optimization of the single-user DCF saturated throughput by varying the PHY layer data rate and the payload size given a packet error rate constraint.

Paper [20] proposed a constant-window backoff scheme in order to guarantee fairness among the contending stations in the investigated network. Comparisons with the binary exponential backoff used in IEEE 802.11 DCF were also given. In [22], the authors proposed a control-theoretic approach to adjust the CW of the contending stations based on the average number of consecutive idle slots between two transmissions. Finally, in [21] the authors presented some European projects where cross-layer optimization techniques were under investigation.

As a starting point for the derivations that follow, we adopt the bidimensional model proposed in a companion paper [12] (see also [13]). Briefly, in [12] the authors extended Bianchi's saturated model by introducing a new idle state, not present in the original model, accounting for the case in which the station queue is empty. The Markov chain of the proposed contention model was solved to obtain the stationary probabilities, along with the probability that a station starts transmitting in a randomly chosen time slot.

Compared to the works proposed in the literature, the novel contributions of this paper can be summarized as follows. Firstly, we investigate the behavior of the aggregate throughput as a function of the traffic load of the contending stations, and note that the aggregate throughput presents two distinct operating regions, identified as the below link capacity (BLC) region and the link capacity (LC) region, respectively. Derived in closed form in this work, and identified by Smax throughout the paper, the link capacity of the considered scenario corresponds to the maximum throughput that the network can achieve when the contending stations transmit with a proper set of network parameters. We show that the network operates in the BLC region when the actual value of the packet rate λ is less than a critical value λc, theoretically derived in this work.

The second part of this paper is focused on the optimization of the DCF throughput under variable loading conditions. We propose a cross-layer algorithm whose main aim is to allow the network to operate as close as possible to the link capacity Smax. The proposed optimization algorithm relies on a number of insights derived from the behavior of the aggregate throughput as a function of the traffic load λ, and aims at choosing either an appropriate value of the minimum contention window CWmin, or a proper size of the transmitted packets, depending on the network operating region. The optimization algorithm is dynamic in that it has to be reiterated in order to follow the variations of the network parameters. Finally, we derive a simple model of the optimized throughput, which is useful for predicting the aggregate throughput without resorting to simulation.

The rest of the paper is organized as follows. Section 2 briefly presents the Markov model at the very basis of the proposed optimization framework, whereas Section 3 investigates the behavior of the aggregate throughput as a function of the traffic load λ. The proposed optimization algorithm is discussed in Section 4. Section 5 presents simulation results of the proposed technique applied to a sample network scenario, while Section 6 draws the conclusions.

2. Markov Modeling

The bidimensional Markov process of the contention model proposed in [12], and shown in Figure 1 for completeness, governs the behavior of each contending station through a series of states indexed by the pair (i, k), for all i ∈ {0, …, m} and k ∈ {0, …, Wi − 1}, whereby i identifies the backoff stage. On the other hand, the index k, which belongs to the set {0, …, Wi − 1}, identifies the backoff counter. By this setup, the size of the ith contention window is Wi = 2^i · W0, for all i ∈ {0, …, m}, while W0 is the minimum size of the contention window.

An idle state, identified by I in Figure 1, is introduced in order to account for the scenario in which, after a successful transmission, there are no packets to be transmitted, as well as for the situation in which the packet queue is empty and the station is waiting for a new packet arrival.

The proposed Markov model accounts for packet errors due to imperfect channel conditions by defining an equivalent probability of failed transmission, identified by Peq, which considers the need for a new contention due either to packet collisions (with probability Pcol) or to channel errors (with probability Pe) on the transmitted packets, that is,

Peq = 1 − (1 − Pcol)(1 − Pe). (1)

It is assumed that at each transmission attempt each station encounters a constant and independent probability of failed transmission, independently of the number of retransmissions already suffered.

The Markov process depicted in Figure 1 is governed by the transition probabilities noticed in (2). The first equation in (2) states that, at the beginning of each slot time, the backoff counter is decremented. The second equation accounts for the fact that, after a successful transmission, a new packet transmission starts with backoff stage 0 with probability q, in case there is a new packet in the buffer to be transmitted. The third and fourth equations deal with unsuccessful transmissions and the need to reschedule a new contention stage. The fifth equation deals with the practical situation in which, after a successful transmission, the buffer of the station is empty, and as a consequence, the station transitions to the idle state waiting for a new packet arrival. The sixth equation models the situation in which a new packet arrives in the station buffer, and a new backoff procedure is scheduled. Finally, the seventh equation models the situation in which there are no packets to be transmitted and the station remains in the idle state.

The stationary distribution of the Markov model in Figure 1 is employed to compute τ, the probability that a station starts a transmission in a randomly chosen time slot. First of all, observe that a packet transmission takes place when a station enters one of the states (i, 0), for all i ∈ {0, …, m}. Therefore, the probability τ can be evaluated by adding up the stationary probabilities over this set of states. Upon imposing the normalization condition (the stationary probabilities must add up to 1 over all the states of the chain) on the Markov model in Figure 1, and expressing the stationary probabilities of the states (i, k), for all i and k, as a function of the probability of the state (0, 0), the probability τ can be rewritten as in (3). As in Bianchi's work [2], we assume that: (1) the probability τ is constant across all time slots; (2) the probability Peq is constant and independent of the number of collisions already suffered.

Given τ, the probability of collision can be defined as follows:

Pcol = 1 − (1 − τ)^(N−1), (4)

that is, the probability that at least one of the N − 1 remaining stations transmits in the considered time slot. Finally, the term q in (3) is the probability of having at least one packet waiting for transmission in the station queue after an average slot duration.

Let us spend a few words on the evaluation of q. Upon assuming that the packet interarrival times are exponentially distributed with mean 1/λ (λ, which is measured in pkt/s, represents the rate at which the packets arrive into the station queue from the upper layers), the probability q can be well approximated by the following relation in a scenario where the contending stations employ queues of small sizes [9]:

q = 1 − e^(−λ·E[slot]), (5)

where E[slot], the expected time per slot, is useful to relate the states of the Markov chain to the actual time spent in each state. As a note aside, notice that e^(−λ·E[slot]) in (5) corresponds to the probability that zero packets are received from the upper layers when the packet interarrival times are exponentially distributed.
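The approximation in (5) is straightforward to evaluate numerically. The following sketch computes q for Poisson arrivals; the function name and the sample parameter values are purely illustrative assumptions, not taken from the paper's tables.

```python
import math

def queue_nonempty_prob(lam, expected_slot):
    """q in (5): probability that at least one packet arrives from the
    upper layers within an average slot duration E[slot], for Poisson
    arrivals of rate lam (pkt/s); exp(-lam * E[slot]) is the probability
    of zero arrivals in that interval."""
    return 1.0 - math.exp(-lam * expected_slot)

# Illustrative values: lam = 50 pkt/s, E[slot] = 1 ms
q = queue_nonempty_prob(50.0, 1e-3)   # small q: the queue is almost always empty
```

As expected for light loads, q grows almost linearly with λ·E[slot], which is the regime exploited later in the BLC-region analysis.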

The other terms involved in E[slot] are defined as follows [2, 12]: σ is the duration of an empty time slot; Ptr = 1 − (1 − τ)^N is the probability that there is at least one transmission in the considered time slot, with N stations contending for the channel, each transmitting with probability τ; Ps is the conditional probability that a packet transmission occurring on the channel is successful; Tc, Te, and Ts are, respectively, the average time a channel is sensed busy due to a collision, the transmission time during which the data frame is affected by channel errors, and the average time of a successful data frame transmission. These times are defined in (6), where δ is the propagation delay, and H accounts for the PHY and MAC header durations. The other times are noticed in Table 1.

3. Throughput Analysis

The computation of the normalized system throughput relies on the numerical solution of the nonlinear system obtained by jointly solving (1) and (3). The solution of the system, which corresponds to the values of τ and Peq, is used for the computation of the normalized system throughput, that is, the fraction of the time during which the channel is used to successfully transmit payload bits:

S = Ptr · Ps · (1 − Pe) · E[PL] / ((1 − Ptr)·σ + Ptr·(1 − Ps)·Tc + Ptr·Ps·Pe·Te + Ptr·Ps·(1 − Pe)·Ts), (7)

whereby E[PL] is the average packet payload length, and Ps can be rewritten as follows [2]:

Ps = N·τ·(1 − τ)^(N−1) / (1 − (1 − τ)^N). (8)

In order to gain insights on the behavior of the aggregate throughput S, let us investigate the theoretical behavior of (7) as a function of the packet rate λ for two different values of the packet error probability Pe and of the minimum contention window CWmin, in a scenario with N contending stations transmitting at the bit rate 1 Mbps.
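To make the structure of (7) concrete, the following sketch evaluates the throughput for a given transmission probability τ. All numeric parameters in the example are illustrative, roughly 802.11b-like assumptions at 1 Mbps, not values taken from Table 1.

```python
def aggregate_throughput(tau, N, Pe, payload_bits, sigma, Ts, Tc, Te):
    """Throughput in the form of (7): payload bits successfully
    delivered per unit of expected slot time (all times in seconds)."""
    Ptr = 1.0 - (1.0 - tau) ** N                    # at least one transmission
    Ps = N * tau * (1.0 - tau) ** (N - 1) / Ptr     # success given a transmission, (8)
    num = Ptr * Ps * (1.0 - Pe) * payload_bits
    den = ((1.0 - Ptr) * sigma                      # empty slot
           + Ptr * (1.0 - Ps) * Tc                  # collision
           + Ptr * Ps * Pe * Te                     # channel error
           + Ptr * Ps * (1.0 - Pe) * Ts)            # success
    return num / den                                # bits per second

# Illustrative example: 10 stations, 1028-byte payload, 1 Mbps-like times
S = aggregate_throughput(tau=0.02, N=10, Pe=0.0, payload_bits=1028 * 8,
                         sigma=20e-6, Ts=9.2e-3, Tc=8.9e-3, Te=8.9e-3)
```

Sweeping τ (or, through (3) and (5), the offered load λ) in this function reproduces the qualitative shape of the curves in Figure 2.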

The two rightmost subplots of Figure 2 show the theoretical behavior of the throughput in (7), as well as simulation results obtained with ns-2 by employing the typical MAC layer parameters for IEEE 802.11b given in Table 1 [1]. Other parameters are noticed in the caption of the figure. Let us spend a few words about the simulation setup in ns-2. In order to account for imperfect channel transmissions, the channel model is implemented using the suggestions proposed in [23], where the outcomes of a binary random variable are used to establish whether each packet is received erroneously. In other words, the random variable is equal to 1 with probability Pe (erroneous transmission), and 0 with probability 1 − Pe. We notice in passing that the theoretical model developed is independent of the specific propagation channel. The only parameter needed is Pe, which can be appropriately linked to the specific wireless propagation channel upon specifying a threshold on the signal-to-noise ratio allowing perfect reception at the receiver [24].

We adopted the NOAH (NO Ad-Hoc) patch, available on the authors' website http://icapeople.epfl.ch/widmer/uwb/ns-2/noah/, for emulating a wireless network in infrastructure mode. The employed traffic model is implemented by generating exponentially distributed packet interarrival times with expected value 1/λ, in accordance with the theoretical model developed in Section 2.

Let us focus on the curves noticed in the two rightmost subplots of Figure 2. Basically, there are two different operating regions of the throughput in (7). As λ → 0, that is, as all the contending stations approach unloaded traffic conditions, the throughput can be approximated as a straight line passing through the origin:

S ≈ N · λ · E[PL]. (9)

This relation follows from the theoretical throughput noticed in (7) upon approximating the probabilities Ptr and Ps in the limit τ → 0.

Indeed, as λ → 0, the first relation in (5) yields q ≈ λ·E[slot], whereas the probability τ in (3) can be well approximated by a term proportional to q. Since q → 0 as λ → 0 (because E[slot] is finite), it is τ → 0. Furthermore, as τ → 0, (8) can be approximated by the following relation:

Ps · Ptr ≈ N·τ. (10)

Upon substituting (10) in (7), the theoretical throughput can be well approximated as follows:

S ≈ N · λ · E[PL]. (11)

This linear model is depicted in the subplots of Figure 2, superimposed on both theoretical and simulated results. The key observation from this result is that the aggregate throughput produced by stations approaching unloaded traffic conditions depends only on the number of stations, as well as on the packet size. No other network parameters affect the aggregate throughput in this operating region. Moreover, the results depicted in Figure 2 denote that the derived linear model is valid up to a critical value of λ (identified by λc throughout the paper), above which the aggregate throughput no longer increases linearly with λ. We note in passing that, given λc, there is no need to simulate the network to obtain the aggregate throughput: it is very well approximated by the theoretical relation derived in (11) for any λ ≤ λc.
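The linear model in (11) is a one-liner; the sketch below (with an assumed helper name) makes its two scaling properties explicit: throughput doubles when either the number of stations or the per-station load doubles.

```python
def blc_throughput(N, lam, payload_bits):
    """Linear BLC-region model (11): aggregate throughput in bits/s when
    N unloaded stations each offer lam pkt/s of payload_bits-bit packets."""
    return N * lam * payload_bits

# Doubling either N or lam doubles the aggregate throughput
base = blc_throughput(10, 5.0, 1028 * 8)
```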

Once again, let us focus on the results shown in the rightmost subplots of Figure 2. The aggregate throughput reaches a maximum at a proper value of λ, above which the effect of the collisions among the stations, as well as of the propagation channel, makes the throughput settle at a horizontal asymptote. The maximum value of the throughput over the whole range of values of λ turns out to be quite useful for throughput optimization. Obtained by Bianchi [2] under the hypothesis of a saturated network, and considering only collisions among the stations, in our more general model such a maximum can be estimated in two steps. First, rewrite the throughput in (7) as a function of τ by using the relations that define Ptr and Ps in terms of the probability τ. Then, equate to zero the derivative of the throughput in (7) with respect to τ, and obtain a solution identified by τmax, noticed in (12). Finally, evaluate the throughput in (7) at the solution τmax noticed in (12). Upon following these steps, the maximum throughput Smax takes on the form noticed in (13).

The critical value λc of λ acts as a transition threshold between two operating regions. In the first region, that is, for λ < λc, the transmissions are affected by relatively small equivalent error probabilities Peq. In this operating region (in what follows, this operating region will be identified by the acronym BLC, short for Below Link Capacity region), collisions among stations occur rarely (Pcol is small) because of the reduced traffic load of the contending stations, and the stations experience good channel quality (Pe is very small). For any λ < λc, the network is not congested, and the contending stations are able to transmit data below the link capacity limit denoted by Smax. This is the reason for which the aggregate throughput grows linearly with the traffic load λ.
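In place of the closed-form derivative leading to (12), the maximizer τmax and the corresponding link capacity (13) can also be located numerically by a coarse grid search over τ. The throughput expression and all parameter values below are the same illustrative, 802.11b-like assumptions used earlier, not the paper's exact figures.

```python
def throughput(tau, N=10, Pe=0.0, payload=1028 * 8,
               sigma=20e-6, Ts=9.2e-3, Tc=8.9e-3, Te=8.9e-3):
    """Throughput in the form of (7), with illustrative defaults."""
    Ptr = 1.0 - (1.0 - tau) ** N
    Ps = N * tau * (1.0 - tau) ** (N - 1) / Ptr
    return (Ptr * Ps * (1.0 - Pe) * payload /
            ((1.0 - Ptr) * sigma + Ptr * (1.0 - Ps) * Tc
             + Ptr * Ps * Pe * Te + Ptr * Ps * (1.0 - Pe) * Ts))

# Coarse grid search standing in for the derivative condition of (12)
grid = [i / 10000.0 for i in range(1, 2000)]
tau_max = max(grid, key=throughput)
S_max = throughput(tau_max)        # numerical estimate of the link capacity (13)
```

A finer grid (or a golden-section search) refines τmax, but this coarse sweep already shows the single interior maximum described in the text.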

On the other hand, the aggregate throughput tends to be upper-bounded by Smax in (13) for λ ≥ λc. The reason is simple: the aggregate throughput in this region (in what follows, this operating region will be identified by the acronym LC, short for Link Capacity region) is affected either by the increasing effect of the collisions among the stations, or by worse channel conditions as exemplified by Pe. Further insights on this statement can be gained from the results discussed below in connection with the curves shown in Figure 4.

The critical value λc of λ can be found as the abscissa where the linear model of the throughput in (9) equates Smax in (13):

λc = Smax / (N · E[PL]). (14)

The procedure for obtaining λc is clearly highlighted in all the four subplots of Figure 2.
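Relation (14) is immediate to compute once the link capacity is known. The sketch below uses assumed sample values (Smax = 8.2 × 10^5 bps, 10 stations, 1028-byte payload) and checks the defining property: the linear model evaluated at λc equals the capacity.

```python
def critical_rate(S_max, N, payload_bits):
    """lam_c in (14): packet rate at which the BLC linear model
    N * lam * payload_bits reaches the link capacity S_max (bits/s)."""
    return S_max / (N * payload_bits)

# Illustrative values, not taken from the paper's figures
lam_c = critical_rate(8.2e5, 10, 1028 * 8)
```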

Let us investigate the behavior of the critical value λc in (14) against some key network parameters.

Figure 3 shows the behavior of λc as a function of the packet size, for two different values of both N and Pe as noticed in the respective legends. Some observations are in order. λc decreases for longer packet sizes, as well as for an increasing number of contending stations, N. The reason is that, for increasing packet sizes, each station occupies the channel longer. Such a behavior is clearly emphasized in Figure 4, where the aggregate throughput for a sample scenario comprising 10 contending stations transmitting with the packet sizes noticed in the legend is considered. Moreover, given a payload size, λc decreases for increasing packet error probabilities, Pe.

As the packet size increases, the aggregate throughput shows an increasing slope in the linear region characterized by traffic loads λ < λc, as suggested by the theoretical model in (11). Moreover, the value of λc tends to decrease because each station tends to occupy the channel longer. The three lower thick curves, labeled by the same payload size in bytes, are associated with three different values of the packet error probability Pe. Note that the value of λc decreases for increasing values of Pe, thus reducing the region characterized by small equivalent error probabilities. As Pe approaches 1, both Smax and λc tend to zero, making the aggregate throughput vanishingly small.

Similar considerations may be derived from the behavior of λc versus N noticed in Figure 5. Roughly speaking, for fixed values of the packet size, λc tends to halve as the number of contending stations doubles. The reason is that the probability of collision increases as more stations contend for the channel.

Figure 6 compares the behavior of λc versus N for the two different data rates 1 and 11 Mbps. From this figure we can easily note that, given the increased network capacity available, the values of λc corresponding to the high data rate 11 Mbps are roughly three times the ones related to 1 Mbps, for each number of contending stations N and packet size.

4. Throughput Optimization

The considerations deduced in Section 3 are at the very basis of an optimization strategy for maximizing the aggregate throughput of the network depending on the traffic load λ. To this end, we define two optimization strategies: Contention Window Optimization and MAC Payload Size Optimization. The first strategy is applied when the contending stations operate in the LC region, approaching the link capacity Smax, whereas the second one is used for optimizing the aggregate throughput when the stations operate within the BLC region.

The next two subsections address separately the two optimization strategies, while Section 5 presents the optimization algorithm jointly implementing the two strategies.

4.1. Link Capacity Region: Contention Window Optimization

The first optimization strategy proved to be effective for improving the aggregate throughput in the LC region, that is, for λ ≥ λc. The key idea here is to force the contending stations to transmit with a probability equal to the one that maximizes the aggregate throughput. In this respect, the probability τmax in (12) plays a key role.

Upon considering saturated conditions, that is, imposing q = 1 in (3), the probability τ can be rewritten as noticed in (15). By substituting τ in (4), the probability of collision Pcol can be rewritten as noticed in (16).

Finally, by equating (15) to τmax in (12), and solving for the contention window size, we obtain the optimal minimum contention window size in terms of the key network parameters, as noticed in (17). This relation yields the value of the minimum contention window that maximizes the aggregate throughput when the number of contending stations N, the packet error rate Pe over the channel, and the number of backoff stages m are given. As a note aside, notice that, using the optimal minimum contention window, the maximum throughput equates the link capacity Smax in (13).

Moreover, we notice that for m = 1, that is, when no exponential backoff is employed, the optimal contention window in (17) can be simplified as noticed in (18). Considering τmax as a function of N, and neglecting the multiplicative constant terms, it is simple to notice that τmax in (12) goes roughly as 1/N. Therefore, the optimal contention window grows linearly with the number of contending stations (when m = 1), in order to mitigate the effects of the collisions due to an increasing number of contending stations in the network.
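The linear growth of the optimal window with N can be checked with Bianchi's classical approximation of the maximizing probability (a standard result from [2], used here only as a stand-in for the exact expressions (12) and (17), under the assumptions m = 1 and Tc ≫ σ; all numeric values are illustrative):

```python
import math

def tau_max_approx(N, Tc, sigma):
    """Bianchi-style approximation of the maximizing probability in (12),
    valid for large N and Tc >> sigma (an assumption of this sketch)."""
    return 1.0 / (N * math.sqrt(Tc / (2.0 * sigma)))

def w_opt_no_backoff(N, Tc, sigma):
    """For m = 1 a saturated station transmits with tau = 2/(W + 1);
    equating this to tau_max and solving for W gives a window that
    grows linearly with N."""
    return N * math.sqrt(2.0 * Tc / sigma) - 1.0

# Illustrative times: Tc = 8.9 ms collision duration, sigma = 20 us empty slot
w10 = w_opt_no_backoff(10, 8.9e-3, 20e-6)
w20 = w_opt_no_backoff(20, 8.9e-3, 20e-6)   # roughly twice w10
```

Doubling N roughly doubles the optimal window, which is exactly the mitigation of collisions described in the text.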

In the following, we present simulation results obtained with ns-2 for validating the theoretical models, as well as the results presented in this section. The adopted MAC layer parameters for IEEE 802.11b are summarized in Table 1 [1].

The main simulation results are presented in Figures 2 and 7 in connection with the set of parameters noticed in the respective captions.

Some observations are in order. Let us focus on the results shown in Figure 2. A quick comparison between the leftmost and the rightmost subplots of Figure 2 reveals that the choice of the optimal contention window in (17) guarantees improved performance for any λ ≥ λc, thus making the aggregate throughput equal Smax. Throughput penalties due to the use of a suboptimal contention window are on the order of 100 kbps.

Notice also that the value of λc is independent of the minimum contention window chosen.

The key observation from the subplots of Figure 2 concerns the fact that the aggregate throughput of the optimized network can be modeled as follows:

S ≈ min(N·λ·E[PL], Smax). (19)

As emphasized by the curves in Figure 2, this model shows a very good agreement with both theoretical and simulation results. Once again, notice that the aggregate throughput may be predicted quite accurately without resorting to simulation.
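The closed-form predictor is a single min(); the sketch below (illustrative values, assumed function name) shows both regimes of (19).

```python
def predicted_throughput(lam, N, payload_bits, S_max):
    """Model (19): linear in the BLC region (N * lam * payload_bits),
    capped at the link capacity S_max in the LC region."""
    return min(N * lam * payload_bits, S_max)

low = predicted_throughput(1.0, 10, 1028 * 8, 8.6e5)    # BLC: linear regime
high = predicted_throughput(100.0, 10, 1028 * 8, 8.6e5)  # LC: capped at S_max
```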

Let us focus on the results shown in Figure 7, where the saturation throughput is derived as a function of the minimum contention window size. The curves are parameterized with respect to three different values of N and Pe as noticed in the legend. The simulated scenario considers stations transmitting packets of the size noticed in the legend. Continuous, dashed, and dotted curves represent the saturation throughput associated with the three simulated cases noticed in the legend.

The saturation throughput in each simulated scenario reaches a maximum corresponding to the abscissas predicted by (17) and noticed beside each specific scenario in the legend. Given a predefined number of contending stations N, we notice that the optimal contention window tends to be quite insensitive to the packet error rate Pe, whose main effect corresponds to a reduction of the maximum achievable throughput. This behavior holds for any given number N of contending stations in the network. Moreover, notice that the maximum of the throughput, which settles around the value 8.6 × 10^5 bps, tends to flatten for an increasing number of contending stations, thus confirming the weak dependence of the maximum of the throughput on the value of the optimal contention window.

Similar considerations can be drawn from Figure 8, where the higher data rate 11 Mbps has been employed. By contrasting the curves in Figures 7 and 8, it can be easily noted that the optimal minimum contention window sizes corresponding to 11 Mbps are almost 2.5 times smaller than the ones obtained for the lower data rate 1 Mbps.

4.2. Below Link Capacity Region: Payload Size Optimization

The analysis of the aggregate throughput in Section 3 revealed the basic fact that, for traffic loads less than λc, the network is not congested, and each station achieves a throughput roughly equal to λ·E[PL] (the aggregate throughput is thus N·λ·E[PL]). We note in passing that the throughput does not depend on the minimum contention window in the BLC region. Therefore, given N contending stations, the throughput can only be improved by increasing the payload size when the traffic load satisfies the relation λ < λc.

Before proceeding any further, let us discuss two important issues in connection with the choice of the packet size.

As long as the erroneous bits are independently and identically distributed over the received packet (we notice that wireless transceivers make use of interleaving in order to break the correlation due to the frequency selectivity of the transmission channel [24]), the packet error probability can be evaluated as

Pe = 1 − (1 − P_PLCP)(1 − P_DATA), (20)

where

P_PLCP = 1 − (1 − Pb)^(l_PLCP), (21)

P_DATA = 1 − (1 − Pb)^(l_MAC + PL). (22)

In the previous relations, Pb is the bit error probability at the physical layer, l_MAC and l_PLCP are, respectively, the sizes (in bits) of the MAC and PLCP (PLCP is short for Physical Layer Convergence Protocol) headers, and PL is the size of the data payload in bits. Equation (22) accounts for the fact that a packet containing the useful data is considered erroneous when at least one bit is erroneously received. Relation (20) is obtained by noting that a packet is received erroneously when the errors occur either in the PLCP part of the packet, or in the information data.
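Under the i.i.d. bit-error assumption, (20)–(22) reduce to raising (1 − Pb) to the total number of transmitted bits. In the sketch below, the header lengths (192-bit PLCP preamble/header, 28-byte MAC overhead) are illustrative 802.11b-like assumptions:

```python
def packet_error_rate(Pb, payload_bits, mac_bits=28 * 8, plcp_bits=192):
    """PER following (20)-(22): a packet fails if at least one bit of
    the PLCP part or of the MAC frame (header + payload) is in error."""
    p_plcp = 1.0 - (1.0 - Pb) ** plcp_bits                   # (21)
    p_data = 1.0 - (1.0 - Pb) ** (mac_bits + payload_bits)   # (22)
    return 1.0 - (1.0 - p_plcp) * (1.0 - p_data)             # (20)

# Example: Pb = 1e-5 on a 1028-byte payload gives a PER of roughly 8%
per = packet_error_rate(1e-5, 1028 * 8)
```

The monotonic growth of the PER with the payload size is exactly the trade-off discussed next.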

We notice that relations (20) through (22) are valid both with the use of convolutional coding, whereby the bit error rate performance does not depend on the code block size [24], and for protocols that do not employ channel encoding at the physical layer.

From (22) it is quite evident that the higher the payload size, the higher the packet error rate, and vice versa. As a note aside, we notice that this behavior does not hold when concatenated convolutional channel codes [25–27], as well as low-density parity-check codes, are employed as channel codes. Indeed, for these codes the packet error rate depends on the size of the encoded block of data, with longer packets having a smaller probability of error compared to shorter packets.

The second issue to be considered during the choice of the payload size is related to the critical value λc, which depends on the packet error probability Pe defined in (20), and consequently on the payload size. Given a traffic load λ, any change in the payload size could affect the network operating region, which might move from the BLC region to the LC region.

Let us discuss a simple scenario in order to reveal this issue (where not otherwise specified, we employ the network parameters summarized in Table 1). Consider a network (identified in the following as scenario A) where N contending stations transmit packets of a given size at a given traffic load λ. Assume that the bit error probability Pb due to the channel conditions corresponds, through (20), to a certain packet error rate Pe. Upon using the network parameters summarized in Table 1, the critical load λc follows from (14). Since λ < λc, the network is in the BLC operating region. If the stations increase the payload size, the packet error rate increases, with the side effect of decreasing λc to 4.71 pkt/s. Therefore, for the given traffic load λ, the network starts operating in the LC region.

Based on the considerations deduced in scenario A, the proposed optimization technique aims to optimize the payload size in two consecutive steps. In the first step, we find the size of the payload in such a way that the critical threshold λc equals the actual traffic load λ, in order for the network not to operate beyond the BLC region.

The second step verifies whether the packet error rate associated with this payload size is below a predefined PER target (identified by Pe*), which defines the maximum error level imposed by the application layer. First of all, given an estimated bit error probability Pb at the physical layer, solve (22) for PL. Then, solve (20) for P_DATA, and substitute the resulting relation in place of P_DATA. Considering Pe = Pe*, the following maximum payload size follows:

PL_max = ⌈ log(1 − Pe*) / log(1 − Pb) ⌉ − l_PLCP − l_MAC, (23)

whereby ⌈·⌉ is the ceiling of the enclosed number.
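Inverting the same i.i.d. bit-error model for a target PER gives the payload bound of (23). The sketch below is deliberately conservative (it floors rather than ceils, so the target is never exceeded), and the header sizes are the same illustrative assumptions used earlier:

```python
import math

def max_payload_bits(Pb, per_target, mac_bits=28 * 8, plcp_bits=192):
    """Largest payload (in bits) keeping the PER of (20)-(22) at or
    below per_target; the floor keeps the constraint strictly satisfied."""
    total_bits = math.log(1.0 - per_target) / math.log(1.0 - Pb)
    return int(math.floor(total_bits)) - plcp_bits - mac_bits

# Payload budget for an 8% PER target at Pb = 1e-5 (illustrative values)
pl = max_payload_bits(1e-5, 0.08)
```

A tighter PER target shrinks the admissible payload, which is the trade-off the second step of the algorithm resolves.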

Let us consider again scenario A discussed previously, with the application of this algorithm. Consider 10 stations transmitting packets of the same size and traffic load as before. Assume the same bit error rate Pb imposed by the specific channel conditions, yielding the corresponding Pe and λc. Moreover, assume a target PER Pe* equal to the maximum PER specified in the IEEE 802.11b standard [1], which is guaranteed at the receiver when the received power reaches the receiver sensitivity.

After the first step, the algorithm selects the optimal payload size, which moves the working point near the link capacity and leads to a new actual packet error rate. Since the constraint Pe ≤ Pe* is not satisfied, the proposed method estimates the size PL_max in order to attain the constraint. The packet size is eventually obtained from (23), yielding a packet error rate below the target and a critical load λc above the traffic load, thus leaving the working point in the BLC region.

Finally, the optimal payload size is the minimum among the payload size obtained in the first step, the size PL_max in (23), and the maximum packet size imposed by the standard [1].

Simulation results of a sample network employing the algorithm described above are presented in the next section, along with a sample code fragment summarizing the key steps of the optimization technique.

5. Simulation Results

In this section, we present simulation results for a network of contending stations, employing the optimization strategies described in Section 4.

The basic steps of the proposed optimization algorithm are summarized in Algorithm 1. Let us spend a few words about the implementation of the proposed algorithm in a WLAN setting employing the infrastructure mode, where an Access Point (AP) monitors the transmissions of contending stations.

(1) do
(2)   Estimate the packet error rate Pe and the number of stations N
(3)   Evaluate λc from (14)
(4) while (IDLE)
(5)  Request to Send from Upper Layers:
(6) if (λ ≥ λc)
(7)       Link Capacity Operating Region:
(8)  The station evaluates the optimal CWmin from (17)
(9) else
(10)   Below Link Capacity Operating Region:
(11)  compute the payload size for which λc equals λ
(12)  compute PL_max from (23) given the PER target Pe*
(13)  select the minimum among the two sizes above and the standard maximum
(14) end
(15) send packet
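The decision logic of Algorithm 1 can be rendered compactly in Python. Every helper passed in below is a stand-in for the corresponding equation of the paper; the function names, the dictionary return shape, and all toy values are assumptions, not the authors' implementation.

```python
def optimize_station(lam, lam_c, N, Pe, cw_opt, pl_match, pl_per, pl_std):
    """Sketch of Algorithm 1. cw_opt stands in for (17); pl_match for the
    payload size making lam_c equal lam; pl_per for (23); pl_std is the
    maximum packet size allowed by the standard."""
    if lam >= lam_c:
        # LC region: tune the minimum contention window
        return {"cw_min": cw_opt(N, Pe)}
    # BLC region: tune the payload size under the PER target
    return {"payload": min(pl_match(lam), pl_per(), pl_std)}

# Toy usage with dummy helpers (all values illustrative)
lc = optimize_station(10.0, 5.0, 10, 0.01,
                      cw_opt=lambda n, pe: 64,
                      pl_match=lambda l: 12000, pl_per=lambda: 7921,
                      pl_std=18496)
blc = optimize_station(1.0, 5.0, 10, 0.01,
                       cw_opt=lambda n, pe: 64,
                       pl_match=lambda l: 12000, pl_per=lambda: 7921,
                       pl_std=18496)
```

In a real station, the loop of lines (1)–(4) would refresh lam_c periodically from the estimates of N and Pe before each dispatch.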

The evaluation of the critical value in (14) requires the estimation of three key parameters of the network: the number of contending stations , the packet error rate , and the value of the optimal , which depends on . Each station can estimate the number of contending stations by resorting to one of the algorithms proposed in [2830]. On the other hand, the packet error probability , may be evaluated through (20), once the bit error probability at the physical layer is estimated. We recall that can be estimated by employing proper training sequences at the physical layer of the wireless receiver [24].

Finally, the value of the optimal can be evaluated by each station with the parameters and . Similar reasoning applies to the estimation of the optimal packet size.

We have developed a C++ simulator implementing all the basic directives of the IEEE 802.11b protocol with the 2-way handshaking mechanism, namely, exponential backoff, waiting times, post-backoff, and so on. In our simulator, the optimization algorithm is dynamically executed for any specified scenario. For the sake of analyzing the effects of the optimization, the instantaneous network throughput is evaluated over the whole simulation. The aggregate throughputs obtained in the investigated scenarios are shown in Figure 9.
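One way the simulator could track the instantaneous aggregate throughput is a sliding time window over successfully delivered payload bits. The class below is a minimal sketch under that assumption; the names are illustrative and not taken from the paper's simulator.

```cpp
#include <deque>
#include <utility>

// Accumulates payload bits of successfully received packets over a
// sliding time window and reports the resulting throughput in bit/s.
class ThroughputMeter {
    std::deque<std::pair<double, double>> events_; // (time [s], bits)
    double window_;                                // window length [s]
public:
    explicit ThroughputMeter(double window) : window_(window) {}

    // Record a successful delivery of `payloadBits` bits at time t.
    void onSuccess(double t, double payloadBits) {
        events_.emplace_back(t, payloadBits);
        // Drop deliveries that fell out of the window.
        while (!events_.empty() && events_.front().first < t - window_)
            events_.pop_front();
    }

    // Instantaneous aggregate throughput over the current window.
    double bps() const {
        double bits = 0.0;
        for (const auto& e : events_) bits += e.second;
        return bits / window_;
    }
};
```

Plotting `bps()` over simulated time produces curves of the kind shown in Figure 9.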

In the first scenario, we considered a congested network in which 10 stations transmit packets of fixed size 1028 bytes at the packet load  pkt/s and bit rate 1 Mbps. Other parameters are , and , whereas the channel conditions are assumed to be ideal. As shown in Figure 9 (curve labeled LC region with asterisk-marked points), the aggregate throughput is about 7.6 × 105 bps. After 40 s, 5 out of the 10 stations turn their traffic off. Between 40 s and 80 s, the aggregate throughput increases to about 8.2 × 105 bps because of the reduced effect of the collisions among the contending stations. After 80 s, the 5 stations turn on again and the aggregate throughput decreases to 7.6 × 105 bps.

Consider again the same scenario over a time interval of 120 s, in which the contending stations adopt the optimal contention window. Upon using (17), the optimal contention window is when 10 stations transmit over the network, and during the interval in which only 5 stations contend for the channel.
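The paper's eq. (17) for the optimal contention window is not reproduced in this excerpt. As a stand-in illustration of how such a value depends on the number of stations, the sketch below uses Bianchi's classical approximation W_opt ≈ n·sqrt(2·Tc), with Tc the average collision duration measured in slot times; this is an assumption, not the paper's exact formula.

```cpp
#include <cmath>

// Illustrative optimal minimum contention window, assuming Bianchi's
// approximation W_opt ~= n * sqrt(2 * Tc), where Tc is the average
// collision duration in slot times. Stands in for the paper's eq. (17).
int optimalMinWindow(int n, double tcSlots) {
    return static_cast<int>(std::lround(n * std::sqrt(2.0 * tcSlots)));
}
```

The key qualitative point carries over: the optimal window grows linearly with the number of contending stations, which is why the algorithm re-evaluates it when stations join or leave the network.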

The aggregate throughput in the optimized scenario is about  bps, as indicated by the star-marked curve in the same figure. We notice that the optimized throughput is nearly constant over the whole simulation, independently of the number of contending stations in the network, approaching the theoretical maximum throughput  bps obtained from (13). Moreover, notice that the two curves related to both and 10 are almost superimposed.

Consider another scenario differing from the previous one in that the contending stations operate in the BLC region. As above, 10 stations contend for the channel in the time intervals [0,40] s and [80,120] s, while during the time interval [40,80] s 5 out of the 10 stations turn off. The traffic load is  pkt/s, and the packet interarrival times are exponentially distributed.

Let us focus on the aggregate throughput depicted in Figure 9. The curves related to the scenario at hand are identified by the labels  pkt/s. Applying the proposed algorithm to the choice of the packet size, we obtain bytes for the case when 10 stations are transmitting, and bytes in the other scenario with 5 active stations.

Simulation results show that the proposed algorithm guarantees improved throughput performance on the order of 160 kbps when 10 stations are active, and about 400 kbps when only 5 stations contend for the channel. We notice in passing that, despite the optimization, the aggregate throughput could not reach the maximum because of the low traffic load. Indeed, the link capacity could be achieved only by using a packet size longer than both the maximum imposed by the standard [1] and the value obtained from (23).

To verify that the optimization of the minimum contention window does not affect the aggregate throughput in the BLC region, Figure 9 also shows the aggregate throughput obtained by simulation using a minimum contention window equal to . Notice that the related curve is superimposed on the one related to the scenario in which the contending stations employ the minimum contention window suggested by the standard [1].

6. Conclusions

This paper proposed an optimization framework for maximizing the throughput of the distributed coordination function (DCF) basic access mechanism at the data link layer of IEEE 802.11 protocols. Based on the theoretical derivations, as well as on simulation results, a simple model of the optimized DCF throughput has been derived. Such a model turns out to be quite useful for predicting the aggregate throughput of the DCF in a variety of network conditions.

For throughput modeling, we considered general operating conditions accounting for both nonsaturated and saturated traffic in the presence of transmission channel errors identified by the packet error rate . Simulation results closely matched the theoretical derivations, confirming the effectiveness of both the proposed DCF model and the cross-layer optimization algorithm.