The sharply increasing volume of data transferred over satellite networks requires these networks to provide quality of service (QoS). However, the upsurge in data flows leads to network congestion, impeding the ability to offer QoS. Congestion control mechanisms deployed in ground networks have been thoroughly studied, but those for satellite networks have received far less attention. As satellite networks are now important supplements to ground backbone networks, this paper carefully analyzes the current challenges of developing a QoS-oriented congestion control mechanism for satellite networks. On this basis, such a mechanism, QMCC (QoS-oriented mechanism of congestion control for satellite networks), is described in detail. Under QMCC, the source nodes use an equation to compute the sending rate for each data flow, while the intermediate nodes continuously measure real-time packet-loss rates for timely adjustments. Simulation results indicate that QMCC provides superior congestion control performance and raises network throughput without degrading QoS.

1. Introduction

Satellite networks provide broader coverage and easier access than conventional ground networks, so they are regarded as important supplements to ground backbone networks. They differ markedly from ground networks, however, in their time-varying topologies, long link delays, and nonnegligible BERs (bit-error rates), which make it impossible to transplant mature ground methods directly into satellite networks [1].

Nowadays, QoS services are required by data flows from many applications [2]. QoS services simultaneously take bandwidth, PLRs (packet-loss rates), delay-jitter rates, and other parameters into consideration during transmission, instead of delay alone. Different applications declare different QoS restrictions based on their data-generation characteristics. For example, video chatting is sensitive to delay and bandwidth constraints but can tolerate a certain amount of packet loss; e-banking applications may tolerate longer delays but demand extremely low PLRs. Congestion caused by growing traffic has become a bottleneck of network development and directly causes significant degradation of some QoS services. According to where control is exerted, mature congestion control mechanisms are usually categorized into two classes—source control and intermediate control. Source control mechanisms, like the TCP/IP protocol suite (Tahoe, Reno, NewReno, and Sack) [3], exert control at the source ends. QoS requests, however, require further control beyond the sources. Intermediate mechanisms introduce operations at intermediate network nodes such as routers or switches and are thus capable of capturing the latest states of network elements (links and nodes). Beyond these two classes, some interdisciplinary studies have succeeded in constructing systematic control models; regrettably, they do not fit the satellite network environment.

A complete QoS service contains two phases, QoS routing and QoS control (or QoS guarantee). QoS routing finds an optimal path for every data flow, and QoS control accordingly guarantees the end-to-end transmission quality. For satellite networks, QoS routing problems are mainly solved heuristically by combinatorial optimization methods such as swarm intelligence or genetic algorithms, which are not relevant to this paper. Many QoS control mechanisms have been proposed, including IntServ, DiffServ, SCORE (Stateless Core), and MPLS (multiprotocol label switching) [4]. Among them, DiffServ is the most deeply studied and widely tested and has been shown to partly satisfy satellite-environment restrictions. Even so, to the best of our knowledge, no published literature has studied a QoS-oriented congestion control mechanism for satellite networks.

This paper concentrates on the following aspects: we first analyze why mature congestion control mechanisms cannot be transplanted to QoS-based satellite networks (Section 2); accordingly, a congestion control mechanism, QMCC, which performs congestion control at both source and intermediate nodes, is illustrated, together with its QoS control strategies (Section 3); an overview of QMCC is then given (Section 4); simulations are conducted and the results analyzed to verify the superiority of QMCC over some mature mechanisms in satellite environments (Section 5); finally, conclusions and future topics are briefly indicated.

2. Limiting Factors

In this section, some unavoidable factors hindering the development of QoS-oriented congestion control in satellite networks are discussed.

2.1. Satellite Environments

Compared with fixed ground networks, the following features of a satellite network impact the mechanism design [5]: (a) longer link delays, (b) higher link BERs, (c) higher delay-jitter rates, and (d) asymmetrical data links.

2.2. Data Flow Control

Some elaborately designed mechanisms attempt to allocate the generated data evenly over the whole network to disperse bursty or unbalanced data flows; however, the most elementary and effective way to avoid congestion is to restrict the amount of injected data. Nowadays, QoS control is usually achieved by Premium Service under the DiffServ structure [6], in which admission control is usually performed at edge nodes (especially source nodes). Under DiffServ, three topics require deeper research, as follows.

(a) Flow Control by Flow Shaping. Under the DiffServ structure, the QoS-oriented EF (expedited forwarding) flows [7] need to be shaped before entering the network for coarse-grained resource reservation. But the process of flow shaping may itself degrade QoS. Figure 1 depicts the impact of shaping on a flow.

Functions R_in(t) and R_out(t) represent the cumulative arriving and leaving traffic at a source node, respectively.

The delay a packet arriving at moment t suffers before leaving is

d(t) = inf{τ ≥ 0 : R_in(t) ≤ R_out(t + τ)}.   (1)

The number of packets in the buffer at moment t is

Q(t) = R_in(t) − R_out(t).   (2)

It is notable that data flows may violate their original QoS requests after shaping. For example, video packets delayed too long in the buffer may severely degrade the image quality and finally frustrate the application. Thus, for an EF flow f, flow shaping must obey the following.

The upper bound of packet delays after shaping is tolerable:

sup_t d(t) ≤ D_th.   (3)

The upper bound of the flow bandwidth reduction after shaping is tolerable:

r − r′ ≤ B_th,   (4)

where r and r′ are the flow rates before and after shaping and B_th denotes the resolution to flow disturbance at the destination nodes. The thresholds D_th and B_th can be expressed as functions of the QoS parameters together with their estimated values.

Given that (3) is satisfied, the flows are proved to meet the original QoS constraints with certain probabilities, which can be denoted with an error term specified in Section 4.

For other flows, packets may encounter failures despite the theoretical feasibility of the routed path. In conclusion, the source nodes have to take flow shaping into consideration.
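To make the shaping check above concrete, the following sketch assumes a standard (σ, ρ) token-bucket greedy shaper fed by a peak-rate-limited flow; the function names and the network-calculus delay bound σ(1/ρ − 1/p) are illustrative assumptions, not taken from the paper.

```python
def shaping_delay_bound(burst_size, token_rate, peak_rate):
    """Worst-case delay (s) added by a (sigma, rho) token-bucket shaper
    to a flow arriving at peak_rate: a burst of burst_size packets that
    arrives at peak_rate drains at token_rate (network-calculus bound)."""
    if peak_rate <= token_rate:
        return 0.0  # shaper never throttles the flow
    return burst_size * (1.0 / token_rate - 1.0 / peak_rate)

def admits_flow(burst_size, token_rate, peak_rate, delay_threshold):
    """An EF flow passes shaping only if the added delay stays within
    the tolerable bound D_th of constraint (3)."""
    return shaping_delay_bound(burst_size, token_rate, peak_rate) <= delay_threshold
```

A source node would run such a check before admitting an EF flow, rejecting or renegotiating flows whose shaping delay would break the declared QoS.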

(b) Flow Control by Bandwidth Guarantee. Under DiffServ, the performance guarantee for PS (premium service) streams is realized through EF PHB (per-hop behavior) at each node. As PHBs take only delays and PLRs into consideration when describing the next hop, the intermediate nodes can hardly change the arriving or leaving flow rates. Therefore, precise bandwidth guarantees for EF flows are assigned to the end nodes, particularly the sources.

(c) Flow Control by Congestion Control. For congestion control, the source ends must change the injected flow rates in a timely manner according to the latest network states, acquired by capturing network feedback.

2.3. Relationships between QoS Control and Congestion Control

Congestion control and QoS control are often confused in many contexts. The former addresses network-layer macroproperties while the latter focuses on flow-based microperformance. Congestion control is usually regarded as the core of QoS control [3], since an uncongested network is necessary to deploy QoS control strategies such as buffer management and queue scheduling. In this paper, congestion control is designed on a QoS basis to improve network throughput without harming QoS parameters.

This paper will reconsider the above factors and propose a novel QMCC mechanism.

3. QoS-Oriented Mechanism of Congestion Control for Satellite Networks

In this section, we describe QMCC in detail.

3.1. Equation-Based Source Congestion Control

In satellite networks, congestion control strategies can rely on neither feedback ACK signals (e.g., TCP Tahoe) nor real-time RTT (round-trip time) estimations (e.g., TCP Vegas). In our work, an equation-based strategy is applied at the source. Instead of implicitly adjusting sending rates through sliding congestion windows, the proposed method explicitly changes rates according to preconfigured equations maintained at the source nodes; the equations are updated by timely network feedback.

3.1.1. Rate Equation

An efficient rate equation should (a) contain parameters oriented to feedback and (b) be comparable to steady-state TCP flow rates.

Network feedback changes the equation-based rates to avoid congestion according to the latest network states. Condition (b) does not imply that there must be TCP flows in the network but rather that the mathematical expression of TCP flow rates is concise and straightforward [8]. Comparable flow rates enable moderate transmission, reducing the likelihood of rapid congestion or of prolonged link idleness. Besides, steady-state TCP-comparable flows let every data flow under QMCC compete fairly for network resources.

A typical TCP flow-sending model consists of DA (duplicate acknowledgment) events and TO (time-out) events; Figure 2 shows one. The interval between adjacent DA events is defined as a DAP (DA period), and the interval between two adjacent TO events is defined as a TOP (TO period). The time horizon can thus be expressed as the concatenation of DAPs and TOPs, T = Σ_i DAP_i + Σ_j TOP_j.

Suppose that N packets in total are sent during period T, which contains n_D DAPs and n_T TOPs. Also assume that Y_i packets are transmitted in the i-th DAP (lasting A_i), during which the maximum congestion window size reaches W_i. We can infer that the steady-state rate equals E[Y]/E[A].

The probability of losing a packet in a DAP is p. Independently of A_i and W_i, the number of packets sent before the first loss obeys a Bernoulli-trial (geometric) distribution, so the equations above can be rewritten as

E[Y] = (1 − p)/p + E[W].

The rate is denoted by B = E[Y]/E[A]. The variable Q̂(w) represents the probability of encountering a TO event, with Q̂(w) = min(1, 3/w). The constant Q (not Q̂) is the value of Q̂ when its variable w, the congestion window size, takes the expected value E[W]: Q = Q̂(E[W]). T_0 is the current value of the waiting timeout. Notice that the definition of T_0 is significantly different from that in the TCP protocol suite (whose successive timeout intervals back off exponentially as T_0, 2T_0, 4T_0, …, up to 64T_0) [3]. That is because the delays in the satellite environment are too long for the source node to execute incremental-interval retransmission; instead, it enforces a retransmission after every T_0.

It is relatively simple to consider only DA events; reference [9] provides the derivation of E[W] and the relationship between E[Y], E[A], and E[W] ({W_i} is a Markov regenerative process):

E[W] = (2 + b)/(3b) + sqrt( 8(1 − p)/(3bp) + ((2 + b)/(3b))² ),

where b is the number of packets acknowledged by every ACK signal from the destination nodes, usually one or two.

Summing up the above, the sending rate of a steady-state TCP flow is denoted by

B(p) = 1 / ( RTT·sqrt(2bp/3) + T_0·min(1, 3·sqrt(3bp/8))·p·(1 + 32p²) ).   (16)

Equation (16) is also the sending-rate equation at the source ends under QMCC. Compared with the rates under TCP protocols [9], (16) is obviously accelerated, mainly due to quick reactions to TO events.
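The rate equation above follows the well-known steady-state TCP throughput approximation of [9]; a minimal sketch of how a source node could evaluate it is given below. The default T_0 = 4·RTT is an assumption in the spirit of TCP-friendly rate control, not a value confirmed by the paper.

```python
from math import sqrt

def qmcc_send_rate(rtt, p, b=2, t0=None):
    """Steady-state TCP-comparable sending rate (packets/s) in the style
    of equation (16): rtt is the round-trip time (s), p the packet-loss
    rate, b the packets acknowledged per ACK. t0 defaults to 4*rtt
    (an assumed choice, as in many TCP-friendly models)."""
    if p <= 0.0:
        raise ValueError("loss rate must be positive")
    if t0 is None:
        t0 = 4.0 * rtt
    # First term: DA-driven window dynamics; second: timeout penalty.
    denom = rtt * sqrt(2.0 * b * p / 3.0) \
        + t0 * min(1.0, 3.0 * sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p * p)
    return 1.0 / denom
```

The rate falls monotonically as the fed-back loss rate p grows, which is exactly the behavior the source-side control relies on.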

3.1.2. Feedback Parameters of Control Equation

QMCC uses (16) to explicitly adjust the injected flow rates, given that it accurately reflects the network congestion states. The equation has three variables: the systematic RTT, the waiting-timeout duration T_0, and the packet-loss rate p.

(1) Round-Trip Time. Updating the RTT estimation value through destination feedback, as the TCP protocol suite does, is infeasible in satellite networks. The average RTT in LEO satellite networks remains between 50 and 200 ms, which is too long for the source nodes to keep up with the state changes in the network.

(2) Waiting-Timeout Duration T_0. T_0 is a function of RTT, and different TCP protocols set different functions. QMCC retransmits packets with the assistance of QoS control instead of T_0. Therefore, T_0 is used only to compute comparable steady-state TCP flow rates. In this paper, we suppose that T_0 = 4·RTT, as in many TCP protocols [9].

(3) Packet-Loss Rate p. The packet-loss rate plays important roles in both congestion control and QoS control. Under QMCC, the main obstacles to obtaining a useful value of p are as follows: (a) p, like RTT, cannot be updated through ACK signals because of their obsolescence; (b) current updating strategies waste too many resources; (c) p contributes not only to congestion control but also to QoS control at intermediate nodes. Namely, it is necessary to design a PLR-maintaining strategy performed at both source and intermediate nodes to serve the two interacting mechanisms.

It is obvious that two key parameters of the control equation, RTT and T_0 (a function of RTT), do not get updated adequately under conventional TCP-suite strategies. Instead, QMCC balances the protocol support between source ends and intermediate nodes, introducing more purposeful and continuous flow operations (specified in Section 3.3).

3.2. Intermediate Nodes Congestion Control Strategy

The aforementioned source control adjusts sending rates according to network feedback to avoid or alleviate congestion. However, implementing control at the source ends alone is not sufficient in satellite networks. Meanwhile, QoS requests also require that the intermediate nodes take part in active control and provide network-layer support.

In essence, packet loss usually results from the mismatch between link transmission and node processing. If the processing speed dominates, packets accumulate in the output buffers and are eventually discarded; if the transmission speed dominates, packets accumulate in the input buffers and are likewise discarded. To solve this problem, intermediate nodes are allowed to manage different levels of packets with different priorities.

3.2.1. Strategy Classification

It is commonly accepted to classify intermediate-node strategies into three categories according to their packet-processing algorithms.

(1) Mixed Flow Buffering. Some classical algorithms, such as Droptail, RED (random early detection) [10], and adaptive RED [11], do not distinguish packets from different flows. For a QoS-oriented network, specific QoS requests ask for flow-specific solutions, and mixed buffering apparently does not fit this setting.

(2) Static Diffluence Buffering. The word “static” does not mean that the buffer space for every flow is unchanged; it means that a given space stays fixed during the packet-scheduling process. In other words, nodes do not accept feedback adjustments after the initial space distribution. FQ (fair queuing) and WFQ (weighted FQ) are typical examples: they allocate the buffer space evenly or proportionally to flow injection rates before any data transmission. This static, or initial, diffluence buffering mismatches the time-varying service requests and thus is not proper either.

(3) Dynamic Diffluence Buffering. Unlike static buffering, dynamic diffluence buffering enables intermediate nodes to accommodate a certain flow’s buffer size to the latest service requests and network states. Based on this idea, congestion control and QoS control are deployed at intermediate nodes simultaneously. QMCC adopts dynamic diffluence buffering in intermediate nodes.

3.2.2. Packet-Loss Rate Management Strategy Based on Dynamic Diffluence Buffering

To provide the declared diffluence buffering, intermediate nodes have to recognize classified packets and allocate them to different buffers; meanwhile, QoS control is carried out simultaneously among the nodes. In this subsection, a PMDB (packet-loss rate management strategy based on dynamic diffluence buffering) strategy is proposed. The strategy focuses on the PLR, the very parameter that couples the two mechanisms together.

All packets are classified into K priority levels by the protocol in advance. Every data flow falls into a group on the basis of its requested PLR guarantee value. Flows in the same group constitute an aggregate flow, whose packets are no longer discriminated while being buffered along the whole path. Suppose θ_1, θ_2, …, θ_{K−1} represent dynamic thresholds, subject to 0 ≤ θ_1 ≤ θ_2 ≤ … ≤ θ_{K−1} ≤ B, where B is the total size of the node buffer. The K separated buffer spaces are occupied by the different aggregate flows, as in Figure 3.

Every intermediate node sets a counter for each aggregate flow and adopts the WALI (weighted average loss interval) algorithm to update the PLR estimation. WALI is described briefly as follows: an LI (loss interval) is defined as the number of transmitted packets between two adjacent packet-loss events, including the nearer lost packet but excluding the farther one. The PLR can thus be expressed as p = 1/LI. Also define LI_i as the i-th LI counted backward from now; the real-time PLR computed by the counter is

p = ( Σ_{i=1}^{n} w_i ) / ( Σ_{i=1}^{n} w_i·LI_i ),   (17)

where the weights w_i decrease for older intervals. This method synthetically takes historical data and recent observations into consideration.
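The WALI estimator described above can be sketched as follows. The weight scheme (equal weights for the newest half of the intervals, linearly decaying afterward, with n = 8 by default) is borrowed from TFRC-style loss-event averaging and is an assumption; the paper does not spell out its exact weights.

```python
def wali_plr(loss_intervals, n=8):
    """Weighted Average Loss Interval estimate of the packet-loss rate.
    loss_intervals[0] is the most recent interval (packets between the
    two latest loss events); older intervals follow. Returns p = 1/avg."""
    k = min(n, len(loss_intervals))
    if k == 0:
        return 0.0  # no loss observed yet
    # Newest half weighted 1.0, then linearly decaying (TFRC-style).
    weights = [1.0 if i < k / 2 else 2.0 * (k - i) / (k + 2) for i in range(k)]
    avg = sum(w * li for w, li in zip(weights, loss_intervals[:k])) / sum(weights)
    return 1.0 / avg if avg > 0 else 0.0
```

Because recent intervals carry full weight while old ones decay, a sudden burst of losses moves the estimate quickly without letting a single outlier dominate.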

The core concept of PMDB is to dynamically adjust the aggregate flows' buffer spaces by comparing their measured PLRs with the guaranteed values. Once a guarantee is not achieved, the buffer for the corresponding aggregate flow tends to occupy space from other buffers. Figure 3 shows the dynamic diffluence buffering model: once the counter for aggregate flow i detects that its current PLR exceeds the guaranteed value G_i, the corresponding buffer occupies a space of size Δ, with a probability given by the SSPF, from the other buffers. The SSPF (space shift probability function) denotes the probabilities of space occupation. The SSPF should depend on the following variables.

(1) Leisure Degree. It is intuitive that an aggregate flow prefers to occupy space from buffers that are emptier. In other words, aggregate flows with superior PLR performance tend to be deprived of buffer space, and the larger their spare space, the higher the probability that their room is occupied.

(2) Priority. Besides leisure degrees, QoS control implies that higher-level aggregate flows are entitled to higher quality, including in the PLR index. Thus, higher-priority flows are less likely to give up such space.

The SSPF should be subject to the normalization constraint Σ_{j≠i} P_i(j) = 1 with P_i(i) = 0, where P_i(j) is the probability that aggregate flow i occupies space from the buffer of aggregate flow j.

There is no analytical relationship between leisure degree and priority; thus, P_i(j) can be decomposed as

P_i(j) ∝ α·f(l_j) + (1 − α)·g(k_j).

The first term represents the effect of the leisure degree l_j and the second term the effect of the priority k_j. The weighted sum of the two effects finally determines the probability by which aggregate flow i occupies space from aggregate flow j. α is the weight, which can be preset or adaptively updated.

A unique SSPF is constituted by the discrete values, normalized as

P_i(j) = [ α·f(l_j) + (1 − α)·g(k_j) ] / Σ_{m≠i} [ α·f(l_m) + (1 − α)·g(k_m) ].

P_i(j) is then completely determined by the values of l_j and k_j.

The problem is thus simplified to seeking an appropriate distribution satisfying the relationship between the SSPF and the leisure degrees/priorities. In this paper, two elementary functions are sufficient:

f(l) = e^{λl} (λ > 0),  g(k) = k^{−μ} (μ > 0).

They are adopted for three reasons, as follows.

(1) Performance Restriction. Intermediate nodes have relatively low OBP (on-board processing) abilities. Moreover, the update frequency is too high to afford more complicated functions.

(2) Variable Restriction. f is an exponential function; f′ > 0 implies that the more empty space a flow buffer owns, the more likely it is to be deprived of some space; f″ > 0 implies that, as the leisure degree increases, the probability increment of space occupation also increases. The positive second-order derivative guarantees that a leisured flow buffer is more likely to provide space than a saturated one. g is a power function; g′ < 0 means that the higher the level an aggregate flow belongs to, the less likely its space is occupied.

(3) Parameter Restriction. Both functions carry free parameters, λ and μ, to configure their sensitivity to the independent variables. A larger λ makes the function more sensitive to leisure degrees; similarly, a larger μ makes it more sensitive to priorities. In the extreme case λ = μ = 0, the SSPF degrades to the uniform distribution, and a random flow is selected to offer space.

P_i(j) can now be expressed in detail. If the weight α is empirically precomputed, the function cannot accommodate the real network throughput well. Instead, an adaptive method computes the mean and standard deviation of the buffer states: a large mean indicates that some buffer space is comparatively spare and available. As a response, the function increases the weight of leisure degrees, namely α. If the mean remains small but positive, the standard deviation is further compared: a smaller deviation suggests saturated and steady states throughout all buffers, so no further adjustment is necessary; a larger deviation, on the other hand, suggests the simultaneous existence of overflowed and overleisured spaces, and α should be increased immediately. If the mean is negative and the deviation is small, all the buffers are nearly full, and the function moderately reduces α to raise the weight of priorities; if the deviation is large, there is still some spare space and α can be further augmented.
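The SSPF construction above can be sketched as follows, under the assumed elementary functions f(l) = e^{λl} and g(k) = k^{−μ}. The container layout (a list of (leisure_degree, priority) pairs) and all names are illustrative; priorities are assumed to be positive integers with larger meaning higher priority.

```python
import math

def sspf(buffers, i, alpha, lam=1.0, mu=1.0):
    """Space Shift Probability Function: returns, for aggregate flow i,
    the normalized probability of occupying space from each flow j.
    buffers: list of (leisure_degree, priority >= 1) pairs, one per flow.
    alpha weights leisure degree against priority, as in the text."""
    scores = []
    for j, (leisure, prio) in enumerate(buffers):
        if j == i:
            scores.append(0.0)  # a flow never occupies its own space
        else:
            f = math.exp(lam * leisure)  # f' > 0, f'' > 0: leisured buffers yield first
            g = prio ** (-mu)            # g' < 0: high-priority buffers yield last
            scores.append(alpha * f + (1.0 - alpha) * g)
    total = sum(scores)
    return [s / total for s in scores]  # discrete distribution over victims
```

Setting alpha = 1 makes the choice depend purely on leisure degrees; alpha = 0 makes it depend purely on priorities, matching the two limiting behaviors discussed in the text.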

Δ, the size of the space occupied by flow i after every iteration, is also made variable. When p is updated quite frequently through (17), Δ = 1 is enough for most situations to ensure that the buffer sizes dynamically match the flow rates. But for high-peak burst flows, it cannot avoid enormous packet losses. So in PMDB, Δ is made variable to strengthen robustness toward burst flows. The aforementioned counter provides an efficient tool to detect bursts: if the changing rate of p exceeds a top threshold consecutively, the node flags a burst flow and increases Δ. On the contrary, if the rate stays below a bottom threshold in succession, the node reduces Δ back toward 1.

Algorithm 1 gives the pseudocode of the node algorithm for PMDB after receiving packets from aggregate flow i.

θ_1, …, θ_{K−1}  //dynamic thresholds in buffers, s.t. 0 ≤ θ_1 ≤ … ≤ θ_{K−1} ≤ B
TH_average, TH_stdeva, TH_counter  //pre-computed thresholds
LI_1, …, LI_n  //LIs maintained by the node for flow i
S  //state matrix for all buffers in a node
(1) while true do
(2)  if average(S) > TH_average then  //Lines 2–11 decide according to all buffer states
(3)   α ← α + δ;  //increase the weight of leisure degrees under large mean
(4)  else if 0 < average(S) <= TH_average then  //small but positive mean
(5)    if stdeva(S) > TH_stdeva then α ← α + δ;
(6)    end if
(7)  else if stdeva(S) > TH_stdeva then  //negative mean
(8)     α ← α + δ;
(9)  else α ← α − δ;  //increase the weight of priorities under steady full buffers
(10)    end if
(11)   end if
(12)   if c_i >= TH_counter then Δ ← Δ + 1;  //PLR alarm counter exceeds threshold
(13)   else Δ ← max(Δ − 1, 1);
(14)   end if
(15)   if state == inbuffer(arrive_packet(i)) then LI_1 ← LI_1 + 1;  //regular receiving
(16)   else if state == drop(arrive_packet(i)) then  //when packet loss is detected
(17)     if p_i > G_i then c_i ← c_i + 1;  //increase counter value
(18)     else c_i ← c_i − 1;  //else reduce counter value
(19)     end if
(20)     p_i ← WALI(LI_1, …, LI_n);  //update real-time packet-loss state in buffer
(21)     j ← select(P_i(·));  //space of flow j is selected with a probability as in the SSPF
(22)     while Δ > 0 do  //Lines 22–30 process the practical space transfer
(23)       if j < i then  //occupying spaces from lower-level flows
(24)        for m ← j to i − 1 do θ_m ← θ_m + 1;
(25)        end for
(26)       else if j > i then  //occupying spaces from higher-level flows
(27)        for m ← i to j − 1 do θ_m ← θ_m − 1;
(28)        end for
(29)       end if
(30)       Δ ← Δ − 1; end while
(31)     for m ← n to 2 do LI_m ← LI_{m−1};  //Lines 31–33 update the LIs in buffers of flow i
(32)     end for
(33)     LI_1 ← 0;
(34)   end if
(35) end while

3.3. QoS Control in QMCC

On the basis of congestion control, QMCC can also provide a certain degree of QoS control. QMCC effectively uses the processing capabilities at the source ends and intermediate nodes to negotiate QoS guarantees through protocol interactions. For unsatisfiable requests, the mechanism asks the sender to amend the requests or even denies them. Three important and common parameters are considered in this paper; others can be added similarly.

3.3.1. Delay Control

The time delay is unquestionably the most important QoS parameter, as it is perceived explicitly by client applications. As QMCC is based on DiffServ, the control is realized through PHBs (per-hop behaviors), and a general core-stateless network structure named VTRS (virtual time reference system) is selected to provide time-delay guarantees [12].

VTRS uses virtual timestamps to mark the delays. After entering the network from the source ends, packets have state added to their headers. Intermediate nodes do not maintain per-flow state (that is why the core is stateless); they read and update the state carried in packet headers. The process of updating the virtual-timestamp item in the state is deemed the process of guaranteeing the per-hop delay. With the assistance of packet-scheduling methods, the satellite network achieves a certain probability of delay control at the whole-system level. The following paragraphs briefly derive the end-to-end delay characteristics in VTRS under QMCC.

First of all, some definitions are given: a_j^k: the moment when packet k (of aggregate flow i) arrives at node j; d_j^k: the moment when packet k leaves node j; ω_j^k: the virtual timestamp of packet k arriving at node j; ν_j^k: the virtual timestamp of packet k leaving node j; L^k: the length of packet k; r_i: the actually reserved flow rate for flow i; D^k: the accumulated delay packet k suffers.

After flow shaping, the arriving packets have to satisfy the rate constraint

a_1^{k+1} ≥ a_1^k + L^k / r_i.

Also, the moment packet k leaves node j is

d_j^k = a_j^k + L^k / r_i,

where r_i is computed through (16).

Neglecting the propagation delays, the accumulated processing delay after traversing h nodes can be expressed by a recursion formula:

D_h^k = D_{h−1}^k + (d_h^k − a_h^k).

It can be inferred that

D_h^k = Σ_{j=1}^{h} (d_j^k − a_j^k).

The virtual arriving timestamp must be no sooner than the actual arriving moment; namely, ω_j^k ≥ a_j^k.

Thus, the virtual leaving timestamp is

ν_j^k = ω_j^k + L^k / r_i.

The arriving virtual timestamps of adjacent packets should not exceed the restriction of the data rate:

ω_j^{k+1} ≥ ω_j^k + L^k / r_i.

An error term ψ_j is defined in [12] to describe the effect of QoS control. For any packet k in flow i satisfying d_j^k ≤ ν_j^k + ψ_j, node j is defined to guarantee flow i a promissory flow rate r_i with error term ψ_j. So when packet k leaves node j, the virtual timestamp is modified to

ω_{j+1}^k = ν_j^k + ψ_j + π_{j,j+1},

where π_{j,j+1} is the propagation delay from node j to node j + 1.

When node j + 1 receives the packet, it repeats the same process—reading the timestamp, scheduling the packet in accordance with the embedded error term, and updating the timestamp—until the packet arrives at the destination node. The upper limit of the end-to-end delay is denoted by

D^k ≤ Σ_{j=1}^{h} (L^k / r_i + ψ_j) + Σ_{j=1}^{h−1} π_{j,j+1}.
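The per-hop relations above telescope into the end-to-end bound; a sketch of the algebra, under the reconstructed notation (per-hop transmission at the reserved rate r_i, scheduling error term ψ_j, propagation delay π_{j,j+1}), is:

```latex
% Per hop: packet k leaves node j no later than its virtual finish time
% plus the scheduling error term of the node.
d_j^k \le \nu_j^k + \psi_j = \omega_j^k + \frac{L^k}{r_i} + \psi_j
% The next virtual arrival adds only the propagation delay:
\omega_{j+1}^k = \nu_j^k + \psi_j + \pi_{j,j+1}
% Telescoping over h hops (using \omega_1^k \ge a_1^k) yields
d_h^k - a_1^k \;\le\; \sum_{j=1}^{h}\Bigl(\frac{L^k}{r_i} + \psi_j\Bigr)
              \;+\; \sum_{j=1}^{h-1}\pi_{j,j+1}
```

Each hop thus contributes one transmission term and one error term, and the propagation delays enter additively, which is why the bound grows linearly in the hop count h.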

The error term has a huge effect on QoS control performance, which is mainly decided by the packet-scheduling algorithm in use. In Section 5, typical scheduling algorithms are evaluated for their effects on QoS control.

The above derivation illustrates the basic end-to-end delay guarantee framework of QMCC. As discussed in Section 2.2, extra delays caused by flow shaping may break a promissory guarantee. Therefore, on receiving a packet, the source end estimates the threshold value from the tolerable end-to-end budget minus the shaping delay. A data packet is allowed to enter the network only when this check is satisfied. For unqualified requests, the source end stops sending the data flow and starts a series of backoff retries. Repeated retry failures immediately trigger a negotiation with the application to lower the QoS requests or even deny service. Being not central to QMCC, this topic is not detailed further.

3.3.2. Bandwidth and PLR Control

Under the DiffServ structure, QoS is mainly guaranteed through EF PHB while PS streams traverse the intermediate nodes. But the design of PHB focuses on just two parameters—delay and PLR; to be more specific, they are loaded by every node so that it can take the corresponding measures. The delay characteristic has been treated above using error-term estimation.

Aggregate flows with different priorities possess buffers of different sizes. The spreads between the actually allocated bandwidth values and the promissory values lead to buffer-size adjustments in PMDB; obviously, the allocated value reflects the actual network state more accurately. Moreover, the source ends use an equation-based strategy, which requires prompt and precise PLR observations to feed back into the equation. The stretched delays in satellite networks seriously complicate the capture of instant PLR values. These obstacles all inspire the idea that detecting PLRs at intermediate nodes instead of at the destinations could greatly reduce the difficulty: after all the packets of flow i leave node j, this node generates a control signal including the real-time PLR measurement and sends it to the upstream node. On receiving a control message from downstream, a node extracts the PLR and compares it with its currently maintained value; if the difference is large enough, the signal is forwarded again to the upstream node. The source adjusts the flow-sending rates using the fed-back PLR value. For a flow whose fed-back PLR exceeds the negotiated value, the source temporarily interrupts the data transmission through backoff retries, negotiations, or even denial of service. This method significantly increases the frequency of updating the sending rates: the average updating interval is reduced from the RTT scale to the per-hop feedback scale.
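The hop-by-hop filtering of PLR control signals described above can be sketched as follows. The relative significance threshold of 10% and all function names are illustrative assumptions; the paper does not specify the exact comparison rule.

```python
def maybe_forward_plr(node_plr, measured_plr, rel_threshold=0.1):
    """A node forwards a downstream PLR signal upstream only when the
    measurement differs significantly from its maintained value, so
    insignificant changes die out instead of flooding the reverse path."""
    if node_plr == 0.0:
        return measured_plr > 0.0
    return abs(measured_plr - node_plr) / node_plr > rel_threshold

def propagate(path_plrs, measured_plr, rel_threshold=0.1):
    """Walk upstream from the detecting node (end of list) toward the
    source (index 0), updating each node's maintained PLR and stopping
    at the first node where the change is insignificant. Returns the
    number of hops the signal traveled."""
    hops = 0
    for idx in range(len(path_plrs) - 1, -1, -1):
        if maybe_forward_plr(path_plrs[idx], measured_plr, rel_threshold):
            path_plrs[idx] = measured_plr
            hops += 1
        else:
            break
    return hops
```

Large PLR changes reach the source in a few hop delays instead of a full round trip, while steady-state noise is absorbed close to where it is measured.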

Intermediate nodes cannot explicitly control the arriving rates of packets. After EF packets enter the network, further precise bandwidth control is achieved only via rate variation at the source, indicating a close coupling between bandwidth and PLR. Considering a flow with QoS requests, the source node computes the necessary sending rate through (16). If this rate cannot cover the promissory bandwidth request, the source directly interrupts the transmission for negotiation. Note that even if the computed rate is sufficient, flow shaping may still reduce the actual injection rate, which must also remain above the promissory bandwidth. Once packets approach congested areas, the buffer space for their aggregate flow shrinks as the PLR increases. Utilizing the PLR feedback strategy, the source nodes perceive the congestion states and adjust the sending rates to lower the data bandwidths. Generally speaking, based on congestion control, the PLR and bandwidth are comprehensively guaranteed.

4. An Overview to QMCC

In this section, a summary demonstrates an overall view of the related mechanisms and corresponding strategies, as depicted in Figure 4. QMCC contains source control and intermediate control, according to the control position. Congestion control and QoS control mechanisms are conducted at both positions.

Congestion control and QoS control are two parallel mechanisms with high-density protocol interactions. Relatively speaking, congestion control is more basic and can be deemed as one necessary part of QoS control.

QoS guarantee is realized by controlling (16) through two parameters—delay and PLR. Delay control mainly relies on receiving updated RTT feedback from the intermediate nodes. This method increases the updating frequency to track the latest network states and then adjusts the source sending rates to avoid congestion. Bandwidth is a nonadditive parameter, determined by the minimum or bottleneck value along the path; controlling bandwidth is therefore analogous to changing the promissory bandwidth. PLR control is usually not deployed at the source ends.

At the intermediate nodes, delay control depends on the VTRS system through an expression containing the error terms ψ_j, based on which updated RTT estimations are fed back to the source nodes. Like delay control, bandwidth restriction relies on reservation at the sources. The dynamic adjustment of the flow diffluence buffers helps to provide bandwidth control. Intermediate PLR control is deployed by measuring PLRs in the diffluence buffers. By cooperating with PMDB (affiliated with the congestion control mechanism), the strategy adjusts buffer sizes according to (23) to satisfy the PLR restriction.

Congestion control at the source node works in the flow shaper and the sending buffer. Utilizing the fed-back RTT values, the source accommodates the sending rates of shaped flows to the demand of (16). Congestion control at the intermediate nodes adopts the PMDB strategy, which is also the core of the whole congestion control. By continuously exchanging the latest PLR values with the packet-loss counters in the intermediate nodes, PMDB not only conducts congestion control but also simultaneously helps improve the performance of QoS control.

5. Simulations and Analysis

In this section, three experiments are designed for performance evaluation, including the following.

Experiment 1. The effect of equation-based rate control at the source nodes on the throughput of the satellite network.

Experiment 2. The effect of PMDB at the intermediate nodes on the throughput of the satellite network.

Experiment 3. The effect of the QMCC QoS control mechanism for the satellite network.

The experiment environment is constructed under NS (network simulator) 2 [13] and STK (Satellite Tool Kit) 7.0. The topological model is a Walker-Star polar constellation with 50 satellites evenly distributed over 5 polar orbits [14]. The height of each satellite is 907 km, and the phase difference between adjacent satellites is 72°. Figure 5 shows the topological diagram of the constellation.

ISLs (intersatellite links) are set as full duplex with a bandwidth of 150 Mbps. The buffer size of each node is 5 MB, and the packet size is fixed at 1000 B. Processing delays in the intermediate nodes are neglected.
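A back-of-envelope check of the link delays implied by this geometry can be made from the constellation parameters above (50 satellites, 5 planes, 907 km altitude), modeling an intra-plane ISL as the straight chord between evenly spaced neighbors on a circular orbit. This is a sanity-check sketch, not part of the paper's simulation setup.

```python
import math

EARTH_RADIUS_KM = 6371.0
LIGHT_SPEED_KM_S = 299_792.458

def intraplane_isl_delay(altitude_km, sats_per_plane):
    """Propagation delay (seconds) of an intra-plane ISL, modeled as the
    chord between adjacent satellites evenly spaced on a circular orbit."""
    r = EARTH_RADIUS_KM + altitude_km
    half_angle = math.pi / sats_per_plane   # half the angular spacing
    chord_km = 2 * r * math.sin(half_angle)
    return chord_km / LIGHT_SPEED_KM_S
```

For 10 satellites per plane at 907 km this gives roughly 15 ms per hop, consistent with the multi-hop RTTs of tens of milliseconds examined in Experiment 1.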

5.1. The Effects of Equation-Based Rate Control on the Throughput of the Satellite Network

QMCC uses the equation-based rate control strategy at the source ends. Experiment 1 compares the throughput of the equation-based control strategy with several TCP control strategies. The simulator randomly generates communication data flows among the 50 nodes, and the session lengths obey a negative exponential distribution with a mean of 5 min. All data packets require only ordinary routing services. For simplicity, the arrival-rate function during every session satisfies the Poisson conditions with expectation value . The source ends use a -smoothed greedy shaper, acting in effect as a leaky-bucket controller, to shape the received data flows and then send them obeying the service function . The whole simulation lasts 120 min, during which a series of average network throughputs under the different control mechanisms is recorded. Three common TCP (or TCP-friendly) protocols serve as references for QMCC: TCP-Vegas, Split-TCP, and STP (satellite transport protocol).
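The leaky-bucket shaping assumed at the source ends can be sketched minimally as follows; the class and parameter names are illustrative, not taken from the paper, and the smoothing and service-function details are omitted.

```python
class LeakyBucketShaper:
    """Minimal leaky-bucket (token-bucket style) shaper of the kind the
    experiment assumes at the source ends.
      rate  -- sustained drain rate in bytes per second
      burst -- bucket depth in bytes"""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.level = 0.0   # bytes currently held in the bucket
        self.last = 0.0    # time of the last arrival, in seconds

    def admit(self, size, now):
        """Return True if a packet of `size` bytes arriving at time `now`
        conforms to the shaping curve; otherwise it must wait or drop."""
        # Drain the bucket for the elapsed interval, never below empty.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + size <= self.burst:
            self.level += size
            return True
        return False
```

A shaper like this bounds each flow to its declared envelope before the equation-based rate control decides how fast the shaped flow is actually sent.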

Figure 6 depicts the average throughput under each mechanism for different average RTT values. QMCC is distinctive in that it does not rely on feedback from the destination nodes. Under small RTTs, QMCC cannot guarantee comparatively higher throughput and sometimes falls below the reference protocols. As the average RTT increases, however, the superiority of QMCC is revealed gradually, indicating strong robustness in long-RTT environments. Under satellite circumstances (RTTs of 50–100 ms), QMCC clearly retains a relatively high throughput.

Figure 7 shows the situations under different BERs. Split-TCP and TCP-Vegas adopt a sending window with per-packet acknowledgements, which makes the congestion window extremely sensitive to BERs. In contrast, QMCC and STP react to BERs more smoothly. For BERs below the alarming threshold set by STP, QMCC demonstrates even better performance.

Figure 8 exhibits the throughputs for a set of average arrival rates. When there is no evident congestion in the network (0-1 Mbps), the transmitted data grow nearly linearly with the injection rates under all mechanisms. Among them, TCP-Vegas achieves an outstanding throughput owing to its aggressive congestion-window adjustment, to the point of being TCP-unfriendly to a certain extent. As the average rates grow, TCP-Vegas is quickly trapped in serious congestion and its throughput declines sharply. Split-TCP adopts segmented feedback and protocol cheating, that is, keeping high sending rates in spite of possible congestion to maintain high performance, which also degrades the throughput and finally gives rise to even worse congestion. STP and QMCC both hold the throughput downtrend steady; QMCC, moreover, controls the sending equation directly and therefore reacts rapidly.

In a nutshell, the typical protocols transplanted from ground TCP, such as Split-TCP and TCP-Vegas, enlarge the window sizes or acknowledge packets by segment to accommodate satellite environments, but their essential idea of controlling through a congestion window cannot avoid hindering throughput. STP does not follow the TCP suite: the source node requests acknowledgements on its own initiative to reduce interaction overheads when confronting detrimental satellite conditions such as long delays and high BERs, and it therefore achieves performance comparable to QMCC. All in all, QMCC presents a higher and more rapidly accommodated throughput owing to its equation-based control strategy.

5.2. The Effects of the PMDB Strategy on the Throughput of the Satellite Network

In Experiment 2, the PMDB strategy is compared with RED [10]. The network topology and shaper parameters remain the same as in Experiment 1, and equation-based control is uniformly adopted at all source nodes. In addition to the packet formation above, the data flows generated by the sessions now require QoS services, whose QoS indexes refer to the values in [15] (the simulation background traffic is derived from data requests abstracted from the U.S. Abilene Satellite Networks). All flows are categorized into 24 priorities (). On this basis, PMDB and RED are compared in terms of throughput.

Designing an adequate SSPF is necessary for PMDB. Parameters and determine the sensitivities to the leisure degree and the aggregate-flow priority. Figure 9 demonstrates the network throughputs under different combinations of and when . Given a fixed background, PMDB outperforms RED on throughput under most combinations. This may result from the fact that the sensitivity parameters can be effectively recovered by altering the weight when the initial values do not suit the injected flows (the light-colored areas). But if the values of and are too slow to recover adaptively, the throughputs fall clearly below that of RED (the dark-colored areas). Another cause of throughput deterioration is drastic fluctuation of the arriving flows, for which PMDB also designs an adaptive method for the value. Figure 10 shows the results when is made adjustable: by changing the buffer shift sizes more frequently to deal with burst flows, the average network throughput is further raised.
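Since the SSPF expression and its symbols are not reproduced in this section, the following is only a hypothetical sketch of the kind of scoring it describes: a weighted combination of a buffer's leisure degree and its aggregate-flow priority, used to decide which diffluence buffer should cede space. The names `alpha` and `beta` stand in for the paper's unnamed sensitivity parameters.

```python
def sspf_score(leisure, inv_priority, alpha, beta):
    """Hypothetical SSPF-style score (not the paper's own expression).
    `leisure` and `inv_priority` are normalized to [0, 1];
    alpha and beta weight the two sensitivities."""
    return alpha * leisure + beta * inv_priority

def pick_ceding_buffer(buffers, alpha, beta):
    """Choose the diffluence buffer best able to cede space: high leisure
    degree and low aggregate-flow priority score highest."""
    return max(buffers,
               key=lambda b: sspf_score(b["leisure"],
                                        1.0 - b["priority"],
                                        alpha, beta))
```

Making the weight adaptive, as Figure 10 explores, would amount to retuning `alpha` and `beta` online as the arriving flows fluctuate.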

5.3. The Effect of QMCC on the Data-Flow QoS Control of the Satellite Network

Through interactions between the source ends and the intermediate nodes, QMCC is able to provide a certain degree of QoS control. Based on Experiment 2, Experiment 3 is conducted to evaluate the QoS control capabilities of QMCC. Different source strategies and intermediate strategies are combined to provide QoS control and serve as comparisons (STP has its own complete network control strategies).

It can be concluded from the results that QMCC presents a comprehensively superior QoS control performance due to its efficient interactive protocol. From the aspect of source strategies, equation-based methods usually achieve a superior delay characteristic; this mainly results from the fact that, under the other methods, the intermediate nodes lack reasonable PLR feedback, while the longer delays between satellites greatly amplify the damage to QoS performance. Split-TCP utilizes segmented responses to give the source ends more frequent feedback and consequently reduce delays. However, this kind of "protocol cheating" cannot reflect the actual situation in the network to control the PLRs, which may lead to disastrous congestion. TCP-Vegas takes only RTT into consideration, resulting in relatively high call-blocking rates and hindering its transportability. From the aspect of intermediate strategies, Droptail is straightforward to realize and regularly adopted, but its failure to distinguish packets of different priorities cannot satisfy the varied requests for PLRs. RED actively discards packets even before congestion really happens, and A-RED adaptively adjusts the discarding probabilities; both guarantee the PLR performance of high-level flows at the expense of slight bandwidth depletion. WFQ, a typical static diffluence buffering strategy, allocates the buffer spaces in terms of the injection-flow bandwidths; under WFQ, little adjustment can be made during packet transmission. Thus it acquires a superb bandwidth-guarantee ability but fails to offer extra space to high-level flows confronting increasing PLRs. STP, like QMCC, is a uniquely designed protocol whose interactive strategies enable a macroscopic optimization over delay, bandwidth, and PLR; its integrated performance approaches that of QMCC (Table 1).

6. Conclusion

This paper proposes a QoS-oriented mechanism of congestion control for satellite networks. First, it analyzes the factors restricting the development of such a mechanism. Then, based on the discussed factors, the three main constituents of QMCC are described in turn: equation-based source control, intermediate-node PMDB control, and synthetic QoS control. By deliberate strategy design and protocol interaction, QMCC is capable of providing larger network throughput than conventional mechanisms under different congestion conditions. Besides, it can also provide a certain degree of QoS control on the basis of congestion control. These features are verified by the simulation results. In future work, improvements to PMDB are worth further study, especially the recovery strategies: the PLR updating method in QMCC can detect congestion quickly, but the recovery process from congestion is still too long.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

This paper is supported by the National Natural Science Foundation of China (Grant no. 61004021).