Review Article | Open Access
Werner Henkel, Khaled Hassan, Neele von Deetzen, Sara Sandberg, Lucile Sassatelli, David Declercq, "UEP Concepts in Modulation and Coding", Advances in Multimedia, vol. 2010, Article ID 416797, 14 pages, 2010. https://doi.org/10.1155/2010/416797
UEP Concepts in Modulation and Coding
First unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but with the introduction of scalable video, UEP has developed into a key concept for the transport of multimedia data. This paper presents an overview of new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptation, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.
Source-coded data, especially from scalable video and audio codecs, come in different importance levels. Thus, the data have to be protected differently. We discuss different means of achieving unequal error protection (UEP) properties on the physical level and by different coding schemes. In physical transport, we concentrate on multicarrier modulation (OFDM, DMT), presenting bit-allocation options that realize UEP properties, additionally using hierarchical modulation as well. Modulation-oriented UEP solutions prove to be a suitable and very flexible tool to define arbitrary protection levels, if access to the actual physical transport is possible. Other options are provided by channel coding, and here, we will especially discuss Turbo and LDPC codes providing UEP. The common approach for implementing UEP properties, as in standard convolutional codes, would certainly be puncturing. Puncturing simply omits some of the output bits according to some pattern, thereby changing the denominator of the rate R = k/n, that is, reducing n and increasing the rate. Since puncturing is a well-known procedure, it will not be discussed here in much detail. Pruning as an alternative has not been discussed as much, except for [2, 3], but allows for changing the code rate in the opposite direction, that is, modifying k in the rate R = k/n. In its easiest form, pruning would just omit certain input bits to the encoder, thereby eliminating some transitions in the trellis. Some aspects of pruning as an additional tool for UEP Turbo-code construction will be studied.
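As a small illustration of the puncturing idea just described (the pattern and bit values are illustrative, not taken from the paper), omitting one of every four code bits of a rate-1/2 code raises the rate to 2/3:

```python
# Illustrative sketch: puncturing omits code bits according to a repeating
# pattern.  With pattern [1, 1, 1, 0], 3 of every 4 code bits are kept, so a
# rate-1/2 mother code becomes a rate-2/3 code.

def puncture(code_bits, pattern):
    """Keep only the code bits whose (repeating) pattern entry is 1."""
    return [b for i, b in enumerate(code_bits) if pattern[i % len(pattern)]]

coded = [1, 0, 1, 1, 0, 0, 1, 1]       # 8 code bits (4 info bits at rate 1/2)
sent = puncture(coded, [1, 1, 1, 0])   # 6 bits survive -> rate 4/6 = 2/3
print(len(coded), len(sent))           # 8 6
```

The decoder knows the pattern and treats the omitted positions as erasures, which is exactly the knowledge asymmetry to pruning discussed later.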
Pruning in an LDPC context would mean eliminating variable nodes in the bipartite Tanner graph by setting these variables to known values, for example, zero. This will in turn modify the check degree of the connected check nodes. It will serve as a tool for designing check-node degree distributions for a given UEP profile.
After some more introductory remarks on UEP for video coding in Section 2, this paper provides a tutorial on possible UEP realizations, starting from multicarrier-modulation-oriented ones in Section 3. We propose a very flexible UEP bit-loading scheme derived from a nonUEP standard method originally proposed by Chow et al.
A treatment of pruned Turbo codes will follow in Section 4. Pruning and puncturing approaches are utilized to adapt the rate of a mother code in both directions.
We also study LDPC codes with an irregular variable-node profile in Section 5 and outline that the UEP properties depend on the construction algorithm delivering the final check matrix.
Finally, we consider modifications of the check-node profile of LDPC codes by pruning in Section 6. This keeps the graph of the mother code for decoding and adapts it just by known (predefined) information, which is what pruning provides. This offers the same advantage that we know from pruning and puncturing in convolutional codes, namely, keeping the actual decoder unchanged. We conclude with Section 7.
Note that whenever error-ratio performances are shown, they will be plotted over E_b/N_0 to be able to really observe the UEP properties with varying channel signal-to-noise ratios. For LDPC codes, however, we may still use E_b/N_0 scales, but since there is no notion of local rates, the overall rate is used, and thus these figures are actually just using renormalized SNR scales preserving the SNR spacing of all performance curves.
2. UEP and Rate Distortion
Before we actually discuss different UEP solutions, we should briefly consider how source-coding qualities, given by spatial and temporal resolution and signal-to-noise ratio, should be related to margin separations or error rates. We start by referencing a work by Huang and Liang, who relate a distortion measure to error probabilities. However, in the end, we will conclude that for the codes that we study here, such a treatment is not suitable. The actual video quality steps (spatial and temporal) to be provided at which SNR steps will be at the discretion of a provider and essentially a free choice.
Huang and Liang simplify the treatment by relating MPEG I, P, and B frames to protection classes with different error probabilities. This, of course, only addresses temporal resolution. As distortion measure, the mean squared error is used, formulated as D = D_s + D_c = D_s + sum_{i=1}^{L} (N_i/N) S_i p_i, where L is the number of protection classes (layers), N is the total number of bits in the source data, and the N_i correspond to the numbers of bits in the different classes. D_s is the distortion introduced by the source-coding layer itself, without considering errors added by the channel, whereas D_c refers to the influence of channel errors. p_i is the channel bit-error rate of class i, and S_i describes the sensitivity of the ith source-coding layer to bit errors.
For a rate-distortion relation, Huang and Liang write the total rate as R = sum_{i=1}^{L} (R_{s,i} + R_{c,i}), where R_{s,i} and R_{c,i} denote the source-coding bit-rate and the added redundancy, respectively, for the ith layer.
This is a treatment that is reasonable for rate-compatible punctured convolutional codes that will result in finite error rates. Capacity-achieving codes, however, will lead to a strong on-off characteristic due to the waterfall region in their BER curves. For such coding schemes, the SNR thresholds will define certain quality steps that will be made available to an end device. Equation (1) would then only represent the quality steps provided by the source coding, since p_i could be assumed to take on almost only the extreme values of zero and 0.5. In the case of capacity-achieving codes, it appears to be more suitable to simply relate source-coding quality steps to classes and these again to SNR steps of the UEP channel coding. The SNR steps will then be realized either by different code rates of a Turbo or LDPC coding scheme or, alternatively, by bit-allocation and/or hierarchical modulation together with channel coding at identical code rates. Combinations of modulation-based realizations of different protection levels and those based on codes with different protection levels are, of course, also possible. The quality steps provided by source coding, as well as the SNR steps provided by channel coding, are then a choice of service and network providers.
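The on-off threshold behavior just described can be sketched in a few lines. The thresholds, their 3 dB spacing, and the function name are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: with capacity-achieving codes, each protection class has an
# SNR threshold below which its BER is ~0.5 and above which it is ~0.  The
# delivered quality step is then simply the number of classes whose threshold
# is met at the current channel SNR.

def available_classes(snr_db, thresholds_db):
    """Return how many protection classes decode (almost) error-free."""
    return sum(snr_db >= t for t in thresholds_db)

thresholds = [2.0, 5.0, 8.0]    # assumed class thresholds with 3 dB spacing
print(available_classes(6.3, thresholds))   # classes 1 and 2 decode
```

This is why, for such codes, relating quality steps directly to SNR steps is more natural than the distortion formulation of (1).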
In the following, we will describe options that we investigated to realize UEP in multicarrier hierarchical modulation, Turbo-, and LDPC-coding. These schemes will prove to be very flexible, allowing the realization of arbitrary SNR level increments between quality classes.
3. Achieving UEP with Multicarrier Bit Loading
We begin our treatment with modulation-based UEP realizations, starting from hierarchical modulation without bit-loading, followed by bit-loading, to finally combine both concepts in bit-loaded hierarchical multicarrier modulation. We use different bit-loading algorithms to give a flavor of options that are possible, although space limitations will not allow to study all UEP modifications of known bit-loading algorithms.
3.1. Hierarchical Modulation
In hierarchical modulation, also known as embedded modulation, different symbols with unequal priorities can be embedded in each other, thereby creating different Euclidean distances between different priority classes. The margin separations between these classes can easily be adjusted using the ratios of constellation distances d_i/d_j, where i and j denote two different classes. There are different hierarchical constellation constructions in the literature, for example, [6, 7]. However, for implementation convenience, we have selected the construction in  as shown in Figure 1. In this figure, we assume 3 different classes, and the performance priority ratios are assumed to be fixed to 3 dB, hence a distance ratio of sqrt(2) between adjacent classes.
(a) A 4-QAM is embedded in a 16-QAM, which is embedded in a 64-QAM
(b) A BPSK is embedded in a 4-QAM, which is embedded in a nonsquare 8-QAM
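As a rough illustration of such an embedded construction (not the exact mapping of Figure 1), the following sketch builds a hierarchical QAM constellation layer by layer; the offset shrink factor of 1/sqrt(2) per layer, corresponding to a 3 dB power step, and all names are assumptions:

```python
import math

# Hedged sketch of an embedded ("hierarchical") QAM construction: each layer
# adds a small QPSK offset around each point of the previous layer.  Shrinking
# the offset by 1/sqrt(2) per layer (a 3 dB step) mirrors the fixed 3 dB class
# separation assumed in the text.

def hierarchical_qam(layers, d1=2.0, ratio=1 / math.sqrt(2)):
    """Return all points of a 4^layers-point hierarchical QAM constellation."""
    points = [0 + 0j]
    d = d1
    for _ in range(layers):
        offsets = [complex(sx * d, sy * d) for sx in (-1, 1) for sy in (-1, 1)]
        points = [p + o for p in points for o in offsets]
        d *= ratio
    return points

const = hierarchical_qam(3)   # 4-QAM embedded in 16-QAM embedded in 64-QAM
print(len(const))             # 64
```

The first (largest) offset carries the most important bits; later, smaller offsets carry the lower-priority refinements.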
Figure 2 depicts the bit-error ratios in the case of AWGN using a fixed hierarchical 2/4/8-QAM modulation (as defined in Figure 1(b)). Figure 2 also shows the comparison between AWGN and a Rayleigh fading channel. (The channel is modeled as independent time-invariant Rayleigh fading composed of different paths (echoes); each path has its own amplitude a_k, delay τ_k, and random uniform phase shift θ_k; the path powers follow an exponentially decaying profile.) The 3-dB margin is strictly preserved in the AWGN case. However, in the case of a Rayleigh fading channel, this margin becomes wider, for example, almost 6 dB at low SER (symbol-error ratio) values. Nevertheless, the order of the classes and the relative margin separations are roughly preserved. The overall system performance deteriorates due to the fixed modulation size and the fixed power allocation. Hence, further adaptation to channel conditions, using adaptive modulation and power allocation, is a very important measure to keep the margin separation and an acceptable performance, as will be discussed in detail in the next section.
3.2. UEP Adaptive Modulation
Traditionally, bit-loading algorithms have been designed to assure the highest possible link quality by achieving equal error probability. This results in performance degradations under variable channel conditions (no graceful degradation). In contrast, UEP adaptation schemes allow different parts of the same video stream/frame to acquire different link qualities. This can be done by allocating different parts of this stream to different subcarriers (with different bit-rates and error probabilities) according to the required QoS. Therefore, current research efforts [10–12] have been directed towards modifying the traditional bit-loading algorithms, for example, the ones by Hughes-Hartogs, Campello, Chow-Cioffi-Bingham, and Fischer-Huber, in order to realize UEP. In , the algorithm by Fischer et al. has been modified in order to allow for different predefined error probabilities on different subcarriers. However, the allocation of subcarriers to the given classes is a computationally complex process. A more practical approach has been described in  using a modified rate-adaptive Chow et al. bit-loading. This one modifies the margin in Shannon's capacity formula for the Gaussian channel by dedicating a different margin γ_j to each protection level j. The advantage here is the flexibility to adapt the modulation in order to realize arbitrary margin separations between the priority classes. The modified UEP capacity formula is given by b(i) = log2(1 + SNR(i)/(Γ · γ_j)), where i is the carrier index and Γ denotes the SNR gap. Equation (3) is rounded to b̂(i) = round(b(i)), with quantization errors e(i) = b(i) − b̂(i). The iterative modification of the overall margin (if the target bit-rate is not fulfilled) is performed in the same way as in the original Chow et al. algorithm, namely by applying the update to one of the margins, for example, to γ_1. N_used is the number of actually used carriers amongst the total of N carriers (we approximated N_used by N in our computations), B is the total actual number of bits, and B_T denotes the total target number.
The margin spacing between two adjacent classes is selected as Δγ_{j,j+1} in dB, such that γ_{j+1} = γ_j + Δγ_{j,j+1}, where j runs from 1 to the number of classes minus one, and γ_1 is computed in the iterative process.
As in the original algorithm, the quantization error is used in later fine-tuning steps to force the bit load to desired values if the iterations were not completely successful.
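The per-class loading rule above can be sketched as follows. The SNR gap of 6 dB, the 3 dB margin steps, and the function name are illustrative assumptions; only the structure (class-dependent margin inside the capacity formula, rounding, quantization error) follows the text:

```python
import math

# Hedged sketch of the UEP-modified Chow et al. loading rule: each carrier's
# bit load uses the margin gamma_j of its assigned protection class, so
# classes with larger margins load fewer bits at the same channel SNR.

def bits_per_carrier(snr_db, margin_db, gap_db=6.0):
    snr = 10 ** (snr_db / 10)
    gap = 10 ** ((gap_db + margin_db) / 10)   # gap and margin combine in dB
    b = math.log2(1 + snr / gap)              # cf. equation (3)
    b_hat = round(b)                          # rounded bit load, cf. (4)
    return b_hat, b - b_hat                   # quantization error, cf. (5)

# same channel SNR, three classes with 0/3/6 dB extra margin:
for margin in (0.0, 3.0, 6.0):
    print(bits_per_carrier(30.0, margin))
```

With a 30 dB carrier SNR, the three classes load 8, 7, and 6 bits, respectively, showing how the 3 dB margin steps translate directly into roughly one-bit steps in the allocation.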
How should the different protection classes now be mapped onto the given subcarriers? An iterative sorting and partitioning approach has been proposed in [10, 11]. The core steps of the algorithm have been further simplified in  using a straightforward linear-algebra approach to initialize close to the final solution. The main steps in [11, 18] are the following.
(1) The subcarriers are sorted in descending order according to the channel state information; the sorted indices are stored in a vector of size N.
(2) In [10, 11], the margin is initially set to an arbitrary value (as in ). In , however, the margin of the middle priority class is initially calculated using the average SNR and then refined further using (6). Thereafter, the noise margins of the other classes are computed according to (7).
(3) The bit load is calculated as in (4); the numbers of subcarriers for each priority class are selected to fulfill the individual target bit-rates, using a binary search, as in .
(4) If the target bit-rate is not fulfilled, all margins are adjusted using (6) together with (7), subsequently repeating from (3).
(5) Else, if the maximum number of iterations is reached without fulfilling the target, further tuning based on the quantization error (5) is performed as in .
The main drawback of the previous two methods [16, 18] is the inefficient energy utilization, where energy is wasted by allocating it to weak subcarriers. The algorithm by Hughes-Hartogs is seen as the energy-optimal bit-loading approach; however, it requires lengthy searching and sorting steps and nonlinear operations. Campello's bit-loading, which is a linear representation of the Levin bit-loading algorithm, is a simple alternative in between Hughes-Hartogs and Chow et al. It achieves almost the same optimum power allocation requiring only a fraction of the complexity due to a quantization of the channel-gain-to-noise ratio, based, again, on Shannon's formula. Carriers of similar channel-gain-to-noise ratio can be gathered into M smaller groups, where M is much smaller than N. Hence, all carriers in each of these groups can be adapted simultaneously. Therefore, the algorithm can easily allocate bits according to these quantized groups; later, it tunes the allocation following the Hughes-Hartogs criterion of minimum power increment. In addition to its simplicity, Campello's bit-loading can be thought of as a practical solution for systems with limited (quantized) channel feedback. However, in this paper, we will only discuss the UEP applications of the Hughes-Hartogs algorithm.
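The carrier-grouping idea behind Campello's approach can be sketched as follows; the 3 dB bin width and the example CNR values are illustrative assumptions:

```python
import math
from collections import defaultdict

# Hedged sketch of carrier grouping: quantizing the channel-gain-to-noise
# ratio (CNR) in dB lets carriers of similar quality be loaded together as a
# group instead of one by one, reducing the search effort.

def group_carriers(cnr_db, bin_db=3.0):
    """Map each quantized CNR bin to the list of carrier indices in it."""
    groups = defaultdict(list)
    for idx, c in enumerate(cnr_db):
        groups[math.floor(c / bin_db)].append(idx)
    return dict(groups)

print(group_carriers([10.2, 11.9, 14.5, 3.1, 4.0]))
# carriers 0 and 1 share a bin, 3 and 4 share a bin, 2 sits alone
```

All carriers within one bin then receive the same initial bit load, with only the final fine-tuning done per carrier.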
Figure 3 shows the performance of the modified Chow et al. algorithm assuming multicarrier modulation with 2048 subcarriers and Rayleigh fading. The performance deteriorates when adding more bits to the first class. The performance of the adaptive (nonhierarchical) Rayleigh fading case (in Figure 3) exceeds that of the hierarchical nonadaptive AWGN case (in Figure 2). This shows the inefficiency of hierarchical modulation.
3.3. UEP Adaptive Hierarchical Modulation
For optimal-power bit-loading algorithms (like Hughes-Hartogs), we opt for hierarchical modulation to realize UEP classes together with bit-allocation instead of carrier grouping, since it realizes different classes more efficiently, without tedious binary searches for the carrier-group separation. In this approach, the highest priority class first consumes the good-SNR subcarriers with the minimum incremental power (calculated based on the maximum allowed symbol-error rate (SER) and the channel coefficients). Thereafter, the bits of the following classes are allowed to be allocated to already used subcarriers in hierarchical fashion if their incremental powers are the minimum ones. However, if the incremental powers are not sufficient to allocate more bits in hierarchical fashion, free subcarriers can instead be used based on the same given margin separation, which is identical to the one given by the hierarchical modulation. Therefore, the only information required to establish our algorithm is the SER of the first class. The SERs of the less important data are calculated using the given margins, as in [11, 12].
Here, we describe the complete power-minimization hierarchical bit-loading algorithm. This algorithm can be considered as margin-adaptive bit-loading, defined as minimizing the total power P = sum_i P(i) for the given target bit-rate, with P(i) = Γ (2^b(i) − 1)/CNR(i), where P(i) is the power allocated to the ith subcarrier, P_T is the given target power, P is the accumulated power, CNR(i) is the channel-gain (|H(i)|²) to noise (σ²) ratio, and Γ denotes the SNR-gap approximation. If the total target rate is tied to a certain value and P_T is still greater than P, then the performance can be further enhanced by scaling up the effective power allocation by the ratio P_T/P. This is called the "margin-maximization" criterion, where the maximum system margin is defined as γ_max = P_T/P. The complete algorithm is as follows.
(1) Initially, allocate zeros to the bit-loading matrix, the power-loading vector, and the incremental-power vector.
(2) Set the class counter c = 1 and the maximum allowed number of bits of each class, such that the summation over all classes is less than the maximum number of bits per carrier.
(3) Compute the incremental power steps ΔP(i) for every subcarrier assuming a single bit addition, using the approximate equation (as in ) ΔP(i) = Γ_c 2^b(i)/CNR(i), where the gap Γ_c of the current class is calculated from the probability of error of the previous class and the margin separation, which is valid for high constellation orders; that is, if the SER of the first class is given, those of the other classes can be computed according to (12).
(4) Find the minimum ΔP(i) among all subcarriers, then increment the bit load b(i) of that subcarrier, as long as the maximum allowed for each hierarchical level is not exceeded.
(5) Increment the power of this subcarrier by the value ΔP(i).
(6) If the target bit-rate of the class is not fulfilled and (a) the sum of the powers is less than the target energy, go to (3); (b) else, stop and go to (8) to finalize the margin-maximization approach.
(7) If the target bit-rate of the class is fulfilled and c is less than the number of given classes, (a) if the sum of the energy is less than the target energy, increment c and go to (3); (b) else, stop the iterations for this class.
(8) Scale up the allocated energy using (10).
The bit-loading matrix has the hierarchy levels as its rows. Nonallocation of leading row(s) means that data of the first protection level have not been put on the corresponding carrier. Nevertheless, lower-priority data may follow and still use a smaller hierarchical signal set.
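The greedy minimum-incremental-power step at the heart of the algorithm above can be sketched in a deliberately simplified, single-class form (no hierarchy levels, no margin maximization); the gap value Γ = 4, the CNR values, and the function name are illustrative assumptions:

```python
# Hedged sketch of greedy Hughes-Hartogs-style loading: repeatedly add one
# bit on the carrier where it costs the least extra power, using the
# incremental-power step DP(i) = Gamma * 2^b(i) / CNR(i) from the text.

def greedy_load(cnr, total_bits, gamma=4.0, max_bits=6):
    bits = [0] * len(cnr)
    power = [0.0] * len(cnr)
    for _ in range(total_bits):
        # incremental power to add one more bit on each eligible carrier
        cand = [(gamma * 2 ** bits[i] / cnr[i], i)
                for i in range(len(cnr)) if bits[i] < max_bits]
        dp, i = min(cand)
        bits[i] += 1
        power[i] += dp
    return bits, power

bits, power = greedy_load([8.0, 4.0, 1.0], total_bits=5)
print(bits)   # the strongest carrier ends up carrying the most bits
```

The full algorithm of the text extends this loop with per-class bit limits, hierarchical reuse of already loaded carriers, and the final margin-maximization scaling.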
Figure 4 depicts the performance of the modified Hughes-Hartogs algorithm when allocating 1024 bits to the first priority class, 2048 bits to the second priority class, and 3072 bits to the least important one. The number of subcarriers is 2048, the same as before, and the maximum number of bits of each modulation layer is 6. It is clear from Figure 4 that the 3 dB spacing is better preserved than in the nonadaptive Rayleigh case of Figure 2 (without bit- and power-loading). We also observe the same performance degradation as with the modified Chow algorithm when adding more bits to the first class. Finally, one can also see from Figure 4 that the nonhierarchical modified Chow algorithm outperforms the hierarchical Hughes-Hartogs UEP, which is due to the power inefficiency of hierarchical constellations.
An example of combining hierarchical modulation schemes with Turbo coding of different rates is given in . How such different rates are obtained in a flexible way, is shown in the following section.
In this paper, we only focus on (almost) capacity-achieving codes. Turbo codes are known for their error-floor behavior; nevertheless, they are suited for smaller codeword lengths, that is, interleaver sizes. If the error floor is an issue, outer Reed-Solomon codes may be applied. There are, of course, manifold options with smaller codeword lengths or delays, such as rate-compatible convolutional codes based on puncturing, which are to some extent addressed in the following Turbo-code section. Just to mention another example, one may also think of multilevel coded modulation with corresponding rate choices according to the desired SNR steps. Actually, there as well, Turbo and LDPC codes can be chosen for the different layers.
4. Achieving UEP with Convolutional Codes for Applications in Turbo Coding
In this section, we describe methods of achieving unequal error protection with convolutional codes which can later be applied in Turbo codes. A straightforward approach to varying the performance of a convolutional code is puncturing, that is, excluding a certain amount of code bits from transmission and thus increasing the code rate R = k/n, where k and n are the numbers of information bits and code bits. Another approach is called pruning, which modifies the number of input bits to the encoder, that is, the numerator of the code rate instead of the denominator. In contrast to [2, 3], we present a more flexible way of pruning in the following. In order to modify the number of encoder input bits, certain positions in the input sequence can be reserved for fixed values, that is, 0 or 1 for binary codes. The code rate of a pruned convolutional code can be given as R_pruned = (kP − p)/(nP), where p denotes the number of digits fixed to a certain value and P is the pruning period. In (14), we assumed that the outputs at pruned positions are still transmitted; pruning in the transmitted systematic information would also lead to subtracting p in the denominator. At the receiver, the pruning pattern is known, such that the reliability of the fixed zeros can be set to infinity (or equivalently, the probability of a zero can be set to 1), which may help decode the other bits reliably.
A possible pruned input sequence to a 2-input encoder would have certain positions fixed to 0 according to the pruning pattern within each pruning period P. Thereby, code rates other than that of the mother code can easily be achieved. Using puncturing and pruning, a family of codes with different error-correction capabilities may be constructed. Figure 5 shows a set of bit-error-rate curves of Turbo codes using pruned and punctured recursive systematic convolutional component codes.
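The resulting rate changes can be tabulated with a few lines of exact arithmetic, assuming the rate expression R = (kP − p)/(nP) discussed above (with the outputs at pruned positions still transmitted); the parameter values are illustrative:

```python
from fractions import Fraction

# Sketch of the pruned-code rate, assuming R = (k*P - p) / (n*P) for p input
# digits fixed per pruning period P of a rate-k/n mother code.

def pruned_rate(k, n, P, p):
    return Fraction(k * P - p, n * P)

print(pruned_rate(1, 2, 4, 0))   # 1/2: no pruning, mother-code rate
print(pruned_rate(1, 2, 4, 1))   # 3/8: one input bit pruned per period of 4
```

Together with puncturing (which raises the rate), this gives the two-sided family of rates used for the UEP Turbo codes of Figure 5.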
When performing a computer search for a suitable pruning scheme, it is usually not sufficient to study pruning patterns alone. Additionally, it has to be ensured that at interval boundaries between blocks of different protection levels, the states at joint trellis segments are the same, as already required in rate-compatible punctured convolutional codes. With the improved approach shown above, this problem automatically does not arise any more, since the decoder operates on one and the same trellis, namely the mother trellis, only varying certain a priori probabilities. Thus, trellis structures do not change at transitions between different protection intervals at all.
Concerning the minimum distance of the subcode, it is in either case greater than or equal to the minimum distance of the mother code since, as stated above, both codes can be represented by the same trellis. Fixing the reliability of a pruned zero to infinity means pruning those paths corresponding to a one. If the minimum-weight path is pruned, the minimum distance of the code increases; if it is not pruned, the minimum distance stays the same.
The proposed technique is in a way dual to puncturing, with comparable complexity. Puncturing increases the rate by erasing output bits, whereas pruning reduces it by omitting input bits (fixing their values). With puncturing, there is no knowledge about the erased bits at the decoder. With pruning, we add perfect knowledge about certain bits and may enhance the decoding performance in iterative decoding through increased extrinsic information. Occasional pruning has also once been used to improve the NASA serial concatenation of convolutional and Reed-Solomon codes in .
We ran an exhaustive computer search in order to find mother codes together with different pruning patterns which behave well in iterative decoding. We used EXIT charts for the evaluation of the convergence behavior. One assessment criterion was, amongst others, the convergence threshold, which is the lowest SNR where error-free decoding is theoretically possible, that is, where the tunnel between the EXIT curves opens and the mutual information between the decoded and the transmitted sequence is one (or very near to one). Furthermore, we report the area between the EXIT curves, since it is a measure of how close the waterfall region is to the Shannon limit and how steep it is. Although this has formally only been proved for the binary erasure channel, it has been observed for the additive white Gaussian noise (AWGN) channel as well. We also give the approximate distance of the convergence point from the Shannon limit in dB. The minimum distance of the mother convolutional codes and their pruned subcodes was determined by evaluating low-weight input sequences. Table 2 shows three convolutional mother codes of different constraint lengths with reasonably fast convergence. The convolutional mother-code rate is the same in all cases, fixing the resulting Turbo-code rate. The pruning-pattern search was performed up to a maximum pruning period.
The code table shows that the higher the degree of pruning (and the lower the code rate), the larger the minimum distance. This is natural, since with a large number of constraints, it is more likely that the minimum-distance path is erased.
5. Necessary Degree Distribution Properties of UEP-LDPC Codes
Irregular low-density parity-check (LDPC) codes are very suitable for UEP as well and can be designed appropriately according to the requirements. Irregular LDPC codes provide UEP simply by modification of the parity-check matrix, and a single encoder and decoder may still be used for all bits in the codeword. The sparse parity-check matrix of an LDPC code may be represented by a Tanner graph, introduced in , which facilitates the description of a decoding algorithm known as the message-passing algorithm. Such a code may be described by variable-node and check-node degree distributions defined by the polynomials λ(x) = sum_{i=2}^{d_v} λ_i x^(i−1) and ρ(x) = sum_{j=2}^{d_c} ρ_j x^(j−1), where d_v and d_c are the maximum variable-node and check-node degrees of the code, respectively. The coefficients of the degree distributions describe the proportion of edges connected to nodes of a certain degree. Within this section, we concentrate on irregular LDPC codes, where the UEP is due to the irregularity of the variable nodes, and the check-node degrees are mostly concentrated. UEP is usually obtained by assigning important bits to high-degree variable nodes and less important bits to the lower degrees [29–31]. Information bits may be grouped into protection classes according to their error-protection requirements or importance, and the parity bits are grouped into a separate protection class with least protection. Generally, the average variable-node degrees of the classes decrease with decreasing importance. Good degree distributions are commonly computed by means of density evolution using a Gaussian approximation.
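Given such edge-perspective degree distributions, the design rate of the code follows from the standard relation R = 1 − (sum_j ρ_j/j)/(sum_i λ_i/i). A small sketch (the example distributions are illustrative, not taken from the paper):

```python
# Sketch: design rate of an irregular LDPC code from its edge-perspective
# degree distributions lambda(x) and rho(x), via
# R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i).

def design_rate(lam, rho):
    """lam, rho: dicts mapping degree -> edge fraction (each sums to 1)."""
    return 1 - sum(r / d for d, r in rho.items()) / sum(l / d for d, l in lam.items())

lam = {2: 0.3, 3: 0.3, 8: 0.4}   # irregular variable-node edge distribution
rho = {6: 1.0}                   # concentrated check-node degrees
print(round(design_rate(lam, rho), 3))
```

In the UEP setting, the high-degree fraction of λ(x) is what gets assigned to the most important bits.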
Based on an optimized degree distribution pair λ(x) and ρ(x), a corresponding parity-check matrix may be constructed. Several construction algorithms can be found in the literature. The most important ones are the random construction (avoiding only length-4 cycles between degree-2 variable nodes), the ACE (approximate cycle extrinsic message degree) algorithm, the PEG (progressive edge-growth) algorithm, and the PEG-ACE algorithm. It is widely believed that an irregular variable-node degree distribution is the only requirement to provide UEP; see, for example, [29, 30]. Surprisingly, we found that constructing parity-check matrices using these different algorithms, based on the same degree distribution pair, results in codes with very different UEP capabilities: the random and the ACE algorithms result in codes which are UEP-capable, whereas the PEG and the PEG-ACE algorithms result in codes that do not provide any UEP.
Since the degree distribution pairs are equal for all algorithms, a more detailed definition of the degree distribution is necessary. The multi-edge-type generalization could be used, but is unnecessarily detailed for our purpose. Instead, a subclass of the multi-edge-type LDPC codes is considered: a detailed check-node degree distribution whose coefficients correspond to the fraction of check nodes that have a given number of edges to variable nodes in protection class C_k, regardless of the other edges.
Figure 6 shows the coefficients of the detailed check-node degree distribution for codes constructed by the ACE and the PEG-ACE algorithm for three protection classes. The results can also be seen as histograms of the number of edges from check nodes to the protection classes. It can be seen that the histograms corresponding to the nonUEP algorithm (PEG-ACE) are much "peakier" than those corresponding to the UEP-capable algorithm (ACE). Knowing that the overall check-node degrees are concentrated, this means that for the PEG-ACE code, a large fraction of check nodes has the same number of edges to the different classes, that is, most check nodes have 4 edges to the first class, 3 edges to the second, and 2 edges to the third, caused by the different variable-node degrees of the classes. In the case of the ACE code, the numbers of edges to the different protection classes vary much more, and there are many different types of check nodes. Based on this detailed check-node degree distribution, one may perform a detailed mutual information evolution of the messages over the decoding iterations.
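The detailed check-node profile just described is easy to compute from a parity-check matrix and a class assignment. A hedged sketch on a toy matrix (the matrix and class split are illustrative, not from the paper):

```python
from collections import Counter

# Hedged sketch of the "detailed" check-node degree distribution: for each
# protection class, histogram how many edges each check node has into that
# class, regardless of its other edges.

def detailed_check_profile(H, classes):
    """H: list of 0/1 rows; classes: list mapping column -> class index."""
    n_classes = max(classes) + 1
    profiles = [Counter() for _ in range(n_classes)]
    for row in H:
        per_class = [0] * n_classes
        for col, bit in enumerate(row):
            if bit:
                per_class[classes[col]] += 1
        for k in range(n_classes):
            profiles[k][per_class[k]] += 1
    return profiles

H = [[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 1]]
classes = [0, 0, 1, 1]    # columns 0-1 in class 0, columns 2-3 in class 1
print(detailed_check_profile(H, classes))
```

"Peaky" profiles (all check nodes with the same per-class edge counts) correspond to the nonUEP PEG-ACE behavior; spread-out profiles correspond to the UEP-capable ACE behavior.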
Figure 7 shows the mutual information of messages going from check nodes to variable nodes of the different protection classes as a function of the number of iterations for an ACE code and a PEG-ACE code. It is obvious that the ACE code does provide different protection levels even after the check-node update operation, while the mutual information values of the PEG-ACE code are almost identical for all protection classes. The reason is that all PEG-ACE check nodes obtain similar values for their updates and average out the UEP coming from the variable nodes (due to their different degrees). In contrast, the check nodes of the ACE code produce different updates for different protection classes, leading to UEP even after a high number of iterations. Based on this, the resulting a posteriori mutual information values of the variable nodes of the different protection classes are depicted in Figure 8. The figure shows the difference between the mutual information and its maximum value 1 on a logarithmic scale. For the UEP-capable ACE code, the most protected class converges much faster than the other protection classes. On the other hand, the different classes of the nonUEP PEG-ACE code have more equal convergence rates.
To confirm that the detailed check-node degree distribution is the key to the UEP capability of a code, a modification of the nonUEP PEG-ACE algorithm, which makes it UEP-capable, is presented. By constraining the edge selection procedure to allow only certain check-nodes to be connected, the resulting detailed check-node degree distribution is made similar to that of the ACE code. The bit-error rates of the codes constructed by the modified PEG-ACE, the original PEG-ACE and the ACE algorithm are shown in Figure 9. The figure shows that the original PEG-ACE code does not provide any UEP to its code bits, whereas the ACE code is UEP-capable. Surprisingly, the code constructed by the modified PEG-ACE algorithm offers even more UEP than the ACE code. The UEP capability provided by the modified PEG-ACE algorithm confirms that the detailed check-node degree distribution is crucial to the UEP capability of a code.
6. Achieving UEP with LDPC Codes Having an Irregular Check-Node Profile
In Figure 6, we observed that a nonconcentrated detailed check-node distribution is an essential ingredient for obtaining UEP properties that are preserved even after many iterations, even if an overall concentrated distribution was chosen to optimize the overall average performance (according to results in ). In the following, we even refrain from the overall concentrated form and design UEP properties by controlling the check-node degree distribution, possibly keeping a regular variable-node degree distribution. It is well known that the quality of a variable node increases with the number of edges connected to it. On the check-node side, a variable node profits from a lower connection degree of the check-nodes it is connected to. Thus, the quality of a variable node is increased by lowering the (average) degree of all check-nodes connected to it.
We consider a check-node to belong to a certain bit-node (priority) class $C_k$ if at least one edge of the Tanner graph connects the check-node with a bit node of that class. By comparing the mutual information at the output of the check-nodes of a priority class with the average mutual information, we obtain a measure of the unequal protection of that class: the larger the difference, the better the class is protected compared to the other bits in the codeword. This difference in mutual information can also be linked to the average check connection degree of class $C_k$, $\bar{d}_c^{(k)} = \sum_{j=d_{\min}}^{d_{\max}} j\,\rho_j^{(k)}$, where $d_{\min}$ and $d_{\max}$ are the minimum and maximum check connection degrees, respectively, and $\rho_j^{(k)}$ is the relative portion of the check-nodes of class $C_k$ with connection degree $j$. To maximize the performance of class $C_k$, $\bar{d}_c^{(k)}$ has to be minimized. In other words, the most protected classes have the lowest average check-node degrees.
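As a minimal illustrative sketch (not the authors' software; the class labels, the toy matrix, and the function name are our assumptions), the average check-connection degree of each priority class can be computed directly from a parity-check matrix:

```python
# Sketch: per-class average check-connection degree from a parity-check matrix.
# A check node belongs to a class if it shares at least one edge with one of
# the class's variable nodes; we average the degrees of those check nodes.
import numpy as np

def avg_check_degree_per_class(H, classes):
    """classes maps a class label to a list of variable-node (column) indices."""
    check_degrees = H.sum(axis=1)              # degree of each check node
    result = {}
    for label, cols in classes.items():
        # check nodes connected to at least one variable node of this class
        connected = np.flatnonzero(H[:, cols].sum(axis=1) > 0)
        result[label] = float(check_degrees[connected].mean())
    return result

# Toy example: 4 check nodes, 8 variable nodes (illustrative values)
H = np.array([[1, 1, 0, 0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 1, 0],
              [0, 0, 1, 1, 0, 0, 1, 1]])
classes = {"C1": [0, 1], "C2": [4, 5, 6, 7]}
print(avg_check_degree_per_class(H, classes))   # → {'C1': 3.0, 'C2': 3.25}
```

In line with the rule above, the class with the lower average check degree (here C1) is the better-protected one.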
Using the detailed representation of the LDPC code, we optimized the irregular check-node profiles for each priority class with Density Evolution. Once the irregularity profile has been optimized, there are specific parity-check matrix constructions that allow one to follow the fixed profile. In the following, we describe a method based on pruning, which has the advantage of being efficient and flexible, just as in the case of UEP Turbo codes in Section 4. With a single fixed (mother) encoder and decoder, the protection properties of the different priority classes can be modified by suitable pruning. With pruning, we control the check-node distribution of the classes. Let $n$ be the length and $k$ the number of information bits of the mother code. Pruning in Section 4 meant simply omitting information bits according to some pruning pattern, that is, fixing them to some known values. Although this can be generalized further by adding a precoder to the mother code, which also offers suitable LDPC UEP solutions, we stick to this simple pruning concept here as well. Presetting $p$ information bits to zero means the creation of a subcode of dimension $k-p$ by eliminating $p$ columns from the parity-check matrix $H$. The subcode has length $n-p$. This is comparable to the length change in the case of pruning a systematic convolutional code. We use systematic LDPC codes, that is, LDPC codes for which the parity-check matrix has an upper triangular structure. The pruning is then performed by just omitting an information bit of the mother code or, equivalently, by removing the corresponding column in the information part of the parity-check matrix (the part which is not upper triangular). By doing so, the dimensions of the subcode matrices will be $(n-k)\times(n-p)$ and $(k-p)\times(n-p)$, respectively. The code rate is obtained as $R' = (k-p)/(n-p)$. Only the indices of the pruned columns of the mother code need to be known at the transmitter and the receiver in order to encode and decode the pruned code.
Thus, there is almost no complexity increase for realizing different UEP configurations with the same mother LDPC code. This shows that the specific matrix construction we advocate, based on a mother code and pruning, is very flexible and can be implemented in practice with low complexity.
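The column-removal step described above can be sketched as follows (an illustrative toy, not the paper's implementation; the systematic column ordering, the function name, and the example matrix are assumptions):

```python
# Sketch: pruning p information bits of a systematic LDPC mother code by
# deleting the corresponding columns of the information part of H.
import numpy as np

def prune_ldpc(H, k, pruned_info_cols):
    """H: (n-k) x n parity-check matrix whose first k columns form the
    information part. Returns the subcode matrix and the new code rate."""
    n = H.shape[1]
    p = len(pruned_info_cols)
    assert all(c < k for c in pruned_info_cols), "only information bits are pruned"
    keep = [c for c in range(n) if c not in set(pruned_info_cols)]
    H_sub = H[:, keep]                      # (n-k) x (n-p)
    rate = (k - p) / (n - p)                # subcode rate (k-p)/(n-p)
    return H_sub, rate

# Toy mother code: n = 8, k = 4 (rate 1/2); prune p = 2 information bits
H = np.array([[1, 1, 0, 1, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 1, 0],
              [1, 1, 1, 0, 0, 0, 1, 1]])
H_sub, rate = prune_ldpc(H, k=4, pruned_info_cols=[1, 3])
print(H_sub.shape, rate)   # → (4, 6) 0.3333333333333333
```

Only the list `pruned_info_cols` needs to be shared between transmitter and receiver, which is what makes the scheme cheap to reconfigure.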
Figure 10 illustrates the pruning in the graph of a short code. Note that the protection level is determined by the average connection degree of the check-nodes connected to the variable nodes of a certain class.
In the following, we describe the iterative pruning procedure in some more detail.
Let the relative portion of bits devoted to class $C_k$ be denoted by $\alpha_k$, with $\sum_k \alpha_k = 1$. An iterative pruning is performed. The procedure is controlled by two key parameters of the $k$th class: its average check connection degree, as defined above, and its maximum check connection degree. For the class portions $\rho_j^{(k)}$ of check-nodes of class $C_k$ with connection degree $j$, the normalization $\sum_{j=2}^{d_{\max}} \rho_j^{(k)} = 1$ holds, where the sum starts at a connection degree of 2, since a check-node should at least be connected to two variable nodes, and the upper limit $d_{\max}$ is the maximum possible check degree. The protection of class $C_k$ can be improved by minimizing its average check connection degree, which requires minimizing the maximum class degree as well. For each considered class, the average degree is lowered as much as possible, reducing the maximum degree step by step, too. For a chosen maximum degree, one would try to assign as many check-nodes as possible with the minimum degree in order to decrease the average. Although this may be interpreted as keeping the degree distribution concentrated inside a certain class, this is not necessary (cf. the results in Section 5). The reduction of the average degree may be realized in different ways. However the steps and the succession in pruning are chosen, including possible reallocations of variable nodes to classes, the following constraints need to be fulfilled and checked every time.
(1) A pruned bit must not be connected to a check-node whose degree already equals the lower limit of the a priori chosen degree distribution.
(2) Involuntary pruning must be avoided, that is, no column of the parity-check matrix H may become independent of all the others, since H would then no longer define a code.
(3) The chosen code rate, given by the total number of check-nodes and the number of bit nodes, must still be achieved.
(4) Convergence at a desired signal-to-noise ratio (near the Shannon capacity limit) must be ensured, typically by investigating EXIT charts.
(5) A stability constraint has to be ensured, formulated as a rule for $\lambda_2$, the proportion of edges of the graph connected to bit nodes of degree 2: $\lambda_2 < e^{1/(2\sigma^2)} / \sum_j \rho_j (j-1)$, where $\rho_j$ denotes the proportion of edges of the graph connected to check-nodes of degree $j$.
In an iterative procedure, the average check connection degree may be further reduced after ensuring that the listed constraints are fulfilled (if the lower limit of allowed degrees has not yet been reached). Further pruning steps are then used to reduce it.
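One pass of the constraint checking might be sketched as follows (a simplified greedy reading of the procedure above; the scoring rule and the restriction to constraints (1) and (2) are our assumptions, and the rate, convergence, and stability checks (3)-(5) would wrap this step in a full design loop):

```python
# Sketch: prune one admissible bit. A candidate is pruned only if every
# check node it touches stays at or above the minimum allowed degree d_lo,
# a simplified form of constraints (1) and (2).
import numpy as np

def prune_one_bit(H, candidates, d_lo=2):
    check_deg = H.sum(axis=1)
    best, best_score = None, None
    for c in candidates:
        checks = np.flatnonzero(H[:, c])
        # no touched check node may drop below the minimum degree d_lo
        if np.any(check_deg[checks] - 1 < d_lo):
            continue
        # heuristic: prefer the bit whose removal relieves the busiest checks
        score = check_deg[checks].sum()
        if best_score is None or score > best_score:
            best, best_score = c, score
    if best is None:
        return H, None          # no admissible candidate left
    return np.delete(H, best, axis=1), best

# Toy example: 2 check nodes of degree 3, both candidates admissible
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
H_new, pruned = prune_one_bit(H, candidates=[0, 1], d_lo=2)
print(pruned, H_new.shape)   # → 1 (2, 3)
```

Calling this repeatedly, and rechecking the remaining constraints after each pass, yields one possible realization of the iterative reduction described above.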
Figure 11 shows an exemplary result obtained by iterative pruning. The curves are based on a regular (2000, 3, 6) LDPC mother code, that is, a code of length 2000 with variable-node degree 3 and check-node degree 6, corresponding to a code rate of 1/2. The subcode obtained by pruning has a correspondingly reduced length and code rate. The classes to be optimized are defined by fixed proportions of the information bits, with the last class containing the whole redundancy.
To compare performances, optimizations were carried out for an unconcentrated code (check degrees between 2 and 6) and an almost concentrated code (check degrees between 4 and 6).
The decoder uses the pruned parity-check matrix of the mother code. The check-node profiles are given in Table 1; the variable-node degree is three.
|(a) Check profile of the almost concentrated code|
|(b) Check profile of the unconcentrated code|
7. UEP in Physical Transport or in Coding?
This paper has pointed out manifold options for realizing unequal error protection, especially concepts developed recently. UEP in multicarrier physical transport is very easy to realize, and the design is very flexible, allowing for arbitrary SNR margins. In UEP Turbo or LDPC coding, the coding scheme has to be optimized in advance, that is, a code search is necessary and the performance has to be investigated beforehand (EXIT charts, simulations). Pruning and puncturing also offer considerable flexibility in choosing the code rate, but the actual performance is only known after the code-design and evaluation steps. However, in digital transport without access to the physical channel, the only option is UEP coding.
When the channel changes its frequency characteristic, the margins between the priority classes will be modified in UEP bit allocation, even if a more robust SNR sorting is used. In UEP Turbo or LDPC coding, the margins will more or less be preserved due to the large interleaver.
Parameters of Figure 5
Generator matrix of the mother code: The code rates given in the figure are those of the Turbo code, that is, the rate- convolutional code results in a rate- Turbo code. The interleaver size was 2160.
Puncturing and pruning pattern: (punctured) (punctured) (pruned) (pruned)
Some of this work was part of the FP6/IST project M-Pipe and was cofunded by the European Commission. Furthermore, the authors appreciate funding by the German national research foundation DFG. Some results of this paper have been prepublished at conferences [10, 17, 41] or will appear in . UEP LDPC codes for higher-order modulation, which were not presented here, have recently been published in ; for results on UEP multilevel codes, the reader is referred to .
- J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC Codes) and their applications,” IEEE Transactions on Communications, vol. 36, no. 4, pp. 389–400, 1988.
- C.-H. Wang and C.-C. Chao, “Path-compatible pruned convolutional (PCPC) codes: a new scheme for unequal error protection,” in Proceedings of the International Symposium on Information Theory, Cambridge, Mass, USA, February 1998.
- W. Henkel and N. von Deetzen, “Path pruning for unequal error protection turbo codes,” in Proceedings of the IEEE International Zurich Seminar on Digital Communications, pp. 142–145, Zürich, Switzerland, February 2006.
- P. S. Chow, J. M. Cioffi, and J. A. C. Bingham, “Practical discrete multitone transceiver loading algorithm for data transmission over spectrally shaped channels,” IEEE Transactions on Communications, vol. 43, no. 2, pp. 773–775, 1995.
- C.-L. Huang and S. Liang, “Unequal error protection for MPEG-2 video transmission over wireless channels,” Signal Processing: Image Communication, vol. 19, no. 1, pp. 67–79, 2004.
- P. K. Vitthaladevuni and M.-S. Alouini, “A recursive algorithm for the exact BER computation of generalized hierarchical QAM constellations,” IEEE Transactions on Information Theory, vol. 49, no. 1, pp. 297–307, 2003.
- “DVB-T standard: ETS 300 744, Digital Broadcasting Systems for Television, Sound and Data Services: Framing Structure, Channel Coding and Modulation for Digital Terrestrial Television,” ETSI Draft, Vol. 1.2.1, No. EN300 744, 1999–2001.
- B. Barmada, M. M. Ghandi, E. V. Jones, and M. Ghanbari, “Prioritized transmission of data partitioned H.264 video with hierarchical QAM,” IEEE Signal Processing Letters, vol. 12, no. 8, pp. 577–580, 2005.
- P. Hoeher, “Statistical discrete-time model for the WSSUS multipath channel,” IEEE Transactions on Vehicular Technology, vol. 41, no. 4, pp. 461–468, 1992.
- W. Henkel and K. Hassan, “OFDM (DMT) bit and power loading for unequal error protection,” in Proceedings of the OFDM-Workshop (InOWo '06), Hamburg, Germany, August 2006.
- K. Hassan and W. Henkel, “Unequal error protection with eigen beamforming for partial channel information MIMO-OFDM,” in Proceedings of the IEEE Sarnoff Symposium (SARNOFF '07), pp. 1–5, Princeton, NJ, USA, May 2007.
- K. Hassan and W. Henkel, “UEP with adaptive multilevel embedded modulation for MIMO-OFDM systems,” in Proceedings of the OFDM-Workshop (InOWo '08), Hamburg, Germany, August 2008.
- D. Hughes-Hartogs, “Ensemble Modem Structure for Imperfect Transmission Media,” U.S. Patents, no. 4,679,227, July 1987, no. 4,731,816 March 1988, and no. 4,833,706, May 1989.
- J. Campello, “A practical bit loading for DMT,” in Proceedings of the IEEE International Conference on Communications (ICC '99), vol. 2, pp. 801–805, Vancouver, Canada, June 1999.
- R. F. H. Fischer and J. B. Huber, “New loading algorithm for discrete multitone transmission,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '96), pp. 724–728, November 1996.
- F. Yu and A. N. Willson Jr., “DMT transceiver loading algorithm for data transmission with unequal priority over band-limited channels,” in Proceedings of the Annual Conference on Signals, Systems, and Computers, vol. 1, pp. 685–689, Pacific Grove, Calif, USA, October 1999.
- W. Henkel, N. von Deetzen, K. Hassan, L. Sassatelli, and D. Declercq, “Some UEP concepts in coding and physical transport,” in Proceedings of the IEEE Sarnoff Symposium (SARNOFF '07), Princeton, NJ, USA, April 2007.
- K. Hassan, G. Sidhu, and W. Henkel, “Multiuser MIMO-OFDMA with different QoS using a prioritized channel adaptive technique,” in Proceedings of the IEEE International Conference on Communications Workshops (ICC '09), Dresden, Germany, 2009.
- H. E. Levin, “A complete and optimal data allocation method for practical discrete multitone systems,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '01), pp. 369–374, San Antonio, Tex, USA, November 2001.
- S. Pfletschinger, Multicarrier modulation for broadband return channels in cable TV networks, Ph.D. thesis, University of Stuttgart, Stuttgart, Germany, 2003.
- B. Barmada, M. M. Ghandi, E. V. Jones, and M. Ghanbari, “Combined turbo coding and hierarchical QAM for unequal error protection of H.264 coded video,” Signal Processing: Image Communication, vol. 21, no. 5, pp. 390–395, 2006.
- N. von Deetzen and W. Henkel, “On code design for unequal error protection multilevel coding,” in Proceedings of the 7th ITG Conference on Source and Channel Coding (SCC '08), Ulm, Germany, January 2008.
- O. M. Collins and M. Hizlan, “Determinate state convolutional codes,” IEEE Transactions on Communications, vol. 41, no. 12, pp. 1785–1794, 1993.
- S. T. Brink, “Convergence behavior of iteratively decoded parallel concatenated codes,” IEEE Transactions on Communications, vol. 49, no. 10, pp. 1727–1737, 2001.
- A. Ashikhmin, G. Kramer, and S. ten Brink, “Extrinsic information transfer functions: model and erasure channel properties,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2657–2673, 2004.
- R. M. Tanner, “A recursive approach to low complexity codes,” IEEE Transactions on Information Theory, vol. IT-27, no. 5, pp. 533–547, 1981.
- F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 619–637, 2001.
- C. Poulliat, D. Declercq, and I. Fijalkow, “Enhancement of unequal error protection properties of LDPC codes,” Eurasip Journal on Wireless Communications and Networking, vol. 2007, Article ID 92659, 9 pages, 2007.
- N. Rahnavard, H. Pishro-Nik, and F. Fekri, “Unequal error protection using partially regular LDPC codes,” IEEE Transactions on Communications, vol. 55, no. 3, pp. 387–391, 2007.
- H. Pishro-Nik, N. Rahnavard, and F. Fekri, “Nonuniform error correction using low-density parity-check codes,” IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2702–2714, 2005.
- S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, “Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 657–670, 2001.
- T. Tian, C. R. Jones, J. D. Villasenor, and R. D. Wesel, “Selective avoidance of cycles in irregular LDPC code construction,” IEEE Transactions on Communications, vol. 52, no. 8, pp. 1242–1247, 2004.
- X.-Y. Hu, E. Eleftheriou, and D. M. Arnold, “Regular and irregular progressive edge-growth Tanner graphs,” IEEE Transactions on Information Theory, vol. 51, no. 1, pp. 386–398, 2005.
- D. Vukobratović and V. Šenk, “Generalized ACE constrained progressive edge-growth LDPC code design,” IEEE Communications Letters, vol. 12, no. 1, pp. 32–34, 2008.
- N. von Deetzen and S. Sandberg, “On the UEP capabilities of several LDPC construction algorithms,” to appear in IEEE Transactions on Communications.
- T. Richardson and R. Urbanke, Modern Coding Theory, Cambridge University Press, Cambridge, UK, 2008.
- T. J. Richardson and R. L. Urbanke, “Multi-edge Type LDPC Codes,” http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.7310&rep=rep1&type=pdf.
- G. Liva, S. Song, L. Lan, Y. Zhang, S. Lin, and W. Ryan, “Design of LDPC codes: a survey and new results,” Journal of Communications Software and Systems, vol. 2, no. 2, 2006.
- K. Kasai, T. Shibuya, and K. Sakaniwa, “Detailedly represented irregular LDPC codes,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E86, no. 10, pp. 2435–2444, 2003.
- L. Sassatelli, W. Henkel, and D. Declercq, “Check-irregular LDPC codes for unequal error protection under iterative decoding,” in Proceedings of the 4th International Symposium on Turbo Codes & Related Topics in connection with the 6th International ITG Conference on Source and Channel Coding, Munich, Germany, April 2006.
- S. Sandberg and N. von Deetzen, “Design of bandwidth-efficient unequal error protection LDPC codes,” IEEE Transactions on Communications, vol. 58, no. 3, pp. 802–811, 2010.
Copyright © 2010 Werner Henkel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.