Iterative Decoding and Cross-Layering Techniques for Multimedia Broadcasting and Communications
A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions
Abstract
Classic linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, are widely used in multimedia transmissions, but their soft-decision decoding still represents an open issue. Among the several approaches proposed for this purpose, an important role is played by the iterative belief propagation principle, whose application to low-density parity-check (LDPC) codes makes it possible to approach the channel capacity. In this paper, we elaborate a new technique for decoding classic binary and nonbinary codes through the belief propagation algorithm. We focus on RS codes included in the recent CDMA2000 standard, and compare the proposed technique with the adaptive belief propagation approach, which is able to ensure very good performance but at the cost of higher complexity. Moreover, we consider the case of the long BCH codes included in the DVB-S2 standard, for which we show that the usage of "pure" LDPC codes would provide better performance.
1. Introduction
In spite of their age, classic families of linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, continue to be adopted in many telecommunication standards. For example, the most recent European standard for satellite digital video broadcasting (DVB-S2) includes an error correction scheme based on the concatenation of an outer BCH code with an inner low-density parity-check (LDPC) code [1]. Classic coding schemes are also adopted for broadcast services implemented over different networks, like packet-switched mobile networks: the American CDMA2000 standard includes RS codes for the deployment of high-rate broadcast data services [2].
Encoding and decoding of BCH and RS codes can be accomplished through very simple circuits that implement operations over finite fields. However, classic decoding techniques rely on hard-decision decoders that allow the correction of up to t = ⌊(d − 1)/2⌋ errors, where d is the code minimum distance and ⌊x⌋ denotes the greatest integer smaller than or equal to x. On the contrary, the use of channel measurements in soft-decision decoders can significantly improve the error correction capability, thus approaching, for high signal-to-noise ratios, the theoretical limit of correcting d − 1 errors [3].
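As a small numeric illustration of the two correction limits just mentioned (d = 7 is an arbitrary example value, not a code from the standards discussed later):

```python
# Correction limits for a code with minimum distance d (d = 7 is arbitrary)
d = 7
t_hard = (d - 1) // 2    # hard-decision guarantee: floor((d - 1)/2) errors
t_soft = d - 1           # soft-decision asymptotic limit cited in [3]
assert (t_hard, t_soft) == (3, 6)
```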
A good review of soft-decision decoding algorithms applied to linear block codes, and RS codes in particular, can be found in [4], where a new approach, based on the iterative belief propagation (BP) algorithm, is also proposed. Thanks to the adoption of BP, LDPC codes can approach maximum likelihood (ML) performance while maintaining low decoding complexity [5].
The BP algorithm works on Tanner graphs, that is, bipartite graphs with variable nodes and check nodes corresponding to code bits and parity equations, respectively. An edge connecting the variable node v_{i} with the check node z_{j} exists if and only if the parity-check matrix associated with the Tanner graph has a 1 at position (j, i).
In order to achieve good performance, BP decoding needs a parity-check matrix with the following characteristics: (i) sparsity (that is, in fact, inherent in LDPC codes), (ii) absence of short cycles in the associated Tanner graph, and (iii) regular or optimized irregular row and column weight distributions. Such properties are rarely ensured by the parity-check matrices of binary cyclic codes. For example, it can be shown that (n, k) BCH codes, where n is the codeword length and k the number of information bits, with rate greater than or equal to 1/2, cannot have Tanner graphs free of length-4 cycles [6].
For these reasons, many alternative solutions have been proposed in the literature for effectively applying BP decoders to generic linear block codes, binary cyclic codes, or specific classes of cyclic codes [7–15]. All these techniques aim at finding, through different approaches, a graph representation for the code that is well suited for BP decoding.
In [7, 8], for example, the generalized parity-check matrix (GPCM) is adopted to reduce the number of short cycles. This approach has been further investigated in [9], where an algorithm is presented that achieves a representation free of length-4 cycles. All techniques based on GPCMs, however, require the introduction of auxiliary bits that do not correspond to transmitted bits and, therefore, carry no information on the channel status; this fact, in turn, may cause performance degradation. In [10], it is demonstrated that Vardy's technique can be used to find sparse parity-check matrices for Reed-Solomon codes.
Perhaps the best technique for soft-decision decoding of linear block codes characterized by dense parity-check matrices is the "adaptive belief propagation" algorithm [4, 11]. The rationale of this method lies in varying the parity-check matrix at each iteration, according to the bit reliabilities, in such a way that the unreliable bits correspond to a sparse submatrix suitable for the BP algorithm. Significant performance improvements with respect to hard-decision decoding and standard BP decoding can indeed be achieved through this method. As a counterpart, its complexity is rather high, and often unsuitable for implementation in real-time (or almost-real-time) applications, such as those required in many multimedia transmissions. As described in [4], this method requires performing a Gaussian elimination at each iteration of the decoding algorithm, which generally entails a great number of operations. Complexity can be somewhat reduced by combining this approach with the Koetter-Vardy algebraic soft-decision decoding algorithm [12], but it remains, in any case, rather high.
In [13], instead, a different approach is attempted: the author proposes to use the so-called extended parity-check matrix (EPCM) in order to obtain a regular Tanner graph associated with the code. The notion of EPCM will be recalled in Section 2.2; the method is very simple and, in principle, yields matrices more suitable for applying BP decoding. Unfortunately, however, for most codes the performance achievable through this method is very poor. Examples will be given in Section 4.
Keeping in mind, on the one hand, the simplicity of the EPCM-based techniques and, on the other hand, the remarkable results of adaptive BP, in this paper we extend an alternative approach we have recently presented [14, 15], based on "spread" parity-check matrices. We improve this approach through the adoption of an adaptive version of the algorithm, where adaptation, however, is much simpler than in [4].
At first, we apply the new method to the case of short BCH codes, where, as we show, it is able to achieve very good performance compared with EPCM-based techniques. Short codes are often used in multimedia communications with very strict requirements on delay and complexity [16]. On the other hand, as mentioned, some important telecommunication standards adopt nonbinary cyclic codes or very long codes for matching the length of LDPC codes in concatenated schemes. For this reason, we also study the applicability of the proposed procedure to RS codes, like those included in the CDMA2000 standard, and to long BCH codes, like the outer codes in the DVB-S2 standard.
The paper is organized as follows. In Section 2, we analyze the parity-check matrix of the considered codes and present some options for its modification. In Section 3, we describe the standard decoding algorithm and the new version working on the spread code. In Section 4, the proposed technique is assessed through numerical simulations. Finally, Section 5 concludes the paper.
2. Parity-Check Matrices of Linear Block Codes
In order to optimize the parity-check matrix for the application of belief propagation decoding algorithms, we first consider binary cyclic codes, which represent particular cases of linear block codes. We obtain an alternative representation of their parity-check matrix by exploiting its cyclic nature. The proposed technique can be applied to BCH codes and can be extended to other families of codes, as will be shown in the following sections.
Given a binary cyclic code C with length n, dimension k, and redundancy r = n − k, each codeword c can be associated with a polynomial c(x) over GF(2). Moreover, all the cyclically shifted versions of c(x), that is, x^{i}c(x) mod (x^{n} + 1), are valid codewords, due to the cyclic property of the code. Within the set of code polynomials in C, there is a unique monic polynomial g(x), with minimal degree r, called the generator polynomial of C. Every codeword polynomial can be expressed uniquely as c(x) = m(x)g(x), where m(x) is a polynomial of degree < k. The generator polynomial g(x) of C is a factor of (x^{n} + 1), and there exists a parity polynomial with degree k, h(x), such that g(x)h(x) = x^{n} + 1. Moreover, since g(x) divides c(x), the following relationship is satisfied:

c(x)h(x) = 0 mod (x^{n} + 1). (1)
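These polynomial relations can be checked numerically; the following sketch uses the small (7, 4) cyclic Hamming code with g(x) = x^3 + x + 1 as a toy example (an assumption of ours for illustration only; the paper's codes are much longer):

```python
# Verifying g(x)h(x) = x^n + 1 and c(x)h(x) = 0 mod (x^n + 1) over GF(2)

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def poly_mod_xn1(p, n):
    """Reduce a GF(2) polynomial modulo x^n + 1 (wrap exponents around n)."""
    out = [0] * n
    for i, pi in enumerate(p):
        out[i % n] ^= pi
    return out

n = 7
g = [1, 1, 0, 1]        # g(x) = 1 + x + x^3
h = [1, 1, 1, 0, 1]     # h(x) = 1 + x + x^2 + x^4

# g(x)h(x) = x^7 + 1, so g(x) divides x^n + 1
assert poly_mul_gf2(g, h) == [1, 0, 0, 0, 0, 0, 0, 1]

m = [1, 0, 1, 1]        # an arbitrary message polynomial of degree < k = 4
c = poly_mul_gf2(m, g)  # codeword polynomial c(x) = m(x)g(x)

# The parity-check identity (1): c(x)h(x) = 0 mod (x^n + 1)
assert poly_mod_xn1(poly_mul_gf2(c, h), n) == [0] * n
```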
2.1. Standard Parity-Check Matrix
The standard form of the parity-check matrix (PCM) of a binary cyclic code is as follows [17]:

H =
| h_k h_{k−1} ... h_0  0   ...  0  |
|  0   h_k    ... h_1 h_0  ...  0  |
|  .                            .  |
|  0   0  ... h_k h_{k−1} ... h_0  |   (2)

where h_j, j = 0, 1, ..., k, are the binary coefficients of h(x).
The form (2) of the parity-check matrix is not suitable for BP decoding: it contains many length-4 cycles, and it has irregular and nonoptimized column weights.
2.2. Extended Parity-Check Matrix
The parity-check matrix (2) is a (nonsingular) submatrix of the extended parity-check matrix (EPCM) of a cyclic code, which has the following form [13]:

H_E = circ(h_k, h_{k−1}, ..., h_0, 0, ..., 0), (3)

that is, H_E is an n × n binary circulant matrix, where each row is obtained through a cyclic shift of the previous row. The form (3) of the parity-check matrix corresponds to a regular Tanner graph, so, at least in principle, it is more suitable for BP decoding.
However, this form of the parity-check matrix contains an even higher number of short cycles than matrix (2). If the number of nonnull coefficients of h(x) increases (e.g., when long or high-rate codes are considered, like in the DVB-S2 standard [1]), H_E has an extremely high number of short cycles that deteriorate performance.
We also observe that H_E has the same density as H, but its Tanner graph contains a larger number of edges; therefore, the decoding complexity is increased by a factor of n/r.
2.3. Reduced Parity-Check Matrix
In order to find a sparser representation of the code parity-check matrix, it is possible to adopt a very simple iterative algorithm that aims at deriving, from the EPCM, a "reduced parity-check matrix" (RPCM), H_R, whose density is lower than that of H_E. This can be done by linearly combining (that is, summing) pairs of rows of H_E. The algorithm relies on the observation that, for a circulant matrix, the number of overlapping 1's between its first row and each other row can be easily computed in terms of the periodic autocorrelation function of the first row.
As an example, Figure 1 shows the periodic autocorrelation function of the first row of H_E (denoted as h_1 in the following) for the (127, 71) BCH code. We observe that, for a null shift, the periodic autocorrelation function takes the (maximum) value of 48, which coincides with the Hamming weight of h_1, denoted as w_1 in the following. We also notice that, for a shift value equal to 4, the periodic autocorrelation function assumes its maximum out-of-phase (that is, for a nonnull shift) value, which is equal to 32. It follows that, by summing the fifth row of H_E to its first row, we obtain a new vector, h_2, with Hamming weight w_2 = 2(w_1 − 32) = 32.
The new vector h_2 provides a valid parity-check equation for the original code, since it is obtained as a linear combination of parity-check vectors. Due to the cyclic nature of the code, any cyclically shifted version of h_2 is a parity-check vector as well. Therefore, h_2 can be used to obtain a new parity-check matrix in circulant form, with reduced density with respect to H_E. In general, given a vector h_i, it is possible to reduce its density through this procedure if its periodic autocorrelation function has a maximum out-of-phase value (that is, out of the null shift) greater than half of its Hamming weight, w_i/2. So, we can apply an iterative density reduction algorithm as follows.
(1) Set i = 1; initialize h_1 as the first row of H_E and w_1 as its Hamming weight.
(2) Calculate the periodic autocorrelation function of h_i and its maximum out-of-phase value a, attained for a shift v. If a > w_i/2, go to step (3); otherwise, stop and output h_i.
(3) Calculate h_{i+1} = h_i + h_i^{(v)} (where h_i^{(v)} represents the cyclically shifted version of h_i by v positions), and its Hamming weight w_{i+1} = 2(w_i − a). Increment i and go back to step (2).

When the algorithm stops, it outputs a binary vector with density less than or equal to that of h_1. This vector is used to obtain the reduced parity-check matrix H_R in the form of a circulant matrix having it as first row.
We say that the algorithm is successful when the RPCM has a reduced density with respect to the EPCM, that is, when the algorithm has executed step (3) at least once.
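The reduction loop of steps (1)-(3) can be sketched as follows (an illustrative implementation of ours, exercised on a toy row rather than a real EPCM first row):

```python
# A sketch of the density-reduction algorithm of Section 2.3;
# np.roll implements the cyclic shifts of a circulant's first row.
import numpy as np

def reduce_density(h_row, max_iters=50):
    """Iteratively replace h_i by h_i + (h_i shifted by v), where v maximizes
    the out-of-phase periodic autocorrelation, while that maximum exceeds
    half the row's Hamming weight."""
    h = np.array(h_row, dtype=np.uint8)
    n = len(h)
    for _ in range(max_iters):
        w = int(h.sum())
        # Periodic autocorrelation = number of overlapping 1's per nonzero shift
        acf = [int(np.sum(h & np.roll(h, v))) for v in range(1, n)]
        a = max(acf)
        if 2 * a <= w:            # stop: no shift overlaps in more than w/2 ones
            break
        v = 1 + acf.index(a)
        h = h ^ np.roll(h, v)     # GF(2) sum of the row with its shifted copy
        assert int(h.sum()) == 2 * (w - a)   # weight update rule of step (3)
    return h

# Toy dense row (not a row from the paper's codes)
row = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
reduced = reduce_density(row)
assert int(reduced.sum()) <= int(row.sum())
```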
2.4. Spread Parity-Check Matrix
After having derived the reduced parity-check matrix H_R, the effectiveness of BP decoding can be further improved by "spreading" the code at the decoder by means of a simple s-times repetition of each codeword (affected by channel noise) of the original code. Obviously, the "spread code" must have a valid parity-check matrix. For this purpose, we identify a set of s binary circulant matrices, H_1, H_2, ..., H_s, that sum into H_R. In formula,

H_R = H_1 + H_2 + ... + H_s. (4)

If c is an n-bit codeword of the original code, it must be

H_R · c^T = 0, (5)

where superscript T denotes vector transposition, and 0 represents the null vector. Let us consider the following "spread parity-check matrix" (SPCM):

H_S = [H_1 | H_2 | ... | H_s], (6)

and the following spread codeword, obtained by repeating s times the generic codeword c:

c_S = [c, c, ..., c]. (7)

It follows from these definitions that

H_S · c_S^T = H_1 · c^T + H_2 · c^T + ... + H_s · c^T = H_R · c^T = 0. (8)

Therefore, H_S is a valid parity-check matrix for the spread code, and it is used by the modified decoding algorithm to work on a more efficient graph.
In order to minimize the density of 1 symbols in H_S, we choose particular sets where, according to (4), the blocks H_l have Hamming weights that sum into the Hamming weight of H_R. This way, the density of 1 symbols in H_S is reduced by a factor s with respect to that of H_R. We observe that, in this case, the number of edges in the Tanner graph relative to H_S is the same as in the Tanner graph relative to H_R; therefore, the decoding complexity is practically unchanged.
The spreading criterion we adopt corresponds to spreading the ith column of H_R into s columns of H_S (those at positions i, i + n, ..., i + (s − 1)n) whose supports are contained in the support of the original column.
In other terms, we spread the 1 symbols in the ith column of H_R among its corresponding s columns in H_S. If we denote as d_i the Hamming weight of the ith column of H_R, the Hamming weights of the corresponding set of columns in the spread matrix must take values d_{i,l}, l = 1, ..., s, such that d_{i,1} + d_{i,2} + ... + d_{i,s} = d_i, where d_{i,l} denotes the Hamming weight of the lth column in the set. As for the values d_{i,l}, they are chosen in a nearly uniform way, that is, d_{i,l} ≈ d_i/s. More precisely, we fix d_{i,l} = d_i/s when s divides d_i; otherwise, the values may be slightly different in order to ensure that they sum up to d_i.
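The nearly uniform spreading just described can be sketched as follows (a toy construction of ours on an arbitrary 4 × 4 matrix standing in for an RPCM; `np.array_split` yields weights that differ by at most one):

```python
# A sketch of the column-spreading rule of Section 2.4: the support of each
# column of H_R is partitioned, nearly uniformly, among s blocks whose
# horizontal concatenation forms the SPCM.
import numpy as np

def spread(H_R, s):
    """Return H_S = [H_1 | ... | H_s] with H_1 + ... + H_s = H_R (mod 2)."""
    r, n = H_R.shape
    blocks = [np.zeros((r, n), dtype=np.uint8) for _ in range(s)]
    for i in range(n):
        support = np.flatnonzero(H_R[:, i])          # rows holding a 1
        # np.array_split gives block weights differing by at most one
        for l, rows in enumerate(np.array_split(support, s)):
            blocks[l][rows, i] = 1
    return np.hstack(blocks)

# Arbitrary small matrix, standing in for an RPCM
H_R = np.array([[1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 1],
                [1, 1, 1, 0]], dtype=np.uint8)
H_S = spread(H_R, s=2)

# The blocks sum back into H_R, so an s-times repeated codeword satisfies H_S
assert np.array_equal((H_S[:, :4] + H_S[:, 4:]) % 2, H_R)
```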
It is important to observe that the original code and its transmission rate are not altered by the spreading: the spread code is used only inside the decoder, with the aim of better decoding the original code.
It should also be noted that the proposed procedure for spreading the parity-check matrix represents a special case of column splitting, presented in [18]; the target of column splitting, however, is to design new finite-geometry LDPC codes, while our aim is to use the spread code to improve decoding of the original code.
2.5. Adaptive Spread Parity-Check Matrix
Inspired by the adaptive belief propagation approach [4], we have also implemented an adaptive version of our spread parity-check matrix, which evolves during the decoding iterations on the basis of the bit reliability values.
Adaptation of the SPCM consists in dynamically changing the "spreading profile", that is, the set of values d_{i,l}, in such a way as to produce unitary weight columns, in the spread Tanner graph, corresponding to the least reliable bits.
This only implies rearranging some edges of the Tanner graph (i.e., changing the variable nodes these edges are connected to); thus, it does not require sums of rows and does not alter the total number of 1 symbols in the parity-check matrix, which remains sparse. For these reasons, the adaptation technique we propose has very low complexity, contrary to that used in adaptive belief propagation, which is based on Gaussian elimination.
For adapting the SPCM at each iteration, we propose the following criterion: the 1 symbols in each column of the RPCM corresponding to the r least reliable bits are spread into a weight-1 column in each block of the SPCM, except the last block, in which a column with weight greater than one can appear (due to the fact that it must contain all the remaining 1 symbols present in the RPCM column). In formulae,

d_{i,l} = 1, l = 1, ..., s − 1; d_{i,s} = d_i − (s − 1). (9)

For the remaining bits, instead, we again adopt a uniform spreading profile, that is, d_{i,l} ≈ d_i/s. The spreading profile is updated at the end of each decoding iteration, and the new SPCM, for the subsequent step, is obtained from the RPCM.
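In code form, the profile for a least-reliable bit can be sketched as follows (a hypothetical helper of ours, reflecting the criterion just stated):

```python
# Spreading profile for one of the r least reliable bits: weight-1 columns in
# the first s - 1 blocks, and the remaining 1 symbols in the last block
# (assumes the column weight d_i is at least s - 1).
def adaptive_profile(d_i, s):
    """d_i: column weight in the RPCM; s: number of blocks in the SPCM."""
    return [1] * (s - 1) + [d_i - (s - 1)]

assert adaptive_profile(6, 3) == [1, 1, 4]   # the weights sum back to d_i
assert sum(adaptive_profile(5, 2)) == 5
```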
In the following, we will denote as ASPCM the adaptive version of the SPCM.
2.6. Application to Reed-Solomon Codes
Reed-Solomon codes are nonbinary BCH codes, included in many telecommunication standards and in a huge variety of applications. Each RS code is defined over the finite field GF(2^q), with q a positive integer, and has length N = 2^q − 1, dimension K, and redundancy R = N − K. Its correction capability is t = ⌊R/2⌋ [19]. Shortened RS codes are often used to adapt the code length to the values required in practical applications.
Given a primitive polynomial, p(x), with degree q, and one of its roots, α, the latter is a primitive element of GF(2^q) and, hence, any other nonzero element of the field can be expressed as a power of α. The parity-check matrix of an RS code is an R × N matrix defined over GF(2^q):

H_{RS} = [α^{e_{j,i}}], j = 1, ..., R, i = 1, ..., N, (10)

where each exponent e_{j,i} represents the power α must be raised to for obtaining the corresponding element.
Although defined over GF(2^q), RS codes can be seen as binary codes by using their binary expansions, which can be obtained on the basis of the primitive polynomial adopted. In order to derive a valid parity-check matrix for the binary expansion of an RS code, we can use the companion matrix, C, of the primitive polynomial. For a q-degree polynomial, the companion matrix is a q × q matrix whose eigenvalues coincide with the roots of the polynomial. So, in the case of a monic binary polynomial p(x) = x^q + p_{q−1}x^{q−1} + ... + p_1 x + p_0, the companion matrix assumes the form

C =
| 0 0 ... 0 p_0     |
| 1 0 ... 0 p_1     |
| 0 1 ... 0 p_2     |
| .             .   |
| 0 0 ... 1 p_{q−1} |   (11)

When p(x) is a primitive polynomial, we have p_0 = 1 and C is a full-rank matrix.
A valid parity-check matrix for the binary expansion of an RS code can be obtained by replacing each element α^{e_{j,i}} of (10) with the corresponding power of the companion matrix:

H = [C^{e_{j,i}}], j = 1, ..., R, i = 1, ..., N. (12)

Matrix H expressed by (12) is an r × n binary matrix (with r = qR and n = qN) that can be used for decoding the binary expansion of the RS code. We will denote it as the "binary expansion parity-check matrix" (BXPCM) in the following.
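The companion-matrix construction can be illustrated as follows (a sketch for q = 3 with the primitive polynomial p(x) = x^3 + x + 1, an example of our choosing):

```python
# Companion matrix of a monic binary polynomial, as used for the BXPCM:
# each power alpha^e in the RS parity-check matrix is replaced by C^e.
import numpy as np

def companion(p_coeffs):
    """Companion matrix of p(x) = x^q + p_{q-1}x^{q-1} + ... + p_0,
    given the coefficients p_0, ..., p_{q-1}."""
    q = len(p_coeffs)
    C = np.zeros((q, q), dtype=np.uint8)
    C[1:, :-1] = np.eye(q - 1, dtype=np.uint8)   # subdiagonal of ones
    C[:, -1] = p_coeffs                          # last column: p_0, ..., p_{q-1}
    return C

# p(x) = x^3 + x + 1, a primitive polynomial over GF(2)
C = companion([1, 1, 0])

# alpha has multiplicative order 2^q - 1 = 7, so C^7 = I (mod 2)
M = np.eye(3, dtype=np.uint8)
for _ in range(7):
    M = (M @ C) % 2
assert np.array_equal(M, np.eye(3, dtype=np.uint8))
```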
In order to apply the proposed soft-decision decoding technique also to RS codes, we adopt the BXPCM in place of the EPCM used for binary cyclic codes. However, due to the lack of cyclic structure in the BXPCM, the density reduction algorithm must be slightly changed. The BXPCM, in fact, is not a circulant matrix; so, the number of overlaps between pairs of rows cannot be obtained by means of the periodic autocorrelation function, but must be calculated by directly resorting to the dot product between pairs of rows. A single row is replaced every time a sparser version of the same row is found, since it is not possible to rebuild the whole parity-check matrix through cyclically shifted versions of a single row.
Finally, the SPCM is derived from the RPCM by "spreading" its 1 symbols into s blocks, each with the size of the RPCM, in such a way as to minimize the number of short cycles in the associated Tanner graph.
3. The Decoding Algorithm
We consider the sum-product algorithm with log-likelihood ratios (LLR-SPA) [20], which is very commonly used for decoding LDPC codes. This algorithm is well known, and its main steps are recalled next only for the sake of convenience.
Decoding is based on the exchange of messages between variable and check nodes: information on the reliability of the ith received bit c_{i} is sent as a message from the variable node v_{i} to the check node z_{j}, then elaborated, and sent back as a message from the check node z_{j} to the variable node v_{i}.
The algorithm starts by initializing both sets of messages; that is, for each pair (i, j) for which an edge exists between nodes v_{i} and z_{j}, we set

L(q_{ij}) = L(c_i), L(r_{ji}) = 0, (13)

where L(c_i) = ln[P(c_i = 0 | y_i)/P(c_i = 1 | y_i)] is the initial reliability value based on the channel measurement information, and P(c_i = x | y_i), x ∈ {0, 1}, is the probability that the codeword bit c_i at position i is equal to x, given the received signal y_i at the channel output.
After initialization, the LLR-SPA algorithm starts iterating. During each iteration, the messages sent from the check nodes to the variable nodes are calculated by means of the following formula:

L(r_{ji}) = 2 tanh^{−1} ( ∏_{i′ ∈ A(j)∖i} tanh( L(q_{i′j})/2 ) ), (14)

where A(j)∖i represents the set of variable nodes connected to the check node z_{j}, with the exclusion of node v_{i}.
Messages sent from the variable nodes to the check nodes are then calculated as follows:

L(q_{ij}) = L(c_i) + ∑_{j′ ∈ B(i)∖j} L(r_{j′i}), (15)

where B(i)∖j represents the set of check nodes connected to the variable node v_{i}, with the exclusion of node z_{j}. In addition, the following quantity is evaluated:

L(Q_i) = L(c_i) + ∑_{j ∈ B(i)} L(r_{ji}), (16)

where B(i) is the whole set of check nodes connected to v_{i}. Equation (16) is used to obtain an estimate (ĉ) of the received codeword (c) as follows:

ĉ_i = 1 if L(Q_i) < 0, ĉ_i = 0 otherwise. (17)

The estimated codeword is then multiplied by the parity-check matrix associated with the Tanner graph. If the parity check is successful, the decoding process stops and outputs the estimated codeword as its result. Otherwise, the algorithm reiterates using the updated messages. In this case, a further verification is made on the number of decoding iterations: when a maximum number of iterations is reached, the decoder stops the estimation efforts and outputs the estimated codeword as its result. In this case, however, decoding is unsuccessful and the error is detected.
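The steps above can be sketched in a compact (and deliberately unoptimized) implementation; the parity-check matrix and LLR values are toy examples of our choosing, not a code from the paper:

```python
# A sketch of the LLR-SPA recalled above, with a flooding schedule.
import numpy as np

def llr_spa(H, llr_ch, max_iters=100):
    m, n = H.shape
    edges = [(j, i) for j in range(m) for i in range(n) if H[j, i]]
    q = {(j, i): llr_ch[i] for (j, i) in edges}   # initialization, as in (13)
    r = {(j, i): 0.0 for (j, i) in edges}
    c_hat = (llr_ch < 0).astype(np.uint8)
    for _ in range(max_iters):
        for (j, i) in edges:                      # check-node update (14)
            prod = 1.0
            for i2 in range(n):
                if H[j, i2] and i2 != i:
                    prod *= np.tanh(q[(j, i2)] / 2.0)
            r[(j, i)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        Q = np.array([llr_ch[i] + sum(r[(j, i)] for j in range(m) if H[j, i])
                      for i in range(n)])         # a posteriori LLRs (16)
        c_hat = (Q < 0).astype(np.uint8)          # hard decision (17)
        if not np.any((H @ c_hat) % 2):           # syndrome check
            return c_hat, True
        for (j, i) in edges:                      # variable-node update (15)
            q[(j, i)] = Q[i] - r[(j, i)]
    return c_hat, False

# Toy (7, 4) Hamming parity-check matrix and channel LLRs with one unreliable
# position (bit 3); the all-zero codeword was "transmitted"
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
llr_ch = np.array([1.5, 1.5, 1.5, -0.2, 1.5, 1.5, 1.5])
decoded, ok = llr_spa(H, llr_ch)
assert ok and not decoded.any()   # converges to the all-zero codeword
```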
3.1. Adaptation to the Spread Code
In order to take advantage of spread parity-check matrices, we adopt a modified version of the standard BP decoding algorithm.
The demodulator and demapper block produces, for each received bit, the value L(c_i) used to initialize the decoding algorithm (see (13)). Then, the vector containing the L(c_i) values is repeated s times to form the new vector of values valid for the spread code. This is used to initialize the LLR-SPA algorithm, which works on the spread parity-check matrix; the algorithm starts iterating and, at each iteration, produces updated versions of the extrinsic and a posteriori messages. While the former are used as inputs for the subsequent iteration (if needed), the latter represent the decoder output, and serve to obtain an estimated codeword that is subject to the parity-check test. In addition, this version of the algorithm also produces a posteriori messages for the original codeword, by combining the messages of the s replicas of each bit:

L(Q_i)_c = ∑_{l=1}^{s} L(Q_{i+(l−1)n}). (18)
Two estimated codewords, ĉ_S and ĉ, are derived on the basis of the signs of the a posteriori messages for the spread code and for the original code, respectively, and the corresponding parity-check tests are executed (based on H_S and H_R). The test on ĉ_S is passed if and only if the test is passed for all submatrices, while the test on ĉ is passed if the test is passed by the sum of the a posteriori messages for all the replicas of each bit. When both tests are successful, the decoder stops iterating and outputs ĉ as the estimated codeword; otherwise, decoding continues until a maximum number of iterations is reached. This double parity-check test permits a significant reduction of the number of undetected errors (decoder failures), as we have verified through numerical simulations.
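The interface between the two codes can be sketched as follows (variable names and placeholder values are ours; the combining rule simply sums the replicas' a posteriori messages, as described above):

```python
# A sketch of the spread-code decoder interface of Section 3.1: channel LLRs
# are replicated s times for the SPCM and, after decoding, the a posteriori
# messages of the s replicas of each bit are summed to obtain the decision
# metric for the original code.
import numpy as np

s, n = 3, 4
llr_ch = np.array([2.0, -1.0, 0.5, 3.0])   # demapper output L(c_i)
llr_spread = np.tile(llr_ch, s)            # initialization vector for the SPCM

# Placeholder for the a posteriori LLRs produced on the spread code
Q_spread = np.tile(llr_ch, s)

# Sum the messages of the s replicas of each original bit
Q_orig = Q_spread.reshape(s, n).sum(axis=0)
c_hat = (Q_orig < 0).astype(np.uint8)      # hard decision on the original code
assert np.array_equal(c_hat, np.array([0, 1, 0, 0], dtype=np.uint8))
```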
4. Numerical Simulations
In order to assess the benefits of the proposed approach, we have simulated transmission over the additive white Gaussian noise (AWGN) channel, in conjunction with binary phase shift keying (BPSK) modulation for different BCH and RS codes. In all simulations, we have used a maximum number of iterations equal to 100.
4.1. Short BCH Codes
We consider two examples of short BCH codes with different length and dimension, namely, (n, k) = (63, 57) and (n, k) = (127, 71).
For the first code, the density reduction algorithm is unsuccessful, so we apply the spreading technique directly to the extended parity-check matrix. For the (127, 71) BCH code, instead, the density reduction algorithm is successful: starting from h_1 with Hamming weight 48, a vector h_2 is obtained with Hamming weight 32, thus reducing by 1/3 the parity-check matrix density. Hence, spreading has been applied to the reduced parity-check matrix. The main features of the considered BCH codes are summarized in Table 1. The number of length-4 cycles has been calculated exhaustively by counting the overlapping 1's between each pair of rows (or columns).

We notice that, for the (63, 57) BCH code, the spread parity-check matrix has a number of length-4 cycles higher than that of the classic parity-check matrix. This is because such a code is characterized by a very small r, which reflects in a matrix (2) with the smallest number of length-4 cycles. Figures 2 and 3 show the bit error rate (BER) and frame error rate (FER) as functions of the signal-to-noise ratio E_b/N_0. The curves have been obtained, through numerical simulations, for the considered codes when decoding with the classic parity-check matrix (PCM), the reduced parity-check matrix (RPCM, in Figure 3 only, for the reasons explained above), the extended parity-check matrix (EPCM), and the spread parity-check matrix (SPCM). The figures also report curves for the union bound (UB) [21], which can be used as a reference for the error rate under ideal (maximum likelihood) decoding.
We observe from Figure 2 that, for the (63, 57) BCH code, the new technique outperforms those based on the classic PCM and the EPCM, with a gain of more than 1 dB over the PCM and more than 1.5 dB over the EPCM. Furthermore, the curves obtained through the SPCM approach practically overlay the union bound, so the SPCM decoder achieves almost ideal performance.
In the case of the (127, 71) BCH code, we have also reported the performance achieved by the ASPCM, which offers the best result, at least in the region of explored BER and FER values, with a gain of more than 2 dB over the PCM-based algorithm and more than 3 dB over the EPCM approach.
However, for the (127, 71) BCH code, the curves are rather distant from the union bound, showing that further coding gain could, in principle, be achieved. Actually, techniques based on adaptive belief propagation can show better performance for the same code parameters. Figure 3 also reports the BER and FER curves obtained by using the software made publicly available in [22], showing that adaptive belief propagation can achieve about 2 dB of further gain, though still not reaching the union bound. As a drawback, however, such an approach exhibits a much higher complexity than the one proposed here.
4.2. CDMA2000 Reed-Solomon Codes
As an example of the application of the proposed technique to RS codes, we have considered the codes included by the "third-generation partnership project 2" (3GPP2) in the CDMA2000 specification for broadcast services in high-rate packet data systems [2].
The CDMA2000 standard adopts systematic RS codes defined over GF(256) with the following choices of the parameters (N, K): (16, 12), (16, 13), (16, 14), (32, 24), (32, 26), and (32, 28).
We have focused on the (16, 12) and (32, 28) RS codes, which are characterized by parity-check matrices over GF(256), reported in [2] and expressed, in (19), through the exponents of α, where a distinguished symbol represents the null element of the field.
From (19), the BXPCMs for the (16, 12) and (32, 28) RS codes can be easily obtained, as explained in Section 2.6, in the form of a 32 × 128 and a 32 × 256 binary matrix, respectively. The density reduction algorithm has been applied to the BXPCMs, thus obtaining two RPCMs with a reduced number of 1 symbols. Finally, the RPCMs have been used as the starting point for the adaptive spreading algorithm. The features of the parity-check matrices for the considered RS codes are summarized in Table 2.

We observe that the density reduction algorithm is able to produce, in the RPCMs, a density reduction of about 6% for the (16, 12) RS code and 11% for the (32, 28) RS code, with respect to the corresponding BXPCMs. This reflects in a lower number of short cycles in the associated Tanner graphs and in more favorable performance, as shown in Figures 4 and 5. The ASPCM has a further reduced density of 1 symbols and, jointly with the spread version of the decoding algorithm, it is able to ensure the best performance. In particular, the BER curve in Figure 4(a), referring to the (16, 12) RS code, exhibits a coding gain of more than 1 dB due to the adoption of the proposed approach, based on spread matrices, in comparison with the more conventional BXPCM approach. Instead, the coding gain for the (32, 28) RS code is less than 1 dB (see Figure 5(a)).
In comparison with the algorithm based on adaptive belief propagation, the approach based on the ASPCM exhibits, for the considered codes, a loss of more than 2 dB. This is not surprising, as the method proposed in [4] is significantly more involved than the approaches we have proposed.
4.3. DVB-S2 BCH Codes
The second revision of the European standard for satellite digital video broadcasting (DVB-S2) adopts a forward error-correction (FEC) scheme based on the concatenation of BCH and LDPC codes [1]. The data stream is divided into k_bch-bit frames that are used as inputs of a systematic BCH encoder. This produces n_bch-bit frames by appending r_bch redundancy bits to the input frame. According to the standard, r_bch can assume the following values: 128, 160, and 192 for normal frames, and 168 for short frames. The output of the outer BCH encoder is given as input to an inner systematic LDPC encoder that produces n_ldpc-bit frames by appending further redundancy bits to each BCH-encoded frame.
The interest in applying iterative soft-decision decoding to the BCH code, too, lies in the possibility of uniforming its decoding procedure with that of the inner LDPC code, with expected hardware and software advantages. The result should be a significant reduction of the complexity that, even with hard decoding, is a critical issue for BCH codes of such large sizes. In addition, a performance improvement should also be expected, although we show that it is not simple to achieve with the method proposed in the previous sections.
In our simulations, we consider the short frame format, characterized by n_ldpc = 16200, but the proposed techniques can also be applied to normal frames, with n_ldpc = 64800. The standard FEC parameters for short frames, together with encoding details, are reported in [1] and are omitted here for the sake of brevity.
The BCH codes used for short DVB-S2 frames are able to correct t = 12 errors and have code lengths ranging between 3240 and 14400 bits. Actually, the standard adopts shortened BCH codes, all defined by the same generator polynomial, which can be obtained as g(x) = g_1(x)g_2(x)...g_12(x); the structure of the polynomials g_i(x) is given in [1]. Each factor g_i(x) has degree 14 and can be seen as the generator polynomial of a Hamming code with length 2^14 − 1 = 16383 and redundancy 14. The corresponding parity-check polynomial can therefore be obtained as h_i(x) = (x^16383 + 1)/g_i(x).
Each BCH code can be seen as a shortened version of a "mother" BCH code with length n = 16383, redundancy r = 168, and dimension k = 16215. In fact, it can be easily shown that g(x) divides x^16383 + 1, and h(x) can be derived as follows:

h(x) = (x^16383 + 1)/g(x). (20)

Once h(x) has been obtained, the first row of H_E, h_1, becomes available. Starting from this dense vector, it is possible to execute 7 iterations of the reduction algorithm described in Section 2.3, thus obtaining a new vector, h_8, with reduced Hamming weight. It must be said, however, that in the present case, which considers a long code with very high rate, the density reduction algorithm is not able to immediately produce an excellent result: the reduced parity-check matrix has a density that is only 2.4% smaller than that of the extended parity-check matrix.
For each iteration of the algorithm, the shift v has taken the following values: 213318, 215694, 106013, 171879, 40909, 85749, 761. When multiple choices for v were possible (due to the fact that the autocorrelation function can assume its maximum outofphase value for more than one shift v), a random selection criterion has been adopted, and the experiment has been repeated several times in order to find the best sequence among the selected ones.
The reduced vector has been used as the first row of the RPCM for the cyclic mother code. A valid parity-check matrix for each shortened code derived from the mother code can be obtained by selecting an appropriate number of the first rows and the first columns of the RPCM so found. The shortened RPCM is then used as the starting point for the spreading algorithm that produces the SPCM.
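On a toy scale, the construction of a parity-check matrix from cyclic shifts of a first row, followed by the shortening step, might look as follows (we assume, as stated above, that shortening simply retains the leading rows and columns; the row and the dimensions are illustrative):

```python
def cyclic_pcm(row, r):
    """Build an r x n parity-check matrix whose rows are successive cyclic shifts of row."""
    n = len(row)
    return [row[-i % n:] + row[:-i % n] for i in range(r)]

def shorten(H, rows, cols):
    """Keep the first `rows` rows and the first `cols` columns of H."""
    return [h[:cols] for h in H[:rows]]

H = cyclic_pcm([1, 0, 1, 1, 0, 0, 0], 3)   # toy reduced first row, 3 cyclic shifts
print(H[1])                                 # [0, 1, 0, 1, 1, 0, 0]
Hs = shorten(H, 2, 5)                       # parity-check matrix of a shortened code
print(Hs)                                   # [[1, 0, 1, 1, 0], [0, 1, 0, 1, 1]]
```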
We have considered one of the shortened codes and applied the spreading algorithm to its RPCM. The results of numerical simulations based on the spreading technique are shown in Figure 6. Actually, performance is not particularly good: even adopting the ASPCM, which outperforms the SPCM, the simulated curves are worse than those referring to a hard-decision decoder able to correct t = 12 errors. However, we conjecture that this unsatisfactory result is due to the difficulty of reducing the weight of the parity-check matrix when starting from such a dense one. Also in this case, the adoption of adaptive belief propagation achieves better performance (with more than 2 dB of further gain) at the cost of increased complexity.
In Figure 6, we also show the performance of an LDPC code having the same parameters as the BCH code. It has been designed through the so-called LCO technique [23], which avoids the presence of length-4 cycles but, apart from this, does not further optimize the parity-check matrix. Hence, wide margins for improving performance should exist, provided that more effective representations of the parity-check matrix than those considered so far can be found. Work is in progress in this direction.
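For reference, the absence of length-4 cycles (the property the LCO design enforces) is easy to verify on a parity-check matrix: a length-4 cycle exists exactly when two columns share 1 symbols in two or more rows. A minimal check (our own helper, quadratic in the number of columns) is:

```python
from itertools import combinations

def has_length4_cycle(H):
    """True if two columns of H (list of 0/1 rows) share 1s in at least two rows."""
    columns = list(zip(*H))
    return any(sum(a & b for a, b in zip(c1, c2)) >= 2
               for c1, c2 in combinations(columns, 2))

print(has_length4_cycle([[1, 1, 0], [1, 1, 0], [0, 0, 1]]))  # True: 4-cycle present
print(has_length4_cycle([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # False
```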
5. Conclusion
We have studied the application of some new iterative soft-decision decoding algorithms based on belief propagation to BCH and RS codes. The essence of the method is the possibility of overcoming the drawbacks of the parity-check matrix of these codes, namely, the high density of 1 symbols and the presence of short cycles in the Tanner graph, which prevent effective application of the BP decoding algorithm. The naive idea of matrix extension, already proposed in the literature, has been refined through the introduction of additional "reduction" and "spreading" operations, the latter possibly in an adaptive implementation.
The procedure is very simple and quite suitable for application in multimedia transmissions. When applied to short binary codes, like those required in the presence of stringent constraints on the decoding delay, the method achieves improved performance with respect to classic parity-check matrices. The proposed approach is still outperformed by adaptive belief propagation, particularly in the case of very long and high-rate codes. Its complexity, however, is always lower.
References
 ETSI EN 302 307 v1.1.2, "Digital Video Broadcasting (DVB); Second generation framing structure, channel coding and modulation system for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications," June 2006.
 3GPP2 C.S0054-A v1.0, "CDMA2000 High Rate Broadcast-Multicast Packet Data Air Interface Specification," February 2006.
 D. Chase, "Class of algorithms for decoding block codes with channel measurement information," IEEE Transactions on Information Theory, vol. 18, no. 1, pp. 170–182, 1972.
 J. Jiang and K. R. Narayanan, "Iterative soft-input soft-output decoding of Reed-Solomon codes by adapting the parity-check matrix," IEEE Transactions on Information Theory, vol. 52, no. 8, pp. 3746–3756, 2006.
 T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 599–618, 2001.
 T. R. Halford, A. J. Grant, and K. M. Chugg, "Which codes have 4-cycle-free Tanner graphs?," IEEE Transactions on Information Theory, vol. 52, no. 9, pp. 4219–4223, 2006.
 J. S. Yedidia, J. Chen, and M. P. C. Fossorier, "Generating code representations suitable for belief propagation decoding," Mitsubishi Electric Research Laboratories, Cambridge, Mass, USA, September 2002.
 J. S. Yedidia, J. Chen, and M. P. C. Fossorier, "Representing codes for belief propagation decoding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '03), p. 176, Yokohama, Japan, June-July 2003.
 S. Sankaranarayanan and B. Vasic, "Iterative decoding of linear block codes: a parity-check orthogonalization approach," IEEE Transactions on Information Theory, vol. 51, no. 9, pp. 3347–3353, 2005.
 B. Kamali and A. H. Aghvami, "Belief propagation decoding of Reed-Solomon codes; a bit-level soft decision decoding algorithm," IEEE Transactions on Broadcasting, vol. 51, no. 1, pp. 106–113, 2005.
 A. Kothiyal and O. Y. Takeshita, "A comparison of adaptive belief propagation and the best graph algorithm for the decoding of linear block codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '05), pp. 724–728, Adelaide, Australia, September 2005.
 M. El-Khamy and R. J. McEliece, "Iterative algebraic soft-decision list decoding of Reed-Solomon codes," IEEE Journal on Selected Areas in Communications, vol. 24, no. 3, pp. 481–490, 2006.
 R. H. Morelos-Zaragoza, "Architectural issues of soft-decision iterative decoders for binary cyclic codes," Sony ATL, Atlanta, Ga, USA, August 2000.
 M. Baldi, G. Cancellieri, and F. Chiaraluce, "Iterative soft-decision decoding of binary cyclic codes based on spread parity-check matrices," in Proceedings of the 15th International Conference on Software, Telecommunications and Computer Networks (SoftCOM '07), Dubrovnik, Croatia, September 2007, Paper 7069.
 M. Baldi, G. Cancellieri, and F. Chiaraluce, "Iterative soft-decision decoding of binary cyclic codes," submitted to Journal of Communications Software and Systems.
 L. Zhang, V. O. K. Li, and Z. Cao, "Short BCH codes for wireless multimedia data," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '02), vol. 1, pp. 220–222, Orlando, FL, USA, March 2002.
 S. B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall, Englewood Cliffs, NJ, USA, 1994.
 Y. Kou, S. Lin, and M. P. C. Fossorier, "Low-density parity-check codes based on finite geometries: a rediscovery and new results," IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2711–2736, 2001.
 S. B. Wicker and V. K. Bhargava, Eds., Reed-Solomon Codes and Their Applications, Wiley-IEEE Press, Piscataway, NJ, USA, 1999.
 J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 429–445, 1996.
 R. H. Morelos-Zaragoza, The Art of Error Correcting Coding, John Wiley & Sons, New York, NY, USA, 2002.
 J. Jiang, "Software simulator for the adaptive iterative RS decoding algorithm," http://www.ece.tamu.edu/~jjiang.
 M. Baldi and F. Chiaraluce, "On the design of punctured low density parity check codes for variable rate systems," Journal of Communications Software and Systems, vol. 1, no. 2, pp. 88–100, 2005.
Copyright
Copyright © 2008 Marco Baldi and Franco Chiaraluce. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.