Abstract
Two iterative decoding algorithms for 3D product block codes (3D-PBC) based on genetic algorithms (GAs) are presented. The first algorithm uses the Chase-Pyndiah SISO, and the second uses the list-based SISO decoding algorithm (LBDA) based on order reprocessing. We applied these algorithms over the AWGN channel to symmetric 3D-PBC constructed from BCH codes. The simulation results show that the first algorithm outperforms the Chase-Pyndiah one and is only 1.38 dB away from the Shannon capacity limit for BCH (31, 21, 5)3 and 1.4 dB for BCH (16, 11, 4)3. Simulations of the LBDA-based GA on BCH (16, 11, 4)3 show that it outperforms the first algorithm and is about 1.33 dB from the Shannon limit. Furthermore, these algorithms can be applied to any arbitrary 3D binary product block code, without the need for a hard-in hard-out decoder. We also show that the two proposed decoders are less complex than both the Chase-Pyndiah algorithm, for codes with large correction capacity, and the LBDA, for large values of its order parameter. These features make decoders based on genetic algorithms efficient and attractive.
1. Introduction
Among the codes proposed in the history of error correction, some achieve performance very close to the Shannon limit, such as turbo codes [1] and LDPC codes [2]. Nevertheless, this remarkable reduction of the BER comes at the expense of decoder complexity. The current challenge for researchers in this field is to find a compromise between performance and decoding complexity. Thus, several works on the optimization of decoding algorithms have emerged, in particular those associated with product codes. These codes were first introduced in 1954 by Elias [3]. In 1981 and 1983, hard-in hard-out (HIHO) iterative decoding methods for these codes were described by Tanner [4] and by Lin and Costello [5], respectively. In 1994, a soft-in soft-out (SISO) iterative decoding of product block codes (PBC) was proposed by Pyndiah et al. [6], using the Chase algorithm as the elementary decoder [7]. This algorithm does not work alone, but together with an HIHO decoder, which is not always easy to find for some codes, such as quadratic residue (QR) codes. Later, in 2004, an enhanced SISO iterative decoding algorithm for PBC, based on order reprocessing decoding, was developed by Martin et al. [8].
Recently, researchers in the field of channel coding have drawn on artificial intelligence techniques to develop very good decoders for linear block codes. Among the first works in this direction are the decoding of linear block codes using the A* algorithm [9], genetic algorithms [10], and neural networks [11].
In this work, we are interested in decoders based on genetic algorithms (GAD) [10] applied to 3D product block codes (3D-PBC). It was shown in [12] that these decoders, applied to BCH codes, outperform the Chase-2 algorithm and exhibit lower complexity for BCH codes with large block lengths. We note that their performance can be improved further by optimizing parameters such as the population size and the number of generations.
In this paper, which continues the work of [13], we introduce and study two iterative decoding algorithms, based on GAD, for an arbitrary 3D binary product block code. The extrinsic information is computed in the first proposed algorithm according to the Chase-Pyndiah formulas [6] and in the second according to the list-based SISO decoding algorithm (LBDA) [8]. We also compare the complexity of the proposed algorithms with that of the Chase-Pyndiah and LBDA algorithms.
This paper is organized as follows. Section 2 defines the 3D-PBC code. In Section 3, we explain the elementary decoding based on GAD. The presentation and complexity study of our iterative decoding algorithms using genetic algorithms (IGAD) are given in Section 4. Section 5 illustrates, through simulations, the IGAD performances and the effect of some parameters on them; it also compares the performances of the two proposed algorithms. Finally, Section 6 concludes and indicates how the performances of our decoders can be improved further.
2. 3D-Product Block Code (3D-PBC)
Product codes (or iterated codes) are a particular case of serially concatenated codes. They make it possible to construct codes of great length by concatenating two or more arbitrary block codes of short length. In our case, we considered two symmetric 3D-PBC, (16, 11, 4)3 and (31, 21, 5)3, each built from three identical BCH codes.
Let C1(n1, k1, d1), C2(n2, k2, d2), and C3(n3, k3, d3) be three linear block codes. We encode an information block, using the 3D-PBC shown in Figure 1, by (1) filling a cube of k1 rows, k2 columns, and k3 in depth with information bits; (2) coding the rows (the cube contains k3 lateral planes, each composed of k1 rows) using code C2; the check bits are placed on the right, and we obtain a new cube of k1 × n2 × k3 bits; (3) coding the columns of the cube obtained in the previous step using code C1, which means that the row check bits are also encoded (the previous cube contains k3 transverse planes, each composed of n2 columns); the check bits are placed at the bottom of the cube obtained in step 2, and we get a new cube of n1 × n2 × k3 bits; (4) coding, finally, the cube obtained in step 3 from the front to the back, that is, coding the depth vectors, using code C3 (the previous cube consists of n1 horizontal planes, each containing n2 columns); the check bits are placed at the back. The last cube, which has n1 × n2 × n3 bits, is the codeword.
By reasoning similar to that in [14], one can show that the parameters of the 3D-PBC are (i) length: n = n1 n2 n3; (ii) dimension: k = k1 k2 k3; (iii) minimum Hamming distance: d = d1 d2 d3; (iv) rate: R = (k1 k2 k3)/(n1 n2 n3) = R1 R2 R3.
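As a quick numeric check of these product-parameter rules (a sketch; the helper name and the (n, k, d) tuple convention are ours), the three component-code parameters simply multiply:

```python
# Hypothetical helper: parameters of a 3D product block code built
# from three component codes given as (n, k, d) tuples.
def pbc3d_params(c1, c2, c3):
    (n1, k1, d1), (n2, k2, d2), (n3, k3, d3) = c1, c2, c3
    n = n1 * n2 * n3          # length
    k = k1 * k2 * k3          # dimension
    d = d1 * d2 * d3          # minimum Hamming distance
    return n, k, d, k / n     # rate

# Symmetric 3D-PBC from BCH(16, 11, 4): a (4096, 1331, 64) code
# of rate about 0.32, the rate quoted in Section 5.
print(pbc3d_params((16, 11, 4), (16, 11, 4), (16, 11, 4)))
```

The same computation on BCH(31, 21, 5) gives a rate of about 0.31, also matching Section 5.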
This shows one of the best advantages of product block codes: building very long block codes with large minimum Hamming distance by concatenating short codes with small minimum Hamming distance.
3. Elementary Decoding of Linear Codes
Let R = (r1, ..., rn) be the received sequence at the decoder input of a binary linear block code C(n, k, d) with generator matrix G.
3.1. Soft-Input Hard-Output Decoder (GAD)
Step 1
Sort the elements of the received vector R in descending order of magnitude; this places the most reliable elements in the first positions, since an AWGN channel is assumed. The vector is then further permuted so that its first k coordinates correspond to linearly independent columns of G. We obtain a permutation π, the permuted vector R' = π(R), and the correspondingly permuted generator matrix G' = π(G).
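Step 1 can be sketched as follows (a sketch under our reading of the step; the function and variable names are ours). After sorting by reliability, a greedy GF(2) elimination keeps the most reliable positions whose columns of G are linearly independent, so that the first k permuted coordinates form an information set:

```python
import numpy as np

def reliability_permutation(r, G):
    # Sort positions by decreasing |r_j| (most reliable first), then
    # greedily keep positions whose columns of G are linearly
    # independent over GF(2); these become the first k coordinates.
    k, n = G.shape
    order = np.argsort(-np.abs(np.asarray(r, dtype=float)))
    pivots = {}                       # pivot row -> reduced column
    chosen, rest = [], []
    for j in order:
        col = G[:, j].astype(np.uint8) % 2
        for p, b in pivots.items():   # eliminate against current basis
            if col[p]:
                col = col ^ b
        nz = np.flatnonzero(col)
        if nz.size and len(chosen) < k:
            pivots[int(nz[0])] = col  # new independent column found
            chosen.append(int(j))
        else:
            rest.append(int(j))
    return np.array(chosen + rest)    # permutation pi as an index array
```

For example, with G = [[1,0,1],[0,1,1]] and r = (0.1, 0.9, 0.5), the permutation puts positions 1 and 2 (reliable and independent) first.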
Step 2
Quantize the first k bits of R' to obtain an information vector, and randomly generate Ni − 1 information vectors of k bits each. Together with the quantized vector, these form the initial population of Ni individuals.
Step 3
Encode the Ni individuals of the current population using G' to obtain Ni codewords. Then compute each individual's fitness, defined as the Euclidean distance between its modulated codeword and R'. Sort the individuals in ascending order of fitness.
Step 4
Copy the first Ne individuals (Ne: elite number) into the next population, which is then completed with offspring generated using reproduction operators: two parents are selected among the best individuals by linear ranking, where each individual is assigned a selection weight decreasing with its rank, and the largest weight is assigned to the fittest (nearest) individual.
Produce the remaining individuals of the next population by crossover and mutation. Let pc and pm be, respectively, the probabilities of crossover and mutation, and let u be a uniformly distributed random value between 0 and 1, drawn at each operation. If u < pc, the two parents are crossed to produce the offspring; otherwise, the offspring is a copy of one parent. Each bit of the offspring is then flipped with probability pm.
Repeat Steps 3 and 4 for Ng generations.
Step 5
The first (fittest) individual of the last generation is the nearest to R'. The decided codeword D is obtained by applying the inverse permutation π−1 to this individual's codeword.
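Putting Steps 2-5 together, the elementary GAD loop can be sketched as follows (a minimal sketch; the parameter names Ni, Ng, Ne, pc, pm, the BPSK mapping 0 → +1 / 1 → −1, and the uniform crossover are our assumptions, not the paper's exact operators):

```python
import numpy as np

rng = np.random.default_rng(0)

def gad_decode(r, G, Ni=30, Ng=20, Ne=2, pc=0.9, pm=0.03):
    """Sketch of the GAD steps above. r: permuted received vector,
    G: permuted (k x n) generator matrix with independent first k
    columns. Returns the decided codeword (before Step 5's inverse
    permutation)."""
    k, n = G.shape
    r = np.asarray(r, dtype=float)
    def fitness(pop):
        cw = pop.dot(G) % 2                     # Step 3: encode each individual
        mod = 1.0 - 2.0 * cw                    # assumed BPSK: 0 -> +1, 1 -> -1
        return np.linalg.norm(mod - r, axis=1)  # Euclidean distance to r
    # Step 2: quantized most-reliable bits plus Ni-1 random individuals.
    pop = rng.integers(0, 2, size=(Ni, k), dtype=np.uint8)
    pop[0] = (r[:k] < 0).astype(np.uint8)
    for _ in range(Ng):                         # Steps 3-4, Ng generations
        pop = pop[np.argsort(fitness(pop))]     # ascending fitness
        nxt = [pop[i].copy() for i in range(Ne)]     # elitism
        w = np.arange(Ni, 0, -1, dtype=float)        # linear ranking:
        w /= w.sum()                                 # best rank -> largest weight
        while len(nxt) < Ni:
            p1, p2 = pop[rng.choice(Ni, size=2, p=w)]
            child = p1.copy()
            if rng.random() < pc:               # uniform crossover (assumed)
                mask = rng.integers(0, 2, size=k, dtype=np.uint8)
                child = np.where(mask, p1, p2).astype(np.uint8)
            child ^= (rng.random(k) < pm).astype(np.uint8)  # mutation
            nxt.append(child)
        pop = np.array(nxt)
    best = pop[np.argmin(fitness(pop))]         # Step 5: fittest individual
    return best.dot(G) % 2
```

Because the elite individuals bypass mutation, the best codeword found so far always survives to the next generation, which is what makes the final argmin meaningful.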
3.2. Soft-Input Soft-Output Decoder
In this section, we present the SO_GAD decoders (soft-output GAD) used as the elementary decoder in our iterative decoding algorithms.
Let D denote the GAD decision for the input sequence R, and let W denote the extrinsic information.
Let C(j) be the competitor codeword of D for the jth bit, defined as the codeword of the last generation whose jth bit differs from that of D and whose Euclidean distance to R is smallest.
Algorithm 1. The algorithm SO_GAD accepts as input R, D, and the coefficient β. This coefficient is optimized, according to the chosen code and the SNR, to enhance the algorithm's performance.
For j = 1 to n do
if the competitor C(j) exists then
compute the extrinsic information w_j from (6)
else
compute w_j from (7), which uses the coefficient β
end if
End for
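Formulas (6) and (7) did not survive extraction above; in the classical Chase-Pyndiah setting, the soft output for bit j is built from the squared Euclidean distances of R to the decision and to the competitor, and the no-competitor case falls back to the scaled decision β·d_j. A sketch under that assumption (all names are ours):

```python
import numpy as np

def extrinsic_chase_pyndiah(r, d, competitors, beta=0.5):
    """Sketch of Algorithm 1, assuming the classical Chase-Pyndiah
    soft-output rule in place of the paper's (6)-(7).
    r: received vector; d: BPSK decision codeword (+/-1 values);
    competitors[j]: BPSK competitor codeword for bit j, or None."""
    r = np.asarray(r, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.empty_like(r)
    dist_d = np.sum((r - d) ** 2)            # distance to the decision
    for j in range(len(r)):
        c = competitors[j]
        if c is not None:
            # competitor found: soft output from the two distances,
            # extrinsic = soft output minus the channel value
            soft = (np.sum((r - np.asarray(c, dtype=float)) ** 2)
                    - dist_d) / 4.0 * d[j]
            w[j] = soft - r[j]
        else:
            # no competitor: fall back to the scaled decision bit
            w[j] = beta * d[j]
    return w
```

For instance, with r = (0.8, −0.2), decision d = (+1, −1), a competitor (−1, −1) for bit 0 and none for bit 1, the rule yields w = (0.0, −0.5) for β = 0.5.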
Algorithm 2. Let i be the LBDA order parameter enhancing the decoding performance [8]. The algorithm SO_GAD accepts as input R, D, and i. This parameter is usually chosen to be small.
For j = 1 to n do
compute the soft output from (8), where S denotes the set of positions for which a competitor exists
End for
For j = 1 to n do
if the competitor C(j) exists then
compute the extrinsic information w_j from (9)
else
compute w_j from the alternative expression of (9), which also involves the set S of positions for which a competitor exists
end if
End for
3.2.1. Decoding
The SO_GAD algorithm uses GAD for decoding the input sequence R. The decision codeword D is the top individual of the last generation, sorted in ascending order of fitness, and the competitor codeword for the jth bit of D, if it exists, is the first member of the last generation whose jth bit differs from d_j.
3.2.2. Extrinsic Information
The decision codeword and the associated competitor codewords are used to calculate the extrinsic information: from formulas (6) and (7) for the first algorithm, and from (8) and (9) for the second.
4. Iterative Decoding Algorithm and Complexity
In this section, we describe the iterative decoding algorithm of PBC based on GAD (IGAD), and then show that IGAD has polynomial time complexity.
Let Cq(nq, kq, dq), q = 1, 2, 3, denote three binary linear block codes of length nq, dimension kq, minimum Hamming distance dq, and generator matrix Gq.
4.1. Iterative Decoding Algorithm
Let R be the received word. Figures 2 and 3 show the iterative decoding schemes of PBC based on GAD for the two proposed algorithms. The following is an outline of the IGADs.
Algorithm 3 (IGAD). The algorithm accepts as input the received cube R, the three component codes, the genetic parameters (population size, number of generations, elite number, and crossover and mutation rates), the number of iterations, and the scaling coefficients. In the first algorithm, we use the coefficients α and β; in the second, we use the LBDA parameter i. The α and β coefficients are optimized by simulation, step by step, for each code. For the second algorithm, we set the scaling coefficient to 0.5.
Step 1
Extrinsic information initialization: the extrinsic information is set to zero before the first iteration. Let Wq be the extrinsic information given to the qth elementary decoder by the other decoders.
Step 2
Row, column, and depth decoding:
While the prescribed number of iterations has not been reached do
Step 2.1.
Decode each column with SO_GAD and estimate, using (6) and (7), the extrinsic information of each vector at the input of the elementary decoder.
Step 2.2. and 2.3.
Repeat Step 2.1 to decode the rows and the depth vectors and to estimate the corresponding extrinsic information. Let D and W be, respectively, the decision cube and the extrinsic-information cube at the output of the depth elementary decoder.
Step 3
Increment the iteration counter and update the input of the elementary decoders with the new extrinsic information.
End While.
Select the codeword decided at the last iteration.
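The loop structure of Steps 1-3 can be sketched as follows (a structural sketch; the names, the cube shape convention, and the additive feedback R + α·W are our assumptions about how the extrinsic information re-enters the elementary decoders):

```python
import numpy as np

def igad(R, so_gad, n_iter=12, alpha=0.5):
    """Structural sketch of IGAD. R: n x n x n received cube for a
    symmetric 3D-PBC; so_gad(vector) -> (decision, extrinsic) is the
    elementary SISO decoder; alpha scales the extrinsic feedback."""
    D = np.sign(R)
    W = np.zeros_like(R)                      # Step 1: extrinsic init
    for _ in range(n_iter):                   # Step 2: iterate
        for axis in (0, 1, 2):                # columns, rows, depths
            Rin = R + alpha * W               # soft input = channel + scaled extrinsic
            W = np.zeros_like(R)
            other = [s for a, s in enumerate(R.shape) if a != axis]
            for idx in np.ndindex(*other):    # every length-n vector along axis
                sl = list(idx)
                sl.insert(axis, slice(None))
                sl = tuple(sl)
                D[sl], W[sl] = so_gad(Rin[sl])
        # Step 3: the depth-pass extrinsic cube feeds the next iteration
    return D                                  # decided cube at the last iteration
```

Each of the three passes decodes all n^2 vectors along one axis of the cube, so one full iteration corresponds to the row, column, and depth decodings of Steps 2.1-2.3.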
Stopping Criterion for the Second Algorithm.
Since the GAD decoder always decides a codeword, our second decoder does not need the NCB (nonconvergent block) processing proposed in [8]; its complexity is thereby reduced.
4.2. Complexity Analysis
In this section, we present and compare the expressions of time complexities of the studied decoders.
4.2.1. IGADs Time Complexity
If we do not take into consideration the step of computing the extrinsic information, the two algorithms have the same time complexity. The GAD algorithm for a linear block code has polynomial time complexity; the corresponding expression is given in [12].
Time Complexity of IGAD1
(i) Time complexity of the extrinsic-information computation: for each decision (row, column, or depth) at the last generation of each iteration, the competitor search has polynomial worst-case time complexity. From (6), the worst-case time complexity of the extrinsic-information calculation (when the competitor exists) at the last generation of each iteration is also polynomial, and so is the total cost of the extrinsic-information computation.
(ii) Total time complexity: at any iteration of IGAD1, each of the three elementary decoders has polynomial time complexity, so the total complexity is polynomial. For the symmetric 3D-PBC (identical component codes), the IGAD1 time complexity simplifies to expression (17).
Time Complexity of IGAD2
(i) Time complexity of the extrinsic-information computation: the maximal number of competitors of each decision is the code length n. So, at the last generation of each iteration, the worst-case time complexity of the first step, given by (8), is polynomial. From (9), the worst-case time complexity of the competitor search, and hence of the second step of the extrinsic-information calculation, is also polynomial.
(ii) Total time complexity: the total complexity in this case follows from (16) and is given by (21). For the symmetric 3D-PBC, the IGAD2 time complexity simplifies accordingly. It is clear from (17) and (21) that IGAD2 is less complex than IGAD1, and their complexities coincide only for particular parameter values.
4.2.2. Chase-Pyndiah and LBDA Algorithms Time Complexities
We show that these algorithms have exponential time complexity. Let C(n, k, d) be a BCH code with error-correction capacity t, and consider the number of test patterns used in both the Chase and OSD (ordered statistic decoding) algorithms; this number grows exponentially with the relevant parameter.
The computation of the Euclidean distance of each codeword has linear complexity in n, so the total time complexity of decoding and computing the fitness of the test patterns is proportional to their number.
At any decoding iteration of the Chase-Pyndiah algorithm, the fitness values must be sorted and, in the worst case, competitors searched among all test patterns; the resulting total time complexity of the Chase-Pyndiah algorithm is given by (24).
Thus, for the codes considered, the time complexity of the two algorithms is exponential.
Note that in the case of the Chase-2 algorithm, the number of test patterns grows exponentially with the correction capacity t.
From (17) and (24), IGAD1 and IGAD2 are less complex than both the Chase-Pyndiah and LBDA algorithms for codes with large correction capacity, for large values of the LBDA parameter, or for codes of great length and low rate.
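To make this comparison concrete (an illustration with an assumed parameter budget, not the paper's exact expressions (17) and (24)): a Chase-style decoder evaluates a test-pattern count that doubles with each unit of correction capacity t, whereas the GA evaluates a fixed budget of roughly Ni × Ng codewords per elementary decoding:

```python
# Illustrative comparison (our numbers): exponential test-pattern
# count for a Chase-style decoder vs. the fixed evaluation budget
# of a genetic-algorithm decoder.
def chase_test_patterns(t):
    return 2 ** t                    # assumed 2^t patterns for capacity t

def ga_evaluations(Ni, Ng):
    return Ni * Ng                   # population size x generations

for t in (2, 5, 10, 15):
    print(f"t={t}: Chase patterns={chase_test_patterns(t)}, "
          f"GA budget={ga_evaluations(48, 100)}")
```

The crossover point where the fixed GA budget undercuts the exponential pattern count is exactly the "large correction capacity" regime invoked above.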
5. Simulation Results
The figures in this section plot the bit error rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/N0) for the symmetric 3D-PBC (16, 11, 4)3 and (31, 21, 5)3. The simulation parameters used in the IGADs are given in Table 1.
5.1. IGAD1 Performances
5.1.1. Scaling Factors Optimization for IGAD1
As the number of iterations increases, the extrinsic information gradually becomes more reliable. To take this effect into account, the scaling factors are used to reduce the impact of the turbo decoder input. It has been shown that these factors depend on the code and on the GAD, so they are optimized step by step for each code. The optimized values of α and β for our algorithm are shown in Table 2. As the scaling factors α and β move away from the optimal values, the decoding performance of the IGAD1 decoder decreases. Figure 4 shows the gain obtained with the optimized values for (16, 11, 4)3, compared to arbitrarily chosen values. The remaining genetic parameters are held fixed.
5.1.2. Effect of Evaluated Codewords Number
Generally, as the number of evaluated codewords increases, the probability of finding the codeword closest to the input sequence becomes higher, which improves the BER performance. The effect of increasing the number of evaluated codewords on the BER, for code (16, 11, 4)3 at the 12th iteration, is presented in Figures 5 and 6. The optimized values remain near-optimal over a large range. The remaining genetic parameters are held fixed for both optimizations.
5.1.3. Cross-Over Rate Effect
Since the crossover rate is one of the important features of a genetic algorithm, an optimization of this probability is necessary. Figure 7 shows the optimized value for the (16, 11, 4)3 3D-PBC, which improves the BER at rather high SNR at the 12th iteration. A value close to 1 means that IGAD1 requires broad exploration and efficient exploitation, but it somewhat increases the algorithm's complexity. Indeed, when the crossover rate is close to 0, the crossover operation occurs only rarely. For this simulation, the other parameters are held fixed.
5.1.4. Mutation Rate Effect
The effect of the mutation rate on IGAD1 for the BCH (16, 11, 4)3 3D-PBC is depicted in Figure 8, which shows the optimal value for the BER at high SNR at the 12th iteration. One reason this optimal value is close to 0 may be the stability of members in the vicinity of optima at low mutation rates. The other parameters are held fixed.
5.1.5. Code Rate Effect
Figure 9 shows the improvement/degradation of the BER performance of IGAD1, at the 12th and 15th iterations respectively, when the code dimension (or code rate) is decreased/increased. The rate 0.31 of (31, 21, 5)3 is less than that of (16, 11, 4)3, which equals 0.32. This explains the better performance of the first 3D-PBC code in the considered Eb/N0 range. In this simulation, we adopted the optimal parameter values found previously.
5.1.6. Comparison between IGAD1 and IGAD2
As the iteration number increases, the IGAD performances improve over the whole SNR range for all the 3D-PBC studied. The performances of the IGAD decoders are depicted in Figure 10 for the BCH (16, 11, 4)3 3D-PBC. They can be improved by increasing the total number of members, as shown in Figure 6. The IGAD1 and IGAD2 performances are, respectively, about 1.4 dB and 1.33 dB away from the Shannon capacity limit, which is 0.97 dB for this code. We used the optimized parameters found previously.
6. Conclusion
In this paper, we have presented two iterative decoding algorithms, based on genetic algorithms, that can be applied to any arbitrary 3D product block code without the need for a hard-in hard-out decoder. Our theoretical results show that these algorithms reduce the decoding complexity for codes with low rate and large correction capacity, or for large values of the parameter used in the LBDA algorithm. Furthermore, the performances of these algorithms can be improved by using asymmetric 3D-PBC codes and by tuning parameters such as the selection method, the crossover/mutation rates, the population size, the number of generations, and the number of iterations. These algorithms can also be applied over multipath fading channels, in both CDMA systems and systems without spread spectrum. These features open broad prospects for decoders based on artificial intelligence.