Journal of Electrical and Computer Engineering
Volume 2012 (2012), Article ID 609650, 8 pages
Research Article

Reduced Complexity Iterative Decoding of 3D-Product Block Codes Based on Genetic Algorithms

1Department of Industrial and Production Engineering, Moulay Ismail University, Ecole Nationale Supérieure d'Arts et Métiers, Meknès 50000, Morocco
2Department of Communication Networks, Ecole Nationale Supérieure d'Informatique et d'Analyse des Systèmes, Rabat 10000, Morocco

Received 15 September 2011; Revised 21 December 2011; Accepted 8 February 2012

Academic Editor: Lisimachos Kondi

Copyright © 2012 Abdeslam Ahmadi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Two iterative decoding algorithms for 3D-product block codes (3D-PBC) based on genetic algorithms (GAs) are presented. The first algorithm uses the Chase-Pyndiah SISO, and the second one uses the list-based SISO decoding algorithm (LBDA) based on order-i reprocessing. We applied these algorithms over the AWGN channel to symmetric 3D-PBC constructed from BCH codes. The simulation results show that the first algorithm outperforms the Chase-Pyndiah one and is only 1.38 dB away from the Shannon capacity limit at a BER of 10⁻⁵ for BCH (31, 21, 5)3 and 1.4 dB for BCH (16, 11, 4)3. The simulations of the LBDA-based GA on BCH (16, 11, 4)3 show that it outperforms the first algorithm and is about 1.33 dB from the Shannon limit. Furthermore, these algorithms can be applied to any arbitrary 3D binary product block code, without the need of a hard-in hard-out decoder. We also show that the two proposed decoders are less complex than both the Chase-Pyndiah algorithm for codes with large correction capacity and the LBDA for a large i parameter. These features make the decoders based on genetic algorithms efficient and attractive.

1. Introduction

Among the codes proposed in the history of error correction, some perform very close to the Shannon limit, such as Turbo codes [1] and LDPC codes [2]. Nevertheless, this remarkable reduction of the BER comes at the expense of decoder complexity. The current challenge for researchers in this field is to find a compromise between performance and decoding complexity. Thus, several works on the optimization of decoding algorithms have emerged, in particular those associated with product codes. These codes were first introduced in 1954 by Elias [3]. In 1981 and 1983, a hard-in hard-out (HIHO) iterative decoding method for these codes was described by Tanner [4] and by Lin and Costello [5], respectively. In 1994, a soft-in soft-out (SISO) iterative decoding of product block codes (PBC) was proposed by Pyndiah et al. [6], using the Chase algorithm as the elementary decoder [7]. This algorithm does not work alone, but together with another HIHO decoder, which is not always easy to find for some codes, such as quadratic residue (QR) codes. Later, in 2004, an enhanced SISO iterative decoding algorithm for PBC, based on order-i reprocessing decoding, was developed by Martin et al. [8].

Recently, researchers in the field of channel coding have drawn on artificial intelligence techniques to develop very good decoders for linear block codes. Among the first works in this direction are the decoding of linear block codes using the A* algorithm [9], genetic algorithms [10], and neural networks [11].

In this work, we are interested in decoders based on genetic algorithms (GAD) [10] applied to 3D-product block codes (3D-PBC). It was shown in [12] that these decoders, applied to BCH codes, outperform the Chase-2 algorithm and have a lower complexity for BCH codes with large block lengths. We note that their performances can be improved further by optimizing some parameters, such as the population size and the number of generations.

In this paper, which is the continuation of the work in [13], we introduce and study two iterative decoding algorithms, based on GAD, for an arbitrary 3D binary product block code. The extrinsic information is computed in the first proposed algorithm according to the Chase-Pyndiah formulas [6] and in the second one according to the list-based SISO decoding algorithm (LBDA) [8]. We also compare the complexity of the proposed algorithms with that of the Chase-Pyndiah and LBDA algorithms.

This paper is organized as follows. Section 2 defines the 3D-PBC code. Section 3 explains the elementary decoding based on GAD. The presentation and complexity study of our iterative decoding algorithms using genetic algorithms (IGAD) are given in Section 4. Section 5 illustrates, through simulations, the IGAD performances and the effect of some parameters on them; it also compares the performances of the two proposed algorithms. Finally, Section 6 concludes and indicates how the performances of our decoders can be improved further.

2. 3D-Product Block Code (3D-PBC)

Product codes (or iterative codes) are a particular case of serially concatenated codes. They make it possible to construct codes of great length by concatenating two or more arbitrary short block codes. In our case, we consider two symmetric 3D-PBC, (16, 11, 4)3 and (31, 21, 5)3, each built from three identical BCH codes.

Let C(1)(n1, k1, d1), C(2)(n2, k2, d2), and C(3)(n3, k3, d3) be three linear block codes. We encode an information block, using the 3D-PBC C = C(1) ⊗ C(2) ⊗ C(3) shown in Figure 1, by:
(1) filling a cube of k2 rows, k1 columns, and depth k3 with k1 × k2 × k3 information bits;
(2) coding the k2 × k3 rows (the cube contains k3 lateral planes of k2 rows each) using code C(1). The check bits are placed at the right, and we obtain a new cube of k2 × k3 × n1 bits;
(3) coding the n1 × k3 columns of the cube obtained in the previous step using code C(2), which means that the check bits are themselves encoded (the previous cube contains n1 transverse planes of k3 columns each). The check bits are placed at the bottom of the cube obtained in step 2, and we get a new cube of n1 × k3 × n2 bits;
(4) finally, coding the cube obtained in step 3 from front to back, that is, coding the n1 × n2 depth vectors using code C(3) (the previous cube consists of n2 horizontal planes of n1 columns each). The check bits are placed at the back. The resulting cube of n1 × n2 × n3 bits is the codeword.

Figure 1: The 3D-product block code.
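The three encoding steps above can be sketched in code. This is an illustrative sketch only, not the paper's BCH encoder: it uses a toy systematic single-parity-check SPC(4, 3) code as all three component codes, and the helper `spc_encode` is hypothetical.

```python
import numpy as np

def spc_encode(bits):
    # systematic (k+1, k) single-parity-check encoder: append a parity bit
    return np.append(bits, bits.sum() % 2)

def encode_3d_pbc(info):
    # info: k x k x k cube of information bits
    c = np.apply_along_axis(spc_encode, 2, info)  # step 2: encode the rows
    c = np.apply_along_axis(spc_encode, 1, c)     # step 3: encode the columns
    c = np.apply_along_axis(spc_encode, 0, c)     # step 4: encode the depths
    return c

info = np.random.randint(0, 2, (3, 3, 3))
cw = encode_3d_pbc(info)
assert cw.shape == (4, 4, 4)
# every row, column, and depth vector of the codeword satisfies the parity check,
# including the "checks on checks"
assert (cw.sum(axis=2) % 2 == 0).all()
assert (cw.sum(axis=1) % 2 == 0).all()
assert (cw.sum(axis=0) % 2 == 0).all()
```

The final assertions hold because the component code is linear, so the check bits of check bits are themselves valid check bits.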

By reasoning similar to that in [14], we can show that the parameters of the 3D-PBC are:
(i) length: n = n1 × n2 × n3;
(ii) dimension: k = k1 × k2 × k3;
(iii) minimum Hamming distance: d = d1 × d2 × d3;
(iv) rate: R = R1 × R2 × R3 = (k1/n1) × (k2/n2) × (k3/n3).

This shows one of the best advantages of product block codes: building very long block codes with large minimum Hamming distance by concatenating short codes with small minimum Hamming distance.
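For the two symmetric codes considered in this paper, these relations give the following parameters (a small numerical check):

```python
# Parameters of a symmetric 3D-PBC built from a component code (n, k, d),
# following the relations n = n^3, k = k^3, d = d^3, R = (k/n)^3 above.
def pbc3d_params(n, k, d):
    return {"n": n ** 3, "k": k ** 3, "d": d ** 3, "R": (k / n) ** 3}

p16 = pbc3d_params(16, 11, 4)
p31 = pbc3d_params(31, 21, 5)
assert p16["n"] == 4096 and p16["k"] == 1331 and p16["d"] == 64
assert p31["n"] == 29791 and p31["k"] == 9261 and p31["d"] == 125
print(round(p16["R"], 2), round(p31["R"], 2))  # rates 0.32 and 0.31
```

Note how short component codes (d = 4 or 5) yield long codes with minimum distance 64 and 125.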

3. Elementary Decoding of Linear Codes

Let R = (R1, …, Rn) be the received sequence at the input of the decoder of a binary linear block code C(n, k, d) with generator matrix G.

3.1. Hard-Input Soft-Output Decoder

Step 1
Sort the elements of the received vector R in descending order of magnitude. Since the channel is AWGN, this places the most reliable elements in the first ranks. The vector is then permuted so that its first k coordinates are linearly independent. We obtain a vector R′ = π(R) = (R′1, …, R′n) such that |R′1| ≥ |R′2| ≥ ⋯ ≥ |R′n|. Let G′ be the permutation of G by π, that is, G′ = π(G).

Step 2
Quantize the first k bits of R′ to obtain a vector r, and randomly generate (Ni − 1) information vectors of k bits each. These vectors, together with r, form the initial population of Ni individuals (I1, …, INi).

Step 3
Encode the individuals of the current population using G′ to obtain codewords Ci = Ii G′ (1 ≤ i ≤ Ni). Then compute the fitness of each individual, defined as the Euclidean distance between Ci and R′, and sort the individuals in ascending order of fitness.

Step 4
Copy the first Ne individuals (Ne: elite number ≤ Ni) into the next population, which is completed by offspring generated using reproduction operators: selection of the two best individuals as parents (a, b) using the following linear ranking:

Wi = Wmax − 2(i − 1)(Wmax − 1)/(Ni − 1),  ∀i ∈ {1, …, Ni},  (1)

where Wi is the weight of the i-th individual and the weight Wmax is assigned to the fittest (nearest) individual.
Generate the remaining individuals of the next population (ranks Ne + 1 to Ni) by crossover and mutation. Let pc, pm, and Rand be, respectively, the crossover probability, the mutation probability, and a uniform random value between 0 and 1, drawn afresh at each use.

if Rand < pc, then for all i ∈ {Ne + 1, …, Ni}, j ∈ {1, …, k}:

Iij = aj  if Rand < (1 − aj + aj bj) + (aj − bj)/(1 + e^(−4R′j/N0)),
      bj  otherwise,  (2)

and then

Iij = 1 − Iij  if Rand < pm,  (3)

else

Ii = a  if Rand < 0.5,
     b  otherwise,  (4)

end if

Repeat steps 3 and 4 for 𝑁𝑔 generations.

Step 5
The first (fittest) individual D′ of the last generation is the nearest to R′. The decided codeword is therefore D = π⁻¹(D′).
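Steps 1-5 can be sketched as follows. This is a simplified illustration, not the paper's exact decoder: the component code is a (7, 4) Hamming code standing in for BCH, the reliability permutation of Step 1 is omitted, the parents are simply the two fittest individuals rather than the linear ranking of (1), and a plain uniform crossover replaces the reliability-weighted crossover of (2).

```python
import numpy as np

rng = np.random.default_rng(7)

# Systematic generator matrix of the (7, 4) Hamming code, an illustrative
# stand-in for the BCH component codes used in the paper.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
k, n = G.shape
N_i, N_g, N_e, p_c, p_m = 20, 15, 4, 0.97, 0.03  # GA parameters, Section 5 style

def fitness(pop, R):
    # Step 3: Euclidean distance between the BPSK image of each codeword and R
    cw = pop @ G % 2
    return np.linalg.norm((2 * cw - 1) - R, axis=1)

def gad_decode(R):
    # Step 2: hard decision on the k information positions plus random individuals
    pop = rng.integers(0, 2, (N_i, k))
    pop[0] = (R[:k] > 0).astype(int)
    for _ in range(N_g):
        pop = pop[np.argsort(fitness(pop, R))]   # sort by ascending fitness
        nxt = list(pop[:N_e])                    # Step 4: keep the N_e elites
        a, b = pop[0], pop[1]                    # two fittest as parents
        while len(nxt) < N_i:
            if rng.random() < p_c:               # uniform crossover (simplified)
                child = np.where(rng.random(k) < 0.5, a, b)
            else:                                # no crossover: copy a parent
                child = (a if rng.random() < 0.5 else b).copy()
            child = (child + (rng.random(k) < p_m)) % 2   # mutation
            nxt.append(child)
        pop = np.array(nxt)
    best = pop[np.argsort(fitness(pop, R))[0]]
    return best @ G % 2                          # Step 5: decided codeword D

# decode a noisy BPSK observation of the all-zero codeword (bit 0 maps to -1)
R = -np.ones(n) + 0.6 * rng.standard_normal(n)
D = gad_decode(R)
```

Thanks to elitism, the fittest individual found so far is never lost between generations.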

3.2. Soft-Input Soft-Output Decoder

In this section, we present the SO_GAD decoders (soft-output GAD) used as the elementary decoder in our iterative decoding algorithms.

Let D denote the GAD decision for the input sequence R, and w the extrinsic information.

Let H(j) be the competitor codeword of D for the j-th bit, defined by

‖H(j) − R‖ = min_{2 ≤ p ≤ Ni} ‖Q(p) − R‖, subject to Qj(p) ≠ Dj,  (5)

where Q(p) is the p-th codeword of the last generation, Qj(p) and Dj are the j-th bits of Q(p) and D, and ‖·‖ is the Euclidean distance.

Algorithm 1. (w, D) = SO_GAD(k, n, R, pc, pm, Ni, Ng, β). Algorithm SO_GAD accepts as input k, n, R, pc, pm, Ni, Ng, and the coefficient β. This coefficient is optimized according to the chosen code and SNR to enhance the algorithm performance.
For j = 1 to n do
if H(j) exists, then

wj = D̃j [(‖H(j) − R‖² − ‖D − R‖²)/4] = D̃j Σ_{p=1, p≠j, H̃p(j) ≠ D̃p}^{n} Rp D̃p,  (6)

else

wj = β D̃j,  (7)

where D̃j = 2Dj − 1.
end if
End for
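The extrinsic-information rule of Algorithm 1 can be sketched as below, under the reconstructed formulas (6) and (7). The function name `extrinsic_chase_pyndiah` and the candidate list `last_gen` are hypothetical; `last_gen` stands for the last GA generation, stored as BPSK (±1) codewords sorted by ascending distance to R, so that its entries already play the role of the bipolar bits D̃j.

```python
import numpy as np

def extrinsic_chase_pyndiah(R, last_gen, beta=0.4):
    """Sketch of Algorithm 1. Returns the extrinsic vector w and decision D."""
    D = last_gen[0]                          # decision: the fittest codeword
    dist = np.linalg.norm(last_gen - R, axis=1)
    w = np.empty(len(R))
    for j in range(len(R)):
        # competitor H(j): nearest codeword whose j-th bit differs from D_j
        comp = next((p for p in range(1, len(last_gen))
                     if last_gen[p, j] != D[j]), None)
        if comp is not None:                 # formula (6)
            w[j] = D[j] * (dist[comp] ** 2 - dist[0] ** 2) / 4
        else:                                # formula (7): no competitor found
            w[j] = beta * D[j]
    return w, D

# tiny example: two candidate codewords of length 3
last_gen = np.array([[-1., -1., -1.], [1., -1., -1.]])
R = np.array([-0.9, -1.1, -0.8])
w, D = extrinsic_chase_pyndiah(R, last_gen)  # w is approx [-0.9, -0.4, -0.4]
```

Only bit 0 has a competitor here, so bits 1 and 2 fall back to the β rule of (7).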

Algorithm 2. (w, D) = SO_GAD(k, n, R, pc, pm, Ni, Ng, Ns). Let Ns be the LBDA parameter (Ns ≤ k) enhancing the decoding performances [8]. Algorithm SO_GAD accepts as input k, n, R, pc, pm, Ni, Ng, and Ns. This parameter is usually chosen to be ⌈2k/3⌉ or k.
For j = 1 to k − Ns do

wj = D̃j (1/|Γ|) Σ_{l=k−Ns+1, l∈Γ}^{n} D̃l wl  if Σ_{l=k−Ns+1, l∈Γ}^{n} D̃l wl ≥ 0,
wj = D̃j min_{l∈Γ} {D̃l wl > 0}  otherwise,  (8)

where Γ denotes the set of positions j where H(j) exists.
End for
For j = k − Ns + 1 to n do
if H(j) exists, then

wj = (1/2) D̃j [Σ_{l=1}^{n} (D̃l − H̃l(j)) Rl] − Rj,  (9)

else wj is computed as in (8):

wj = D̃j (1/|Γ|) Σ_{l=k−Ns+1, l∈Γ}^{n} D̃l wl  if Σ_{l=k−Ns+1, l∈Γ}^{n} D̃l wl ≥ 0,
wj = D̃j min_{l∈Γ} {D̃l wl > 0}  otherwise,  (10)

end if
End for.

3.2.1. Decoding

The SO_GAD algorithm uses GAD to decode the input sequence R. The decision codeword D is the top of the Ng-th generation sorted in ascending order of fitness, and the competitor codeword H(j) corresponding to the j-th bit of D, if it exists, is the first member of the last generation whose j-th bit differs from that of D (Hj(j) ≠ Dj).

3.2.2. Extrinsic Information

The decision codeword D and the associated competitor codewords (H(j))_{1≤j≤n} are used to compute the extrinsic information from formulas (6) and (7) for the first algorithm and from (8)-(10) for the second one.

4. Iterative Decoding Algorithm and Complexity

In this section, we describe the iterative decoding algorithm of PBC based on GAD (IGAD); then we show that IGAD has polynomial time complexity.

Let {C(i)(ni, ki, di)}_{1≤i≤3} denote three binary linear block codes of length ni, dimension ki, minimum Hamming distance di, and generator matrix G(i).

4.1. Iterative Decoding Algorithm

Let (Rijk)_{1≤i≤n2, 1≤j≤n1, 1≤k≤n3} be the received codeword. Figures 2 and 3 show the iterative decoding schemes of PBC based on GAD for the proposed algorithms. The following is an outline of the IGADs.

Figure 2: The (⌊θ/3⌋ + 1)-th iteration of IGAD1.
Figure 3: The (⌊θ/3⌋ + 1)-th iteration of IGAD2.

Algorithm 3. IGAD(k1, k2, k3, n1, n2, n3, R, pc, pm, Ni, Ng, Nit, α, {Ns | β}).
Algorithm IGAD accepts as input k1, k2, k3, n1, n2, n3, R, pc, pm, Ni, Ng, the number of iterations Nit, and the coefficients (α(θ))_{0≤θ<3Nit}. In the case of the first algorithm, we also use the coefficients (β(θ))_{0≤θ<3Nit}, and in the second, the Ns parameter. The α and β coefficients are optimized by simulation, step by step, for each code. For the second algorithm, we choose α = 0.5.

Step 1
Extrinsic information initialization:
θ = 0, Iteration = 1.
Let w(θ)ijk be the extrinsic information given to the θ-th elementary decoder by the previous decoder:

w(0)ijk = 0,  1 ≤ i ≤ n2, 1 ≤ j ≤ n1, 1 ≤ k ≤ n3.  (11)

Step 2
Row, column, and depth decoding:
While (Iteration ≤ Nit) do

Step 2.1.
Decode with SO_GAD the j-th column and estimate the extrinsic information w(θ+1)ijk, using (6) and (7), for each vector s·j· at the input of the elementary decoder:

s(θ)ijk = Rijk + α(θ) w(θ)ijk,  1 ≤ i ≤ n2, 1 ≤ k ≤ n3.  (12)

Step 2.2. and 2.3.
Repeat step 2.1 to decode the rows and depths and to estimate the extrinsic information. Let D(θ+3) and w(θ+3) be, respectively, the decision and extrinsic-information cubes at the output of the depth elementary decoder.

Step 3

End While.

Select the decided codeword D(3Nit) at the Nit-th iteration.

Stopping Criterion for the Second Algorithm.

Since the GAD decoder always decides a codeword, our second decoder does not need the NCB (nonconvergent block) decoder proposed in [8], so its complexity is reduced.
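The iteration schedule of Algorithm 3 can be summarized as follows. This is a structural sketch only: `so_gad` is a placeholder for either elementary SO_GAD decoder, and the function and variable names are hypothetical.

```python
import numpy as np

def igad(R, so_gad, alpha, n_it):
    """Sketch of the IGAD schedule. R: n1 x n2 x n3 received cube;
    so_gad(vector) -> (w, d) stands for the elementary SO_GAD decoder;
    alpha: sequence of 3 * n_it scaling factors (alpha(theta))."""
    w = np.zeros_like(R)                 # (11): w(0) = 0
    theta = 0
    for _ in range(n_it):                # While Iteration <= N_it
        for axis in (0, 1, 2):           # columns, rows, then depths
            S = R + alpha[theta] * w     # (12): elementary decoder input
            w_new = np.empty_like(R)
            D = np.empty_like(R)
            # decode every vector of the cube along the current axis
            other = [s for a, s in enumerate(R.shape) if a != axis]
            for idx in np.ndindex(*other):
                sl = list(idx)
                sl.insert(axis, slice(None))
                sl = tuple(sl)
                w_new[sl], D[sl] = so_gad(S[sl])
            w = w_new
            theta += 1
    return D

# dummy elementary decoder: zero extrinsic information, hard decision
hard = lambda v: (np.zeros_like(v), (v > 0).astype(float))
R = np.array([[[0.4, -1.2], [-0.3, 0.9]], [[1.1, -0.8], [0.2, -0.1]]])
D = igad(R, hard, [0.5] * 3, 1)
```

With the dummy hard-decision decoder, the loop simply returns the sign decisions of R; a real SO_GAD would feed nonzero extrinsic information into the next half-iteration.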

4.2. Complexity Analysis

In this section, we present and compare the expressions of time complexities of the studied decoders.

4.2.1. IGADs Time Complexity

If we do not take into account the extrinsic-information computation step, the two algorithms have the same time complexity. The GAD algorithm for a linear block code C(n, k) has polynomial time complexity O(f(k, n, Ni, Ng)), where the function f is given by [12]:

f(k, n, Ni, Ng) = k²n + NiNg(kn + log Ni).  (13)

Time Complexity of IGAD1
(i) Time complexity of extrinsic-information computation: for each decision (row, column, or depth) at the last generation of each iteration, the worst-case time complexity of the competitor search is O([Ni − 1]n).

From (6), the worst-case time complexity of computing the extrinsic information (when the competitor exists) at the last generation of each iteration is O(n²). So the total time complexity of extrinsic-information computation is O(comp1(Ni, n)), where

comp1(Ni, n) = Ni n + n².  (14)

(ii) Total time complexity:

At any iteration of IGAD1, the first elementary decoder has a time complexity of O(k2k3 f(k1, n1, Ni, Ng)), the second O(n1k3 f(k2, n2, Ni, Ng)), and the third O(n1n2 f(k3, n3, Ni, Ng)), so the total complexity is polynomial:

O(Nit [k2k3 g(k1, n1, Ni, Ng) + n1k3 g(k2, n2, Ni, Ng) + n1n2 g(k3, n3, Ni, Ng)]),  (15)

where

g(k, n, Ni, Ng) = f(k, n, Ni, Ng) + comp1(Ni, n).  (16)

For the symmetric 3D-PBC, n1 = n2 = n3 = n and k1 = k2 = k3 = k, and the IGAD1 time complexity becomes

O(Nit [k² + n² + kn][k²n + NiNg(kn + log Ni) + Ni n + n²]).  (17)

Time Complexity of IGAD2
(i) Time complexity of extrinsic-information computation: the maximal number of competitors for each decision is |Γ|max = n. So, at the last generation of each iteration, the worst-case time complexity of the first step, given by (8), is O((k − Ns) max(2n + 1, 2(n + Ns − k) + 3)) = O(n(k − Ns)).

From (9), the worst-case time complexity of the competitor search is O([Ni − 1](n − k + Ns)).

From (9) and (10), the worst-case time complexity of the second step of extrinsic-information computation is

O((n − k + Ns) max(2n + 3, 2n + Ns − k + 3, 2n + 1)) = O(n(n − k + Ns)).  (18)

So the total time complexity of extrinsic-information computation is O(comp2(Ni, n, Ns, k)), where

comp2(Ni, n, Ns, k) = Ni(n − k + Ns) + n².  (19)

(ii) Total time complexity:

The total complexity in this case is given by (15), with (16) replaced by

g(k, n, Ni, Ng) = f(k, n, Ni, Ng) + comp2(Ni, n, Ns, k).  (20)

For the symmetric 3D-PBC, n1 = n2 = n3 = n and k1 = k2 = k3 = k, and the IGAD2 time complexity becomes

O(Nit [k² + n² + kn] × [k²n + NiNg(kn + log Ni) + Ni(n − k + Ns) + n²]).  (21)

It is clear from (17) and (21) that IGAD2 is less complex than IGAD1, and that their complexities are equal when Ns = k.

4.2.2. Chase-Pyndiah and LBDA Algorithms Time Complexities

We now show that these algorithms have exponential time complexity. Let C(n, k, d) be a BCH code, and let M be the number of test patterns used in both the Chase and OSD-i (ordered statistic decoding) algorithms. The complexity of each algorithm is O(Mn² log2 n).

Computing the Euclidean distance of each codeword has a computational complexity of O(n). So, the total time complexity of decoding and computing the fitness of the M test patterns is O(Mn² log2 n).

At any given decoding iteration of the Chase-Pyndiah algorithm, sorting the M fitness values has a time complexity of O(M log2 M), and the worst-case time complexity of the competitor search is O([M − 1]n). Thus, the total time complexity of the Chase-Pyndiah algorithm is

O(Nit [k2k3 F(n1, k1, d1) + k3n1 F(n2, k2, d2) + n1n2 F(n3, k3, d3)]),  (22)

where

F(n, k, d) = M(n² log2 n + log2 M).  (23)

Thus, in the case n1 = n2 = n3 = n and k1 = k2 = k3 = k, the exponential time complexity of the two algorithms is

O(Nit M [n² + k² + kn][log2 M + n² log2 n]).  (24)

Note that in the case of the Chase-2 algorithm, M = 2^t, where t = ⌊(d − 1)/2⌋.

From (17), (21), and (24), it follows that IGAD1 and IGAD2 are less complex than both the Chase-Pyndiah and LBDA algorithms for codes with large correction capacity t or large i parameter, as well as for codes with great length and low rate.
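The dominant terms of expressions (17), (21), and (24) can be compared numerically. This is an order-of-magnitude sketch only (constant factors and lower-order terms hidden by the O-notation are ignored), with hypothetical helper names:

```python
from math import ceil, log2

def igad_ops(n, k, N_i, N_g, N_it, N_s=None):
    """Dominant operation count from (17) (N_s=None, i.e. IGAD1) or (21) (IGAD2)."""
    extr = N_i * n if N_s is None else N_i * (n - k + N_s)
    return N_it * (k*k + n*n + k*n) * (k*k*n + N_i*N_g*(k*n + log2(N_i)) + extr + n*n)

def chase_ops(n, k, d, N_it):
    """Dominant operation count from (24), with M = 2**t test patterns, t = (d-1)//2."""
    M = 2 ** ((d - 1) // 2)
    return N_it * M * (n*n + k*k + k*n) * (log2(M) + n*n*log2(n))

# BCH (31, 21, 5)^3 with the genetic parameters used in Section 5
g1 = igad_ops(31, 21, 60, 18, 12)
g2 = igad_ops(31, 21, 60, 18, 12, N_s=ceil(2 * 21 / 3))
cp = chase_ops(31, 21, 5, 12)
assert g2 < g1                                   # IGAD2 is cheaper than IGAD1
assert igad_ops(31, 21, 60, 18, 12, N_s=21) == g1  # equal complexities when N_s = k
```

Note how M = 2^t makes `chase_ops` grow exponentially with the correction capacity t, while both IGAD counts stay polynomial in n and k.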

5. Simulation Results

The figures in this section plot the bit error rate (BER) versus the energy-per-bit to noise power spectral density ratio Eb/N0 for the symmetric 3D-PBC (16, 11, 4)3 and (31, 21, 5)3. The simulation parameters used in the IGADs are given in Table 1.

Table 1: Simulation default parameters.
5.1. IGAD1 Performances
5.1.1. Scaling Factors Optimization for IGAD1

As the number of iterations increases, the extrinsic information gradually becomes more reliable. To take this effect into account, the scaling factors α are used to reduce the impact of the turbo decoder input. It has been shown that these factors depend on the code and on the GAD, so they are optimized step by step for each code. The optimized values of α and β for our algorithm are shown in Table 2. As the scaling factors α and β move away from these optimal values, the decoding performance of the IGAD1 decoder decreases. Figure 4 shows the gain obtained with the optimized values of α for (16, 11, 4)3, compared to values taken randomly. The genetic parameters used are Ng = 18, Ni = 35, pc = 0.97, and pm = 0.03.

Table 2: Optimized values of 𝛼 and 𝛽 for IGAD1.
Figure 4: Effect of the scaling factors for (16, 11, 4)3 at the 12th iteration on IGAD1.
5.1.2. Effect of Evaluated Codewords Number

Generally, as the number of evaluated codewords NiNg increases, the probability of finding the codeword closest to the input sequence becomes higher, which improves the BER performance. The effect of increasing the number of evaluated codewords on the BER for code (16, 11, 4)3 at the 12th iteration is presented in Figures 5 and 6. The values Ng = 18 and Ni = 60 can be taken as optimal over a large range of Eb/N0. The other genetic parameters are Ni = 35, pc = 0.97, and pm = 0.03 for the first optimization, and Ng = 18, pc = 0.97, and pm = 0.03 for the second.

Figure 5: Effect of the generation number for (16, 11, 4)3 at the 12th iteration on IGAD1.
Figure 6: Effect of the population size for (16, 11, 4)3 at the 12th iteration on IGAD1.
5.1.3. Cross-Over Rate Effect

Since the crossover rate is one of the important features of a genetic algorithm, an optimization of this probability is necessary. Figure 7 shows that the optimized value pc = 0.97 for the (16, 11, 4)3 3D-PBC improves the BER at rather high SNR at the 12th iteration. This value close to 1 means that IGAD1 requires broad exploration and efficient exploitation, but it somewhat increases the algorithm complexity. Indeed, when pc is close to 0, the crossover operation occurs rarely. For this simulation, we fixed the other parameters as follows: Ng = 18, Ni = 60, and pm = 0.03.

Figure 7: Effect of the crossover probability for (16, 11, 4)3 at the 12th iteration on IGAD1.
5.1.4. Mutation Rate Effect

The effect of the mutation rate on IGAD1 for the BCH (16, 11, 4)3 3D-PBC is depicted in Figure 8. It is shown that pm = 0.05 is the optimal value for the BER at high SNR at the 12th iteration. One reason this value is close to 0 may be the stability of members in the vicinity of optima for low mutation rates. The fixed values are Ng = 18, Ni = 60, and pc = 0.97.

Figure 8: Effect of the mutation rate for the (16, 11, 4)3 3D-PBC at the 12th iteration on IGAD1.
5.1.5. Code Rate Effect

Figure 9 shows the improvement/degradation of the BER performance of IGAD1, at the 12th and 15th iterations respectively, when the code dimension or code rate is decreased/increased. The rate 0.31 of (31, 21, 5)3 is less than that of (16, 11, 4)3, which equals 0.32. This explains the better performances of the first 3D-PBC code in the range Eb/N0 ≥ 2.5 dB. In this simulation, we adopted the optimal values found previously: Ng = 18, Ni = 60, pc = 0.97, and pm = 0.03.

Figure 9: IGAD1 performances for the BCH (16, 11, 4)3 and BCH (31, 21, 5)3 3D-PBC at the 12th iteration.
5.1.6. Comparison between IGAD1 and IGAD2

As the number of iterations increases, the IGAD performances improve over nearly the whole Eb/N0 range for all the 3D-PBC studied in this paper. The performances of the IGAD decoders are depicted in Figure 10 for the BCH (16, 11, 4)3 3D-PBC. These performances can be improved by increasing the total number of members, as shown in Figure 6. The IGAD1 and IGAD2 performances are, respectively, about 1.4 dB and 1.33 dB away from the Shannon capacity limit, which is 0.97 dB for this code. We used the following optimized parameters: Ng = 18, Ni = 60, pc = 0.97, and pm = 0.05.

Figure 10: BER of IGAD1 and IGAD2 for (16, 11, 4)3 at 12 iterations (Ns = 8).

6. Conclusion

In this paper, we have presented two iterative decoding algorithms, based on genetic algorithms, which can be applied to any arbitrary 3D-product block code without the need of a hard-in hard-out decoder. Our theoretical results show that these algorithms reduce the decoding complexity for codes with a low rate and a large correction capacity t, or for a large i parameter in the LBDA algorithm. Furthermore, the performances of these algorithms can be improved by using asymmetric 3D-PBC codes and by tuning some parameters, such as the selection method, the crossover/mutation rates, the population size, the number of generations, and the number of iterations. These algorithms can also be applied to multipath fading channels, in both CDMA systems and systems without spread spectrum. These features open broad prospects for decoders based on artificial intelligence.


  1. C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: turbo-codes," IEEE Transactions on Communications, vol. 44, no. 9, pp. 1261–1271, 1996.
  2. D. J. C. MacKay and R. M. Neal, "Good codes based on very sparse matrices," in Proceedings of the 5th IMA Conference on Cryptography and Coding, Springer, Berlin, Germany, 1995.
  3. P. Elias, "Error-free coding," IRE Transactions on Information Theory, vol. PGIT-4, pp. 29–37, 1954.
  4. R. M. Tanner, "A recursive approach to low complexity codes," IEEE Transactions on Information Theory, vol. IT-27, no. 5, pp. 533–547, 1981.
  5. S. Lin and D. Costello, Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983.
  6. R. Pyndiah, A. Glavieux, A. Picart, and S. Jacq, "Near optimum decoding of product codes," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '94), vol. 1–3, pp. 339–343, San Francisco, Calif, USA, 1994.
  7. D. Chase, "A class of algorithms for decoding block codes with channel measurement information," IEEE Transactions on Information Theory, vol. 18, pp. 170–181, 1972.
  8. P. A. Martin, D. P. Taylor, and M. P. C. Fossorier, "Soft-input soft-output list-based decoding algorithm," IEEE Transactions on Communications, vol. 52, no. 2, pp. 252–262, 2004.
  9. Y. S. Han, C. R. P. Hartmann, and C.-C. Chen, "Efficient maximum-likelihood soft-decision decoding of linear block codes using algorithm A*," Tech. Rep. SU-CIS-91-42, School of Computer and Information Science, Syracuse University, Syracuse, NY, USA, 1991.
  10. H. S. Maini, K. G. Mehrotra, C. Mohan, and S. Ranka, "Genetic algorithms for soft decision decoding of linear block codes," Journal of Evolutionary Computation, vol. 2, no. 2, pp. 145–164, 1994.
  11. J. L. Wu, Y. H. Tseng, and Y. M. Huang, "Neural networks decoders for linear block codes," International Journal of Computational Engineering Science, vol. 3, no. 3, pp. 235–255, 2002.
  12. F. El Bouanani, H. Berbia, M. Belkasmi, and H. Ben-azza, "Comparaison des décodeurs de Chase, l'OSD et ceux basés sur les algorithmes génétiques," in Proceedings of the GRETSI, Troyes, France, 2007.
  13. M. Belkasmi, H. Berbia, and F. El Bouanani, "Iterative decoding of product block codes based on the genetic algorithms," in Proceedings of the 7th International ITG Conference on Source and Channel Coding (SCC'08), 2008.
  14. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, The Netherlands, 1978.