Journal of Electrical and Computer Engineering
Volume 2012 (2012), Article ID 503834, 12 pages
http://dx.doi.org/10.1155/2012/503834
Research Article

Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

SIME Lab, ENSIAS, Mohammed V-Souissi University, Rabat, Morocco

Received 14 September 2011; Revised 21 December 2011; Accepted 12 January 2012

Academic Editor: Lisimachos P. Kondi

Copyright © 2012 Ahmed Azouaoui et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). In contrast to the existing genetic decoders in the literature, which use the code itself, the proposed algorithm uses the dual code; this new approach reduces the decoding complexity for high-rate codes. We simulated our algorithm over various transmission channels. Its performance is investigated and compared with that of competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to that of other algorithms.

1. Introduction

The current rapid development and deployment of wireless and digital communication encourage research activities in the domain of error-correcting codes. The latter are used to improve the reliability of data transmitted over communication channels susceptible to noise. Coding techniques create codewords by adding redundant information to the user information vectors. Decoding algorithms try to find the most likely transmitted codeword for a given received one, as depicted in Figure 1. Decoding algorithms are classified into two categories: hard-decision and soft-decision algorithms. Hard-decision algorithms work on a binary form of the received information. In contrast, soft-decision algorithms work directly on the received symbols [1].

Figure 1: A simplified communication system model.

Soft-decision decoding is an NP-hard problem and has been approached in different ways. Recently, artificial intelligence techniques have been introduced to solve it. Related works include the decoding of linear block codes using the A* algorithm [2], the decoding of linear block codes using genetic algorithms [3], and the decoding of BCH codes using neural networks [4].

Maini et al. [3] were, to our knowledge, the first to introduce genetic algorithms into the soft-decision decoding of linear block codes. Subsequently, Cardoso and Arantes [5] worked on the hard-decision decoding of linear block codes using a GA, and Shakeel [6] worked on soft-decision decoding of block codes using a compact genetic algorithm. These GA-based decoders use the generator matrix of the code, which makes decoding very complex for high-rate codes.

Genetic algorithms are search algorithms that were inspired by the mechanism of natural selection where stronger individuals are likely the winners in a competing environment. They combine survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In each generation, a new set of artificial creatures (chromosomes) is created using bits and pieces of the fittest of the old [7, 8].

The Dual Domain Decoding GA algorithm (DDGA) is a significant contribution to soft-decision decoding. Indeed, a comparison with other decoders that are currently among the most successful soft-decision decoding algorithms shows its efficiency. This new decoder can be applied to any binary linear block code, and in particular to codes without an algebraic decoder, unlike the Chase algorithm, which needs an algebraic hard-decision decoder. Further, it uses the dual code and works with the parity-check matrix, which makes it less complex for high-rate codes. To show the effectiveness of this decoder, we applied it to BCH and QR codes over two transmission channels.

The remainder of this paper is organized as follows: in Section 2, we introduce the genetic algorithms. Section 3 expresses soft-decision decoding as a combinatorial optimisation problem. In Section 4, DDGA, our genetic algorithm for decoding, is described. Section 5 reports the simulation results and discussions. Finally, Section 6 presents the conclusion and future trends.

2. Genetic Algorithm

A genetic algorithm is an artificial-intelligence-based methodology for solving problems. It is a nondeterministic, stochastic process for solving optimization problems. The concept of the genetic algorithm was introduced by John Holland [7] in 1975 with the aim of making computers do what nature does. He was concerned with algorithms that manipulate strings of binary digits to find solutions to problems in a way that exhibits the characteristics of natural evolution, that is, with developing an algorithm that is an abstraction of natural evolution. Holland's idea of the genetic algorithm stemmed from evolutionary theory.

GAs are well suited to tasks requiring optimization and are highly effective in situations where many inputs (variables) interact to produce a large number of possible outputs (solutions). Example application areas include the following.

Optimization
Such as data fitting, clustering, trend spotting, path finding, and ordering.

Management
Distribution, scheduling, project management, courier routing, container packing, task assignment, and timetables.

Financial
Portfolio balancing, budgeting, forecasting, investment analysis, and payment scheduling.

Engineering
Structural design (e.g., beam sizes), electrical design (e.g., circuit boards), mechanical design (e.g., optimize weight, size cost), process control, and network design (e.g., computer networks).

The typical steps in the design of a genetic algorithm are described below and illustrated in Figure 2:

Figure 2: A simplified model of a genetic algorithm.

Step 1. Representation of the problem variable domain as a chromosome of fixed length; the size of the chromosome population and the crossover probability are also chosen.

Step 2. Definition of fitness function, which is used for measuring the quality of an individual chromosome in the problem domain. The fitness function establishes the basis for selecting chromosomes that will be mated during reproduction.

Step 3. Random generation of an initial population of chromosomes of a fixed size.

Step 4. Calculation of fitness function for each individual chromosome.

Step 5. Selection of pairs of chromosomes for mating from the current population. Parent chromosomes are selected with a probability related to their fitness. The highly fit chromosomes have a higher probability of being selected for mating.

Step 6. Application of the genetic operations (crossover and mutation) for creating pairs of offspring chromosomes.

Step 7. Placement of the created offspring chromosomes in the new population.

Step 8. Repetition of Step 5 through Step 7 until the size of the new chromosome population becomes equal to the size of the initial population.

Step 9. Replacement of the initial (previous) parent chromosome population with the new offspring population.

Step 10. Repeat Step 4 through Step 9 until the termination criterion is satisfied.

The termination criterion could be either of the following:
(1) a known optimal or acceptable solution level has been attained;
(2) a maximum number of generations has been reached.
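As a minimal sketch, the ten steps above map onto a generic GA loop such as the following; the function names, the toy bit-count fitness, and the parameter values are illustrative assumptions, not part of any specific decoder.

```python
import random

def genetic_algorithm(fitness, random_individual, crossover, mutate,
                      pop_size=100, n_generations=60, n_elite=2):
    """Generic GA skeleton following Steps 1-10 (here minimising fitness)."""
    population = [random_individual() for _ in range(pop_size)]  # Step 3
    for _ in range(n_generations):                               # Step 10
        population.sort(key=fitness)                             # Step 4
        next_pop = population[:n_elite]                          # keep the elite
        while len(next_pop) < pop_size:                          # Steps 5-8
            # fitness-biased selection: draw parents from the better half
            p1, p2 = random.sample(population[:pop_size // 2], 2)
            next_pop.append(mutate(crossover(p1, p2)))           # Steps 6-7
        population = next_pop                                    # Step 9
    return min(population, key=fitness)

# Toy usage: minimise the number of 1-bits in a 16-bit string.
random.seed(0)
best = genetic_algorithm(
    fitness=sum,
    random_individual=lambda: [random.randint(0, 1) for _ in range(16)],
    crossover=lambda a, b: [random.choice(p) for p in zip(a, b)],  # uniform crossover
    mutate=lambda c: [b ^ (random.random() < 0.02) for b in c],
)
```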

3. Soft Decision Decoding as an Optimisation Problem

The maximum-likelihood soft-decision decoding problem of linear block codes is an NP-hard problem and can be stated as follows.

Given the received vector $r$ and the parity-check matrix $H$, let $S = zH^T$ be the syndrome of $z$, where $z$ is the hard decision of $r$ and $H^T$ is the transpose of $H$, and let $E(S)$ be the set of all error patterns whose syndrome is $S$. Find the $E \in E(S)$ which minimises the correlation discrepancy
$$f_r(E) = \sum_{j=1}^{n} E_j |r_j| = \sum_{\substack{1 \le j \le n \\ E_j = 1}} |r_j|, \tag{1}$$
where $n$ is the code length. The optimisation problem (1) has $n$ error variables, out of which only $(n-k)$ are independent, where $k$ is the code dimension. Using the algebraic structure of the code, the remaining $k$ variables can be expressed as functions of these $(n-k)$ variables.
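To make the objective concrete, here is a small sketch of the syndrome and correlation-discrepancy computations; the NumPy implementation, the (7,4) Hamming parity-check matrix, and the sample vectors are all illustrative assumptions.

```python
import numpy as np

def syndrome(z, H):
    """S = z H^T over GF(2)."""
    return (z @ H.T) % 2

def correlation_discrepancy(E, r):
    """f_r(E) = sum of |r_j| over the positions j where E_j = 1 (Eq. (1))."""
    return np.abs(r)[E == 1].sum()

# Hypothetical example: (7,4) Hamming code in systematic form H = [A | I_3].
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
r = np.array([0.9, -1.1, 0.2, 1.3, -0.1, 0.8, -0.7])  # received soft values
z = (r < 0).astype(int)                               # hard decision (bit 0 -> +1)
E = np.array([0, 0, 1, 0, 1, 0, 0])                   # candidate error pattern
S = syndrome(z, H)
f = correlation_discrepancy(E, r)                     # |0.2| + |-0.1|
```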

Up to now, only a few authors have tried to solve the soft-decision decoding problem by viewing it as an optimisation problem and using artificial intelligence techniques.

4. DDGA Algorithm

Let $C$ denote an $(n,k,d)$ binary linear block code with parity-check matrix $H$, and let $(r_i)_{1 \le i \le n}$ be the sequence received over a communication channel with noise variance $\sigma^2 = N_0/2$, where $N_0$ is the noise power spectral density.

Let 𝑁𝑖, 𝑁𝑒, and 𝑁𝑔 denote, respectively, the population size, the number of elite members, and the number of generations.

Let 𝑝𝑐 and 𝑝𝑚 be the crossover and the mutation rates.

4.1. Decoding Algorithm

The decoding-based genetic algorithm is depicted in Figure 3. The steps of the decoder are as follows.

Figure 3: Basic structure of DDGA.

Step 1. Sort the sequence $r$ so that $|r_i| > |r_{i+1}|$ for $1 \le i < n$. Further, permute the coordinates of $r$ to ensure that the last $(n-k)$ positions are the least reliable linearly independent positions. Call the resulting vector $r'$ and let $\pi$ be the associated permutation ($r' = \pi(r)$). Apply the permutation $\pi$ to the columns of $H$ to get a new parity-check matrix $H' = [A \mid I_{n-k}]$ ($H' = \pi(H)$).
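Step 1 can be sketched as below; for brevity, this hypothetical sketch assumes that the last $n-k$ positions after reliability sorting happen to be linearly independent, so only the Gaussian reduction of $H$ remains.

```python
import numpy as np

def sort_by_reliability(r, H):
    """Permute r (and the columns of H accordingly) so that |r'_1| >= |r'_2| >= ...;
    assumes the last n-k sorted positions are already linearly independent."""
    perm = np.argsort(-np.abs(r))      # most reliable positions first
    return r[perm], H[:, perm], perm

def reduce_to_systematic(Hp):
    """Row-reduce Hp over GF(2) so its last n-k columns become the identity,
    giving H' = [A | I_{n-k}]."""
    m, n = Hp.shape
    Hp = Hp.copy() % 2
    for i in range(m):
        col = n - m + i
        pivot = next(row for row in range(i, m) if Hp[row, col])
        Hp[[i, pivot]] = Hp[[pivot, i]]          # bring a pivot into row i
        for row in range(m):
            if row != i and Hp[row, col]:
                Hp[row] ^= Hp[i]                 # clear the column elsewhere
    return Hp

# Hypothetical (7,4) Hamming example.
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
r = np.array([1.3, -1.1, 0.9, 0.8, -0.1, 0.2, -0.3])
rp, Hp, perm = sort_by_reliability(r, H)
H_sys = reduce_to_systematic(Hp)                 # last 3 columns form I_3
```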

Step 2. Generate an initial population of $N_i$ binary vectors of $k$ bits (an individual represents the systematic part of an error candidate).
Substep 2.1. The first member, $E^1$, of this population is the zero vector.
Substep 2.2. The other $N_i - 1$ members, $(E^j)_{2 \le j \le N_i}$, are generated uniformly at random.
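A minimal sketch of this initialisation (the NumPy generator and the fixed seed are assumptions for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, an assumption for reproducibility

def initial_population(N_i, k):
    """Step 2: N_i candidate k-bit systematic error parts; the first member is
    the all-zero vector (the 'no error' hypothesis), the rest are uniform."""
    pop = rng.integers(0, 2, size=(N_i, k))
    pop[0] = 0
    return pop

pop = initial_population(10, 16)
```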

Step 3. For 𝑖 from 1 to 𝑁𝑔.
Substep 3.1. Compute the fitness of each individual in the population. An individual is a vector of $k$ bits.
Let $E$ be an individual, $z$ the quantization (hard decision) of $r'$, $S$ the syndrome of $z$ such that $S = zH'^T$, $S_1$ the $(n-k)$-tuple $S_1 = EA^T$, where $A$ is the submatrix of $H'$, and $S_2$ the $(n-k)$-tuple $S_2 = S + S_1$.
We form the error pattern $E'$ such that $E' = (E, E'')$, where $E$ is the chosen individual and $E'' = S_2$. Then $(z + E')$ is a codeword.
The fitness function is the correlation discrepancy between the permuted received word and the estimated error:
$$f_{r'}(E') = \sum_{\substack{1 \le j \le n \\ E'_j = 1}} |r'_j|. \tag{2}$$
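Substep 3.1 can be sketched as follows; the NumPy implementation and the (7,4) Hamming example (with $A$ taken from $H' = [A \mid I_3]$) are illustrative assumptions.

```python
import numpy as np

def fitness(E, z, r_perm, A):
    """Extend the k-bit individual E to a full error pattern (E, E'') with
    E'' = S + E A^T (mod 2), so that z plus the pattern is a codeword of
    H' = [A | I]; return the correlation discrepancy of Eq. (2) and the pattern."""
    n_k = A.shape[0]
    H = np.hstack([A, np.eye(n_k, dtype=int)])
    S = (z @ H.T) % 2                       # syndrome of the hard decision
    S2 = (S + (E @ A.T)) % 2                # S2 = S + S1
    E_full = np.concatenate([E, S2])
    return np.abs(r_perm)[E_full == 1].sum(), E_full

# Hypothetical check with the (7,4) Hamming code, A from H' = [A | I_3].
A = np.array([[1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 1]])
z = np.array([1, 0, 1, 1, 0, 1, 0])
r_perm = np.array([0.9, -1.1, 0.2, 1.3, -0.1, 0.8, -0.7])
f, E_full = fitness(np.array([0, 0, 0, 0]), z, r_perm, A)
H = np.hstack([A, np.eye(3, dtype=int)])
assert not (((z + E_full) % 2) @ H.T % 2).any()   # z + error is a codeword
```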
Substep 3.2. The population is sorted in ascending order of the members' fitness defined by (2).
Substep 3.3. The first (elite) $N_e$ best members of this generation are inserted into the next one.
Substep 3.4. The other $N_i - N_e$ members of the next generation are generated as follows.
Subsubstep 3.4.1
Selection operator: a selection operation that uses the linear ranking method is applied in order to identify the best parents $(E^{(1)}, E^{(2)})$ on which the reproduction operators are applied. For each individual $j$ ($1 \le j \le N_i$), we use the following linear ranking:
$$w_j = w_{\max} - 2(w_{\max} - 1)\frac{j - 1}{N_i - 1}, \qquad 1 \le j \le N_i, \tag{3}$$
where $w_j$ is the $j$th member's weight and $w_{\max} = 1.1$ is the maximum weight, associated with the first member.
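Equation (3) and the corresponding weighted draw might be sketched as follows; the use of Python's `random.choices` (which may draw the same parent twice in this simple form) is an implementation choice of ours.

```python
import random

def linear_ranking_weights(N_i, w_max=1.1):
    """w_j = w_max - 2*(w_max - 1)*(j - 1)/(N_i - 1), j = 1..N_i (Eq. (3)):
    weights fall linearly from w_max to 2 - w_max and sum to N_i."""
    return [w_max - 2 * (w_max - 1) * j / (N_i - 1) for j in range(N_i)]

def select_parents(sorted_population):
    """Draw two parents with probability proportional to the ranking weights
    (the population is assumed sorted from best to worst fitness)."""
    w = linear_ranking_weights(len(sorted_population))
    return random.choices(sorted_population, weights=w, k=2)

w = linear_ranking_weights(5)          # [1.1, 1.05, 1.0, 0.95, 0.9]
parents = select_parents([[0, 0], [0, 1], [1, 0], [1, 1], [0, 0]])
```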
Subsubstep 3.4.2
Crossover operator: create a new $k$-bit vector $E^j$ (the "child"). Let Rand be a uniformly distributed random value between 0 and 1, regenerated at each occurrence. The crossover operator is defined as follows: if $\mathrm{Rand}_1 \le p_c$, then the $i$th bit of the child $(E^j)_{N_e+1 \le j \le N_i}$, $1 \le i \le k$, is given by
$$E^j_i = \begin{cases} E^{(1)}_i & \text{if } E^{(1)}_i = E^{(2)}_i, \\ E^{(1)}_i & \text{otherwise, if } \mathrm{Rand}_2 \le p, \\ E^{(2)}_i & \text{otherwise,} \end{cases} \tag{4}$$
where
$$p = \begin{cases} \dfrac{1}{1 + e^{-4|r'_i|/N_0}} & \text{if } E^{(1)}_i = 0,\ E^{(2)}_i = 1, \\[2mm] \dfrac{e^{-4|r'_i|/N_0}}{1 + e^{-4|r'_i|/N_0}} & \text{if } E^{(1)}_i = 1,\ E^{(2)}_i = 0. \end{cases} \tag{5}$$
It is clear that if the $i$th bits of the parents differ, then for larger values of $|r'_i|$ the function $1/(1 + e^{-4|r'_i|/N_0})$ converges to 1; hence, the $i$th bit of the child has a high probability of being equal to 0. Note that if $\mathrm{Rand}_1 > p_c$ (no-crossover case):
$$E^j = \begin{cases} E^{(1)} & \text{if } \mathrm{Rand} \le 0.5, \\ E^{(2)} & \text{otherwise.} \end{cases} \tag{6}$$
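A sketch of this reliability-driven crossover, under our reading of the operator: where the parents agree the bit is copied, and where they disagree the child's error bit is set to 0 with a probability that grows with the reliability $|r'_i|$ (the BPSK mapping of bit 0 to +1 is an assumption).

```python
import math
import random

def soft_crossover(p1, p2, r_perm, N0, p_c=0.97):
    """Copy agreeing bits; where the parents disagree, set the child's error
    bit to 0 with probability 1/(1 + exp(-4|r'_i|/N0)), which approaches 1
    for reliable positions."""
    if random.random() > p_c:                       # no-crossover case
        return list(p1) if random.random() <= 0.5 else list(p2)
    child = []
    for b1, b2, r in zip(p1, p2, r_perm):
        if b1 == b2:
            child.append(b1)
        else:
            p0 = 1.0 / (1.0 + math.exp(-4.0 * abs(r) / N0))
            child.append(0 if random.random() <= p0 else 1)
    return child

# Very reliable positions force the disagreeing error bits toward 0.
child = soft_crossover([0, 1, 0, 1], [1, 0, 0, 1],
                       [50.0, 50.0, 0.3, 0.3], N0=1.0, p_c=1.0)
```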
Subsubstep 3.4.3
Mutation operator: if the crossover operation was realized, the bits $E^j_i$ are mutated with the mutation rate $p_m$:
$$E^j_i \leftarrow 1 - E^j_i \quad \text{if } \mathrm{Rand}_3 \le p_m. \tag{7}$$
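The mutation then reduces to a one-line sketch:

```python
import random

def mutate(child, p_m=0.03):
    """Flip each bit of the child independently with mutation rate p_m."""
    return [bit ^ (random.random() < p_m) for bit in child]
```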

Step 4. The decoder decision is $V = \pi^{-1}(E'_{\mathrm{best}} + z)$, where $E'_{\mathrm{best}}$ is the best member of the last generation.

Remark 1. In Step 1 of DDGA, in order to obtain a lighter algorithm, we apply Gaussian elimination to the $(n-k)$ independent columns corresponding to the least reliable positions, without applying the permutation $\pi$ explicitly. This optimisation is not used in other similar works [3, 9, 10].

4.2. Complexity Analysis

Firstly, we show that our algorithm has a polynomial time complexity.

Let $n$ be the code length, $k$ the code dimension (which equals the length of the individuals in the population), and $N_i$ the population size (the total number of individuals in the population).

At any given stage, we maintain a few $N_i \times k$ arrays; therefore, the memory complexity of this algorithm is $O(N_i k)$.

In order to get the complexity of our algorithm, we will compute the complexity of each step.

Step 2 has a time complexity of $O(k^2 n)$ [2]. This complexity depends on the random number generator in use, but its cost is negligible compared to that of Step 3.

Substeps 3.1 to 3.3 have a computational complexity of
$$O(k(n-k)N_i) + O(N_i \log N_i) + O(k). \tag{8}$$

Substeps 3.4.1 to Step 4 have an average-case complexity of
$$O(1) + p_c\,[O(k) + O(k) + O(k)] + (1 - p_c)\,[O(1) + O(k)] = O(1) + p_c\,O(k) + (1 - p_c)\,O(k). \tag{9}$$

This reduces to $O(k)$, which is also the worst-case complexity. Hence, each iteration of the genetic part of DDGA has a total time complexity of $O(k(n-k)N_i + N_i \log N_i)$ per generation.

The DDGA algorithm thus has a time complexity of $O(N_i N_g [k(n-k) + \log N_i])$.

Now, we compare our algorithm with four competitors. Table 1 shows the complexity of the five algorithms. The complexity of the Chase-2 algorithm increases exponentially with $t$, where $t$ is the error-correction capability of the code: it is $2^t$ times the complexity of hard-in hard-out decoding [11]. Similarly, the complexity of the OSD algorithm of order $m$ is exponential in $m$ [9]. Moreover, decoding becomes more complex for codes with large code length.

Table 1: Complexity of Chase-2, OSD-m, Shakeel, Maini, and DDGA algorithms.

For the Maini [3] and DDGA algorithms, the complexity is polynomial in $k$, $n$, $N_g$, $N_i$, and $\log N_i$, which makes them less complex than the other algorithms.

For the Shakeel algorithm, the complexity is also polynomial in $k$, $n$, and $T_c$, where $T_c$ denotes the average number of generations.

The three GA-based decoders have almost the same complexity, and all have lower complexity than the Chase-2 and OSD-m algorithms.

5. Simulation Results and Discussions

In order to show the effectiveness of DDGA, we performed intensive simulations.

Unless a subsection states otherwise, the simulations were made with the default parameters outlined in Table 2.

Table 2: Default parameters.

The performances are given in terms of the BER (bit error rate) as a function of the SNR (signal-to-noise ratio, $E_b/N_0$).

5.1. Evolution of BER with GA Generations

We illustrate the relation between the bit error probability and the number of genetic algorithm generations for some SNRs.

Figure 4 indicates the evolution of bit error probability with genetic algorithm generations. This figure shows that the bit error probability decreases with increasing number of generations.

Figure 4: BER versus number of generations.
5.2. Effect of Elitism Operator

Figure 5 compares the performance of DDGA with and without elitism for the BCH (63,51,5) code. These simulation results reveal a slight superiority of DDGA with elitism.

Figure 5: Effect of elitism operator on DDGA performances.
5.3. Effect of Code Length

Figure 6 compares the DDGA performance for four codes of rate 1/2. Except for the smallest code, all codes have almost equal performance; this is possibly due to the parameters of DDGA, in particular the number of individuals, which is fixed for all codes. Indeed, the BCH (31,16) code is small and gives poor performance compared to the other three codes (BCH and QR), which are comparable in length and performance.

Figure 6: Effect of code length on DDGA performances.
5.4. Effect of Crossover Rate on Performance

Figure 7 shows the effect of the crossover rate on the performance. From this figure, we note that increasing the crossover rate from 0.01 to 0.97 improves the performance of DDGA for the BCH (63,51,5) code by 1 dB at a BER of 10−4.

Figure 7: Effect of crossover rate on performances.
5.5. Effect of Mutation Rate on Performances

Figure 8 highlights the influence of the mutation rate on the performance of DDGA. Decreasing the mutation rate from 0.2 to 0.1 gains 1 dB at a BER of 10−4. If we decrease the mutation rate further, the additional gain becomes negligible, as shown in Figure 8. According to this result, 0.03 is the optimum value of the mutation rate, which confirms the results in the literature.

Figure 8: Effect of mutation rate on performances.
5.6. Comparison between Various Crossover Methods in DDGA

In Figure 9, we compare the results obtained using the proposed crossover (see Section 4), uniform crossover (UX), and two-point crossover (2-pt) in DDGA. Simulation results show that the proposed crossover is better than the UX and 2-pt ones: its gain over the two other crossovers is 1.5 dB at a BER of 10−5. Moreover, the three crossover methods have the same complexity of order $O(k)$.

Figure 9: Comparison between different crossover operators in DDGA.
5.7. Comparison between Different Selection Methods in DDGA

In Figure 10, we present a comparison between the results obtained using linear ranking selection, tournament selection, and random selection in DDGA. Simulation results show that linear ranking is better than tournament and random selection.

Figure 10: Comparison between different selection operators in DDGA.
5.8. Comparison of DDGA versus Other Decoding Algorithms

In this subsection, we compare the performance of DDGA with other decoders (Chase-2 decoding, OSD-1, OSD-3, and Maini decoding algorithms).

As shown in Figure 11, the performance of DDGA is better than that of the Chase-2 and OSD-1 algorithms. According to this figure, DDGA is comparable to the Maini algorithm.

Figure 11: Performances of Chase-2, Maini, OSD-1, and DDGA algorithms for BCH (31,16,7) code.

The performances of the DDGA, Chase-2, OSD-1, and Maini algorithms for the BCH (31,21,5) code are shown in Figure 12. From the latter, we note that our algorithm is better than the Chase-2 algorithm and comparable with the Maini and OSD-1 algorithms for this code.

Figure 12: Performances of Chase-2, OSD-1, Maini, and DDGA algorithms for BCH (31,21,5) code.

Figure 13 presents the performance of the different decoders for the BCH (31,26,3) code. The behaviors of the four decoders for this code are similar to those for the BCH (31,21,5) code.

Figure 13: Performances of Chase-2, OSD-1, Maini, and DDGA algorithms for BCH (31,26,3) code.

For the BCH (63,45,7) code, DDGA outperforms Chase-2 by 1 dB at a BER of 10−5. Nevertheless, the gain of DDGA over the others is negligible, as shown in Figure 14.

Figure 14: Performances of Chase-2, OSD-1, Maini, and DDGA algorithms for BCH (63,45,7) code.

Figure 15 compares the performances of DDGA and the other decoders for the BCH (63,51,5) code. We notice the superiority of DDGA over the Chase-2 algorithm and its similarity to the others.

Figure 15: Performances of Chase-2, OSD-1, Maini, and DDGA algorithms for BCH (63,51,5) code.

The performances of the DDGA, Chase-2, OSD-1, and Maini algorithms for the BCH (63,57,3) code are shown in Figure 16. From the latter, we observe that DDGA is better than Chase-2 and comparable to the Maini and OSD-1 algorithms.

Figure 16: Performances of Chase-2, Maini, DDGA, and OSD-1 algorithms for BCH (63,57,3) code.

For the BCH (127,64,21) code, DDGA outperforms OSD-1 by 1 dB at a BER of 10−5. However, the gain of DDGA over the others is negligible, as shown in Figure 17.

Figure 17: Performances of Chase-2, Maini, DDGA, and OSD-1 algorithms for BCH (127,64,21) code.

Figure 18 compares the performance of DDGA with the other decoders for the BCH (127,113,5) code. From this figure, we note that DDGA is better than the Chase-2 algorithm and comparable to the others.

Figure 18: Performances of Chase-2, OSD-1, Maini, and DDGA algorithms for BCH (127,113,5) code.

Figure 19 presents the performances of the decoders mentioned above for the BCH (127,120,3) code. The behaviors of the four decoders are the same as for the BCH (127,113,5) code.

Figure 19: Performances of Chase-2, OSD-1, Maini, and DDGA algorithms for BCH (127,120,3) code.

Figure 20 shows that DDGA performs the same as OSD-3 for the QR (71,36,11) code, while the complexity of DDGA is lower than that of OSD-3.

Figure 20: Performances of Maini, DDGA, OSD-3, and OSD-1 algorithms for QR (71,36,11) code.

The performances of the DDGA, OSD-1, and Maini algorithms for the QR (103,52,19) code are shown in Figure 21. From the latter, we notice that our algorithm is better than the OSD-1 algorithm and comparable with the Maini algorithm for this code.

Figure 21: Performances of OSD-1, Maini, and DDGA for QR (103,52,19) code.
5.9. DDGA Performances on Rayleigh Channel

DDGA has been described for the AWGN channel, but soft-decision decoding is particularly desirable over channels with more severe impairments. Therefore, a wireless channel corrupted by both AWGN and Rayleigh fading is used for the performance investigation of DDGA.

The fading channel is modelled as
$$r_i = a_i z_i + n_i, \qquad i = 1, \dots, N, \tag{10}$$
where $\{n_i\}_{1}^{N}$ are statistically independent Gaussian random variables with zero mean and variance $N_0/2$, and $\{a_i\}_{1}^{N}$ are statistically independent Rayleigh-distributed fading coefficients. We assume ideal channel state information (CSI), that is, $\{a_i\}_{1}^{N}$ is known at the receiver.

The objective function used in the fitness computation of DDGA is slightly changed for the channel model considered.

The resulting function is
$$f(z) = \sum_{j=1}^{N} a_j z_j r_j, \tag{11}$$
where $\{a_j\}_{1}^{N}$ is the CSI vector. The remaining steps of the algorithm are kept unchanged.
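A sketch of the fading model (10) and of the CSI-weighted objective (11); the BPSK mapping, the unit-power Rayleigh scale, and the seed are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_channel(bits, N0):
    """r_i = a_i z_i + n_i (Eq. (10)): BPSK symbols z_i = 1 - 2 c_i scaled by
    Rayleigh fading coefficients a_i, plus Gaussian noise of variance N0/2."""
    z = 1.0 - 2.0 * np.asarray(bits)                    # bit 0 -> +1, bit 1 -> -1
    a = rng.rayleigh(scale=np.sqrt(0.5), size=z.shape)  # unit-power fading
    n = rng.normal(0.0, np.sqrt(N0 / 2.0), size=z.shape)
    return a * z + n, a                                 # ideal CSI: a is known

def csi_fitness(candidate_bits, r, a):
    """Eq. (11): f(z) = sum_j a_j z_j r_j, the CSI-weighted correlation."""
    z = 1.0 - 2.0 * np.asarray(candidate_bits)
    return float(np.sum(a * z * r))

r_out, a_out = rayleigh_channel([0, 1, 0, 1], N0=0.5)
f_val = csi_fitness([0, 1], np.array([1.0, -1.0]), np.array([1.0, 1.0]))
```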

We provide a comparison of DDGA and Shakeel algorithm over Rayleigh channel (RC).

The performances of the DDGA and Shakeel algorithms for the BCH (31,16,7) code are shown in Figure 22. From the latter, we observe that the two algorithms are similar. Likewise, Figure 23 shows that the two decoders are similar for the BCH (31,21,5) code. Figure 24 compares the performances of the DDGA and Shakeel algorithms for the BCH (31,26,3) code; we note that our decoder is slightly better than Shakeel's. The performances of the DDGA and Shakeel algorithms for the BCH (127,113,5) code are shown in Figure 25; from the simulation result, we observe that the performances of the two decoders are similar. Figure 26 illustrates the performances of

Figure 22: Performances of the DDGA and Shakeel algorithms for the BCH (31,16,7) code over RC.
Figure 23: Performances of the DDGA and Shakeel algorithms for the BCH (31,21,5) code over RC.
Figure 24: Performances of the DDGA and Shakeel algorithms for the BCH (31,26,3) code over RC.
Figure 25: Performances of the DDGA and Shakeel algorithms for the BCH (127,113,5) code over RC.
Figure 26: Performances of the DDGA and Shakeel algorithms for the QR (71,36,11) code over RC.

the DDGA and Shakeel algorithms for the QR (71,36,11) code. According to this figure, we note that our decoder outperforms the Shakeel decoder by about 2.5 dB at a BER of 10−5. Figure 27 illustrates the performances of the DDGA and Shakeel algorithms for the BCH (63,51,5) code; we notice that our decoder is better than Shakeel's. Figure 28 presents the performances of the DDGA and Shakeel algorithms for the BCH (127,64,21) code; according to this figure, our decoder outperforms the Shakeel decoder by 2 dB at a BER of 10−5. Figure 29 shows the performances of the DDGA and Shakeel algorithms for the BCH (63,45,7) code; we observe that our decoder is slightly better than Shakeel's. Figure 30 presents the performances of the DDGA and Shakeel algorithms for the QR (103,52,19) code; according to this figure, our decoder outperforms the Shakeel decoder by 3 dB at a BER of 10−5.

Figure 27: Performances of the DDGA and Shakeel algorithms for the BCH (63,51,5) code over RC.
Figure 28: Performances of the DDGA and Shakeel algorithms for the BCH (127,64,21) code over RC.
Figure 29: Performances of the DDGA and Shakeel algorithms for the BCH (63,45,7) code over RC.
Figure 30: Performances of the DDGA and Shakeel algorithms for the QR (103,52,19) code over RC.

6. Conclusion

In this paper, we have proposed a new GA-based decoder for linear block codes. The simulations, applied to some BCH and QR codes, show that the proposed algorithm is an efficient soft-decision decoding algorithm. Emphasis was placed on the effect of the genetic algorithm parameters and of the code length on the decoder performance. A comparison was made in terms of the bit-error-rate performance and the complexity aspects of the decoder. The proposed algorithm has an advantage over the competing decoders developed by Maini and Shakeel.

The crossover operator developed in this paper can be generalized and successfully applied to other optimisation problems.

The obtained results will open new horizons for the artificial intelligence algorithms in the coding theory field.

References

  1. G. C. Clark and J. B. Cain, Error-Correction Coding for Digital Communications, Plenum, New York, NY, USA, 1981.
  2. Y. S. Han, C. R. P. Hartmann, and C.-C. Chen, “Efficient maximum likelihood soft-decision decoding of linear block codes using algorithm A*,” Tech. Rep. SU-CIS-91-42, School of Computer and Information Science, Syracuse University, Syracuse, NY, USA, 1991.
  3. H. Maini, K. Mehrotra, C. Mohan, and S. Ranka, “Soft decision decoding of linear block codes using genetic algorithms,” in Proceedings of the IEEE International Symposium on Information Theory, p. 397, Trondheim, Norway, 1994.
  4. J.-L. Wu, Y.-H. Tseng, and Y.-M. Huang, “Neural networks decoders for linear block codes,” International Journal of Computational Engineering Science, vol. 3, no. 3, pp. 235–255, 2002.
  5. F. A. Cardoso and D. S. Arantes, “Genetic decoding of linear block codes,” in Proceedings of the Congress on Evolutionary Computation, pp. 2302–2309, Washington, DC, USA, 1999.
  6. I. Shakeel, “GA-based soft-decision decoding of linear block codes,” in Proceedings of the International Conference on Telecommunications, pp. 13–17, Doha, Qatar, April 2010.
  7. J. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
  8. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
  9. M. P. C. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on ordered statistics,” IEEE Transactions on Information Theory, vol. 41, no. 5, pp. 1379–1396, 1995.
  10. M. P. C. Fossorier, S. Lin, and J. Snyders, “Reliability-based syndrome decoding of linear block codes,” IEEE Transactions on Information Theory, vol. 44, no. 1, pp. 388–398, 1998.
  11. D. Chase, “Class of algorithms for decoding block codes with channel measurement information,” IEEE Transactions on Information Theory, vol. 18, no. 1, pp. 170–182, 1972.