Abstract

In this study, we propose a “New Reliability Ratio Weighted Bit Flipping” (NRRWBF) algorithm for Low-Density Parity-Check (LDPC) codes. This algorithm improves the “Reliability Ratio Weighted Bit Flipping” (RRWBF) algorithm by modifying the reliability ratio. It surpasses the RRWBF in performance, reaching a 0.6 dB coding gain at a Bit Error Rate (BER) of $10^{-4}$ over the Additive White Gaussian Noise (AWGN) channel, and significantly reduces the decoding complexity. Furthermore, we improved the NRRWBF by using the sum of the syndromes as a criterion to avoid infinite decoding loops. This enables the decoder to attain a more efficient and effective decoding performance.

1. Introduction

Low-Density Parity-Check (LDPC) codes form a class of block codes defined by the null space of a sparse parity-check matrix $H$. They were first discovered by Gallager in the early 1960s [1]. In the late 1990s, LDPC codes were rediscovered by MacKay and Neal [2]. Since then, LDPC codes have been very successful because of their error correction performance, which is very close to the Shannon limit [3], and have been adopted in many wireless standards, including DVB-S2 [4], IEEE 802.15 [5], IEEE 802.11n [6], and IEEE 802.16e [7], as well as the Long-Term Evolution-Advanced (LTE-A) communication standard [8]. LDPC codes will play an important role in future wireless communication systems, as it has been confirmed that they will be adopted in the 5G system for the enhanced mobile broadband scenario [9].

LDPC codes can be decoded mainly by two families of algorithms. The first comprises the so-called soft decision algorithms, whose original representative is Belief Propagation (BP) [2], also called the Sum-Product (SP) algorithm in some publications [10, 11]. Several modified versions of the BP decoding algorithm have been introduced, for instance, the Min-Sum (MS) [12], Offset Min-Sum (OMS), Normalized Min-Sum [13, 14], and Relaxed Min-Sum (RMS) [15] algorithms.

The BP algorithm is known for its effective error correction performance, but it requires many multiplications to update the variable and check nodes, which makes an efficient hardware implementation difficult. Different variants have been proposed to reduce its complexity and improve its performance. Despite these attempts, this family of decoding algorithms remains much higher in complexity than the second family.

The second family includes the algorithms known as hard decision algorithms, which originated from the Bit Flipping (BF) algorithm [16]. This algorithm has low complexity but suffers considerable degradation in performance. To improve this, several derivatives have been proposed in the literature, beginning with Weighted Bit Flipping (WBF) [17], Modified Weighted Bit Flipping (MWBF) [18], and the Improvement on the Modified Weighted Bit Flipping (IMWBF) [19]. These decoding algorithms are based mainly on the weight of the checksums and differ in how these weights are used in the flipping function. The RRWBF [20] is another decoding algorithm introduced to improve the BF algorithm. It uses a reliability ratio different from the weights of the previous algorithms and demonstrates a significant improvement in performance. The Improved Low Complex Hybrid Weighted Bit Flipping (ILCHWBF) [21] algorithm is an important hard decision algorithm compared to the previously mentioned ones because its weight includes parameters from both the MS and BF variants along with an experimental factor determined by simulations.

Researchers in this field continued to propose decoding algorithms to improve the BF algorithm by minimizing the difference in performance compared to the SP one. Among these proposed algorithms in recent years, and which present interesting results, we can cite the following:

(1) Mixed Modified Weighted Bit Flipping (MMWBF) [22]. This algorithm offers a significant performance gain compared to the WBF algorithm and achieves a compromise of performance and complexity between the BF and SP algorithms. This algorithm will be further explained.

(2) Reliability Variance Weighted Bit Flipping (RVWBF) Algorithms for LDPC [23]. The authors of this work used the variance of the received values. This modification improves performance with a slight increase in computation in the inversion function and decreases the average number of iterations, maintaining high reliability while decoding at a higher speed.

(3) Modified Weighted Bit Flipping Algorithm Based on Intrinsic Information (MWBFII) [24]. This algorithm bases reliability on intrinsic information, in contrast to other existing algorithms, where the reliability used in the inversion function is based on extrinsic information. It uses only additions and subtractions to compute the reliability, in place of multiplications and divisions.

(4) Classification-Based Algorithm for Bit Flipping Decoding (CBFD) of GLDPC Codes over AWGN Channels [25]. Here, the authors took advantage of fast BF decoding with only the initial help of soft channel information by introducing a classification step at first. Second, they added an auxiliary bit in messages between the variable and control nodes as a tool to increase decision reliability and improve performance. The reliability is presented as a tool to reduce the effect of the trapping sets generated.

(5) Two-Bit Weighted Bit Flipping Decoding (TBWFD) Algorithm [26]. The authors produced reliability bits for the bit decision results and the syndrome values at the bit and control nodes, respectively. Reliability bits are exchanged between the bit and control nodes as the decoding progresses. The message update in the control nodes is carefully designed with simple bitwise operations.

(6) Cyclic Switching Weighted Bit Flipping Decoding (CSWBFD) for Low-Density Parity-Check Codes [27]. For this algorithm, the authors carefully chose two criteria for selecting the bit to flip, switching cyclically from one criterion to the other according to certain rules during decoding. This proposition can effectively break the infinite decoding loop, which often appears in Weighted Bit Flipping algorithms and greatly degrades the decoding performance.

(7) Hard Decision Bit Flipping Decoder Based on Adaptive Bit-Local Threshold (HDBFABLT) for LDPC Codes [28]. This is an adaptive local-threshold bit flipping algorithm that improves the decoding performance of LDPC codes. The authors used a threshold for each bit, which can be adapted to maximize the number of errors corrected during the iterations.

(8) High-Throughput Bit Flipping Decoder for Structured (HTBFS) LDPC Codes [29]. A high-speed parallel Bit Flipping (BF) decoder using the BF multiple threshold algorithm is proposed in this work. The decoder is endowed with functionalities such as low interconnection complexity, simpler calculations, and high speed.

(9) Multistage Bit Flipping Decoding (MSBFD) Algorithms for LDPC Codes [30]. Two algorithms, made up of soft decision and hard decision BF decoding parts, are presented. This approach is based on transitioning from one algorithm to the other and on choosing which algorithm plays the primary role and which the secondary one. This is done by adjusting certain parameters to obtain optimal performance.

In this context, the current paper proposes an improvement of the RRWBF algorithm called “New Reliability Ratio Weighted Bit Flipping” (NRRWBF). In this algorithm, we propose a change in the inversion function, specifically in the syndrome’s weight (reliability ratio), to improve performance and further reduce the performance gap between the SP algorithms and the BF ones. In a second step, we improved our algorithm by using the criterion of the syndrome’s sum to avoid falling into an infinite decoding loop. We called this new proposition “Improvement of the New Reliability Ratio Weighted Bit Flipping” (INRRWBF).

After this introductory section, the rest of this document is organized as follows. Section 2 gives a brief overview of the various BF decoding algorithms, their decoding steps, and their inversion functions. A description of the NRRWBF algorithm follows in Section 3. Section 4 deals with the complexity analysis of the NRRWBF algorithm, and the INRRWBF algorithm is introduced in Section 5. In Section 6, the simulation results obtained with MATLAB are presented and discussed. Finally, a conclusion is given in Section 7.

2. Brief Review of BF and Its Various Decoding Algorithms

LDPC codes are defined by a sparse matrix (the number of zeros is much greater than the number of ones) called the parity-check matrix $H$. This matrix has $N$ columns, which correspond to the length of the code, and $M$ rows, which represent the parity-check equations. One can also define an LDPC code by a bipartite graph called a Tanner graph. This graph contains two types of nodes: the variable nodes, corresponding to the $N$ columns, and the control nodes, corresponding to the $M$ rows of the matrix. A variable node $n$ is connected to a control node $m$ if $H_{mn} = 1$. We define the column weight or column degree $w_c$ as the number of ones per column and the row weight or row degree $w_r$ as the number of ones per row. If $w_r$ is the same for all the rows and $w_c$ is constant for all the columns, the LDPC code is regular; otherwise, it is irregular.

It is assumed that the binary codeword message is $c = (c_1, \ldots, c_N)$; this message becomes $x = (x_1, \ldots, x_N)$ using Binary Phase-Shift Keying (BPSK) modulation with $x_n = 1 - 2c_n$, $x_n \in \{+1, -1\}$. Then, the sequence $x$ is transmitted over an Additive White Gaussian Noise (AWGN) channel. The received symbol is $y_n = x_n + e_n$, where $e_n$ is the Additive White Gaussian Noise with mean zero and variance $\sigma^2 = N_0/2$.
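For illustration, the modulation and channel model above can be sketched in Python as follows. This is a minimal sketch, not the MATLAB simulation code used in this study; the function name and the convention of deriving the noise variance from $E_b/N_0$ and the code rate are our assumptions.

```python
import numpy as np

def bpsk_awgn(codeword, snr_db, rate, rng=np.random.default_rng(0)):
    """Map a binary codeword to BPSK (0 -> +1, 1 -> -1) and add AWGN.

    Illustrative sketch: the noise variance sigma^2 = N0/2 is derived here
    from Eb/N0 (in dB) and the code rate, which is one common convention.
    """
    x = 1.0 - 2.0 * np.asarray(codeword, dtype=float)  # BPSK mapping x_n = 1 - 2*c_n
    ebn0 = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebn0))         # AWGN standard deviation
    return x + rng.normal(0.0, sigma, size=x.shape)    # received symbols y = x + e

y = bpsk_awgn([0, 1, 1, 0], snr_db=4.0, rate=0.5)
```

The hard decision of the next section then simply tests the sign of each received symbol.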

The following sets are defined: (1) $N(m)$ represents the set of variable nodes participating in the control node $m$; (2) $M(n)$ represents the set of control nodes in which the variable node $n$ participates.

An example of a parity-check matrix $H$ is shown below. It is a very small matrix compared to those used with our algorithm.

This regular $5 \times 10$ matrix can be transformed into a Tanner graph with 10 variable nodes and 5 control nodes, as shown in Figure 1; a variable node is connected to a control node if and only if the corresponding element of the matrix is 1.
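To make the node sets $N(m)$ and $M(n)$ concrete, the following Python sketch builds them from a small hypothetical parity-check matrix. The matrix here is ours, chosen only for illustration; it is not the matrix of Figure 1.

```python
import numpy as np

# A small hypothetical parity-check matrix (illustration only):
# rows are control (check) nodes, columns are variable nodes.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
])

# N(m): variable nodes participating in control node m (ones in row m).
N_m = {m: np.flatnonzero(H[m]).tolist() for m in range(H.shape[0])}
# M(n): control nodes in which variable node n participates (ones in column n).
M_n = {n: np.flatnonzero(H[:, n]).tolist() for n in range(H.shape[1])}

print(N_m[0])  # variable nodes of check 0
print(M_n[1])  # checks containing variable 1
```

For this matrix, $N(0) = \{0, 1, 3\}$ and $M(1) = \{0, 1\}$, matching the ones in row 0 and column 1, respectively.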

2.1. BF Decoding Algorithms

In this section, we provide the steps of the BF decoding algorithm and some of its derivatives in general. After receiving the information from the channel, the initialization step is performed to find the estimated values that will be used in the decoding steps.

We initialize the estimation variable $z = (z_1, \ldots, z_N)$ with the hard decision on each received bit: $z_n = 1$ if $y_n < 0$ and $z_n = 0$ otherwise.

The iterative process (the general decoding steps) includes the following:
(1) We calculate the syndrome bits $s = (s_1, \ldots, s_M) = z \cdot H^T$, where the matrix product is modulo 2. If all parity control equations are satisfied (i.e., $s = 0$), the decoding succeeds and the iterative process is complete. Otherwise, it proceeds to the second step.
(2) We calculate the inversion function (the error term) for each variable node $n$ as follows:
$E_n = \sum_{m \in M(n)} (2s_m - 1)\, w_m - \alpha |y_n|,$
where $s_m$ is the syndrome bit, $w_m$ and $\alpha$ are parameters specific to each algorithm, as we will see later, and $|y_n|$ represents the absolute value of the channel output received by the variable node $n$.
(3) We determine which variable node $n$ has the highest value of $E_n$. The estimated value of this node must be flipped.
(4) We repeat steps 1-3 until all parity control equations are satisfied or a predefined number of iterations is reached.
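As one concrete instance of steps 1-4 above, the following Python sketch implements the loop with the classic WBF inversion function ($w_m = |y_m^{\min}|$, $\alpha = 0$). The function name and structure are ours, for illustration only.

```python
import numpy as np

def wbf_decode(H, y, max_iter=100):
    """Sketch of the generic bit-flipping loop with the WBF inversion function:
    E_n = sum over m in M(n) of (2*s_m - 1) * |y_m_min|,
    where |y_m_min| is the smallest |y_n'| among the variable nodes of check m.
    """
    z = (y < 0).astype(int)                                # hard decision
    w = np.array([np.abs(y[H[m] == 1]).min()               # per-check weight |y_m_min|
                  for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = H.dot(z) % 2                                   # syndrome bits
        if not s.any():                                    # all checks satisfied
            break
        E = (H.T * ((2 * s - 1) * w)).sum(axis=1)          # inversion function
        z[np.argmax(E)] ^= 1                               # flip least reliable bit
    return z
```

For example, with $H = [[1,1,0],[0,1,1]]$ (codewords 000 and 111) and received vector $(1.0, -0.2, 1.0)$, the single low-confidence error in position 1 is flipped back and decoding stops.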

2.2. The Inversion Function of BF Decoding Algorithms

The inversion function (IF) of a decoding algorithm differs from one algorithm to another depending on the relationships or values given to the parameters $w_m$ and $\alpha$.

For the original BF algorithm, $w_m = 1$ and $\alpha = 0$.

The IF becomes
$E_n = \sum_{m \in M(n)} (2s_m - 1).$

For the WBF algorithm, we find $w_m = |y_m^{\min}|$ and $\alpha = 0$, with $|y_m^{\min}| = \min_{n' \in N(m)} |y_{n'}|$, which represents the least reliable variable node associated with each control node $m$.

The IF becomes
$E_n = \sum_{m \in M(n)} (2s_m - 1)\, |y_m^{\min}|.$

For the MWBF algorithm, we find $w_m = |y_m^{\min}|$, and $\alpha$ is an experimentally determined coefficient.

The IF becomes
$E_n = \sum_{m \in M(n)} (2s_m - 1)\, |y_m^{\min}| - \alpha |y_n|.$

For the IMWBF algorithm, we find $w_m = |y_{m,n}^{\min}|$, where $|y_{m,n}^{\min}| = \min_{n' \in N(m) \setminus n} |y_{n'}|$ represents the minimum over each control node $m$ linked to the bit $n$, excluding the information coming from the bit itself (variable node $n$).

The IF becomes
$E_n = \sum_{m \in M(n)} (2s_m - 1)\, |y_{m,n}^{\min}| - \alpha |y_n|.$

For the RRWBF algorithm, we find $w_m = T_m / |y_n|$ with $T_m = \sum_{n' \in N(m)} |y_{n'}|$, and $\alpha = 0$.

The IF becomes
$E_n = \frac{1}{|y_n|} \sum_{m \in M(n)} (2s_m - 1)\, T_m.$

For the ILCHWBF algorithm, we find and .

The IF includes an attenuation factor used to improve the accuracy of the weighted extrinsic information.

For the MMWBF [22] algorithm, two WBF variants, RRWBF and IMWBF, are combined, acting respectively as the primary decoding algorithm and as an auxiliary algorithm.

The inversion function is calculated for each variable node as follows:

In the main algorithm,

In the auxiliary algorithm,

From the two inversion functions, we determine the positions of the variable nodes most likely to be erroneous.

If the bit position found by the auxiliary algorithm is not the one switched by the main algorithm, we switch both bits; otherwise, we switch only the bit found by the main algorithm.

3. A New Reliability Ratio Weighted Bit Flipping Decoding Algorithm

Hard decision decoding algorithms are based on the calculation of the inversion function. This function determines the least reliable bit to be flipped at each iteration. Because of this, the parameters used in this function are very important; they must be chosen intelligently.

After exploring several algorithms known in the literature, such as MS [12], WBF [17], and MWBF [18], we noticed that they use the minimum absolute received value of each parity-check node, $|y_m^{\min}|$, in the messages sent from control nodes to variable nodes for the MS and its variants, and in the inversion functions for the WBF and its variants. The importance of $|y_m^{\min}|$ is that it identifies the least reliable bit of every parity-check node. Thus, there is a high probability that an inversion function calculated using $|y_m^{\min}|$ leads to the right bit to flip at each iteration.

Hence, we chose to improve the RRWBF algorithm by integrating $|y_m^{\min}|$ into its inversion function.

The RRWBF algorithm, which performs well against some variants of the WBF, uses the inversion function
$E_n = \frac{1}{|y_n|} \sum_{m \in M(n)} (2s_m - 1)\, T_m, \quad T_m = \sum_{n' \in N(m)} |y_{n'}|.$

So, we replace the weight $T_m = \sum_{n' \in N(m)} |y_{n'}|$ in the inversion function of the RRWBF by $w_r\, |y_m^{\min}|$, where $|y_m^{\min}| = \min_{n' \in N(m)} |y_{n'}|$.

In this way, we simplified the sum of messages by using the minimum scaled by the row weight $w_r$, the number of ones per row of the check matrix $H$.

The inversion function for the NRRWBF algorithm becomes
$E_n = \frac{w_r}{|y_n|} \sum_{m \in M(n)} (2s_m - 1)\, |y_m^{\min}|.$

The NRRWBF algorithm is summarized in Algorithm 1.

Initialization. Set the iteration counter $k = 0$, and initialize the estimation variable $z$ with the hard decision on the received sequence.
Step 1. Calculate the syndrome bits $s = z \cdot H^T \bmod 2$. If all the syndrome bits are zero, then the decoding process is stopped and $z$ is output; otherwise, go to step 2.
Step 2. Calculate the inversion function for each variable node $n$ by
    $E_n = \frac{w_r}{|y_n|} \sum_{m \in M(n)} (2s_m - 1)\, |y_m^{\min}|$
Step 3. Determine which variable node $n$ has the highest value of $E_n$. The estimated value of this node must be flipped.
Step 4. If $k = k_{\max}$ (a predefined iteration limit is reached), then stop and output the estimated codeword; else, set $k = k + 1$ and go to step 1.
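Step 2 of the algorithm can be sketched in Python as follows, under our reading of the NRRWBF construction: the per-check weight is $w_r$ times the minimum $|y|$ over the check, and each term is divided by $|y_n|$. The exact normalization is an assumption for illustration, not the authors' definitive formula.

```python
import numpy as np

def nrrwbf_inversion(H, y, s):
    """Sketch of the NRRWBF inversion function (our reading, for illustration):
    E_n = (w_r / |y_n|) * sum over m in M(n) of (2*s_m - 1) * |y_m_min|,
    with |y_m_min| = min over N(m) of |y_n'| and w_r the row weight."""
    wr = int(H[0].sum())                                    # row weight (regular code assumed)
    ymin = np.array([np.abs(y[H[m] == 1]).min()             # |y_m_min| for each check m
                     for m in range(H.shape[0])])
    return wr * (H.T * ((2 * s - 1) * ymin)).sum(axis=1) / np.abs(y)
```

Note that the $|y_m^{\min}|$ values depend only on the received sequence, so they can be computed once at initialization and reused in every iteration.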

4. Complexity Analysis

In this section, we analyze the complexity of calculating the inversion function and the syndrome to determine which bit will be flipped at each iteration. We consider an LDPC code with column weight $w_c$ and row weight $w_r$. For the NRRWBF algorithm, the calculation of the syndrome requires $M(w_r - 1)$ additions. The inversion function needs $N(w_c - 1)$ additions, $M$ multiplications, $N$ divisions, and $M(w_r - 1)$ comparisons. So the total number of additions per iteration will be $M(w_r - 1) + N(w_c - 1)$. The comparisons are done at the initialization of the algorithm, since the $|y_m^{\min}|$ depend only on the received values, unlike the other operations, which are repeated at each iteration.

The operations for an iteration in WBF, RRWBF, and NRRWBF algorithms are grouped in Table 1.

The operations in the WBF, RRWBF, and NRRWBF algorithms for code 1 ( and ) (, ) are grouped in Table 2.

To see the difference between the RRWBF and NRRWBF, we calculate, for example, the error term value $E_1$ for variable node 1, using the matrix $H$ given previously.

For RRWBF, we find

For NRRWBF, we find

This is due to the use of the minimum value of messages related to the control node in relation (14), instead of the sum of messages related to the control node in relation (13); this considerably reduces calculations, thus reducing the complexity of the decoder.

5. The Improvement of the New Reliability Ratio Weighted Bit Flipping Decoding Algorithm

During the LDPC decoding process, the decoder may fall into an infinite decoding loop before reaching the predefined maximum number of iterations. In other words, the decoder may sometimes reverse an already-corrected bit, and this can be repeated several times during the decoding steps. The decoding loop then causes a decoding failure when the maximum number of iterations is reached. To avoid such an infinite loop, we adopt the following method:
(1) The method is based on the calculation of the sum of the syndromes. As the syndrome is a vector of 0s and 1s, if it contains only 0s, the sum of the syndromes is 0 and the estimated codeword is valid; otherwise, it is invalid.
(2) We propose to calculate the sum of the syndromes at each iteration $k$, after the decoding steps, and compare it with the sum of the syndromes at iteration $k - 1$. If the sum of the syndromes decreases, then we exclude the flipped bit (now assumed correct) at the current iteration and place it in a predefined exclusion set $B$. Otherwise, we flip this bit a second time, and the decoding continues. The sum of the syndromes can therefore be used to detect the decoding loop, where an increase in the sum of the syndromes is treated as a decoding loop.

The INRRWBF algorithm is summarized in Algorithm 2.

Initialization. Set the iteration counter $k = 0$ and the exclusion set $B = \emptyset$, and initialize the estimation variable $z$ with the hard decision on the received sequence.
Step 1. Calculate the syndrome bits $s = z \cdot H^T \bmod 2$. If the sum of the syndrome bits at iteration $k$ is zero or $k = k_{\max}$ (a predefined iteration limit is reached), stop and output $z$; else, set $k = k + 1$ and go to step 2.
Step 2. Calculate the inversion function for each variable node $n \notin B$ by
    $E_n = \frac{w_r}{|y_n|} \sum_{m \in M(n)} (2s_m - 1)\, |y_m^{\min}|$
Step 3. Determine which variable node $n$ has the highest value of $E_n$. The estimated value of this node must be flipped.
Step 4. Calculate the syndrome again; if the sum of the syndromes has decreased, record the position $p$ of the flipped bit and set $B = B \cup \{p\}$; otherwise, the bit must be flipped a second time. Go to step 1.
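The loop-avoidance rule above can be sketched in Python as follows. This is our reading, for illustration: a flip that lowers the syndrome sum is locked in via the exclusion set $B$, while otherwise the bit stays eligible for re-flipping. Any inversion function can be plugged in, since the rule itself, not a particular flipping metric, is what is illustrated.

```python
import numpy as np

def inrrwbf_decode(H, y, inversion, max_iter=100):
    """Sketch of the syndrome-sum loop-avoidance rule (our interpretation).

    `inversion` is any inversion function E(H, y, s). A bit whose flip
    decreases the syndrome sum is assumed corrected and placed in the
    exclusion set B so it is never flipped again.
    """
    z = (y < 0).astype(int)                      # hard decision
    B = set()                                    # exclusion set
    s = H.dot(z) % 2
    for _ in range(max_iter):
        if not s.any():                          # valid codeword found
            break
        E = np.asarray(inversion(H, y, s), dtype=float)
        E[list(B)] = -np.inf                     # excluded bits are never re-flipped
        p = int(np.argmax(E))
        z[p] ^= 1                                # flip the least reliable bit
        s_new = H.dot(z) % 2
        if s_new.sum() < s.sum():                # syndrome sum decreased
            B.add(p)                             # bit assumed correct: exclude it
        s = s_new
    return z
```

A simple plug-in inversion function (e.g., the plain BF count of unsatisfied checks divided by $|y_n|$) suffices to exercise the rule on a toy code.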

6. Simulation Results and Discussion

In this section, we present the simulation results obtained with MATLAB, using BPSK modulation over the AWGN channel.

The parameters used in the simulations for the three codes are presented in Table 3.

To evaluate our NRRWBF algorithm, we need to compare it with algorithms from the two LDPC decoding families cited in the introduction. For the soft decision family, the Min-Sum (MS) algorithm was chosen as a good variant of the original Sum-Product (SP) algorithm. For the hard decision family, we chose the RRWBF and WBF algorithms.

The MS is characterized by a normalization factor that is experimentally determined and varies from one code to another.

Then, before running the simulation, we must find the normalization factor for the MS algorithm for code 1, code 2, and code 3.

Figures 2-4 illustrate the obtained results for the three codes.

The optimum values of the normalization factor, taken over the different signal-to-noise ratios, are 0.8, 0.75, and 0.74 for code 1, code 2, and code 3, respectively.

The simulation results are illustrated in Figures 5 and 6, comparing the NRRWBF algorithm with WBF, RRWBF, and MS.

The simulation results are illustrated in Figure 7, comparing the NRRWBF algorithm with WBF, RRWBF, ILCHWBF, and MS.

Figures 5-7 show the simulation results using the low-density matrices of code 1, code 2, and code 3, respectively. The maximum number of iterations was fixed for each SNR value (in dB) for NRRWBF, ILCHWBF, RRWBF, and WBF; a maximum number of iterations was likewise set for MS.

The simulation results show that the NRRWBF algorithm gives the same performance as the RRWBF algorithm at low SNR because, in this interval, where the noise is high, the decoders do not distinguish between wrong and correct bits during the flipping step. In this same interval, we observe that our algorithm surpasses the WBF algorithm by 0.7 dB and falls 0.8 dB short of MS.

For high signal-to-noise ratios, it is noted from Figure 5, at a BER of $10^{-4}$, that the NRRWBF algorithm surpasses the RRWBF algorithm by 0.3 dB and remains 1.1 dB away from the MS algorithm.

From Figure 6, comparing NRRWBF with RRWBF and MS at a BER of $10^{-4}$, it is found that the NRRWBF algorithm surpasses the RRWBF algorithm by 0.5 dB and remains 0.8 dB away from the MS algorithm.

From Figure 7, comparing NRRWBF with RRWBF, ILCHWBF, and MS at a BER of $10^{-4}$, it is found that the NRRWBF algorithm surpasses the RRWBF algorithm by 0.6 dB and the ILCHWBF algorithm by 0.3 dB, but remains 0.7 dB behind the MS algorithm.

All simulations confirm the good performance of our algorithm compared to the WBF algorithm.

The results indicate a critical performance improvement when using a larger row weight, and they also show that performance improves when the parity-check matrix is large.

These results are due to the use, in the inversion function, of the minimum of the absolute values received by the variable nodes of each control node. This makes it possible to determine the least reliable bit, which increases the probability of identifying exactly the erroneous bit at each iteration so that it can be corrected. This method is more effective when the codeword length and the row weight are large.

Figure 8 shows the BER of the different algorithms as a function of the maximum number of iterations (MNI) at a fixed SNR. The result shows that our algorithm is almost ten times faster than the others, reaching the same BER after 8 iterations instead of 75 for ILCHWBF and 100 for RRWBF.

In addition, for the same number of iterations (100 in this case), the BER in our case is better.

In Figure 9, we show the simulation results using the low-density matrix of the second code (code 2). The maximum number of iterations was fixed for each SNR value (in dB) for INRRWBF, NRRWBF, RRWBF, and WBF; a maximum number of iterations was likewise set for MS.

We observe a 0.25 dB performance gain at a BER of $10^{-4}$ for the INRRWBF compared to the NRRWBF. This shows, first, the importance of using the sum of the syndromes as a decoding loop detector and, second, that the performance gain over the NRRWBF algorithm comes mainly from using the sum of the syndromes as a criterion to avoid the infinite loop. In this way, we take full advantage of the efficiency of our first algorithm to correct the errors found during the decoding step.

7. Conclusions

In this article, we first proposed the NRRWBF algorithm to improve the performance of LDPC codes. The simulation results show that this algorithm outperforms the RRWBF algorithm at high SNR for different row weights, with a significant reduction in complexity. In a second step, we presented an improvement of the proposed algorithm called the INRRWBF algorithm. This proposition increases performance thanks to the infinite-loop avoidance criterion, which greatly enhances the efficiency of the decoder.

Data Availability

The data used in our work are declared in the manuscript, and the matrices used in the simulations to evaluate our algorithm against those found in the literature are available from the authors upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.