Abstract

We propose a generalized belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of polar codes. Through an analysis of the existing BP decoding algorithm, we first introduce a probability modifying factor to each node of the BP decoder so as to enhance the error-correcting capacity of the decoding. We then generalize the BP decoding algorithm based on these modifying factors and derive the probability update equations for the proposed decoding. Based on the new probability update equations, we show the intrinsic relationship among the existing decoding algorithms. Finally, in order to achieve the best performance, we formulate an optimization problem to find the optimal probability modifying factors for the proposed decoding algorithm, and we introduce a method based on a modified PSO algorithm to solve it. Numerical results show that the proposed generalized BP decoding algorithm achieves better performance than the existing BP decoding, which suggests the effectiveness of the proposed decoding algorithm.

1. Introduction

Owing to their ability to achieve the Shannon capacity and their low encoding and decoding complexity, polar codes have recently received much attention in the field of error-correcting coding [1-17]. However, compared with some existing coding schemes, such as low-density parity-check (LDPC) codes and turbo codes, the performance of polar codes in the finite-length regime is disappointing [2, 3]. Motivated by this observation, researchers have proposed many decoding algorithms to improve the performance of polar codes [4-17].

Based on the successive-cancellation (SC) decoding algorithm proposed by Arıkan [1], the authors of [4, 5] introduced a list successive-cancellation (SCL) decoding algorithm that keeps a list of SC decoding paths, and their results showed that the performance of SCL is very close to that of maximum-likelihood (ML) decoding. Then, to decrease the time complexity of SCL, another SC-derived decoding algorithm called stack successive-cancellation (SCS) was proposed in [6]. Furthermore, it was shown in [7] that, with cyclic redundancy check (CRC) aiding, SCL even outperforms some turbo codes. However, due to the serial processing nature of SC, all the algorithms in [4-7] suffer from low decoding throughput and high latency. Therefore, some improved versions of SC were further proposed with the explicit aim of increasing throughput and reducing latency without sacrificing error-rate performance, such as simplified successive-cancellation (SSC) [8], maximum-likelihood SSC (ML-SSC) [9], and repetition single parity check ML-SSC (RSM-SSC) [10, 11]. Besides these SC-based algorithms, researchers have also investigated algorithms with more parallelism, a typical representative of which is the belief propagation (BP) decoding algorithm [12]. With the factor graph representation of polar codes [13], the authors of [14, 15] showed that BP polar decoding has particular advantages with respect to decoding throughput, while its performance is better than that of SC and some improved SC decoding algorithms. Moreover, with the minimal stopping sets optimized, the results of [16, 17] showed that the error floor performance of polar codes is superior to that of LDPC codes.

Indeed, all the decoding algorithms in [4-17] can improve the performance of polar codes to a certain degree. However, for a capacity-achieving coding scheme, the results of those algorithms are still limited. Hence, in this paper, we propose a generalized belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of finite-length polar codes. Based on an analysis of the existing BP decoding algorithm, we first show that the error-correcting capacity of BP decoding is critical to its performance. Motivated by that observation, we introduce a probability modifying factor to each node of the BP decoder to enhance the reliability of the updated probabilities. Then, we generalize the BP decoding algorithm on the basis of these modifying factors and further derive the probability update equations for the proposed generalized BP decoding. Finally, in order to achieve the best performance, we formulate an optimization problem to find the optimal probability modifying factors for the proposed generalized BP decoding, and we introduce a method based on a modified PSO algorithm to solve this optimization problem. The main contributions of this paper can be summarized as follows.

(i) A generalized belief propagation (BP) decoding algorithm based on probability modifying factors is introduced. Furthermore, to improve the performance of the proposed decoding, a BP decoding optimization problem is formulated.

(ii) A method based on a modified PSO algorithm is introduced to solve that optimization problem.

The finding of this paper suggests that, with the probability modifying factors, the error-correcting capacity of BP decoding can be enhanced and the performance of polar codes can be improved correspondingly, which is confirmed by the simulation results.

The remainder of this paper is organized as follows. In Section 2, we explain some notation and introduce the background of polar codes. In Section 3, the generalized belief propagation (BP) decoding algorithm based on the probability modifying factors is described in detail. In Section 4, we formulate and solve the optimization problem of finding the probability modifying factors. Section 5 provides simulation results on the complexity and bit error performance of the proposed decoding. Finally, we conclude the paper in Section 6.

2. Preliminary

2.1. Notations

In this paper, blackboard bold letters, such as $\mathbb{X}$, denote sets, and $|\mathbb{X}|$ denotes the number of elements in $\mathbb{X}$. The notation $u_1^N$ denotes an $N$-dimensional vector $(u_1, u_2, \ldots, u_N)$, and $u_i^j$ indicates the subvector $(u_i, u_{i+1}, \ldots, u_j)$ of $u_1^N$, $1 \le i \le j \le N$. When $i > j$, $u_i^j$ is an empty vector. Further, given a vector set $\mathbb{U}$, $\mathbf{u}_i$ is the $i$th vector in $\mathbb{U}$.

The matrices in this paper are denoted by bold letters. The subscript of a matrix indicates its size; for example, $\mathbf{A}_{N \times M}$ represents an $N \times M$ matrix $\mathbf{A}$. In particular, square matrices are written as $\mathbf{A}_N$, whose size is $N \times N$, and $\mathbf{A}_N^{-1}$ is the inverse of $\mathbf{A}_N$. Furthermore, the Kronecker product of two matrices $\mathbf{A}$ and $\mathbf{B}$ is written as $\mathbf{A} \otimes \mathbf{B}$, and the $n$th Kronecker power of $\mathbf{A}$ is $\mathbf{A}^{\otimes n}$.

During the procedure of the encoding and decoding, we denote the intermediate node by or , where is the code length. Besides, we also indicate the probability messages of the intermediate node as , where the probability of being equal to 0 or 1 is or . Similarly, the probability messages of are and .

Throughout this paper, “$\oplus$” denotes the modulo-two sum, and “” means “”. The operators “” and “” are defined as and , respectively, .

2.2. Background of Polar Codes

A polar coding scheme can be uniquely defined by three parameters: block length $N$, code rate $R$, and an information set $\mathbb{A} \subseteq \{1, 2, \ldots, N\}$ with $|\mathbb{A}| = NR$. With these three parameters, a source binary vector $u_1^N$ consisting of $NR$ information bits and $N(1-R)$ frozen bits is mapped to a codeword $x_1^N$ by a linear transform $\mathbf{G}_N$, that is, $x_1^N = u_1^N \mathbf{G}_N$, where $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}^{\otimes n}$, $\mathbf{B}_N$ is the bit-reversal permutation matrix defined in [1], $\mathbf{F} = \left[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right]$, and $n = \log_2 N$.
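For concreteness, the mapping $x_1^N = u_1^N \mathbf{B}_N \mathbf{F}^{\otimes n}$ can be sketched in a few lines. The snippet below is a minimal illustration assuming Arıkan's construction in [1]; the function names are ours.

```python
import numpy as np

def bit_reversal_permutation(n):
    """Indices 0..2**n - 1 with their n-bit binary representations reversed (B_N)."""
    return np.array([int(format(i, f'0{n}b')[::-1], 2) for i in range(1 << n)])

def polar_encode(u):
    """Map a binary source vector u of length N = 2**n to x = u B_N F^{(x)n} (mod 2)."""
    N = len(u)
    n = int(np.log2(N))
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)       # the kernel F
    F_n = np.ones((1, 1), dtype=np.uint8)
    for _ in range(n):
        F_n = np.kron(F_n, F)                            # build F^{(x)n}
    u = np.asarray(u, dtype=np.uint8)
    return (u[bit_reversal_permutation(n)] @ F_n) % 2    # apply u B_N, then F^{(x)n}
```

The same mapping is produced stage by stage by the network of processing units described next.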

In practice, the construction procedure of a polar code can be divided into $n = \log_2 N$ stages, as shown in Figure 1(a), where the circle nodes in the leftmost column are the input nodes of the encoder, whose values are equal to the binary source vector, that is, $u_1^N$, and the circle nodes in the rightmost column are the output nodes of the encoder, $x_1^N$. It is also noticed that each encoding stage has $N/2$ so-called processing units (PUs) with generator matrix $\mathbf{F}$, and each PU has two input and two output variable nodes, as shown in Figure 1(b). During the process of polar encoding, based on $\mathbf{F}$, we have relation (1), where the left-hand variables are the input nodes of the $j$th PU in the $i$th stage and the right-hand variables are its output nodes. After the polar encoding, all the bits in the codeword $x_1^N$ are passed to the $W$-channels, which consist of $N$ independent copies of a channel $W$ with transition probability $W(y_i \mid x_i)$, where $y_i$ is the $i$th element of the received vector $y_1^N$. Then, the decoder at the receiver outputs the estimated codeword $\hat{x}_1^N$ and the estimated source binary vector $\hat{u}_1^N$ using different decoding algorithms [4-17].
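The stage-by-stage view can be sketched in the same way. The snippet below is an illustrative implementation that applies relation (1), that is, each PU maps its inputs $(a, b)$ to $(a \oplus b, b)$, one stage at a time; the stage ordering and node naming are our assumptions, since the paper's Figure 1 is not reproduced here. Combined with the bit-reversal permutation from the previous sketch, it reproduces $x_1^N = u_1^N \mathbf{B}_N \mathbf{F}^{\otimes n}$.

```python
def processing_unit(a, b):
    """One PU with generator matrix F: relation (1), inputs (a, b) -> outputs (a XOR b, b)."""
    return a ^ b, b

def encode_by_stages(u):
    """Run the n = log2(N) encoding stages of N/2 PUs each; equivalent to u F^{(x)n} mod 2."""
    x = list(u)
    N = len(x)
    step = 1
    while step < N:                                   # one pass per encoding stage
        for start in range(0, N, 2 * step):           # N/2 PUs per stage in total
            for k in range(start, start + step):
                x[k], x[k + step] = processing_unit(x[k], x[k + step])
        step *= 2
    return x
```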

3. BP Decoding for Polar Codes

In this section, we will generalize the BP decoding algorithm for polar codes with the analysis of the existing polar BP decoding.

3.1. Existing BP Decoding

The basic BP processing unit (BP-PU) of polar codes is shown in Figure 2, where and are the right-to-left probability messages passed to and , and is the iteration number; and are the left-to-right probability messages passed from those nodes; and are the right-to-left probability messages passed from and ; and and are the left-to-right probability messages passed to the nodes. According to the law of total probability, we have (2). Furthermore, based on the transition probability of the channel, that is, $W(y_i \mid x_i)$, and the received vector $y_1^N$, we have (3), where and are the probability distribution functions of the coded bit $x_i$ and the received signal sample $y_i$. In the BP decoding algorithm [13-17], the message update equations are given by (4).
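Since the displayed equations (2)-(4) are not reproduced in this text, the snippet below sketches the standard BP-PU update from the polar BP literature in its log-likelihood-ratio (LLR) form, which is equivalent to the probability-domain form used in (4). The node and message names (u1, u2 on the left and v1, v2 on the right, with v1 = u1 XOR u2 and v2 = u2) are our assumptions about Figure 2.

```python
import math

def f(a, b):
    """Check-node ("boxplus") combination of two LLRs; the min-sum approximation
    sign(a) * sign(b) * min(|a|, |b|) is also common."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def bp_pu_update(L_v1, L_v2, R_u1, R_u2):
    """One iteration of a BP processing unit (Figure 2).
    L_* are right-to-left messages arriving from v1, v2;
    R_* are left-to-right messages arriving from u1, u2."""
    L_u1 = f(L_v1, L_v2 + R_u2)     # message passed left to u1
    L_u2 = f(L_v1, R_u1) + L_v2     # message passed left to u2
    R_v1 = f(R_u1, L_v2 + R_u2)     # message passed right to v1
    R_v2 = f(R_u1, L_v1) + R_u2     # message passed right to v2
    return L_u1, L_u2, R_v1, R_v2
```

In a full decoder these updates are swept over all $(N/2)\log_2 N$ BP-PUs for mIter iterations, with the channel LLRs feeding the rightmost stage and the frozen-bit priors feeding the leftmost stage.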

Based on (4), when the decoding reaches the maximum iteration number (mIter), the BP decoder outputs the decoded vector $\hat{u}_1^N$, each element of which is decided by (5).
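The hard-decision rule that (5) presumably encodes is the usual one; a one-line sketch follows, with the tie-breaking convention being our assumption.

```python
def hard_decision(p_zero, p_one):
    """Decide each bit from its final probability messages: 0 if P(u_i = 0) >= P(u_i = 1), else 1."""
    return [0 if p0 >= p1 else 1 for p0, p1 in zip(p_zero, p_one)]
```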

3.2. Analysis of BP Decoder

It is noticed from [13-17] that, with BP decoding, the performance of polar codes can be improved to a certain degree. However, for a capacity-achieving coding scheme, the results of those algorithms are still disappointing. Hence, we cannot help wondering why the performance of finite-length polar codes is inferior to that of LDPC codes [16, 17] and how we can improve it. To answer these questions, we need to analyze the existing BP decoder further.

In fact, it is noticed from the factor graph of polar codes in [13] that the degree of the check or variable nodes in the polar BP decoder is 2 or 3, while the average degree of LDPC codes is usually greater than 3 [12, 18], which means that the error-correcting capacity of polar BP decoding is weakened. Hence, the performance of polar codes is inferior to that of LDPC codes of the same length [18]. As shown in Figure 3, two Tanner graphs with different degree distributions are given, and we assume that there exists an erroneous variable node in a certain iteration. Compared with the check nodes in Figure 3(a), the influence of the erroneous probability messages on the check nodes in Figure 3(b) is weaker, because with a greater degree, the proportion of erroneous probability messages among the input messages of those check nodes is reduced. What is more, the convergence of BP decoding with a greater average degree is quicker, which results in fewer iterations, as shown in [12-18]. Therefore, to improve the performance of polar BP decoding, it is important to enhance its error-correcting capacity.

3.3. Generalization of BP Decoding

Motivated by the aforementioned observation, in this subsection we introduce a generalized framework of the BP decoder for polar codes. In order to improve the error-correcting capacity of polar BP decoding, twelve probability-message modifying factors are introduced to the nodes of the BP-PU shown in Figure 2. When the probability messages of a node of the BP-PU are updated, three probability-message modifying factors, one associated with each of the other three nodes, are applied.

As illustrated in Figure 4, and are the probability-message modifying factors of the th BP-PU in the th stage. In this case, each equation in (4) is correspondingly augmented with three variables; for example, the first equation of (4) is augmented with , , and . Besides, based on the new construction of the BP-PU, we further define the result of each node's probability-message calculation as a function of these three variables and the probability messages of the other three nodes; hence, we can get the new message update equations as (6).

In (6), , , , and are the four probability-message update functions of the th BP-PU in the th stage, and are the six probability-message variables. Based on (6), we can easily get a new expression of (4), defined as functions of , , and , as shown in (7). That is to say, the existing BP decoding algorithm is a special case of the generalized BP decoding proposed in this work. Furthermore, with the four probability-message update functions of (8), we can also get the probability-message update equations of the SC-based decoding algorithms in [1, 4-11]. Hence, the SC-based decoding algorithms are also special cases of our generalized BP decoding, which reveals the intrinsic relationship between the SC-based and BP decoding algorithms.
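This special-case claim can be made concrete with the standard LLR-domain SC rules from the literature: SC's "f" step coincides with the BP-PU message to u1 when the left-to-right prior from u2 is neutral (R_u2 = 0), and SC's "g" step coincides with the message to u2 when u1 is already decided (R_u1 forced to plus or minus infinity). The sketch below is an illustration under those standard forms, not a reproduction of the paper's probability-domain derivation in (6)-(8).

```python
def f_ms(a, b):
    """Min-sum approximation of the check-node rule, used here to keep the numerics simple."""
    sign = 1.0 if a * b >= 0 else -1.0
    return sign * min(abs(a), abs(b))

def sc_f(L_v1, L_v2):
    """SC 'f' update = BP-PU message to u1 with a neutral prior from u2 (R_u2 = 0)."""
    return f_ms(L_v1, L_v2 + 0.0)

def sc_g(L_v1, L_v2, u1_hat):
    """SC 'g' update = BP-PU message to u2 with u1 fixed to its decision (R_u1 = +/-inf)."""
    R_u1 = float('inf') if u1_hat == 0 else float('-inf')
    return f_ms(L_v1, R_u1) + L_v2
```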

Therefore, we can conclude from (6)–(8) that, in order to improve the performance of polar codes, we need to find the optimal functions , , and and the optimal parameters and , which will be discussed in the next section.

4. BP Decoding Algorithm Optimization

Motivated by the aforementioned conclusion, we now consider how to optimize the generalized BP decoding algorithm so as to improve the performance of polar codes.

4.1. Messages Update Functions

It is noticed from (4) and (6)–(8) that, in order to enhance the reliability of message propagation, it is important to determine appropriate message update functions for the generalized BP decoding algorithm. In general, however, it is difficult to find the optimal message update functions; therefore, in this work, the message update functions are derived from an analysis of the polar encoding construction. Based on (1), we have (9).

Therefore, based on error-correction coding theory and (9), we can get the message update functions as (10).

That is to say, with the probability-message modifying factors, the new probability-message update equations can be written as (11).

Next, our task is to determine the optimal probability-message modifying factors, that is, and , so as to achieve the best performance.
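Since (10)-(11) are not reproduced in this text, the sketch below only illustrates the general idea, assuming, as one possible instantiation in the spirit of normalized BP, that the modifying factors enter as multiplicative weights on the incoming messages of the LLR-domain BP-PU update. Whether the factors act additively or multiplicatively, and their exact functional form, are fixed by (10)-(11); the dictionary keys naming the factors are hypothetical.

```python
import math

def f(a, b):
    """Check-node combination, as in the earlier BP-PU sketch."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def weighted_bp_pu_update(L_v1, L_v2, R_u1, R_u2, alpha):
    """Left-going BP-PU messages with per-input modifying factors alpha (three per equation),
    to be tuned by the PSO search of Sections 4.2 and 4.3."""
    L_u1 = f(alpha['u1_from_v1'] * L_v1,
             alpha['u1_from_v2'] * L_v2 + alpha['u1_from_u2'] * R_u2)
    L_u2 = f(alpha['u2_from_v1'] * L_v1,
             alpha['u2_from_u1'] * R_u1) + alpha['u2_from_v2'] * L_v2
    return L_u1, L_u2
```

The right-going messages are weighted analogously, giving the twelve factors per BP-PU mentioned in Section 3.3.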

4.2. Probability Modifying Factors Optimization

In practice, due to the input of the frozen bits [1], there exist some so-called frozen nodes in the decoder, whose values are determined and independent of the decoding algorithm. Hence, if the decoding is correct, the probability messages of a frozen node (whose determined value is 0) must satisfy the condition in (12). In practice, however, due to the noise in the received signal, some frozen nodes may not satisfy this reliability condition. Therefore, in order to improve the accuracy of the decoding, we need to find the optimal probability-message modifying factors such that all the frozen nodes always satisfy the condition of (12) during the BP decoding procedure.

To achieve the above goal, we introduce a pair of parameters for each node in the decoder, called the left and right reliability degrees. For a node (and similarly for ), the left and right reliability degrees are denoted by and , respectively. If is a frozen node, and are given by (13); otherwise, the values of and are given by (14), where and .

The reliability degrees indicate the reliability of a node's decision value: the larger the reliability degree, the more reliable that value. In particular, when there is no noise in the channel, the reliability degrees of a node approach infinity. Based on the previous observations, the problem of finding the optimal probability-message modifying factors can be formulated as an optimization problem that maximizes the reliability degrees of all the nodes in the decoder, with the objective function written as (15), where and are determined by (11) and (13)-(14), and is the integral value of ; that is, .

4.3. Problem Solving Based on Modified PSO Algorithm

Based on (15), we have formulated the problem of finding the optimal probability modifying factors; in this subsection, we consider how to solve this problem so as to obtain the optimized BP decoding algorithm.

Among the various global optimizers, particle swarm optimization (PSO) has demonstrated its usefulness as an optimizer capable of finding the global optimum. The idea of PSO is based on the simulation of simplified social models such as bird flocking and fish schooling. Like other stochastic evolutionary algorithms, PSO is independent of the mathematical characteristics of the objective problem. Unlike those algorithms, however, in PSO each particle has its own memory to “remember” its own best solution, so each particle has its own “idea” of whether it has found the optimum, whereas in other algorithms previous knowledge of the problem is destroyed once the population changes. PSO has attracted increasing attention because of its simple concept, easy implementation, and quick convergence. These advantages allow it to be applied successfully in a variety of fields, mainly to unconstrained continuous optimization problems [19-26]. Therefore, in this work, we explore the PSO algorithm with a certain modification based on particle fitness evaluation to solve problem (15).

To present our modified PSO algorithm, we first introduce some preliminary definitions and concepts used in the operation of the PSO algorithm, which is described as follows.

(1) Parameter Definitions. We consider a swarm of particles, , where the position of represents a possible solution point for the probability modifying factors in problem (15), and is the number of probability modifying factors. Each element of is the value of one probability modifying factor; that is, , . Based on the work of [19], the position of particle is updated by (16), in which the pseudovelocity is calculated as in (17), where is the particle inertia, is the iteration number, represents the best position ever visited by particle up to iteration , and represents the global best position in the swarm at iteration . and are uniform random numbers between 0 and 1. It is proven in [19] that, when , the products or have a mean of 1.
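A minimal sketch of the updates (16)-(17) follows, assuming the standard PSO form from [19] with acceleration coefficients c1 = c2 = 2, which is consistent with the remark that the corresponding products have a mean of 1; bounding the positions to the valid range of the modifying factors is omitted and is application specific.

```python
import numpy as np

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration for S particles and M modifying factors.
    positions, velocities, pbest: arrays of shape (S, M); gbest: array of shape (M,)."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(positions.shape)                   # uniform random numbers in [0, 1)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities                       # inertia term of (17)
                  + c1 * r1 * (pbest - positions)      # cognitive (personal best) term
                  + c2 * r2 * (gbest - positions))     # social (global best) term
    positions = positions + velocities                 # position update (16)
    return positions, velocities
```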

(2) Fitness Evaluation. In general, with the aforementioned parameter definitions, we can obtain the global optimal solution of an optimization problem. In problem (15), however, the situation is different, because during the BP decoding of polar codes, the probability messages of a node calculated on the basis of the probability modifying factors must satisfy the conditions of (12) or (14). In order to check these conditions during the decoding, we further define the two functions in (18), where is the set of frozen nodes of the decoder, and is the set of frozen nodes whose probability messages based on do not satisfy the condition of (12). Similarly, is the set of nonfrozen nodes of the decoder, and is the set of nonfrozen nodes whose left and right reliability degrees based on do not satisfy (14). Furthermore, we have . Based on (18), we can evaluate the fitness of a particle's current position so as to accelerate the search for the optimal solution. After checking the conditions of (12) and (14), we obtain the fitness value of a particle by (19).
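To make the role of (18)-(19) concrete, the sketch below counts the violating nodes that the two functions are built from. The exact tests and the fitness formula are not reproduced in this text, so the frozen-node test p0 >= p1 and the reliability threshold used here are our assumptions.

```python
def count_violations(p_zero, p_one, left_rel, right_rel, frozen_set, threshold):
    """Return (# frozen nodes violating (12), # nonfrozen nodes violating (14)): a sketch only."""
    n_frozen_bad = sum(1 for i in frozen_set if p_zero[i] < p_one[i])
    n_info_bad = sum(1 for i in range(len(p_zero))
                     if i not in frozen_set
                     and min(left_rel[i], right_rel[i]) < threshold)
    return n_frozen_bad, n_info_bad
```

The particle fitness in (19) is then computed only for particles whose violation counts fall below the stopping parameters, as in steps 6-7 of Algorithm 1.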

As a summary of the previous description, we now provide the whole procedure of the proposed BP decoding algorithm based on PSO optimization in the form of pseudocode, as shown in Algorithm 1.

Input:
   The stopping parameters and ;
   The frozen nodes set, ;
   The probability messages of the input nodes of the decoder, ;
Output:
   The decoded source binary vector, ;
(1)   Initialize the parameters used in the PSO algorithm, including particles initial set ,
    the pseudovelocities , particle inertia , maximum iteration number and , ;
(2)  while (Stopping criterion is not satisfied) do
(3)     Update particle pseudovelocity vector using (17);
(4)    Update particle position vector using (16);
(5)    Calculate the probability messages of each nodes in the decoder using (11);
(6)    if   and   then
(7)     Calculate the fitness value using (19);
(8)     if     then
(9)       , ;
(10)    end if
(11)     if     then
(12)      , ;
(13)     end if
(14)    end if
(15) end while
(16) Output the decoded source binary vector by hard decision of the output nodes of the decoder;
(17) return   ;

It is noticed from Algorithm 1 that steps 2–15 are the key of the algorithm. The main consideration behind these steps is based on the following two observations. On the one hand, if there were no noise in the channel, the BP decoding of polar codes would be error-free, which means that all the nodes in the decoder would satisfy the condition of (12); that is, and would both equal 0. In practice, however, this case cannot happen because of the channel noise; hence, we introduce two stopping parameters and for and . Since the values of the frozen nodes are known to the decoder, we set to 0. To ensure the correctness of the decoding, should be as small as possible, so that when a particle is found that makes and both true, the stopping criterion is satisfied and the while loop can be stopped. On the other hand, when there exist multiple particles that satisfy the stopping criterion, we choose the one that yields the maximum fitness value as our final solution, as shown in the corresponding steps of Algorithm 1.

5. Numerical Results

In this section, Monte Carlo simulations are provided to show the performance of the proposed decoding algorithm. In the simulations, BPSK modulation and an additive white Gaussian noise (AWGN) channel are assumed. The code length is , the code rate is 0.5, and the indices of the information bits are the same as in [1].

As shown in Figure 5, the proposed decoding algorithm based on PSO achieves better bit error rate (BER, defined as the ratio of erroneously decoded bits to the total number of decoded bits) performance than the existing BP decoding. In particular, in the low signal-to-noise ratio (SNR) region, that is, low ( is the average energy per bit and is the noise power spectral density), the proposed algorithm provides a larger SNR advantage; for example, when the BER is , the proposed decoding algorithm provides an SNR advantage of 0.9 dB, and when the BER is , the SNR advantage is 0.6 dB. Hence, we can conclude that, with the proposed generalized BP decoding and PSO, the performance of belief propagation decoding for polar codes can be improved.

Additionally, it is important to note that the proposed algorithm relies on the probability-message modifying factors, whose values are optimized by the PSO algorithm. Hence, the time complexity of the existing BP decoding is lower than that of the proposed decoding, which depends heavily on the convergence time of the PSO algorithm. Work on optimizing the time complexity of the proposed algorithm will be conducted in our future research.

6. Conclusion

In this work, we proposed a generalized belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of polar codes. Through an analysis of the existing BP decoding algorithm for polar codes, we first introduced a probability modifying factor to each node of the BP decoder so as to enhance the error-correcting capacity of the BP decoding. Then, we generalized the BP decoding algorithm based on these modifying factors and derived the probability update equations for the proposed generalized BP decoding. Based on these new probability update equations, we further showed the intrinsic relationship among the existing decoding algorithms. Finally, in order to achieve the best performance, we formulated an optimization problem to find the optimal probability modifying factors for the proposed generalized BP decoding, and we introduced a method based on a modified PSO algorithm to solve it. Numerical results showed that the proposed generalized BP decoding algorithm achieves better performance than the existing decoding, which suggests the effectiveness of the proposed decoding algorithm.

Conflict of Interests

The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.