Abstract

An energy efficient low-density parity-check (LDPC) decoder using an adaptive wordwidth datapath is presented. The decoder switches between a Normal Mode and a reduced wordwidth Low Power Mode, which reduces signal toggling because the variable node processing inputs change in fewer bits. The time the decoder stays in a given mode is optimized for the power and BER requirements and the received SNR. The paper explores different Low Power Mode algorithms for reducing the wordwidth and their implementations. Analysis of the BER performance and power consumption, from fixed-point numerical and post-layout power simulations, respectively, is presented for a full parallel 10GBASE-T LDPC decoder in 65 nm CMOS. A 5.10 mm2 low power decoder implementation achieves 85.7 Gbps while operating at 185 MHz and dissipates 16.4 pJ/bit at 1.3 V with early termination. At 0.6 V the decoder throughput is 9.3 Gbps (greater than the 6.4 Gbps required for 10GBASE-T) while dissipating an average power of 31 mW. This is 4.6× lower than the state of the art reported power, with an SNR loss of 0.35 dB.

1. Introduction

Communication systems are becoming a standard requirement of every computing platform, from wireless sensors and mobile telephony to netbooks and server-class computers. Local and cellular wireless communication throughputs are expected to increase to hundreds of Mbps and even beyond 1 Gbps [1–3]. With this growing demand for bandwidth come larger system integration complexity and higher energy consumption per packet. Low power design is therefore a major design criterion alongside the standards’ throughput requirements, as both determine the quality of service and cost.

Additionally, mobile computing is taking on a new dimension as a portal into Software as a Service (i.e., cloud computing), where low performance computers can tap into the power of a distant high-performance computer cluster [4, 5]. So far the emerging 10GBASE-T standard has not been adopted into data center infrastructures as quickly as predicted because of its power consumption [6]. The power consumption of the 10GBASE-T PHY layer (more specifically the receiver, whose implementation is left open by the 802.3an standard [7]) has proven difficult to reduce [8].

LDPC codes were first developed in 1962 [9] as an error correction technique that allows communication over noisy channels at rates approaching the Shannon limit. With advancements in VLSI, LDPC codes have recently received a great deal of attention because of their superior error correction performance and have been adopted by many recent standards, such as digital video broadcasting via satellite (DVB-S2) [10], the WiMAX standard (802.16e) [11], the G.hn/G.9960 standard for wired home networking [12], and the 10GBASE-T standard for 10 Gigabit Ethernet (802.3an) [7].

LDPC decoder architectures can be categorized into two domains: full parallel and partial parallel. Full parallel is a direct implementation of the LDPC decoding algorithm, with every computational unit and every interconnection between them realized in hardware. Partial parallel decoders use pipelining, large memory resources, and shared computational blocks to deal with the inherent communication complexity and massive bandwidth. Since the number of operations achievable per cycle is larger with a full parallel processor, their energy efficiencies are theoretically the best [13]. For example, an LDPC decoder implementing the 10GBASE-T standard requires 24,576 operations per iteration (the total number of check node update and variable node update computations in the message-passing algorithm [14]). A full parallel decoder can take one cycle to perform one iteration, while a partial parallel decoder takes multiple cycles (e.g., in one design, each iteration takes 12 cycles [15]). Compared to partial parallel decoders, full parallel decoders can achieve the same throughput while operating at a lower clock frequency, that is, running at lower minimum supply voltages and thus reducing energy. However, for complex codes, full parallel decoders deviate strongly from this ideal due to their large interconnect complexity and low clock rate [14]. Given equivalent 10GBASE-T compliant LDPC codes, throughput requirements, and 65 nm CMOS technology, a full parallel LDPC decoder achieves 2.6 TOPS per Watt compared to 1.4 TOPS per Watt for a partial parallel LDPC decoder [14, 15]. Thus practical full parallel decoders show less than a 2× performance-power efficiency advantage compared to the roughly 12× promised in the ideal scenario.
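
The gap between the ideal and the reported efficiency advantage can be checked with a few lines of arithmetic. The short Python sketch below uses only the figures quoted above; it is illustrative and not part of any decoder implementation.

# Ideal versus reported full/partial parallel efficiency gap (figures from the text).
ops_per_iteration = 24576                   # check node plus variable node updates
cycles_full, cycles_partial = 1, 12         # cycles per iteration; 12 is from [15]
ideal_gain = cycles_partial / cycles_full   # 12x at an equal clock frequency
reported_gain = 2.6 / 1.4                   # TOPS per Watt, full vs. partial parallel
print(ops_per_iteration)                    # 24576 operations in one cycle (full parallel)
print(ideal_gain, round(reported_gain, 1))  # 12.0 vs. ~1.9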

To improve their efficiency, previous research has focused on reducing the routing congestion and wire delay of full parallel decoder implementations through bit-serial communication [13], wire partitioning [16], and algorithm modification [17]. A full parallel design using the Split-Row algorithm modification resulted in an implemented architecture that achieved 14 TOPS per Watt, that is, 10× the efficiency of a partial parallel decoder [14].

This paper proposes an adaptive wordwidth algorithm that takes advantage of data input patterns during the LDPC decoding process. We show that the method is valid for both MinSum and Split-Row Threshold decoding and, for demonstration, we implement the proposed method for a Split-16 Threshold decoder. Switching activity reduction through adaptive arithmetic datapath wordwidth reduction has been explored in low power designs based on data spatial correlation [18]; to our knowledge it has not yet been explored in LDPC decoding. The paper presents an architecture that switches between Normal Mode and Low Power Mode operation, with a final post-layout implementation, and optimizes energy efficiency by minimizing unnecessary bit toggling while maximizing bit error rate (BER) performance.

The paper is organized as follows: Section 2 gives an overview of LDPC decoding, the Split-Row Threshold algorithm, and common power reduction techniques; Section 3 introduces the adaptive wordwidth power reduction method, with analysis of three different methods and their bit error performance results; Section 4 gives details of the architecture; Section 5 presents the results of the post-layout implementations of three full parallel 10GBASE-T LDPC decoders that implement the low power adaptive algorithm.

2. Background

2.1. LDPC Codes and MinSum Normalized Decoding

The LDPC decoding algorithm works by performing an iterative computation known as message passing. Each iteration consists of variable node and check node computations. Common iterative decoding algorithms are the Sum-Product Algorithm (SPA) [19] and the MinSum algorithm [20]. Both algorithms are defined by a check node update equation that generates β messages and a variable node update equation that generates α messages. The MinSum variable node update equation, which is identical to the SPA version, is given as

\alpha_{ij} = \lambda_j + \sum_{i' \in C(j) \setminus \{i\}} \beta_{i'j},   (1)

where each message α_ij is generated using the noisy channel information (of a single bit), λ_j, and the messages from all check nodes connected to variable node j as defined by C(j) (excluding check node i). MinSum simplifies the SPA check node update equation by replacing the computation of a nonlinear equation with a min function. The MinSum check node update equation is given as

\mathrm{sign}(\beta_{ij}) = \prod_{j' \in V(i) \setminus \{j\}} \mathrm{sign}(\alpha_{ij'}),   (2)

|\beta_{ij}| = S_{norm} \times \min_{j' \in V(i) \setminus \{j\}} |\alpha_{ij'}|,   (3)

where each message β_ij is generated using the messages from all variable nodes connected to check node i as defined by V(i) (excluding variable node j). Note that the normalizing scaling factor S_norm is included to improve error performance, and so this variant of MinSum is called “MinSum Normalized” [21].
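
A minimal floating-point sketch of the update rules (1)–(3) is given below. The connectivity sets C(j) and V(i) are passed in as plain dictionaries of messages; the function names and data structures are illustrative assumptions, not part of any implementation described later.

import math

S_norm = 0.25  # normalization scaling factor (the fixed-point value used later in this paper)

def variable_node_update(lambda_j, beta_in, exclude_check):
    # Equation (1): alpha_ij = lambda_j + sum of incoming beta messages, excluding check i.
    return lambda_j + sum(b for i, b in beta_in.items() if i != exclude_check)

def check_node_update(alpha_in, exclude_var):
    # Equations (2)-(3): product of signs and normalized minimum magnitude,
    # both taken over the inputs excluding variable j.
    others = [a for j, a in alpha_in.items() if j != exclude_var]
    sign = 1.0
    for a in others:
        sign *= math.copysign(1.0, a)
    return sign * S_norm * min(abs(a) for a in others)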

An LDPC code is defined by an M × N parity-check matrix H, which encapsulates the important code parameters: the number of rows, M, is the number of check nodes; the number of columns (or code length), N, is the number of variable nodes; the row weight W_r and column weight W_c define the number of 1’s per row and per column, respectively. In this work, we examine cases where H is regular, and thus W_r and W_c are constants. For clearer explanations, in this paper we will use the (6,32) (2048,1723) RS-LDPC code adopted by the 10GBASE-T standard [22]. This code is described by a 384 × 2048 matrix with W_c = 6 and W_r = 32. There are M = 384 check nodes and N = 2048 variable nodes, and wherever H(i,j) = 1, there is an edge (interconnection) between check node i and variable node j. There are 2048 × 6 = 12,288 variable node and 384 × 32 = 12,288 check node computations, for a total of 24,576 computations per iteration. Each variable node sends its result (i.e., its α message) to its connected check nodes, and vice versa. A single cycle per iteration full parallel architecture therefore requires 24,576 message transfers (message-passing) per cycle. Given that each message can be as large as four to six bits, the bisection bandwidths of the communication links between the check and variable node processors, the memory and check nodes, and the variable nodes and memory, are from 98 to 147 Kbit per cycle each. These links not only cause problems in interconnect latencies but also add capacitance due to wires and repeaters, which increases the circuit power [13].
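
The per-cycle bandwidth figures quoted above follow directly from the code parameters; a tiny illustrative Python check (using only the numbers given in the text) is:

N, M, Wc, Wr = 2048, 384, 6, 32
messages_per_cycle = N * Wc + M * Wr            # 24,576 message transfers per cycle
for bits_per_message in (4, 6):
    kbit = messages_per_cycle * bits_per_message / 1e3
    print(bits_per_message, round(kbit))        # 4 bits -> ~98 Kbit, 6 bits -> ~147 Kbit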

2.2. Split-Row Threshold Decoding

The previously proposed Split-Row Threshold [23] algorithm significantly reduces the interconnect complexity and circuit area by partitioning the links needed in the message-passing algorithm, which localizes message passing. A minimal amount of information is transferred among partitions to ensure computational accuracy while reducing global communication. This is most effective in reducing wire congestion and back-end engineering time for full parallel architectures with large codes (e.g., from 2 Kbits to 64 Kbits) or high check node degrees.

The Split-Row Threshold algorithm regains the loss in error performance by adding an additional form of information based on a comparison with a threshold value (T). Based on this comparison, a “threshold enable” bit (Threshold_en) is sent between each partition [14]. The check node magnitude update equation is modified as follows:

|\beta_{ij}|_{sp(k)} = S_{norm} \times \begin{cases} \min_{j' \in V_{sp(k)}(i) \setminus \{j\}} |\alpha_{ij'}|, & \text{if } Threshold\_en = 0 \text{ or } \min |\alpha_{ij'}| \le T,\\ T, & \text{if } Threshold\_en = 1 \text{ and } \min |\alpha_{ij'}| > T, \end{cases}   (4)

where Threshold_en is the OR of the threshold comparison results received from the other partitions, and where V_sp(k)(i) represents the variable nodes contained only in decoder partition sp(k) on row i (each partition has W_r/Spn variable nodes). With the threshold comparison based information, the error performance loss relative to MinSum Normalized is reduced to 0.07 to 0.22 dB (depending on the level of partitioning).
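
A rough per-partition software sketch of the modified update (4), assuming the reconstruction above, is shown below. For brevity it uses only the local sign product and forwards only the locally generated Threshold_en bit; in the real architecture the sign and Threshold_en bits are combined across partitions, as described in Section 4.1.

def split_row_threshold_check_update(alpha_local, T, S_norm, threshold_en_in):
    # alpha_local: variable-to-check messages of one row inside one partition.
    # threshold_en_in: Threshold_en bit received from the neighboring partitions.
    mags = [abs(a) for a in alpha_local]
    threshold_en_out = min(mags) < T          # local comparison, forwarded to neighbors
    betas = []
    for j, a in enumerate(alpha_local):
        others = mags[:j] + mags[j + 1:]
        m = min(others) if others else T
        if threshold_en_in and m > T:
            m = T                             # another partition saw a value below T
        sign = -1.0 if a < 0 else 1.0         # local sign only (global sign omitted here)
        betas.append(sign * S_norm * m)
    return betas, threshold_en_out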

This paper discusses power improvements of a Split-16 Threshold decoder architecture (i.e., Spn = 16 partitions) using the proposed adaptive wordwidth technique. Since the row weight of the 10GBASE-T code is 32, each partition contains 384 check node processors that each have two inputs. The optimum values for T and S_norm depend on the code rate, size, and the level of partitioning. For example, for the (6,32) (2048,1723) LDPC code using Split-16 Threshold, the best choice of T results in a BER performance with a 0.3 dB SNR loss from MinSum Normalized.

2.3. Power Reduction Methods
2.3.1. Early Termination

An efficient technique to reduce energy dissipation is to control the number of decoding iterations that a block requires for successful decoding convergence. The common method is to verify whether the computed codeword satisfies all parity-check constraints at the end of each iteration. Once convergence has been verified, the decoding process is terminated. Several methods have been proposed to implement this early termination efficiently [13, 24, 25]. LDPC codes, especially high rate codes, converge early at high SNR [26]. Therefore, by detecting early decoder convergence, throughput and energy can potentially improve significantly while maintaining the same error performance.
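
A minimal sketch of the convergence test is given below; the data structures (a hard-decision vector and a per-check list of connected variable nodes) are assumptions made for illustration.

def converged(hard_decisions, check_to_vars):
    # Every parity check must be satisfied: the XOR (mod-2 sum) of the hard
    # decisions over each row of H must be zero.
    for var_indices in check_to_vars:        # one list of variable node indices per check node
        parity = 0
        for j in var_indices:
            parity ^= hard_decisions[j]
        if parity:
            return False
    return True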

2.3.2. Voltage Scaling

One effective technique to save power and energy is to employ voltage scaling in the decoder such that the application throughput requirement is still met. For the Split-16 Threshold decoder, the minimum voltage that meets the 6.4 Gbps 10GBASE-T compliant throughput is 0.7 V in 65 nm CMOS [14]. In most cases, near-threshold operation is not advisable in nanometer technologies due to increased susceptibility to variations and soft errors [27], and so any further energy savings from voltage scaling would reduce the functional integrity of the decoder’s circuits.

2.3.3. Switching Activity and Wordwidth Reduction

Because decoders exhibit large switching activity due to their computation-heavy nature, power can be decreased by lowering the effective switched capacitance. For full parallel architectures, this was done through the Split-Row implementations, which reduced overall hardware complexity and thus eliminated interconnect repeaters and wire capacitance. The datapath wordwidth of the decoder directly determines the required memory capacity, routing complexity, decoder area, and critical path delays. Moreover, it affects the amount of switching activity on wires and logic gates, and therefore the power dissipation.

For partial parallel architectures, wordwidth reduction using nonuniform quantization has been used to reduce the amount of information needed in check node processing and the memory storage requirements (thus also reducing the SRAM capacitance) [28]. However, conversion steps are needed to perform the variable node computation in the original, wider wordwidth. In [15], additional postprocessing is required to improve the error correction performance, which also improves the error floor. Applying nonuniform quantization to full parallel architectures may cost more than it saves: conversion steps across all communication links add hardware between every check and variable node, and since memory is not a large part of such architectures, the method does not save memory area. In this work, rather than statically fixing a reduced wordwidth, we introduce a low cost adaptive wordwidth datapath technique that reduces switching activity at run time for a full parallel decoder.

3. Adaptive Wordwidth Decoder Algorithm

A simplified block diagram of a single cycle LDPC decoder is shown in Figure 1. With the Split-Row Threshold architecture, the check node processor logic generally dissipates less power than the variable node processor due to its reduced hardware [14]. The figure shows some of the variable node details, such as the adder tree. For the (6,32) (2048,1723) 10GBASE-T code, variable node processors add seven inputs: six inputs from the messages passed by the check node processors (β) as well as the original received data from the channel (λ). Since wordwidth growth is required to maintain a correct summation, and given that the 10GBASE-T code length is large (N = 2048), the amount of power dissipated by the 2048 variable node processors in a full parallel decoder is significant.

Our proposed algorithm adapts the wordwidth of the variable node processing datapath based on its data input patterns (β values). The algorithm switches between two modes: Low Power Mode and Normal Mode. In Normal Mode a full wordwidth computation is done, while Low Power Mode performs a reduced wordwidth computation. We first show that β values are largely concentrated in a small interval, [−T·S_norm, +T·S_norm], and then present the algorithm.

3.1. Theoretical Investigations

Let the variable node messages α_ij be the inputs to a check node i. Since variable node messages are initialized with the channel information (assuming the β messages in (1) are initially zero), for BPSK modulation and an AWGN channel, their distribution at the first iteration is Gaussian.

For iterations >1, the variable node messages in MinSum Normalized are well approximated by Gaussian distributions [29]. Similarly, in Split-Row Threshold, the variable node messages can be fitted with the sum of two Gaussian distributions, and a very good agreement (R-square = 0.99) was achieved for the fit. Therefore, the distribution at iteration k can be described as

f_k(\alpha) = \frac{1}{2\sigma_k\sqrt{2\pi}} \left[ e^{-(\alpha - m_k)^2/(2\sigma_k^2)} + e^{-(\alpha + m_k)^2/(2\sigma_k^2)} \right],   (5)

where σ_k² and m_k are the variance and the mean of the distribution. For this distribution, the probability that a variable node message has a magnitude less than a given value A is

P(|\alpha_{ij}| < A) = \int_{-A}^{+A} f_k(\alpha)\, d\alpha.   (6)

Thus, assuming the α_ij are i.i.d., the probability that at least one input of the check node has a magnitude less than A is

P\left(\min_{j \in V(i)} |\alpha_{ij}| < A\right) = 1 - \bigl(1 - P(|\alpha_{ij}| < A)\bigr)^{W_r}.   (7)

In MinSum Normalized and Split-Row Threshold, for each check node, if there exists one input α_ij whose magnitude is less than A, then by applying (2) and (3) the other outputs of the check node (β messages) have absolute values less than A × S_norm after being normalized with S_norm. Thus, if the probability from (7) is high enough for a particular A, we should expect a large concentration of β values within [−A·S_norm, +A·S_norm]. Simulation results for the (2048,1723) 10GBASE-T code using MinSum Normalized show that the probability from (7) is 99%, 92%, and 65% for iterations 1 through 3, and that 99%, 90%, and 62% of β values lie within the corresponding interval (the chosen A results in near optimum BER performance).
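
The probabilities in (6)-(7) are easy to evaluate numerically. The sketch below is purely illustrative and assumes example values for m_k, σ_k, and A; it does not reproduce the simulation numbers quoted above.

import math

def p_mag_below(A, m, sigma):
    # Equation (6) for the two-Gaussian mixture (5), written with the Gaussian CDF.
    def phi(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return 0.5 * ((phi((A - m) / sigma) - phi((-A - m) / sigma)) +
                  (phi((A + m) / sigma) - phi((-A + m) / sigma)))

def p_at_least_one_below(A, m, sigma, Wr=32):
    # Equation (7): probability that at least one of the Wr i.i.d. inputs is below A.
    return 1.0 - (1.0 - p_mag_below(A, m, sigma)) ** Wr

print(p_at_least_one_below(A=0.25, m=1.0, sigma=0.8))   # example values only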

In Split-Row Threshold, A is set to the threshold T. For the 10GBASE-T code with Split-16 Threshold, the probability value from (7) is 99%–67% over the SNR range 3.4–4.2 dB and iterations 1 through 4. If there exists an input in a partition whose absolute value is smaller than T, then the Threshold_en signal is asserted high and is globally sent to the other partitions. Therefore, the check nodes in the other partitions set their minimum (the min term in (3)) to T if their local minimum was larger than T. Due to this key characteristic and applying (3), a large number of check node messages (β) are equal to ±T × S_norm.

Table 1 shows the percentage of β values within [−T·S_norm, +T·S_norm] and the percentage equal to ±T·S_norm for a large number of decoding iterations at low SNR and at 4.2 dB. We call the interval [−T·S_norm, +T·S_norm] the Threshold Region. The table shows that at the lower SNR, through iteration 7, 95% down to 85% of all β values are in the Threshold Region, of which 90%–81% are exactly ±T·S_norm. For a high SNR value of 4.2 dB, through iteration 3, 90% down to 48% of β values are in the region, with 86%–47% being exactly ±T·S_norm. This is shown in Figure 2. Most blocks take more than four iterations to converge at the lower SNR.

Therefore, at low iteration counts and at low SNR values, since most messages lie within the Threshold Region, the inputs to the variable node processors can be represented with fewer bits, given a fixed quantization format, implying that the variable node additions can be done with smaller wordwidths. This allows us to adaptively change the wordwidth of the variable node processor depending on SNR and iteration count in order to reduce the final energy per bit without losing significant error correction performance.

3.2. Power Reduction Algorithm

Given that variable node input wordwidths can be reduced without losing significant information at low SNR values, and also at low iteration counts at high SNRs, we propose a Low Power Mode operation for the decoder which significantly reduces the switching activity of the variable node processors, as follows. After check node processing, and when the current iteration count (Iteration) is less than a preset Low Power Mode maximum iteration count (Low Power Iteration), we chop or saturate β such that it lies within the Threshold Region. Three methods are explored, which have different BER performance, convergence behavior, and hardware complexity. All three methods remap every β into the Threshold Region. In Method 1, we saturate β values outside the Threshold Region to ±T·S_norm. In Method 2, we set all β magnitudes to T·S_norm, because the majority of them are concentrated at that value. In Method 3, we keep only the minimum number of LSBs that can represent the values within the Threshold Region (in other words, the MSBs are chopped). These methods are described in Algorithm 1, and a small software sketch of the three remappings is given after the algorithm.

for i = 1 to M do
  for k = 1 to Spn do
    for all j ∈ V_sp(k)(i) do
      if Lowpower_flag = 0 then
        β_ij = β_ij                                         (Normal Mode: message unchanged)
      else
        Method 1: β_ij = sign(β_ij) × min(|β_ij|, T·S_norm)                    (9(a))
        Method 2: β_ij = sign(β_ij) × T·S_norm                                 (9(b))
        Method 3: β_ij = sign(β_ij) × (magnitude LSBs of |β_ij|, MSBs chopped) (9(c))
      end if
    end for
  end for
end for
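
For illustration, a floating-point sketch of the three remappings (9(a))–(9(c)) is given below, assuming the 6-bit 1.5 quantization format and T·S_norm = 0.0625 used later in the paper; the helper names are hypothetical.

import math

LSB = 0.03125                 # quantization step of the 6-bit 1.5 format
SAT = 0.0625                  # T * S_norm: edge of the Threshold Region

def remap_method1(beta):      # (9(a)) saturate values outside the Threshold Region
    return math.copysign(min(abs(beta), SAT), beta)

def remap_method2(beta):      # (9(b)) force every magnitude to the threshold value
    return math.copysign(SAT, beta)

def remap_method3(beta):      # (9(c)) keep only the magnitude LSBs (chop the MSBs)
    steps = int(round(abs(beta) / LSB)) % 4   # keep 2 magnitude LSBs of the 5 fractional bits
    return math.copysign(steps * LSB, beta)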

A qualitative comparison shows that Method 1 has the best error performance, since it preserves any β already within the Threshold Region and maps outside values regularly. Method 2 offers a simple hardware solution at the cost of losing some information for β values inside the Threshold Region, but it achieves the largest reduction in bit toggling (to be explained in Section 4). Method 3 is a compromise: it has a lower hardware cost than Method 1 and better error correction performance than Method 2 (even though values are mapped irregularly).

By restricting the information range of β to the Threshold Region, the required datapath wordwidth is reduced, and thus the variable node computation can be done with less switching activity in Low Power Mode. The challenges are implementing a low overhead flexible datapath and deciding when to switch out of Low Power Mode so that the final convergence does not take many more iterations than running entirely in Normal Mode. Algorithm 2 describes the complete Split-Row Threshold Low Power decoding process.

Required: λ_j, that is, the channel information
Iteration = 1
while Iteration ≤ I_max do
  Lowpower_flag = (Iteration ≤ Low Power Iteration)
  for i = 1 to M do
    if Lowpower_flag = 0 then
      compute α_ij for all j ∈ V(i) with the full wordwidth using (1)
    else
      compute α_ij for all j ∈ V(i) with the reduced wordwidth, bypassing the subtraction in (1)
    end if
    for all j ∈ V(i) do
      β_ij = Split-Row Threshold check node update of (2)–(4)
    end for
  end for
  do Algorithm 1
  Iteration = Iteration + 1
end while

For our 10GBASE-T decoder implementation, the decoding message wordwidth is chosen to be 6 bits in Normal Mode. During Low Power Mode, for Methods 1 and 3, the 6-bit input additions in the variable node are reduced to 3-bit input additions, while in Method 2 they are reduced to 1-bit input additions (see Section 4). In order to simplify the hardware and further reduce toggling, the variable node final subtractions (see (1)) can be bypassed during Low Power Mode without causing a significant distortion of the α messages. This is shown in Figure 3, which compares the α distributions for the 10GBASE-T code using Split-Row Threshold and the modified version with Low Power Mode using Method 1, at iteration 4. As shown in the figure, the distributions are closely matched.

Figure 4 illustrates the BER performance of the 2048-bit 10GBASE-T code using Split-Row Threshold for Normal Mode only operation and for adaptive low power operation using Methods 1, 2, and 3 when Low Power Iteration is 3, 5, and 6. The figure shows that Methods 1, 2, and 3 have nearly the same bit error performance. They also perform very closely to All Normal Mode, with a 0.06–0.1 dB loss for the smallest Low Power Iteration setting; for the largest setting, this SNR gap increases to 0.15–0.2 dB.

4. Architecture Design

The single pipeline block diagram for the proposed full parallel Split-Row Threshold decoder with Spn = 16 partitions is shown in Figure 5. In each partition, there are 384 check node processors (each taking two inputs) and 128 variable node processors. The Sign and Threshold_en passing signals are the only wires crossing (serially) between the partitions; they are generated in the check node processors in parallel. The global Lowpower_flag signal is sent to every block and sets the operation mode to either Normal Mode or Low Power Mode (see Algorithm 2).

4.1. Check Node Processor

The check node processor implementation for partition sp(k) is shown in Figure 6 and consists of two parts, which are described in the next two subsections.

4.1.1. Split-Row Threshold

The magnitude update of β is shown along the upper part of the figure, while the global sign is determined by the XOR logic along the lower part. In Split-Row Threshold decoding, the sign bit calculated in partition sp(k) is passed to the sp(k−1) and sp(k+1) neighboring partitions to correctly calculate the global sign bit according to the check node processing equations (2) and (3).

In both MinSum Normalized and Split-Row Threshold decoding, the first minimum Min1 and the second minimum Min2 are found, alongside an index signal which indicates whether Min1 or Min2 is chosen for a particular output. These are found using multiple stages of comparators. The threshold logic implementation is shown within the dashed line and consists of two comparators and a few logic gates. The Threshold Logic contains two additional comparisons, between Min1 and T and between Min2 and T, which are used to generate the final β values. The local Threshold_en signal, generated by comparing Min1 with T, is ORed with one of the incoming Threshold_en signals from the sp(k−1) and sp(k+1) neighboring partitions and is then sent on to their opposite neighbors. The next stage is the multiplication by S_norm according to (3).

4.1.2. Low Power Mode Implementation

This step (shown as the Mode Adjust block in Figure 6) includes a multiplexer which selects the appropriate message magnitude based on the status of the global Lowpower_flag signal. In order to shut off the toggling of unused bits in Low Power Mode, they are kept at zero (their initial value). For a q-bit wordwidth implementation, we assume the Threshold Region can be represented with p bits (the sign plus p − 1 magnitude LSBs). Therefore, in signed format, the q − p MSBs carry no information in Low Power Mode. However, to eliminate the extra logic that would perform sign extension in the variable node processor, these MSBs are set to zero and are swapped with the LSB bits, so that the p meaningful bits occupy the MSB positions (bit p − 1 becomes bit q − 1).

In Method 1 (saturation to ±Sat_Value), β is adjusted based on the threshold comparison (Equation (9(a)) in Algorithm 1). This is easily implemented using the Sat_Control signal, which is generated in the Threshold Logic and determines whether the magnitude must be clipped to Sat_Value. Overall, bit toggling is reduced to at most p bits.

In Method 2, which implements Equation (9(b)) in Algorithm 1, all output magnitudes are set to the threshold value. Therefore, the magnitude always becomes Sat_Value, regardless of the input magnitude. Thus, in addition to removing the gates needed in Method 1, Method 2 reduces the bit toggling to the sign bit alone.

In Method 3, which implements Equation (9(c)) in Algorithm 1, only the first p − 1 LSBs are kept along with the sign bit, and bit toggling is reduced to p bits.
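
The bit manipulations performed by Mode Adjust can be sketched at word level as below. The sketch assumes q = 6, p = 3, and a sign-magnitude style encoding; the exact bit encoding of the hardware is not spelled out in the text, so this only illustrates the zero-the-unused-bits and swap-to-MSB idea.

Q, P = 6, 3                      # Normal Mode and Low Power Mode wordwidths (bits)

def mode_adjust(sign_bit, magnitude_lsbs):
    # Low Power Mode packing: place the sign and the P-1 magnitude LSBs in the
    # MSB positions and keep the unused Q-P LSBs at zero, so that no sign
    # extension is needed in the variable node adders.
    assert 0 <= magnitude_lsbs < (1 << (P - 1))
    word = (sign_bit << (Q - 1)) | (magnitude_lsbs << (Q - P))
    return word                  # the low Q-P bits of this word never toggle

print(format(mode_adjust(1, 2), "06b"))   # e.g. sign = 1, magnitude = 2 -> '110000'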

4.2. Variable Node Processor

The block diagram of the variable node processor, which implements (1) in Algorithm 2, is shown in Figure 7. The key benefit of Low Power Mode operation is in the variable node processor, where all addition datapath wordwidths are reduced by at least q − p = 3 bits (depending on the “Method” of implementation), which reduces the switching activity of the majority of the variable node processor. This wordwidth reduction is applied to all variable node processors (N = 2048 for the 10GBASE-T code) in the decoder. Two adjustments (conversion steps) are performed so that the variable node processor operates correctly in both Normal Mode and Low Power Mode. Mode Adjust 1 is made before adding the sum of the variable node inputs (the β messages) to the channel information λ; it shifts the addition result bits back to their original LSB positions (recall that the meaningful bits were shifted q − p positions to the left at the end of check node processing). Mode Adjust 2 is made in the subtraction stage, where the bits of the subtracted message are kept at zero (their initial value) in order to bypass the subtraction in Low Power Mode.
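
A behavioral sketch of the variable node computation in the two modes is shown below (floating-point, with the subtraction bypass modeled explicitly; the function and argument names are illustrative only).

def variable_node_outputs(lambda_j, betas, lowpower_flag):
    # betas: the Wc = 6 incoming check node messages for this variable node.
    total = lambda_j + sum(betas)                 # adder tree plus channel information
    if lowpower_flag:
        # Low Power Mode: reduced-wordwidth sum, final subtractions bypassed,
        # so every output alpha_ij equals the full sum.
        return [total for _ in betas]
    # Normal Mode: equation (1), subtract the corresponding input from the sum.
    return [total - b for b in betas]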

5. Design of CMOS Decoders

To further investigate the impact of the proposed decoder on the hardware, we have implemented three full parallel decoders using Methods 1, 2, and 3 for the (6,32) (2048,1723) 10GBASE-T LDPC code in 65 nm 7-metal layer CMOS.

5.1. Design Steps

In order to design the proposed decoder using Split-Row Threshold with an adaptive wordwidth, these key steps are required.

(1) Choosing the number of partitions (Spn), the threshold (T), and the S_norm value: it has been shown that routing congestion, circuit delay, area, and power dissipation all decrease as the number of partitions increases, with a modest error performance loss [14]. The threshold T and S_norm, which directly affect the error performance, are found through empirical simulations. For the 10GBASE-T decoder design, Spn is set to 16, and the closest fixed-point values for T and S_norm which attain near optimum floating-point performance are both 0.25.

(2) Number of supported wordwidths: as discussed in Section 3, when using Split-Row Threshold, the check node messages (β) are largely concentrated at ±T·S_norm at low iteration counts and low SNR values (e.g., more than 80% for 10GBASE-T). Therefore, it naturally makes sense to define two regions: one region represents values within [−T·S_norm, +T·S_norm], which we call the Threshold Region or Low Power Mode region, and the other represents the full message range and corresponds to Normal Mode. As long as there is no other significant concentration in the distribution of β values, increasing the number of regions (more wordwidth representation choices) is not efficient, due to the large hardware overhead and error performance loss of introducing another mode into all check and variable node processors. For example, adding one more region requires an additional global signal to choose between regions. It also adds comparators to select the region (mode) that a message fits in and requires larger muxes to choose between the outputs.

(3) Normal Mode wordwidth selection: this is the main datapath width of the decoder and is chosen to optimize the error performance with minimum hardware. BER performance simulations for the (2048,1723) 10GBASE-T LDPC code using Split-Row Threshold indicate that the minimum fixed-point wordwidth which attains near floating point error correction performance is 6 bits (a 0.03 dB gap). Therefore, q = 6 for our implementation.

(4) Low Power Mode wordwidth selection: this is the subset of the Normal Mode wordwidth in which the Threshold Region values can be represented. For the 10GBASE-T code, the Threshold Region is within [−0.0625, +0.0625]. Therefore, its values in 6-bit (1.5 format) quantization are −0.0625, −0.03125, 0, +0.03125, and +0.0625. These values can be represented with a 3-bit subset. Figure 8 shows the check node output (β) distribution of the Split-Row Threshold decoder for the (2048,1723) LDPC code, binned into the discrete values set by the 6-bit (1.5 format) quantization. The 3-bit subset covers all values within the Threshold Region. A representation with fewer bits, such as the 2-bit subset shown in the figure, misses some values of the Threshold Region. There is also no benefit in using a 4-bit subset, because the additional values represented by the 4-bit subset are not within the Threshold Region. Therefore, p = 3 for our implementation.
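
The wordwidth choice can be illustrated with a short quantization sketch, assuming the 6-bit 1.5 fixed-point format (sign plus five fractional bits, LSB = 0.03125) described above; the helper names are hypothetical.

LSB = 0.03125                                    # step of the 6-bit 1.5 format
THRESHOLD_REGION = 0.0625                        # T * S_norm for T = S_norm = 0.25

def quantize_1p5(x):
    # Round to the 6-bit 1.5 format grid and clip to its range.
    steps = max(-31, min(31, int(round(x / LSB))))
    return steps * LSB

# Values representable by the p = 3 bit subset (sign + 2 magnitude LSBs):
subset_3bit = sorted({s * k * LSB for s in (-1, 1) for k in range(1, 4)} | {0.0})
in_region = [v for v in subset_3bit if abs(v) <= THRESHOLD_REGION]
print(in_region)    # [-0.0625, -0.03125, 0.0, 0.03125, 0.0625]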

5.2. Synthesis Results

The hardware overhead of implementing the three low power “Methods” is shown in Table 2. Among them, Method 2 has the smallest increase, with 5% larger check node processor and variable node processor areas compared to Split-Row Threshold (with none of the methods applied). Method 1 has the largest hardware overhead, due to the muxes and gates added for the saturation implementation, with a 15% increase in check node processor area and a 6% increase in variable node processor area compared to the original design.

5.3. Back-End Implementations

The Method 1, 2, and 3 decoders are implemented in STMicroelectronics LP 65 nm CMOS technology with a nominal supply voltage of 1.2 V (maximum 1.3 V). We use a standard-cell RTL-to-GDSII flow with synthesis and automatic place and route to implement all decoders. The decoders were described in Verilog, synthesized with Synopsys Design Compiler, and placed and routed using Cadence SoC Encounter. Each partition block is independently implemented and connected to the neighboring blocks with the Sign and Threshold_en wires.

To generate reliable power numbers, SoC Encounter is used to extract RC delays using the final place and route information and timing information from the standard-cell libraries. The delays are exported into a “standard delay format” (SDF) file. This file is then used to annotate the post-layout Verilog gate netlist for simulation in Cadence NC-Verilog. This generates a timing-accurate “value change dump” (VCD) file that records the signal switching for each net as simulated using a testbench. The VCD file is then fed back into SoC Encounter to compute a simulation-based power analysis. This analysis is performed for 100 test vectors for each SNR.

The chip layout of the Method 1 decoder is shown in Figure 9. A summary of the post-layout results for the proposed low power Method 1, 2, and 3 decoders, with Low Power Iteration = 6, is given in Table 3. For comparison, a Method 1 decoder running only in Normal Mode is included in the table.

5.4. Results and Analysis

Due to the nature of the Split-Row Threshold algorithm, which significantly reduces wire interconnect complexity, all three full parallel decoders achieve a very high logic utilization of 95%-96%. In this case the synthesis results correlate well with the post-layout area increases. For instance, as shown in Table 3, the decoders of Methods 1, 2, and 3 occupy 5.10–5.27 mm2. Method 2, which has the minimum number of added gates (see Table 2), has the smallest area among the three; Method 1 has the largest, and Method 3 is in between the other two. The results also show that the critical paths are in general about equal (the implementations are optimized for area, with circuit delay a lower priority). Method 1 has a 2%-3% greater critical path delay than the other decoders due to the increased path delays through the additional muxes and AND/OR gates.

The table also summarizes the power results for the case in which the decoders of all three methods are kept in Low Power Mode for 6 iterations and Normal Mode for 9 iterations, out of a total of 15 iterations. Energy data are reported for 15 decoding iterations without early termination. Under these conditions, Method 2 has the smallest energy dissipation per bit, 46 pJ/bit, which is 20% lower than running only in Normal Mode. Overall, the average power of the three methods is 1172–1215 mW, which is 181–224 mW lower than when running only in Normal Mode.

Figure 10 shows the power breakdown for Method 2 in Normal Mode only, Low Power Mode only, and adaptive mode (Low Power Mode for 6 out of 15 total iterations). Shown are the power contributions from the variable node processors, the check node processors, and the clock tree (including registers). By itself, Low Power Mode results in a 41% reduction compared to Normal Mode only. For the adaptive mode with 6 Low Power Mode iterations out of a total of 15 iterations, this translates into a net improvement of 22% in average power. It is therefore important to realize the tradeoff between the number of Low Power Mode iterations and the number of convergence iterations (i.e., the average iterations with early termination).

Energy gains depend on the Low Power Iteration setting, since the desired BER performance and the convergence behavior (early termination and average iterations) of the proposed decoders also depend on it. The longer Low Power Mode is enabled, the longer the decoder takes to converge, and as a result the energy depends on the tradeoff between the chosen Low Power Iteration and the final convergence iteration count. Figure 11 shows the energy consumption for Methods 1, 2, and 3 when Low Power Mode is enabled for three and for six iterations over a range of SNR values, 2.2–4.6 dB. Notice that at higher SNR values the energy starts to become worse for the larger Low Power Iteration setting because of longer average convergence times (i.e., larger average iterations).

5.5. SNR Adaptive Design

In Split-Row Threshold, a larger maximum number of iterations, I_max, can improve the bit error performance. This is shown by running in Normal Mode only with a larger maximum iteration count. In this case, the BER performance of the proposed decoder is only 0.2 dB away from MinSum Normalized (no significant BER improvement is observed for still larger iteration limits). Although a higher maximum iteration count has almost no effect on the average iterations at high SNRs, it increases the average iterations at low SNRs [15] (more of the channel information is corrupted beyond the ability of LDPC to correct), which results in higher energy dissipation. Given that running in Low Power Mode at low SNRs results in larger energy savings, it is more beneficial there to use a larger Low Power Iteration with a lower I_max. Conversely, at high SNR we can use Normal Mode only with a higher maximum iteration count to reach the required BER with a smaller energy penalty than operating the decoder with a large Low Power Iteration.

These scenarios are illustrated in Figure 12, where the bit error performance versus energy per bit dissipation of the proposed decoder with Method 2 is shown under two conditions: (1) adaptive mode operation with Method 2 and the chosen Low Power Iteration and I_max settings; (2) Normal Mode only operation with a larger I_max.

Given the worst-case iteration count and the 10GBASE-T LDPC decoder throughput of 6.4 Gbps, both designs are set to 0.87 V and compared with early termination enabled. As shown in the figure, at low SNR the energy dissipation of the Method 2 decoder is about 20%–50% lower than that of the Normal Mode decoder at the same BER. However, at high SNR (above about 4.0 dB), the Normal Mode decoder attains greater than an order of magnitude improvement in BER at nearly the same energy per bit dissipation.

Therefore, using an efficient SNR detector circuit, we can switch between the two operating strategies around 4.0 dB. Similar to [33], the proposed SNR detector compares the number of unsatisfied checks with a checksum threshold at the end of the first iteration and estimates the SNR range. For the 2048-bit 10GBASE-T code, it was found that a checksum threshold of 91 after the first iteration can estimate whether the SNR is larger or smaller than 4.0 dB with a probability of 89% of being correct. With this detection scheme, the Low Power Mode iteration count and I_max can be adjusted. The SNR detector circuit requires only one additional comparator in the early termination circuit.
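
A behavioral sketch of this detector is given below; it simply counts the unsatisfied parity checks after the first iteration and compares the count against the checksum threshold of 91 quoted above (the function names and the direction of the comparison are illustrative assumptions).

CHECKSUM_THRESHOLD = 91      # empirically found for the 2048-bit 10GBASE-T code

def snr_is_high(hard_decisions, check_to_vars):
    # Estimate the SNR range from the first-iteration syndrome weight:
    # few unsatisfied checks -> SNR is likely above ~4.0 dB.
    unsatisfied = 0
    for var_indices in check_to_vars:
        parity = 0
        for j in var_indices:
            parity ^= hard_decisions[j]
        unsatisfied += parity
    return unsatisfied < CHECKSUM_THRESHOLD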

5.6. Comparison with Others

The post-layout simulation results of the proposed wordwidth adaptive decoder using Method 2 are compared with recently implemented decoders [15, 30–32] for 2048-bit LDPC codes and are summarized in Table 4. The 10GBASE-T code is implemented in [15, 30, 31]. Results for two supply voltages are reported for the Method 2 decoder: 1.3 V and 0.7 V. (Note that at 0.7 V the 10GBASE-T required throughput is met.) The supply voltage can be lowered to 0.6 V based on measurements of a previously fabricated chip [34]. At this voltage, the decoder throughput is 9.3 Gbps (greater than the 6.4 Gbps required for 10GBASE-T) while dissipating an average power of 31 mW.

The sliced message passing (SMP) scheme in [30], proposed for the Sum-Product algorithm, divides the check node processing into equal size blocks and performs the check node computation sequentially. The post-layout simulation results for a 10GBASE-T partial parallel decoder are shown in the table. The multirate decoder in [31] supports RS-LDPC codes with different code lengths (1536–3968 bits) through the use of reconfigurable permutators; the post-layout simulation results of a 10GBASE-T decoder in 90 nm CMOS are reported in the table. The partial parallel 2048-bit decoder chip of [32] is fabricated in 180 nm CMOS; this decoder uses the turbo-decoding message passing (TDMP) algorithm and supports multiple code rates between 8/16 and 14/16. The partial parallel decoder chip in [15] is fabricated in 65 nm CMOS and consists of a two-step decoder: MinSum plus a postprocessing scheme which lowers the error floor. Compared to a previous reduced wordwidth 5-bit implementation of the original Split-Row Threshold decoder [14], the proposed 6-bit decoder attains a 10% improvement in energy dissipation with 15 decoding iterations. Compared to the sliced message passing decoder [30], the proposed wordwidth adaptive decoder is about 3× smaller and has 6.8× higher throughput, with a 0.2 dB coding gain reduction. Compared to the two-step decoder chip [15], the proposed decoder has 1.7× higher throughput and dissipates 3.57× less energy with the same area, at the cost of a 0.35 dB coding gain reduction.

6. Conclusion

As high throughput LDPC decoders become more ubiquitous in upcoming communication standards, energy efficient low power decoder algorithms and architectures are a design priority. We have presented a low power adaptive wordwidth LDPC decoder algorithm and architecture based on the input patterns observed during the decoding process. Depending on the SNR and the decoding iteration, different low power settings were determined to find the best tradeoff between bit error performance and energy consumption. Of the three low power wordwidth adaptive methods explored, one implementation had a post-layout decoder area of 5.10 mm2 and attained an 85.7 Gbps throughput with early termination while dissipating 16.4 pJ/bit at 1.3 V. Compared to another 10GBASE-T design with similar area in 65 nm, operating at 0.7 V this work achieves nearly a 2× improvement in throughput, thus meeting the 6.4 Gbps required by the standard. Energy efficiency was over 3.5× better with only 0.2 dB loss in coding gain. This loss compares favorably with the nonuniform quantization bit reduction technique.