Bin Zhou, Yingning Peng, David Hwang, "Pipeline FFT Architectures Optimized for FPGAs", International Journal of Reconfigurable Computing, vol. 2009, Article ID 219140, 9 pages, 2009. https://doi.org/10.1155/2009/219140
Pipeline FFT Architectures Optimized for FPGAs
Abstract
This paper presents optimized implementations of two different pipeline FFT processors on Xilinx Spartan-3 and Virtex-4 FPGAs. Different optimization techniques and rounding schemes were explored. The implementation results achieved better performance with lower resource usage than prior art. The 16-bit 1024-point FFT with the R2^{2}SDF architecture had a maximum clock frequency of 95.2 MHz and used 2802 slices on the Spartan-3, a throughput-per-area ratio of 0.034 Msamples/s/slice. The R4SDC architecture ran at 123.8 MHz and used 4409 slices on the Spartan-3, a throughput-per-area ratio of 0.028 Msamples/s/slice. On the Virtex-4, the 16-bit 1024-point R2^{2}SDF architecture ran at 235.6 MHz and used 2256 slices, giving a 0.104 Msamples/s/slice ratio; the 16-bit 1024-point R4SDC architecture ran at 219.2 MHz and used 3064 slices, giving a 0.072 Msamples/s/slice ratio. The R2^{2}SDF was more efficient than the R4SDC in terms of throughput per area due to its simpler controller and more easily balanced rounding scheme. This paper also shows that balanced stage rounding is an appropriate rounding scheme for pipeline FFT processors.
1. Introduction
The Fast Fourier Transform (FFT), as an efficient algorithm to compute the Discrete Fourier Transform (DFT), is one of the most important operations in modern digital signal processing and communication systems. The pipeline FFT is a special class of FFT algorithms which compute the FFT in a sequential manner; it achieves real-time behavior with nonstop processing when data is continually fed through the processor. Pipeline FFT architectures have been studied since the 1970s, when real-time large-scale signal processing requirements became prevalent. Several different architectures have been proposed, based on different decomposition methods, such as the Radix-2 Multipath Delay Commutator (R2MDC) [1], Radix-2 Single-Path Delay Feedback (R2SDF) [2], Radix-4 Single-Path Delay Commutator (R4SDC) [3], and Radix-2^{2} Single-Path Delay Feedback (R2^{2}SDF) [4]. More recently, Radix-2^{2} to Radix-2^{4} SDF FFTs were studied and compared in [5]; in [6] an R2^{3}SDF was implemented and shown to be area-efficient for 2 or 3 multipath channels. Each of these architectures can be classified as multipath or single-path. Multipath approaches can process several data inputs simultaneously, though they have limitations on the number of parallel datapaths, FFT points, and radix. This paper focuses on single-path architectures.
From the hardware perspective, Field Programmable Gate Array (FPGA) devices are increasingly being used for hardware implementations in communications applications. FPGAs at advanced technology nodes can achieve high performance while offering greater flexibility, shorter design time, and lower cost. As such, FPGAs are becoming more attractive for FFT processing applications and are the target platform of this paper.
The primary goal of this research is to optimize pipeline FFT processors to achieve better performance and lower cost than prior-art implementations. In this paper, two comparative implementations (R4SDC and R2^{2}SDF) of pipeline FFT processors targeted towards Xilinx Spartan-3 and Virtex-4 FPGAs are presented. Different parameters such as throughput, area, and SQNR are compared.
The rest of the paper is organized as follows. Section 2 discusses the methodology used to select the two architectures. Section 3 describes the implementation tools and optimization methods used to improve performance and reduce resource utilization. Section 4 explains the balanced rounding schemes that were implemented and their impact on the signal-to-quantization-noise ratio (SQNR). Section 5 presents the results, and Section 6 presents some brief conclusions.
2. Pipeline FFT Architectures
2.1. Architecture Selection
The major characteristics and resource requirements of several pipeline FFT architectures are listed in Table 1. Computational efficiency is measured by resource utilization percentage: how often the resources are in an active state versus an idle state. As shown in the table, the Radix-4 Single-Path Delay Commutator (R4SDC) and Radix-2^{2} Single-Path Delay Feedback (R2^{2}SDF) architectures provide the highest computational efficiency and were selected for implementation. The R4SDC architecture is appealing due to the computational efficiency of its addition; however, its controller design is complex. The R2^{2}SDF architecture has a simple controller but a less efficient addition scheme. Both designs are radix-4 and scalable to an arbitrary FFT size N (where N is a power of 4).

2.2. R4SDC Architecture
R4SDC Algorithm
The R4SDC was proposed by Bi and Jones [3] and uses an iterative architecture to calculate the radix-4 FFT. The key to the algorithm is splitting the FFT into different stages by using different radices. In this paper, the radix is always 4. The derivation starts from the fundamental DFT equation for an N-point FFT:

X(k) = \sum_{n=0}^{N-1} x(n) W_N^{nk}, \quad k = 0, 1, \ldots, N - 1, \quad (1)

where W_N = e^{-j 2\pi / N}.
N can be represented as a composite of p numbers and defined as

N = r_1 r_2 \cdots r_p, \quad (2)

where i is the stage index and r_i is the stage radix. After substituting (2) into (1) and applying the periodicity relationship W_N^{kN} = 1, (1) can be rewritten in the stage-decomposed form (3).
Indices n and k can be decomposed in the same composite fashion, and (3) becomes (4).
Therefore, the complete N-point DFT can be written as p different stages with intermediate results in a recursive equation, in which W_N denotes the twiddle factor. For radix-4, the equations specialize with r_i = 4 at every stage.
The R4SDC architecture is presented in Figures 1–3. An N-point radix-4 pipeline FFT is decomposed into log_4 N stages. Each stage consists of a commutator, a butterfly, and a complex multiplier. Figure 1 outlines the commutator for the R4SDC. Its six shift registers provide the required delays. The control signals are generated by logic functions. The butterfly element, shown in Figure 2, performs the radix-4 summation, where trivial multiplications (by ±1 and ±j) are replaced by add/subtract operations and imaginary/real part swapping. Figure 3 shows the overall architecture.
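To make the trivial-multiplication substitution concrete, the sketch below (a hypothetical Python model, not the paper's VHDL) computes the 4-point butterfly; multiplication by W_4^1 = -j is realized as an imaginary/real swap with one negation:

```python
def radix4_butterfly(x0, x1, x2, x3):
    """4-point DFT butterfly; all twiddle factors (+/-1, +/-j) are trivial."""
    a = x0 + x2            # even-index partial sums
    b = x0 - x2
    c = x1 + x3
    d = x1 - x3
    # Multiply d by -j without a real multiplier: swap parts and negate.
    mjd = complex(d.imag, -d.real)
    return (a + c, b + mjd, a - c, b - mjd)
```

All four outputs are obtained with additions, subtractions, and part swaps only; no hardware multiplier is needed.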
2.3. R2^{2}SDF Architecture
The R2^{2}SDF architecture was proposed by He and Torkelson [4] and also begins from (1). He applies a 3-dimensional index map:

n = \left\langle \frac{N}{2} n_1 + \frac{N}{4} n_2 + n_3 \right\rangle_N, \quad k = \left\langle k_1 + 2 k_2 + 4 k_3 \right\rangle_N,

where n_1, n_2, k_1, k_2 \in \{0, 1\} and n_3, k_3 \in \{0, 1, \ldots, N/4 - 1\}.
Using the Common Factor Algorithm (CFA) to decompose the twiddle factor, the FFT can be reconstructed as a set of 4 DFTs of length N/4:

X(k_1 + 2k_2 + 4k_3) = \sum_{n_3=0}^{N/4-1} \left[ H(k_1, k_2, n_3)\, W_N^{n_3 (k_1 + 2k_2)} \right] W_{N/4}^{n_3 k_3},

where the butterfly output H(k_1, k_2, n_3) can be expressed as

H(k_1, k_2, n_3) = \left[ x(n_3) + (-1)^{k_1} x\left(n_3 + \frac{N}{2}\right) \right] + (-j)^{k_1 + 2k_2} \left[ x\left(n_3 + \frac{N}{4}\right) + (-1)^{k_1} x\left(n_3 + \frac{3N}{4}\right) \right].
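The decomposition can be checked numerically against a direct DFT. The following sketch (stdlib-only Python with hypothetical function names; the paper's implementation is VHDL) evaluates the 4-DFTs-of-length-N/4 form for N = 16:

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT, used as the reference."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) for k in range(N)]

def r22_dft(x):
    """DFT computed via the radix-2^2 decomposition X(k1 + 2*k2 + 4*k3)."""
    N = len(x)
    w = lambda m, e: cmath.exp(-2j * cmath.pi * e / m)  # twiddle W_m^e
    X = [0j] * N
    for k1 in (0, 1):
        for k2 in (0, 1):
            for k3 in range(N // 4):
                acc = 0j
                for n3 in range(N // 4):
                    # Butterfly H(k1, k2, n3): two radix-2 stages, the second
                    # absorbing the trivial factor (-j)^(k1 + 2*k2).
                    h = (x[n3] + (-1) ** k1 * x[n3 + N // 2]) \
                        + (-1j) ** (k1 + 2 * k2) * (x[n3 + N // 4]
                                                    + (-1) ** k1 * x[n3 + 3 * N // 4])
                    acc += h * w(N, n3 * (k1 + 2 * k2)) * w(N // 4, n3 * k3)
                X[k1 + 2 * k2 + 4 * k3] = acc
    return X
```

For any input vector whose length is a power of 4, the two routines agree to floating-point precision.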
The R2^{2}SDF algorithm can be mapped to the architecture shown in Figures 4–6. The number of stages is log_4 N. Every stage contains two butterfly elements, each with an associated feedback shift register. A simple counter creates the control signals. Pipeline registers can be added between butterfly elements and between stages. Registers are also added inside the complex multipliers to reduce the critical path from the summation to the multiplier. The total latency is approximately N cycles.
3. FPGABased Implementations and Optimizations
3.1. Specifications, Tool Flow, and Verification
Both FFT architectures were implemented in generic synthesizable VHDL code and verified by ModelSim simulation against MATLAB reference scripts. Synplify or XST was used to perform synthesis, and Xilinx ISE was used for place and route. The architectures were optimized to achieve maximum throughput with minimal area (slices). The tools and development environment used are shown in Table 2.

3.2. General Optimization Methods
Some general optimization measures were performed, including FSM encoding, retiming, and CAD-related optimizations. Since the FFT processors were targeted to Xilinx Spartan-3 and Virtex-4 FPGAs (and also synthesized for Virtex-E FPGAs), the SRL16 primitive, which implements a 16-bit shift register within a single LUT, was inferred wherever possible to conserve LUTs. This particularly helped the R2^{2}SDF architecture because of its large number of shift registers. The R4SDC also benefited from SRL16 components in its commutator registers. Block RAMs were used to store twiddle factors, which dramatically reduced combinational logic utilization.
3.3. ArchitectureSpecific Optimization
A number of architecture-specific optimizations were used. For both architectures, a three-multiplier complex multiplication technique was used. Usually, a complex multiplication is computed directly as

(a + jb)(c + jd) = (ac - bd) + j(ad + bc).

This requires 4 multiplications and 2 add/sub operations. As is well known, the computation can be rearranged to save one multiplier, for example,

k_1 = c(a + b), \quad k_2 = a(d - c), \quad k_3 = b(c + d),
(a + jb)(c + jd) = (k_1 - k_3) + j(k_1 + k_2).

This requires only 3 multiplications and 5 add/sub operations. Pipeline registers were also added in order to avoid the long critical path created by chaining real adders and multipliers. Figure 7 shows the inserted pipeline stages, which effectively reduce the critical path (REG denotes a pipeline register).
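One common three-multiplier rearrangement can be sketched as follows (a hypothetical Python model; the exact factoring used in the hardware may differ):

```python
def cmul3(a, b, c, d):
    """Compute (a + jb)(c + jd) with 3 multiplications and 5 add/subs."""
    k1 = c * (a + b)    # shared product
    k2 = a * (d - c)
    k3 = b * (c + d)
    return (k1 - k3, k1 + k2)   # (real part, imaginary part)
```

Expanding the terms gives k1 - k3 = ac - bd and k1 + k2 = ad + bc, matching the direct four-multiplier form.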
3.3.1. R4SDC Optimization
The R4SDC has a complex controller, which creates a long critical path. Observing that all stages use the same control bits but in different sequences, a ROM with an incrementing address proved a simpler solution than a complex FSM. Pipeline registers were also added to the butterfly elements, to the multipliers, and between stages. Figure 8 illustrates the addition of pipeline registers to cut the critical path efficiently within the butterfly element. Since two consecutive add/sub elements create a long propagation path, they were split by pipelining. Figure 9 shows the addition of pipeline registers between major elements and between stages. For timing purposes, the applicable control signals were also buffered.
Some special measures were taken within the controller in order to keep the signals properly timed. The twiddle factors must also be delayed to match the delayed data sequence.
3.3.2. R2^{2}SDF Optimization
Due to its simple control requirements, a simple counter was sufficient as the entire controller for the R2^{2}SDF. A fast adder could potentially speed up this counter relative to a simple ripple-carry implementation; however, due to the small number of stages (log_4 N, i.e., 5 stages for a 1024-point FFT), no substantial savings were found with a fast adder. Pipeline registers were added between major elements and also between stages. Note that the R2^{2}SDF is not suited to pipeline registers within individual butterfly elements, because this would break the timing of the data feedback path. Figure 10 presents the pipelined stages. Registers were only added between element units; in addition, registers were added as necessary to keep the control signals properly timed.
4. Rounding Scheme and SQNR
Due to finite-wordlength effects, the implemented FFT outputs are always scaled by 1/N. This scaling factor was distributed as divide-by-two operations throughout the stages to reduce error propagation. As is well known, truncation or conventional rounding (denoted here as round-half-up) introduces a notable quantization error bias in divide-by-two operations, and this bias accumulates throughout the processing chain [5]. To alleviate the bias, three unbiased rounding methods were investigated for division by two.
Sign-Bit-Based Rounding
In this scheme, if the MSB of the number to be divided is 0 (i.e., the number is positive), it is rounded half up, giving a positive bias. If the MSB is 1 (i.e., the number is negative), it is truncated, leaving a negative bias. Assuming that positive and negative numbers are uniformly distributed, this approach leads to an unbiased rounding scheme overall. However, selecting the rounding based on the MSB means that two rounding methods coexist at a single rounding position, which requires extra hardware. This lengthens the critical path and harms performance, so this scheme was not chosen.
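The unbiasedness claim is easy to check numerically (a hypothetical Python sketch; plain integers stand in for two's-complement fixed-point words):

```python
def div2_signbit(x):
    """Divide by two: round-half-up for x >= 0, truncate (toward -inf) for x < 0."""
    return (x + 1) >> 1 if x >= 0 else x >> 1

# Over a symmetric input range, the +0.5 errors on positive odd values
# cancel the -0.5 errors on negative odd values.
errors = [div2_signbit(x) - x / 2 for x in range(-8, 8)]
bias = sum(errors) / len(errors)
```

The per-sample errors are nonzero on odd inputs, but the average bias over the symmetric range is exactly zero.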
Randomized Rounding [7]
In this scheme, if the bit to be rounded off is 1, the result is randomly rounded up or down; if it is 0, rounding proceeds as before. Statistically, no bias exists. However, this method requires a random bit generator and a long accumulation time, demanding substantial extra hardware and significantly affecting performance, so it was not implemented.
Balanced Stages Rounding [11]
This method balances the rounding bias between stages: round-half-up and truncation are applied in an interlaced fashion, as shown in Figure 11.
For an even number of stages, this achieves the same result as the randomized approach while using fewer resources and simpler control. This scheme fits the R2^{2}SDF architecture particularly well, because the two butterfly elements within the same stage of the R2^{2}SDF can be balanced naturally. This method was chosen for the designs presented in this paper.
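The interlaced scheme can be modeled in a few lines (a hypothetical Python sketch): alternating round-half-up and truncation across consecutive divide-by-two stages largely cancels the per-stage bias that a single scheme accumulates.

```python
def div2(x, round_half_up):
    """One divide-by-two stage on an integer word."""
    return (x + 1) >> 1 if round_half_up else x >> 1

def through(x, modes):
    """Pass a sample through a chain of divide-by-two stages."""
    for m in modes:
        x = div2(x, m)
    return x

samples = range(-64, 64)

def bias(modes):
    """Average error versus the exact scaled value over a uniform range."""
    n = len(modes)
    return sum(through(s, modes) - s / 2 ** n for s in samples) / len(samples)

balanced = bias([True, False])   # interlaced: round-half-up, then truncate
trunc    = bias([False, False])  # truncation in both stages
```

In this two-stage model the balanced chain's residual bias is one third of the all-truncation bias.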
To compute the signal-to-quantization-noise ratio (SQNR), randomly generated noise was used as the input to the pipeline FFT. A MATLAB script generated double-precision floating-point FFT results, which served as the reference values. Figure 12 shows how they are compared with the fixed-point implementations. The random experiments were repeated several times and averaged to obtain a better error estimate.
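The comparison in Figure 12 reduces to the ratio of signal energy to quantization-noise energy. A minimal sketch (hypothetical Python function names; the paper used MATLAB scripts):

```python
import math

def sqnr_db(reference, fixed_point):
    """SQNR = 10*log10(signal energy / quantization-noise energy), in dB."""
    signal = sum(abs(r) ** 2 for r in reference)
    noise = sum(abs(r - f) ** 2 for r, f in zip(reference, fixed_point))
    return 10 * math.log10(signal / noise)

# Example: a unit-magnitude reference quantized to 8 fractional bits.
ref = [complex(math.cos(k), math.sin(k)) for k in range(8)]
q = [complex(round(r.real * 256) / 256, round(r.imag * 256) / 256) for r in ref]
```

Here q models an output quantized to 8 fractional bits; real fixed-point FFT outputs would take its place.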
5. Results and Analysis
5.1. SQNR Results
Figure 13 shows the SQNR results with different rounding schemes (balanced stages, truncation, and round-half-up) for the R4SDC and R2^{2}SDF, respectively, for a 16-bit data width (input data, twiddle factors, and output data are all 16 bits). Balanced stage rounding typically improved the SQNR by 12 dB. The balanced stages scheme gives better SQNR because it leverages the cancellation of rounding errors between stages, whereas truncation and round-half-up each accumulate a one-sided bias.
(a) R4SDC
(b) R2^{2}SDF
Table 3 presents the SQNR results as they vary with FFT size. The larger the FFT, the worse the SQNR, due to the longer processing chain. Both architectures gave comparable SQNR results. Larger data widths would also give better SQNR, but at the cost of increased area and a longer critical path. A 16-bit wordlength is a sufficient choice for many signal processing applications.

FFT architectures with wordlengths smaller than 16 bits were also implemented; Figure 14 shows an example for the R4SDC architecture. Every bit of wordlength increment brings about 6 dB of SQNR gain.
5.2. Implementation Results
Table 4 gives the performance results of both architectures for different FFT sizes on Spartan-3 FPGAs (90 nm) [12]. The R2^{2}SDF achieved a smaller area and better throughput per area than the R4SDC. Due to the pipelined design, the maximum clock frequency did not change drastically with FFT size for either design. As expected, the throughput per area decreases for larger FFT sizes, which require more stages and more area.

Table 5 presents the results on Virtex-4 FPGAs (90 nm) using a 16-bit wordlength. Virtex-4 FPGAs have hardwired DSP modules called DSP48 blocks, which are high-speed modules optimized for signal processing operations such as multiply-accumulate and FIR filtering. By utilizing these DSP48 blocks, the maximum clock frequency increased substantially over the Spartan-3 devices.

Comparisons with prior art are shown in Table 6, which lists publicly available pipeline FFT implementations on FPGAs from the literature. For a fair comparison, since many of the prior-art implementations targeted Virtex-E FPGAs (180 nm), our designs were also implemented on the Virtex-E. The best published performance for the R2^{2}SDF method for a 16-bit 1024-point FFT is by Sukhsawas and Benkrid [8]. They used Handel-C as a rapid prototyping language and implemented the design on Virtex-E FPGAs, achieving an 82 MHz maximum clock frequency in 7365 slices, a throughput-per-area ratio of 0.011 Msamples/s/slice.

On the Virtex-E, our R2^{2}SDF achieved a better performance of 95 MHz and a smaller area of 5008 slices, giving a superior throughput-per-area ratio of 0.019 Msamples/s/slice. Our R4SDC architecture was also superior to prior art, running at 94.2 MHz and using 7052 slices, a throughput-per-area ratio of 0.013 Msamples/s/slice.
Another point of reference is the Xilinx FFT IP core. For comparison's sake, the IP core for the Virtex-E is shown in the table. The Virtex-E core shows four times the latency (4096 cycles) due to its internal architecture. Its throughput-per-area ratio is also only 0.011 Msamples/s/slice. Note that none of the throughput-per-area comparisons take block RAMs into account, though each of the designs required a similar number of block RAMs. The Xilinx FFT core could perform better with the newer XtremeDSP technology: on a Virtex-4 4vsx25-10 device, a 1024-point FFT can be completed within 2.85 microseconds in the best case, at a cost of 2141 slices, 7 block RAMs, and 46 XtremeDSP slices [13].
6. Conclusions
In this paper, optimized implementations of R4SDC and R2^{2}SDF pipeline FFT processors on Spartan-3, Virtex-4, and Virtex-E FPGAs were presented. The 16-bit 1024-point FFT with the R2^{2}SDF architecture had a maximum clock frequency of 95.2 MHz and used 2802 slices on the Spartan-3. The R4SDC ran at 123.8 MHz and used 4409 slices on the Spartan-3. On the Virtex-4 device, these numbers became 235.6 MHz and 2256 slices for the R2^{2}SDF and 219.2 MHz and 3064 slices for the R4SDC, respectively. Different rounding schemes were analyzed and compared. SQNR analysis showed that the balanced stages rounding scheme gave high SQNR with small overhead. The SQNR gains around 6 dB with every bit of wordlength increment.
The R2^{2}SDF architecture outperformed the R4SDC architecture in terms of throughput per area, a measure of efficiency, for the 1024-point FFT. This is due to its simpler controller and its compatibility with pipeline register insertion. Both architectures have comparable maximum clock frequency and SQNR when using the balanced stages rounding scheme.
References
L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing, Prentice-Hall, Upper Saddle River, NJ, USA, 1975.
E. H. Wold and A. M. Despain, “Pipeline and parallel-pipeline FFT processors for VLSI implementation,” IEEE Transactions on Computers, vol. 33, no. 5, pp. 414–426, 1984.
G. Bi and E. V. Jones, “A pipelined FFT processor for word-sequential data,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 12, pp. 1982–1985, 1989.
S. He and M. Torkelson, “A new approach to pipeline FFT processor,” in Proceedings of the 10th International Parallel Processing Symposium (IPPS '96), pp. 766–770, Honolulu, Hawaii, USA, April 1996.
J.-Y. Oh and M.-S. Lim, “New radix-2 to the 4th power pipeline FFT processor,” IEICE Transactions on Electronics, vol. E88-C, no. 8, pp. 1740–1746, 2005.
T. Sansaloni, A. Pérez-Pascual, V. Torres, and J. Valls, “Efficient pipeline FFT processors for WLAN MIMO-OFDM systems,” Electronics Letters, vol. 41, no. 19, pp. 1043–1044, 2005.
S. Johansson, S. He, and P. Nilsson, “Wordlength optimization of a pipelined FFT processor,” in Proceedings of the 42nd Midwest Symposium on Circuits and Systems, vol. 1, pp. 501–503, 1999.
S. Sukhsawas and K. Benkrid, “A high-level implementation of a high performance pipeline FFT on Virtex-E FPGAs,” in Proceedings of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI '04), pp. 229–232, February 2004.
Xilinx, Inc., “High-Performance 1024-Point Complex FFT/IFFT V2.0,” San Jose, Calif, USA, July 2000, http://www.xilinx.com/ipcenter.
Sundance Multiprocessor Technology Ltd., 1024-Point Fixed Point FFT Processor, July 2008, http://www.sundance.com/web/files/productpage.asp?STRFilter=FC200.
P. Kabal and B. Sayar, “Performance of fixed-point FFT's: rounding and scaling considerations,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '86), pp. 221–224, 1986.
B. Zhou and D. Hwang, “Implementations and optimizations of pipeline FFTs on Xilinx FPGAs,” in Proceedings of the International Conference on Reconfigurable Computing and FPGAs (ReConFig '08), pp. 325–330, 2008.
Xilinx, Inc., “Xilinx Fast Fourier Transform V3.2 Product Specification,” San Jose, Calif, USA, January 2006.
Copyright
Copyright © 2009 Bin Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.