VLSI Design
Volume 2014 (2014), Article ID 690594, 13 pages
http://dx.doi.org/10.1155/2014/690594
Research Article

Radix-2α/4β Building Blocks for Efficient VLSI’s Higher Radices Butterflies Implementation

Laboratory of Signals and Systems Integrations, Electrical and Computer Engineering Department, Université du Québec à Trois-Rivières, QC, Canada G9A 5H7

Received 27 December 2013; Revised 12 March 2014; Accepted 26 March 2014; Published 13 May 2014

Academic Editor: Dionysios Reisis

Copyright © 2014 Marwan A. Jaber and Daniel Massicotte. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper describes an embedded FFT processor in which the higher radices butterflies maintain only one complex multiplier in their critical path. Building on the radix-r fast Fourier factorization and on parallel FFT processing, we introduce a new radix-r FFT concept in which the radix-r butterfly computation is formulated as a combination of radix-2α/4β butterflies implemented in parallel. As a result, VLSI implementation of higher-radix butterflies becomes feasible, since the structure retains approximately the same complexity as the radix-2/4 butterfly and is obtained by block building of radix-2/4 modules. The block-building process duplicates the block circuit diagram of the radix-2/4 module and reuses it by means of a feedback network.

1. Introduction

For the past decades, the main concern of researchers has been to develop fast Fourier transform (FFT) algorithms that minimize the number of required operations. Since Cooley and Tukey showed that the number of multiplications required to compute the discrete Fourier transform (DFT) of a sequence can be considerably reduced by using one of the fast Fourier transform (FFT) algorithms [1], interest has arisen both in finding applications for this powerful transform and in considering various FFT software and hardware implementations.

The DFT computational complexity increases with the square of the transform length N and thus becomes expensive for large N. Some algorithms used for efficient DFT computation, known as fast DFT computation algorithms, are based on the divide-and-conquer approach. The principle of this method is that a large problem is divided into smaller subproblems that are easier to solve. In the FFT case, dividing the work into subproblems means that the input data can be divided into subsets from which the DFT is computed, and then the DFT of the initial data is reconstructed from these intermediate results. Some of these methods are known as the Cooley-Tukey algorithm [1], the split-radix algorithm [2], the Winograd Fourier transform algorithm (WFTA) [3], and others, such as the common factor algorithms [4].
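The divide-and-conquer principle can be illustrated with a minimal recursive radix-2 Cooley-Tukey sketch (in Python, for illustration only; the function name is ours, not from the cited works):

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # DFT of the even-indexed subset
    odd  = fft_radix2(x[1::2])   # DFT of the odd-indexed subset
    out = [0] * N
    for k in range(N // 2):
        # twiddle factor W_N^k = exp(-2j*pi*k/N) recombines the two halves
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out
```

Each level halves the problem, which is what brings the cost down from O(N^2) to O(N log N).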

The problem with the computation of an FFT with increasing N is associated with the straightforward computational structure, the coefficient multiplier memory accesses, and the number of multiplications to be performed. The overall number of arithmetic operations deployed in the computation of an N-point FFT decreases with increasing radix; as a result, however, the butterfly complexity increases in terms of complex arithmetic computation, parallel inputs, connectivity, and the number of phases in the butterfly's critical path delay. The higher-radix butterfly therefore poses a nontrivial VLSI implementation problem (i.e., an increasing butterfly critical path delay), which explains why the majority of FFT VLSI implementations are based on radix 2 or 4, with their low butterfly complexity. The advantage of using a higher radix is that the number of multiplications and the number of stages required to execute an FFT decrease [4–6].

The most recent attempts to reduce the complexity of the higher-radix butterfly's critical path were based on the concept of a radix-r fast Fourier transform (FFT) [8, 9], in which the radix-r butterfly computation is formulated as composed of r engines with identical structures and a systematic means of accessing the corresponding multiplier coefficients. This concept enables the design of a butterfly processing element (BPE) with the lowest count of complex multipliers and adders, which utilizes complex multipliers in parallel to implement each of the butterfly computations. Another strategy targets hardware-oriented radix-2^k structures, an alternative way of representing higher radices by means of simpler, less complicated butterflies that exploit the symmetry and periodicity of the roots of unity to further reduce the coefficient multiplier memory accesses [10–20].

Based on the higher radices butterfly and the parallel FFT concepts [21, 22], we introduce the structure of higher-radix multiplexed radix-2α/4β butterflies that reduces the resources in terms of complex multipliers and adders while maintaining the same throughput and the same speed as the butterfly structures proposed in [13–20].

This paper is organized as follows. Section 2 describes the higher radices butterfly computation and Section 3 details the FFT parallel processing. Section 4 elaborates the proposed higher radices butterflies, Section 5 presents the performance evaluation of the proposed method, and Section 6 is devoted to the conclusion.

2. Higher Radices’ Butterfly Computation

The basic operation of a radix-r PE is the so-called butterfly computation, in which r inputs are combined to give r outputs via the operation

X = B_r · x, (1)

where x = [x(0), x(1), …, x(r − 1)]^T and X = [X(0), X(1), …, X(r − 1)]^T are, respectively, the butterfly's input and output vectors. B_r is the butterfly matrix, which can be expressed as

B_r = W_r × T_r (2)

for the decimation-in-frequency (DIF) process and as

B_r = T_r × W_r (3)

for the decimation-in-time (DIT) process. In both cases the twiddle factor matrix, W_r, is a diagonal matrix whose nonzero entries are powers of the root of unity W_N = e^{−j2π/N}, determined by the stage and butterfly indices, and T_r is the adder tree matrix within the butterfly structure, namely, the r × r DFT matrix [4].
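The two orderings in (2) and (3) can be sketched numerically as follows (a hedged NumPy illustration, ours rather than the paper's exact formulation; T_r is taken as the r-point DFT matrix, and the twiddle entries are passed in as a vector):

```python
import numpy as np

def adder_tree(r):
    """T_r: the r-point DFT matrix, i.e. the adder tree of a radix-r butterfly."""
    n = np.arange(r)
    return np.exp(-2j * np.pi * np.outer(n, n) / r)

def butterfly_dif(x, twiddles):
    """DIF ordering: adder tree first, then the diagonal twiddle matrix."""
    return np.diag(twiddles) @ (adder_tree(len(x)) @ x)

def butterfly_dit(x, twiddles):
    """DIT ordering: twiddle multiplication first, then the adder tree."""
    return adder_tree(len(x)) @ (np.diag(twiddles) @ x)
```

With unit twiddles the two orderings coincide and reduce to a plain r-point DFT; with nontrivial twiddles they differ, which is exactly the DIF/DIT distinction.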

As seen from (2) and (3), the adder tree, T_r, is almost identical for the two algorithms; the only difference is the order in which the twiddle factor multiplication and the adder tree are computed. A straightforward implementation of the adder tree is not effective for higher-radix butterflies because of the additional complex multipliers in the butterflies' critical path, which complicate their VLSI implementation.

Let w(l, m) denote the element of the lth line and the mth column of the adder tree matrix T_r, with l, m = 0, 1, …, r − 1, so that w(l, m) = W_r^{⟨lm⟩_r}, where ⟨·⟩_r represents the modulo-r operation. The set of twiddle factor matrices is then indexed by the radix r of the FFT, the transform size N, and the stage (iteration) index i = 0, 1, …, n − 1, with n = log_r N the number of stages. With these definitions, the twiddle factor matrix in (2) and (3) can be written in a closed indexed form for each stage of the FFT process [7, 8], yielding stage-dependent expressions for the DIF butterfly and, correspondingly, for the DIT butterfly, where X(k) is the kth butterfly output, x(m) is the mth butterfly input, and ⌊·⌋ represents the integer-part operator.

As a result, the kth transform output during each stage can be written explicitly for the modified DIF process and, analogously, for the modified DIT process.

The conceptual key to the modified radix-r FFT butterfly is the formulation of the radix-r butterfly as composed of r engines with identical structures and a systematic means of accessing the corresponding multiplier coefficients [8, 9]. This enables the design of an engine with the lowest count of complex multipliers and adders, which utilizes complex multipliers in parallel to implement each of the butterfly computations. There is a simple mapping from the three indices (FFT stage, butterfly, and element) to the addresses of the multiplier coefficients, obtained with the FFT address generator proposed in [24]. For a single-processor environment, this type of butterfly with parallel multipliers decreases the time delay of the complete FFT by a factor equal to the number of parallel multipliers. A second aspect of the modified radix-r FFT butterfly is that it is also useful in parallel multiprocessing environments. In essence, the precedence relations between the engines in the radix-r FFT are such that the execution of the r engines in parallel is feasible during each FFT stage. If each engine is executed on the modified processing element (PE), each of the parallel processors always executes the same instruction simultaneously, which is very desirable for SIMD implementation on some of the latest DSP cards.

Based on this concept, Kim and Sunwoo proposed a multiplexing scheme that reduces the number of complex multipliers for the radix-8 butterfly from 11 to 5 [25].

3. Parallel FFT Processing

Over the past decades there have been several attempts to parallelize the FFT algorithm, mostly based on parallelizing each stage (iteration) of the FFT process [26–28]. The most successful FFT parallelizations were accomplished by parallelizing the loops within each stage or iteration of the FFT process [29, 30] or by focusing on memory hierarchy utilization, achieved by combining production and consumption of butterfly results, data reuse, and FFT parallelism [31].

The DFT is defined by

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{nk}, k = 0, 1, …, N − 1, (11)

where x(n) is the input sequence, X(k) is the output sequence, N is the transform length, and W_N is the Nth root of unity: W_N = e^{−j2π/N}. Both x(n) and X(k) are complex-valued sequences.
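A direct evaluation of this definition, for reference (an O(N^2) Python sketch; the function name is ours):

```python
import cmath

def dft(x):
    """Direct evaluation of X(k) = sum_n x(n) * W_N^(n*k), W_N = exp(-2j*pi/N)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)  # Nth root of unity
    return [sum(x[n] * W ** (n * k) for n in range(N)) for k in range(N)]
```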

Let x(n) be the input sequence of size N and let p denote the degree of parallelism, where N is a multiple of p. Equation (11) can then be rewritten by splitting the input into the p interleaved subsequences x(pm + q), with q = 0, 1, …, p − 1 and m = 0, 1, …, N/p − 1, so that X(k) is obtained as a twiddle-weighted combination of p Fourier transforms of order N/p, each of which can be computed independently [9]. In particular, for p = 4 the transform splits into four N/4-point Fourier transforms of the subsequences x(4m), x(4m + 1), x(4m + 2), and x(4m + 3), recombined through the twiddle factors W_N^k, W_N^{2k}, and W_N^{3k}.
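The decomposition of an N-point DFT into p interleaved sub-transforms can be sketched as follows (a hedged NumPy illustration; the index mapping n = pm + q is the standard decimation-in-time split, and `np.fft.fft` stands in for the N/p-point sub-transforms):

```python
import numpy as np

def dft_parallel(x, p):
    """Split an N-point DFT into p interleaved N/p-point DFTs.
    The p sub-DFTs are independent (parallelizable); only the
    recombination needs the twiddle factors W_N^(q*k)."""
    N = len(x)
    M = N // p
    # p independent sub-DFTs over the interleaved subsequences x[q::p]
    subs = [np.fft.fft(x[q::p]) for q in range(p)]
    k = np.arange(N)
    X = np.zeros(N, dtype=complex)
    for q in range(p):
        # each sub-DFT is periodic in k with period M = N/p
        X += np.exp(-2j * np.pi * q * k / N) * subs[q][k % M]
    return X
```

This is the structural basis of the four- and eight-parallel pipelined configurations compared later in the paper.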

4. The Proposed Higher Radices Butterflies

Most of the FFT computation is done within the butterfly loops, so any algorithm that reduces the number of additions and multiplications in these loops reduces the overall computation time. This reduction is achieved either by targeting trivial multiplications, which gives a limited speedup, or by parallelizing the FFT, which has a significant effect on the execution time. In this section we restrict the elaboration of the proposed radix-2α/4β butterflies (the radix-2/4 families) to the DIT FFT process. By rewriting (3) and applying the parallel FFT concept introduced in Section 3 to its kernel, (13) can be expressed as a combination of smaller radix-2/4 butterflies.

It is to be noted that the twiddle-factor notation in all figures of this paper represents the set of twiddle factors associated with the corresponding butterfly input.

For the radix-4 butterfly (r = 4 = 2^2), we can expand (13) accordingly; the conventional radix-2^2 (MDC-R2^2) BPE expressed in terms of radix-2 butterflies is illustrated in Figure 1.
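As a sketch of this radix-2^2 view (ours, not the paper's exact SFG), a 4-point DFT reduces to two layers of radix-2 butterflies whose only inter-stage factor is the trivial rotation −j, so no general complex multiplier is needed inside the block:

```python
def radix4_via_radix2(x0, x1, x2, x3):
    """4-point DFT built from two layers of radix-2 butterflies
    (the radix-2^2 decomposition)."""
    # first layer of radix-2 butterflies
    a0, a1 = x0 + x2, x0 - x2
    a2, a3 = x1 + x3, -1j * (x1 - x3)   # trivial rotation by W_4^1 = -j
    # second layer of radix-2 butterflies
    X0, X2 = a0 + a2, a0 - a2
    X1, X3 = a1 + a3, a1 - a3
    return X0, X1, X2, X3
```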

Figure 1: Conventional radix-2^2 (MDC-R2^2) BPE (butterfly processing element).

The use of resources can also be reduced by a feedback network and a multiplexing network, where the feedback network feeds each output of a radix-2 adder network back to the corresponding butterfly input and the multiplexers alternately pass either the input data or the feedback to the corresponding radix-2 adder network, as illustrated in Figure 2(a) [23]. The circuit block diagram of the radix-2 adder network, which consists of only two complex adders, is shown in Figure 2(b).

Figure 2: (a) Proposed multiplexed radix-2^2 (MuxMDC-R2^2) BPE and (b) block circuit diagram of the radix-2 adder network [7].

On the rising edge of the clock, the input data are fed to the butterfly inputs of the system presented in Figure 1. In order to complete the butterfly operations within one clock cycle, the clock period must cover the butterfly's critical path delay, where T is the time required to perform one complex multiplication/addition. The timing block diagram of Figure 1 is sketched in Figure 3.

Figure 3: Timing block diagram of Figure 1.

On the rising edge of the clock, the input data are fed to the butterfly inputs of the system presented in Figure 2(a), and on the falling edge the feedback data are fed to the butterfly inputs. In order to complete the butterfly operations within one clock cycle, an analogous condition on the clock period must be satisfied; the timing block diagram of Figure 2(a) is illustrated in Figure 4.

Figure 4: Timing block diagram of Figure 2(a).

Further block building of these modules can be achieved by duplicating the block circuit diagram of Figure 2(a) and combining the copies to obtain the radix-8 MDC-R2^3 BPE; for this case (r = 8 = 2^3), (4) can be expanded accordingly, and the signal flow graph (SFG) of the DIT conventional MDC-R2^3 BPE is illustrated in Figure 5. The resources of the conventional MDC-R2^3 BPE can also be reduced by means of the partially multiplexed radix-2^2 and a feedback network, yielding the proposed MuxMDC-R2^3 BPE structure in Figure 6.

Figure 5: Conventional MDC-R2^3 BPE.
Figure 6: Proposed MuxMDC-R2^3 BPE based on the partial MuxMDC-R2^2.

The clock timing of Figure 5 is computed from its critical path, which includes the time required to execute one complex multiplication on a constant multiplier; the clock timing of the proposed MuxMDC-R2^3 is estimated in the same way.

The overall timing block diagram of the proposed MuxMDC-R2^3 is sketched in Figure 7. In Figure 6, the inputs are multiplied by the twiddle factors in the general case and by the trivial constant factors (±√2/2)(1 ∓ j), −j, or 1 otherwise.
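The trivial nature of these factors can be checked directly: inside a radix-2^3 block the inter-stage factors are eighth roots of unity, and W_8^1 and W_8^3 reduce to the single real constant √2/2, so a constant multiplier suffices (a quick numerical check, assuming the usual W_N = e^{−j2π/N} convention):

```python
import numpy as np

C = np.sqrt(2) / 2  # the single real constant needed by a constant multiplier
# inter-stage factors of a radix-2^3 block and their constant-multiplier forms
w8 = {0: 1.0, 1: C * (1 - 1j), 2: -1j, 3: -C * (1 + 1j)}
for k, w in w8.items():
    # each equals exp(-2j*pi*k/8): no general complex multiplier required
    assert np.isclose(w, np.exp(-2j * np.pi * k / 8))
```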

Figure 7: Timing block diagram of Figure 6.

Further block building of these modules can be achieved by combining two radix-8 butterflies with eight radix-2 butterflies to obtain the conventional MDC-R2^4 BPE; for this case (r = 16 = 2^4), (4) can be expanded accordingly, and the signal flow graph (SFG) of the proposed DIT radix-2^4 MuxMDC-R2^4 based on the partial MuxMDC-R2^3 (Figure 8) is illustrated in Figure 9.

Figure 8: Proposed partial MuxMDC-R2^3.
Figure 9: Proposed MuxMDC-R2^4 BPE based on the partial MuxMDC-R2^3.

The clock timing of the conventional MDC-R2^4 BPE is computed from its critical path, and the clock timing of the proposed MuxMDC-R2^4 is estimated in the same way. The overall timing block diagram of the proposed MuxMDC-R2^4 is sketched in Figure 10.

Figure 10: Timing block diagram of Figure 9.

With the same reasoning as above, we restrict the elaboration of the proposed radix-4^β butterfly family to the DIT FFT process.

For the radix-16 butterfly (r = 16 = 4^2), we can expand (4) accordingly; the proposed MuxMDC-R4^2 in terms of radix-4 networks is illustrated in Figure 11, where the feedback network feeds each output of a radix-4 network back to the corresponding butterfly input and the switches alternately pass either the input data or the feedback to the corresponding radix-4 butterfly. The circuit block diagram of the radix-4 network is illustrated in Figure 12.

Figure 11: The proposed DIT MuxMDC-R4^2.
Figure 12: Block circuit diagram of the radix-4 adder network.

5. Performance Evaluation

The FFT is a core algorithm in communication systems such as OFDM. Fixed-point implementations are attractive because of their reduced cost compared to floating-point implementations. One of the most widely used FFT implementations in communication systems is the pipelined FFT; see Figure 13.

Figure 13: n-stage radix-r pipelined FFT (n = log_r N).

Since the objective of this paper is mainly the higher-radix butterfly structures, our performance study is limited to the impact of the butterfly structure. Once the pipeline is filled, the butterflies produce outputs every clock cycle (throughput in samples per cycle (Spc)). Table 1 therefore compares the different butterfly structures in terms of the resources needed to compute an FFT of size N.

Table 1: Resources needed to compute an FFT of size N.

As shown in Figure 14, the proposed MuxMDC-R2^2 for the four parallel pipelined FFTs of size N requires the same number of complex multipliers as the radix-2^4 structure cited in [30]. Furthermore, our proposed MuxMDC-R2^2 reduces the number of complex multipliers by a factor ranging between 1.1 and 1.4 compared to the other cited butterflies.

Figure 14: Comparison between the different butterfly structures in terms of the complex multipliers needed to compute the 4 parallel BPE pipelined FFTs of size N.

For the 4 parallel pipelined FFTs of size N, the reduction in the number of complex adders for our proposed MuxMDC-R2^2 ranges between 1.9 and 3.9 compared to the cited butterflies, as shown in Figure 15.

Figure 15: Comparison between the different butterfly structures in terms of the complex adders needed to compute the 4 parallel BPE pipelined FFTs of size N.

For the 8 parallel pipelined FFTs of size N, the reduction factor in the number of complex multipliers for our proposed MuxMDC-R2^3 ranges from 1.3 to 2.1 compared to the cited butterflies, as illustrated in Figure 16.

Figure 16: Comparison between the different butterfly structures in terms of the complex multipliers needed to compute the 8 parallel BPE pipelined FFTs of size N.

For the same structure, the reduction factor in the number of complex adders for our proposed MuxMDC-R2^3 ranges from 3.0 to 5.4 compared to the cited butterflies (Figure 17).

Figure 17: Comparison between the different butterfly structures in terms of the complex adders needed to compute the 8 parallel BPE pipelined FFTs of size N.

The proposed MuxMDC-R2^4 uses fewer complex adders than the proposed MuxMDC-R4^2, as shown in Figure 18, achieving a reduction in the number of complex adders by a factor of 2, whereas the proposed MuxMDC-R4^2 reduces the number of complex multipliers by a factor of 1.1, as shown in Figure 19.

Figure 18: Comparison between the different butterfly structures in terms of the complex adders needed to compute the 16 parallel BPE pipelined FFTs of size N.
Figure 19: Comparison between the different butterfly structures in terms of the complex multipliers needed to compute the 16 parallel BPE pipelined FFTs of size N.

Since one complex multiplication is counted as 3 real multiplications and 5 real additions, as shown in Figure 20, Table 2 illustrates the required resources in terms of full adders (FA), computed as functions of the operand word length (a) for the real multipliers and (b) for the real adders.
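The 3-multiplication complex product underlying Figure 20 can be sketched as follows (one common factoring; the paper's exact arrangement may differ):

```python
def cmul3(a, b, c, d):
    """(a + jb) * (c + jd) using 3 real multiplications and 5 real additions
    instead of the schoolbook 4 multiplications and 2 additions."""
    k1 = c * (a + b)         # 1 mult, 1 add
    k2 = a * (d - c)         # 1 mult, 1 add
    k3 = b * (c + d)         # 1 mult, 1 add
    return k1 - k3, k1 + k2  # real part ac - bd, imaginary part ad + bc
```

The trade of one multiplication for three additions pays off in hardware, where a real multiplier costs far more full adders than a real adder.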

Table 2: Resources needed in terms of FA to compute an FFT of size N.
Figure 20: Complex multiplier using three real multipliers and five real adders.

For the four parallel pipelined FFTs of size N, the R-2^2 butterfly cited in [30] has approximately the same FA count as the proposed MuxMDC-R2^2, according to Figure 21. Our proposed MuxMDC-R2^2 achieves a reduction in FA usage by a factor ranging between 1.17 and 1.34 compared to the other cited butterflies (Figure 21).

Figure 21: Comparison between the different butterfly structures in terms of the full adders needed to compute the 4 parallel pipelined FFTs of size N (multiplier on 16 bits and adder on 32 bits).

With regard to the eight parallel pipelined FFTs of size N, the proposed MuxMDC-R2^3 achieves a reduction in FA usage by a factor ranging between 1.4 and 1.9 in comparison to the other cited butterflies, as shown in Figure 22.

Figure 22: Comparison between the different butterfly structures in terms of the full adders needed to compute the 8 parallel pipelined FFTs of size N (multiplier on 16 bits and adder on 32 bits).

Since higher radices can be implemented by means of the radix-2α/4β butterfly, the optimal pipelined FFT is the two-stage FFT shown in Figure 23, in which the complex memories between stages are completely eliminated and the delay required to fill the pipeline is absent.

Figure 23: Two-stage pipelined FFT (or array structure) with a feedback network [23].

6. Conclusion

It has been shown that higher-radix FFT algorithms are advantageous for hardware implementation, owing to the reduced number of complex multiplications and the lower memory access rate requirements. This paper has presented an efficient way of implementing higher-radix butterflies by means of the radix-2α/4β kernel, for which serial and parallel models have been presented. The proposed structures, optimized with a scheduling scheme for the complex multiplications, are suitable for embedded FFT processors. Furthermore, it has been shown that higher-radix butterflies can be obtained by reusing the block circuit diagram of the radix-2α/4β butterfly. Based on this concept, the required hardware resources can be reduced, which is highly desirable for low-power FFT processors. The proposed method is suitable for large pipelined FFT implementations, where the performance gain increases with the FFT radix. The structure is also appropriate for SIMD implementation on some of the latest DSP cards.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada and of JABERTECH shareholders Trevor Hill (Alberta) and Bassam Kabbara (Kuwait).

References

  1. J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Mathematics of Computation, vol. 19, pp. 297–301, 1965.
  2. P. Duhamel and H. Hollmann, “Split radix FFT algorithm,” Electronics Letters, vol. 20, no. 1, pp. 14–16, 1984.
  3. S. Winograd, “On computing the discrete Fourier transform,” Proceedings of the National Academy of Sciences of the United States of America, vol. 73, no. 4, pp. 1005–1006, 1976.
  4. T. Widhe, Efficient Implementation of FFT Processing Elements, Linköping Studies in Science and Technology no. 619, Linköping University, Linköping, Sweden, 1997.
  5. T. Widhe, J. Melander, and L. Wanhammar, “Design of efficient radix-8 butterfly PEs for VLSI,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '97), pp. 2084–2087, June 1997.
  6. J. Melander, T. Widhe, K. Palmkvist, M. Vesterbacka, and L. Wanhammar, “An FFT processor based on the SIC architecture with asynchronous PE,” in Proceedings of the IEEE 39th Midwest Symposium on Circuits and Systems, vol. 3, pp. 1313–1316, Ames, Iowa, USA, August 1996.
  7. M. Jaber and D. Massicotte, “The self-sorting JMFFT algorithm eliminating trivial multiplication and suitable for embedded DSP processor,” in Proceedings of the 10th IEEE International NEWCAS Conference, Montreal, Canada, June 2012.
  8. M. Jaber, “Butterfly processing element for efficient fast Fourier transform method and apparatus,” US Patent no. 6,751,643, 2004.
  9. M. A. Jaber and D. Massicotte, “A new FFT concept for efficient VLSI implementation: part I—butterfly processing element,” in Proceedings of the 16th International Conference on Digital Signal Processing (DSP '09), pp. 1–6, Santorini, Greece, July 2009.
  10. Y. Wang, Y. Tang, Y. Jiang, J. Chung, S. Song, and M. Lim, “Novel memory reference reduction methods for FFT implementations on DSP processors,” IEEE Transactions on Signal Processing, vol. 55, no. 5, pp. 2338–2349, 2007.
  11. S. He and M. Torkelson, “Design and implementation of a 1024-point pipeline FFT processor,” in Proceedings of the IEEE Custom Integrated Circuits Conference, pp. 131–134, May 1998.
  12. S. He and M. Torkelson, “New approach to pipeline FFT processor,” in Proceedings of the 10th International Parallel Processing Symposium (IPPS '96), pp. 766–770, April 1996.
  13. E. E. Swartzlander, W. K. W. Young, and S. J. Joseph, “A radix 4 delay commutator for fast Fourier transform processor implementation,” IEEE Journal of Solid-State Circuits, vol. 19, no. 5, pp. 702–709, 1984.
  14. J. H. McClellan and R. J. Purdy, “Applications of digital signal processing to radar,” in Applications of Digital Signal Processing, chapter 5, Prentice Hall, New York, NY, USA, 1978.
  15. H. Liu and H. Lee, “A high performance four-parallel 128/64-point radix-2^4 FFT/IFFT processor for MIMO-OFDM systems,” in Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems (APCCAS '08), pp. 834–837, Macao, China, December 2008.
  16. S.-I. Cho, K.-M. Kong, and S.-S. Choi, “Implementation of 128-point fast Fourier transform processor for UWB systems,” in Proceedings of the International Wireless Communications and Mobile Computing Conference (IWCMC '08), pp. 210–213, Crete Island, Greece, August 2008.
  17. M. Garrido, J. Grajal, M. A. Sanchez, and O. Gustafsson, “Pipelined radix-2^k feedforward FFT architectures,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 21, no. 1, pp. 23–32, 2013.
  18. J. A. Johnston, “Parallel pipeline fast Fourier transformer,” IEE Proceedings F: Communications, Radar and Signal Processing, vol. 130, no. 6, pp. 564–572, 1983.
  19. E. H. Wold and A. M. Despain, “Pipeline and parallel-pipeline FFT processors for VLSI implementations,” IEEE Transactions on Computers, vol. 33, no. 5, pp. 414–426, 1984.
  20. S.-N. Tang, J.-W. Tsai, and T.-Y. Chang, “A 2.4-GS/s FFT processor for OFDM-based WPAN applications,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 57, no. 6, pp. 451–455, 2010.
  21. M. Jaber, “Parallel multiprocessing for the fast Fourier transform with pipeline architecture,” US Patent no. 6,792,441.
  22. M. A. Jaber and D. Massicotte, “A new FFT concept for efficient VLSI implementation: part II—parallel pipelined processing,” in Proceedings of the 16th International Conference on Digital Signal Processing (DSP '09), pp. 1–5, Santorini, Greece, July 2009.
  23. M. Jaber, “Fourier transform processor,” US Patent no. 7,761,495.
  24. M. Jaber, “Address generator for the fast Fourier transform processor,” US Patent no. 6,993,547 B2 and European Patent Application serial no. PCT/US01/07602.
  25. E. J. Kim and M. H. Sunwoo, “High speed eight-parallel mixed-radix FFT processor for OFDM systems,” in Proceedings of the IEEE International Symposium of Circuits and Systems (ISCAS '11), pp. 1684–1687, Rio de Janeiro, Brazil, May 2011.
  26. P. Li and W. Dong, “Computation oriented parallel FFT algorithms on distributed computer,” in Proceedings of the 3rd International Symposium on Parallel Architectures, Algorithms and Programming (PAAP '10), pp. 369–373, Dalian, China, December 2010.
  27. D. Takahashi, A. Uno, and M. Yokokawa, “An implementation of parallel 1-D FFT on the K computer,” in Proceedings of the IEEE International Conference on High Performance Computing and Communication, pp. 344–350, Liverpool, UK, June 2012.
  28. R. M. Piedra, “Parallel 1-D FFT implementation with TMS320C4x DSPs,” Texas Instruments SPRA108, Digital Signal Processing Semiconductor Group, 1994.
  29. FFTW, http://www.fftw.org/.
  30. V. Petrov, “MKL FFT performance—comparison of local and distributed-memory implementations,” Intel Report, 2012, http://software.intel.com/en-us/node/165305?wapkw=fft.
  31. V. I. Kelefouras, G. S. Athanasiou, N. Alachiotis, H. E. Michail, A. S. Kritikakou, and C. E. Goutis, “A methodology for speeding up fast Fourier transform focusing on memory architecture utilization,” IEEE Transactions on Signal Processing, vol. 59, no. 12, pp. 6217–6226, 2011.