Special Issue

## Mathematical Tools of Soft Computing 2014


Research Article | Open Access

Volume 2014 | Article ID 827509 | 12 pages | https://doi.org/10.1155/2014/827509

# Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

Revised: 04 Jun 2014
Accepted: 05 Jun 2014
Published: 17 Jul 2014

#### Abstract

With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process consumes a considerable share of the processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that, if implemented, it may seriously impact the cache performance of the computer. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping that holds even after recursively splitting the BRP into equal-sized blocks. The four-array and four-vector versions of the BRA with the new index mapping reported 34% and 16% performance improvements, respectively, relative to the corresponding versions of Elster's Linear BRA, which uses a single one-dimensional memory structure.

#### 1. Introduction

The efficiency of a bit reversal algorithm (BRA) plays a critical role in the Fast Fourier Transform (FFT) process because it contributes 10% to 50% of the total FFT process time. Therefore, it is vital to optimize the BRA to achieve an efficient FFT algorithm. In 2009, Elster showed the relation between the first and the second halves of the BRP, but did not implement it. Elster stated that implementation of this relation may seriously impact the cache performance of modern computers. As Elster stated, use of a two-dimensional memory structure to implement this relation reduces the efficiency of the bit reversal permutation (BRP). In contrast, the efficiency of the BRA increased when a one-dimensional memory structure was used for the index mapping. When two equal-sized one-dimensional memory structures were used, the performance was even better than with a single one-dimensional memory structure. Also, it was found that the bit reversal permutation can be split further into equal-sized blocks recursively, up to a maximum of $\log_2(N)$ times, where $N$ is the number of samples, $N = 2^x$, $x \in \mathbb{Z}^+$. These two findings motivated us to introduce a BRA capable of using $2^s$ equal-sized one-dimensional memory structures, where $s$ is the number of splits.

In 1965 Cooley and Tukey introduced the FFT algorithm, an efficient algorithm to compute the Discrete Fourier Transform (DFT) and its inverse. The FFT is a fast algorithm that has replaced direct computation of the DFT, which had been used frequently in the fields of signal and image processing. The structure of the FFT algorithm published by Cooley and Tukey, known as the radix-2 algorithm, is the most popular one. There are several other algorithm structures, such as radix-4, radix-8, radix-16, mixed-radix, and split-radix.

To apply the FFT to a certain signal, there are basically two major requirements. The first requirement is $N = r^x$, where $N$ is the number of samples of the signal, $x \in \mathbb{Z}^+$, and $r$ is the selected radix structure, for example, $r = 2$, 4, 8, and 16 for radix-2, radix-4, radix-8, and radix-16, respectively. The second requirement is that the input (or output) samples must be arranged in a certain order to obtain the correct output [3, 5, 8, 9]. The BRA is used to create the input or output permutation according to the required order. The BRA used in most FFT algorithms, including the original Cooley-Tukey algorithm, is known as the bit reversal method (BRM). The BRM is an operation for exchanging two elements $a_i$ and $a_j$ of an array of length $N$, as shown in (1) and (2), respectively, where the digits $b_k$ are either 0 or 1 in the radix-2 case and $r$ is the relevant base 2, 4, 8, or 16, depending on the selected radix structure:

$$i = (b_{t-1} b_{t-2} \cdots b_1 b_0)_r, \quad (1)$$

$$j = (b_0 b_1 \cdots b_{t-2} b_{t-1})_r. \quad (2)$$
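As an illustration of the radix-2 digit reversal in (1) and (2), the index exchange can be sketched as follows; `reverse_bits` is a hypothetical helper for illustration only, not one of the algorithms evaluated in this paper:

```cpp
#include <cassert>

// Illustrative sketch of the radix-2 bit reversal mapping (BRM):
// the t-bit index i = (b_{t-1} ... b_1 b_0)_2 is exchanged with
// j = (b_0 b_1 ... b_{t-1})_2.  "reverse_bits" is a hypothetical helper.
unsigned int reverse_bits(unsigned int i, unsigned int t)
{
    unsigned int j = 0;
    for (unsigned int b = 0; b < t; ++b) {
        j = (j << 1) | (i & 1u);  // move the lowest bit of i into j
        i >>= 1;
    }
    return j;
}
```

For example, with $N = 16$ ($t = 4$ bits), index $1 = 0001_2$ maps to $8 = 1000_2$.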

All the later algorithms for creating the BRP were also named BRAs (bit reversal algorithms), though they used other techniques, such as patterns of the BRP, instead of bit reversing techniques.

During the last few decades, many publications addressed new BRAs, either by improving the original BRA (BRM) or by using totally different approaches. In 1996, Karp compared the performance of 30 different algorithms on uniprocessor systems (computer systems with a single central processing unit) with different memory systems. Karp found that the performance of a BRA depended on the memory architecture of the machine and the way the memory is accessed. Karp stated two hardware facts that influence the BRA, namely, the memory architecture and the cache size of the machine. According to Karp, a machine with hierarchical memory is slower than a machine with vector memory (computers with a vector processor), and algorithms do not perform well when the array size is larger than the cache size. Karp also pointed out four features of an algorithm that influence the BRA, namely, the memory access technique, the data reading sequence, the size of the memory index, and the type of arithmetic operations. According to Karp, an algorithm that uses efficient memory access techniques is the fastest among algorithms with exactly the same number of arithmetic operations. Algorithms are faster if (i) they require only a single pass over the data, (ii) they use short indexes, and (iii) they operate with addition instead of multiplication.

Karp especially mentioned that the algorithm published by Elster in 1989 was different from the other algorithms, because it used a pattern of the BRP rather than improving decimal-to-binary and binary-to-decimal conversion. According to the findings of Karp, Elster's "Linear Bit Reversal Algorithm" (LBRA) performs much better in most cases. The publication of Elster (1989) consists of two algorithms to achieve the BRP. One algorithm used a pattern of the BRP and the other used bit shifting operations. Both algorithms are interesting because they eliminate the conventional bit reversing mechanism, which needs more computing time. The algorithm by Rubio et al. (BRA-Ru) of 2002 is another approach that uses an existing pattern of the BRP. However, the pattern described in Rubio's algorithm is different from the pattern described in Elster's. In 2009, Elster and Meyer published an improved version of the "Linear Register-Level Bit Reversal" algorithm of 1989, as "Elster's Bit Reversal" (EBR) algorithm. Elster mentioned that it is possible to generate the second half of the BRP by incrementing the relevant element of the first half by one. Elster also mentioned that there can be a serious impact on the cache performance of the computer if the said pattern (Figure 1) is used.

Programming languages provide different data structures, which handle memory in different ways. In addition, the performance of the memory depends on the machine architecture and the operating system. Therefore, the efficiency of the memory is the resultant of the performance of the hardware, the operating system, the programming language, and the selected data structure.

Based on the physical arrangement of memory elements, there are two common ways of allocating memory for a series of variables: a "slot of continuous memory elements" and a "collection of noncontinuous memory elements," commonly known as "stack" and "heap." In most programming languages, the term array refers to a "slot of continuous memory elements." Arrays are the simplest and most common type of data structure [16, 17] and, due to the continuous physical arrangement of memory elements, provide faster access than "collection of noncontinuous memory elements" memory types. However, with the development of programming languages, different types of data structures were introduced with names very similar to the standard names like array, stack, and heap. The names of the new data structures sometimes did not agree with the commonly accepted meaning. "Stack," "Array," and "ArrayList" provided by Microsoft Visual C++ (VC++) [18, 19] are good examples. According to the commonly accepted meaning they should be a "slot of continuous memory elements," but they are in fact a "collection of noncontinuous memory elements." Therefore, it is not good practice to judge the performance of a certain data structure just by looking at its name. To avoid this ambiguity, we use the term "primitive array" (or simply array) to refer to "slot of continuous memory elements" type memory structures.

Due to its very flexible nature, the vector is the most common among the different types of data structures. The vector was introduced with C++, one of the most common and powerful programming languages, in use since 1984. However, as with most other data structures, the term "vector" is also used to refer to memory in computers with a processor architecture called a "vector processor." In this paper the term vector refers to the vector data structure provided by C++.

Index mapping is a technique that can be used to improve the efficiency of an algorithm by reducing its arithmetic load. If $N = N_1 N_2$ and $N$ is not prime, the index $n$ can be defined as $n = N_2 n_1 + n_2$, where $n_1 = 0, 1, \ldots, N_1 - 1$ and $n_2 = 0, 1, \ldots, N_2 - 1$. This allows the usage of the small ranges of $n_1$ and $n_2$ instead of the large range of $n$ and maps a one-dimensional function into a multidimensional function.
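A minimal sketch of such an index mapping, assuming the common row-major decomposition $n = N_2 n_1 + n_2$; the helper names below are ours, introduced for illustration only:

```cpp
#include <cassert>

// Decompose a one-dimensional index n (0 <= n < N1*N2) into the pair
// (n1, n2) with n = N2*n1 + n2.  Both helpers are illustrative only.
struct MappedIndex { unsigned int n1, n2; };

MappedIndex map_index(unsigned int n, unsigned int N2)
{
    return { n / N2, n % N2 };   // n1 in [0, N1), n2 in [0, N2)
}

unsigned int unmap_index(MappedIndex m, unsigned int N2)
{
    return N2 * m.n1 + m.n2;     // inverse mapping back to n
}
```

The small pair $(n_1, n_2)$ can then address a two-dimensional structure, or equivalently select one of $N_1$ separate one-dimensional structures and an offset within it.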

There are two common methods of implementing index mapping: one-dimensional or multidimensional memory structures. In addition, it is also possible to implement the index mapping using several equal-sized one-dimensional memory structures. However, this option is not popular, as it is inconvenient for programming. The performance of modern computers is highly dependent on the effectiveness of the cache memory of the CPU. To achieve the best performance from a series of memory elements, the best practice is to maintain sequential access. Otherwise, the effectiveness of the cache memory of the central processing unit (CPU) is reduced. Index mapping with multidimensional data structures violates sequential access due to switching between columns/rows and thus reduces the effectiveness of the cache memory. Therefore, it is generally accepted that the use of a multidimensional data structure reduces computer performance.

In this paper an efficient BRA is introduced to create the BRP based on multiple memory structures and recursive patterns of the BRP. The findings of this paper show that the combination of multiple one-dimensional memory structures, index mapping, and the recursive pattern of the BRP can be used to improve the efficiency of the BRA. These findings are very important to the field of signal processing as well as any field that involves index mapping techniques.

#### 2. Material and Methods

##### 2.1. New Algorithm (BRA-Split)

Elster stated that it is possible to generate the second half of the BRP by incrementing the items in the first half by one (Figure 1), without changing the order and the total number of calculations of the algorithm. Due to the recursive pattern of the BRP, it can be divided further into equal-sized blocks by splitting each block recursively (a maximum of $\log_2 N$ times). After splitting $s$ times, the BRP is divided into $2^s$ equal blocks, each containing $N/2^s$ elements. The relation between the elements in the blocks is given as follows:

$$b_{j,i} = b_{0,i} + \mathrm{rev}_s(j), \quad (3)$$

where $b_{j,i}$ is the $i$th element of block $j$, $\mathrm{rev}_s(j)$ is the $s$-bit reversal of $j$, $j = 0, 1, \ldots, 2^s - 1$, and $i = 0, 1, \ldots, N/2^s - 1$.
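The block relation in (3) can be checked directly against a naively computed BRP. The following is a verification sketch (our own helper code, not one of the benchmarked algorithms):

```cpp
#include <cassert>
#include <vector>

// Reverse the lowest "bits" bits of i.
unsigned int rev(unsigned int i, unsigned int bits)
{
    unsigned int r = 0;
    for (unsigned int b = 0; b < bits; ++b) { r = (r << 1) | (i & 1u); i >>= 1; }
    return r;
}

// Check that, after s splits, block j of the BRP of N = 2^k elements
// satisfies B_j[i] = B_0[i] + rev_s(j), as stated in (3).
bool block_relation_holds(unsigned int k, unsigned int s)
{
    const unsigned int N = 1u << k, EB = N >> s;   // EB = elements per block
    std::vector<unsigned int> brp(N);
    for (unsigned int n = 0; n < N; ++n) brp[n] = rev(n, k);
    for (unsigned int j = 0; j < (1u << s); ++j)
        for (unsigned int i = 0; i < EB; ++i)
            if (brp[j * EB + i] != brp[i] + rev(j, s)) return false;
    return true;
}
```

For $N = 16$ and $s = 2$, for example, the four blocks are block 0 plus the offsets 2, 1, and 3, that is, the 2-bit reversals of the block numbers 1, 2, and 3.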

Table 1 shows the relationship between the elements in the blocks according to the index mapping shown in (3), after splitting the BRP one time ($s = 1$) and two times ($s = 2$) for $N = 16$. Depending on the requirement, the number of splits can be increased.

| Normal order $n$ | Reverse order | Block ($s = 1$) | Index ($s = 1$) | Calculation ($s = 1$) | Block ($s = 2$) | Index ($s = 2$) | Calculation ($s = 2$) |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | Initialized | 0 | 0 | Initialized |
| 1 | 8 | 0 | 1 | 8 = 8 + 0 | 0 | 1 | 8 = 8 + 0 |
| 2 | 4 | 0 | 2 | 4 = 4 + 0 | 0 | 2 | 4 = 4 + 0 |
| 3 | 12 | 0 | 3 | 12 = 12 + 0 | 0 | 3 | 12 = 12 + 0 |
| 4 | 2 | 0 | 4 | 2 = 2 + 0 | 1 | 0 | Initialized |
| 5 | 10 | 0 | 5 | 10 = 10 + 0 | 1 | 1 | 10 = 8 + 2 |
| 6 | 6 | 0 | 6 | 6 = 6 + 0 | 1 | 2 | 6 = 4 + 2 |
| 7 | 14 | 0 | 7 | 14 = 14 + 0 | 1 | 3 | 14 = 12 + 2 |
| 8 | 1 | 1 | 0 | Initialized | 2 | 0 | Initialized |
| 9 | 9 | 1 | 1 | 9 = 8 + 1 | 2 | 1 | 9 = 8 + 1 |
| 10 | 5 | 1 | 2 | 5 = 4 + 1 | 2 | 2 | 5 = 4 + 1 |
| 11 | 13 | 1 | 3 | 13 = 12 + 1 | 2 | 3 | 13 = 12 + 1 |
| 12 | 3 | 1 | 4 | 3 = 2 + 1 | 3 | 0 | Initialized |
| 13 | 11 | 1 | 5 | 11 = 10 + 1 | 3 | 1 | 11 = 10 + 1 |
| 14 | 7 | 1 | 6 | 7 = 6 + 1 | 3 | 2 | 7 = 6 + 1 |
| 15 | 15 | 1 | 7 | 15 = 14 + 1 | 3 | 3 | 15 = 14 + 1 |

Block 0 is computed with Elster's Linear Bit Reversal method; the remaining blocks are derived from block 0 by adding a constant offset (1 for $s = 1$; 2, 1, and 3 for blocks 1, 2, and 3 at $s = 2$).

##### 2.2. Evaluation Process of New Algorithm

To evaluate the algorithms, we used Windows 7 and Visual C++ 2012 on a PC with a multicore CPU (4 cores, 8 logical processors) and 12 GB of memory. Detailed specifications of the PC and the software are given in Table 2. To eliminate the limits of memory and address space related to the selected platform, the compiler option "/LARGEADDRESSAWARE" was set and the platform was set to "x64." All other options of the operating system and the compiler were kept unchanged.

| Item | Specification |
|---|---|
| Processor | Intel Core i7 CPU 870 @ 2.93 GHz (4 cores, 8 threads) |
| RAM | 12 GB, DDR3-1333, 2 channels |
| Memory bandwidth | 21 GB/s |
| L1, L2, and L3 cache | KB, KB, and 8 MB (shared) |
| Cache line size | 64 bytes |
| Brand and type | Fujitsu Celsius |
| BIOS settings | Default (hyper-threading enabled) |
| OS and service pack | Windows 7 Professional with Service Pack 1 |
| System type | 64-bit operating system |
| OS settings | Default |
| Visual Studio 2012 version | 11.0.50727.1 RTMREL |
| .NET Framework version | 4.5.50709 |

The new algorithm was implemented using a single one-dimensional memory structure and the most common multidimensional memory structure. Furthermore, the new BRA was implemented using several equal-sized one-dimensional memory structures (multiple memory structures).

The next task was to identify a suitable data structure among the different types available. We considered several common techniques, as summarized in Table 3. Data structure 1 in Table 3 does not support dynamic memory allocation (the size of the array must be specified when the array is declared). For a general bit reversal algorithm, dynamic memory allocation is a must in order to cater for different sample sizes. Even after setting the compiler option "/LARGEADDRESSAWARE", data structures 3 and 4 in Table 3 could not access memory greater than 2 GB. Therefore, structures 1, 3, and 4 were rejected, and memory structures 2 (array) and 5 (vector) were used to create all one-dimensional memory structures. The same versions of array and vector were used to create the multidimensional memory structures.

| Number | Name | Syntax | Nature of memory layout |
|---|---|---|---|
| 1 | Array | `int BRP[1000]` | Slot of continuous memory elements |
| 2 | Array | `int* BRP = new int[N]` | Slot of continuous memory elements |
| 3 | Array | `array<int>^ BRP = gcnew array<int>(N)` | Collection of noncontinuous memory elements |
| 4 | ArrayList | `ArrayList^ BRP = gcnew ArrayList()` | Collection of noncontinuous memory elements |
| 5 | Vector | `std::vector<int> BRP(N)` | Collection of noncontinuous memory elements |

The new algorithm described in Section 2.1 was implemented in C++ with the 24 types of memory structures shown in Table 4. The performance of these algorithms was evaluated by considering the "clocks per element" (CPE) consumed by each algorithm. To obtain this value, first, the average CPE for each sample size $N = 2^x$, where $x = 21, 22, \ldots, 31$ (11 sample sizes), was calculated after executing each algorithm 100 times. This gave 11 CPE values, one for each sample size. Finally, the combined average of CPE was calculated for each algorithm by averaging those 11 values, along with the "combined standard deviation." The combined average of CPE was taken as the CPE of each algorithm. The built-in "clock" function of C++ was used to count the clocks. The combined standard deviation was calculated using the following:

$$s_c = \sqrt{\frac{\sum_{i=1}^{k} (n_i - 1)\, s_i^2}{\sum_{i=1}^{k} (n_i - 1)}}, \quad (4)$$

where $k$ is the number of samples, $n_i$ is the number of values in each sample, and $s_i$ is the standard deviation of each sample.
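The combined standard deviation described above is the usual pooled estimate; a minimal sketch under that assumption (the helper name is ours):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Pooled ("combined") standard deviation of k samples, where n[i] is the
// size and sd[i] the standard deviation of sample i.
double combined_sd(const std::vector<unsigned int>& n, const std::vector<double>& sd)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < n.size(); ++i) {
        num += (n[i] - 1.0) * sd[i] * sd[i];  // weighted sum of variances
        den += n[i] - 1.0;                    // pooled degrees of freedom
    }
    return std::sqrt(num / den);
}
```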

| Split ($s$) | Data structure type | Single memory structure | Multidimensional memory structure | Multiple memory structure |
|---|---|---|---|---|
| 1 | Array (A) | BRA_Split_1_1A | BRA_Split_1_2DA | BRA_Split_1_2A |
| 1 | Vector (V) | BRA_Split_1_1V | BRA_Split_1_2DV | BRA_Split_1_2V |
| 2 | Array (A) | BRA_Split_2_1A | BRA_Split_2_4DA | BRA_Split_2_4A |
| 2 | Vector (V) | BRA_Split_2_1V | BRA_Split_2_4DV | BRA_Split_2_4V |
| 3 | Array (A) | BRA_Split_3_1A | BRA_Split_3_8DA | BRA_Split_3_8A |
| 3 | Vector (V) | BRA_Split_3_1V | BRA_Split_3_8DV | BRA_Split_3_8V |
| 4 | Array (A) | BRA_Split_4_1A | BRA_Split_4_16DA | BRA_Split_4_16A |
| 4 | Vector (V) | BRA_Split_4_1V | BRA_Split_4_16DV | BRA_Split_4_16V |

Naming convention for the algorithms: "BRA_Split_" + \<number of splits\> + "_" + \<nature of memory structure\>. xA, xV: x arrays and x vectors, respectively; xDA, xDV: a single x-dimensional array and a single x-dimensional vector, respectively.

Algorithms 1, 2, and 3 illustrate the implementation of the new BRA with a single one-dimensional memory structure, a multidimensional memory structure, and multiple memory structures, respectively. The algorithm in Algorithm 1 (BRA_Split_1_1A) was implemented using a primitive array for split = 1. The algorithm BRA_Split_2_4DV (Algorithm 2) was implemented using vectors for split = 2. The algorithm BRA_Split_2_4A (Algorithm 3) was implemented using primitive arrays for split = 2. A sample permutation filling sequence of the algorithms with a single one-dimensional memory structure is illustrated in Figure 2. Figure 3 illustrates a sample permutation filling sequence of both multidimensional and multiple memory structures.

```cpp
void mf_BRM_Split_1_1A(unsigned int ui_NS, int ui_log2NS)
{
    unsigned int ui_N;
    unsigned int ui_EB;
    unsigned int ui_t;
    unsigned int ui_L;
    unsigned int ui_DL;
    ui_N = ui_NS;              // Number of samples
    ui_t = ui_log2NS - 1;
    ui_EB = ui_N / 2;          // Elements per block
    ui_L = 1;
    unsigned int* BRP = new unsigned int[ui_N];
    BRP[0] = 0;
    BRP[ui_EB] = 1;
    for (unsigned int q = 0; q < ui_t; q++) {
        ui_DL = ui_L + ui_L;
        ui_N = ui_N / 2;
        for (unsigned int j = ui_L; j < ui_DL; j++) {
            BRP[j] = BRP[j - ui_L] + ui_N;   // First half: Elster's linear pattern
            BRP[ui_EB + j] = BRP[j] + 1;     // Second half: first half + 1
        }
        ui_L = ui_L + ui_L;
    }
    delete[] BRP;
}
```

```cpp
void mf_BRM_Split_2_4DV(unsigned int ui_NS, int ui_log2NS)
{
    unsigned int ui_N;
    unsigned int ui_EB;
    unsigned int ui_t;
    unsigned int ui_L;
    unsigned int ui_DL;
    ui_N = ui_NS;              // Number of samples
    ui_t = ui_log2NS - 2;
    ui_EB = ui_N / 4;          // Elements per block
    ui_L = 1;
    std::vector<std::vector<unsigned int>> BRP(4, std::vector<unsigned int>(ui_EB));
    BRP[0][0] = 0;
    BRP[1][0] = 2;
    BRP[2][0] = 1;
    BRP[3][0] = 3;
    for (unsigned int q = 0; q < ui_t; q++) {
        ui_DL = ui_L + ui_L;
        ui_N = ui_N / 2;
        for (unsigned int j = ui_L; j < ui_DL; j++) {
            BRP[0][j] = BRP[0][j - ui_L] + ui_N;   // Block 0: Elster's linear pattern
            BRP[1][j] = BRP[0][j] + 2;             // Remaining blocks: block 0 + offset
            BRP[2][j] = BRP[0][j] + 1;
            BRP[3][j] = BRP[1][j] + 1;
        }
        ui_L = ui_L + ui_L;
    }
}
```

```cpp
void mf_BRM_Split_2_4A(unsigned int ui_NS, int ui_log2NS)
{
    unsigned int ui_N;
    unsigned int ui_EB;
    unsigned int ui_t;
    unsigned int ui_L;
    unsigned int ui_DL;
    ui_N = ui_NS;              // Number of samples
    ui_t = ui_log2NS - 2;
    ui_EB = ui_N / 4;          // Elements per block
    ui_L = 1;
    unsigned int* BRP1 = new unsigned int[ui_EB];
    unsigned int* BRP2 = new unsigned int[ui_EB];
    unsigned int* BRP3 = new unsigned int[ui_EB];
    unsigned int* BRP4 = new unsigned int[ui_EB];
    BRP1[0] = 0;
    BRP2[0] = 2;
    BRP3[0] = 1;
    BRP4[0] = 3;
    for (unsigned int q = 0; q < ui_t; q++) {
        ui_DL = ui_L + ui_L;
        ui_N = ui_N / 2;
        for (unsigned int j = ui_L; j < ui_DL; j++) {
            BRP1[j] = BRP1[j - ui_L] + ui_N;   // Block 0: Elster's linear pattern
            BRP2[j] = BRP1[j] + 2;             // Remaining blocks: block 0 + offset
            BRP3[j] = BRP1[j] + 1;
            BRP4[j] = BRP2[j] + 1;
        }
        ui_L = ui_L + ui_L;
    }
    delete[] BRP1;
    delete[] BRP2;
    delete[] BRP3;
    delete[] BRP4;
}
```

Secondly, the arithmetic operations per element (OPPE) were calculated for each algorithm. The arithmetic operations within each algorithm are located in three regions of the code: the inner FOR loop, the outer FOR loop, and outside of the loops. Then, the total number of operations (OP) can be defined as

$$\mathrm{OP} = \mathrm{OP_{os}} + I_o \cdot \mathrm{OP_o} + I_i \cdot \mathrm{OP_i}, \quad (5)$$

where $\mathrm{OP_i}$, $\mathrm{OP_o}$, and $\mathrm{OP_{os}}$ are the numbers of operations in the inner FOR loop, the outer FOR loop, and outside of the loops, and $I_o$ and $I_i$ are the numbers of iterations of the outer loop and the inner loop. Equation (5) can be represented as

$$\mathrm{OP} = \mathrm{OP_{os}} + (\log_2 \mathrm{NS} - s) \cdot \mathrm{OP_o} + \left(\frac{\mathrm{NS}}{2^s} - 1\right) \cdot \mathrm{OP_i}, \quad (6)$$

where NS is the number of samples and $s$ is the number of splits.

The main contribution to the calculations comes from the inner loop. Compared with the contribution of the operations in the inner loop, the contribution of the operations in the rest of the code is very small. For example, consider the algorithm BRA_Split_1_1A shown in Algorithm 1. As the sample size is $2^{31}$, $\left(\mathrm{NS}/2^s - 1\right) \mathrm{OP_i} \gg \mathrm{OP_{os}} + (\log_2 \mathrm{NS} - s)\, \mathrm{OP_o}$. Therefore, only the operations of the inner loop were considered for the evaluation. Then (6) can be simplified as

$$\mathrm{OP} \approx \left(\frac{\mathrm{NS}}{2^s} - 1\right) \mathrm{OP_i}. \quad (7)$$

The "operations per element" (OPPE) can be defined as

$$\mathrm{OPPE} = \frac{\mathrm{OP}}{\mathrm{NS}}. \quad (8)$$

For the FFT, always $\mathrm{NS} = 2^x$, $x \in \mathbb{Z}^+$.

Then, from (7) and (8),

$$\mathrm{OPPE} = \left(\frac{2^x}{2^s} - 1\right) \frac{\mathrm{OP_i}}{2^x} = \frac{\mathrm{OP_i}}{2^s} - \frac{\mathrm{OP_i}}{2^x}. \quad (9)$$

For large $x$, $\mathrm{OP_i}/2^x \to 0$.

Because the considered sample sizes are $2^{21}$ to $2^{31}$, the value of $x$ can be considered large. Then, from (9),

$$\mathrm{OPPE} \approx \frac{\mathrm{OP_i}}{2^s}. \quad (10)$$

According to (10), OPPE is $\mathrm{OP_i}/2^s$. The value $\mathrm{OP_i}$ (operations in the inner loop) and the value $s$ (number of splits) are constants for a given algorithm.
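A quick numeric check of this large-$x$ approximation against the exact expression from (9); the value $\mathrm{OP_i} = 11$ below is an assumed example taken from the multiple one-dimension row of Table 5 at $s = 2$:

```cpp
#include <cassert>
#include <cmath>

// Exact OPPE from (9): (2^x / 2^s - 1) * OP_i / 2^x.
double oppe_exact(unsigned int x, unsigned int s, double op_i)
{
    const double NS = std::ldexp(1.0, x);          // 2^x
    return (NS / std::ldexp(1.0, s) - 1.0) * op_i / NS;
}

// Large-x approximation: OPPE ~ OP_i / 2^s.
double oppe_approx(unsigned int s, double op_i)
{
    return op_i / std::ldexp(1.0, s);
}
```

The discarded term $\mathrm{OP_i}/2^x$ is about $5 \times 10^{-9}$ at $x = 31$, which justifies ignoring it for the considered sample sizes.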

To evaluate the performance of the new BRA, we selected three algorithms (LBRA, EBR, and BRA-Ru) which use a pattern instead of the conventional bit reversing method. The performance of the vector and array versions of the best version of the new BRA was compared with the corresponding versions of the selected algorithms.

#### 3. Results and Discussion

Our objective was to introduce a BRA using the recursive pattern of the BRP that we identified. We used multiple memory structures, a feasible yet unpopular technique for implementing index mapping. According to Table 5, the numbers of operations in all the array and vector versions of both multidimensional and multiple memory structures are the same. Also, Figure 4 shows a continuous decrease of OPPE as the number of splits increases. Thus, the algorithm with the highest number of splits and the lowest number of operations is the one expected to be most efficient. However, the results for CPE (Figure 5) show that the new algorithm with four array memory structures is the fastest and most consistent in the selected range. The two-, four-, eight-, and sixteen-array implementations of the new BRA reported 25%, 34%, 33%, and 18% higher efficiency, respectively, relative to the array version of LBRA. The algorithm with eight memory structures has nearly the same CPE as the four-array and four-vector versions, but is less consistent. On the other hand, the four-vector implementation of the new algorithm is the fastest and most consistent among all vector versions. The two-, four-, eight-, and sixteen-vector implementations of the new BRA reported 13%, 16%, and 16% higher and 23% lower efficiency, respectively, relative to the vector version of LBRA. This result shows that, up to a certain point, multiple memory structures give the best performance in the considered domain. Also, the usage of multiple memory structures of primitive arrays is a good option for implementing the index mapping of the BRP, compared to multidimensional or single one-dimensional memory structures.

| Memory structure type | $s = 1$ | $s = 2$ | $s = 3$ | $s = 4$ |
|---|---|---|---|---|
| Single one-dimension (array and vector) | 8 | 15 | 30 | 61 |
| Multidimension (array and vector) | 7 | 11 | 19 | 35 |
| Multiple one-dimension (array and vector) | 7 | 11 | 19 | 35 |

Due to its flexible nature, the vector is commonly used for implementing algorithms. According to Figure 4, there is no difference in OPPE between the array and vector versions. However, our results in Figure 5 show that the vector versions of the BRA always required more CPE (44%–142%) than the array versions. The structure of the vector gives priority to generality and flexibility rather than to execution speed, memory economy, cache efficiency, and code size. Therefore, the vector is not a good option with respect to efficiency.

The results in Table 5 and Figure 4 show that there is no difference between the number of calculations and the OPPE for corresponding versions of the algorithms with multidimensional and multiple memory structures. The structure and type of the calculations are the same for both types. The only difference is the nature of the memory structure: multidimensional or multiple one-dimensional. When CPE is considered, the algorithms with multiple one-dimensional memory structures show 19%–79% higher performance. The reason for this observation is that memory access in a multidimensional memory structure is less efficient than in a one-dimensional memory structure.

We agree with the statement of Elster about the index mapping of the BRP and the generally accepted fact (the usage of multidimensional memory structures reduces the performance of a computer) only with respect to multidimensional memory structures of the vector. Our results show that even with multidimensional arrays there are situations where the new BRA performs better than the same type of one-dimensional memory structure. The four-, eight-, and sixteen-dimensional array versions of the new BRA perform 8%, 10%, and 2% better than the one-dimensional array version of the new BRA. Some results related to the single one-dimensional memory structure implementation of the new BRA are also not in agreement with the generally accepted idea. For example, at sample size $2^{31}$, the two-dimensional vector version of the new BRA (BRA_Split_1_2DV) reported a CPE 389% higher than the average CPE over the sample size range $2^{21}$ to $2^{30}$. Also, the inconsistency was very high. Therefore, we excluded the values related to sample size $2^{31}$ for the two-dimensional vector version.

We observed very high memory utilization with the two-dimensional vector version, especially at sample size $2^{31}$. The Windows task manager showed that the memory utilization of all the considered algorithms was nearly the same for all sample sizes, except for the multidimensional versions of the vector. The multidimensional version of the vector initially utilizes more memory and then drops down to a normal value. The results showed that the extra memory requirement of the two-dimensional vector was higher than that of the four-dimensional vector. Based upon that, it can be predicted that BRA_Split_1_2DV needs an extra 3 GB (a total of 13 GB) for normal execution at sample size $2^{31}$, but the total memory barrier of 12 GB of the machine slows the process down. The most likely reason for this observation is the influence of the memory pooling mechanism. When defining a memory structure it is possible to allocate the required amount of memory in advance; this is known as memory pooling. Memory pooling was used in all the memory structures of the algorithms discussed in this paper. Memory pooling allocates the required memory at startup and divides this block into small chunks. Without memory pooling, the memory becomes fragmented, and accessing fragmented memory is inefficient. When the existing memory is not sufficient for pooled allocation, the system switches to fragmented memory for allocating memory elements. In the considered situation, the existing memory (12 GB) is not sufficient for allocating the required amount (13 GB), which forces the use of fragmented memory.

The total cache size of the machine is 8.25 MB, which is less than the minimum memory utilization of the considered algorithms (16 MB to 8 GB) for the sample size range from $2^{22}$ to $2^{31}$. Only sample size $2^{21}$ occupies 8 MB of memory, which is less than the total cache memory. Except for the BRA_Split_4_1V structure, all algorithms reported constant CPE over the entire sample size range. The best algorithms of each category, especially, showed very steady behaviour. This observation disagrees with the statement of Karp "that a machine with hierarchical memory does not perform well when array size is larger than the cache size."

A comparison (Figure 6) of the best version in the considered domain (the four memory structure version) with the selected algorithms shows that the array version of EBR performs best. The four-array version of the new BRA reported 1% lower performance than the array version of EBR. However, the four-array version of the new BRA reported 34% and 23% higher performance than the array versions of LBRA and BRA-Ru. Also, the four-vector version of the new BRA reported the best performance among all the vector versions: 16%, 10%, and 22% higher than the vector versions of LBRA, EBR, and BRA-Ru, respectively.

#### 4. Conclusion and Outlook

The main finding of this paper is the recursive pattern of the BRP and the method of implementing it using multiple memory structures. With multiple memory structures, especially, the newly identified index mapping performs much better than with a multidimensional or a single one-dimensional memory structure. Furthermore, the findings of this paper show that the performance of the primitive array is higher than that of the vector type. The result disagrees with the statement of Karp "that a machine with hierarchical memory does not perform well when array size is larger than the cache size." Almost all the sample sizes we used were larger than the total cache size of the computer. However, the multiple memory structure and the multidimensional memory structure versions showed reasonably steady performance with those samples. In general, these results show the effects of data structures and memory allocation techniques and open a new window for creating efficient algorithms with multiple memory structures in many other fields where index mapping is involved.

The new bit reversal algorithm with $2^s$ independent memory structures splits the total signal into $2^s$ independent portions and the total FFT process into levels. These signal portions can then be processed independently by $2^s$ independent processes on the first level. On the next level, the results from the previous level, stored in independent memory structures, can be processed with fewer processes, and so on, until the last level. Therefore, we suggest using the concept of multiple memory structures in the total FFT process, along with the new algorithm with multiple memory structures and a suitable parallel processing technique. We expect that it is possible to achieve higher FFT performance with a proper combination of a parallel processing technique and the new algorithm, compared to using the new algorithm only to create the bit reversal permutation. Figure 7 shows such an approach with four (when $s = 2$) independent memory structures for sample size $N = 16$.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This publication is financially supported by the University Grants Commission, Sri Lanka.

#### References

1. C. S. Burrus, "Unscrambling for fast DFT algorithms," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 7, pp. 1086–1087, 1988.
2. A. C. Elster and J. C. Meyer, "A super-efficient adaptable bit-reversal algorithm for multithreaded architectures," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing (IPDPS '09), pp. 1–8, Rome, Italy, May 2009.
3. J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, vol. 19, no. 90, pp. 297–301, 1965.
4. M. L. Massar, R. Bhagavatula, M. Fickus, and J. Kovačević, "Local histograms and image occlusion models," Applied and Computational Harmonic Analysis, vol. 34, no. 3, pp. 469–487, 2013.
5. R. G. Lyons, Understanding Digital Signal Processing, Prentice Hall PTR, 2004.
6. C. Deyun, L. Zhiqiang, G. Ming, W. Lili, and Y. Xiaoyang, "A superresolution image reconstruction algorithm based on landweber in electrical capacitance tomography," Mathematical Problems in Engineering, vol. 2013, Article ID 128172, 8 pages, 2013.
7. Q. Yang and D. An, "EMD and wavelet transform based fault diagnosis for wind turbine gear box," Advances in Mechanical Engineering, vol. 2013, Article ID 212836, 9 pages, 2013.
8. C. S. Burrus, Fast Fourier Transforms, Rice University, Houston, Tex, USA, 2008.
9. C. van Loan, Computational Frameworks for the Fast Fourier Transform, vol. 10 of Frontiers in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 1992.
10. A. H. Karp, "Bit reversal on uniprocessors," SIAM Review, vol. 38, no. 1, pp. 1–26, 1996.
11. A. C. Elster, "Fast bit-reversal algorithms," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, pp. 1099–1102, IEEE Press, Glasgow, UK, May 1989.
12. M. Rubio, P. Gómez, and K. Drouiche, "A new superfast bit reversal algorithm," International Journal of Adaptive Control and Signal Processing, vol. 16, no. 10, pp. 703–707, 2002.
13. B. Stroustrup, Programming: Principles and Practice Using C++, Addison-Wesley, 2009.
14. B. Stroustrup, "Software development for infrastructure," IEEE Computer Society, vol. 45, no. 1, Article ID 6081841, pp. 47–58, 2012.
15. B. Stroustrup, The C++ Programming Language, AT&T Labs, 3rd edition, 1997.
16. S. Donovan, C++ by Example, "UnderC" Learning Edition, QUE Corporation, 2002.
17. S. Mcconnell, Code Complete, Microsoft Press, 2nd edition, 2004.
18. Microsoft, STL Containers, Microsoft, New York, NY, USA, 2012, http://msdn.microsoft.com/en-us/library/1fe2x6kt.aspx.
19. Microsoft, Arrays (C++ Component Extensions), Microsoft, New York, NY, USA, 2012, http://msdn.microsoft.com/en-us/library/vstudio/ts4c4dw6(v=vs.110).aspx.
20. B. Stroustrup, "Evolving a language in and for the real world: C++ 1991–2006," in Proceedings of the 3rd ACM SIGPLAN History of Programming Languages Conference (HOPL-III '07), June 2007.
21. Microsoft, Memory Limits for Windows Releases, Microsoft, New York, NY, USA, 2012, http://msdn.microsoft.com/en-us/library/aa366778%28VS.85%29.aspx#memory_limits.
22. A. Fog, "Optimizing software in C++," 2014, http://www.agner.org/optimize/.
23. Code Project, "C++ Memory Pool," 2014, http://www.codeproject.com/Articles/15527/C-Memory-Pool.
