Research Article  Open Access
Marwan Abdellah, Ayman Eldeib, Amr Sharawi, "High Performance GPU-Based Fourier Volume Rendering", International Journal of Biomedical Imaging, vol. 2015, Article ID 590727, 13 pages, 2015. https://doi.org/10.1155/2015/590727
High Performance GPU-Based Fourier Volume Rendering
Abstract
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. Owing to its lower time complexity, it provides a faster alternative to computationally complex spatial-domain volume rendering algorithms. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive computing platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. By executing the rendering pipeline entirely on recent GPU architectures, this implementation achieves a speedup of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together.
1. Introduction
Volume visualization is an essential tool for exploring and analysing the anatomy of complex structures and phenomena. It has been used extensively in various scientific and engineering fields such as medical imaging, geoscience, microscopy, and mechanical engineering [1–3]. Several volume visualization techniques have been developed and extensively investigated. The two main categories that have gained broad acceptance in scientific communities are volume rendering and surface rendering. Each technique has its specific area of application and its associated advantages, but also disadvantages that leave room for other techniques to survive [4–6].
Volume rendering has proven highly significant for the visual interpretation of the large amounts of 3D scalar and vector data generated by multidimensional sensors, acquisition devices, and supercomputer simulations. It concentrates on visualizing the desired internal features of volumetric objects and their bounding surfaces at the same time. In the literature, several volume rendering algorithms have been presented, either to improve the rendering speed of large datasets or to enhance the rendering quality of their reconstructed images. By and large, rendering speed and quality are traded off, and no single algorithm exists that can deliver optimum quality together with maximum interactivity [7].
A radical domain-based categorization of volume rendering algorithms classifies them into spatial-domain techniques and other-domain-based techniques operating in, for example, the frequency, compression, or wavelet domains. The rendering pipeline of spatial-domain techniques runs entirely in the spatial domain. Domain-based methods operate by switching part of their computations to a different domain, either to reduce the complexity of the rendering algorithm or to improve the performance of some operations that take a considerable amount of time in the spatial domain. For a volume of size N x N x N, the complexity of spatial-domain algorithms is of order O(N^3), since all the voxels composing this volume must be visited at least once to render a correct image. Although some algorithms use additional optimization techniques to reduce the number of traversed samples, this optimization is in general data dependent and subject to the size of the datasets. This time complexity limits the usage of spatial-domain rendering algorithms in interactive environments for some applications. In such cases, frequency-domain techniques can be used alternatively.
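The complexity gap behind this argument can be stated explicitly. For an N x N x N volume, the standard figures (assuming a radix-2 FFT) are:

```latex
\underbrace{O(N^3)}_{\substack{\text{spatial-domain}\\\text{rendering, per frame}}}
\quad \text{vs.} \quad
\underbrace{O(N^3 \log N)}_{\substack{\text{3D FFT,}\\\text{once}}}
\; + \;
\underbrace{O(N^2 \log N)}_{\substack{\text{FVR rendering}\\\text{loop, per frame}}}
```

The one-time 3D FFT is more expensive than a single spatial-domain rendering pass, but every subsequent frame costs only O(N^2 log N), which is where frequency-domain rendering wins.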
Frequency-domain volume rendering (FDVR) uses the 3D spectral representation of the volume to compute an image that looks like an X-ray radiograph in O(N^2 log N) time, relying on the projection-slice theorem. It works by transforming the spatial volume into the frequency domain. It then reconstructs a 2D projection at any viewing angle by resampling an extracted projection-slice along a plane perpendicular to the viewing direction, followed by back-transforming this resampled slice to the spatial domain [8, 9].
Obtaining the spectral representation of the volume is the most computationally intensive step in the algorithm due to its high computational complexity. Nevertheless, the rendering algorithm remains extremely efficient because this step is executed only once, in a preprocessing stage. The rendering loop of the pipeline takes much less computing time.
FDVR is a generic technique that can work with any frequency transform to switch between the spatial and frequency domains. Our rendering pipeline is based on the Fourier transform, so the rendering method can be called Fourier volume rendering (FVR).
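The projection-slice theorem underlying FVR can be verified numerically in a few lines. The following minimal NumPy sketch (illustrative only, not the paper's code) checks that the central slice of a volume's 3D spectrum equals the 2D FFT of its line-integral projection:

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))    # toy volume standing in for a CT dataset

# X-ray-style projection along axis 0: on a discrete grid the line
# integral becomes a plain sum over the projection direction.
proj = vol.sum(axis=0)

# Central slice of the 3D spectrum perpendicular to the projection
# direction, i.e. the plane with zero frequency along axis 0.
spectrum = np.fft.fftn(vol)
central_slice = spectrum[0, :, :]

# Projection-slice theorem: the slice equals the 2D FFT of the projection.
assert np.allclose(central_slice, np.fft.fft2(proj))
```

The same identity holds for any axis, which is exactly why extracting one 2D slice of the precomputed 3D spectrum suffices to reconstruct a projection.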
The rest of the paper is organized as follows. Section 2 summarizes the related work in the literature. In Section 3, the rendering algorithm is demystified and our implementation strategy is presented. Section 4 elaborates the results gained by mapping the entire rendering pipeline to run on the GPU and Section 5 concludes the paper.
2. Fourier Volume Rendering Literature
Although the technique was introduced around 20 years ago, the literature behind FDVR in general, and FVR in particular, is quite scarce. In this section, we briefly summarize the most notable contributions that have been presented since then.
In 1992, Dunne et al. introduced the fundamental idea behind frequency-domain volume rendering and its advantages [8]. Malzbender systematically extended the projection-slice theorem to 3D as a basis for volume rendering [9]. Based on the FFT, he presented an implementation of the basic FVR pipeline and briefly discussed some considerations of resampling the frequency domain and their significant effects on the reconstruction quality of the generated projection image. He proposed a Hamming-windowed reconstruction filter to reduce the aliasing that accompanies resampling the frequency spectrum. For the same purpose, Grosso and Ertl alternatively proposed biorthogonal wavelet reconstruction filters [10]. As a drawback, the basic FVR integral does not provide a direct way of modelling emission and scattering in the participating medium. This limitation was a fundamental disadvantage of frequency-domain rendering, reflected as a lack of occlusion in the resulting reconstructions [11]. Levoy partially restored the lost visual cues in the basic pipeline by applying several shading models that are linear combinations of Fourier projections [12]. His extension included depth cueing, directional shading, and Lambertian reflection with spherical illumination. The result of Levoy's work did not exhibit real occlusion, yet it provided acceptable depth and shape cues that can simulate the missing occlusion. One significant disadvantage that limited the usage of his implementation was the need for several copies of the volume, which imposes high memory requirements for the initial preprocessing. In cooperation with Totsuka, the same results were obtained with alternative frequency-domain methods that process the frequency response of the volume data, which dramatically reduced the memory previously required [13].
Until that time, FVR had not gained broad acceptance in the scientific visualization and medical communities due to the lack of illumination models in the Fourier domain. In 2002, Entezari et al. enhanced the projection quality by incorporating various illumination models into the FVR pipeline [14]. The first model was based on the Gamma-corrected hemispherical shading proposed by Scoggins et al. [15], and the second adopted spherical harmonic functions for approximating cubic illumination shading. FVR suffered from reduced reconstruction quality that limited its usefulness in particular medical applications. In [16], a solution for enhancing FVR using contour extraction was proposed. It provided a flexible method for extracting material boundaries on surfaces in the Fourier space. Additionally, it included enhancement of several features for revealing important spatial relationships between interior and exterior structures, making it an attractive tool for improved X-ray-like investigations of a given dataset. Cheng and Ching [17] also presented various methods for designing FVR transfer functions based on Bézier curves and B-splines. Jansen et al. [18] partially accelerated the rendering pipeline by mapping the split-stream FFT onto the GPU. Addressing a major tool in digital radiography, Ntasis et al. [19] provided a web-based Fourier volume renderer for real-time preview of digital reconstructed radiographs (DRRs). Their implementation carefully examined the resampling issues of the frequency domain to generate remote high quality DRRs. Extending the technique beyond volume data on regular grids, FVR has been adapted by Corrigan et al. [20, 21] to deal directly with meshless data. A high-level MATLAB-based FVR framework was presented in [22].
This framework abstracts the complex implementation details of the rendering pipeline to allow imaging researchers to develop image enhancement techniques without prior knowledge of the OpenGL pipeline. Viola et al. implemented a heterogeneous FVR rendering pipeline that uses the GPU to execute the rendering stage relying on fragment shaders [23]. However, this approach is limited when measured against the current capabilities of unified-computing GPU architectures.
Practical implementation of FVR is complicated by two main factors which arise when the projection-slice theorem is applied to discrete sampled data [8]. First, conventional FFT algorithms yield frequency-domain output that is not ideally structured for resampling. To mitigate this effect, it is necessary to add high performance multidimensional fftshift stages to the rendering pipeline to rearrange the data. Moreover, sampling the frequency domain is equivalent to replicating the signal in the spatial domain, which ultimately accounts for the appearance of ghosting artifacts in the reconstructed images. This issue is resolved by zero-padding the volume in the spatial domain and using high-order interpolation filters in the frequency domain. Our CUDA-based implementation addresses all of these issues.
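The first of these issues, the FFT output layout, is easy to see with NumPy (an illustrative sketch, not the paper's code): the zero-frequency (DC) bin sits at index 0, at the "corner" of the array, and fftshift rearranges the bins so that DC sits at the center, where slice extraction expects it.

```python
import numpy as np

# Frequency bins of an 8-point FFT, in conventional (unshifted) order:
# [0, 0.125, 0.25, 0.375, -0.5, -0.375, -0.25, -0.125]
freqs = np.fft.fftfreq(8)

# fftshift reorders the bins to monotonically increasing frequency,
# placing the zero-frequency bin at the center index n // 2.
shifted = np.fft.fftshift(freqs)

assert freqs[0] == 0.0        # DC at the corner before the shift
assert shifted[8 // 2] == 0.0  # DC at the center after the shift
```

The multidimensional fftshift stages in the pipeline perform exactly this rearrangement, once per axis, on the 3D spectrum and on the 2D reconstructed image.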
3. Algorithm and Implementation
The plain FVR algorithm is briefly illustrated to simplify the explanation of the contextual classification of the rendering pipeline. Based on this classification, an efficient strategy has been considered to map this pipeline to run entirely on the GPU.
3.1. Algorithm
The spatial volume must first be shifted by a 3D fftshift operation to set its center at the origin of the 3D space. The frequency spectrum of this volume is then obtained by a forward 3D FFT operation. The resulting spectrum is not centered at the origin of the frequency space. Consequently, another 3D fftshift operation is required to center its zero-frequency component and prepare it for correct slice extraction. Afterwards, a 2D projection-slice passing through the origin of the spectral volume, with a normal parallel to the viewing direction, is carefully extracted. This slice represents the 2D FFT of the desired projection, and thus the reconstructed image can be obtained directly by an inverse 2D FFT operation. To reduce aliasing and ghosting artifacts in the reconstructed image, this projection-slice is resampled in a further processing step. The resampling stage is only mandatory if the desired projection is not orthogonal. After resampling, the extracted slice is back-transformed to the spatial domain by an inverse 2D FFT operation. The resulting image from this inverse transformation is shifted by another 2D fftshift operation to move the center of the image from the edge to the origin of the grid used to display it. Changing the viewing angle implies rotating the 3D spectrum to extract a new projection-slice that corresponds to this angle. The algorithm sequence is graphically depicted in Figure 1.
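For an axis-aligned (orthogonal) projection, where slice extraction needs no interpolation, the steps above can be sketched end to end in NumPy. This is an illustrative reference implementation under that assumption, not the paper's GPU code; fftshift/ifftshift stand in for the shift stages:

```python
import numpy as np

N = 32
vol = np.random.default_rng(1).random((N, N, N))

# Preprocessing: center the volume, take the forward 3D FFT,
# then center the zero-frequency component of the spectrum.
spectrum = np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(vol)))

# Rendering loop: extract the central projection-slice perpendicular
# to axis 0 (the plane through the origin of the centered spectrum).
k_slice = spectrum[N // 2, :, :]

# Back-transform the slice and shift the result to obtain the
# displayable projection image.
image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k_slice))).real

# For this orthogonal view the result matches the direct line-integral
# projection computed in the spatial domain.
assert np.allclose(image, vol.sum(axis=0))
```

An oblique view replaces the simple slice indexing with interpolated sampling of a rotated plane, which is exactly where the resampling stage discussed above becomes mandatory.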
3.2. Pipeline Classification
According to the sequence of operations in this algorithm, the pipeline can be divided into two consecutive stages: a preprocessing stage and a rendering loop. The preprocessing stage is executed only once to prepare the 3D spectrum of the input spatial volume. The rendering loop runs continuously to generate different projections according to the input viewing angle. The function of the preprocessing stage is limited to loading the volume of interest, obtaining its frequency spectrum, and preparing it for slicing. After the preparation of the spectral volume, the rendering loop is executed to generate different projection images by extracting a projection-slice according to the input viewing angle, resampling it, and finally back-transforming it to the spatial domain to generate the reconstructed image.
From another perspective, the pipeline can be split according to functionality into two complementary cores or contexts: a computational context and a rendering context. The two contexts functionally complement each other, but from the sequence point of view they overlap, and one must switch from one context to the other to flow through the pipeline.
3.3. Implementation Strategy
Building the entire FVR pipeline on the GPU directly may be cumbersome and inefficient. In general, an adequate way of constructing GPU-based pipelines is to have a naïve but valid reference implementation of the pipeline on the CPU in advance. With this CPU-based pipeline tested and validated, every stage of it can afterwards be implemented independently on the GPU and then validated to ensure it produces the same results as the CPU-based implementation, until the pipeline works integrally on the GPU. This implementation strategy is a rule of thumb that increases flexibility and efficiency when designing and building GPU pipelines. Our proposed FVR pipeline was built based on this strategy.
According to the aforementioned contextual and functional classification of the FVR pipeline, the CPU is employed for executing the computational context while the GPU can be used to implement the rendering context relying on any graphics application programming interface (API) such as OpenGL or Direct3D. This heterogeneous implementation is easier to handle than a full CPU-based pipeline because the 3D spectral volume can be efficiently represented by 3D OpenGL textures. This efficiency comes from their hardware-acceleration support and built-in interpolation schemes, compared to using a 3D array on the CPU and performing the interpolation step manually.
3.4. Hybrid Pipeline
The reference hybrid implementation of the FVR pipeline has been adopted from a previous single-threaded one discussed in [24]. This implementation is graphically illustrated in Figure 2. It starts its computational context on the CPU by loading a 3D dataset. This volume is then spatially shifted, and its frequency spectrum is obtained by a forward 3D FFT operation. After activating the first OpenGL off-screen rendering context, this spectral volume is mapped to a 3D texture that is allocated on and uploaded to the GPU memory. This texture is intersected with a proxy quad passing through the origin of the frequency domain. The resulting projection-slice is then directed to a frame buffer object (FBO) and packed in a two-component 2D texture attached to this FBO. The contexts are then switched to move back to the computational context on the CPU, where the extracted projection-slice is downloaded from the 2D texture. The communication between the computational and off-screen rendering contexts is shown in Figure 3.
Afterwards, the computational context is resumed by resampling the projection-slice to remove the ghosting artifacts. The resampled slice is then back-transformed by an inverse 2D FFT operation to the spatial domain to produce a shifted image of the projection. This image is rearranged by a 2D fftshift operation. The final reconstructed image is then packed into a 2D OpenGL texture and uploaded via the glTexImage2D command to reside in GPU memory. Finally, the on-screen rendering context is activated and the texture that contains the final projection image is sent to the frame buffer to be displayed. This context is illustrated in Figure 4.
3.5. Hybrid Implementation Bottlenecks
As usual, every naïve approach has accompanying bottlenecks that have to be investigated, either for their complete removal or at least to minimize their performance overhead. The hybrid implementation of this technique clearly lacked interactivity due to several bottlenecks. In this section, the hybrid pipeline is analyzed to suppress its main bottlenecks and to optimize its flow in preparation for mapping it entirely to the GPU, with both its computational and rendering contexts.
A main concern that significantly affects the performance of the computational context is the FFT operations. These operations were expressed relying on the FFTW library [25–29]. The 3D FFT operation takes a considerable amount of time for large input volumes. Although it is executed once during the preprocessing stage, performing this operation on several volumes to reveal any depth cues will introduce a real bottleneck and consequently elongate the preprocessing stage. Additionally, the inverse 2D FFT operation is executed on a per-frame basis. In turn, reducing the time consumed by this operation will be reflected as a significant enhancement in the frame rate.
The performance of the rendering loop is dominated by the resampling stage, which represents the most critical bottleneck in this pipeline. This operation involves four nested for loops: two of them iterate over each dimension of the slice and the other two iterate over the filter kernel. For a large projection-slice, the resampling operation takes a considerable amount of time that ultimately blocks a real-time rendering loop. Additionally, executing the computational context on the CPU and the rendering ones on the GPU requires communication to transfer data back and forth between them. This transfer limits the overall performance of the pipeline to the bandwidth of the communication channel between the CPU and the GPU, which is much lower than the internal bandwidth of the GPU.
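The four-nested-loop structure described above can be sketched as follows. This is a hypothetical helper (names, signature, and the boundary handling are our own, not the paper's code): two loops over the output pixels, two over the filter footprint, giving the O(M^2 K^2) cost that makes the stage so expensive on the CPU.

```python
import numpy as np

def resample_slice(slice2d, coords, kernel, width):
    """Naive resampling of a 2D slice at non-integer positions.
    slice2d : (n, n) source slice.
    coords  : (m, m, 2) array of (y, x) sample positions.
    kernel  : separable 1D interpolation kernel, e.g. a windowed sinc.
    width   : number of filter taps per axis (odd)."""
    m = coords.shape[0]
    n = slice2d.shape[0]
    half = width // 2
    out = np.zeros((m, m), dtype=slice2d.dtype)
    for i in range(m):                          # output rows
        for j in range(m):                      # output columns
            y, x = coords[i, j]                 # non-integer sample position
            yi, xi = int(np.floor(y)), int(np.floor(x))
            for a in range(-half, half + 1):    # filter footprint, rows
                for b in range(-half, half + 1):  # filter footprint, cols
                    yy, xx = yi + a, xi + b
                    if 0 <= yy < n and 0 <= xx < n:
                        out[i, j] += slice2d[yy, xx] * kernel(y - yy) * kernel(x - xx)
    return out
```

With a triangle kernel this reduces to bilinear interpolation, and at exactly integer coordinates it reproduces the input, which makes a convenient sanity check. On the GPU, each output pixel's inner double loop becomes one thread's work, which is why the stage parallelizes so well.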
3.6. Algorithm Mapping to the GPU
Mapping the FVR pipeline to run entirely on the GPU is a significant step that leverages the overall performance of the reconstruction process. This mapping procedure is feasible, yet challenging. Although programming the GPU for executing non-graphics algorithms has been simplified by the introduction of generic GPU APIs like CUDA and OpenCL, it is far from trivial to effectively exploit the titanic computing power provided by the GPU [30].
The hybrid implementation had a flow similar to that of typical OpenGL applications. This makes the communication between the OpenGL contexts and the CPU comparatively simple. In the GPU-based approach, the implementation of the OpenGL contexts remains the same as before, while the computational core of the pipeline is implemented in a CUDA context. Although both contexts run on the GPU, the interoperability between them requires a different mechanism and some workarounds for successful and efficient communication. The issues behind OpenGL interoperability with CUDA are discussed in some detail in [31, 32].
3.7. CUDA Kernels
The first stage of mapping the computational core of the pipeline into a CUDA context is writing device kernels analogous to the respective C functions of the CPU-based computational context. In our mapping, the following kernels have been written:
(1) FFT_SHIFT_3D_REAL, which wraps around the 3D spatial volume.
(2) FFT_SHIFT_3D_COMPLEX, which wraps around the 3D complex spectrum.
(3) FFT_SHIFT_2D_REAL, which wraps around the resulting image from the inverse 2D FFT operation.
(4) RESAMPLE_SLICE, which executes the high-order resampling operation.
(5) REPACK_ARRAY, which replaces the complex array resulting from the 3D CUFFT operation with an alternative 1D array to match the format of the OpenGL 3D spectrum texture.
The fftshift kernels have been adopted from the implementations proposed in [33, 34]. The following FFT functions were designed to encapsulate the FFT plans originally implemented within the CUFFT library [35]:
(1) CUDA_FFT_3D, which executes the forward 3D FFT operation.
(2) CUDA_FFT_2D, which executes the inverse 2D FFT operation.
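The index arithmetic at the heart of the 3D fftshift kernels can be sketched in Python (an illustrative model of the per-element mapping, not the CUDA code of [33, 34]): each element's flat index is decomposed into (z, y, x) coordinates, each coordinate is wrapped by half the side length, and the value is written to the recomposed destination index. On the GPU, each loop iteration below would be one thread.

```python
import numpy as np

def fftshift_3d_flat(src, n):
    """Model of a 3D fftshift kernel on a flat array.
    Assumes a cube of even side n stored in row-major (z, y, x) order."""
    dst = np.empty_like(src)
    half = n // 2
    for idx in range(n * n * n):
        # Decompose the flat index into 3D coordinates.
        z, rem = divmod(idx, n * n)
        y, x = divmod(rem, n)
        # Wrap every coordinate by half the side length (the shift).
        zs, ys, xs = (z + half) % n, (y + half) % n, (x + half) % n
        # Recompose the destination flat index and scatter the value.
        dst[(zs * n + ys) * n + xs] = src[idx]
    return dst
```

Because every element is read and written exactly once with no data dependencies, the operation is embarrassingly parallel, which is what makes a one-thread-per-element CUDA kernel a natural fit.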
3.8. GPUBased Pipeline
In this GPU-accelerated pipeline, shown in Figure 5, the CPU is only used for loading the input volume and controlling the flow of the pipeline. This control includes switching between the different contexts, copying data from the CPU to the GPU, and dispatching OpenGL commands. Otherwise, the entire flow of the pipeline starts and terminates on the GPU. The OpenGL rendering contexts employed in the hybrid pipeline are reused in this implementation to interoperate with the CUDA computational context, with slight modifications to the communication mechanisms between the different contexts.
Once the volume dataset of interest is loaded on the CPU side, it is copied directly to GPU global memory by a cudaMemcpy operation. On the GPU side, a CUDA context is created and activated. Then the FFT_SHIFT_3D_REAL kernel is executed to center the 3D spatial volume. The CUDA_FFT_3D function is then invoked to obtain the 3D frequency spectrum of the input spatial volume. Afterwards, the FFT_SHIFT_3D_COMPLEX kernel is invoked to set the center of the spectral volume at the origin of the k-space. This array will be mapped to the OpenGL off-screen rendering context for performing the projection-slice extraction operation.
This complex array is converted, using the REPACK_ARRAY kernel, to an alternative one that is compatible with a two-component OpenGL 3D texture. Although the origin-aligned spectral volume array is located in GPU memory, it cannot be accessed directly by OpenGL because it resides in the CUDA address space. The array could be read back to the CPU and then uploaded again to the GPU to reside in texture memory, but this round-trip memory transfer is wasteful. It can be avoided by using an OpenGL pixel buffer object (PBO) that is registered in the CUDA memory space. This workaround makes it possible to copy the spectral array directly to a buffer that is shared between the different contexts of the pipeline. This mechanism is illustrated in Figure 6.
After mapping the spectral texture into the OpenGL memory space, we switch contexts to load the OpenGL off-screen rendering context. The 3D spectral texture is bound and intersected with a polygon of the same size as the reconstructed image, and the result of this intersection is directed to the FBO and stored in the attached 2D texture. Once the desired slice is projected into this FBO, the OpenGL off-screen rendering context is terminated and the rendering pipeline switches back to the CUDA context. A reference to the texture attached to the FBO is mapped into the CUDA context using a CUDA array in order to be accessible to subsequent kernel calls. This CUDA array, which carries the projection-slice, is then mapped to a CUDA texture object. The RESAMPLE_SLICE kernel is then invoked, so the resampling operation is performed on this texture object and the results are stored directly into another 1D array in a format compatible with the next CUFFT operation. Afterwards, the CUDA_FFT_2D function is executed on the resampled slice to generate the projection image. The resulting image is shifted, and thus an fftshift operation is applied by invoking the FFT_SHIFT_2D_REAL kernel. The corrected image is ready to be displayed, but it first has to be mapped to an OpenGL 2D texture. This requires mapping the array resulting from the previous fftshift operation to an OpenGL texture using a PBO.
After this mapping operation, the on-screen OpenGL rendering context is activated and the final image is pushed to the frame buffer for display. This context is illustrated in Figure 7. Changing the viewing angle repeats this flow by switching back to the OpenGL off-screen context to extract another projection-slice, going through all the subsequent stages over and over until the termination of the running process.
3.9. GPUBased Implementation Considerations
Although the OpenGL off-screen context is implemented the same way as in the hybrid pipeline, the extracted projection-slice in the frame buffer object can no longer be read back by the CPU. Instead, it is mapped directly into the CUDA memory space by creating another PBO and registering this buffer object with the CUDA memory space. This allows the extracted slice data from the OpenGL off-screen context to be directly accessible from the different CUDA kernels.
Because the texture caches are optimized for 2D spatial locality, reading device memory via texture fetches performs better than reading from global memory [36]. Accordingly, and to exploit the low-latency memory access associated with texture memory, the extracted projection-slice is packed directly into a CUDA array. This array can easily be mapped to a CUDA texture, which is writable from the OpenGL context via the CUDA array and only readable from the CUDA side. This texture leads to a significant speedup in the resampling stage compared to the naïve for-loop implementation in the hybrid pipeline.
The context switching operations were accelerated thanks to the direct connection between the different CUDA and OpenGL contexts relying on the shared buffer objects. These buffer objects are registered within all the contexts to allow direct mapping of the resulting textures from the computational context to the rendering ones. This permits efficient data sharing between the different contexts without any memory copies between the CPU and the GPU at all.
4. Results and Discussion
4.1. Reconstruction Results
Figure 8 shows the resulting radiographs of four medical datasets with different sizes and three intensity scaling factors. All the datasets are organized in regular 3D Cartesian grids [37, 38].
To show the artifacts associated with resampling the extracted projection-slice, three different interpolation schemes have been applied: point-sampling, trilinear interpolation, and windowed interpolation. Figure 9 reflects how the selection of the interpolation scheme can significantly affect the rendering quality of the projection image. The projection reconstructed in Figure 9(a) does not exhibit any ghosts because it is orthogonal and none of the replicas are present in the view. In Figure 9(b), the ghosting artifacts are very apparent due to the usage of the point-sampling filter, which clamps the missing sample to the nearest available value. In Figure 9(c), the intensities of the replicas are reduced when the trilinear interpolation scheme is applied. In fact, the nearest-neighbour and trilinear interpolation schemes have almost the same performance for resampling small projection-slices, but for larger volumes, the performance cost of trilinear interpolation is somewhat higher. However, this overhead does not significantly degrade the performance of the entire pipeline.
Figure 10 shows the result of zero-padding the spatial volume to suppress the overlap between the central projection image and the surrounding replicas in the spatial domain. It should be noted that the zero-padding operation comes with no performance overhead, although it dramatically increases the memory requirements: by a factor of 8 for 100% zero-padding.
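The factor of 8 follows directly from the geometry: 100% zero-padding doubles every dimension of the volume, and 2^3 = 8 in 3D. A short NumPy check (with a hypothetical 64^3 volume) makes this concrete:

```python
import numpy as np

n = 64
vol = np.zeros((n, n, n), dtype=np.float32)

# 100% zero-padding: n/2 zeros on each side of every axis,
# so each side length doubles from n to 2n.
padded = np.pad(vol, n // 2)

assert padded.shape == (2 * n, 2 * n, 2 * n)
assert padded.size == 8 * vol.size   # 2^3 = 8x the memory footprint
```

This is why zero-padding, although computationally free, quickly becomes the dominant memory cost for large volumes.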
To minimize the artifacts associated with the nature of the technique, the rendering pipeline should provide a high-order interpolation filter combined with a preprocessing zero-padding operation. A practical filter that can perform this operation is a Hamming-windowed interpolation filter of width 5. Figure 11 shows the result of combining this reconstruction filter with zero-padding of the spatial volume on an oblique projection of the visible male dataset.
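A Hamming-windowed sinc tap can be sketched as follows. This is a generic construction of such a filter; the paper's exact filter parameters are not reproduced here, so treat the window scaling as an assumption:

```python
import numpy as np

def hamming_sinc(t, width=5):
    """One tap of a Hamming-windowed sinc interpolation filter.
    t is the (possibly fractional) distance from the sample position;
    the window spans |t| <= width / 2 and is zero outside."""
    t = np.asarray(t, dtype=float)
    window = np.where(np.abs(t) <= width / 2,
                      0.54 + 0.46 * np.cos(2.0 * np.pi * t / width),
                      0.0)
    return np.sinc(t) * window   # np.sinc is the normalized sinc sin(pi t)/(pi t)
```

At integer offsets the taps are [0, 0, 1, 0, 0]: the filter is interpolating (it passes existing samples through unchanged), while the Hamming window tapers the sinc tails to suppress the ringing that a truncated sinc would introduce.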
4.2. Performance Analysis
On the performance side, and in order to highlight the improvements gained by porting the computational context of the pipeline from the CPU to an alternative CUDA context, we analysed the profiling results for every stage individually and then demonstrate and account for the overall speedups gained for the entire pipeline.
The rendering pipeline has been benchmarked on a workstation equipped with an Intel Core i7-4770 CPU running at 3.4 GHz with 8 MBytes of cache and 12 GBytes of DDR3 memory. The application was compiled on Ubuntu 14.04 using GCC 4.7.3 and the NVIDIA CUDA compiler nvcc 6.0.1. Two GPUs have been selected to profile the accelerated pipeline. The first one is an NVIDIA GeForce GT 640, which is considered a mid-range commodity GPU that currently costs almost $100. This GPU has 384 CUDA cores and four GBytes of 128-bit DDR3 memory. It is referred to in the text as GPU 1. The other GPU is an NVIDIA QUADRO K5000. This one is much more powerful than the first GPU and can be ranked as state-of-the-art, since it has 1536 CUDA cores and four GBytes of GDDR5 memory. It is referred to later in the text and the benchmarks as GPU 2.
Additionally, on the software side, the application was tested with CUDA 6.0. The benchmarks were generated for three different volume sizes. Although higher speedups can be achieved if the volume size goes beyond this limit, larger volumes unfortunately cannot fit in the memory of either of the employed GPUs. Figure 12 shows the benchmarking results for every stage in the pipeline to elaborate the difference between the CPU performance and that of the two GPUs.
Table 1 summarizes the average execution times for the different stages of the pipeline and aggregates the entire pipeline performance for a benchmarked volume dataset. We also reference the hybrid implementation to show, relatively, that our implementation has gained a speedup of 117x for the entire pipeline, together with a substantial speedup for the rendering loop. The GPU-accelerated pipeline requires uploading the spatial volume to the device global memory. This step takes on average less than 100 and 40 milliseconds on GPU 1 and GPU 2, respectively. Thanks to the interoperability mechanisms between CUDA and OpenGL, there are no further data transfer operations between the different stages of the rendering pipeline.

These CPU timings could be reduced by a multithreaded implementation. However, building a multithreaded OpenGL application is quite complex, and thus a pure GPU implementation is much more appropriate to consider than constructing a multithreaded application merely for benchmark comparison.
5. Conclusion and Future Work
This paper presented a high performance implementation of the Fourier volume rendering algorithm on CUDA-capable GPUs. The literature of FVR was briefly covered, followed by a detailed explanation of the plain rendering algorithm introducing the notions of computational and rendering contexts. Our implementation strategy considered building, in advance, a reference hybrid rendering solution where the computational context is executed on the CPU and the rendering one on the GPU using OpenGL. This reference pipeline was then ported in a stepwise fashion to run entirely on the GPU by mapping the computational context to run within a CUDA context and redesigning the rendering contexts to communicate between CUDA and OpenGL, optimizing the performance of the rendering loop. The pure GPU implementation outperformed the hybrid one by a factor of 117x for the entire pipeline, along with a substantial speedup for the rendering loop. The GPU-accelerated pipeline will be extended in the future to include depth cues and different shading models that can improve the visual appearance of the resulting digital radiographs. A client-server distributed model of the pipeline will also be considered to allow rendering large-scale datasets on multiple computing nodes.
Acronyms
1D:  One-dimensional 
2D:  Two-dimensional 
3D:  Three-dimensional 
API:  Application programming interface 
CPU:  Central processing unit 
CUDA:  Compute unified device architecture 
DRR:  Digital reconstructed radiograph 
FBO:  Frame buffer object 
FDVR:  Frequency domain volume rendering 
FFT:  Fast Fourier transform 
FVR:  Fourier volume rendering 
GPU:  Graphics processing unit 
PBO:  Pixel buffer object.
Conflict of Interests
The authors have declared no potential conflict of interests with respect to the authorship and/or publication of this paper.
Acknowledgment
The authors would like to thank the anonymous reviewers for their kind feedback and constructive comments, which helped improve the quality of the paper.
References
T. T. Elvins, “A survey of algorithms for volume visualization,” ACM SIGGRAPH Computer Graphics, vol. 26, pp. 194–201, 1992.
M. Ikits, J. Kniss, A. Lefohn, and C. Hansen, “Volume rendering techniques,” in GPU Gems, chapter 39, Addison-Wesley, 2007.
B. Preim and D. Bartz, Visualization in Medicine: Theory, Algorithms, and Applications, The Morgan Kaufmann Series in Computer Graphics, Morgan Kaufmann, Boston, Mass, USA, 1st edition, 2007.
G. Dougherty, Digital Image Processing for Medical Applications, chapter 12, Cambridge University Press, Cambridge, UK, 2009.
K. H. Kim, M. J. Kwon, S. M. Kwon, J. B. Ra, and H. W. Park, “Fast surface and volume rendering based on shear-warp factorization for a surgical simulator,” Computer Aided Surgery, vol. 7, no. 5, pp. 268–278, 2002.
J. K. Udupa, “Surface versus volume rendering: a comparative assessment,” in Proceedings of the 1st Conference on Visualization in Biomedical Computing, pp. 83–91, May 1990.
C. D. Hansen and C. R. Johnson, The Visualization Handbook, chapter 7, Elsevier, New York, NY, USA, 2005.
S. Dunne, S. Napel, and B. Rutt, “Interactive display of volumetric data by fast Fourier projection,” Computerized Medical Imaging and Graphics, vol. 16, no. 4, pp. 237–251, 1992.
T. Malzbender, “Fourier volume rendering,” ACM Transactions on Graphics, vol. 12, no. 3, pp. 233–250, 1993.
R. Grosso and T. Ertl, “Biorthogonal wavelet filters for frequency domain volume rendering,” in Visualization in Scientific Computing '95: Proceedings of the Eurographics Workshop in Chia, Italy, May 3–5, 1995, pp. 81–95, Springer, Vienna, Austria, 1995.
T. Theußl, “An implementation of frequency domain volume rendering using the Hartley transform,” in The Course of Special Topics in Computer Graphics, 1999.
M. Levoy, “Volume rendering using the Fourier projection-slice theorem,” in Proceedings of the Conference on Graphics Interface, Technical Report CSL-TR-92-521, Stanford University, pp. 61–69, May 1992.
T. Totsuka and M. Levoy, “Frequency domain volume rendering,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '93), pp. 271–278, 1993.
A. Entezari, R. Scoggins, T. Möller, and R. Machiraju, “Shading for Fourier volume rendering,” in Proceedings of the IEEE/ACM SIGGRAPH Symposium on Volume Visualization and Graphics, pp. 131–138, Boston, Mass, USA, October 2002.
R. K. Scoggins, R. Machiraju, and R. J. Moorhead, “Approximate shading for the re-illumination of synthetic images,” in Proceedings of IEEE Visualization, pp. 379–386, October 2001.
Z. Nagy, M. Novotni, and R. Klein, “Enhancing Fourier volume rendering using contour extraction,” in Proceedings of the 7th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '04), pp. 470–477, September 2004.
C.-C. Cheng and Y.-T. Ching, “Real-time adjustment of transfer function for Fourier volume rendering,” Journal of Electronic Imaging, vol. 20, no. 4, Article ID 043004, 2011.
T. Jansen, B. von Rymon-Lipinski, N. Hanssen, and E. Keeve, “Fourier volume rendering on the GPU using a split-stream-FFT,” in Proceedings of the 9th International Fall Workshop on Vision, Modeling, and Visualization (VMV '04), pp. 395–403, 2004.
E. Ntasis, T. A. Maniatis, and K. S. Nikita, “Fourier volume rendering for real time preview of digital reconstructed radiographs: a web-based implementation,” Computerized Medical Imaging and Graphics, vol. 26, no. 1, pp. 1–8, 2002.
A. Corrigan and J. Wallin, “Visualization of meshless simulations using Fourier volume rendering,” in Proceedings of the ECCOMAS Thematic Conference on Meshless Methods, pp. 65–70, July 2007.
A. Corrigan, J. Wallin, and M. Vesenjak, “Visualization of meshless simulations using Fourier volume rendering,” in Progress on Meshless Methods, vol. 11 of Computational Methods in Applied Sciences, pp. 291–305, Springer, 2009.
M. Abdellah, A. Eldeib, and A. Sharawi, “Matlab-based Fourier volume rendering framework,” in Proceedings of the 7th Cairo International Biomedical Engineering Conference (CIBEC '14), December 2014.
I. Viola, A. Kanitsar, and M. E. Gröller, “GPU-based frequency domain volume rendering,” in Proceedings of the 20th Spring Conference on Computer Graphics (SCCG '04), pp. 55–64, April 2004.
M. Abdellah, A. Eldeib, and A. Sharawi, “Offline large scale Fourier volume rendering on low-end hardware,” in Proceedings of the Cairo International Biomedical Engineering Conference (CIBEC '14), pp. 59–62, Giza, Egypt, December 2014.
M. Frigo and S. G. Johnson, “FFTW,” version 3.3, June 2014, http://www.fftw.org/.
M. Frigo and S. G. Johnson, “The fastest Fourier transform in the west,” Technical Report MIT-LCS-TR-728, Massachusetts Institute of Technology, Cambridge, Mass, USA, 1997.
M. Frigo and S. G. Johnson, “FFTW: an adaptive software architecture for the FFT,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, pp. 1381–1384, IEEE, May 1998.
S. G. Johnson and M. Frigo, “Implementing FFTs in practice,” in Fast Fourier Transforms, C. S. Burrus, Ed., chapter 11, Connexions, Rice University, Houston, Tex, USA, 2008.
M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,” Proceedings of the IEEE, vol. 93, no. 2, pp. 216–231, 2005.
D. B. Kirk and W. W. Hwu, Programming Massively Parallel Processors: A Hands-on Approach, Morgan Kaufmann, San Francisco, Calif, USA, 1st edition, 2010.
J. Stam, “What every CUDA programmer should know about OpenGL,” Recorded Session ID 1055, 2009, http://developer.download.nvidia.com/compute/cuda/docs/GTC09Materials.htm.
R. Farber, CUDA Application Design and Development, chapter 9, Elsevier, New York, NY, USA, 1st edition, 2011.
M. Abdellah, S. Saleh, A. Eldeib, and A. Shaarawi, “High performance multidimensional (2D/3D) FFT-Shift implementation on Graphics Processing Units (GPUs),” in Proceedings of the 6th Cairo International Biomedical Engineering Conference (CIBEC '12), pp. 171–174, Giza, Egypt, December 2012.
M. Abdellah, “cufftShift: high performance CUDA-accelerated FFT-shift library,” in Proceedings of the High Performance Computing Symposium (HPC '14), pp. 5:1–5:8, Society for Computer Simulation International, San Diego, Calif, USA, 2014.
NVIDIA, “cuFFT library,” CUDA Toolkit 6.0.
NVIDIA, CUDA C Best Practices Guide (Design Guide), 2012.
The Volume Library, “Online library for volume visualization datasets,” http://lgdv.cs.fau.de/External/vollib/.
“Online library for medical MR and CT datasets,” January 2013, http://www.volvis.org/.
Copyright
Copyright © 2015 Marwan Abdellah et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.