Computational and Mathematical Methods in Medicine
Volume 2014, Article ID 523862, 14 pages
http://dx.doi.org/10.1155/2014/523862
Research Article

3D Data Denoising via Nonlocal Means Filter by Using Parallel GPU Strategies

Department of Mathematics and Applications “R. Caccioppoli”, University of Naples “Federico II”, Via Cintia, 80126 Napoli, Italy

Received 4 March 2014; Revised 2 May 2014; Accepted 2 May 2014; Published 16 June 2014

Academic Editor: Lev Klebanov

Copyright © 2014 Salvatore Cuomo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Nonlocal Means (NLM) algorithm is widely considered a state-of-the-art denoising filter in many research fields. Its high computational complexity has led researchers to develop parallel programming approaches and to use massively parallel architectures such as GPUs. In recent years, GPU devices have made it possible to achieve reasonable running times by filtering 3D datasets slice-by-slice with a 2D NLM algorithm. In our approach we design and implement a fully 3D Nonlocal Means parallel approach, adopting different algorithm mapping strategies on GPU architectures and a multi-GPU framework, in order to demonstrate its high applicability and scalability. The experimental results we obtained encourage the usability of our approach in a large spectrum of applicative scenarios such as magnetic resonance imaging (MRI) or video sequence denoising.

1. Introduction

Image denoising is one of the most common tasks of image processing. Several techniques have been developed in the last decades to face the problem of removing noise from images while preserving small structures from excessive blurring (see [1] for a review). All those schemes share the belief that an improved value of a given image point can be expressed as a function of the image itself.

One of the best performing and most robust denoising approaches is the Nonlocal Means (NLM) filter, introduced in 2005 by Buades et al. [2]. Since its first appearance, the family of NLM algorithm and implementation variants has grown enormously (just to mention some of the most relevant improvements, see [3–10]); nevertheless, all of them assume that the restoring function for a given point is a mean of all the image values, largely weighted according to the radiometric similarity between values and only weakly tied to a spatial proximity criterion.

The result is a general-purpose denoising scheme, whose performance is widely accepted to be better than that of previous state-of-the-art algorithms, such as total variation, wavelet thresholding, or anisotropic filtering [11]. In particular, it has been shown that the NLM filter guarantees the homogeneity of flat zones, preserves edges and fine structures, and transforms white noise into white noise, thus avoiding the introduction of artifacts and spurious correlated signal [1]. Unsurprisingly, the NLM algorithm is simple but computationally very heavy, because denoising each pixel requires a high number of elementary operations; even some fast versions of the scheme are quite demanding on 2D images and almost daunting on 3D datasets. This huge computational demand has recently been addressed by using accelerated hardware [12, 13], graphics processing units (GPUs) in particular.

GPUs are massively parallel architectures that can deliver impressive performance improvements but at the same time require a deep understanding of the underlying architecture and ad hoc thread-based algorithms. Obtaining reasonable execution times for the NLM algorithm allows its implementation in a large spectrum of applicative scenarios such as magnetic resonance imaging (MRI) or video sequence denoising where, especially in the latter, the input datasets can reach a considerable size. Moreover, for 3D datasets, for example, in the context of MRI, fully 3D filters are more appropriate than a 2D slice-by-slice filtering approach, since they exploit all the information contained in the image. To the best of our knowledge, although there are several 2D GPU-based NLM versions [14–16], the 3D version of the NLM filter has been poorly investigated in terms of both implementation and performance on GPUs.

In this paper we design and implement a fully 3D Nonlocal Means parallel approach based on the Compute Unified Device Architecture (CUDA) [17] that relies on the GPU architecture. Moreover, we adopt a simple but effective strategy to avoid GPU memory bound issues due to oversized 3D datasets, for example, in a video sequence denoising scenario. We report and analyze the performance of our implementation for several 3D datasets. The parallelization strategy of the filter via GPUs gives clinically feasible MRI denoising execution times with a significant increase in speed.

The plan of the paper is as follows. In Section 2 we briefly describe the NLM algorithm. In Section 3 we provide the implementation details. In Section 4 we give a brief review of related work, and in Section 5 we present and discuss the results of our work. Finally, in Section 6 we draw conclusions and outline future work.

2. Theory

2.1. General Description

An $n$-D image can be considered as a real function $u:\Omega\to\mathbb{R}$ with a bounded support $\Omega\subset\mathbb{R}^n$. The NLM filter [2] is a class of endomorphisms of the image space, identified by 2 parameters ($a$ and $h$), that acts as follows:

$$\mathrm{NLM}_{a,h}[u](\mathbf{x}) = \frac{1}{C(\mathbf{x})}\int_{\Omega} w_{a,h}(\mathbf{x},\mathbf{y})\,u(\mathbf{y})\,d\mathbf{y}, \qquad (1)$$

where

$$w_{a,h}(\mathbf{x},\mathbf{y}) = \exp\left(-\frac{1}{h^2}\int G_a(\mathbf{t})\,\bigl|u(\mathbf{x}+\mathbf{t})-u(\mathbf{y}+\mathbf{t})\bigr|^2\,d\mathbf{t}\right) \qquad (2)$$

and $C(\mathbf{x})=\int_{\Omega} w_{a,h}(\mathbf{x},\mathbf{y})\,d\mathbf{y}$ is a normalizing factor, $G_a$ being a Gaussian kernel. The intensity of a given point of the new image is a mean of the intensities of the original image, according to a weight function that disregards any explicit criterion of spatial proximity and only considers a measure (ruled by $h$) of self-similarity between windows of radius $a$ centered on each point (radiometric proximity).

If the image is defined on a discrete, regular grid

$$\Omega_d = \{\mathbf{x}_i\}_{i=1}^{\sharp\Omega_d} \subset \mathbb{Z}^n, \qquad (3)$$

from (1)-(2) it follows that the filtered dataset is given by

$$\mathrm{NLM}_{a,h}[u](\mathbf{x}_i) = \frac{1}{C(\mathbf{x}_i)}\sum_{\mathbf{x}_j\in\Omega_d} w_{a,h}(\mathbf{x}_i,\mathbf{x}_j)\,u(\mathbf{x}_j), \quad w_{a,h}(\mathbf{x}_i,\mathbf{x}_j) = \exp\left(-\frac{\sum_{\mathbf{t}} G_a(\mathbf{t})\,\bigl|u(\mathbf{x}_i+\mathbf{t})-u(\mathbf{x}_j+\mathbf{t})\bigr|^2}{h^2}\right), \qquad (4)$$

with the normalization

$$C(\mathbf{x}_i) = \sum_{\mathbf{x}_j\in\Omega_d} w_{a,h}(\mathbf{x}_i,\mathbf{x}_j). \qquad (5)$$

Moreover, from (4)-(5) it follows that the complexity of the filter is $O\bigl(\sharp\Omega_d^{\,2}\cdot(2a+1)^n\bigr)$.

2.2. Actual Algorithm

Both computational issues and the convenience of introducing a geometric proximity criterion, in addition to the pure radiometric distance measure, led to a change in the original version of the NLM filter [6].

Therefore, given a search radius $r$, for each voxel located at $\mathbf{x}_i$ we define a search box as

$$S_i(r) = \{\mathbf{x}_j\in\Omega_d : \|\mathbf{x}_j-\mathbf{x}_i\|_\infty \le r\}. \qquad (6)$$

The search box associated with the $i$th voxel defines the ensemble of voxels whose intensities will be available in the following for restoring (denoising) the intensity $u(\mathbf{x}_i)$, thus reducing the search freedom of (4) (in that case, $S_i(r)\equiv\Omega_d$).

Analogously, given a similarity radius $s$, for each voxel $\mathbf{x}_j$ within a given search box $S_i(r)$, we can define a similarity box

$$B_j(s) = \{\mathbf{x}_k\in\Omega_d : \|\mathbf{x}_k-\mathbf{x}_j\|_\infty \le s\}. \qquad (7)$$

In this case, $B_j(s)$ plays the role of $G_a$ in (4), provided that the original smooth Gaussian kernel is replaced by a binary cut-off.

Finally, the denoised image is

$$\mathrm{NLM}[u](\mathbf{x}_i) = \frac{\sum_{\mathbf{x}_j\in S_i(r)} w(\mathbf{x}_i,\mathbf{x}_j)\,u(\mathbf{x}_j)}{\sum_{\mathbf{x}_j\in S_i(r)} w(\mathbf{x}_i,\mathbf{x}_j)}, \quad w(\mathbf{x}_i,\mathbf{x}_j) = \exp\left(-\frac{\sum_{\mathbf{x}_k\in B(s)} \bigl|u(\mathbf{x}_j+\mathbf{x}_k)-u(\mathbf{x}_i+\mathbf{x}_k)\bigr|^2}{h^2}\right), \qquad (8)$$

where it results that the algorithm complexity is $O\bigl(\sharp\Omega_d\cdot(2r+1)^n\cdot(2s+1)^n\bigr)$.

The filter strength, which is determined by $h$, can be tuned to obtain an optimized denoising, independently of the search radius $r$ and of the standard deviation of the noise $\sigma$, where $\beta$ is an adimensional constant to be tuned [6]:

$$h^2 = 2\beta\sigma^2\,\sharp B(s). \qquad (9)$$

In Algorithm 1 the pseudocode of the NLM filter is shown.

Algorithm 1: Pseudocode of the NLM algorithm.

3. Parallel GPU Implementation

General-purpose computing on graphics processing units (GPGPU) is the use of GPUs to perform highly parallelizable computations that would normally be handled by CPUs. Programming with GPUs requires both a deep understanding of the underlying computing architecture and a massive rethinking of existing CPU-based algorithms. The basic idea behind GPGPU is to handle the sequential code on the CPU and process the parallel code on the GPU, with the CPU acting as the coordinator of the data processing. Additionally, the use of multiple GPUs in one system, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.

We implement the 3D NLM filter on the NVIDIA parallel computing architecture, which consists of a set of cores, or scalar processors (SPs), performing simple mathematical operations. SPs are organized into streaming multiprocessors (SMs), each of which has a shared memory, a shared L1-cache, and several other units. An SM executes one or more thread blocks, while CUDA cores and the other units execute thread instructions; in particular, each SM executes threads in groups of 32, called warps [17]. In Figure 1 we show the NVIDIA Fermi architecture, where each SM (vertical rectangular strip) has scheduler and dispatch units (orange portion), execution units (green portion), and a configurable memory of 64 KB (light blue portions), which consists of a register file, an internal shared memory, and an L1 cache. This memory is configurable as 16 KB (or 48 KB) of shared memory and 48 KB (or 16 KB) of L1 cache. Moreover, there are 16 load/store (LD/ST) units per SM and 4 special function units (SFUs) (shaded green portions) available to handle transcendental and other special operations such as sin, cos, exp, and rcp (reciprocal).

Figure 1: View of the NVIDIA Fermi GPU architecture.
3.1. GPU Thread Strategy Mapping

As shown in Algorithm 1, the computational factor that influences the overall performance of this algorithm is represented by a nested iteration structure (see the statement at line 1). A basic serial implementation is presented in Algorithm 2.

Algorithm 2: C code of NLM algorithm.
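Since the original listing is not reproduced here, the following is a minimal serial C sketch consistent with the nested-loop structure the text refers to; all names (nlm3d_cpu, u, v, and so on) are illustrative rather than the authors' original code.

```c
/* A minimal serial sketch of the 3D NLM loops, following (6)-(8):
   for every voxel, average over its search box S_i(r), weighting by the
   distance between similarity boxes B(s). Illustrative names throughout. */
#include <math.h>
#include <stddef.h>

/* row-major access: x fastest, then y, then z */
static inline float at(const float *u, int nx, int ny, int x, int y, int z)
{
    return u[((size_t)z * ny + y) * nx + x];
}

void nlm3d_cpu(const float *u, float *v, int nx, int ny, int nz,
               int r,   /* search radius      */
               int s,   /* similarity radius  */
               float h) /* filter strength    */
{
    for (int z = 0; z < nz; z++)     /* the three outermost loops scan   */
    for (int y = 0; y < ny; y++)     /* the whole 3D image: this is the  */
    for (int x = 0; x < nx; x++) {   /* structure unrolled on the GPU    */
        float num = 0.0f, den = 0.0f;
        for (int dz = -r; dz <= r; dz++)   /* search box S_i(r) */
        for (int dy = -r; dy <= r; dy++)
        for (int dx = -r; dx <= r; dx++) {
            int xj = x + dx, yj = y + dy, zj = z + dz;
            if (xj < 0 || xj >= nx || yj < 0 || yj >= ny || zj < 0 || zj >= nz)
                continue;
            float d2 = 0.0f;               /* distance of boxes B(s) */
            for (int kz = -s; kz <= s; kz++)
            for (int ky = -s; ky <= s; ky++)
            for (int kx = -s; kx <= s; kx++) {
                int xa = x + kx, ya = y + ky, za = z + kz;
                int xb = xj + kx, yb = yj + ky, zb = zj + kz;
                if (xa < 0 || xa >= nx || ya < 0 || ya >= ny || za < 0 || za >= nz ||
                    xb < 0 || xb >= nx || yb < 0 || yb >= ny || zb < 0 || zb >= nz)
                    continue;
                float diff = at(u, nx, ny, xa, ya, za) - at(u, nx, ny, xb, yb, zb);
                d2 += diff * diff;
            }
            float w = expf(-d2 / (h * h)); /* radiometric weight */
            num += w * at(u, nx, ny, xj, yj, zj);
            den += w;                      /* den >= 1, since w(i,i) = 1 */
        }
        v[((size_t)z * ny + y) * nx + x] = num / den;
    }
}
```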

As we can observe, the nested for loops represent the main bottleneck of the algorithm; accordingly, the first step of our parallelization strategy is to unroll the outermost for loops that, in the serial version of the algorithm, perform the scanning of the 3D image.

With the aim of achieving the best performance of the algorithm and of optimizing the usage of this massively parallel architecture, we apply two different unrolling strategies. In the first one, illustrated in Figure 2, we unroll the for loops that scan the x-axis and y-axis, leaving the for loop on the z-axis (partial unrolling); in this mapping, each thread is in charge of performing the denoising process of all the voxels along its z-direction (see Algorithm 3).

Algorithm 3: CUDA code of NLM algorithm with partial unrolling strategy.
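A minimal CUDA sketch of this mapping is given below; it assumes a __device__ helper nlm_voxel() whose body ports the per-voxel search/similarity loops of the serial sketch above. Kernel name and signature are illustrative, not the authors' listing.

```cuda
/* Assumed device port of the per-voxel loops of the serial sketch above. */
__device__ float nlm_voxel(const float *u, int nx, int ny, int nz,
                           int x, int y, int z, int r, int s, float h);

/* Partial unrolling: the x and y loops become a 2D grid of threads, while
   each thread keeps the serial loop along the z-axis. */
__global__ void nlm3d_partial(const float *u, float *v,
                              int nx, int ny, int nz, int r, int s, float h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   /* x loop unrolled */
    int y = blockIdx.y * blockDim.y + threadIdx.y;   /* y loop unrolled */
    if (x >= nx || y >= ny) return;  /* guard threads beyond the slice  */
    for (int z = 0; z < nz; z++)     /* one thread denoises a whole z column */
        v[((size_t)z * ny + y) * nx + x] =
            nlm_voxel(u, nx, ny, nz, x, y, z, r, s, h);
}
```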

Figure 2: Graphic representation of the partial unrolling strategy.

In the second strategy, illustrated in Figure 3, we unroll all the for loops that scan the 3D image along the x, y, and z directions (full unrolling); in this mapping, each thread is in charge of performing the denoising process of the single voxel associated with it (see Algorithm 4).

Algorithm 4: CUDA code of NLM algorithm with full unrolling strategy.
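A corresponding CUDA sketch of the one-thread-per-voxel mapping, again assuming the nlm_voxel() helper from the previous sketch. Note that a 3D kernel grid is assumed here (available on compute capability 2.0 devices with CUDA 4.0 or later); otherwise the z index can be folded into a 2D grid.

```cuda
/* Assumed device port of the per-voxel loops (same as the previous sketch). */
__device__ float nlm_voxel(const float *u, int nx, int ny, int nz,
                           int x, int y, int z, int r, int s, float h);

/* Full unrolling: the x, y, and z loops are all unrolled over a 3D grid
   of threads, so each thread denoises exactly one voxel. */
__global__ void nlm3d_full(const float *u, float *v,
                           int nx, int ny, int nz, int r, int s, float h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= nx || y >= ny || z >= nz) return;
    v[((size_t)z * ny + y) * nx + x] =
        nlm_voxel(u, nx, ny, nz, x, y, z, r, s, h);
}
```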

Figure 3: Graphic representation of the full unrolling strategy.

These two strategies require different kernel grid configurations in order to take full advantage of the GPU architecture: while the first strategy creates a smaller number of threads, each with a longer computing time, the second one creates a larger grid and therefore a greater number of threads, each with a shorter computing time.

We clearly observe that these implementations are bounded by the GPU global memory size. Since, for a fixed dataset D, we store two objects to manage the overall data (the first one for the input dataset and the second one for the denoised one), the maximum size of available memory is M_av = M/2, where M is the GPU global memory size in bytes. For datasets that exceed M_av we implement an ad hoc function that tunes and splits the data before starting the denoising. In particular, this function performs the following steps:
(1) compute the available memory M_av and the size of the dataset D;
(2) compute the size of each subdataset D_i (with i = 1, ..., n, where n is the number of subdatasets), which depends on the search (r) and similarity (s) window cardinalities;
(3) split the dataset D into the n subdatasets D_i.

The function, in a first phase, splits the input dataset along the z-axis; then the 3D NLM GPU kernel is executed for each D_i, exactly n times; finally, at the end of the denoising process, a gathering phase reassembles the entire dataset by collecting the different denoised parts from each execution of the kernel function. It should be noted that the size of each subdataset is computed taking into account an overlapping zone between two adjacent subdatasets, so that the 3D NLM algorithm always operates on complete search and similarity windows. In Figure 4 the graphic representation of the split along the z-axis is shown, where, to simplify the discussion, we do not consider the overlapped portions; a sketch of this bookkeeping follows the figure.

Figure 4: Graphic representation of the split.
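A minimal sketch of how steps (1)-(3) and the overlapping zone might be computed, under stated assumptions: the free memory is queried with cudaMemGetInfo, two single-precision volumes (input and output) are resident at once, and an overlap of r + s slices per side guarantees complete search and similarity windows at chunk borders. The function plan_split and its parameters are illustrative.

```c
#include <cuda_runtime.h>
#include <stddef.h>

/* Returns the number of subdatasets n (or -1 if even one chunk cannot fit)
   and stores the number of payload slices per chunk, excluding the halo. */
int plan_split(size_t nx, size_t ny, size_t nz, int r, int s,
               size_t *slices_per_chunk)
{
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);        /* step (1): M_av     */
    size_t slice_bytes = 2 * nx * ny * sizeof(float); /* input + output     */
    size_t max_slices  = free_bytes / slice_bytes;    /* slices that fit    */
    size_t overlap     = (size_t)(r + s);             /* halo per side      */

    if (max_slices <= 2 * overlap)                    /* window too large   */
        return -1;
    size_t payload = max_slices - 2 * overlap;        /* step (2): D_i size */
    if (payload > nz)
        payload = nz;                                 /* dataset fits whole */
    *slices_per_chunk = payload;
    return (int)((nz + payload - 1) / payload);       /* step (3): n        */
}
```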

For a better understanding of the entire process and of the different steps that compose the 3D NLM GPU algorithm, we show in Figure 5 its work flow, where the blue colored charts represent the CPU workload and the green colored charts represent the GPU one. The input dataset D is split into n subdatasets D_i by the "tune and split" function. One at a time, each D_i is copied from the CPU (host) to the GPU (device), then it is processed by the 3D NLM kernel, and the resulting denoised subdataset is copied back from the GPU to the CPU. Finally, all the denoised subdatasets are merged in order to obtain the denoised dataset (a minimal driver sketch follows Figure 5).

Figure 5: 3D NLM work flow.
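A minimal host-side sketch of this work flow, with illustrative names and error checking omitted: each subdataset D_i is copied host-to-device, filtered by the partial unrolling kernel sketched earlier, and copied back before the next one is processed.

```c
#include <cuda_runtime.h>
#include <stddef.h>

__global__ void nlm3d_partial(const float *u, float *v, int nx, int ny,
                              int nz, int r, int s, float h); /* sketched above */

/* chunks[i]/denoised[i] are host buffers for subdataset D_i (halo included);
   chunk_nz[i] is its number of slices. */
void denoise_in_chunks(float **chunks, float **denoised, int n_chunks,
                       int nx, int ny, const int *chunk_nz,
                       int r, int s, float h)
{
    for (int i = 0; i < n_chunks; i++) {
        size_t bytes = (size_t)nx * ny * chunk_nz[i] * sizeof(float);
        float *d_u, *d_v;
        cudaMalloc((void **)&d_u, bytes);
        cudaMalloc((void **)&d_v, bytes);
        cudaMemcpy(d_u, chunks[i], bytes, cudaMemcpyHostToDevice);
        dim3 block(128, 1, 1);                       /* strip configuration */
        dim3 grid((nx + 127) / 128, ny, 1);
        nlm3d_partial<<<grid, block>>>(d_u, d_v, nx, ny, chunk_nz[i], r, s, h);
        cudaMemcpy(denoised[i], d_v, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_u);
        cudaFree(d_v);
    }
    /* a final gathering pass (not shown) merges the chunks into D' */
}
```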
3.2. Multi-GPU Implementations

To exploit even larger degrees of parallelism in our 3D dataset denoising, we deploy a new version of the 3D NLM algorithm that is compatible with multi-GPU architectures. Multiple GPUs avoid many of the limitations of single-GPU utilization (e.g., on-chip memory resources) by exploiting the combined resources of multiple devices. Dividing the algorithm across multiple GPUs requires partitioning the computational space so that each GPU operates on a subsection of the data. Although, during all our experiments, we consider only configurations with one or two GPUs, the framework is fully capable of exploiting more than two GPUs.

It is important to underline that the input dataset is partitioned among the global memories of the GPUs by means of the ad hoc function for tuning and splitting described above. Also in this case, we split the input dataset along the z-axis, thereby distributing the slices among the available GPUs; then we allocate on each GPU global memory only the space required to store the assigned subset of the input data. At the end of the denoising process, we perform a gathering phase in order to reassemble the entire dataset, collecting the different denoised parts from each GPU. As we will show in the next section, the transfer time between GPU global memory and host RAM only minimally affects the total computing time of the algorithm.

In Figure 6 we report a simple flowchart that summarizes our multi-GPU strategy. Let g be the number of available GPU units and n the number of subdatasets (with g >= 1 and n >= g). If the sizes of the input and output datasets are bounded by the total GPU global memory size, then n is a multiple of g, with n = g; otherwise n > g. As we can observe, the input dataset D is split into n subdatasets D_i by the "tune and split" function. At this point, the D_i are copied from the CPU to the available GPU units that, independently of one another, perform the denoising of the assigned D_i. Then each resulting denoised subdataset is copied from the specific GPU unit back to the CPU. Finally, all the denoised subdatasets are merged in order to obtain the denoised dataset.

Figure 6: 3D NLM multi-GPU work flow.

Exploiting the multi-GPU support leads to a drastic reduction of the execution times, as we will detail in Section 5.1. This kind of approach allows us to take advantage of a second type of parallelism; indeed, we observe that the denoising of a single slice along the z-axis is independent of the denoising of the other slices along the same axis; therefore it is possible to split the dataset into subsets of contiguous slices that can be processed in parallel by several GPUs.

The basic idea consists in identifying the number of GPU devices on the architecture; this information can be returned by means of a CUDA library function and stored in the variable n_gpus. Then, the third dimension of the image is split among the available GPUs. In Algorithm 5, we report a sketch of the multi-GPU implementation with the partial unrolling strategy. The multi-GPU algorithm with the full unrolling strategy is implemented in a similar way.

Algorithm 5: CUDA multi-GPU code of NLM algorithm with partial unrolling strategy.
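Since the listing is not reproduced here, the following is a minimal sketch of the dispatch logic it describes: subdatasets are assigned round-robin to the devices reported by cudaGetDeviceCount(), kernel launches are asynchronous so the GPUs compute concurrently, and a second pass gathers the results. All names are illustrative.

```c
#include <cuda_runtime.h>
#include <stdlib.h>

__global__ void nlm3d_partial(const float *u, float *v, int nx, int ny,
                              int nz, int r, int s, float h); /* sketched above */

void denoise_multi_gpu(float **chunks, float **denoised, int n_chunks,
                       int nx, int ny, const int *chunk_nz,
                       int r, int s, float h)
{
    int n_gpus = 1;
    cudaGetDeviceCount(&n_gpus);                 /* how many devices exist */
    float **d_u = (float **)malloc(n_chunks * sizeof(float *));
    float **d_v = (float **)malloc(n_chunks * sizeof(float *));

    for (int i = 0; i < n_chunks; i++) {         /* scatter + launch phase */
        cudaSetDevice(i % n_gpus);               /* round-robin assignment */
        size_t bytes = (size_t)nx * ny * chunk_nz[i] * sizeof(float);
        cudaMalloc((void **)&d_u[i], bytes);
        cudaMalloc((void **)&d_v[i], bytes);
        cudaMemcpy(d_u[i], chunks[i], bytes, cudaMemcpyHostToDevice);
        dim3 block(128, 1, 1), grid((nx + 127) / 128, ny, 1);
        nlm3d_partial<<<grid, block>>>(d_u[i], d_v[i], nx, ny,
                                       chunk_nz[i], r, s, h); /* async */
    }
    for (int i = 0; i < n_chunks; i++) {         /* gathering phase */
        cudaSetDevice(i % n_gpus);
        size_t bytes = (size_t)nx * ny * chunk_nz[i] * sizeof(float);
        cudaMemcpy(denoised[i], d_v[i], bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_u[i]);
        cudaFree(d_v[i]);
    }
    free(d_u);
    free(d_v);
}
```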

As data access times in memory can vary depending on the kernel configuration, we focus our attention on testing several configurations, both monodimensional and bidimensional, for the thread block size in which each slice is divided; each thread then processes sequentially the voxels along the third dimension. The workload is divided along the third dimension for multi-GPU configurations. Inside each GPU the workload is divided along the first and second dimensions, in strips (monodimensional configurations) and tiles (bidimensional configurations) of threads. Strips or tiles are allowed to cover entirely or only partially the slice grid. In Figure 7 we report a schematic representation of the data processing subdivision, followed by a sketch of the two launch configurations.

Figure 7: Schematic representation of thread organization inside a single GPU and between GPUs. Thread workload highlights the voxels that will be processed by a single thread. Moreover, threads can be organized in strips or tiles of the specified Block size. The red cube depicts the search window and the smaller blue one depicts the similarity window.
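The two layouts translate into the following illustrative launch configurations (a strip of 128 threads versus a 16 x 16 tile); the kernel is the partial unrolling sketch from Section 3.1, and the helper name is hypothetical.

```c
__global__ void nlm3d_partial(const float *u, float *v, int nx, int ny,
                              int nz, int r, int s, float h); /* sketched above */

/* Launch the kernel over one nx-by-ny slice grid, either as monodimensional
   strips or as bidimensional tiles; each thread walks the z dimension. */
void launch_strip_or_tile(const float *d_u, float *d_v,
                          int nx, int ny, int nz,
                          int r, int s, float h, int use_tile)
{
    dim3 block = use_tile ? dim3(16, 16, 1)    /* tile configuration  */
                          : dim3(128, 1, 1);   /* strip configuration */
    dim3 grid((nx + block.x - 1) / block.x,
              (ny + block.y - 1) / block.y, 1);
    nlm3d_partial<<<grid, block>>>(d_u, d_v, nx, ny, nz, r, s, h);
}
```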

Moreover, we also test the performance impact of L1-cache usage, by using the binary L1-prefer setting, which allows choosing between two possible configurations: 48 KB of shared memory and 16 KB of L1-cache (no L1-prefer), or 16 KB of shared memory and 48 KB of L1-cache (L1-prefer).
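In CUDA this switch can be requested per kernel with cudaFuncSetCacheConfig(); the following is a minimal sketch, assuming the nlm3d_partial kernel from the earlier sketches.

```c
#include <cuda_runtime.h>

__global__ void nlm3d_partial(const float *u, float *v, int nx, int ny,
                              int nz, int r, int s, float h); /* sketched above */

/* Select the Fermi on-chip memory split for the kernel: 48 KB L1 / 16 KB
   shared ("L1-prefer") or 16 KB L1 / 48 KB shared (no L1-prefer). */
void set_l1_prefer(int prefer_l1)
{
    cudaFuncSetCacheConfig(nlm3d_partial,
                           prefer_l1 ? cudaFuncCachePreferL1
                                     : cudaFuncCachePreferShared);
}
```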

The computing environment we used is equipped with 2 Intel Xeon E5620 CPUs (2.4 GHz) and an NVIDIA TESLA S2050 card, based on the Fermi GPU. This device consists of 4 GPGPU units, each with 3 GB of RAM and 448 processing cores working at 1.15 GHz. Moreover, the middleware framework has OS Linux 2.6.18-194.el5, CUDA release 3.2, and C compiler gcc v.4.1.2. The numerical code is implemented using single-precision arithmetic. The CPU system used for comparison is equipped with an Intel Core i5-2500S (Sandy Bridge) at 2.7–3.7 GHz.

4. Related Work

General-purpose computing on graphics processing units (often termed GPGPU or GPU computing) supports a broad range of scientific and engineering applications, including physical simulation, signal and image processing, database management, and data mining [18–22].

Recently, a number of authors have implemented the 2D Nonlocal Means algorithm on a GPU; in this section, we describe representative examples. In [23] a locally constant weight assumption is used in the GPU implementation to speed up the basic algorithm. In [15], a GPU extension of the Nonlocal Means algorithm is proposed to denoise ultrasound images; in this approach, the maximum patch size is limited by the amount of shared memory of the GPU. A technique to incorporate both temporal information and color information into the Nonlocal Means algorithm, in order to restore video sequences, is described in [16]. From a parallel computing point of view, the authors did not report an in-depth analysis of the accelerated algorithm; moreover, the algorithm they consider is basically a bidimensional implementation which, although computationally less expensive, is not able to reach the same denoising quality as the 3D version we consider and implement in this paper.

5. Results and Discussions

In this section we investigate and discuss the performance and the scalability of the proposed GPU algorithm, taking into account also the splitting strategy. To better point out the effective applicability of this implementation, we report, in Section 5.1, a magnetic resonance imaging case study.

5.1. A Case Study: MRI Consistency

As demonstrated in [24, 25], magnetic resonance imaging can take great advantage of the Nonlocal Means filter, since MRI images always suffer from Rician noise. Therefore, in order to verify that the GPU implementation is strictly equivalent to the sequential CPU NLM algorithm, we check the implementation consistency by comparing voxel-by-voxel the images obtained by CPU and GPU denoising (a sketch of such a check follows Figure 8). In Figure 8 we show the 3D NLM filtering results, where the overall simulations are carried out with a 3D brain MRI dataset (http://www.bic.mni.mcgill.ca/brainweb). The differences between the GPU- and CPU-restored images fall within machine precision and are likely due to the arithmetic logic unit precision. Since the differences are negligible (six orders of magnitude smaller than the signal), they do not impair visual inspection.

Figure 8: From left to right the frames show a slice of the original dataset, the CPU-restored image, the GPU-restored image, and the difference between CPU- and GPU-filtered images (enhanced by a scaling factor). From top to bottom: initial, central, and final slices.
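A minimal sketch of such a voxel-by-voxel check (illustrative names): it reports the largest relative CPU/GPU discrepancy, to be compared against the six-orders-of-magnitude gap reported above.

```c
#include <math.h>
#include <stddef.h>

/* Scan both restored volumes and return the worst relative difference;
   a tiny floor on the denominator avoids division by zero on empty voxels. */
float max_relative_diff(const float *cpu, const float *gpu, size_t n)
{
    float worst = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float denom = fabsf(cpu[i]) > 1e-12f ? fabsf(cpu[i]) : 1e-12f;
        float rel = fabsf(cpu[i] - gpu[i]) / denom;
        if (rel > worst)
            worst = rel;
    }
    return worst;
}
```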
5.2. Performance

To demonstrate the advantages and benefits of our parallelization strategy we report several performance experiments. From a memory usage optimization point of view, we focus our attention on the impact of the cache size on the overall execution time and perform several test runs by varying the L1-prefer modality; promising results are shown in Table 1, both for the partial unrolling algorithm and the full unrolling algorithm. Clearly, the L1-prefer choice gives a benefit on larger datasets, with a performance improvement that ranges from a fraction of a percent on the smallest dataset to about 5% on the largest ones. On the other hand, as we can observe in Table 1, forcing the L1-prefer setting benefits only the partial unrolling strategy. These results suggest that the L1 miss rate, even with 16 KB, is low enough to allow high performance processing even on old generation cards with a small amount of cache.

Table 1: L1-prefer switch influence on execution times for the partial and full unrolling algorithm with (128, 1, 1) block size configuration and 3D datasets of normally distributed random numbers.

As we mentioned in the previous section, the kernel grid configuration plays a key role if we want to optimize the executions of the GPU NLM filter. The strip or tile thread subdivision influences the filter performance in terms of computing time, due to the different type of data access. Our experimental results show that an optimal configuration is given by the strip subdivision; in Table 1, both for the partial unrolling algorithm and the full unrolling algorithm, we report the running times of this configuration on 2 GPUs. Adopting these different kinds of grid organization, Tables 2 and 3 (both for the partial unrolling algorithm) and Tables 4 and 5 (both for the full unrolling algorithm) summarize a detailed comparison between the running times of the CPU, single GPU, and multi-GPU implementations of the 3D NLM filter with various thread block sizes. Speed-up values suggest that the bigger the dataset to be filtered, the better the scalability of the implementation. In particular, for dataset sizes typical of MRI clinical practice, the scalability of the proposed implementation is close to ideal. Moreover, the optimal thread configuration seems to be strips of threads between 128 and 256 elements. This result is strongly consistent with NVIDIA guidelines [26] about optimal thread block size. Finally, the obtained results suggest that a strip configuration should be preferred to a tile configuration of the same size, because of sequential memory access.

Table 2: Partial unrolling algorithm on a single GPU unit. Execution times and speed-up values for several block size configurations and 3D datasets of normally distributed random numbers. The search (r) and similarity (s) window radii are fixed.
Table 3: Partial unrolling algorithm on 2 GPU units. Execution times and speed-up values for several block size configurations and 3D datasets of normally distributed random numbers. The search (r) and similarity (s) window radii are fixed.
Table 4: Full unrolling algorithm on a single GPU unit. Execution times and speed-up values for several block size configurations and 3D datasets of normally distributed random numbers. The search (r) and similarity (s) window radii are fixed.
Table 5: Full unrolling algorithm on 2 GPU units. Execution times and speed-up values for several block size configurations and 3D datasets of normally distributed random numbers. The search (r) and similarity (s) window radii are fixed.

In Figure 9 we plot the CPU (blue curve), 1 GPU (red curve), and 2 GPU (green curve) GFlops for the full unrolling algorithm with variable dataset sizes and thread block configurations (see Tables 4 and 5); measuring performance in GFlops is very indicative in areas where floating point calculation is the predominant feature of the computation. It should be remarked that we are able to exploit up to 55% of the single-precision floating point peak performance of the GPU used.

Figure 9: CPU (blue), 1 GPU (red), and 2 GPU (green) GFlops values for the full unrolling algorithm with different dataset sizes and thread block configurations.

Another factor that can strongly influence the performance trend of the algorithm is the size of the search and similarity windows. Hence we investigate the behavior of the full unrolling algorithm, with a 3D dataset of normally distributed random numbers (size 512 × 512 × 128), against the search and similarity window cardinalities. In our experiments we consider search and similarity window sizes larger than the dimensions common in real scenarios. Observing Tables 6 and 7, respectively for executions on 1 GPU unit and on 2 GPU units, we note a high and almost constant speed-up among the various experiments, which makes large-window filter testing feasible in a reasonable time.

Table 6: Full unrolling algorithm on a single GPU unit. Execution times and speed-up values for a 3D dataset of normally distributed random numbers (size 512 × 512 × 128) for several window configurations.
Table 7: Full unrolling algorithm on 2 GPU units. Execution times and speed-up values for a 3D dataset of normally distributed random numbers (size 512 × 512 × 128) for several window configurations.

At the end of this section we draw some practical remarks. To obtain the best hardware performance and results closer to the NVIDIA guidelines [26], some configuration tricks are needed. It is strongly recommended to use the L1-prefer feature in the partial unrolling algorithm; this fact is related to the locality principle of the threads' data accesses. Moreover, across all experiments, we have observed that the optimal thread configuration seems to be strips of threads between 128 and 256 elements.

5.3. GPU NLM Video Denoising

The performance results shown in the previous section encourage the application of this denoising filter to a number of scenarios that were formerly out of reach due to the filter's high computational demand on large datasets. Video sequence denoising is one of them: the increasing resolution and duration of video sequences lead to denoising times that rule out adopting a serial version of the filter. Accordingly, we perform several performance tests on videos differing in resolution and fps, in order to demonstrate the scalability of our splitting approach on huge input datasets. Selecting the most commonly used video formats, PAL (720 × 576), HD (1280 × 720), and FULL-HD (1920 × 1080), we report the execution times of the 3D NLM GPU kernel with the partial unrolling strategy in Table 8, and with the full unrolling one in Table 9.

Table 8: Partial unrolling algorithms for different video formats. Observe that the execution times include the time needed to copy data from CPU to GPU and from GPU to CPU.
Table 9: Full unrolling algorithms for different video formats. Observe that the execution times include the time needed to copy data from CPU to GPU and from GPU to CPU.

To investigate more deeply the scalability of our full unrolling approach, in Figure 10 we fix the third dimension (fps) and tune the first two dimensions (see the related Tables 8 and 9). In this case, the algorithm scales well with every search window configuration. Moreover, in this chart we correlate the ratio between the HD and PAL execution times and the ratio between the FULL-HD and HD execution times; each color represents a search window configuration of the NLM algorithm. In both histograms, the colored cubes have approximately the same size, which indicates that the ratio between the execution times remains almost constant.

Figure 10: Scalability of the full unrolling approach, with the third dimension (fps) fixed.

In Figure 11, we fix the PAL format and tune the fps along the third dimension, in order to investigate the scalability of the full unrolling algorithm for a selected search window of the NLM algorithm.

Figure 11: Scalability of the full unrolling approach, with the PAL format fixed, tuning the fps along the z-axis.

On one axis we have four PAL videos differing in fps, on another axis four search window sizes, and on the third axis the execution times. The lower surface represents the ideal execution times that would be obtained if the algorithm scaled perfectly as the third video dimension (fps) grows. The top surface instead represents the execution times actually obtained by the algorithm. As we can observe, the two surfaces deviate from each other, and this phenomenon grows considerably with the increase of the values along the third dimension. Accordingly, it was necessary to modify the algorithm to increase scalability, adopting the splitting strategy described in the previous section. In detail, by splitting a video sequence into subsequences of 50 fps, we obtain execution times that grow proportionally with the overall number of fps. As an example, considering a video sequence of 100 fps, the execution time of the algorithm will be approximately twice that of a sequence at 50 fps (both in the same format; see Table 10); the time necessary to transfer the data from the CPU to the GPU and back is negligible.

Table 10: Full unrolling algorithm with split for different video formats. Note that the number of splits refers to the same GPU, and the time needed to copy data from CPU to GPU and from GPU to CPU is reported in bold.

Finally, in Figure 12, we report a very encouraging result in terms of scalability: on the x-axis we have 4 FULL-HD videos differing in fps, and on the y-axis we have our GPU NLM denoising execution times. In the chart, the dashed line represents the ideal scalability, the red line the scalability obtained by using the fully unrolled strategy, and the green line the scalability obtained by using the splitting strategy. As we can observe, our splitting strategy applied to the GPU version of the NLM algorithm presents a scalability very close to the ideal one.

Figure 12: Scalability comparison between the fully unrolled strategy, the splitting strategy, and the ideal one, for the FULL-HD video format.

6. Conclusions

The NLM filter is a state-of-the-art denoising algorithm. However, its huge computational load prevents the large-scale diffusion of the most common implementations of the algorithm, in particular for 3D datasets of large size.

In this paper we design and implement a fully 3D Nonlocal Means parallel approach, adopting different algorithm mapping strategies on GPU and multi-GPU architectures in order to demonstrate its high scalability and consequent applicability. We focus our attention on designing and testing several kernel configurations that rely on two different thread usage schemes: bidimensional and tridimensional CUDA kernels adopting a multi-GPU framework. In this way, we identify a set of optimal settings that guarantee high performance in terms of execution time and scalability for a wide spectrum of application scenarios. Moreover, we propose two different case studies: the first simulates an MRI scenario, where the Nonlocal Means algorithm is widely applied to remove the Rician noise in this kind of medical image; the second considers a video sequence, which represents a suitable example of a huge 3D dataset.

The obtained running times and the scalability demonstrate that, for the most common 3D dataset sizes, thanks to our parallelization approaches, we are able to overcome the inapplicability of the Nonlocal Means algorithm caused by its huge computational demand. The experimental scenarios we presented benefit greatly from a parallel approach on GPU architectures; they therefore encourage the exploration of more sophisticated algorithm variants and reduce the gap between previous execution times and acceptable performance for real-time scenarios.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank Drs. G. Guarnieri (ENEA-UTICT-HPC Portici Research Center) and M. Chinnici (ENEA-UTICT-PRA Casaccia Research Center) for support in GPU utilization. The computing resources used for this work have been provided by CRESCO/ENEAGRID High Performance Computing infrastructure and its ITC staff; see http://www.cresco.enea.it for information. CRESCO/ENEAGRID High Performance Computing infrastructure is funded by ENEA, the “Italian National Agency for New Technologies, Energy and Sustainable Economic Development” and by national and European research programs.

References

1. A. Buades, B. Coll, and J. M. Morel, "Image denoising methods. A new nonlocal principle," SIAM Review, vol. 52, no. 1, pp. 113–147, 2010.
2. A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005.
3. S. Zimmer, S. Didas, and J. Weickert, "A rotationally invariant block matching strategy improving image denoising with non-local means," in Proceedings of the International Workshop on Local and Nonlocal Approximation in Image Processing (LNLA '08), pp. 135–142, Lausanne, Switzerland, August 2008.
4. M. Ebrahimi and E. Vrscay, "Examining the role of scale in the context of the non-local-means filter," in Image Analysis and Recognition, vol. 5112 of Lecture Notes in Computer Science, pp. 170–181, 2008.
5. H. Xu, J. Xu, and F. Wu, "On the biased estimation of nonlocal means filter," in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '08), pp. 1149–1152, Hanover, Germany, June 2008.
6. P. Coupé, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, "An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images," IEEE Transactions on Medical Imaging, vol. 27, no. 4, pp. 425–441, 2008.
7. N. Wiest-Daesslé, S. Prima, P. Coupé, S. Morrissey, and C. Barillot, "Rician noise removal by non-local means filtering for low signal-to-noise ratio MRI: applications to DT-MRI," in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2008, vol. 5242 of Lecture Notes in Computer Science, pp. 171–179, 2008.
8. J. V. Manjón, N. A. Thacker, J. J. Lull, G. Garcia-Martí, L. Martí-Bonmatí, and M. Robles, "Multicomponent MR image denoising," International Journal of Biomedical Imaging, vol. 2009, Article ID 756897, 10 pages, 2009.
9. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
10. M. Maggioni, V. Katkovnik, K. Egiazarian, and A. Foi, "Nonlocal transform-domain filter for volumetric data denoising and reconstruction," IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 119–133, 2013.
11. H. Seo, P. Chatterjee, H. Takeda, and P. Milanfar, "A comparison of some state of the art image denoising methods," in Proceedings of the 41st Asilomar Conference on Signals, Systems and Computers (ACSSC '07), pp. 518–522, Pacific Grove, Calif, USA, November 2007.
12. R. Farina, S. Cuomo, and P. de Michele, "A CUBLAS-CUDA implementation of PCG method of an ocean circulation model," in Proceedings of the AIP Conference on Numerical Analysis and Applied Mathematics (ICNAAM '11), vol. 1389, pp. 1923–1926, Halkidiki, Greece, September 2011.
13. F. Piccialli, S. Cuomo, and P. de Michele, "A performance evaluation of a parallel biological network microcircuit in NEURON," International Journal of Parallel, Emergent and Distributed Systems, vol. 4, pp. 15–31, 2013.
14. K. Huang, D. Zhang, and K. Wang, "Non-local means denoising algorithm accelerated by GPU," in Medical Imaging, Parallel Processing of Images, and Optimization Techniques (MIPPR '09), vol. 7497 of Proceedings of SPIE, Yichang, China, October 2009.
15. F. P. X. de Fontes, G. A. Barroso, P. Coupé, and P. Hellier, "Real time ultrasound image denoising," Journal of Real-Time Image Processing, vol. 6, no. 1, pp. 15–22, 2011.
16. B. Goossens, H. Luong, J. Aelterman, A. Pižurica, and W. Philips, "A GPU-accelerated real-time NLMeans algorithm for denoising color video sequences," in Advanced Concepts for Intelligent Vision Systems, vol. 6475 of Lecture Notes in Computer Science, pp. 46–57, 2010.
17. NVIDIA, "NVIDIA CUDA programming guide," Tech. Rep., 2012, http://www.nvidia.com/content/cudazone/download/OpenCL/NVIDIA_OpenCL_ProgrammingGuide.pdf.
18. T. Valkonen and M. Liebmann, "GPU-accelerated regularisation of large diffusion-tensor volumes," Computing, vol. 95, no. 1, pp. 771–784, 2013.
19. F. Piccialli, S. Cuomo, and P. de Michele, "A regularized MRI image reconstruction based on Hessian penalty term on CPU/GPU systems," Procedia Computer Science, vol. 18, pp. 2643–2646, 2013.
20. A. Nasridinov, Y. Lee, and Y.-H. Park, "Decision tree construction on GPU: ubiquitous parallel computing approach," Computing, vol. 96, no. 5, pp. 403–413, 2014.
21. G. Palma, M. Comerci, B. Alfano et al., "3D non-local means denoising via multi-GPU," in Proceedings of the Federated Conference on Computer Science and Information Systems (FedCSIS '13), pp. 495–498, Kraków, Poland, September 2013.
22. S. Cuomo, P. de Michele, R. Farina, and F. Piccialli, "A smart GPU implementation of an elliptic kernel for an ocean global circulation model," Applied Mathematical Sciences, vol. 7, no. 61–64, pp. 3007–3021, 2013.
23. A. Kharlamov and V. Podlozhnyuk, "Image denoising," Tech. Rep., NVIDIA, Santa Clara, Calif, USA, 2007.
24. H. Gudbjartsson and S. Patz, "The Rician distribution of noisy MRI data," Magnetic Resonance in Medicine, vol. 34, no. 6, pp. 910–914, 1995.
25. S. Basu, T. Fletcher, and R. Whitaker, "Rician noise removal in diffusion tensor MRI," in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2006, vol. 9, part 1, pp. 117–125, 2006.
26. NVIDIA, "NVIDIA CUDA C best practices guide," NVIDIA Corporation, 2007–2012, http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html.