Computational Intelligence and Neuroscience / 2021 / Article ID 5577956 | https://doi.org/10.1155/2021/5577956
Research Article | Open Access
Special Issue: Biomedical Applications of Computer Vision using Artificial Intelligence

Liang Wu, Shunbo Hu, Changchun Liu, "Denoising of 3D Brain MR Images with Parallel Residual Learning of Convolutional Neural Network Using Global and Local Feature Extraction", Computational Intelligence and Neuroscience, vol. 2021, Article ID 5577956, 18 pages, 2021. https://doi.org/10.1155/2021/5577956

Denoising of 3D Brain MR Images with Parallel Residual Learning of Convolutional Neural Network Using Global and Local Feature Extraction

Academic Editor: Vahid Rakhshan
Received: 23 Jan 2021
Revised: 15 Apr 2021
Accepted: 21 Apr 2021
Published: 04 May 2021

Abstract

Magnetic resonance (MR) images often suffer from random noise pollution during image acquisition and transmission, which impairs disease diagnosis by doctors or automated systems. In recent years, many noise removal algorithms with impressive performance have been proposed. In this work, inspired by the idea of deep learning, we propose a denoising method named 3D-Parallel-RicianNet, which combines global and local information to remove noise in MR images. Specifically, we introduce a powerful dilated convolution residual (DCR) module to expand the receptive field of the network and avoid the loss of global features. Then, to extract more local information and reduce the computational complexity, we design the depthwise separable convolution residual (DSCR) module to learn the channel and position information in the image, which not only reduces parameters dramatically but also improves the local denoising performance. In addition, a parallel network is constructed by fusing the features extracted from each DCR module and DSCR module to improve the efficiency and reduce the complexity of training a denoising model. Finally, a reconstruction (REC) module constructs the clean image from the estimated noise deviation and the given noisy image. Due to the lack of ground-truth images in real MR datasets, the performance of the proposed model was tested qualitatively and quantitatively on one simulated T1-weighted MR image dataset and then extended to four real datasets. The experimental results show that the proposed 3D-Parallel-RicianNet achieves performance superior to that of several state-of-the-art methods in terms of the peak signal-to-noise ratio, structural similarity index, and entropy metrics. In particular, our method demonstrates powerful abilities in both noise suppression and structure preservation.

1. Introduction

Medical image information plays an increasingly important role in disease diagnosis. However, during the image acquisition process, factors such as patient motion or operator error inevitably introduce strong random noise. This noise not only reduces the resolution of the image but also affects the accuracy of clinical diagnosis [1, 2].

Magnetic resonance (MR) imaging is a widely used medical imaging technology for visualizing human tissues and organs. Unlike CT imaging [3], it poses no radiation hazard, and it achieves multiaspect, multiparameter, high-contrast-resolution images without bone artifacts. However, random noise affects the inspection quality in clinical diagnosis, as well as image processing and analysis tasks such as segmentation, registration, and visualization. Hence, solving the problem of MR image denoising is critical.

The purpose of image denoising is to remove background noise and retain valuable information [4]. Many conventional filtering techniques are often used, such as Wiener filtering [5], bilateral filtering [6], and total variation filtering [7]. Yang and Fei proposed a multiscale wavelet denoising method based on the Radon transform to denoise MR images [8]. Phophalia et al. addressed medical image denoising using rough set theory (RST) [9]. Awate and Whitaker devised a Bayesian denoising method and verified it on diffusion-weighted MR images [10]. Satheesh et al. developed an MR image denoising algorithm using the contourlet transform, which achieved a higher peak signal-to-noise ratio than the wavelet transform [11]. Zhang et al. used an improved singular value decomposition method to denoise simulated and real 3D images; their experiments showed that the method was superior to existing denoising methods [12]. Leal et al. presented a method based on sparse representations and singular value decomposition (SVD) for nonlocally denoising MR images, which prevents blurring, artifacts, and residual noise [13]. In addition, by extending the local region to a nonlocal scheme, the nonlocal means (NLM) strategy was used for MR image denoising [14–16]. Gautam et al. proposed a novel denoising technique for MR images based on the advanced NLM method with the non-subsampled shearlet transform (NSST) [17]. Kanoun et al. proposed an enhanced NLM filter using the Kolmogorov–Smirnov (KS) distance; their experiments demonstrated excellent noise reduction and image-detail preservation [18].

In recent years, the explosive development of deep learning has suggested a new methodology for image denoising. Deep networks can use multiple convolution filters with large receptive fields to automatically extract features and reconstruct high-resolution images. In [19], the authors used a self-encoder to train on image features at different resolutions to achieve adaptive denoising. Zhang et al. exploited denoising convolutional neural networks (DnCNNs) for Gaussian noise removal and achieved excellent performance by using a residual learning strategy [20]. Cherukuri et al. applied a deep learning network that leveraged the prior spatial structure of images to reconstruct high-resolution images [21]. Manjón et al. proposed a novel automatic MR image denoising method by combining a convolutional neural network (CNN) with a traditional filter [22].

Deep learning-based denoising methods can grasp richer contextual information over large regions to improve performance. Very deep architectures can expand the receptive field of the network to capture more global contextual information over large image regions. Liu et al. utilized the multiscale fusion convolution network (MFCN) to perform super-resolution reconstruction of MR images [23]. Pham et al. used a deep 3D CNN model with residual learning to reconstruct MR images [24]. Their model exploited a very deep architecture with a large receptive field to acquire a powerful learning ability. Jiang et al. described a multichannel denoising convolutional neural network (MCDnCNN) that directly learned the denoising process and performed experiments on simulated and real MR data [25]. In [26], Ran et al. suggested a residual encoder-decoder Wasserstein generative adversarial network (RED-WGAN) for MR image denoising. Hong et al. designed a spatial attention mechanism to obtain the area of interest in MR images, which made use of the multilevel structure and boosted the expressive ability of the network [27]. Tripathi and Bag proposed a novel CNN for MR image denoising, consisting of multiple convolutions that captured different image features while separating inherent noise [28]. Li et al. designed a progressive network learning strategy by fitting the distribution of pixel-level and feature-level intensities; their experimental results demonstrated the great potential of the proposed network [29]. Gregory et al. created HydraNet, a multibranch deep neural network architecture that learns to denoise MR images at a multitude of noise levels, and proved the superiority of the network for denoising complex noise distributions compared to some deep learning-based methods [30]. Aetesam and Maji proposed a neural framework for MR image denoising using an ensemble-based residual learning strategy; high metric values and high-quality visual results were obtained on both synthetic and real noisy datasets [31].

In the deep learning denoising tasks reported above, the depth and width of the networks were often increased to capture more contextual information. However, these methods introduce a large number of parameters, which makes the denoising models difficult to train. Some methods learn the Rician noise distribution solely by stacking convolution layers, which easily overlooks local information and leads to unsatisfactory denoising results at some key local anatomical positions.

To address the above shortcomings, this work proposes a novel network, termed 3D-Parallel-RicianNet, for removing noise from MR images. First, to expand the receptive field without introducing more parameters, we design a dilated convolution residual (DCR) module and cascade it to build a subnetwork (DCRNet) that extracts global information. Then, a depthwise separable convolution residual (DSCR) module is designed and used to construct a subnetwork (DSCRNet) that extracts local information. Finally, the features from each module of DCRNet and DSCRNet are merged and cascaded to obtain full-scale mappings between image appearances and noise deviation.

The main contributions of this work are summarized as follows:
(1) DCRNet expands the receptive field to extract rich contextual information by cascading DCR modules, which capture the real Rician distribution over the global area.
(2) DSCRNet uses DSCR modules to focus on local areas of the image and effectively removes local anatomical noise. Each DSCR module of this subnetwork is added to the output of the corresponding DCR module of DCRNet.
(3) 3D-Parallel-RicianNet uses a residual learning mechanism to prevent vanishing and exploding gradient problems.

The remainder of this work is organized as follows. In Section 2, we describe the proposed denoising networks and loss function. Then, in Section 3, we present the experimental tests of our approach on synthetic and real MR noisy data. Additionally, a comparison of our method with state-of-the-art algorithms is provided. Finally, in Section 4, we discuss our conclusions and give future directions.

2. Materials and Methods

2.1. Noise Reduction Model

The MR magnitude image is corrupted by independent Gaussian noise in the real and imaginary parts of the complex image [32–34]. Previous studies suggest that the probability distribution of noisy MR image pixel intensities can be represented as a Rician distribution [35, 36]. Deep learning can ignore the physical process and model this corruption by learning from samples [26]. Hence, the MR image degradation model with noise can be described as

y = x + n,  (1)

where y is the noisy MR image, x is the noise-free image, and n is the deviation between y and x induced by the Rician distribution. According to equation (1), n can be expressed as n = y − x, so the network is employed to train a residual mapping R(y) ≈ n, and we can obtain the clean estimate x̂ = y − R(y). Figure 1 shows that the probability density function (PDF) of noisy MR images varies in global and local regions. It can be seen from the top left image that the noise reduces the quality of the MR image and blurs the boundaries of some tissue structures, which makes it more difficult to recognize image details. Liu et al. pointed out that the PDFs of Rician noise vary spatially in different anatomical regions of brain MR images [37]. Hence, the nonlinear mappings between image appearances and Rician distributions vary in global and local regions. Based on this conclusion, we propose the 3D-Parallel-RicianNet MR image denoising model, which combines feature information from global regions and local regions.
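To make the degradation model concrete, the following NumPy sketch (illustrative; the simulation details and function names are our own, not from the paper) corrupts a magnitude image with Rician noise by perturbing the real and imaginary channels, and shows the residual relation x̂ = y − R(y):

```python
import numpy as np

def add_rician_noise(x, sigma, rng=None):
    """Corrupt a noise-free magnitude image x with Rician noise of level
    sigma by adding independent Gaussian noise to the real and imaginary
    channels and taking the magnitude."""
    rng = np.random.default_rng(rng)
    real = x + rng.normal(0.0, sigma, x.shape)
    imag = rng.normal(0.0, sigma, x.shape)
    return np.sqrt(real ** 2 + imag ** 2)

# Residual formulation: the network learns R(y) ~ n = y - x,
# so a clean estimate is recovered as x_hat = y - R(y).
x = np.full((8, 8), 100.0)                     # toy noise-free patch
y = add_rician_noise(x, sigma=5.0, rng=0)      # noisy observation
n = y - x                                      # deviation the network must predict
x_hat = y - n                                  # a perfect residual prediction recovers x
```

Note that the network is trained to output the deviation n rather than the clean image directly, which is the residual learning strategy used throughout this paper.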

2.2. DCR Module for Global Feature Representation

It is known that contextual information is important for reconstructing corrupted pixels in image denoising. A common way to capture more global contextual information is to expand the receptive field [38]. In reported deep learning denoising tasks, increasing the depth and width of the network enlarges the receptive field. However, adding width may introduce more parameters, which can result in overfitting, and adding depth may lead to vanishing gradients when the network is very deep.

To solve these problems, dilated convolutions have been developed [39]. The dilation rate of the convolution kernel can be controlled to obtain receptive fields of different sizes, as shown in Figure 2. The size of the receptive field F is given by

F = ((k − 1) × r + 1)^d,  (2)

where k is the size of the filter, r is the dilation rate, and d is the dimension (2 or 3) of the image. The receptive field of the convolution operation can thus be expanded by choosing different r, which offers a tradeoff against increasing the depth and width of CNNs. In [40], Peng proposed dilated residual networks with symmetric skip connections (DSNet); the experiments demonstrated that the model was well suited to image denoising, especially for Gaussian noise. Zhang et al. proposed a dual-domain multiscale CNN (DMCNN) for JPEG artifact removal based on dilated convolution, which also proved that dilated convolution has advantages in restoring image quality [41].
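The receptive-field relation above can be checked with a few lines of plain Python (illustrative; the function name is ours):

```python
def receptive_field(k, r, d):
    """Receptive field of a single dilated convolution:
    F = ((k - 1) * r + 1) ** d, with filter size k, dilation rate r,
    and image dimension d (2 or 3)."""
    return ((k - 1) * r + 1) ** d

# A 3x3 kernel with dilation rate 2 covers a 5x5 area in 2D:
assert receptive_field(3, 2, 2) == 25
# With r = 1 it reduces to an ordinary 3x3 convolution:
assert receptive_field(3, 1, 2) == 9
```

Doubling the dilation rate thus roughly doubles the field per axis at no extra parameter cost, which is exactly the tradeoff the DCR module exploits.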

In this study, we construct the DCR module as one component of our 3D-Parallel-RicianNet. It exploits dilated convolutions to extract global features, as shown in Figure 3. The DCR module consists of dilated convolution, residual learning, batch normalization (BN), and the leaky rectified linear unit (LeakyReLU). Residual learning breaks the symmetry of the network, thereby improving its representation ability, and the BN layers improve the generalization ability of the network. Because the ReLU activation function can suffer from vanishing gradients, we use LeakyReLU as the activation function of the network. The input and output of a two-level dilated convolution are connected by a shortcut to construct a DCR module.
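A Keras sketch of such a module follows (the two dilated convolutions, BN, LeakyReLU, and shortcut follow the description above; the function name `dcr_module`, the 2D setting, and the placement of the final activation after the shortcut are our assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def dcr_module(x, filters=16, dilation=2):
    """One DCR-style block: two dilated convolutions, each followed by
    batch normalization and LeakyReLU, with a shortcut from the module
    input added to its output."""
    h = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(x)
    h = layers.BatchNormalization()(h)
    h = layers.LeakyReLU()(h)
    h = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(h)
    h = layers.BatchNormalization()(h)
    h = layers.add([x, h])          # residual shortcut
    return layers.LeakyReLU()(h)

inp = layers.Input(shape=(64, 64, 16))
out = dcr_module(inp, filters=16, dilation=2)
model = tf.keras.Model(inp, out)    # output shape matches the input
```

Because of the `padding="same"` setting, the module preserves spatial size, so modules with different dilation rates can be cascaded freely.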

2.3. DSCR Module for Local Feature Representation

It is very important to recover local fine details in image denoising. When some local features are not well extracted, the local denoising effect is degraded. Recently, depthwise separable convolution (DSConv) has been used in many advanced neural networks, such as Xception [42], MobileNets [43], and MobileNetV2 [44], to replace the standard convolutional layer, aiming to reduce CNN computational cost and to extract local features [45].

DSConv consists of two parts: a depthwise convolution and a pointwise convolution. As shown in Figure 4, the depthwise convolution acts on each input channel separately to extract local features, followed by a pointwise convolution that uses a 1 × 1 convolution to weight the features among channels at every position, efficiently combining local features across different channels. Given an input feature map, the depthwise convolutions first apply one spatial filter per channel to produce an intermediate result, which is then processed into the output feature map by pointwise convolutions that mix the channels with 1 × 1 filters.
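A minimal NumPy implementation of the two steps makes the factorization explicit (valid padding, stride 1; all names and sizes here are illustrative, not the paper's code):

```python
import numpy as np

def depthwise_separable_conv(x, dw_filters, pw_filters):
    """Depthwise separable convolution (valid padding, stride 1).
    x:          (H, W, C_in) input feature map
    dw_filters: (k, k, C_in) one spatial filter per input channel
    pw_filters: (C_in, C_out) 1x1 pointwise weights mixing channels
    """
    H, W, C = x.shape
    k = dw_filters.shape[0]
    oh, ow = H - k + 1, W - k + 1
    mid = np.empty((oh, ow, C))
    # Depthwise step: each channel is filtered independently.
    for c in range(C):
        for i in range(oh):
            for j in range(ow):
                mid[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_filters[:, :, c])
    # Pointwise step: a 1x1 convolution weights the channels at each position.
    return mid @ pw_filters

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6, 4))
dw = rng.normal(size=(3, 3, 4))
pw = rng.normal(size=(4, 8))
y = depthwise_separable_conv(x, dw, pw)   # shape (4, 4, 8)
```

The loops are written for clarity rather than speed; frameworks implement the same factorization with optimized kernels (e.g., Keras `SeparableConv2D`).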

DSConv can extract delicate local features of the image by considering the position and channel information separately. Imamura et al. designed a denoising network for hyperspectral images using DSConv and demonstrated its ability to realize efficient restoration [46]. The advantage of DSConv is that it reduces the number of network parameters and the computational complexity of convolution operations [42–44].
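Counting weights (ignoring biases) makes the parameter savings explicit; the 3 × 3 × 3 kernel and 16 channels below match the modules described later in this paper, while the helper names are illustrative:

```python
def conv_params(k, c_in, c_out, d=2):
    """Weights in a standard convolution with a k^d kernel."""
    return (k ** d) * c_in * c_out

def dsconv_params(k, c_in, c_out, d=2):
    """Weights in a depthwise separable convolution: one k^d depthwise
    filter per input channel plus a 1x1 pointwise channel mix."""
    return (k ** d) * c_in + c_in * c_out

# For 3x3x3 kernels with 16 input and 16 output channels:
std = conv_params(3, 16, 16, d=3)      # 6912 weights
dsc = dsconv_params(3, 16, 16, d=3)    # 688 weights, roughly a 10x reduction
```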

Models designed with dilated convolution can restore image quality globally [40, 41] but can easily ignore local information. To solve this problem, inspired by DSConv, we design the DSCR module to extract local information from MR images, as shown in Figure 5. We adopt the residual strategy and take depthwise separable convolutions as the main building blocks. On the one hand, we stack two depthwise separable convolutions, each followed by a BN layer, to improve the generalization ability of the network. On the other hand, we use another depthwise separable convolution as a shortcut across the module to prevent vanishing gradients.

2.4. The Proposed 3D-Parallel-RicianNet Model

The proposed 3D-Parallel-RicianNet framework consists of a global feature extraction network (DCRNet), a local feature extraction network (DSCRNet), and a reconstruction (REC) module. Under this framework, the pipeline of MR image denoising is composed of three major steps (see Figure 6). First, we apply DCRNet and DSCRNet to extract the global features and local features, respectively. Then, we fuse the global and local features through an addition layer to obtain the real Rician distribution features. Finally, we use the REC module to obtain a predicted clean MR image x̂.

DCRNet. The proposed DCRNet is a cascade of 18 DCR modules with different dilation rates r. The kernel size is 3 × 3 for 2D slices and 3 × 3 × 3 for 3D patches. Dilated convolution with a large r behaves well for low-frequency noise removal, but when r is too large, it is difficult to capture small contextual details, which wastes the receptive field. If r is 1, the operation is the same as a traditional convolution in each channel. In DCRNet, to ensure that all feature maps have the same size as the input, we symmetrically pad zeros around the boundaries before applying the convolution operation. As the number of convolutional layers increases, the receptive field gradually grows. In addition, a gridding problem is known to exist in dilated convolution [47]. To address these issues, considering the size of the input in our experiments, we applied DCR modules with different dilation rates. The dilation rate of each layer is set to 1, 1, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, and 1, giving a final receptive field of 61. Multiscale global features are extracted by using multiple DCR modules with different dilation rates. Each module has 16 filters. This implementation avoids gridding effects and reduces the influence of unrelated information.
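Treating each module as one 3 × 3 dilated convolution with stride 1, the dilation-rate schedule above can be checked to reproduce the reported receptive field of 61 (a sketch; the helper name is ours):

```python
# Dilation-rate schedule of the 18 cascaded DCR modules (from the text).
rates = [1, 1, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1]

def cascade_receptive_field(rates, k=3):
    """Receptive field of stacked dilated convolutions at stride 1:
    each layer widens the field by (k - 1) * r along every axis."""
    rf = 1
    for r in rates:
        rf += (k - 1) * r
    return rf

assert cascade_receptive_field(rates) == 61   # matches the reported value
```

Mixing rates 1, 2, and 3 in repeated groups is also what breaks the periodic sampling pattern behind the gridding artifact.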

DSCRNet. DSCRNet is used to compensate for the local information ignored when expanding the receptive field. It is a cascade of 18 DSCR modules, each with 16 filters.

We fused the features extracted from each module of DCRNet and DSCRNet to gradually realize the complementarity of global and local information. This process particularly helps to preserve critical image features in global regions and local regions. Therefore, the proposed 3D-Parallel-RicianNet model will have better denoising ability than other methods.

REC Module. After a final convolution layer, we obtain the estimated deviation R(y) and then use x̂ = y − R(y) to obtain the predicted clean MR image.

2.5. Loss Function

Our loss function uses the mean squared error (MSE):

L(Θ) = (1/2N) Σ_{i=1}^{N} ||R(y_i; Θ) − (y_i − x_i)||²,  (3)

where x_i is the ith noise-free image, y_i is the corresponding noisy image, and Θ denotes the network parameters. We minimize this loss function so that the learned residual mapping yields the output noise-free image x̂_i = y_i − R(y_i; Θ).
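A NumPy sketch of this residual MSE (the 1/(2N) normalization follows the formula above; the function name and toy batch are illustrative):

```python
import numpy as np

def residual_mse_loss(R_y, y, x):
    """MSE loss on the residual mapping: the network output R(y) is
    trained to match the deviation n = y - x, averaged over a batch
    of N images with a 1/(2N) factor."""
    N = y.shape[0]
    return np.sum((R_y - (y - x)) ** 2) / (2 * N)

x = np.zeros((2, 4, 4))        # toy noise-free batch of N = 2 images
y = x + 0.1                    # constant deviation n = 0.1 everywhere
perfect = y - x                # an oracle residual prediction
assert residual_mse_loss(perfect, y, x) == 0.0
```

In training, `R_y` would be the network's forward pass on `y`; the loss is zero exactly when the predicted residual equals the true deviation.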

3. Experiments Results and Analysis

3.1. Dataset Description

To validate the performance of the proposed 3D-Parallel-RicianNet, extensive experiments were performed on both public simulated and clinical datasets.

For simulated experiments, the BrainWeb dataset [48, 49] was used. In this work, we obtained 18 T1-weighted (T1w) MR images with different noise levels (1%, 3%, 5%, 7%, and 9%). Each image is 181 × 217 × 181 voxels at a resolution of 1 × 1 × 1 mm³. The brain skull is stripped using the skull mask. To further speed up the training process and obtain fewer redundant areas, we cropped the edges of the images to a size of 160 × 192 × 160.

One critical problem of deep learning approaches is weak generalization: networks trained on a dataset from a specific manufacturer or setting may not perform well on a different dataset. The noise in the simulated dataset is assumed to come from a single-coil acquisition system. However, clinical MR image noise comes from multiple coils and follows a noncentral chi distribution under sum-of-squares (SoS) reconstruction. The Rician distribution is a special case of the noncentral chi distribution [50] and varies spatially in real MR images [37].

To verify the generalization ability of the proposed model, we carried out experiments on real datasets. For the first clinical experiment, the well-known IXI dataset [51] was used, which was collected from 3 different hospitals. We randomly selected 100 T1w brain images from the Hammersmith dataset. The image size is , and the voxel resolution is . Sixty images were randomly selected as the training set, 20 images for validation, and the other 20 images for testing. In this dataset, we manually added different levels of Rician noise to simulate the noisy image [26]. The brain skull was stripped by the VolBrain method [52].

For another experiment, we randomly selected 35 T1w images in ADNI [53]. Each of these samples contained voxels with voxel resolution. For the experiment, the original scan was resized to dimensions of . The brain skull was also stripped by the VolBrain method. Due to the lack of knowledge about the noise level in real data, we used the variance-stabilization approach to estimate the Rician noise level of ADNI data, which was approximately 3% [54]. Hence, we selected IXI models trained with a 3% noise level to test ADNI data.

The last dataset comes from the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge [55, 56]. The dataset included 40 abdominal T1w MR images. On average, each volume size is , and the noise level is unknown. We adjusted the image to 256 × 256 × 64 through zero-padding operations to be uniform. To substantiate the robustness and generalization capability of the proposed framework, we employed this dataset for our experiments, splitting it into subsets of 25, 5, and 10 subjects that were used for training, validation, and testing.

3.2. Training Details

We use two strategies for training on the three datasets of BrainWeb, IXI-Hammersmith, and CHAOS: 2D slice-based training and 3D patch-based training. For 2D training, we extracted 2D coronal slices from 3D data in the BrainWeb dataset. We obtained 2880 slices by rotating and mirroring, with 1920 slices for training, 384 slices for validation, and 576 slices for testing. In the IXI-Hammersmith dataset, we cropped the image to 256 × 256 × 128 and tested it in all clinical brain datasets. We extracted 7680 slices for training, 2560 slices for validation, and 2560 slices for testing in the sagittal plane. In the CHAOS dataset, using rotation and mirroring to expand the data, we obtained 6400 sagittal slices for training, 1280 sagittal slices for validation, and 640 sagittal slices for testing.

For patch-based training, 3D data in the BrainWeb dataset was also expanded by rotation and mirroring. To reduce memory burden, we used patches with a size of 64 × 64 × 64 voxels. A sliding window strategy with a stride of was then used to obtain 3675 patches to train the 3D model. Using the same strategy as the BrainWeb dataset, 4500 training patches, 1500 validation patches, and 1500 test patches were extracted from IXI-Hammersmith with a step size of . We used rotating and mirroring to expand the CHAOS dataset before extracting patches and finally obtained 4900 training patches, 980 validation patches, and 490 test patches with a stride of . In the training stage, since the CHAOS dataset did not have clean images and the noise level was unknown, we used the 5% noise model trained by IXI-Hammersmith to estimate clean images as ground truth. In the testing stage, we applied the trained network to patches of the test set. The resultant predictions were averaged in the overlapping regions.
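The sliding-window patch extraction and overlap averaging described above can be sketched in NumPy as follows (function names and the toy 16³ volume are illustrative; in the paper the patches would be passed through the trained network before reassembly):

```python
import numpy as np

def extract_patches(vol, size, stride):
    """Collect cubic patches and their origins with a sliding window."""
    out = []
    for i in range(0, vol.shape[0] - size + 1, stride):
        for j in range(0, vol.shape[1] - size + 1, stride):
            for k in range(0, vol.shape[2] - size + 1, stride):
                out.append(((i, j, k), vol[i:i+size, j:j+size, k:k+size]))
    return out

def reassemble(patches, shape, size):
    """Place patch predictions back and average overlapping regions."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for (i, j, k), p in patches:
        acc[i:i+size, j:j+size, k:k+size] += p
        cnt[i:i+size, j:j+size, k:k+size] += 1
    return acc / np.maximum(cnt, 1)

vol = np.random.default_rng(0).normal(size=(16, 16, 16))
patches = extract_patches(vol, size=8, stride=4)
rec = reassemble(patches, vol.shape, size=8)   # identity "network" round-trip
assert np.allclose(rec, vol)
```

With an identity network the round-trip reproduces the volume exactly, which verifies that the averaging weights are consistent in overlapping regions.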

All training was conducted using a deep learning acceleration computing service, which is configured with a 2.20 GHz Core i7-8750H CPU, an NVIDIA GeForce GTX 1070 (8G) GPU, and 16 GB RAM. All the deep learning models were implemented with the publicly available TensorFlow framework and Keras artificial neural network library. In the training process, the learning rate was set to 1e-3. We used Adam optimization.

3.3. Evaluation Methods

Six kinds of deep learning models were trained: CNN-DMRI [28], RicianNet [29], 2D-DCRNet, 2D-Parallel-RicianNet, 3D-DCRNet, and 3D-Parallel-RicianNet. We compared these six deep learning models with four traditional denoising methods: NLM, BM3D, ODCT3D [57], and PRI-NLM3D [57]. For the NLM method, the fastNlMeansDenoising function is selected, with a filter strength of 15.

Three quantitative metrics were employed to evaluate the denoising performance of these methods. The first was the peak signal-to-noise ratio (PSNR). A high PSNR generally denotes good denoising performance. The second was the structural similarity index (SSIM), which measured the structural similarity between the ground-truth and denoised images. The last one was entropy, which reflected the amount of image information. We used the natural logarithm in the entropy metric.

3.4. Simulated Results

The quantitative results of NLM, BM3D, ODCT3D, PRI-NLM3D, CNN-DMRI, RicianNet, 2D-DCRNet, 2D-Parallel-RicianNet, 3D-DCRNet, and 3D-Parallel-RicianNet on T1w images with different noise levels (1%, 3%, 5%, 7%, and 9%) are illustrated in Tables 1–3.


Table 1: PSNR (dB) results on T1w BrainWeb images at different noise levels.

Methods                  1%       3%       5%       7%       9%
NLM                      34.1381  32.1259  29.9717  29.4599  27.8319
BM3D                     35.3151  33.1937  31.0144  30.4890  28.9093
ODCT3D                   48.1295  36.2756  31.5857  30.5124  28.2452
PRI-NLM3D                49.8923  36.8192  31.9597  30.9418  28.5961
CNN-DMRI                 47.8125  35.4720  30.8958  29.3533  27.2152
RicianNet                43.4145  37.0095  27.4807  27.6777  28.8616
2D-DCRNet                46.8168  38.6786  34.6277  32.4664  30.5232
2D-Parallel-RicianNet    50.9072  41.8099  38.8218  35.9767  34.5207
3D-DCRNet                48.5120  39.1336  35.1466  32.7280  31.0916
3D-Parallel-RicianNet    51.7192  43.9950  40.9218  37.6896  37.1069


Table 2: SSIM results on T1w BrainWeb images at different noise levels.

Methods                  1%      3%      5%      7%      9%
NLM                      0.9669  0.9605  0.9539  0.9521  0.9454
BM3D                     0.9768  0.9725  0.9673  0.9649  0.9584
ODCT3D                   0.9992  0.9959  0.9903  0.9857  0.9778
PRI-NLM3D                0.9995  0.9967  0.9922  0.9889  0.9823
CNN-DMRI                 0.9987  0.9891  0.9727  0.9533  0.9312
RicianNet                0.9606  0.8906  0.9221  0.7246  0.6319
2D-DCRNet                0.9986  0.9900  0.9906  0.9535  0.9346
2D-Parallel-RicianNet    0.9992  0.9933  0.9864  0.9685  0.9722
3D-DCRNet                0.9985  0.9898  0.9587  0.9561  0.9373
3D-Parallel-RicianNet    0.9995  0.9982  0.9942  0.9883  0.9859


Table 3: Entropy results on T1w BrainWeb images at different noise levels.

Methods                  1%      3%      5%      7%      9%
Noisy image              2.4787  2.5067  2.5266  2.5482  2.5556
NLM                      2.4721  2.4581  2.4474  2.4470  2.4324
BM3D                     2.4532  2.4476  2.4849  2.4676  2.4803
ODCT3D                   2.4516  2.4844  2.4988  2.5102  2.5033
PRI-NLM3D                2.4447  2.4415  2.4291  2.4330  2.4201
CNN-DMRI                 2.4499  2.4869  2.5086  2.5313  2.5406
RicianNet                2.4629  2.4715  2.4519  2.4511  2.4606
2D-DCRNet                2.4624  2.4701  2.4742  2.3481  2.4142
2D-Parallel-RicianNet    2.4693  2.3995  2.4073  2.4631  2.1374
3D-DCRNet                2.4556  2.4488  2.4331  2.3340  2.1787
3D-Parallel-RicianNet    2.4364  2.3469  2.3275  2.2890  2.0642

Tables 1 and 2 depict the PSNR and SSIM results, respectively. We can observe that the PSNR values of 3D-Parallel-RicianNet are obviously higher than those of the other methods at all noise levels. In Table 2, the SSIM values of 3D-Parallel-RicianNet are closer to 1, which is higher than those of the other methods under all noise levels except PRI-NLM3D at the 7% noise level. This indicates that our proposed model has good denoising performance with good anatomical structure preservation.

Table 3 shows the entropy results of the 10 methods. The proposed 3D-Parallel-RicianNet obtains the lowest entropy under all five noise levels. Hence, considering the three metrics across the three tables, our method has the best noise reduction performance. In addition to visual quality, another important aspect of an MR image denoising method is its time complexity. Running times for the different methods are given in Table 4. Clearly, 3D-DCRNet and our proposed 3D-Parallel-RicianNet are much faster than the other methods: once a deep learning-based method finishes training, forward propagation is very fast. As shown in Table 5, our method has the fewest parameters, which means that our network does not need much computational power and has competitive advantages for small datasets.


Table 4: Running times of the different methods at each noise level.

Methods                  1%     3%     5%     7%     9%     Average
NLM                      10.55  10.54  10.58  10.54  10.55  10.55
BM3D                     28.26  28.13  27.29  27.37  28.25  27.86
ODCT3D                   23.54  17.56  18.02  14.75  14.54  17.68
PRI-NLM3D                14.07  12.50  12.92  13.59  13.28  13.27
CNN-DMRI                 1.13   1.11   1.13   1.12   1.12   1.12
RicianNet                1.71   1.65   1.65   1.69   1.66   1.67
2D-DCRNet                1.27   1.32   1.34   1.33   1.29   1.31
2D-Parallel-RicianNet    1.18   1.14   1.13   1.12   1.12   1.14
3D-DCRNet                0.94   0.92   0.91   0.91   0.92   0.92
3D-Parallel-RicianNet    0.89   0.89   0.89   0.89   0.89   0.89


Table 5: Number of parameters of the deep learning models.

Method                   Number of parameters
CNN-DMRI                 1,444,929
RicianNet                5,346,114
3D-Parallel-RicianNet    395,405

Figures 7 and 8 provide a visual comparison for T1w images from testing data under 3% and 9% noise levels using 10 methods. The zoomed-in regions of the denoised images are shown to observe noticeable details. In Figure 7, all methods can achieve good performance under low-level noise circumstances. However, traditional methods suffer from obvious oversmoothing effects and distort some important details. Among deep learning methods, the images processed by CNN-DMRI, 2D-DCRNet, 2D-Parallel-RicianNet, and 3D-DCRNet have obvious Rician noise. RicianNet increases the brightness of the brain area and makes it difficult to clearly observe the anatomical structure. Figure 7 shows that the 3D-Parallel-RicianNet denoising method gives better results and preserves the key information in the image.

As the noise level increases, the traditional methods suffer from obvious oversmoothing effects, as shown in Figure 8. The CNN-DMRI and RicianNet models still leave some noise and slightly oversmooth textured regions. By using the DCR module, 2D-DCRNet and 3D-DCRNet have strong global denoising ability for the 2D slice-based and 3D patch-based cases. However, without considering local structural features, the DCRNet models lose some important local details in the denoising process. By combining the global features of DCRNet and the local features of DSCRNet, the proposed 3D-Parallel-RicianNet preserves finer detailed structures in homogeneous areas and obtains the results most consistent with the noise-free images. Hence, our 3D-Parallel-RicianNet method can better retain the key information in denoised MR images, which is useful for improving the precision of clinician diagnosis.

3.5. Clinical Results
3.5.1. Results from the IXI-Hammersmith Dataset

To validate the performance of the proposed 3D-Parallel-RicianNet, the ten denoising methods were compared on different clinical datasets.

Figures 9–11 summarize the three metrics on the IXI-Hammersmith dataset for the 10 methods under different noise levels. At a noise level of 1%, the PRI-NLM algorithm achieves denoising performance comparable to that of 3D-Parallel-RicianNet in terms of PSNR. At noise levels above 5%, the proposed model produces higher PSNRs than the competing methods. In particular, in Figure 10, we can see that the 3D-Parallel-RicianNet model consistently yields SSIMs higher than the other nine methods at all noise levels. From the perspective of entropy, our method achieves a low entropy value. These results indicate that the 3D-Parallel-RicianNet model has a strong denoising ability.

Figure 12 shows an example of denoising results using 10 methods on the IXI-Hammersmith dataset with 3% noise. It can be seen in the figure that the proposed 3D-Parallel-RicianNet model gives the best denoising results and the denoised image is virtually identical to the ground-truth image. After visual inspection, it can be deduced that the outcome of our proposed 3D-Parallel-RicianNet is improved compared to the others in terms of fine-structure retention and edges.

3.5.2. Results from the IXI-Guys Dataset

Figures 13–15 summarize the PSNR, SSIM, and entropy values of the 10 methods on the IXI-Guys dataset. We test the model trained on the IXI-Hammersmith dataset directly on the IXI-Guys dataset, which reflects the network's generalization to datasets it was not trained on. 3D-Parallel-RicianNet shows the most robust performance among the tested methods in terms of PSNR, SSIM, and entropy. In particular, our model still achieves better denoising ability than the other methods at higher noise levels.

Figure 16 shows an example of denoising results obtained with 10 methods on data from the IXI-Guys dataset at the 3% noise level. Consistent with the denoising performance on the IXI-Hammersmith dataset, the proposed 3D-Parallel-RicianNet method provided the best denoising result and removed the image noise more robustly than the other methods on the IXI-Guys dataset. Particularly in the region indicated by the red line, the 3D-Parallel-RicianNet model achieved better visual results.

3.5.3. Results from the ADNI Dataset

This subsection is devoted to verifying the consistency of the proposed approach on the ADNI dataset. Because noise-free images are unavailable, entropy is measured and used as the quantitative metric. The results are shown in Figures 17 and 18.
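With no ground truth available, the Shannon entropy of the gray-level histogram serves as a no-reference metric: less residual randomness after denoising yields a lower value. A minimal sketch (the bin count and intensity range are assumptions, not taken from the article):

```python
import numpy as np

def image_entropy(img, bins=256):
    # Shannon entropy of the gray-level histogram, in bits;
    # a lower value after denoising suggests less residual randomness
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

uniform = np.arange(256).reshape(16, 16)  # every gray level appears once
flat = np.zeros((16, 16))                 # a single gray level
print(image_entropy(uniform), image_entropy(flat))  # 8.0 0.0
```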

As shown in Figure 17, although the RicianNet and 2D-DCRNet remove noise, they suffer from obvious oversmoothing effects, and it is difficult to identify the key anatomical structures. In addition, the denoising effect is not satisfactory when using BM3D, ODCT3D, PRI-NLM3D, CNN-DMRI, 2D-Parallel-RicianNet, and 3D-DCRNet. The results of these methods still contain substantial noise and miss some of the structural details. It can be noted that 3D-Parallel-RicianNet retains the details better than other methods.

Figure 18 compares the entropy of the denoised MR images in the ADNI dataset across the different methods. 3D-Parallel-RicianNet achieves the lowest entropy value. Combined with Figure 17, this shows that our method not only effectively removes noise but also preserves more of the useful key information in the images. Hence, our 3D-Parallel-RicianNet method has strong generalization ability and robustness. These experimental results once again demonstrate the advantages of the proposed model.

3.5.4. Denoising of Real Abdominal MR Data

In this subsection, we denoise abdominal MR images with the proposed network and compare it against three other denoising methods; the experimental results are shown in Table 6.


Method   BM3D      CNN-DMRI   RicianNet   3D-Parallel-RicianNet
PSNR     31.9167   32.0655    35.2577     39.7090
SSIM     0.9862    0.9867     0.9258      0.9941

Table 6 shows that the PSNR of our method reaches 39.7090, higher than those of BM3D, CNN-DMRI, and RicianNet. In terms of SSIM, RicianNet scores lower than BM3D and CNN-DMRI, indicating that although RicianNet removes noise, it fails to retain the structural information of the image. Our method still obtains the highest SSIM value. The denoising results of the four methods are shown in Figure 19: our method not only removes noise but also completely preserves the key anatomical position information in the image.

3.6. Comparisons of the Results with Different Spatial Resolutions

The image resolution affects image quality; generally, the lower the resolution, the more a model's denoising ability degrades. In this section, we use the BrainWeb dataset to verify the denoising effect on images of different resolutions at a noise level of 3%. The results are shown in Table 7.


Spatial resolution   BM3D (PSNR / SSIM)   CNN-DMRI (PSNR / SSIM)   RicianNet (PSNR / SSIM)   3D-Parallel-RicianNet (PSNR / SSIM)

–   31.8908 / 0.9709   34.7330 / 0.9890   35.7466 / 0.9233   41.5169 / 0.9970
–   32.1794 / 0.9718   35.2350 / 0.9903   36.4030 / 0.9117   41.9756 / 0.9973
–   33.1937 / 0.9725   35.4720 / 0.9891   37.0095 / 0.8906   43.9950 / 0.9982
–   31.7986 / 0.9677   35.6006 / 0.9917   36.6172 / 0.9078   42.2193 / 0.9976
–   31.3591 / 0.9633   34.0402 / 0.9917   36.0542 / 0.9070   41.9259 / 0.9974
–   29.8656 / 0.9481   35.1568 / 0.9918   31.4012 / 0.9180   40.3297 / 0.9961
–   28.2362 / 0.9230   34.5695 / 0.9900   23.4731 / 0.9038   38.4990 / 0.9933

From Table 7, it can be observed that the proposed 3D-Parallel-RicianNet outperforms the other tested methods across the different spatial resolutions. BM3D and RicianNet struggle to remove noise at low spatial resolutions. Noise cleaning appears to have a consistent effect when spatial resolutions are relatively close, although some loss of contrast and spatial resolution is possible; once the difference between resolutions becomes larger, the denoising effect also changes significantly. In addition, the PSNRs of the deep learning methods decrease significantly with decreasing spatial resolution while their SSIM values remain relatively close, indicating that the deep learning methods recover most of the complex anatomical structures. Compared to the other methods, our model has a more balanced denoising ability across spatial resolutions, with a mean PSNR of 41.6373. Since the proposed 3D-Parallel-RicianNet extracts both global and local features from the noisy image to restore the clean image, it maintains its denoising ability even at low spatial resolution.
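The noise levels used throughout these experiments refer to Rician degradation, in which Gaussian noise corrupts the real and imaginary channels of the MR signal before the magnitude is taken. A minimal simulation sketch; defining sigma as a percentage of the maximum intensity is an assumption consistent with the stated noise levels, not a detail quoted from this section:

```python
import numpy as np

def add_rician_noise(img, level=0.03, max_val=255.0, rng=None):
    # Rician noise: the magnitude of a complex signal whose real and
    # imaginary parts carry i.i.d. Gaussian noise of std sigma,
    # with sigma = level * max_val (e.g. level=0.03 for "3% noise")
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = level * max_val
    real = img + rng.normal(0.0, sigma, img.shape)
    imag = rng.normal(0.0, sigma, img.shape)
    return np.sqrt(real ** 2 + imag ** 2)

clean = np.full((8, 8), 100.0)
noisy = add_rician_noise(clean, level=0.03)
```

Note that, unlike additive Gaussian noise, the Rician magnitude is always nonnegative and biased upward in dark regions, which is what makes MR denoising harder than the Gaussian case.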

3.7. Comparisons of the Results with Different Brain Tissues

MR imaging makes key brain tissues such as gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) visible. These three tissues help visualize brain structures and guide surgery, but noise can affect the interpretation of brain tissue [58]. To evaluate the denoising effectiveness of 3D-Parallel-RicianNet on different brain tissues, the state-of-the-art methods BM3D, CNN-DMRI, and RicianNet are compared in Table 8. The proposed model achieves better PSNR and SSIM results than the competing methods in all three tissues. In particular, for CSF, the PSNR of 3D-Parallel-RicianNet reaches 51.9105. The denoising results of the four methods on the different brain tissues are shown in Figure 20. These experimental results once again demonstrate the advantages of the proposed model.


Method                  CSF (PSNR / SSIM)   GM (PSNR / SSIM)    WM (PSNR / SSIM)

BM3D                    27.0765 / 0.9213    19.1709 / 0.9114    18.2741 / 0.9350
CNN-DMRI                45.8322 / 0.9981    39.1790 / 0.9976    38.7420 / 0.9973
RicianNet               44.8454 / 0.9982    41.9853 / 0.9988    43.6082 / 0.9986
3D-Parallel-RicianNet   51.9105 / 0.9996    47.5288 / 0.9996    48.6238 / 0.9998

3.8. Variants of the Setting in the DCR Module

In our model, the DCR module with different dilation rates is a key component. Table 9 records the PSNR and SSIM for different dilation-rate settings at the 3% noise level on the BrainWeb dataset. We conducted three experiments, each using the same dilation rate for all 18 DCR modules; the resulting receptive fields are 37, 73, and 109. Combining Tables 1 and 2 and Table 8, we find that our hybrid dilation-rate scheme reaches the highest PSNR and SSIM. With a fixed dilation rate of 2 or 3, the receptive field is already larger than necessary, wasting computation. In addition, using the same dilation rate throughout easily causes gridding effects: the feature maps extracted in this way lack correlation, and an accurate prediction cannot be obtained. Therefore, our hybrid scheme achieves superior denoising performance.
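The receptive fields quoted above (37, 73, and 109) are consistent with 18 stacked 3x3 dilated convolutions at a fixed dilation rate, one per DCR module; a minimal sketch under that assumption (the exact per-module layout is defined earlier in the paper):

```python
def receptive_field(dilation, modules=18, kernel=3):
    # each dilated k x k convolution enlarges the receptive field by
    # (kernel - 1) * dilation pixels; stacking `modules` of them on
    # a single starting pixel gives:
    return 1 + modules * (kernel - 1) * dilation

print([receptive_field(d) for d in (1, 2, 3)])  # [37, 73, 109]
```

This also illustrates the gridding argument: at a fixed dilation rate greater than 1, consecutive layers repeatedly sample the same sparse grid of positions, whereas a hybrid schedule of rates fills in the gaps.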



Dilation rate   1 (RF 37)   2 (RF 73)   3 (RF 109)
PSNR            41.0253     40.4788     39.3627
SSIM            0.9956      0.9661      0.9946

4. Discussion and Conclusions

In this work, we propose a parallel denoising residual network based on cascaded DCR and DSCR modules to address the random noise in MR images. The global and local features are extracted by the designed DCRNet and DSCRNet, and then these features are fused together. Hence, global and local information is captured to drive the denoising progress of brain MR images by supervised network learning.

The PSNR, SSIM, and entropy are calculated to compare the proposed method with many existing methods, and the denoising effect of the proposed method is verified on the BrainWeb simulation data under different noise levels. To test the practicability of the proposed network, experiments on real clinical MR images show that the proposed method is superior to other methods for the IXI-Hammersmith, IXI-Guys, ADNI, and CHAOS datasets.

One limitation of this work is that, although structural information can be retained at high noise levels, a small amount of local noise remains, as shown in Figure 8. In future work, we will continue to seek a balance between noise removal and structure preservation at different noise levels. Another critical limitation of our method is its requirement for high-quality noise-free ground-truth images, which are difficult to obtain in real applications. Incorporating prior knowledge about organ shape and location is key to improving the performance of image analysis approaches, yet in most recently developed medical image analysis techniques, it is not obvious how to do so [59]. Oktay et al. incorporated anatomical prior knowledge into a deep learning method through a new regularization model and showed that the approach can be easily adapted to different medical image analysis tasks (e.g., image enhancement and segmentation) [59]. Furthermore, in [60], the authors used morphological component analysis (MCA) to decompose noisy images into cartoon, texture, and residual parts, the last of which were considered noise components. Therefore, to circumvent these limitations, we will in the future validate our method on multimodality images and incorporate other meaningful priors, such as residual parts and organ shape and location, to support semisupervised denoising.

In conclusion, the results obtained in this paper are encouraging and efficiently demonstrate the potential of our 3D-Parallel-RicianNet method for MR image denoising. This method can not only effectively remove noise in MR images but also preserve enough detailed structural information, which can help to provide high-quality MR images for clinical diagnosis.

Data Availability

The BrainWeb, IXI, ADNI, and CHAOS datasets are publicly available (BrainWeb, https://brainweb.bic.mni.mcgill.ca/brainweb/; IXI, http://brain-development.org/ixi-dataset/; ADNI, http://adni.loni.usc.edu/; and CHAOS, https://chaos.grand-challenge.org/).

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). This research was supported in part by the National Natural Science Foundation of China under Grant 61771230, the Shandong Provincial Natural Science Foundation under Grant ZR2016FM40, the Shandong Provincial Jinan Science and Technology Project under Grant (201816082 and 201817001), and the Youth Program of Shandong Provincial Natural Science Foundation under Grant ZR2020QF011.

References

  1. Z. Zhang, D. Xia, X. Han et al., “Impact of image constraints and object structures on optimization-based reconstruction,” in Proceedings of the 4th International Conference on Image Formation in X-Ray Computed Tomography, pp. 487–490, Bamberg, Germany, January 2016. View at: Google Scholar
  2. J. Mohan, V. Krishnaveni, and Y. Guo, “A survey on the magnetic resonance image denoising methods,” Biomedical Signal Processing and Control, vol. 9, pp. 56–69, 2014. View at: Publisher Site | Google Scholar
  3. J. D. Kumar and V. Mohan, “Edge detection in the medical MR brain image based on fuzzy logic technique,” in Proceedings of the International Conference on Information Communication and Embedded Systems (ICICES2014), pp. 1–9, Chennai, India, February 2014. View at: Publisher Site | Google Scholar
  4. K. Gupta and S. K. Gupta, “Image denoising techniques: a review paper,” International Journal of Innovative Technology & Exploring Engineering, vol. 2, no. 4, pp. 6–9, 2013. View at: Google Scholar
  5. H. M. Ali, “MRI medical image denoising by fundamental filters,” in High-Resolution Neuroimaging-Basic Physical Principles and Clinical Applications, A. M. Halefoğlu, Ed., pp. 111–124, InTech, Zagreb, Croatia, 2018. View at: Google Scholar
  6. K. N. Chaudhury and S. D. Dabhade, “Fast and provably accurate bilateral filtering,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2519–2528, 2016. View at: Publisher Site | Google Scholar
  7. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1-4, pp. 259–268, 1992. View at: Publisher Site | Google Scholar
  8. X. Yang and B. Fei, “A wavelet multi-scale denoising algorithm for magnetic resonance (MR) images,” Measurement Science and Technology, vol. 22, no. 2, Article ID 025803, 2011. View at: Publisher Site | Google Scholar
  9. A. Phophalia, A. Rajwade, and S. K. Mitra, “Rough set based image denoising for brain MR images,” Signal Processing, vol. 103, pp. 24–35, 2014. View at: Publisher Site | Google Scholar
  10. S. P. Awate and R. T. Whitaker, “Feature-preserving MRI denoising: a nonparametric empirical Bayes approach,” IEEE Transactions on Medical Imaging, vol. 26, no. 9, pp. 1242–1255, 2007. View at: Publisher Site | Google Scholar
  11. S. Satheesh and K. V. S. V. R. Prasad, “Medical image denoising using adaptive threshold based on contourlet transform,” 2011, https://arxiv.org/abs/1103.4907. View at: Google Scholar
  12. X. Zhang, Z. Xu, N. Jia et al., “Denoising of 3D magnetic resonance images by using higher-order singular value decomposition,” Medical Image Analysis, vol. 19, no. 1, pp. 75–86, 2015. View at: Publisher Site | Google Scholar
  13. N. Leal, E. Zurek, and E. Leal, “Non-local SVD denoising of MRI based on sparse representations,” Sensors, vol. 20, no. 5, p. 1536, 2020. View at: Publisher Site | Google Scholar
  14. H. V. Bhujle and B. H. Vadavadagi, “NLM based magnetic resonance image denoising - a review,” Biomedical Signal Processing and Control, vol. 47, pp. 252–261, 2019. View at: Publisher Site | Google Scholar
  15. J. Hu, Y. Pu, X. Wu, Y. Zhang, and J. Zhou, “Improved DCT-based nonlocal means filter for MR images denoising,” Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 232685, 14 pages, 2012. View at: Publisher Site | Google Scholar
  16. X. Zhang, G. Hou, J. Ma et al., “Denoising MR images using non-local means filter with combined patch and pixel similarity,” PLoS One, vol. 9, no. 6, Article ID e100240, 2014. View at: Publisher Site | Google Scholar
  17. A. Gautam and M. M. Mathur, “Implementation of NLM and PNLM for de-noising of MRI images,” International Journal of Computer Science and Mobile Computing, vol. 8, no. 11, pp. 31–37, 2019. View at: Google Scholar
  18. B. Kanoun, M. Ambrosanio, F. Baselice, G. Ferraioli, V. Pascazio, and L. Gomez, “Anisotropic weighted KS-NLM filter for noise reduction in MRI,” IEEE Access, vol. 8, pp. 184866–184884, 2020. View at: Publisher Site | Google Scholar
  19. K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: a deep autoencoder approach to natural low-light image enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017. View at: Publisher Site | Google Scholar
  20. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017. View at: Publisher Site | Google Scholar
  21. V. Cherukuri, T. Guo, S. J. Schiff et al., “Deep MR brain image super-resolution using spatio-structural priors,” IEEE Transactions on Image Processing, vol. 29, pp. 1368–1383, 2020. View at: Publisher Site | Google Scholar
  22. J. V. Manjón and P. Coupé, “MRI denoising using deep learning and nonlocal averaging,” 2019, https://arxiv.org/abs/1911.04798. View at: Google Scholar
  23. C. Liu, X. Wu, X. Yu et al., “Fusing multiscale information in convolution network for MR image super-resolution reconstruction,” Biomedical Engineering Online, vol. 17, no. 1, p. 114, 2018. View at: Publisher Site | Google Scholar
  24. C. H. Pham, A. Ducournau, R. Fablet et al., “Brain MRI super-resolution using deep 3D convolutional networks,” in Proceedings of the IEEE International Symposium on Biomedical Imaging IEEE, pp. 197–200, Melbourne, Australia, April 2017. View at: Publisher Site | Google Scholar
  25. D. Jiang, W. Dou, L. Vosters, X. Xu, Y. Sun, and T. Tan, “Denoising of 3D magnetic resonance images with multi-channel residual learning of convolutional neural network,” Japanese Journal of Radiology, vol. 36, no. 9, pp. 566–574, 2018. View at: Publisher Site | Google Scholar
  26. M. Ran, J. Hu, Y. Chen et al., “Denoising of 3D magnetic resonance images using a residual encoder-decoder Wasserstein generative adversarial network,” Medical Image Analysis, vol. 55, pp. 165–180, 2019. View at: Publisher Site | Google Scholar
  27. D. Hong, C. Huang, C. Yang et al., “FFA-DMRI: A network based on feature fusion and attention mechanism for brain MRI denoising,” Frontiers in Neuroscience, vol. 14, Article ID 577937, 2020. View at: Publisher Site | Google Scholar
  28. P. C. Tripathi and S. Bag, “CNN-DMRI: a convolutional neural network for denoising of magnetic resonance images,” Pattern Recognition Letters, vol. 135, pp. 57–63, 2020. View at: Publisher Site | Google Scholar
  29. S. Li, J. Zhou, D. Liang, and Q. Liu, “MRI denoising using progressively distribution-based neural network,” Magnetic Resonance Imaging, vol. 71, pp. 55–68, 2020. View at: Publisher Site | Google Scholar
  30. S. Gregory, Y. Gan, H. Cheng et al., “HydraNet: a multi-branch convolutional neural network architecture for MRI denoising,” Medical Imaging 2021: Image Processing, vol. 11596, 2021. View at: Google Scholar
  31. H. Aetesam and S. K. Maji, “Noise dependent training for deep parallel ensemble denoising in magnetic resonance images,” Biomedical Signal Processing and Control, vol. 66, Article ID 102405, 2021. View at: Publisher Site | Google Scholar
  32. J. Yang, J. Fan, D. Ai, S. Zhou, S. Tang, and Y. Wang, “Brain MR image denoising for Rician noise using pre-smooth non-local means filter,” Biomedical Engineering Online, vol. 14, no. 1, p. 2, 2015. View at: Publisher Site | Google Scholar
  33. L. He and I. R. Greenshields, “A non-local maximum likelihood estimation method for Rician noise reduction in MR images,” IEEE Transactions on Medical Imaging, vol. 28, no. 2, pp. 165–172, 2008. View at: Google Scholar
  34. T. Kalaiselvi and N. Kalaichelvi, “Investigation on image denoising techniques of magnetic resonance images,” International Journal of Computer Sciences and Engineering, vol. 6, no. 4, pp. 104–111, 2018. View at: Google Scholar
  35. S. Aja-Fernandez, C. Alberola-Lopez, and C.-F. Westin, “Noise and signal estimation in magnitude MRI and rician distributed images: a LMMSE approach,” IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1383–1398, 2008. View at: Publisher Site | Google Scholar
  36. H. Gudbjartsson and S. Patz, “The Rician distribution of noisy MRI data,” Magnetic Resonance in Medicine, vol. 34, no. 6, pp. 910–914, 1995. View at: Publisher Site | Google Scholar
  37. R. W. Liu, L. Shi, W. Huang et al., “Generalized total variation-based MRI Rician denoising model with spatially adaptive regularization parameters,” Magnetic Resonance Imaging, vol. 32, no. 6, pp. 702–720, 2014. View at: Publisher Site | Google Scholar
  38. M. Xu, J. Alirezaie, and P. Babyn, “Low-dose CT denoising with dilated residual network,” in Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5117–5120, Honolulu, HI, USA, July 2018. View at: Publisher Site | Google Scholar
  39. F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” 2015, https://arxiv.org/abs/1511.07122. View at: Google Scholar
  40. Y. Peng, L. Zhang, S. Liu, X. Wu, Y. Zhang, and X. Wang, “Dilated residual networks with symmetric skip connection for image denoising,” Neurocomputing, vol. 345, pp. 67–76, 2019. View at: Publisher Site | Google Scholar
  41. X. Zhang, W. Yang, Y. Hu et al., “DMCNN: Dual-domain multi-scale convolutional neural network for compression artifacts removal,” in Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 390–394, Athens, Greece, October 2018. View at: Publisher Site | Google Scholar
  42. F. Chollet, “Xception: deep learning with depthwise separable convolutions,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800–1807, Honolulu, HI, USA, July 2017. View at: Publisher Site | Google Scholar
  43. A. G. Howard, M. Zhu, B. Chen et al., “Mobilenets: efficient convolutional neural networks for mobile vision applications,” 2017, https://arxiv.org/abs/1704.04861. View at: Google Scholar
  44. M. Sandler, A. Howard, M. Zhu et al., “Mobilenetv2: inverted residuals and linear bottlenecks,” in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, Salt Lake City, UT, USA, June 2018. View at: Publisher Site | Google Scholar
  45. S. Angizi, Z. He, A. S. Rakin et al., “CMP-PIM: an energy-efficient comparator-based processing-in-memory neural network accelerator,” in Proceedings of the 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), pp. 1–6, San Francisco, CA, USA, June 2018. View at: Publisher Site | Google Scholar
  46. R. Imamura, T. Itasaka, and M. Okuda, “Zero-Shot Hyperspectral Image Denoising with Separable Image Prior,” in Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 1416–1420, Seoul, South Korea, October 2019. View at: Publisher Site | Google Scholar
  47. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018. View at: Publisher Site | Google Scholar
  48. C. A. Cocosco, V. Kollokian, R. K. S. Kwan et al., “Brainweb: online interface to a 3D MRI simulated brain database,” NeuroImage, vol. 5, p. 425, 1997. View at: Google Scholar
  49. R. K.-S. Kwan, A. C. Evans, and G. B. Pike, “An extensible MRI simulator for post-processing evaluation,” in Lecture Notes in Computer Science, pp. 135–140, Springer, Berlin, Germany, 1996. View at: Publisher Site | Google Scholar
  50. C. D. Constantinides, E. Atalar, and E. R. McVeigh, “Signal-to-noise measurements in magnitude images from NMR phased arrays,” Magnetic Resonance in Medicine, vol. 38, no. 5, pp. 852–857, 1997. View at: Publisher Site | Google Scholar
  51. A. Hammers, C.-H. Chen, L. Lemieux et al., “Statistical neuroanatomy of the human inferior frontal gyrus and probabilistic atlas in a standard stereotaxic space,” Human Brain Mapping, vol. 28, no. 1, pp. 34–48, 2007. View at: Publisher Site | Google Scholar
  52. J. V. Allom and P. Coupé, “volBrain: an online MRI brain volumetry system,” Frontiers in Neuroinformatics, vol. 10, no. 54, pp. 1–14, 2016. View at: Publisher Site | Google Scholar
  53. C. R. Jack Jr., Alzheimer’s disease neuroimaging initiative dataset, http://adni.loni.usc.edu/.
  54. A. Foi, “Noise estimation and removal in MR imaging: the variance-stabilization approach,” in Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1809–1814, Chicago, IL, USA, March 2011. View at: Publisher Site | Google Scholar
  55. A. E. Kavur, N. S. Gezer, M. Barış et al., “CHAOS Challenge - combined (CT-MR) healthy abdominal organ segmentation,” Medical Image Analysis, vol. 69, Article ID 101950, 2021. View at: Publisher Site | Google Scholar
  56. A. E. Aslan, M. A. Selver, O. Dicle, M. Barış, and N. S. Gezer, “CHAOS—Combined (CT-MR) Healthy Abdominal Organ Segmentation Challenge Data,” 2019.
  57. J. V. Manjón, P. Coupé, A. Buades et al., “New methods for MRI denoising based on sparseness and self-similarity,” Medical Image Analysis, vol. 16, no. 1, pp. 18–27, 2012. View at: Publisher Site | Google Scholar
  58. S. Louis Collins, K. R. L. Reddy, and D. S. Rao, “Denoising and segmentation of MR images using fourth order non-linear adaptive PDE and new convergent clustering,” International Journal of Imaging Systems and Technology, vol. 29, no. 3, pp. 195–209, 2019. View at: Publisher Site | Google Scholar
  59. O. Oktay, E. Ferrante, K. Kamnitsas et al., “Anatomically constrained neural networks (ACNNs): application to cardiac image enhancement and segmentation,” IEEE Transactions on Medical Imaging, vol. 37, no. 2, pp. 384–395, 2018. View at: Publisher Site | Google Scholar
  60. Y. Heinrich, B. Zhang, W. Zhao et al., “Magnetic resonance image denoising algorithm based on cartoon, texture, and residual parts,” Computational and Mathematical Methods in Medicine, vol. 2020, Article ID 1405647, 10 pages, 2020. View at: Publisher Site | Google Scholar

Copyright © 2021 Liang Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
