Mathematical Problems in Engineering / 2021

Special Issue: Advanced Intelligent Fuzzy Systems Modeling Technologies for Smart Cities

Research Article | Open Access

Volume 2021 | Article ID 6681202 | https://doi.org/10.1155/2021/6681202

Caiwei Liu, Guohua Zhao, Jiale Dong, Yusong Lin, Meiyun Wang, "MIE-NSCT: Adaptive MRI Enhancement Based on Nonsubsampled Contourlet Transform", Mathematical Problems in Engineering, vol. 2021, Article ID 6681202, 12 pages, 2021. https://doi.org/10.1155/2021/6681202

MIE-NSCT: Adaptive MRI Enhancement Based on Nonsubsampled Contourlet Transform

Academic Editor: Yi-Zhang Jiang
Received: 15 Dec 2020
Revised: 06 Jan 2021
Accepted: 15 Jan 2021
Published: 31 Jan 2021

Abstract

Image enhancement technology is often used to improve the quality of medical images and helps doctors or expert systems identify and diagnose diseases. Aiming at the complex, hard-to-enhance details characteristic of magnetic resonance imaging (MRI), this paper proposes a nonsubsampled contourlet transform- (NSCT-) based enhancement algorithm called MIE-NSCT. NSCT was used for MRI sub-band decomposition. For the high-pass sub-bands, four fuzzy rules were proposed to enhance multiscale, multidirectional edge and contour details from the eight adjacent directions, whilst for the low-pass sub-band, a new adaptive histogram enhancement algorithm was proposed. Together, these address noise amplification and the loss of detail during enhancement. The algorithm was verified on the public dataset BraTS2017 and compared with other advanced methods. Experimental results showed that MIE-NSCT has clear advantages in improving medical image quality, and the enhanced images in turn improved tumour-grading performance. MIE-NSCT is suitable for integration into an interactive expert system to support the visualization of disease diagnosis.

1. Introduction

Medical images can visually and noninvasively describe the structure of the human body. With the continuous development of medical imaging technology, disease detection and diagnosis rely increasingly on the information provided by medical images [1]. Experienced radiologists and clinicians can obtain useful information, such as the shape and texture of tissues and organs, from medical images to diagnose and identify complex diseases. Common medical imaging modalities include computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound. MRI can provide a wealth of physiological tissue information because of its great advantages in soft-tissue imaging. However, the original MRI is usually affected by factors such as equipment and acquisition conditions during imaging, resulting in degraded image quality [2]. Noise, artifacts, and low contrast are the main problems of MRI. Low-quality medical images can affect both image postprocessing and diagnosis by doctors [3].

Medical image enhancement has become one of the key technologies underpinning computer-aided diagnosis systems [4]. It involves modifying images to reveal features that would otherwise be missed [5]. By enhancing medical images computationally, the edge and detail characteristics of the region of interest (ROI) can be highlighted [6], the contrast between the ROI and the background can be improved, and a foundation can be laid for medical image recognition, feature extraction, and organ segmentation [7]. According to the domain of the processing object, image enhancement techniques fall into two main categories: spatial domain methods and spatial frequency domain methods [8].

The spatial domain enhancement method operates on the grey values of the image, and its core lies in the choice of the mapping (transformation) function. This approach is effective at improving contrast; however, it may over-enhance when the histogram is multipeaked. Because of the inherently fuzzy properties of medical images, many scholars have applied fuzzy theory to medical image enhancement. The enhanced image can achieve a good sharpening effect, but the enhancement also amplifies noise [9]. With the development of multiresolution techniques, many spatial frequency domain methods have appeared, including the wavelet, curvelet, and contourlet transforms. The wavelet transform can decompose the image signal at different scales, but the image is decomposed onto a fixed wavelet basis, so the high- and low-frequency information cannot be separated according to the nature of the image, and directional information is insufficient [10]. The curvelet transform cannot achieve the optimal nonlinear approximation of higher-order regular singular edges, and scratch-like artifacts appear along edges after processing [11]. To capture directional information fully, Wang et al. proposed the contourlet transform for medical image enhancement. However, because of the down-sampling in the Laplacian pyramid and the directional filter bank, the contourlet transform lacks translation invariance; this shortcoming can produce the pseudo-Gibbs phenomenon around singularities, which degrades local information and weakens directional selectivity [12]. Furthermore, Kollem et al. proposed the nonsubsampled contourlet transform for image enhancement; its translation invariance ensures that each pixel of a transformed sub-band corresponds spatially to a pixel of the original image, while suppressing the introduction of new noise [13]. However, the transform itself cannot distinguish noise points and amplifies noise during enhancement, blurring the edges of the enhanced image; it is therefore unsuitable for noisy images [14]. The spatial frequency domain enhancement method enhances images indirectly by adjusting transform coefficients in the image transform domain. It has unique advantages in separating and suppressing noise, but its contrast improvement is poor. Tissue edges in MRI are blurred and the contrast is low, so MRI enhancement needs to improve contrast whilst suppressing noise and preserving image detail. Neither class of methods alone meets these needs.

In view of the blurred tissue boundaries in MRI and the loss of detail in enhanced images, a medical image enhancement method based on the nonsubsampled contourlet transform (NSCT), called MIE-NSCT, is proposed in this article. Firstly, the method exploits the superior multiresolution and local analysis properties of multiscale analysis: the MRI is decomposed by multilevel NSCT into low-pass and high-pass sub-bands. The low-pass sub-band contains the contour part of the image, whilst the high-pass sub-bands contain its detail and edge information; NSCT decomposition preserves the contour structure of the image. In the low-pass sub-band, the image is enhanced on the basis of its histogram: the grey scale is divided into segments at the valley points, and the greyscale transformation of each segment not only avoids sub-block overlap but also enhances the contrast of the different parts. In the high-pass sub-bands, fuzzy theory is used to characterise boundary, region, and texture information; it can also suppress noise to a certain extent whilst improving contrast. Secondly, fuzzy logic is used to extract and enhance the details and edges of the high-pass sub-bands. Finally, the high- and low-pass sub-bands are reconstructed with the inverse NSCT. The core contribution is to build image enhancement on a multiscale decomposition, make full use of the directional information of the image, and divide the histogram into multiple sub-histograms at the valley points to reduce the loss of large-neighbourhood detail. The combination of these steps preserves more detail of the original image, thereby improving the quality of the enhanced image, and addresses the loss of detail that MRI enhancement readily produces. MIE-NSCT not only enhances the detail characteristics of the image but also retains its structural information. Thus, it is a multiscale medical image enhancement method that combines global and local information of the MRI. MIE-NSCT can be used clinically as a preprocessing step for expert systems.

Medical image enhancement has always been a hotspot in clinical medicine and biomedical engineering. Medical image enhancement algorithms mainly fall into three categories: spatial domain, spatial frequency domain, and deep learning based.

(1) Medical image enhancement based on the spatial domain: Histogram equalization (HE) is the most typical spatial domain enhancement method [15]; it can quickly and intuitively improve image contrast. According to the region being enhanced, HE can be divided into global HE (GHE) [16–19] and local HE (LHE) [20–24]. GHE has low computational complexity and good brightness retention. However, the single mapping function ignores the local characteristics of the image, which may over-enhance some areas and limit the enhancement effect; in particular, the enhanced image may have blurred edges and amplified noise. LHE divides the image into blocks and assigns a different mapping function to each sub-block, which improves the contrast of local areas and avoids over-enhancement. LHE algorithms of the type represented by CLAHE introduce a clip threshold that limits contrast to suppress the edge blur caused by noise amplification. However, images enhanced by LHE inevitably exhibit blocking effects, and continuity across block boundaries cannot be guaranteed. Subramani and Veluchamy [25] proposed an adaptive fuzzy grey-level difference histogram equalization algorithm: a binary similarity measure first computes the grey-level differences of the input image, these differences are then fuzzified to handle uncertainty, a clip limit on the fuzzy differences controls cases where contrast enhancement is weak, and finally the clipped fuzzy histogram is equalized to obtain a contrast-enhanced medical image; this, however, inevitably amplifies noise. Zhao et al. [26] proposed a medical image enhancement method based on luminance modulation and gradient modulation (LM&GM). The algorithm adjusts brightness and improves contrast by reducing the global dynamic range of the input image; on this basis, GM enhances the details and texture of the image, but the influence of noise cannot be avoided.

(2) Medical image enhancement based on the spatial frequency domain: Using multiple domain transformations to enhance images is flexible. Li et al. [27] proposed an image enhancement algorithm based on the dual-tree complex wavelet transform (DTCWT) and morphology. DTCWT decomposes the image into low-pass and high-pass parts; the high-pass images are decomposed over multiple directions and scales, with an adaptively selected optimal threshold for denoising, and a top-hat operator enhances the edge details of the low-pass images. This method considerably improves image quality both globally and locally; however, because the low-frequency sub-band is not otherwise enhanced, the contour part of the enhanced image may become more blurred. Addressing the fuzzy characteristics peculiar to medical images, Deng et al. [28] proposed an enhancement method based on intuitionistic fuzzy sets: a global threshold segments the image, a membership function exaggerates and blurs it, filtered images are obtained through normalization, and finally the original and filtered images are fused into an enhanced image. This kind of method, however, depends on the membership function and fuzzy operator; if these are chosen poorly, the enhancement effect suffers heavily. Wadhwa and Bhardwaj [29] used the Grünwald-Letnikov (G-L) fractional derivative to define two masks of different scales that preserve the correlation of adjacent pixels. The input image is divided by gradient into edge, texture, and smooth regions; the order of the fractional derivative is selected per region, and the input image is masked to obtain an enhanced image. This method enhances edges and textures while preserving smooth areas, but the computation is heavy, weak edges are missed, and the enhanced image loses part of the true information.

(3) Medical image enhancement based on deep learning: These algorithms use a network to learn a mapping rule through which the original images are enhanced. Chen et al. [30] proposed a high-resolution reconstruction method, the feedback adaptively weighted dense network (FAWDN): a feedback mechanism feeds low-level features into the hidden unit of FAWDN, and adaptively weighted dense blocks select convolutional-layer features; at present, this method applies only to 2D images. Jung et al. [31] proposed a new PVS enhancement algorithm that uses a deep dense network with skip connections and needs no parameter tuning; it effectively alleviates the vanishing-gradient problem caused by network depth. Although deep learning performs very well in image enhancement, training is complex, demands large data sets, and is time-consuming. Zhu et al. [32] proposed an MRI enhancement method based on visual attention, which enhances the image through contrast adjustment and illumination-component preservation. The proposed framework includes image generation and image fusion to overcome the limitations of a single image: assuming the MRI image is composed of tissue and details, an adaptive attenuation weight matrix is designed from the input MRI, and a light-preserving image is introduced into the model as compensation for the attenuated image.

In summary, spatial domain enhancement methods focus on improving image contrast but cannot suppress noise, and edge blur may occur. Spatial frequency domain methods focus on noise suppression, but their contrast enhancement is weak. Deep learning based enhancement works well but trains slowly and consumes more resources. In this paper, an MRI enhancement algorithm combining the spatial and frequency domains is proposed. It combines the advantages of both classes to suppress noise whilst enhancing image edges and details, and it achieves a good contrast enhancement effect (Algorithm 1) [33].

(i) Step 1. Valley segmentation: if the count of a grey level is strictly lower than the counts of both of its neighbouring grey levels, that grey level is a valley point. Calculate the positions of all segment points and divide the histogram into segments at those points.
(ii) Step 2. Transformation between segments: for each segment in turn, if the inter-segment criterion holds, apply the corresponding grey-level shift; otherwise, proceed to Step 3.
(iii) Step 3. Transformation between segments: for each remaining segment, if the criterion holds, apply the shift; otherwise, proceed to Step 4.
(iv) Step 4. Intrasegment transformation: given the length of the segment, remap each grey level within the segment; when all segments have been processed, stop.

3. Materials and Methods

MIE-NSCT performs image enhancement on top of image decomposition; it is a multiscale image enhancement method combining global and local information of the MRI. The image is first decomposed into eight high-pass sub-bands and one low-pass sub-band. For the high-pass sub-bands, the edge and detail components are enhanced using fuzzy rules. For the low-pass sub-band, enhancement uses improved adaptive histogram equalization. Finally, the image is reconstructed using the inverse NSCT. The framework of the MIE-NSCT method is shown in Figure 1.

3.1. Decomposition of MRI

Image enhancement aims to improve the overall or local characteristics of an image, highlight texture details, suppress noise, and make the image easier for human eyes to observe or recognise. The difficulty lies in suppressing image noise whilst improving contrast; a single image enhancement method cannot achieve both. Comparison shows that spatial frequency domain methods have unique advantages in separating and suppressing noise but are inferior at improving contrast and brightness, whereas spatial domain methods act on the grey levels of the image and can quickly and intuitively increase contrast and brightness but cannot suppress noise well. In this paper, spatial frequency domain and spatial domain enhancement methods are combined to improve MRI quality. NSCT [34] has good multidirectional, multiscale, and multiresolution decomposition characteristics. Because it does not down-sample the image, all decomposed sub-bands have the same size as the original, which greatly reduces edge loss; it has good translation invariance, and no pseudo-Gibbs effect is present [35]. No false contours appear during decomposition and fusion, and the edge and detail information of the image can be enhanced. NSCT is therefore well suited to transformation between the MRI spatial domain and spatial frequency domain.

NSCT is used for multiscale decomposition to obtain the high-pass and low-pass sub-bands of the MRI. In the spatial frequency domain, the MRI is successively divided into high- and low-frequency sub-bands by a nonsubsampled pyramid filter, and each high-frequency sub-band is further decomposed into multidirectional high-frequency sub-bands by the nonsubsampled directional filter bank. In this decomposition, the multidirectional detail information of the high-frequency sub-bands is obtained, and the image contour information can be represented with few nonzero coefficients. The decomposition process is shown in Figure 2.
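A full NSCT implementation is not part of standard Python imaging libraries. As an illustrative stand-in, the sketch below builds an undecimated (nonsubsampled) Laplacian-style pyramid with Gaussian filters: like NSCT, every sub-band keeps the size of the input and the sub-bands sum back to the original exactly. A real NSCT would additionally pass each high-frequency sub-band through a nonsubsampled directional filter bank; the function names here are hypothetical.

```python
import numpy as np
from scipy import ndimage

def nonsubsampled_pyramid(image, levels=2):
    """Undecimated Laplacian-style pyramid: every sub-band keeps the
    input size, mimicking the shift-invariance property of NSCT.
    (Simplified stand-in; no directional filter bank is applied.)"""
    current = image.astype(np.float64)
    highs = []
    for level in range(levels):
        # Doubling sigma per level approximates dyadic scale separation.
        low = ndimage.gaussian_filter(current, sigma=2.0 ** level)
        highs.append(current - low)   # detail (high-frequency) sub-band
        current = low
    return current, highs             # low-pass sub-band, high-pass sub-bands

def reconstruct(low, highs):
    # Perfect reconstruction: the sub-bands telescope back to the input.
    return low + sum(highs)
```

Because no down-sampling occurs, every coefficient of a sub-band is spatially aligned with a pixel of the original image, which is the property the text relies on.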

3.2. Low-Pass Sub-Band Enhancement
3.2.1. Valley Segment

The histogram of the low-pass sub-band is the statistics of the nonedge content of the image, covering all tissues after the edges have been smoothed. Figure 3(a) shows that the brain tissues mainly comprise grey matter, white matter, and cerebrospinal fluid. Because the three tissues differ in grey level and accumulated pixel count, they appear as multiple peaks and valleys in the histogram. In this paper, the trough points of the histogram are selected as threshold points for dividing the different regions, so that the contrast between different tissues can be enhanced without affecting the overall contrast. A grey level i is taken as a valley point when its count h(i) is strictly lower than the counts of its neighbouring grey levels, where h(i) denotes the number of pixels at the i-th grey level. After all threshold points have been determined from the low-pass sub-band histogram, the grey scale is divided into segment regions, with the m-th trough point serving as the boundary of the m-th segment.
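The valley-point criterion can be sketched as follows, assuming the standard definition of a trough as a grey level whose count lies strictly below both neighbours, which matches the description above; `find_valleys` is a hypothetical helper name.

```python
def find_valleys(hist):
    """Locate valley (trough) grey levels in a histogram: indices whose
    count is strictly below both neighbours. These serve as thresholds
    separating tissue classes in the low-pass sub-band histogram."""
    valleys = []
    for i in range(1, len(hist) - 1):
        if hist[i - 1] > hist[i] < hist[i + 1]:
            valleys.append(i)
    return valleys
```

In practice the histogram would typically be smoothed first so that minor counting noise does not create spurious valley points.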

3.2.2. Change between Segments

After the trough thresholds divide the histogram into segment regions, multiple histogram segments are obtained. To enhance the tissue areas corresponding to these segments and improve their contrast, the grey-level increase of each segment is calculated from two indices.
(i) Index 1. Segment length of the current segment region: the longer the segment, the more grey levels are affected.
(ii) Index 2. Pixel accumulation of the current segment region: the larger the accumulated pixel count of the segment, the more pixels (and hence the more brain tissue) it contains, and the greater its effect on the entire image.

The two indices are compared to prevent the image from being over-enhanced, and the maximum value is selected as the grey-level increase of the region, where n_i is the number of pixels in the i-th segment, N is the total number of pixels in the image, L_i is the segment length of the i-th segment, G is the total number of grey levels, K is the total number of segments, and ΔG_i is the grey level added to the i-th segment.
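One plausible instantiation of this rule is sketched below: each segment's grey-level shift is the larger of its normalised length and its normalised pixel mass, scaled across the available grey range. The exact scaling in the paper's formula is not reproduced here, so the function name and scaling constant are illustrative assumptions.

```python
def segment_grey_increase(hist, bounds, total_levels=256):
    """For each histogram segment [lo, hi), choose the grey-level shift
    as the larger of two normalised indices: relative segment length
    (Index 1) and relative pixel mass (Index 2), spread over the grey
    range divided by the number of segments. Illustrative scaling only."""
    total_pixels = sum(hist)
    shifts = []
    for lo, hi in bounds:
        length_idx = (hi - lo) / total_levels            # Index 1
        mass_idx = sum(hist[lo:hi]) / total_pixels       # Index 2
        shifts.append(int(round(max(length_idx, mass_idx)
                                * total_levels / len(bounds))))
    return shifts
```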

3.2.3. Intrasegment Transformation

Through the transformation between segments, the overall grey intensity of the tissues corresponding to different segments is enhanced, improving global contrast. However, the local contrast of the different parts must also be improved to obtain clearer images. The n segment regions delimited by the valley points are treated as n sub-histograms, each of which is enhanced in turn. For the i-th sub-histogram h_i, the grey-level increase of each grey level within it is calculated from a single index to raise the contrast at that grey level.

Index. Pixel accumulation of the current grey level: the larger the accumulated pixel count of the current grey level, the more pixels it contains and the greater its effect on the entire sub-histogram. The grey-level transformation distance of the i-th grey level is calculated accordingly, where d_i is the transformation distance within the segment, h(i) is the number of pixels at the i-th grey level of the current segment, j indexes the n segments, and L_i is the segment length of the i-th segment. The effect is shown in Figure 4.
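The intrasegment step can be read as a histogram equalisation restricted to each segment. The sketch below takes each grey level's transformation distance to grow with the cumulative pixel mass up to that level, which is one plausible reading of the index above; the helper name and exact mapping are assumptions.

```python
def intrasegment_map(hist, lo, hi):
    """Build a grey-level mapping inside one segment [lo, hi): levels are
    spread across the segment in proportion to cumulative pixel mass, so
    heavily populated levels are pushed apart (an equalisation confined
    to the segment). Hypothetical reconstruction of the paper's formula."""
    seg = hist[lo:hi]
    total = sum(seg) or 1
    length = hi - lo
    mapping = {}
    cum = 0
    for k, count in enumerate(seg):
        cum += count
        # Cumulative-mass fraction decides how far into the segment
        # this level is mapped; the segment boundaries are preserved.
        mapping[lo + k] = lo + int(round((cum / total) * (length - 1)))
    return mapping
```

Because each mapping stays within its own segment boundaries, the intersegment ordering established by the previous step is never violated.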

3.3. High-Pass Sub-Band Enhancement

The high-pass sub-bands represent the edge and detail information of the image. The edge details of MRI are often the boundary between two uniform tissues, and clear edges help distinguish adjacent tissues. A boundary can be detected by comparing the gradients of neighbouring pixels; however, using only the gradient between two neighbouring pixels to identify a uniform area is inaccurate, and artifacts may be detected as edges. This paper defines four fuzzy rules for image edges and determines whether the current point is an edge point from the gradients in the eight neighbouring directions, suppressing noise points whilst enhancing the edges. The fuzzy rules are defined in Table 1.


R1 = “if Ix is zero and Iy is zero, then Iout is white”;
R2 = “if Ix is not zero or Iy is not zero, then Iout is black”;
R3 = “if Ix is not zero or Iy is zero, then Iout is black”;
R4 = “if Ix is zero or Iy is not zero, then Iout is black”;

The Gaussian membership function is defined as μ(x) = exp(−(x − c)²/(2σ²)), where c is the mean and σ controls the width of the membership curve.

The high-pass sub-bands are normalized, and a horizontal gradient filter and its transpose are convolved with the image to obtain the gradient matrices along the horizontal and vertical directions, Ix and Iy, which serve as inputs to the fuzzy inference system. For each input, the zero-mean Gaussian membership function maps the gradient value of each point into [0, 1], that is, the degree to which the gradient belongs to zero. If the gradient value of a pixel is 0, its membership of zero is 1. σ is set to 1, whilst the mean c is set to 0. The value of σ affects edge-detection performance: if σ is too large, the fuzzy system becomes insensitive to edges, whereas too small a value makes edge detection susceptible to noise.
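Rule R1 together with the zero-mean Gaussian membership can be sketched as follows. The simple [-1, 1] difference filter and the min operator for fuzzy AND are illustrative assumptions; rules R2 through R4 then reduce to the complement of R1, so the edge ("black") degree is one minus the "white" degree.

```python
import numpy as np
from scipy import ndimage

def fuzzy_edges(img, sigma=1.0):
    """Fuzzy-logic edge map following the rules of Table 1: a pixel is
    'white' (non-edge) when both gradients are fuzzily zero, 'black'
    (edge) otherwise. Membership of 'zero' is a zero-mean Gaussian."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    img = (img - lo) / ((hi - lo) or 1.0)               # normalise to [0, 1]
    gx = ndimage.convolve1d(img, [-1.0, 1.0], axis=1)   # horizontal gradient Ix
    gy = ndimage.convolve1d(img, [-1.0, 1.0], axis=0)   # vertical gradient Iy

    def mu_zero(g):
        # Degree to which a gradient value belongs to 'zero' (c = 0).
        return np.exp(-(g ** 2) / (2.0 * sigma ** 2))

    # Rule R1: Ix is zero AND Iy is zero -> white (min as fuzzy AND).
    white = np.minimum(mu_zero(gx), mu_zero(gy))
    return 1.0 - white                                  # edge ('black') degree
```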

Fuzzy logic edge detection is performed from the eight-direction sub-bands of the high-pass sub-band. The resulting edge images are merged. The fusion rule is to select the maximum value of the same pixel position in the eight-direction edge images as the grey level of the pixel position in the enhanced images. The high-pass sub-band enhancement process is shown in Figure 5.
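The fusion rule above amounts to a pixelwise maximum over the eight directional edge images; the helper name is hypothetical.

```python
import numpy as np

def fuse_direction_edges(edge_maps):
    """Fuse directional edge images by taking, at every pixel, the
    maximum response across all directions (the fusion rule in the text)."""
    return np.maximum.reduce(list(edge_maps))
```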

4. Results and Discussion

This article used BraTS2017 as the experimental data set [1, 36, 37]. The data set contains 210 cases of high-grade glioma (HGG) and 75 cases of low-grade glioma (LGG), each comprising four sequences (T1, T1ce, T2, and FLAIR) together with annotation files. The image size is 240 × 240 × 155, and the lesion area accounts for 0.5%–3% of the whole image. From the four sequences of each patient in BraTS2017, slices containing tumour components were extracted; for each sequence, four tumour slices at the same location were selected, for a total of 4560 tumour slices. The experimental data set contained 3360 HGG slices and 1200 LGG slices. The software and hardware environment is shown in Table 2.


Relevant configuration: Parameter

Operating system: Windows 10
CPU: Intel(R) Core(TM), 3.20 GHz
Programming language: Python, Matlab
IDE: PyCharm, Matlab
Image algorithm library: OpenCV, SimpleITK, Nibabel

4.1. Image Enhancement

Three image enhancement methods, namely, CLAHE [21], DTCWT [27], and FAWDN [30], were compared with MIE-NSCT to verify the results of image enhancement. Four evaluation indicators were selected: structural similarity (SSIM) [38], absolute mean brightness error (AMBE) [39], entropy [40], and peak signal-to-noise ratio (PSNR) [41], so that structure, brightness, information richness, and distortion could be evaluated quantitatively. SSIM measures the similarity of two images from the means and variances of the input and output images. AMBE, the absolute difference between the mean grey values of the input and output images, evaluates how well a method preserves brightness. PSNR, computed from the pixelwise grey-value differences between the input and output images, is usually used to evaluate distortion relative to the original image; the higher the PSNR, the smaller the distortion.

Entropy is used to evaluate the richness of the enhanced image's information: the larger the entropy value, the richer the detail contained in the processed image.
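Reference implementations of AMBE, PSNR, and entropy following the standard definitions just described can be sketched as below (SSIM is omitted because its windowed form is more involved); these are minimal sketches for 8-bit greyscale images.

```python
import numpy as np

def ambe(orig, enh):
    """Absolute Mean Brightness Error: |mean(input) - mean(output)|."""
    return abs(float(np.mean(orig)) - float(np.mean(enh)))

def psnr(orig, enh, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, from the mean squared error."""
    mse = float(np.mean((orig.astype(np.float64) - enh.astype(np.float64)) ** 2))
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def entropy(img, levels=256):
    """Shannon entropy of the grey-level distribution, in bits."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```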

The visual comparison is shown in Figure 6, and the evaluation results are shown in Table 3. The original T1 sequence has high brightness, but its details are blurred; the other three sequences have low brightness and appear dark and blurred overall. After CLAHE enhancement, the tumour area is prominent but the contrast of the contour part is not improved; only the overall brightness increases. After DTCWT enhancement, the image contrast and contour contrast improve significantly, but brightness retention is slightly worse than that of FAWDN and MIE-NSCT; the definition of the contour structure of FAWDN is lower than that of MIE-NSCT. Compared with the original images, the average AMBE values of CLAHE, DTCWT, FAWDN, and MIE-NSCT are 39.755, 25.745, 2.487, and 3.772, respectively: CLAHE and DTCWT preserve brightness poorly. FAWDN preserves brightness slightly better than MIE-NSCT, but its enhanced images have the lowest structural similarity to the originals, 0.241 lower than that of MIE-NSCT. The average entropy values of CLAHE, DTCWT, FAWDN, and MIE-NSCT are 17.636, 23.212, 17.333, and 28.997, respectively, with small variance fluctuations across the multisequence data set. DTCWT and MIE-NSCT, which attend to contour and detail information, achieve relatively high entropy, and MIE-NSCT exceeds DTCWT by approximately 5.787, indicating better retention of image content. The four methods differ little in PSNR, but the variance of MIE-NSCT is the smallest, indicating that MIE-NSCT enhances the multisequence data set most stably.


Contrast evaluation: SSIM (mean / std dev), AMBE (mean / std dev), Entropy (mean / std dev), PSNR (mean / std dev)

CLAHE:    0.778 / 0.336, 39.755 / 2.644, 17.636 / 0.461, 25.477 / 4.783
DTCWT:    0.874 / 0.426, 25.745 / 5.935, 23.212 / 0.782, 24.376 / 5.331
FAWDN:    0.726 / 0.362,  2.487 / 0.761, 17.333 / 0.551, 23.792 / 4.463
MIE-NSCT: 0.967 / 0.161,  3.772 / 0.247, 28.997 / 0.379, 28.357 / 2.544

4.2. Classification of Tumour

In addition to visual analysis and quantitative evaluation, the raw and enhanced data were used in tumour-grading experiments to supplement the quality evaluation of the MRI enhancement and verify the usability of MIE-NSCT-enhanced images. The BraTS2017 data were divided into a training set (80%) and a test set (20%). The open-source software ITK-SNAP [42] was used to segment the tumour area, and Pyradiomics [43] was used to extract radiomic features from it. Minimum-redundancy maximum-relevance selection was used to reduce the feature dimensionality, and an SVM classifier [44] was trained on the selected features to produce a predictive model. The experiment is a binary classification problem: tumour grades were given binary labels, with HGG defined as the positive sample (labelled 1) and LGG as the negative sample (labelled 0). Five-fold cross-validation was used for training and testing. The classification results were evaluated with the following indicators to compare the effect of the original and enhanced data sets: area under the curve (AUC), accuracy (ACC), sensitivity (SEN), specificity (SPE), positive predictive value (PPV), and negative predictive value (NPV). A true positive (TP) predicts the positive class as positive; a false positive (FP) predicts the negative class as positive; a false negative (FN) predicts the positive class as negative; and a true negative (TN) predicts the negative class as negative.
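The grading pipeline can be sketched with scikit-learn as below. The random feature matrix is only a stand-in for the Pyradiomics features, and the SVM kernel, feature count, and sample count are assumptions made for the sketch.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for radiomics features: rows are tumour slices, columns are
# (hypothetical) selected features after mRMR reduction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)          # 1 = HGG (positive), 0 = LGG

# Standardise features, then fit an RBF-kernel SVM, evaluated with
# stratified five-fold cross-validation as in the experiment.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
pred = cross_val_predict(clf, X, y, cv=cv)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
```

With real radiomic features in `X`, the counts `tp, fp, fn, tn` are exactly the quantities reported in the confusion matrices of Figure 8.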

Figure 7 shows the ROC curves of the original and MIE-NSCT-enhanced images for grading. The average AUC of the original images is 0.85, and that of the enhanced images is 0.91, an increase of approximately 6%. Figure 8 depicts the confusion matrices of the original and MIE-NSCT-enhanced images: Figures 8(a) and 8(b) show the original images during training and testing, whilst Figures 8(c) and 8(d) show the MIE-NSCT-enhanced images during training and testing. For the original data, FP is 276 and FN is 168 during training, and FP is 82 and FN is 45 during testing. For the MIE-NSCT-enhanced data, FP is 167 and FN is 27 during training, and FP is 37 and FN is 20 during testing. The misclassification probability with MIE-NSCT is roughly 50% lower than with the original images. Table 4 shows the evaluation results calculated from the confusion matrices: the ACC, SEN, SPE, PPV, and NPV of the MIE-NSCT-enhanced images increased by approximately 8%, 4%, 10%, 6%, and 14%, respectively, over those of the original images. Under the same classifier, the MIE-NSCT-enhanced images were classified better than the originals, demonstrating the effectiveness of MIE-NSCT image enhancement.
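The tabulated indicators follow directly from the confusion-matrix counts; a small helper (hypothetical name) makes the definitions explicit.

```python
def grading_metrics(tp, fp, fn, tn):
    """ACC, SEN, SPE, PPV, NPV from a binary confusion matrix
    (HGG = positive class, as in the experiment)."""
    return {
        'ACC': (tp + tn) / (tp + fp + fn + tn),   # accuracy
        'SEN': tp / (tp + fn),                    # sensitivity (recall)
        'SPE': tn / (tn + fp),                    # specificity
        'PPV': tp / (tp + fp),                    # positive predictive value
        'NPV': tn / (tn + fn),                    # negative predictive value
    }
```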


Assessment indicators: Original train (%), Original test (%), MIE-NSCT train (%), MIE-NSCT test (%)

AUC: 88.75, 85.69, 90.77, 91.82
ACC: 87.83, 86.07, 94.68, 93.75
SEN: 93.75, 93.30, 98.99, 97.02
SPE: 71.25, 65.83, 82.60, 84.58
PPV: 90.13, 88.43, 94.10, 94.63
NPV: 80.28, 77.83, 96.71, 91.03

5. Conclusions

In this paper, a new technology called MIE-NSCT for MRI enhancement of high- and low-pass sub-bands was proposed. This technology has two main contributions. The first contribution is the introduction of NSCT strategy, which decomposes the MRI of the tumour into high-pass and low-pass sub-bands. It helps complete the decomposition and reconstruction of the images and further preserves the image details. The second contribution is the MIE-NSCT image enhancement algorithm. For high-pass sub-bands, four fuzzy logics were defined to accurately determine the edges in eight directions and multiscale and multidirectional transformation is achieved. For low-pass sub-bands, global and local structure adaptive histogram equalization technology was improved to divide the images into intersegment and intrasegment areas. MIE-NSCT enhances the contrast between different parts and the clarity of each part. Compared with BPDHE, AGCWD, and NPE, MIE-NSCT considerably improved the enhancement effect of the MRIs of the tumour. The evaluation results of SSIM, AMBE, Entropy, and PSNR showed that MIE-NSCT has good brightness retention and structural retention. In the tumour grading experiment, the classification performance of the enhanced data set was significantly improved. The method improves the overall brightness and contrast of the image, making the image visually effective. the image. In general, MIE-NSCT could be used to improve the visual quality of images and as a preprocessing step for image segmentation, feature extraction, and classification. This article classifies the original image into multiple subimages for processing; the complexity of multiscale and multidirectional transformation affects the real-time performance of the algorithm to a certain extent. How to reduce the complexity of this transformation is the future research direction. 
Moreover, because the types of medical images are diverse and each organizational structure has its own characteristics, the versatility of the method is difficult to guarantee. In future work, we will try to divide the image into different tissues and then enhance the contrast, so as to improve the universality of the method.

Data Availability

The data used in this paper are from the public dataset BraTS2017, which can be obtained through the following URL: https://www.med.upenn.edu/sbia/brats2017/data.html.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank the Digital Database for the Center of Biomedical Image Computing and Analytics—CBICA open-source provider. This work was supported by the National Natural Science Foundation of China under grant nos. 81720108021 and 81772009 and the Scientific and Technological Research Project of Henan Province under grant no. 182102310162.

References

  1. N. K. Batmanghelich, B. Taskar, C. Davatzikos et al., “Generative-discriminative basis learning for medical imaging,” IEEE Transactions on Medical Imaging, vol. 31, no. 1, pp. 51–59, 2011.
  2. A. Khatami, M. Babaie, A. Khosravi et al., “Parallel deep solutions for images retrieval from imbalanced medical imaging archives,” Applied Soft Computing, vol. 63, pp. 197–205, 2018.
  3. X. Guo, Y. Li, and H. Ling, “LIME: low-light image enhancement via illumination map estimation,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 982–993, 2017.
  4. G. G. E. Gielen and R. A. Rutenbar, “Computer-aided design of analog and mixed-signal integrated circuits,” Proceedings of the IEEE, vol. 88, no. 12, pp. 1825–1854, 2000.
  5. S. Park, S. Yu, B. Moon, S. Ko, and J. Paik, “Low-light image enhancement using variational optimization-based retinex model,” IEEE Transactions on Consumer Electronics, vol. 63, no. 2, pp. 178–184, 2017.
  6. A. Jog, A. Carass, S. Roy, D. L. Pham, and J. L. Prince, “Random forest regression for magnetic resonance image synthesis,” Medical Image Analysis, vol. 35, pp. 475–488, 2017.
  7. Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski, “Non-rigid dense correspondence with applications for image enhancement,” ACM Transactions on Graphics, vol. 30, no. 4, pp. 1–10, 2011.
  8. K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: a deep autoencoder approach to natural low-light image enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017.
  9. P. K. Saha, J. K. Udupa, and D. Odhner, “Scale-based fuzzy connected image segmentation: theory, algorithms, and validation,” Computer Vision and Image Understanding, vol. 77, no. 2, pp. 145–174, 2000.
  10. R. N. Strickland and H. I. Hahn, “Wavelet transforms for detecting microcalcifications in mammograms,” IEEE Transactions on Medical Imaging, vol. 15, no. 2, pp. 218–229, 1996.
  11. H. R. Shahdoosti, “Combined ripplet and total variation image denoising methods using twin support vector machines,” Multimedia Tools and Applications, vol. 77, no. 6, pp. 1–19, 2018.
  12. X. Wang, W. Chen, J. Gao, and C. Wang, “Hybrid image denoising method based on non-subsampled contourlet transform and bandelet transform,” IET Image Processing, vol. 12, no. 5, pp. 778–784, 2018.
  13. S. R. Kollem, K. Reddy, and D. S. Rao, “Improved partial differential equation-based total variation approach to non-subsampled contourlet transform for medical image denoising,” Multimedia Tools and Applications, vol. 80, pp. 2663–2689, 2021.
  14. R. Moreno and Ö. Smedby, “Gradient-based enhancement of tubular structures in medical images,” Medical Image Analysis, vol. 26, no. 1, pp. 19–29, 2015.
  15. S. H. Fang, C. H. Wang, and Y. Tsao, “Compensating for orientation mismatch in robust wi-fi localization using histogram equalization,” IEEE Transactions on Vehicular Technology, vol. 64, no. 11, 2015.
  16. C. Liu, X. Sui, X. Kuang, Y. Liu, G. Gu, and Q. Chen, “Optimized contrast enhancement for infrared images based on global and local histogram specification,” Remote Sensing, vol. 11, no. 7, p. 849, 2019.
  17. J. R. Tang and N. A. Mat Isa, “Bi-histogram equalization using modified histogram bins,” Applied Soft Computing, vol. 55, pp. 31–43, 2017.
  18. Y. Li, Y. Zhang, A. Geng et al., “Generative-discriminative basis learning for medical imaging,” Optics & Laser Technology, vol. 83, pp. 99–107, 2016.
  19. K. Liang, Y. Ma, Y. Xie, B. Zhou, and R. Wang, “A new adaptive contrast enhancement algorithm for infrared images based on double plateaus histogram equalization,” Infrared Physics & Technology, vol. 55, no. 4, pp. 309–315, 2012.
  20. S. Li, W. Jin, L. Li, and Y. Li, “An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization,” Infrared Physics & Technology, vol. 90, pp. 164–174, 2018.
  21. Y. Chang, C. Jung, P. Ke, H. Song, and J. Hwang, “Automatic contrast-limited adaptive histogram equalization with dual gamma correction,” IEEE Access, vol. 6, pp. 11782–11792, 2018.
  22. J. Huang, Y. Ma, Y. Zhang, and F. Fan, “Infrared image enhancement algorithm based on adaptive histogram segmentation,” Applied Optics, vol. 56, no. 35, pp. 9686–9697, 2017.
  23. W. Minjie, G. Guohua, Q. Weixian et al., “Infrared image enhancement using adaptive histogram partition and brightness correction,” Remote Sensing, vol. 10, no. 5, p. 682, 2018.
  24. B. Xiao, H. Tang, Y. Jiang, W. Li, G. Wang et al., “Brightness and contrast controllable image enhancement based on histogram specification,” Neurocomputing, vol. 275, pp. 2798–2809, 2018.
  25. B. Subramani and M. Veluchamy, “Fuzzy gray level difference histogram equalization for medical image enhancement,” Journal of Medical Systems, vol. 44, no. 6, 2020.
  26. C. Zhao, Z. Wang, H. Li et al., “A new approach for medical image enhancement based on luminance-level modulation and gradient modulation,” Biomedical Signal Processing and Control, vol. 48, pp. 189–196, 2019.
  27. D. Li, L. Zhang, C. Sun, T. Yin, C. Liu, and J. Yang, “Robust retinal image enhancement via dual-tree complex wavelet transform and morphology-based method,” IEEE Access, vol. 7, pp. 47303–47316, 2019.
  28. H. Deng, W. Deng, X. Sun et al., “Mammogram enhancement using intuitionistic fuzzy sets,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 8, pp. 1803–1814, 2017.
  29. A. Wadhwa and A. Bhardwaj, “Enhancement of MRI images of brain tumor using Grünwald Letnikov fractional differential mask,” Multimedia Tools and Applications, vol. 79, no. 2, pp. 1–24, 2020.
  30. L. Chen, X. Yang, G. Jeon, M. Anisetti, and K. Liu, “A trusted medical image super-resolution method based on feedback adaptive weighted dense network,” Artificial Intelligence in Medicine, vol. 106, p. 101857, 2020.
  31. E. Jung, P. Chikontwe, X. Zong, W. Lin, D. Shen, and S. H. Park, “Enhancement of perivascular spaces using densely connected deep convolutional neural network,” IEEE Access, vol. 7, pp. 18382–18391, 2019.
  32. R. Zhu, X. Li, X. Zhang et al., “MRI enhancement based on visual-attention by adaptive contrast adjustment and image fusion,” Multimedia Tools and Applications, In press.
  33. Y. Xu, Y. Qian, and F. Yang, “DC cable feature extraction based on the PD image in the non-subsampled contourlet transform domain,” IEEE Transactions on Dielectrics and Electrical Insulation, vol. 77, no. 21, pp. 33–540, 2018.
  34. G. Bhatnagar, Q. M. J. Wu, and Z. Liu, “Directive contrast based multimodal medical images fusion in NSCT domain,” IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.
  35. A. Harten, B. Engquist, S. Osher et al., “Uniformly high order accurate essentially non-oscillatory schemes, III,” Journal of Computational Physics, vol. 71, no. 1, pp. 231–303, 1987.
  36. B. H. Menze, A. Jakab, S. Bauer et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” IEEE Transactions on Medical Imaging, vol. 34, no. 10, 2015.
  37. S. Bakas, H. Akbari, A. Sotiras et al., Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge, 2017, In press.
  38. J. M. Boone, T. R. Nelson, K. K. Lindfors, and J. A. Seibert, “Dedicated breast CT: radiation dose and image quality evaluation,” Radiology, vol. 221, no. 3, p. 657, 2001.
  39. K. Bashir, T. Xiang, and S. Gong, “Gait recognition without subject cooperation,” Pattern Recognition Letters, vol. 31, no. 13, pp. 2052–2060, 2010.
  40. B. Gupta and M. Tiwari, “Minimum mean brightness error contrast enhancement of color images using adaptive gamma correction with color preserving framework,” Optik-International Journal for Light and Electron Optics, vol. 127, no. 4, pp. 1671–1676, 2015.
  41. A. C. Brooks, X. Xiaonan Zhao, and T. N. Pappas, “Structural similarity quality metrics in a coding context: exploring the space of realistic distortions,” IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1261–1273, 2008.
  42. P. A. Yushkevich and G. Gerig, “ITK-SNAP: an interactive medical image segmentation tool to meet the need for expert-guided segmentation of complex medical images,” IEEE Pulse, vol. 8, no. 4, pp. 54–57, 2017.
  43. J. J. M. Griethuysen, A. Fedorov, C. Parmar et al., “Computational radiomics system to decode the radiographic phenotype,” Cancer Research, vol. 77, no. 21, pp. 104–107, 2017.
  44. V. K. Chauhan, K. Dahiya, and A. Sharma, “Problem formulations and solvers in linear SVM: a review,” Artificial Intelligence Review, vol. 52, pp. 803–855, 2019.

Copyright © 2021 Caiwei Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

