Abstract

In this study, the pulse coupled neural network (PCNN) was modified and applied to the enhancement of blurred images. In the transform domain of the nonsubsampled shearlet transform (NSST), the PCNN was used to enhance the details of images in the low- and high-frequency subbands, and the enhanced low- and high-frequency coefficients were then passed to the inverse NSST to obtain the enhanced images. The results showed that the proposed method produces higher-quality images and suppresses noise better than traditional image enhancement strategies.

1. Introduction

Uncertainty has become one of the great challenges for information processing. Many methods and theories, including evidential reasoning and fuzzy set theory, have been proposed to deal with uncertainty [15]. Because image acquisition is affected by factors such as the imaging environment, system equipment, and transmission medium, image quality is often degraded to varying degrees by blurring, low contrast, and insufficient definition, which hampers subsequent image analysis, target extraction, classification, and identification. The decrease in image quality may also increase the uncertainty of the information conveyed by an image; enhancing image quality is therefore a process of reducing uncertainty, which is significant to image processing. To improve image quality, increase the amount of information collected, strengthen image interpretation and recognition, and fundamentally improve the practical value of images, it is important to purposefully emphasize the overall and local characteristics of the image, render unclear images clear, highlight features of interest, enlarge the differences between different objects in the image, and suppress the features people may not be interested in [6]. Spatial domain enhancement and frequency domain enhancement are currently the two most commonly used families of image enhancement methods. Operating directly on the gray values of the image, spatial domain enhancement mainly includes grayscale transformation, histogram equalization, and unsharp masking. Although such methods improve image quality to a certain extent, their handling of details such as edge texture is often unsatisfactory. Frequency domain enhancement can also be used, but the particular method chosen directly determines the final enhancement quality of the image.
Traditional image enhancement methods such as histogram equalization and median filtering perform unsatisfactorily in retaining the detail features of an image [7]. To solve these problems, an image enhancement method based on an improved nonsubsampled shearlet transform (NSST) and a pulse coupled neural network (PCNN) is proposed in this study.

NSST consists of a nonsubsampled Laplacian pyramid transform and a shear filter with translation invariance. The multiscale decomposition of NSST is similar to that of the nonsubsampled contourlet transform (NSCT) [8], but NSST overcomes the limited number of directions of the nonsubsampled directional filter bank (NSDFB) in NSCT by applying a shear filter in the directional decomposition, and the shear filter can be represented by a window function in matrix form [9]. In addition, NSST extends the wavelet transform from isotropic objects to anisotropic ones: the length-width ratio of the support can vary with scale, and the number of directions can be chosen according to the trade-off between time complexity and the required quality of image enhancement. However, like NSCT, the multiscale decomposition mechanism of NSST relies on the nonsubsampled pyramid (NSP) to separate the high- and low-frequency subbands, and the NSP has a poor ability to capture detailed information [10]. In view of this, an improved NSST (INSST) [11] was applied to the field of image enhancement in this study. The NSP in NSST was replaced by the redundant lifting nonseparable wavelet (RLNSW) [12], so that the decomposed subbands are equipped with the excellent characteristics of quincunx sampling and lifting algorithms.

Inspired by the visual cortex model, Eckhorn et al. established a feedback-type intelligent network model connected by a myriad of neurons, i.e., the PCNN, by simulating the neurological system of the cat [13]. As a highly nonlinear system that simulates the way the biological brain works, the PCNN adapts to the environment through the self-learning of neurons and the mutual learning conducted among neurons. Although the standard PCNN has been widely used, several problems remain. First, its structure is rather complicated and many parameters need to be set; these parameters can only be obtained through a large number of experiments or from substantial experience. Second, because of its bulky structure and ill-defined number of iterations, the computation of the classical PCNN is time-consuming. Third, because they do not match the human visual model, some of the attenuation models in the classical PCNN remain to be further improved. Qu et al. [14] and Kong et al. [15] replaced the original gray value with the spatial frequency and the sharpness, respectively, as the excitation input of the neurons. Wang and Ma improved the computational efficiency of the PCNN by multichannel means [16]. Yang addressed the defect of a fixed-value link coefficient and designed a self-adaptive link coefficient [17]. Liu and Ma solved the problem of the number of iterations with a time matrix [18]. Building on these studies, an improved PCNN model was explored and used in the field of image enhancement in this study. The model was designed to have better adaptability than other models, a simple structure, and concise computation.

Through the multiscale, multidirectional decomposition of INSST, the original image was decomposed into one low-frequency image and several high-frequency images. As an approximation of the original image, the low-frequency image contains its contour information, whereas the high-frequency images contain important details, e.g., edges and textures. In this study, the improved PCNN model was used to enhance the low- and high-frequency coefficients in the transform domain. The experiments show that this method can effectively extract high-energy information from low-frequency images with relatively low energy and enhance it specifically. It can also further enhance the detailed information of the high-frequency images and inject that information into the original image, thereby increasing the amount of information and the sharpness of the original image and improving its visual effect.

2. Improved NSST

2.1. Frame of NSST

As a milestone of multiscale transforms [10], the wavelet transform can effectively deal with the point singularities of one-dimensional or high-dimensional signals, but it cannot capture the line singularities of images. The curvelet transform and the contourlet transform [19] have been successively proposed to solve this problem. On the basis of multiscale transforms, these two transforms decompose the image in multiple directions, and their basis functions both have wedge-shaped or rectangular support areas, which provides a better sparse representation of the image's high-dimensional singularities. However, because of the downsampling operations in their multiscale and multidirectional decompositions, these two transforms lose translation invariance, which can cause the Gibbs phenomenon in the enhanced image [20]. The directional filtering system adopted by the more recently proposed shearlet transform (ST) is itself equipped with translation invariance. Theoretically, the directional filtering of shearlets is a natural extension to the multidimensional, multidirectional case. The shearlet system can be expressed as the following set of generating functions:

ψ_{j,l,k}(x) = |det A|^{j/2} ψ(S^l A^j x − k),  j, l ∈ ℤ, k ∈ ℤ²

The anisotropic matrix A = [4, 0; 0, 2] determines the multiscale decomposition of the image, and the shear matrix S = [1, 1; 0, 1] determines its multidirectional analysis; j represents the decomposition scale, l is the direction parameter, and k is the translation parameter. In the domain L²(ℝ²), the set of basis functions was formed by dilating, shearing, and translating a single window function ψ with good local features. These sets formed the frame {ψ_{j,l,k}}, and this process constituted an affine system. When A and S take the above values, the affine system becomes the shearlet system.

According to the work of Kong et al. [15], the basis function of ST can be expressed in the frequency domain as follows:

ψ̂^(0)(ξ) = ψ̂^(0)(ξ₁, ξ₂) = ψ̂₁(ξ₁) ψ̂₂(ξ₂/ξ₁)

For any ξ = (ξ₁, ξ₂) ∈ ℝ² with ξ₁ ≠ 0, the window functions ψ₁ and ψ₂ satisfy the following conditions:

supp ψ̂₁ ⊂ [−1/2, −1/16] ∪ [1/16, 1/2],  supp ψ̂₂ ⊂ [−1, 1]

Partitioning the frequency plane into a low-frequency square region and the horizontal and vertical cones D₀ and D₁, the shearlet systems on D₀ and D₁ are composed of the basis functions generated by ψ^(0) and ψ^(1), respectively.

As shown by the above construction, shearlets possess good local features and direction sensitivity, with the number of directions doubling at each layer. In the process of image decomposition, both the curvelet transform and the contourlet transform use the directional filter bank (DFB) proposed by Bamberger and Smith [21]. Since the DFB introduces a two-dimensional diamond filter matrix, the input image must be resampled to fit this filter, and the DFB-filtered image must be inversely synthesized before being remodeled, which may distort the enhanced image and scramble its frequency-domain coefficients. NSST overcomes exactly this shortcoming.

The multidirectional decomposition of NSST was realized using the improved shear filter, which applied a two-dimensional Fourier transform to the image on a pseudopolar grid; the image was then filtered by a one-dimensional subband filter on the grid. In the frequency domain transform of NSST, there was no need to critically sample the original image, thereby guaranteeing the translation invariance of the transform and ensuring that the image would not be distorted during the directional filtering process. In addition, the shear filter can self-adaptively collect the geometric characteristics of multidimensional data, so as to better express the details of the edges of the image. Since the support area of the shear filter is small, it reduces the probability of the Gibbs phenomenon and improves computational efficiency.

Multiscale decomposition of NSST was performed using the nonsubsampled Laplacian pyramid (NSLP) filter proposed by M. N. Do rather than the original LP filter. The NSLP decomposition was completed by iterating the following process:

f_{j+1} = L_j * f_j,  d_{j+1} = H_j * f_j

Here, f_j is the approximation image at scale j, d_{j+1} is the detail coefficient at scale j + 1, L_j represents the low-frequency filter at scale j, and H_j is the high-frequency filter at scale j (* denotes convolution). Assuming that f is an image and the number of directions is given, the NSST-based image transformation can be done as follows.

Step 1. Use the NSLP transform to decompose the image f into a low-pass image f_a and a high-pass image f_d.

Step 2. Compute the FFT of f_d on the pseudopolar grid to obtain f̂_d.

Step 3. Conduct band-pass filtering on f̂_d to obtain the frequency-domain coefficients in pseudopolar coordinates.

Step 4. Apply the inverse FFT on the pseudopolar grid to obtain the NSST coefficients in the Cartesian coordinate system.

2.2. Frame of Improved NSST

Unlike the traditional NSST model, the INSST transform used in this study integrates recent lifting wavelet theory, retains translation invariance, and replaces the NSP transform in traditional NSST with RLNSW. This is based on the following considerations: first, the NSP performs poorly in capturing details, whereas RLNSW can capture rich image detail; second, decomposition realized by the lifting algorithm not only improves computational efficiency and accuracy but also extracts image information better because it has the characteristics of quincunx sampling; third, from the perspective of the visual system, the cutoff frequency of the low-pass filter of the nonseparable wavelet, obtained by directly applying the lifting algorithm to the two-dimensional image, lies on the diagonal, which is more in line with human visual characteristics.

The horizontal and diagonal lifting of RLNSW at different levels was conducted alternately. Taking the horizontal lifting as an example, like the traditional lifting algorithm, the horizontal lifting was divided into 3 steps. In the first step, to avoid translation variance, the splitting process was changed to copying so as to obtain odd and even subbands of the same size as the original image. Then, prediction and updating were performed to further obtain the low-frequency and high-frequency subbands. The specific steps are as follows.

(1) Copy. Copy the input image into the even subband x_e and the odd subband x_o to store the low-frequency and high-frequency information, respectively. At this point, the input image is exactly the same as the images in the two subbands.

(2) Prediction. With the help of the prediction operator P, the low-frequency information in the even subband x_e was used to conduct an interpolation prediction of the high-frequency information in the odd subband x_o, and the difference between the odd subband and the predicted value was regarded as the high-frequency component h:

h = x_o − P(x_e)

(3) Update. The high-frequency component h obtained in the prediction step was corrected by the update operator U and superimposed on the low-frequency information in the even subband x_e to obtain the low-frequency component l:

l = x_e + U(h)
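As a concrete illustration, the copy-predict-update sequence above can be sketched in a few lines. The 2-tap averaging weights `p` and `u` below are illustrative stand-ins for the Neville (2, 2) coefficients used in the paper, and the periodic boundary handling via `np.roll` is a simplification:

```python
import numpy as np

def horizontal_lift(img, p=0.5, u=0.25):
    """One redundant horizontal lifting step (sketch).

    `p` and `u` stand in for the prediction/update weights; simple
    2-tap horizontal averages are used here for clarity.
    """
    even = img.astype(float).copy()   # copy instead of split: keeps full size
    odd = img.astype(float).copy()
    # Prediction: estimate each odd-band sample from its horizontal
    # neighbours in the even band; the residual is the high-pass band.
    # (np.roll wraps at the borders, a simplification of real padding.)
    pred = p * (np.roll(even, 1, axis=1) + np.roll(even, -1, axis=1))
    high = odd - pred
    # Update: correct the even band with the high-pass residual to get
    # the low-pass band.
    low = even + u * (np.roll(high, 1, axis=1) + np.roll(high, -1, axis=1))
    return low, high
```

Because the even and odd bands are copies rather than downsampled halves, both outputs keep the full image size, which is what preserves translation invariance in the redundant scheme.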

In the diagonal lifting process, the prediction operator P and the update operator U were rotated by 45° to obtain the prediction operator and update operator of the diagonal lifting, and then the same lifting process was performed on the low-frequency component obtained by the horizontal lifting. By repeating the horizontal/diagonal lifting process, RLNSW decomposition was carried out at different levels, and each step was reversible. The decomposition process and inverse process of RLNSW are shown in Figure 1. Each stage of the RLNSW transform generated a low-frequency subband containing the low-frequency components and a high-frequency subband containing the high-frequency components, similar to the output of the NSP.

It should be noted that Yu et al. used the Neville filter as an interpolation filter to construct the wavelet cluster of arbitrary order in any dimension [22]. In this study, the Neville (2, 2) filter was used as the prediction and update operator in the implementation of RLNSW.

3. Improved Neural Network Model

3.1. Standard PCNN Model

The standard PCNN model consists of a receiving module, a modulation module, and a pulse generation module, which give the PCNN such inherent characteristics as time-space accumulation, pulse distribution, and threshold variability. Time-space accumulation means that, within the same region, neurons with similar values fire at very close times and very similar numbers of times. Pulse distribution means that the relevant processing of neurons is produced in the form of binary pulses: if the output is 1, firing occurs; if the output is 0, no firing occurs and more neurons can still be captured. Threshold variability means that the threshold of the model decays exponentially in order to cause firing; when the attenuation reaches a certain level and neuron firing occurs, the threshold instantly becomes larger, preventing repeated firing within a short time [23].

Figure 2 shows the structure of the standard PCNN model. As can be seen, the receiving module of the PCNN consists of a link channel and a feedback channel. As the key to mutual learning and communication between neurons, the link channel received local stimuli generated by neighboring neurons; the feedback channel, the key for the neurons to adapt to the environment, received external stimuli caused by external signals. The external signals in this study refer to information closely related to the image itself, e.g., gray value, spatial frequency, and regional energy sum. The modulation module coupled the signals obtained by the two channels of the receiving module, i.e., the internal and external stimuli of the neuron, to produce an internal coupling value U. This value, which contains comprehensive information on the internal and external stimuli of the neuron, was the key to determining whether the neuron would fire. As the name suggests, the pulse generation module generates the pulse; it controlled the above-mentioned variable threshold θ. The internal coupling value U and the variable threshold θ were compared. If U ≤ θ, the output is 0, no firing occurs, and more neurons can still be captured. If U > θ, the output is 1 and the neuron fires, which further affects other neurons in the neighborhood through the link channel.

The equations of the standard PCNN model are as follows:

F_ij[n] = e^(−α_F) F_ij[n−1] + V_F Σ_kl M_ijkl Y_kl[n−1] + S_ij    (10)
L_ij[n] = e^(−α_L) L_ij[n−1] + V_L Σ_kl W_ijkl Y_kl[n−1]           (11)
U_ij[n] = F_ij[n](1 + β L_ij[n])                                   (12)
Y_ij[n] = 1 if U_ij[n] > θ_ij[n−1]; otherwise Y_ij[n] = 0          (13)
θ_ij[n] = e^(−α_θ) θ_ij[n−1] + V_θ Y_ij[n]                         (14)

According to the model, the receiving part mainly consists of (10) and (11), where F_ij[n] represents the feedback channel; V_F and α_F are the amplitude constant and attenuation coefficient of the feedback channel, respectively; L_ij[n] is the link channel; V_L and α_L are the amplitude constant and attenuation coefficient of the link channel, respectively; S_ij represents the external stimulus; and M and W are the matrices of link weights between the neuron with coordinates (i, j) and the neuron with coordinates (k, l), which generally satisfy the following equation:

M_ijkl = W_ijkl = 1 / ((i − k)² + (j − l)²)

The modulation module is expressed in (12), where U_ij[n] is the modulated internal coupling value, also known as the internal action item, which contains the input information of the link channel and the feedback channel; β is the link coefficient determining the extent to which neighboring neurons affect the central neuron, and it is a constant in the standard PCNN model. The pulse generation module corresponds to (13), where Y_ij[n] is the binary pulse output, θ_ij[n] is the dynamic threshold, which is calculated by (14), and α_θ is the attenuation coefficient of the dynamic threshold.

By applying the PCNN to image processing, a two-dimensional grid of neurons equal in size to the original image is formed. In the standard PCNN, the gray value of an image pixel is input as the external stimulus to the corresponding neuron, each neuron corresponds to a 3×3 or 5×5 neighborhood, and each neuron has only two states, i.e., firing (pulse output 1) and not firing (pulse output 0). In one iteration, once the receiving module was stimulated, the link channel and the feedback channel changed simultaneously: the link channel was stimulated by neighboring neurons, while the feedback channel was stimulated by external signals, and these changes eventually formed the internal action item U through the coupling modulation. The dynamic threshold θ was always in a state of attenuation. The internal action item U was compared with the dynamic threshold θ. If U > θ, the neuron fired and output the pulse 1. At this moment, the dynamic threshold increased rapidly, and the pulse was simultaneously output to the feedback channel and the neighborhood of this neuron, affecting other neurons through the link domain. If other neurons also fire upon receiving the pulse, this is referred to as capture. Through N iterations, different neurons fire different numbers of times, so the entire PCNN model forms a two-dimensional firing map the same size as the original image. The firing map records the number of times each neuron fires, corresponds to the pixels of the image, and can reflect some of their features. Notably, pixels with similar gray values are also close in their number of firing times, which further indicates that the PCNN model can acquire image information by simulating the working principle of the visual cortex.
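The iteration described above can be sketched as follows; the parameter values and the 3×3 weight matrix are illustrative choices for demonstration, not the tuned settings of any of the cited works:

```python
import numpy as np

def neighbor_sum(Y, W):
    """Weighted 3x3 neighbourhood sum of the pulse map Y (zero-padded)."""
    H, Wd = Y.shape
    pad = np.pad(Y.astype(float), 1)
    out = np.zeros((H, Wd))
    for di in range(3):
        for dj in range(3):
            out += W[di, dj] * pad[di:di + H, dj:dj + Wd]
    return out

def standard_pcnn(S, n_iter=10, beta=0.2, aF=0.1, aL=1.0, aT=0.3,
                  VF=0.5, VL=0.2, VT=20.0):
    """Run the standard PCNN on stimulus S (normalised gray values)
    and return the firing-count map."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])        # link weights to the 8 neighbours
    F = np.zeros_like(S, dtype=float)      # feedback channel
    L = np.zeros_like(S, dtype=float)      # link channel
    Y = np.zeros_like(S, dtype=float)      # binary pulse output
    T = np.ones_like(S, dtype=float)       # dynamic threshold
    fire_map = np.zeros_like(S, dtype=float)
    for _ in range(n_iter):
        F = np.exp(-aF) * F + VF * neighbor_sum(Y, W) + S  # decay + neighbours + stimulus
        L = np.exp(-aL) * L + VL * neighbor_sum(Y, W)      # decay + neighbour pulses
        U = F * (1.0 + beta * L)                           # coupling modulation
        Y = (U > T).astype(float)                          # fire where U exceeds threshold
        T = np.exp(-aT) * T + VT * Y                       # decay, plus jump on firing
        fire_map += Y
    return fire_map
```

Brighter stimuli accumulate feedback faster and cross the decaying threshold earlier, so their firing counts are at least as large as those of dimmer stimuli, which is exactly the time-space accumulation property described above.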

Thus, the standard PCNN involves multiple attenuation coefficients, i.e., α_F, α_L, and α_θ, several amplitude constants, i.e., V_F, V_L, and V_θ, and several other parameters (the link weight matrices M and W and the link coefficient β). The optimal values of these parameters must be determined through numerous experiments, and it is often difficult to set all parameters to optimal values in practical applications. The parameter settings for different types of images are usually different. In this context, the parameters are usually set based on experience, which greatly limits the versatility and adaptability of the PCNN. Moreover, the standard PCNN model does not specify the number of iterations N, which is key to the model: if N is too small, the neurons cannot fire fully, so the image information cannot be fully obtained; if N is too large, the amount of computation becomes excessive. It should also be pointed out that the dynamic threshold in the standard PCNN model decays exponentially, which does not conform to the human visual model [24]. Compared with a linear attenuation model, exponential attenuation affects high-intensity and low-intensity pixels differently; that is, its attenuation speed is constantly changing [25]. In view of this, an improved PCNN model was proposed in this study to simplify and improve the standard PCNN model; it reduces the parameter setting, corrects the relevant submodels, and adds self-adaptiveness. With these structural advantages, the computational efficiency was improved and excessive repetitive operations were avoided. In addition, the improved PCNN model is more suitable for image enhancement.

3.2. Image Enhancement Based on the Improved PCNN Model

The main purposes of image enhancement are to improve the contrast of the image and to distribute its gray values over more distinct levels. The improved PCNN model proposed in this study embodies the following main ideas.

(1) The standard PCNN model was simplified and the number of parameters to be set was reduced. The functions of the feedback channel were simplified to receiving only external stimuli, and the functions of the link channel were simplified to receiving only internal stimuli, aiming to achieve a clear division of labor and close integration among the parts of the model, which not only simplifies the structure but also improves efficiency.

(2) Self-adaptive factors were introduced to make the improved model more adaptable. The link coefficient β reflects the degree to which surrounding neurons influence the central neuron. In the standard PCNN model, the link coefficient is usually a constant, but according to human visual characteristics, sensitivity differs across regions: humans are more sensitive to regions with drastic changes than to regions with slow changes. For this reason, the spatial frequency SF was used to set the link coefficient, because SF is an excellent criterion for measuring the intensity of regional variation. Hence, the value of β can be set self-adaptively based on the degree of variation in the neighborhood, making the model more adaptable.

(3) The attenuation model was adjusted to a linear model. The exponential attenuation model of the standard PCNN is not suited to human visual characteristics, so it was changed to a linear attenuation model, so that the degree of attenuation is fair for stimuli of different intensities and the firing state of the neurons is better reflected.

In view of these points, the structure of the improved PCNN model proposed in this study is as follows:

F_ij[n] = S_ij                                             (15)
L_ij[n] = Σ_kl W_ijkl Y_kl[n−1]                            (16)
U_ij[n] = F_ij[n](1 + β_ij L_ij[n])                        (17)
Y_ij[n] = 1 if U_ij[n] ≥ θ_ij[n−1]; otherwise Y_ij[n] = 0  (18)
E_ij = S_max − (n − 1)Δ  when Y_ij[n] = 1                  (19)
θ_ij[n] = θ_ij[n−1] − Δ + V Y_ij[n]                        (20)

Here, S_max is the maximum gray value of the firing pixels in the iterative process, E is the enhanced image, and Δ is the attenuation step size. When setting the parameters, the following principles should be followed: V should be relatively large, while Δ ought to be relatively small, so as to ensure that each neuron fires only once and the threshold attenuation per step is small, allowing adjacent gray levels to be well distinguished by different firing times; if the value of Δ is too large, some image information may be lost. The model proposed in this study has a more reasonable form of link channel construction and a self-adaptive link coefficient β.
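A minimal sketch of this simplified scheme follows, under the assumption that the output gray level of a neuron is the threshold value at its firing time, which is one common realization of linear-decay, fire-once PCNN variants; the parameter names `delta` and `Vt` mirror the attenuation step and the large post-firing threshold described above:

```python
import numpy as np

def neighbor_pulses(fired):
    """Count of already-fired neurons in each 3x3 neighbourhood."""
    H, W = fired.shape
    pad = np.pad(fired.astype(float), 1)
    s = np.zeros((H, W))
    for di in range(3):
        for dj in range(3):
            if di == 1 and dj == 1:
                continue          # exclude the centre neuron itself
            s += pad[di:di + H, dj:dj + W]
    return s

def improved_pcnn_enhance(S, beta_map, delta=0.01, Vt=1e3):
    """Simplified PCNN: linear threshold decay, each neuron fires once,
    and the threshold at firing time sets the output gray level.
    beta_map is the self-adaptive link coefficient (e.g. from the
    spatial frequency); the output mapping here is illustrative."""
    S = S.astype(float)
    T = np.full_like(S, S.max())        # threshold starts at the maximum gray
    fired = np.zeros_like(S, dtype=bool)
    E = np.zeros_like(S)
    while not fired.all():
        L = neighbor_pulses(fired)      # link input: nearby neurons that fired
        U = S * (1.0 + beta_map * L)    # coupling modulation
        new = (~fired) & (U >= T)
        E[new] = T[new]                 # firing time encodes the output level
        fired |= new
        T = np.where(fired, Vt, T - delta)  # large Vt blocks refiring; linear decay
    return E
```

With the coupling switched off (beta_map of zeros), brighter pixels fire earlier and receive higher output values, so the gray-level ordering of the input is preserved while firing order spreads the levels over the decay schedule.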

The setting of the self-adaptive link coefficient β is discussed as follows. According to the above analysis, it is unsuitable for the value of β to be constant; β should vary with the degree of variation in different regions. In regions with significant edge characteristics and sharp variations, β should be relatively large, whereas in regions with slow variation or no detail information, β should be comparatively small. As an important criterion for describing the sharpness of an image, SF [25] is obtained over a window function and can accurately capture the degree of variation within the region. Therefore, the spatial frequency was regarded as the basis for determining the value of β. Since the link coefficient is usually restricted to a bounded range, the self-adaptive link coefficient was constructed as follows:

β_ij = c · SF_ij    (21)

SF = √(RF² + CF²)    (22)

where RF and CF are the row frequency and column frequency, respectively:

RF = √( (1/(MN)) Σ_{i=1}^{M} Σ_{j=2}^{N} [f(i, j) − f(i, j−1)]² ),  CF = √( (1/(MN)) Σ_{i=2}^{M} Σ_{j=1}^{N} [f(i, j) − f(i−1, j)]² )

where f(i, j) is the gray value at pixel position (i, j) within the M×N window, and c is a constant that adjusts the value of the link coefficient.
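The spatial-frequency-based link coefficient can be sketched as follows; the normalization by the maximum SF and the scaling constant `k` are illustrative choices standing in for the paper's adjusting constant:

```python
import numpy as np

def spatial_frequency(block):
    """SF of a block: sqrt(RF^2 + CF^2), where RF/CF are the RMS of
    the horizontal/vertical first differences."""
    b = block.astype(float)
    rf = np.sqrt(np.mean((b[:, 1:] - b[:, :-1]) ** 2))
    cf = np.sqrt(np.mean((b[1:, :] - b[:-1, :]) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def adaptive_beta(img, win=3, k=1.0):
    """Self-adaptive link coefficient: SF of the window centred on each
    pixel, normalised into [0, k]. Edge replication pads the borders."""
    H, W = img.shape
    r = win // 2
    pad = np.pad(img.astype(float), r, mode='edge')
    sf = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            sf[i, j] = spatial_frequency(pad[i:i + win, j:j + win])
    return k * sf / (sf.max() + 1e-12)   # flat images give all-zero beta
```

Regions containing edges or sharp transitions produce larger window SF, and therefore larger link coefficients, than smooth regions, matching the visual-sensitivity argument above.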

4. Image Enhancement Based on PCNN in the NSST Domain

The purpose of image enhancement here is to turn images with unsatisfactory contrast and sharpness into usable images with more complete and clearer details that better suit human visual characteristics. The high-frequency images decomposed by NSST, which carry a large number of edge and texture features, were enhanced by the improved PCNN model, which increased the visibility of the important information, enhanced the feature information of the smooth regions, and improved the image quality. In addition, since the improved PCNN model can improve the contrast and sharpness of the image during enhancement, the image quality can be further improved by enhancing the low-frequency image with the improved PCNN model as well.

The algorithm of image enhancement based on PCNN in the NSST domain is as follows:

Input: Original image

Output: Enhanced image obtained by the proposed method

Step 1. The original image was decomposed multidirectionally at multiple scales by the NSST transform; the decomposed low-frequency coefficient was denoted C_L and the high-frequency coefficients were denoted C_H^{j,l}, where j is the decomposition scale of the image and l is the multidirectional decomposition level of the image at scale j.

Step 2. The low-frequency subband image and the high-frequency subband images obtained by the NSST decomposition in the previous step were enhanced by the improved PCNN model proposed in Section 3 as follows:

(1) Based on (22), a sliding window was used to compute the spatial frequency SF of each subband as the external stimulus, that is, S_ij = SF_ij, and the self-adaptive link coefficient β was computed using (21).

(2) According to (16)-(20), the model was iterated, and the enhanced low-frequency subband and high-frequency subbands were computed based on Equation (19).

(3) The above computations were repeated until firing occurred in each neuron.

Step 3. The enhanced subband coefficients obtained by the improved PCNN model were inversely transformed by NSST to finally obtain the enhanced image.
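The three steps can be summarized in a skeleton where a simple blur/residual split stands in for the full multiscale, multidirectional NSST (which would return several high-frequency subbands per scale); swapping in a real shearlet decomposition and the PCNN enhancement would complete the pipeline:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter used as a stand-in multiscale low-pass."""
    H, W = img.shape
    pad = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros((H, W))
    for di in range(3):
        for dj in range(3):
            out += pad[di:di + H, dj:dj + W]
    return out / 9.0

def enhance_pipeline(img, enhance):
    """Skeleton of Steps 1-3: decompose, enhance each subband, invert.

    A single blur/residual split stands in for the NSST decomposition;
    `enhance` stands in for the improved PCNN subband enhancement.
    """
    low = box_blur(img)                          # Step 1: low-frequency subband
    high = img - low                             #         high-frequency residual
    low_e, high_e = enhance(low), enhance(high)  # Step 2: per-subband enhancement
    return low_e + high_e                        # Step 3: inverse transform
```

With the identity as "enhancement", the split reconstructs the input exactly, which mirrors the perfect-reconstruction property the nonsubsampled transform is chosen for.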

5. Simulation

5.1. Experimental Protocol

The computer used in this experiment had an Intel Core i7 CPU with 4 GB of memory, with MATLAB R2010b as the simulation software. To verify the validity of the proposed method, 3 groups of source images were selected as experimental samples: remote sensing images, infrared images, and medical images. To demonstrate the superiority of the improved method, it was compared with the PCNN-based image enhancement method (Method 1), the image enhancement method based on NSCT and histogram equalization (Method 2), and the image enhancement method based on NSCT and PCNN (Method 3). The parameters to be set for the standard PCNN model used in Method 1 and Method 3 comprised the number of iterations, the window size (3), the amplitude constants, the attenuation coefficients, and the link coefficient. As can be seen, the parameters of the standard PCNN model are numerous and complicated, whereas the improved PCNN model proposed in this study greatly reduces the number of parameters: only V, Δ, and the constant c needed to be set, and the value of β was computed based on (21). "Maxflat" is the scale filter used in the NSCT decomposition in Method 2 and Method 3, "dmaxflat7" is the directional filter, and the directional decomposition levels are 1, 2, 4, and 8 successively; in the INSST transform presented in this study, Neville (2, 2) was used as the prediction and update filter in the RLNSW decomposition, and the "shear" filter was used as the directional filter, with the numbers of directions being [9, 13, 21].

Image enhancement quality can be evaluated subjectively and objectively. Usually performed by visual inspection, subjective evaluation generally considers the brightness, texture, contrast, and sharpness of the image. Objective evaluation refers to quantitative analysis of the images. The objective evaluation criteria used in this study are the standard deviation (SD), information entropy (IE), average gradient (AG), and edge intensity (EI) [26]. The standard deviation reflects the dispersion of the gray level of each pixel relative to the average gray value: the larger the standard deviation, the greater the image contrast, the more information can be seen, and the better the image quality. Information entropy measures the richness of image information from the perspective of information theory; its value reflects the amount of information carried by the image, so the larger the information entropy, the richer the information and the better the quality [27]. The average gradient reflects the ability of the image to express the contrast of small details; in general, the larger the average gradient, the clearer the image. The larger the edge intensity, the more of the important detail information, such as the edges of the original image, is present in the enhanced image, and the better the quality of the enhanced image.
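Under common definitions of these criteria (a Sobel gradient for edge intensity, a 256-bin gray-level histogram for entropy; the paper's exact formulas may differ slightly), the four indicators can be computed as:

```python
import numpy as np

def std_dev(img):
    """Standard deviation: spread of gray levels about the mean."""
    return float(np.std(img.astype(float)))

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Average gradient: mean RMS of the two first differences."""
    f = img.astype(float)
    gx = f[:-1, 1:] - f[:-1, :-1]
    gy = f[1:, :-1] - f[:-1, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def edge_intensity(img):
    """Edge intensity: mean magnitude of the Sobel gradient."""
    f = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    H, W = f.shape
    pad = np.pad(f, 1, mode='edge')
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    for di in range(3):
        for dj in range(3):
            gx += kx[di, dj] * pad[di:di + H, dj:dj + W]
            gy += kx.T[di, dj] * pad[di:di + H, dj:dj + W]
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```

For a perfectly flat image all four indicators vanish, while stronger contrast, richer gray-level content, and sharper edges each push the corresponding indicator higher, which is the behavior the evaluation relies on.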

5.2. Results and Analysis

Figures 3, 4, and 5 show the enhanced infrared, remote sensing, and medical images, respectively. From a subjective point of view, in Figure 3, Method 1 performed poorly in enhancing the contrast of the image and failed to express the texture and detail information clearly. Method 3 outperformed Method 2 in terms of edge texture, but there was little difference between the two in contrast. The proposed method is significantly better than the other three methods in enhancing the sharpness and contrast of the image; it not only clearly expressed the detailed features of the background but also highlighted the target information. As shown in Figure 4, the proposed method achieved a visibly better enhancement than the other three methods. From a visual point of view, the differences among the other three methods are not so apparent, but the texture features enhanced by Method 3 are more complete and easier to observe. The proposed method enhanced the image details, highlighted the texture, controlled the brightness, and clearly revealed the content in the lower left corner of the image, making it obviously superior to the other three methods. As shown in Figure 5, all 4 methods enhanced the original image to a certain extent. Part of the information on the left side of the images enhanced by Method 1 and Method 2 was not fully expressed, although Method 2 depicted the bone texture more clearly than Method 1. The enhancement effects of Method 3 and the proposed method were better than those of Method 1 and Method 2, and the proposed method outperformed Method 3 in expressing the texture features of the tissues within the bone. With its strong enhancement of contrast and sharpness, the proposed method was more consistent with human visual requirements. According to the subjective evaluation, enhancement in the frequency domain was better than enhancement in the spatial domain. In addition, the PCNN has advantages in image enhancement, and the NSST domain is more suitable than the NSCT domain for image enhancement.

Because subjective analysis has limitations, the image enhancement results were also evaluated objectively. Tables 1-3 show the objective evaluation results of the three experiments. Except for the average gradient in the medical image enhancement experiment, the proposed method was superior to the other three methods in all three experiments with respect to the standard deviation, information entropy, average gradient, and edge intensity. In the infrared image enhancement experiment, the four objective indicators of the proposed method were significantly higher than those of the other three methods; the standard deviation and edge intensity of Method 2 were higher than those of Method 1, and the 4 indicators of Method 3 were higher than those of Method 1 and Method 2. In the remote sensing image enhancement experiment, the indicators of Method 1 and Method 2 were similar; for all indicators except information entropy, the indicators of Method 3 were better than those of Method 1 and Method 2, and the proposed method visibly outperformed Method 1 and Method 2 and slightly outperformed Method 3, making it the best of the 4 methods. In the medical image enhancement experiment, the proposed method performed relatively poorly in terms of the average gradient but still outperformed the other three methods with respect to the remaining indicators; the average gradient, information entropy, and standard deviation of Method 3 were higher than those of Method 1 and Method 2. These data again show that image enhancement carried out in the frequency domain is better than image enhancement conducted in the spatial domain. The proposed method has a strong ability to capture details and enhance contrast, which can improve image quality.

6. Conclusion

To solve the problems faced by the traditional NSST in image processing, an improved NSST was applied to image enhancement in this study. According to the characteristics of the high- and low-frequency coefficients in the improved NSST domain, an improved, simpler, faster PCNN model was proposed to enhance the high- and low-frequency subband images, and the enhanced subband images were then inversely transformed to produce the final enhanced images. Experiments show that the images enhanced by the proposed method better express the edge, texture, and structural features of the original images. The method can effectively increase the contrast of the image while suppressing noise, thus improving the quality and application value of the image. Future work should further improve the algorithm to increase its computing speed.

Data Availability

The image data used to support the findings of this study are included within the article. The code of the proposed method was supplied by the author Y. Xing under license and so cannot be made freely available. Requests for access to the code should be made to Y. Xing at [email protected].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61703426 and 61503407.