Research Article  Open Access
Automatic Side-Scan Sonar Image Enhancement in Curvelet Transform Domain
Abstract
We propose a novel automatic side-scan sonar image enhancement algorithm based on the curvelet transform. The proposed algorithm uses the curvelet transform to construct a multichannel enhancement structure based on the human visual system (HVS) and adopts a new adaptive nonlinear mapping scheme to modify the curvelet transform coefficients in each channel independently and automatically. Firstly, the noisy, low-contrast sonar image is decomposed into a low frequency channel and a series of high frequency channels using the curvelet transform. Secondly, a new nonlinear mapping scheme, which coincides with the logarithmic nonlinear enhancement characteristic of HVS perception, is designed without any parameter tuning to adjust the curvelet transform coefficients in each channel. Finally, the enhanced image is reconstructed from the modified coefficients via the inverse curvelet transform. Enhancement is achieved by amplifying subtle features, improving contrast, and eliminating noise simultaneously. Experimental results show that the proposed algorithm produces better enhanced results than state-of-the-art algorithms.
1. Introduction
Acoustic remote sensing technologies, such as high-resolution multibeam and side-scan sonars imaging in water, are widely used in marine geology, commercial fishing, offshore oil prospecting and drilling, and so forth [1–4]. Owing to transmission loss and acoustic wave scattering, sonar images are notorious for low contrast, edge blurring, and heavy noise. It is therefore necessary to amplify faint edges and eliminate noise in sonar images simultaneously before further image processing, such as image segmentation and object detection and classification.
Image enhancement approaches can generally be divided into two categories: spatial domain methods and transform domain methods. Spatial domain methods operate directly on the image pixels; the desired enhancement is achieved by manipulating pixel values. Commonly used spatial techniques include linear stretch, histogram equalization (HE) [5], convolution mask enhancement, and adaptive histogram equalization. Conventional histogram equalization has received considerable attention for its simple and straightforward implementation, but it often amplifies noise, blurs subtle edges, and tends to over-enhance image contrast when the histogram contains high peaks [6]. These spatial domain methods usually cannot effectively discriminate edges from noise, because edges and noise have similar properties in the spatial domain.
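As a concrete illustration of the spatial-domain baseline discussed above, the following is a minimal NumPy sketch of classical histogram equalization; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Classical histogram equalization for an 8-bit grayscale image.

    Maps each gray level through the normalized cumulative histogram,
    which flattens the distribution but, as noted above, can amplify
    noise and over-enhance images with strongly peaked histograms.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                          # normalize CDF to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]                          # apply the lookup table

# A low-contrast ramp confined to gray levels [100, 120] is stretched
# to span almost the full [0, 255] range after equalization.
img = np.tile(np.linspace(100, 120, 21).astype(np.uint8), (8, 1))
eq = histogram_equalize(img)
```

Note that equalization maps levels by rank only; it cannot distinguish an edge from a noise spike, which is exactly the limitation motivating the transform domain methods below.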
One way to solve this problem is to use multiscale geometric analysis (MGA) to decompose the image into different frequency bands and process each band independently; such approaches constitute the second category, transform domain methods. Multiscale wavelet-based image enhancement algorithms have achieved promising results over the past decades [7, 8]. However, the commonly used two-dimensional (2D) wavelet transform is a separable extension of the 1D wavelet transform, and because of its isotropy it does not capture an image's geometric edges well. To overcome this limitation of the wavelet transform, other multiscale analyses have been developed during the past decade, including the curvelet transform [9] and the nonsubsampled contourlet transform (NSCT) [10]. These approaches capture edges better than the wavelet transform owing to their high directional sensitivity and anisotropy. The curvelet transform has therefore been widely applied in image processing [11–15]. A contrast enhancement method based on the curvelet transform has been developed, which uses a gain function with four parameters to modify the curvelet transform coefficients [11]. However, it requires appropriate manual parameter settings for different images; inappropriate settings can degrade the image. Lu et al. [16] proposed a piecewise-function-based enhancement method in the curvelet transform domain (PFBE) to enhance sonar image contrast. This method reduces the complexity of parameter adjustment by using an improved gain function with only one parameter, but that parameter must still be set manually according to the input sonar image. To avoid manual parameter tuning, an automatic image enhancement method based on the NSCT (AIE-NSCT) was proposed, which adjusts the NSCT coefficients by using a nonlinear mapping function [17]. This state-of-the-art image enhancement method has achieved good results on both grayscale and colour images.
When processing sonar images, which have very low signal-to-noise ratios and strong noise, AIE-NSCT cannot sufficiently adjust contrast and eliminate noise. Furthermore, owing to the high redundancy of the NSCT, NSCT-based methods are more time-consuming than curvelet-based methods.
The curvelet transform represents edges and removes noise better than the classical wavelet transform, owing to its anisotropy and multidirectional decomposition capability, and it is faster than many other multiscale geometric transforms owing to its lower redundancy. Moreover, the curvelet transform coincides well with the sparse coding mechanism and the multichannel processing mechanism of the human visual system (HVS), which is composed of a series of parallel channels, each corresponding to a specific range of image spatial frequencies. Therefore, in this paper we propose an automatic side-scan sonar image enhancement method based on the curvelet transform. The proposed algorithm uses the curvelet transform to build a multichannel enhancement structure modeled on the HVS and adopts a new adaptive nonlinear mapping scheme to modify the curvelet transform coefficients in each channel independently and automatically. Experimental results show that the proposed method effectively enhances contrast while eliminating noise and preserving edges in side-scan sonar images, and that it outperforms state-of-the-art enhancement techniques in both qualitative and quantitative assessments.
The remainder of this paper is organized as follows. Section 2 describes the curvelet transform. Section 3 presents the curvelet-based multichannel enhancement structure, inspired by the multichannel processing mechanism of the HVS, together with an adaptive nonlinear mapping, proposed for each independent channel, that integrates noise removal with feature enhancement. Experimental results and performance evaluation are given in Section 4. Finally, conclusions are drawn in Section 5.
2. Curvelet Transform
Curvelets were first introduced by Candès and Donoho in 1999 [9]; they broke an inherent limit of wavelets in representing the geometry of image edges. The first-generation curvelet transform combines multiscale ridgelet analysis with a spatial band-pass filtering operation to isolate different scales. However, this transform is very complicated and redundant, involving many steps, such as subband decomposition, smooth partitioning, renormalization, and ridgelet analysis [15]. Later, a considerably simpler second-generation curvelet transform based on a frequency partition technique was proposed by the same authors [18–20]: the frequency plane of the image is separated into disjoint wedge-shaped regions, and a local Fourier transform is implemented on each region.
Assume that we work throughout in two dimensions, that is, $\mathbb{R}^2$, and set $x$ as the spatial variable, $\omega$ as the frequency-domain variable, and $(r, \theta)$ as polar coordinates in the frequency domain [21]. Let $W(r)$ and $V(t)$ be a pair of nonnegative, real-valued, and smooth window functions, called the "radial window" and "angular window," respectively. These windows will always satisfy the admissibility conditions:
$$\sum_{j=-\infty}^{\infty} W^2(2^j r) = 1, \quad r \in \left(\tfrac{3}{4}, \tfrac{3}{2}\right),$$
$$\sum_{l=-\infty}^{\infty} V^2(t - l) = 1, \quad t \in \left(-\tfrac{1}{2}, \tfrac{1}{2}\right).$$
For each scale $j \ge j_0$, we introduce the frequency window $U_j$ defined in the Fourier domain by
$$U_j(r, \theta) = 2^{-3j/4}\, W\!\left(2^{-j} r\right) V\!\left(\frac{2^{\lfloor j/2 \rfloor}\theta}{2\pi}\right),$$
where $\lfloor j/2 \rfloor$ is the integer part of $j/2$.
Define a "mother" curvelet as $\varphi_j(x)$, with Fourier transform $\hat{\varphi}_j(\omega) = U_j(\omega)$. Then all curvelets at scale $2^{-j}$ are obtained by rotations and translations of $\varphi_j$.
Introduce the equispaced sequence of rotation angles $\theta_l = 2\pi \cdot 2^{-\lfloor j/2 \rfloor} \cdot l$, with $l = 0, 1, \ldots$ such that $0 \le \theta_l < 2\pi$, and the sequence of translation parameters $k = (k_1, k_2) \in \mathbb{Z}^2$. Then define curvelets at scale $2^{-j}$, orientation $\theta_l$, and position $x_k^{(j,l)} = R_{\theta_l}^{-1}\big(k_1 \cdot 2^{-j}, k_2 \cdot 2^{-j/2}\big)$ by
$$\varphi_{j,l,k}(x) = \varphi_j\big(R_{\theta_l}(x - x_k^{(j,l)})\big),$$
where $R_\theta$ is the rotation by $\theta$ radians. A curvelet coefficient is then given by the inner product of an element $f \in L^2(\mathbb{R}^2)$ and a curvelet $\varphi_{j,l,k}$:
$$c(j, l, k) = \langle f, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,l,k}(x)}\, dx.$$
According to Plancherel's theorem, the curvelet transform can be expressed as an integral over the frequency plane:
$$c(j, l, k) = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, \overline{\hat{\varphi}_{j,l,k}(\omega)}\, d\omega = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, U_j(R_{\theta_l}\omega)\, e^{i \langle x_k^{(j,l)}, \omega \rangle}\, d\omega.$$
Figure 1 summarizes the key components of the construction of the second-generation continuous-time curvelet transform. The left panel shows the induced tiling of the frequency plane; the right panel shows the spatial Cartesian grid associated with a given scale and direction [21].
In the practical implementation, we define the Cartesian window by
$$\tilde{U}_j(\omega) = \tilde{W}_j(\omega)\, V_j(\omega),$$
where $\tilde{W}_j(\omega) = \sqrt{\Phi_{j+1}^2(\omega) - \Phi_j^2(\omega)}$ for $j \ge 0$, $V_j(\omega) = V\big(2^{\lfloor j/2 \rfloor}\omega_2/\omega_1\big)$, and $\Phi$ is defined as the product of low-pass one-dimensional windows:
$$\Phi_j(\omega_1, \omega_2) = \phi(2^{-j}\omega_1)\, \phi(2^{-j}\omega_2).$$
Introduce the set of equispaced slopes $\tan\theta_l = l \cdot 2^{-\lfloor j/2 \rfloor}$, $l = -2^{\lfloor j/2 \rfloor}, \ldots, 2^{\lfloor j/2 \rfloor} - 1$, and define
$$\tilde{U}_{j,l}(\omega) = \tilde{W}_j(\omega)\, V_j(S_{\theta_l}\omega),$$
where $S_\theta$ is the shear matrix
$$S_\theta = \begin{pmatrix} 1 & 0 \\ -\tan\theta & 1 \end{pmatrix}.$$
The family $\{\tilde{U}_{j,l}\}$ implies a concentric tiling whose geometry is pictured in Figure 2.
Thus, the discrete curvelet transform is defined as
$$c(j, l, k) = \int \hat{f}(\omega)\, \tilde{U}_j\big(S_{\theta_l}^{-1}\omega\big)\, e^{i \langle S_{\theta_l}^{-T} b,\, \omega \rangle}\, d\omega,$$
where $b$ takes on the discrete values $b = \big(k_1 \cdot 2^{-j}, k_2 \cdot 2^{-j/2}\big)$.
Two versions of the fast discrete curvelet transform (FDCT), namely, FDCT via unequally spaced FFTs (USFFT) and FDCT via wrapping, were developed [21]. Both are simpler, faster, and less redundant than the first-generation curvelet transform. In this paper, the wrapping-based version, which is faster than the USFFT-based version, is chosen to implement the digital curvelet transform.
3. Automatic Enhancement and Denoising Algorithm
3.1. Multichannel Enhancement Structure Based on HVS
It is well known that, for the HVS, the receptive fields of simple cells in the primary visual cortex are spatially localized, oriented, and band-pass [22]. The HVS captures the essential information of a natural scene using the fewest visually active cells, which is known as sparse coding of a natural scene. This suggests that an efficient computational image representation should be local, directional, and multiresolution. In addition, the HVS operates as a multichannel processing mechanism: it is composed of a series of parallel channels, each running independently within the visual cortex. It is generally agreed that each channel is sensitive to a specific range of image spatial frequencies, so these channels can be divided into one low-pass channel and several band-pass channels.
As a multiscale, multidirectional transform, the curvelet transform allows an almost optimal nonadaptive sparse representation of objects with edges. Because the curvelet transform coincides so well with this mechanism of human visual perception, we can model the multichannel enhancement structure using the curvelet transform: the low frequency subband of the curvelet transform corresponds to the low-pass channel of the HVS, and each high frequency directional subband corresponds to a band-pass channel. Figure 3 shows a multiscale decomposition of a side-scan sonar image using the curvelet transform, from which the curvelet transform's outstanding capability for multiscale edge representation can be seen.
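To make the multichannel structure concrete, the sketch below splits an image into one low-pass and several band-pass channels using disjoint FFT annuli. This is a deliberately simplified, isotropic stand-in for the curvelet decomposition (it lacks the directional wedges of the real tiling); the band edges and all names are illustrative assumptions:

```python
import numpy as np

def fft_channel_split(img, edges=(0.08, 0.2, 0.35)):
    """Split an image into one low-pass plus several band-pass channels.

    Each channel keeps a disjoint annulus of spatial frequencies,
    mimicking the HVS's parallel frequency-tuned channels. Because the
    annuli partition the spectrum, the channels sum back to the image.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.hypot(fy, fx)                 # radial frequency of each FFT bin
    band = np.digitize(r, edges)         # assign every bin to one annulus
    F = np.fft.fft2(img)
    return [np.fft.ifft2(F * (band == b)).real
            for b in range(len(edges) + 1)]

img = np.random.default_rng(0).random((64, 64))
chans = fft_channel_split(img)
recon = np.sum(chans, axis=0)  # disjoint masks sum to 1 -> exact reconstruction
```

Channel 0 plays the role of the low frequency subband; the remaining channels stand in for the band-pass subbands that the curvelet transform would further split by direction.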
3.2. Adaptive Nonlinear Mapping Scheme
Because sonar images are full of strong noise, the critical problem for sonar image enhancement is to effectively remove noise while adaptively adjusting the dynamic range and amplifying weak edges. After curvelet decomposition of a sonar image, the low frequency subband, which is almost noiseless, contains the overall contrast information, while the high frequency subband at each scale and direction contains not only edges but also noise. Edges are geometric structures, whereas noise is not, so the curvelet transform can be used to distinguish edges from noise. Consequently, the low frequency subband, which corresponds to the low-pass channel of the HVS, needs to be stretched appropriately, whereas each high frequency subband, which corresponds to a band-pass channel of the HVS, needs to be sufficiently enhanced while being denoised. According to the fundamental requirements proposed by Laine et al. in [23], an ideal nonlinear mapping function should satisfy the following rules:
(1) low contrast areas should be enhanced more than high contrast areas;
(2) sharp edges should not be blurred;
(3) the nonlinear function should be monotonically increasing, in order to maintain the locations of local extrema and avoid generating new extrema;
(4) the nonlinear function should be antisymmetric, that is, $f(-x) = -f(x)$, in order to preserve the polarity of phases and avoid ringing artifacts.
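For illustration, the following is a minimal function satisfying rules (1), (3), and (4) above. It is not the mapping proposed in this paper, merely an example of a monotone, antisymmetric, approximately logarithmic curve whose gain decreases with coefficient amplitude; the constant `ALPHA` is an arbitrary illustrative choice:

```python
import numpy as np

ALPHA = 20.0  # gain constant (illustrative choice, not from the paper)

def log_map(c):
    """An illustrative coefficient mapping obeying Laine's rules.

    `c` is a curvelet coefficient normalized to [-1, 1]. The curve is
    antisymmetric by construction (sign * odd function of |c|),
    strictly increasing, and amplifies small coefficients (weak edges)
    proportionally more than large ones (strong edges).
    """
    return np.sign(c) * np.log1p(ALPHA * np.abs(c)) / np.log1p(ALPHA)

x = np.linspace(-1.0, 1.0, 2001)
y = log_map(x)
```

Plotting `y` against `x` reproduces the qualitative shape of the mapping curves discussed in Section 3.2: steep near the origin, flattening toward the extremes.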
To achieve the adaptive multichannel enhancement, we propose a nonlinear mapping scheme to modify the curvelet transform coefficients in each channel independently.
3.2.1. Adaptive Nonlinear Mapping of Low Frequency Channel
For the low frequency subband, the nonlinear mapping function is defined by (11), where $C_{j,l}(x, y)$ denotes an original curvelet transform coefficient at location $(x, y)$ in the subband indexed by scale $j$ and direction $l$, and $C'_{j,l}(x, y)$ denotes the processed coefficient; the low frequency subband has a single direction index. The mapping is built from three subband statistics:
$$M_{j,l} = \max_{x,y} \big|C_{j,l}(x, y)\big|, \tag{12}$$
where $M_{j,l}$ denotes the maximum absolute coefficient amplitude in the subband indexed by scale $j$ and direction $l$;
$$m_{j,l} = \frac{1}{N_{j,l}} \sum_{x,y} \big|C_{j,l}(x, y)\big|, \tag{13}$$
where $m_{j,l}$ denotes the mean absolute coefficient amplitude in that subband and $N_{j,l}$ is the number of its coefficients; and
$$\sigma_{j,l} = \sqrt{\frac{1}{N_{j,l}} \sum_{x,y} \Big(\big|C_{j,l}(x, y)\big| - m_{j,l}\Big)^2}, \tag{14}$$
where $\sigma_{j,l}$ denotes the standard deviation of the absolute coefficient amplitude in that subband.
Subjective brightness (intensity as perceived by the HVS) is a nonlinear logarithmic function of the light intensity incident on the eye, as shown in Figure 4 [5]. Figure 5(a) shows the nonlinear mapping curve representing the enhanced curvelet transform coefficients versus the original coefficients in the low frequency subband. From Figure 5(a), it is observed that the proposed nonlinear mapping function is an approximate logarithmic mapping, which is consistent with the characteristics of the HVS perception. Therefore, the dynamic range in the low frequency subband is well stretched by the proposed mapping function as shown in Figure 5(a).
(a) The low frequency subband
(b) The high frequency subband
3.2.2. Adaptive Nonlinear Mapping of High Frequency Channel
For the high frequency subband at each scale $j$ and direction $l$, the nonlinear mapping function is defined by (17), where the maximum absolute coefficient amplitude $M_{j,l}$ and the standard deviation $\sigma_{j,l}$ are given by (12) and (14), respectively, in Section 3.2.1. $T_{j,l}$ is a hard threshold in the corresponding high frequency subband for adaptive denoising. In this paper, the threshold for each high frequency subband is given by
$$T_{j,l} = k\,\sigma\,\tilde{\sigma}_{j,l},$$
which has been termed $k$-sigma thresholding [24]. We set $k = 4$ for the finest scale and $k = 3$ for the others. The noise standard deviation $\sigma$ of the original image is estimated using the robust median operator [25]; that is, $\sigma = \operatorname{median}(|C_J|)/0.6745$, where $C_J$ refers to the curvelet transform coefficients in the finest subband. An approximate value of the individual noise standard deviation $\tilde{\sigma}_{j,l}$ in the $l$th directional subband of the $j$th scale is calculated using Monte Carlo simulations [26].
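The robust median noise estimator and the hard thresholding described above can be sketched as follows. This is a generic sketch, not the paper's exact implementation: the constants are conventional choices from the curvelet denoising literature [24, 25], and the per-subband noise factor is passed in rather than derived by Monte Carlo simulation:

```python
import numpy as np

def robust_sigma(finest_coeffs):
    """Robust median estimate of the noise standard deviation:
    sigma = median(|c|) / 0.6745, computed on the finest-scale
    coefficients; 0.6745 is the 75th percentile of the standard normal."""
    return np.median(np.abs(finest_coeffs)) / 0.6745

def ksigma_threshold(coeffs, sigma, subband_factor, k):
    """Hard k-sigma thresholding of one high frequency subband:
    coefficients below k * sigma * subband_factor are zeroed
    (k = 4 at the finest scale and k = 3 elsewhere is a common
    convention in curvelet denoising; other choices are possible)."""
    t = k * sigma * subband_factor
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

# Sanity check on pure Gaussian noise with known standard deviation 2.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 2.0, size=100_000)
sigma_hat = robust_sigma(noise)
denoised = ksigma_threshold(noise, sigma_hat, 1.0, 3)
```

On pure noise, 3-sigma thresholding zeroes all but roughly 0.3% of the coefficients, which is why genuine edge coefficients, being much larger, survive.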
Because the mapping curves of all the high frequency subbands have similar shapes, the curve for one representative high frequency subband is given as an example in Figure 5(b). As expected, noise is effectively suppressed by setting the coefficients smaller than the threshold to zero. The absolute slope of the mapping curve in Figure 5(b) decreases as the absolute coefficient amplitude increases over the enhanced interval; therefore, compared with strong edges, weak edges, which correspond to smaller coefficients, are enhanced more markedly. The proposed mapping function accords with the nonlinear logarithmic property of the HVS and balances noise reduction against edge enhancement well, as shown in Figure 5(b). Moreover, it satisfies the monotonicity and antisymmetry rules.
In summary, the proposed nonlinear mapping achieves the following: it preserves strong edges by keeping the large coefficients, enhances weak edges by amplifying the small coefficients, and removes noise by thresholding noise coefficients in the high frequency subbands, while increasing the overall contrast by adjusting the dynamic range of the low frequency subband. It therefore substantially improves the visual quality of side-scan sonar images.
3.3. Block Diagram of the Proposed Algorithm
A block diagram of the proposed image enhancement algorithm is shown in Figure 6. To summarize, the proposed algorithm can be described as follows.
Input: original image.
(1) Implement the curvelet transform of the original image and obtain the curvelet transform coefficients in the low frequency subband and each high frequency directional subband.
(2) Calculate the enhanced curvelet transform coefficients in the low frequency subband using (11).
(3) Estimate the noise standard deviation of the image and the individual noise standard deviation of each high frequency directional subband to obtain the threshold for that subband.
(4) Calculate the enhanced curvelet transform coefficients in each high frequency directional subband using (17).
(5) Reconstruct the enhanced image from the modified coefficients via the inverse curvelet transform.
Output: enhanced image.
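The five steps above can be sketched end to end as follows. This is a heavily simplified stand-in: a single low/high FFT split replaces the curvelet transform (the actual algorithm uses FDCT via wrapping with per-subband mappings (11) and (17)), and the log stretch, threshold, and all names are illustrative assumptions:

```python
import numpy as np

def enhance(img, k=3.0, alpha=10.0):
    """Sketch of the algorithm's pipeline on a two-channel FFT split."""
    # (1) "Transform": split the spectrum into low-pass and high-pass parts.
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    low = np.fft.ifft2(F * (np.hypot(fy, fx) < 0.1)).real
    high = img - low
    # (2) Stretch the low frequency channel with a log-like mapping.
    lo_min, lo_max = low.min(), low.max()
    u = (low - lo_min) / (lo_max - lo_min + 1e-12)       # normalize to [0, 1]
    low_enh = lo_min + (lo_max - lo_min) * np.log1p(alpha * u) / np.log1p(alpha)
    # (3)-(4) Estimate noise robustly and hard-threshold the high channel.
    sigma = np.median(np.abs(high)) / 0.6745
    high_enh = np.where(np.abs(high) >= k * sigma, high, 0.0)
    # (5) Reconstruct by recombining the channels.
    return low_enh + high_enh

rng = np.random.default_rng(2)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = clean + rng.normal(0, 0.05, clean.shape)
out = enhance(noisy)
```

In the real algorithm each of the many directional subbands gets its own mapping and threshold, which is what lets it enhance weak edges while suppressing noise.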
4. Experimental Results and Discussion
In this section, the effectiveness of the proposed algorithm is validated through computer simulation. We compare our algorithm with three image enhancement algorithms: HE [5], AIE-NSCT [17], and PFBE [16]. HE and AIE-NSCT are free of parameter selection for optical image enhancement, but PFBE requires parameter tuning for sonar image enhancement. We use the fast discrete curvelet transform via wrapping in our experiments [21].
4.1. Qualitative Assessment
Four typical side-scan sonar images, shown in Figure 7, are chosen as test images, and the enhancement results are given in Figures 8(a)–8(d), 9(a)–9(d), 10(a)–10(d), and 11(a)–11(d), respectively. To further evaluate the performance of the different enhancement algorithms, we also use the Canny edge detector to extract the corresponding edges of the enhanced images, as shown in Figures 8(e)–8(h), 9(e)–9(h), 10(e)–10(h), and 11(e)–11(h). The gray-level mapping curves of the enhanced images versus the original images are shown in Figure 12.
Figure 7 panels: (a) Sand; (b) Plane; (c) River landform; (d) Bridge pier.
The original image Sand in Figure 7(a) is a dark side-scan sonar image of seafloor sand ripples. Figures 8(a)–8(d) show the images enhanced by HE, AIE-NSCT, PFBE, and the proposed algorithm, respectively. HE over-enhances the image; this over-enhancement appears as a sharp change in its mapping curve in Figure 12(a). Moreover, it distorts the brightness at the bottom of the sand ripples; that is, the dark region at the bottom becomes too bright. The mapping curve verifies this observation: input gray levels in the range 0–20 are mapped to output gray levels in the range 36–138. AIE-NSCT enhances the contrast to some extent, but the overall contrast enhancement is unsatisfactory and some details remain faint; its mapping curve likewise shows insufficient stretching of the dynamic range. For PFBE, we choose the parameter to obtain the best enhanced image. PFBE increases the overall brightness of the image but cannot provide adequate contrast enhancement, as verified by its mapping curve, which is linear and almost parallel to the no-change mapping. In contrast, the proposed algorithm efficiently emphasizes subtle features while improving the overall contrast and removing noise. This is verified by its mapping curve in Figure 12(a), where the input low gray-level range is compressed and the high gray-level range is stretched sufficiently.
The original image Plane in Figure 7(b) is a side-scan sonar image of the plane wreckage in Lake Washington obtained by Marine Sonic Technology, Ltd. Figures 9(a)–9(d) show the enhancement results of HE, AIE-NSCT, PFBE, and the proposed algorithm, respectively. HE again over-enhances the image, accompanied by annoying amplified noise, resulting in over-brightness of the fuselage; this is verified by the mapping curve in Figure 12(b), where the rising slope of its curve is the largest. AIE-NSCT enhances the contrast of some image structures, but the boundary of the plane's shadow area is still unclear. For PFBE, we choose the parameter to get the optimal enhanced image. PFBE improves the overall contrast, but the edges of the plane are slightly fuzzy. The proposed algorithm clearly distinguishes the background, the plane, and the shadow area while yielding visually pleasing brightness. As can be seen from Figure 12(b), our mapping curve lies between those of HE and PFBE.
The original image River landform in Figure 7(c) is a side-scan sonar image of the riverbed in the Changzhou section of the China Grand Canal. Figures 10(a)–10(d) show the enhancement results of the four algorithms, respectively. HE obviously amplifies noise, resulting in undesired blurring of weak edges. AIE-NSCT can partly preserve edges and reduce noise, but the overall contrast enhancement is not noticeable because of its limited dynamic range adjustment. For PFBE, we choose the parameter to obtain the best enhanced image. Although PFBE effectively suppresses the noise, the perceived contrast is not significantly improved. The proposed algorithm efficiently enhances the dynamic range and edges while simultaneously removing noise; furthermore, it recovers the details of the small silt pits at the bottom left. This is verified by the mapping curves in Figure 12(c), where the input gray levels are stretched excessively by HE, inadequately by AIE-NSCT and PFBE, but appropriately by our algorithm.
The original image Bridge pier in Figure 7(d) is a side-scan sonar image of the Huai De bridge pier in the Changzhou section of the China Grand Canal. We choose the parameter to get the optimal enhanced image for PFBE. The proposed algorithm is the only approach that simultaneously enhances the contrast, sharpens the edges, and reduces noise, as illustrated by the visual results in Figures 11(a)–11(d) and the mapping curves in Figure 12(d).
The corresponding edge detection results of the enhanced images are shown in Figures 8(e)–8(h), 9(e)–9(h), 10(e)–10(h), and 11(e)–11(h), respectively. The over-enhancement and amplified noise produced by HE give rise to many false edge points, as can be seen in Figures 8(e), 9(e), 10(e), and 11(e). Owing to the insufficient contrast enhancement and blurring of edge details produced by AIE-NSCT and PFBE, the detected edges are incomplete (e.g., the loss of weak edges in the middle of Sand in Figures 8(f)–8(g), the loss of the plane's tail and wing edges in the shadow area of Plane in Figures 9(f)–9(g), the loss of texture details in the bottom right of River landform in Figures 10(f)–10(g), and the incomplete outer contour of Bridge pier in Figures 11(f)–11(g)). Because the proposed method significantly reduces noise and strengthens edges, it obtains more accurate, clean, and complete edges, as can be clearly observed in Figures 8(h), 9(h), 10(h), and 11(h). These edge detection results further demonstrate that the proposed method has a significant advantage in suppressing noise while preserving edges.
4.2. Quantitative Assessment
To acquire a quantitative evaluation of the enhancement results, we adopt the image contrast measure called the measure of enhancement by entropy (EMEE) proposed in [27]:
$$\mathrm{EMEE} = \frac{1}{k_1 k_2} \sum_{l=1}^{k_2} \sum_{k=1}^{k_1} \frac{I^{\max}_{k,l}}{I^{\min}_{k,l} + c}\, \ln \frac{I^{\max}_{k,l}}{I^{\min}_{k,l} + c},$$
where the image is split into $k_1 \times k_2$ blocks, $I^{\max}_{k,l}$ and $I^{\min}_{k,l}$ are the maximum and minimum pixel values in block $(k, l)$, respectively, and $c$ is a small constant equal to 0.0001 to avoid dividing by zero.
The EME by entropy (EMEE), which has the entropy-formula form $X \log X$, actually measures the entropy, or information, in the contrast of the image [28]. The EMEE value should increase by a significant magnitude when the contrast of an image is enhanced noticeably. We use the EMEE to compare the enhancement performance of the different algorithms, as listed in Table 1. Because HE over-enhances the images Sand and Plane, the maximum and minimum values of each block become large simultaneously, resulting in small EMEE values; for the images River landform and Bridge pier, HE amplifies noise, resulting in large EMEE values. The proposed algorithm offers the largest EMEE value for each test image, which indicates that its enhanced images have the highest contrast. Both subjective and quantitative assessments thus show that the proposed algorithm outperforms the other algorithms in enhancing image contrast, improving edge sharpness, and suppressing noise.
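A block-based EMEE of the X log X form described above can be sketched as follows. This is a simple unweighted variant; [27, 28] also define parameterized (alpha-weighted) forms, and the block size here is an arbitrary illustrative choice:

```python
import numpy as np

def emee(img, block=8, c=1e-4):
    """Measure of enhancement by entropy over non-overlapping blocks.

    Each block contributes (Imax / (Imin + c)) * ln(Imax / (Imin + c));
    the result is the average over all blocks, so higher values indicate
    higher local contrast.
    """
    h, w = img.shape
    h, w = h - h % block, w - w % block       # crop to whole blocks
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    bmax = tiles.max(axis=(1, 3)).astype(float)
    bmin = tiles.min(axis=(1, 3)).astype(float)
    ratio = bmax / (bmin + c)
    return float(np.mean(ratio * np.log(ratio)))

rng = np.random.default_rng(3)
flat = np.full((64, 64), 100.0)                          # zero-contrast image
contrasty = rng.integers(0, 256, (64, 64)).astype(float) # high-contrast image
```

A zero-contrast image scores essentially zero, while an image with large within-block dynamic range scores high, matching the metric's intent.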

4.3. Comparison of Running Time
All the algorithms are implemented in MATLAB R2011b on a PC with a 3 GHz Pentium(R) Dual-Core CPU E5700 and 2 GB RAM. The running times of the four methods are given in Table 2. The test images Sand, Plane, River landform, and Bridge pier are 500 × 512, 256 × 256, 500 × 500, and 300 × 300 pixels, respectively. For all the methods, the running time grows with the size of the image. The results indicate that our algorithm consumes much less time than AIE-NSCT because of the lower computational complexity of the curvelet transform. Compared with HE, our algorithm produces much better enhanced images at the cost of a little more time, and it is also faster than PFBE. It should be noted that PFBE additionally requires time to select the optimal parameter manually; this time is not included in the running times of PFBE in Table 2 because it is difficult to determine.

5. Conclusion
In this study, a new automatic side-scan sonar image enhancement algorithm in the curvelet transform domain is proposed. We present an adaptive multichannel enhancement structure based on the HVS, combining a nonlinear mapping scheme with the curvelet transform. The proposed nonlinear mapping scheme is designed to achieve the following goals: in the high frequency subbands, amplifying the coefficients of weak edges, preserving the coefficients of strong edges, and inhibiting noise coefficients; and in the low frequency subband, adjusting the dynamic range adequately. The nonlinear mapping is adaptive, requires no parameter tuning, and is consistent with the nonlinear logarithmic property of the HVS. Therefore, the proposed algorithm automatically achieves noise suppression, edge sharpening, and contrast enhancement for side-scan sonar images. The algorithm is tested on real sonar images and compared with several popular enhancement algorithms. Experimental results demonstrate that it outperforms the existing algorithms in terms of both subjective visual evaluation and the objective quantitative EMEE measure. Moreover, compared with HE, the proposed algorithm enhances the image much better with only slightly more computation time; compared with the NSCT-based enhancement algorithm, it not only produces better results but also consumes much less time; and compared with PFBE, a nonadaptive curvelet-based enhancement algorithm, it achieves better enhancement without manual parameter adjustment. Therefore, the proposed approach can be easily and effectively used for sonar image enhancement.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 60972101 and no. 41306089) and the Natural Science Foundation of Jiangsu Province (no. BK20130240).
References
[1] T. Celik and T. Tjahjadi, "A novel method for side-scan sonar image segmentation," IEEE Journal of Oceanic Engineering, vol. 36, no. 2, pp. 186–194, 2011.
[2] A. J. Hunter and R. van Vossen, "Sonar target enhancement by shrinkage of incoherent wavelet coefficients," Journal of the Acoustical Society of America, vol. 135, no. 1, pp. 262–268, 2014.
[3] R. Fandos, A. M. Zoubir, and K. Siantidis, "Unified design of a feature-based ADAC system for mine hunting using synthetic aperture sonar," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 5, pp. 2413–2426, 2014.
[4] T. Fei, D. Kraus, and A. M. Zoubir, "Contributions to automatic target recognition systems for underwater mine classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 1, pp. 505–518, 2015.
[5] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice-Hall, Upper Saddle River, NJ, USA, 2006.
[6] T. Celik and T. Tjahjadi, "Automatic image equalization and contrast enhancement using Gaussian mixture modeling," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 145–156, 2012.
[7] A. K. Bhandari, V. Soni, A. Kumar, and G. K. Singh, "Artificial Bee Colony-based satellite image contrast and brightness enhancement technique using DWT-SVD," International Journal of Remote Sensing, vol. 35, no. 5, pp. 1601–1624, 2014.
[8] M. Z. Iqbal, A. Ghafoor, A. M. Siddiqui, M. M. Riaz, and U. Khalid, "Dual-tree complex wavelet transform and SVD based medical image resolution enhancement," Signal Processing, vol. 105, pp. 430–437, 2014.
[9] E. J. Candès and D. L. Donoho, "Curvelets—a surprisingly effective nonadaptive representation for objects with edges," in Curve and Surface Fitting: Saint-Malo 1999, A. Cohen, C. Rabut, and L. L. Schumaker, Eds., Vanderbilt University Press, Nashville, Tenn, USA, 1999.
[10] A. L. da Cunha, J. P. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
[11] J.-L. Starck, F. Murtagh, E. J. Candès, and D. L. Donoho, "Gray and color image contrast enhancement by the curvelet transform," IEEE Transactions on Image Processing, vol. 12, no. 6, pp. 706–717, 2003.
[12] J. M. Mejia, H. D. Ochoa Dominguez, O. O. Vergara Villegas, L. Ortega Maynez, and B. Mederos, "Noise reduction in small-animal PET images using a multiresolution transform," IEEE Transactions on Medical Imaging, vol. 33, no. 10, pp. 2010–2019, 2014.
[13] E. Uslu and S. Albayrak, "Synthetic aperture radar image clustering with curvelet subband Gauss distribution parameters," Remote Sensing, vol. 6, no. 6, pp. 5497–5519, 2014.
[14] L. Liu, H. Dong, H. Huang, and A. C. Bovik, "No-reference image quality assessment in curvelet domain," Signal Processing: Image Communication, vol. 29, no. 4, pp. 494–505, 2014.
[15] Y. Li, H. Gong, D. Feng, and Y. Zhang, "An adaptive method of speckle reduction and feature enhancement for SAR images based on curvelet transform and particle swarm optimization," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 8, pp. 3105–3116, 2011.
[16] H. Lu, A. Yamawaki, and S. Serikawa, "Curvelet approach for deep-sea sonar image denoising, contrast enhancement and fusion," Journal of International Council on Electrical Engineering, vol. 3, no. 3, pp. 250–256, 2013.
[17] H. Soyel and P. W. McOwan, "Automatic image enhancement using intrinsic geometrical information," Electronics Letters, vol. 48, no. 15, pp. 917–919, 2012.
[18] E. J. Candès and D. L. Donoho, "Continuous curvelet transform. I. Resolution of the wavefront set," Applied and Computational Harmonic Analysis, vol. 19, no. 2, pp. 162–197, 2005.
[19] E. J. Candès and D. L. Donoho, "Continuous curvelet transform. II. Discretization and frames," Applied and Computational Harmonic Analysis, vol. 19, no. 2, pp. 198–222, 2005.
[20] E. J. Candès and D. L. Donoho, "New tight frames of curvelets and optimal representations of objects with piecewise C^2 singularities," Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219–266, 2004.
[21] E. Candès, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling & Simulation, vol. 5, no. 3, pp. 861–899, 2006.
[22] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607–609, 1996.
[23] A. Laine, J. Fan, and W. Yang, "Wavelets for contrast enhancement of digital mammography," IEEE Engineering in Medicine and Biology Magazine, vol. 14, no. 5, pp. 536–550, 1995.
[24] J.-L. Starck, E. J. Candès, and D. L. Donoho, "The curvelet transform for image denoising," IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, 2002.
[25] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
[26] D. D. Po and M. N. Do, "Directional multiscale modeling of images using the contourlet transform," IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1610–1620, 2006.
[27] S. S. Agaian, K. P. Lentz, and A. M. Grigoryan, "A new measure of image enhancement," in Proceedings of the IASTED International Conference on Signal Processing & Communication, Marbella, Spain, September 2000.
[28] S. S. Agaian, B. Silver, and K. A. Panetta, "Transform coefficient histogram-based image enhancement algorithms using contrast entropy," IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 741–758, 2007.
Copyright
Copyright © 2015 Yan Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.