ISRN Computational Mathematics
Volume 2012 (2012), Article ID 982792, 12 pages
http://dx.doi.org/10.5402/2012/982792
Research Article

Nonconvex Compressed Sampling of Natural Images and Applications to Compressed MR Imaging

1College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
2School of Mathematics and Statistics, Nanjing Audit University, Nanjing 211815, China
3School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, China

Received 25 July 2011; Accepted 5 September 2011

Academic Editors: K. T. Miura and E. Weber

Copyright © 2012 Wenze Shao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Several compressed imaging reconstruction algorithms have been proposed for natural and MR images. In essence, however, most of them aim only at the faithful reconstruction of edges in the images. In this paper, a nonconvex compressed sampling approach is proposed for structure-preserving image reconstruction, which imposes sparseness regularization on both strong edges and oscillating textures in images. The proposed approach can yield high-quality reconstructions when images are sampled at ratios far below the Nyquist rate, owing to the exploitation of a kind of approximate $\ell_0$ seminorms. Numerous experiments are performed on natural images and MR images. Compared with several existing algorithms, the proposed approach is more efficient and robust, not only yielding higher signal-to-noise ratios but also reconstructing images with better visual quality.

1. Introduction

In the past several decades, image compression [1, 2] and superresolution [3, 4] have been the primary techniques for alleviating the storage and transmission burden of image acquisition. As for image compression, however, it is known that the compress-then-decompress scheme is not economical [5]. Though superresolution is capable of economically reconstructing high-resolution images to subpixel precision from multiple low-resolution images of a similar view, the subpixel shifts have to be estimated in advance. Unfortunately, accurate motion estimation is not easy for superresolution, which may compromise image quality (e.g., spatial resolution, signal-to-noise ratio (SNR)). Recently, a novel sampling theory called compressed sensing or compressive sampling (CS) [5–9] asserts that one can reconstruct signals from far fewer samples or measurements than traditional sampling methods use. The emergence of CS has offered a great opportunity to acquire signals or images economically even when the sampling ratio is significantly below the Nyquist rate. In fact, CS has become one of the hottest research topics in the field of signal processing. Though there have been many results on the encoding and decoding of sparse signals, our focus in this paper is mainly on the compressed sampling of natural images and its application to magnetic resonance imaging (MRI) reconstruction.

Recently, several algorithms have been proposed for compressed imaging reconstruction (e.g., [6, 11–17]). In essence, each of them solves, either directly or asymptotically, a minimization problem involving a single $\ell_1$ norm, total variation (TV), or a combination of the two, and the differences in their reconstruction quality are anticipated to be small. In this paper, a nonconvex CS approach is proposed for structure-preserving image reconstruction, which imposes sparseness regularization on both strong edges and oscillating textures in images. Even when images are sampled at ratios far below the Nyquist rate, the proposed approach still yields much higher quality reconstructions than the aforementioned methods because it utilizes a kind of approximate $\ell_0$ seminorms. Numerous experiments are performed on test natural images and MR images, showing that the images reconstructed by the proposed approach not only have higher SNR values but also have better visual quality even as the sampling ratio becomes much lower.

The paper is organized as follows. In Section 2, a review of compressed sampling signal and image reconstruction is given, including the basic theory of compressed sampling and reconstruction algorithms for compressed imaging. The proposed nonconvex compressed sampling approach is described in Section 3, solved by the half-quadratic regularization, primal-dual, and operator-splitting methods. Section 4 provides numerous experiments on test natural images and ordinary MR images and compares the proposed method with several existing algorithms in terms of signal to noise ratios, relative errors, and visual effects. Finally, the paper is concluded in Section 5.

2. Review of Compressed Sampling Signal and Image Reconstruction

The success of CS theory rests on two fundamental principles, namely, sparsity and incoherence, or the restricted isometry property (RIP). Sparsity says that a signal should be sparse itself or have a sparse representation in a certain transform domain, and incoherence implies that the sampling/measurement matrix should have an extremely dense representation in that transform domain [9]. Consider for the moment the CS problem $y = \Phi x$, where $x \in \mathbb{R}^N$ is a sparse signal, $\Phi \in \mathbb{R}^{M \times N}$ is a measurement matrix, and $y \in \mathbb{R}^M$ is the measurement vector. Suppose the sparsity of $x$ is $K$; then signal reconstruction can be recast as the $\ell_0$-problem $\min_x \{\|x\|_0 : y = \Phi x\}$. Since this problem is NP-hard, it is more realistic to solve the computationally tractable $\ell_p$-problem ($0 < p \le 1$), $\min_x \{\|x\|_p^p : y = \Phi x\}$. For this problem, a sufficient condition for exact reconstruction has been provided in [18]: if $\Phi$ satisfies the inequality $\delta_{aK} + b\,\delta_{(a+1)K} < b - 1$ ($b > 1$, $a = b^{p/(2-p)}$), then the unique minimizer of the $\ell_p$-problem is exactly $x$. In real applications, due to the finite precision of sensing devices, measurements are inevitably corrupted by at least a small amount of noise. Hence, the constraint $y = \Phi x$ must be relaxed, resulting in either the problem
$$\min_x \left\{\|x\|_p^p : \|y - \Phi x\|_2^2 < \sigma\right\} \qquad (1)$$
or its Lagrangian version
$$\min_x\; \gamma\|x\|_p^p + \frac{1}{2}\|y - \Phi x\|_2^2, \qquad (2)$$
where $\sigma$ and $\gamma$ are positive parameters. For the noisy compressed sensing problem, it is shown in [19] that the solution of problem (1) exhibits a reconstruction error on the order of $C\sigma$, the constant $C$ depending only on the RIP constant of $\Phi$. Moreover, a reconstruction error bound is also provided in [19] for signals that are not sparse but merely compressible.
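
For concreteness, the short sketch below evaluates the Lagrangian objective (2) for a candidate solution. It is a minimal Python illustration under our own naming (`phi`, `gamma`, `p`), not code from the paper.

```python
import numpy as np

def lagrangian_objective(x, y, phi, gamma, p=1.0):
    """Evaluate gamma * ||x||_p^p + 0.5 * ||y - Phi x||_2^2, the objective of (2)."""
    residual = y - phi @ x
    return gamma * np.sum(np.abs(x) ** p) + 0.5 * np.linalg.norm(residual) ** 2
```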

In the literature, numerous computational methods have been proposed to solve the $\ell_p$-problem, particularly in the case $p = 1$, and its relaxations for sparse solutions. One representative algorithm is the interior-point method, such as the primal log-barrier approach for CS [20]. Compared with interior-point methods, however, gradient methods are generally more competitive on CS problems with very sparse solutions, for example, iterative shrinkage and thresholding [21], fixed-point continuation (FPC) [10], and gradient projection for sparse reconstruction (GPSR) [22]. Methods for the nonconvex $\ell_p$ ($0 < p < 1$) minimization [23, 24], for example, iteratively reweighted least squares (IRLS) [23], do not always find global minima and are also slower. Besides, a class of Bayesian CS approaches based on sparse Bayesian learning has been proposed to solve the $\ell_0$-problem [25, 26]; in essence, however, they correspond to nonconvex methods just like IRLS.
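
As one concrete instance of the gradient methods mentioned above, the following sketch implements plain iterative soft-thresholding for the $p = 1$ case of (2). It assumes an explicit matrix $\Phi$ and is only an illustration of the technique, not the FPC or GPSR codes themselves.

```python
import numpy as np

def ista_l1(y, phi, gamma, iters=200):
    """Iterative soft-thresholding for min_x gamma*||x||_1 + 0.5*||y - Phi x||_2^2."""
    step = 1.0 / (np.linalg.norm(phi, 2) ** 2)    # 1/L, with L the largest squared singular value of Phi
    x = np.zeros(phi.shape[1])
    for _ in range(iters):
        z = x - step * (phi.T @ (phi @ x - y))    # gradient step on the data-fidelity term
        x = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)   # soft-thresholding step
    return x
```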

CS theory originally emerged from the MR imaging community, where Candès et al. [6] proposed to reconstruct piecewise constant Shepp-Logan phantoms based on TV and theoretically proved that exact signal reconstruction from incomplete frequency information is possible under the assumption that the image has a sparse representation. Afterwards, researchers proposed to reconstruct images using the wavelet transform, since it is also a good sparse representation for piecewise constant images (e.g., [14, 15, 18, 22–26]). However, images such as natural images and MR images are seldom piecewise constant but are commonly piecewise smooth. A single TV regularization or wavelet $\ell_p$ ($0 < p \le 1$) seminorm is not a good sparse representation for piecewise smooth images, thereby leading to avoidable errors on those images. For example, the iteratively reweighted methods and Bayesian hierarchical methods have been observed to behave aggressively in terms of sparsification [16].

Recently, several papers have focused specifically on compressed imaging reconstruction of natural and MR images (e.g., [6, 11–17]). In essence, however, most of them aim at a good reconstruction of edges through minimizing a combination of TV and wavelet $\ell_1$ regularization, and their reconstruction quality is anticipated to be similar in terms of SNR and visual quality. Actually, images usually exhibit morphological diversity, implying that several types of geometric structures coexist in images, for example, strong edges and oscillating textures. Hence, more careful sparseness regularization is required for faithful image reconstruction.

3. Nonconvex Compressed Sampling for Structure-Preserving Image Reconstruction

3.1. Sparseness Modeling for Nonconvex Compressed Sampling

We consider the following image CS problem:
$$y = \Phi_{\mathrm{PDFT}} u + z, \qquad (3)$$
where $u$ is the original image, $y$ is the measurement vector, $z$ is random noise or a deterministic unknown error, and $\Phi_{\mathrm{PDFT}}$ is an $M \times N$ partial discrete Fourier measurement matrix, obtained, for example, by uniformly random selection of $M$ rows from an $N \times N$ discrete Fourier transform [6]. The sampling ratio is defined as $M/N$ ($M < N$). The original strategy is the TV-based convex optimization proposed by Candès and his Caltech team, which obtains perfect reconstructions of the piecewise constant Shepp-Logan phantom and many other similar test phantoms in medical imaging [6]. As mentioned above, other researchers [11, 13, 16, 17] proposed to minimize TV plus a wavelet $\ell_1$ norm to improve the reconstruction quality of natural and MR images.
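
The sketch below simulates the measurement model (3) by keeping a random subset of the orthonormal 2-D DFT coefficients of an image. The uniform random mask and the complex noise handling are simplifying assumptions of this sketch; the experiments in Section 4 use the specific sampling pattern designed in [11].

```python
import numpy as np

def partial_fourier_measure(u, sampling_ratio, noise_std=0.01, rng=None):
    """Simulate y = Phi_PDFT u + z: keep a random subset of orthonormal 2-D DFT coefficients of u."""
    rng = np.random.default_rng() if rng is None else rng
    n = u.size
    m = int(round(sampling_ratio * n))
    idx = rng.choice(n, size=m, replace=False)        # uniformly random frequency locations (one simple mask choice)
    spectrum = np.fft.fft2(u, norm="ortho").ravel()
    z = noise_std * (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    return spectrum[idx] + z, idx
```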

Our idea originates from the fact that images exhibit morphological diversity [27, 28]: it is believed that sparseness regularization should be imposed on each morphological component of an image. In practice, strong edges and oscillating textures are of particular interest here. It is well known that TV regularization is quite a good candidate for sparseness modeling of strong edges in piecewise constant images [6, 28]. To get more out of the linear measurements, for example, the oscillating textures, a proper sparseness model should also be imposed on the texture components. To make this idea concrete, the local DCT is adopted to capture the oscillating textures and, of course, other image details. Since natural images and MR images are often rich in textures and fine details, their local DCT coefficients tend to be approximately sparse, thus satisfying the fundamental sparsity principle underlying CS. Furthermore, images are usually sampled at ratios far below the Nyquist rate in compressed imaging. Hence, it is more appropriate to recast image reconstruction as the minimization of the following variational functional:
$$J(u) = \gamma_1 \left\|\Psi_{\mathrm{LDCT}} u\right\|_0 + \gamma_2\, \mathrm{TV}(u) + \frac{1}{2}\left\|y - \Phi_{\mathrm{PDFT}} u\right\|_2^2, \qquad (4)$$
where $\mathrm{TV}(u)$ is the total variation of $u$, defined as $\mathrm{TV}(u) = \sum_{k,l} f(\nabla_{k,l} u)$ with $f(\cdot) = \|\cdot\|_2$ and $\nabla_{k,l} u = (\nabla_1 u_{k,l}, \nabla_2 u_{k,l})$; $\Psi_{\mathrm{LDCT}}$ stands for the matrix of the local DCT, whose window width is denoted by $s$; and $\gamma_1$ and $\gamma_2$ are positive regularization parameters, prescribing the importance of the solution having a small $\ell_0$ seminorm in the local DCT domain versus having a small TV seminorm in the spatial domain. However, it is not easy to solve the functional (4) efficiently, since the related $\ell_0$-problem is NP-hard.
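
To make the two regularizers in (4) concrete, the sketch below computes an isotropic discrete TV and a non-overlapping block DCT as one possible reading of $\Psi_{\mathrm{LDCT}}$; the forward-difference gradient, the boundary handling, and the non-overlapping blocks are our assumptions, since the paper does not spell out these discretization details, and the default $s = 16$ matches the value used in Section 4.2.

```python
import numpy as np
from scipy.fft import dctn   # assuming SciPy is available for the block DCT

def isotropic_tv(u):
    """Isotropic TV: sum over pixels of the Euclidean length of the forward-difference gradient."""
    dx = np.diff(u, axis=0, append=u[-1:, :])    # replicate the last row/column at the boundary
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

def local_dct(u, s=16):
    """Orthonormal 2-D DCT applied to each non-overlapping s-by-s block (one reading of Psi_LDCT)."""
    h, w = u.shape
    coeffs = np.empty((h, w))
    for i in range(0, h, s):
        for j in range(0, w, s):
            coeffs[i:i + s, j:j + s] = dctn(u[i:i + s, j:j + s], norm="ortho")
    return coeffs
```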

To solve (4), our strategy differs from the $\ell_p$ ($0 < p < 1$) relaxations, which are usually solved by iteratively reweighted least squares [14, 18, 23, 29]. In this paper, a kind of approximate $\ell_0$ seminorms is proposed. Suppose $x \in \mathbb{R}^N$ is a sparse signal with sparsity $K$; then the $\ell_0$ seminorm of $x$ can be approximated by
$$\mathrm{appEll\text{-}0}(x; p) = \lim_{\varepsilon \to 0^{+}} \sum_{i=1}^{N} \frac{|x_i|^p}{|x_i|^p + \varepsilon}, \qquad (5)$$
where $1 \le p \le 2$. In practice, however, $\varepsilon$ acts as a regularization parameter, resulting in a practicable approximate $\ell_0$ seminorm, that is, $\mathrm{appEll\text{-}0}(x; p, \varepsilon)$. Then, formula (4) can be relaxed as follows:
$$J(u; p, \varepsilon) = \gamma_1\, \mathrm{appEll\text{-}0}\!\left(\Psi_{\mathrm{LDCT}} u; p, \varepsilon\right) + \gamma_2\, \mathrm{TV}(u) + \frac{1}{2}\left\|y - \Phi_{\mathrm{PDFT}} u\right\|_2^2. \qquad (6)$$
It is demonstrated in the following sections that a larger $p$ makes formula (6) more efficient and robust for the reconstruction of texture-rich natural images and ordinary MR images, particularly when the images are highly undersampled.
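
A minimal sketch of the practicable approximate $\ell_0$ seminorm with finite $\varepsilon$; the default values $p = 2$ and $\varepsilon = 0.1$ follow the settings later reported in Section 4.1. For a vector with $K$ large entries and the rest near zero, the sum approaches $K$ as $\varepsilon \to 0^{+}$, which is exactly the $\ell_0$ count.

```python
import numpy as np

def app_ell0(x, p=2.0, eps=0.1):
    """Approximate l0 seminorm (5): sum_i |x_i|^p / (|x_i|^p + eps)."""
    ax = np.abs(np.asarray(x, dtype=float)) ** p
    return float(np.sum(ax / (ax + eps)))
```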

3.2. Implementation Using Primal-Dual and Operator-Splitting Methods

We now come to the numerical implementation of formula (6). To ease the computation, rewrite formula (6) in the equivalent form
$$J(x; p, \varepsilon) = \gamma_1\, \mathrm{appEll\text{-}0}(x; p, \varepsilon) + \gamma_2\, \mathrm{TV}\!\left(\Psi^{-1}_{\mathrm{LDCT}} x\right) + \frac{1}{2}\|y - \Theta x\|_2^2, \qquad (7)$$
where $\Theta = \Phi_{\mathrm{PDFT}} \Psi^{-1}_{\mathrm{LDCT}}$, $\Psi^{-1}_{\mathrm{LDCT}}$ represents the inverse local DCT, and $u = \Psi^{-1}_{\mathrm{LDCT}} x$. Nevertheless, it is difficult to solve formula (7) directly, since $\mathrm{appEll\text{-}0}(x; p, \varepsilon)$ is nonconvex and $\mathrm{TV}(\Psi^{-1}_{\mathrm{LDCT}} x)$ is nonsmooth. Thanks to the half-quadratic regularization method [30], the minimization of (7) can be translated into a computationally more tractable expression, that is, $x^{(k+1)} = \arg\min_x \{J(x; b^{(k)}, p, \varepsilon)\}$, where
$$J\!\left(x; b^{(k)}, p, \varepsilon\right) = \gamma_1 \sum_{i=1}^{N} b_i^{(k)} |x_i|^2 + \gamma_2\, \mathrm{TV}\!\left(\Psi^{-1}_{\mathrm{LDCT}} x\right) + \frac{1}{2}\|y - \Theta x\|_2^2, \qquad (8)$$
$$b_i^{(k)} = \frac{\varepsilon p}{2} \cdot \frac{\left|x_i^{(k)}\right|^{p-2}}{\left(\left|x_i^{(k)}\right|^p + \varepsilon\right)^{2}}. \qquad (9)$$
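
The half-quadratic weights (9) are simple elementwise computations; the sketch below evaluates them, with a small floor on $|x_i|$ added as a numerical guard (an implementation choice of ours, needed only when $p < 2$).

```python
import numpy as np

def hq_weights(x, p=2.0, eps=0.1):
    """Half-quadratic weights (9): b_i = (eps*p/2) * |x_i|^(p-2) / (|x_i|^p + eps)^2."""
    ax = np.maximum(np.abs(np.asarray(x, dtype=float)), 1e-12)   # guard needed only when p < 2
    return 0.5 * eps * p * ax ** (p - 2.0) / (ax ** p + eps) ** 2
```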

Given the current estimate $b^{(k)}$, the remaining problem is to solve formula (8). Borrowing the idea of primal-dual methods [11, 31], $x$ is the optimal solution of (8) if and only if there exists an auxiliary variable $\xi$ such that
$$\nabla_{k,l}\!\left(\Psi^{-1}_{\mathrm{LDCT}} x\right) \in \partial f^{*}\!\left(\xi_{k,l}\right), \qquad (10)$$
$$\mathbf{0} \in \partial J\!\left(x; b^{(k)}, \xi, p, \varepsilon\right), \qquad (11)$$
where $\xi = (\xi_{k,l})$, $\xi_{k,l} \in \mathbb{R}^2$, $f^{*}$ is the convex conjugate of $f$, and the subdifferential $\partial J(x; b^{(k)}, \xi, p, \varepsilon)$ is of the form $\gamma_1 \partial_x\!\left(x^{T} W^{(k)} x\right) + \gamma_2 \Psi_{\mathrm{LDCT}} \sum_{k,l} \nabla^{*}_{k,l}(\xi_{k,l}) + \partial_x\!\left(\tfrac{1}{2}\|y - \Theta x\|_2^2\right)$, where $\nabla^{*}_{k,l}$ is the adjoint operator of $\nabla_{k,l}$ and $W^{(k)}$ is a diagonal matrix with the elements of $b^{(k)}$ on the diagonal and zeros elsewhere. In the following, we apply the operator-splitting method to (10) and (11). The splitting expression of (10) is given as
$$\mathbf{0} \in \kappa\, \partial f^{*}\!\left(\xi_{k,l}\right) + \xi_{k,l} - \zeta_{k,l}, \qquad (12)$$
$$\zeta_{k,l} = \xi_{k,l} + \kappa\, \nabla_{k,l}\!\left(\Psi^{-1}_{\mathrm{LDCT}} x\right), \qquad (13)$$
and that of (11) corresponds to
$$\mathbf{0} \in \tau\gamma_1\, \partial_x\!\left(x^{T} W^{(k)} x\right) + x - \chi, \qquad (14)$$
$$\chi = x - \tau\left(\gamma_2 \Psi_{\mathrm{LDCT}} \sum_{k,l} \nabla^{*}_{k,l}\!\left(\xi_{k,l}\right) + \partial_x\!\left(\frac{1}{2}\|y - \Theta x\|_2^2\right)\right), \qquad (15)$$
where $\kappa$ and $\tau$ are positive auxiliary scalars. Once again, (12) and (14) can be handled by the primal-dual method, yielding
$$\min_{\xi}\; \frac{\kappa}{2}\left\|\xi_{k,l}\right\|_2^2 + \left\|\xi_{k,l} - \zeta_{k,l}\right\|_2^2, \qquad (16)$$
$$\min_{x}\; \tau\gamma_1\, x^{T} W^{(k)} x + \frac{1}{2}\|x - \chi\|_2^2. \qquad (17)$$
It can be proved [11] that both (16) and (17) have closed-form solutions yielding $\xi = (\xi_{k,l})$ and $x$, respectively:
$$\xi_{k,l} = \min\!\left(\frac{1}{\kappa}, \left\|\zeta_{k,l}\right\|_2\right) \frac{\zeta_{k,l}}{\left\|\zeta_{k,l}\right\|_2}, \qquad (18)$$
$$x = \left(I + \tau\gamma_1 W^{(k)}\right)^{-1} \chi. \qquad (19)$$
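
The closed-form updates (18) and (19) reduce to elementwise operations. The sketch below applies them to an array of dual 2-vectors $\zeta$ (last axis of length 2) and to the coefficient estimate; it is an illustrative reading of the two formulas, not the authors' implementation.

```python
import numpy as np

def update_dual(zeta, kappa):
    """Dual update (18): rescale each 2-vector zeta_{k,l} so its length is at most 1/kappa."""
    norm = np.maximum(np.linalg.norm(zeta, axis=-1, keepdims=True), 1e-12)
    return np.minimum(1.0 / kappa, norm) * zeta / norm

def update_primal(chi, b, tau, gamma1):
    """Primal update (19): x = (I + tau*gamma1*W)^(-1) chi with W = diag(b), applied elementwise."""
    return chi / (1.0 + tau * gamma1 * b)
```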

From the above discussion, the iterative reconstruction scheme for formula (6) can be described by formulas (9), (13), (15), (18), and (19). When initial estimates $\xi^{(0)}, x^{(0)}$ of $\xi$ and $x$ are given, $b^{(0)}$ can be estimated using (9), $\zeta^{(1)}, \chi^{(1)}$ can be estimated using (13) and (15), and $\xi^{(0)}, x^{(0)}$ can then be updated using (18) and (19), yielding $\xi^{(1)}, x^{(1)}$. For simplicity, the algorithm is called HQRCSparET (half-quadratic regularized CS based on sparseness modeling of edges and textures) and is specified in detail as follows. Notice that HQRCSparET incorporates the continuation idea of FPC [10] and hence is an iterate-then-continue scheme.

Algorithm (HQRCSparET)
Input
An image $u$, a measurement matrix $\Phi_{\mathrm{PDFT}}$, initial points $\xi^{(0)}, x^{(0)}$, regularization parameters $\varepsilon, \gamma_1$, and $\gamma_2$, auxiliary scalars $\kappa$ and $\tau$, the maximum number of iterations (MIT), the initial iteration index $k = 0$, the window width $s$, and a continuation factor $\mu$.
Output
Reconstructed image 𝑢.
(1) Initialize
$\gamma_1^{(1)} = \max\{\mu\|\Theta^{T} y\|,\ 2(\varepsilon p)^{-1}\gamma_1\}$, $\gamma_2^{(1)} = \gamma_1^{(1)}\gamma_2/\gamma_1$, $k = 1$.
(2) Iterate
Estimate $b^{(k-1)}$ using (9), $\zeta^{(k)}, \chi^{(k)}$ using formulas (13) and (15), and $\xi^{(k)}, x^{(k)}$ using formulas (18) and (19). If the acceptance tests on $x^{(k)}, \xi^{(k)}$ (i.e., their relative errors fall below $1\mathrm{e}{-4}$) are not passed, set $k = k + 1$ and repeat the iteration.
(3) Continuation
If the acceptance tests on $x^{(k)}, \xi^{(k)}$ are passed but the stopping criterion (i.e., $\gamma_1^{(k)} = \gamma_1$) does not hold, reduce $\gamma_1^{(k)}$ and $\gamma_2^{(k)}$ by the factor $\mu$, and go to step (2).
(4) Check
If the stopping criterion holds, terminate with $x = x^{(k)}$ and output $u = \Psi^{-1}_{\mathrm{LDCT}} x$.
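
Putting the pieces together, the following is a minimal sketch of the HQRCSparET loop with continuation. The operator arguments (`A`/`At` for $\Theta$ and its adjoint, `grad`/`grad_adj` for the discrete gradient pair, `ldct`/`ildct` for the local DCT pair) are assumed interfaces; the max-magnitude norm in the continuation start and the simplified acceptance test on $x$ alone are our reading of the initialization and of step (2); the default parameters mirror Section 4.1. This is not the authors' MATLAB code.

```python
import numpy as np

def hqrcsparet(y, A, At, grad, grad_adj, ldct, ildct,
               gamma1=0.35e-3, gamma2=1e-3, eps=0.1, p=2.0,
               kappa=1.6, tau=1.6, mu=2.0 / 3.0, mit=100, tol=1e-4):
    """Sketch of the HQRCSparET iterate-then-continue scheme, built from (9), (13), (15), (18), (19).

    A/At: the operator Theta = Phi_PDFT Psi_LDCT^{-1} and its adjoint (At is assumed to return a
    real coefficient array, e.g., by taking the real part); grad/grad_adj: the discrete gradient
    (last axis of length 2) and its adjoint; ldct/ildct: forward and inverse local DCT.
    """
    x = np.zeros(np.shape(At(y)))                  # local DCT coefficients of the image estimate
    xi = np.zeros(np.shape(grad(ildct(x))))        # dual variable: one 2-vector per pixel
    # Continuation: start from a large weight and shrink it towards gamma1 (cf. FPC [10]).
    g1 = max(mu * float(np.max(np.abs(At(y)))), 2.0 * gamma1 / (eps * p))
    g2 = g1 * gamma2 / gamma1
    while True:
        for _ in range(mit):
            ax = np.maximum(np.abs(x), 1e-12)
            b = 0.5 * eps * p * ax ** (p - 2.0) / (ax ** p + eps) ** 2       # weights (9)
            zeta = xi + kappa * grad(ildct(x))                               # (13)
            chi = x - tau * (g2 * ldct(grad_adj(xi)) + At(A(x) - y))         # (15)
            norm = np.maximum(np.linalg.norm(zeta, axis=-1, keepdims=True), 1e-12)
            xi = np.minimum(1.0 / kappa, norm) * zeta / norm                 # dual update (18)
            x_new = chi / (1.0 + tau * g1 * b)                               # primal update (19)
            rel = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
            x = x_new
            if rel < tol:                          # acceptance test (checked on x only, for brevity)
                break
        if g1 <= gamma1:                           # stopping criterion: continuation reached gamma1
            break
        g1 = max(mu * g1, gamma1)                  # reduce the regularization weights by the factor mu
        g2 = g1 * gamma2 / gamma1
    return ildct(x)                                # reconstructed image u = Psi_LDCT^{-1} x
```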

4. Numerical Examples

4.1. General Description

In this section, compressed imaging reconstruction is applied to both natural and MR images using the proposed approach and several other recent algorithms. In each test, the observed data $y$ is synthesized as $y = \Phi_{\mathrm{PDFT}} u + z$, where $u$ is the original image, $\Phi_{\mathrm{PDFT}}$ is a partial DFT measurement matrix as designed in [11], and $z$ is Gaussian white noise with zero mean and standard deviation 0.01. Note that many other measurement matrices [6, 16, 32] are also applicable within our algorithmic framework. All experiments are performed in MATLAB v7.0 running on a Toshiba laptop with an Intel Core Duo at 2 GHz and 2 GB of memory. On average, our unoptimized implementation processes a 256 × 256 test image in about 4-5 minutes.

In the proposed HQRCSparET, $\xi^{(0)}$ and $x^{(0)}$ are simply initialized to zero, though the algorithm is not sensitive to the starting points; the parameter $\gamma_1$ is set to $0.35\mathrm{e}{-3}$ and $\gamma_2$ to $1\mathrm{e}{-3}$; the scalars $\kappa$ and $\tau$ are both set to 1.6; MIT is set to 100; $\varepsilon$ is set to 0.1, $\mu$ is set to 2/3, and $p$ is set to 2 unless specified otherwise. The choice of $s$ is clarified in each specific experiment. For a fair comparison, all tuning parameters of the other methods are adjusted manually to achieve their best reconstruction quality. The signal-to-noise ratio (SNR) and relative error (Rel. Err.) are used to evaluate the performance of the different methods.
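
For reference, the two evaluation measures can be computed as follows; the paper does not state its exact SNR formula, so the signal-to-error definition below is an assumption.

```python
import numpy as np

def snr_db(u_true, u_rec):
    """Signal-to-error ratio in dB (one common SNR definition for image reconstructions)."""
    err = np.linalg.norm(u_true - u_rec)
    return 20.0 * np.log10(np.linalg.norm(u_true) / max(err, 1e-12))

def relative_error(u_true, u_rec):
    """Relative error ||u_rec - u_true||_2 / ||u_true||_2."""
    return float(np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))
```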

4.2. Performance on Natural Image Reconstruction

In Figure 1, six natural images are shown that are used to test the performance of the proposed HQRCSparET, FPC [10], and TVCMRI [11]. The images are all of size 256 × 256. Observe that there is a large amount of texture and fine detail in the images, especially in the latter three. Partial Fourier data are obtained from each natural image according to the observation model in Section 4.1, and the original image $u$ is reconstructed using FPC, TVCMRI, and HQRCSparET, respectively.

fig1
Figure 1: Test natural images of size 256×256.

Table 1 shows the experimental results on the six images at a sampling ratio (SR) of 38.56%, reporting SNR, relative error, and the number of iterations. Here, $s$ is chosen as 16, and $p$ is set to 1 for the moment. From Table 1, it is clearly observed that the SNR values of the images reconstructed by HQRCSparET are higher than those of FPC and TVCMRI; in terms of relative errors, HQRCSparET also performs better than FPC and TVCMRI. A larger $p$, however, makes CS strategy (6) more efficient for reconstructing the texture-rich images. Let $p$ equal 2 and apply HQRCSparET to Figures 1(d)-1(f). The experimental results on these three texture-rich images, shown in Table 2, demonstrate the efficiency of formula (6) even more clearly. Take Figure 1(e), for example: HQRCSparET improves SNR by about 6 dB over TVCMRI and by about 11 dB over FPC. However, it is also observed that more iterations are required, implying that HQRCSparET converges more slowly as $p$ becomes larger.

tab1
Table 1: Results on natural images in Figure 1 by three methods (SR = 38.56%, 𝑠=16, 𝑝=1).
tab2
Table 2: Experiment results on texture-rich natural images in Figure 1 by three methods (SR = 38.56%, 𝑠=16, 𝑝=2).

HQRCSparET also performs more robustly than FPC and TVCMRI as the sampling ratio becomes lower. Table 3 shows the experimental results at an SR of 21.59%, obtained by FPC, TVCMRI, and HQRCSparET, respectively. Here, $s$ is chosen as 8. The reconstructions of Figures 1(d) and 1(f) are shown in Figures 2 and 3, respectively, demonstrating that HQRCSparET is capable of structure-preserving image reconstruction. Take Figure 1(d), for example. On the one hand, the TV term in formula (6) is able to remove the obvious ringing artifacts that occur in Figure 2(a); on the other hand, the approximate $\ell_0$ term in formula (6) is able to simultaneously preserve the textures and remove the slight ringing artifacts that occur in Figure 2(b). That is why HQRCSparET yields higher SNR values and smaller relative errors than FPC and TVCMRI in Tables 2 and 3. Similar results are also obtained at an SR of 12.91%. Take the texture-rich image in Figure 1(f), for instance: the SNR achieved by HQRCSparET is 15.6560 dB, which is higher than the 12.9269 dB of FPC and the 13.8814 dB of TVCMRI. Moreover, it should be mentioned that the parameters of FPC and TVCMRI have to be adjusted accordingly to achieve their best reconstruction quality, whereas those of HQRCSparET are kept the same as in Section 4.1.

tab3
Table 3: Results on natural images in Figure 1 by three methods (SR = 21.59%, 𝑠=8, 𝑝=2).
fig2
Figure 2: Reconstruction of the natural image in Figure 1(d) from partial Fourier data (SR = 21.59%) by FPC [10], TVCMRI [11], and HQRCSparET, respectively. SNR and relative error values are listed in Table 3.
fig3
Figure 3: Reconstruction of the natural image in Figure 1(f) from partial Fourier data (SR = 21.59%) by FPC [10], TVCMRI [11], and HQRCSparET, respectively. SNR and relative error values are listed in Table 3.
4.3. Applications to Compressed MR Imaging

In the literature, several papers have focused on the reconstruction of MR images using TV and possibly additional wavelet $\ell_1$ constraints (e.g., [6, 11, 13, 16, 17]). In this subsection, the proposed HQRCSparET is applied to compressed MRI and compared with Candès's TV-constrained CS (CTVCS) [6] and TVCMRI [11].

Six MR images, shown in Figure 4, are used to test the performance of the above three methods. All the images are of size 256 × 256. Notice that ordinary MR images are rich in textures and fine details and also have rather low contrast. From the experience with natural images in Section 4.2, HQRCSparET is expected to perform much better than CTVCS and TVCMRI. Partial Fourier data are sampled from each MR image using the observation model in Section 4.1. Table 4 shows the experimental results on the six MR images at a sampling ratio of 38.56%, using CTVCS, TVCMRI, and HQRCSparET, respectively; the reconstructed MR images for Figures 4(d), 4(e), and 4(f) are shown in Figure 5. In this subsection, $s$ is chosen as 8. From the SNR and relative error values in Table 4, it is concluded that HQRCSparET performs considerably better than CTVCS and TVCMRI, just as in the case of texture-rich natural image reconstruction. From Figure 5, it is observed that HQRCSparET gives a faithful reconstruction at the relatively high SR of 38.56%, with no obvious visual difference from the other two reconstructions.

tab4
Table 4: Results on MR images in Figure 4 by three methods (SR = 38.56%, 𝑠=8, 𝑝=2).
fig4
Figure 4: Test MR images of size 256 × 256.
fig5
Figure 5: MRI reconstructions of Figures 4(d), 4(e), and 4(f) from partial Fourier data (SR = 38.56%) by CTVCS [6], TVCMRI [11], and HQRCSparET, respectively. SNR and relative error values are listed in Table 4.

In addition, Table 5 shows the experimental results on the MR images in Figure 4 at an SR of 21.59%, obtained by CTVCS, TVCMRI, and HQRCSparET, respectively, and Table 6 shows the results at an SR of 12.91%. The corresponding reconstructions of Figures 4(d), 4(e), and 4(f) are shown in Figures 6 and 7, respectively. On the one hand, Tables 5 and 6 show that HQRCSparET still performs better than CTVCS and TVCMRI. Take Figure 4(d), for instance: at an SR of 21.59%, HQRCSparET improves SNR by about 3 dB over CTVCS and by about 2.5 dB over TVCMRI; at an SR of 12.91%, the improvement is about 3 dB over CTVCS and about 2 dB over TVCMRI. On the other hand, Figures 6 and 7 show that HQRCSparET still yields high-quality reconstructions when the images are highly undersampled. As for CTVCS, however, staircase artifacts become more and more pronounced as the sampling ratio is reduced; as for TVCMRI, obvious staircase artifacts also appear in the reconstructed MR images, and the image contrast becomes lower as the sampling ratio is reduced.

tab5
Table 5: Results on MR images in Figure 4 by three methods (SR = 21.59%, 𝑠=8, 𝑝=2).
tab6
Table 6: Results on MR images in Figure 4 by three methods (SR = 12.91%, 𝑠=8, 𝑝=2).
fig6
Figure 6: MRI reconstructions of Figures 4(d), 4(e), and 4(f) from partial Fourier data (SR = 21.59%) by CTVCS [6], TVCMRI [11], and HQRCSparET, respectively. SNR and relative error values are listed in Table 5.
fig7
Figure 7: MRI reconstructions of Figures 4(d), 4(e), and 4(f) from partial Fourier data (SR = 12.91%) by CTVCS [6], TVCMRI [11], and HQRCSparET, respectively. SNR and relative error values are listed in Table 6.

5. Conclusions

In this paper, a novel CS strategy is proposed for image reconstruction based on separate sparseness modeling of strong edges and oscillating textures. Numerous experiments demonstrate that TV and possibly additional wavelet $\ell_1$ regularization are not enough to reconstruct faithful images. Unlike existing methods, the proposed approach is capable of structure-preserving image reconstruction. It is particularly efficient and robust for the reconstruction of texture-rich natural images and ordinary MR images, even when the images are highly undersampled, because it exploits a kind of approximate $\ell_0$ seminorms. However, our approach consumes more CPU time per iteration than TVCMRI, due to the superlinear computational complexity of the local DCT. Therefore, an ongoing research direction is to find computationally efficient sparse representations for modeling oscillating textures and fine details.

Acknowledgments

The authors would like to thank Professor Yizhong Ma for his kind and constructive suggestions on improving this paper. H. Deng’s work is supported by the Talent Introduction Project (NSRC11009) of Nanjing Audit University, and Z. Wei’s work is supported in part by the National High Technology Research and Development Plan of China under Grant 2007AA12E100 and by the Natural Science Foundation of China under Grants 60802039 and 60672074.

References

  1. A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 36–58, 2001.
  2. T. Acharya and P. S. Tsai, JPEG 2000 Standard for Image Compression, John Wiley & Sons, Hoboken, NJ, USA, 2005.
  3. A. K. Katsaggelos, R. Molina, and J. Mateos, Super-Resolution of Images and Videos, Morgan & Claypool, 2007.
  4. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21–36, 2003.
  5. E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
  6. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
  7. E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  8. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
  9. E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Problems, vol. 23, no. 3, pp. 969–985, 2007.
  10. E. Hale, W. Yin, and Y. Zhang, “A fixed-point continuation method for L1-regularized minimization with applications to compressed sensing,” Tech. Rep., Rice University, 2007.
  11. S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, “An efficient algorithm for compressed MR imaging using total variation and wavelets,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
  12. K. T. Block, M. Uecker, and J. Frahm, “Undersampled radial MRI with multiple coils: iterative image reconstruction using a total variation constraint,” Magnetic Resonance in Medicine, vol. 57, no. 6, pp. 1086–1098, 2007.
  13. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
  14. J. C. Ye, S. Tak, Y. Han, and H. W. Park, “Projection reconstruction MR imaging using FOCUSS,” Magnetic Resonance in Medicine, vol. 57, no. 4, pp. 764–775, 2007.
  15. H. Jung, J. C. Ye, and E. Kim, “Improved k-t BLAST and k-t SENSE using FOCUSS,” Physics in Medicine and Biology, vol. 52, no. 11, pp. 3201–3226, 2007.
  16. M. Seeger and H. Nickisch, “Compressed sensing and Bayesian experimental design,” in Proceedings of the 25th International Conference on Machine Learning, pp. 912–919, July 2008.
  17. J. Yang, Y. Zhang, and W. Yin, “A fast TVL1-L2 minimization algorithm for signal reconstruction from partial Fourier data,” Tech. Rep., Rice University, 2009.
  18. R. Chartrand, “Exact reconstruction of sparse signals via nonconvex minimization,” IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
  19. E. J. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589–592, 2008.
  20. S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “A method for large-scale L1-regularized least squares problems with applications in signal processing and statistics,” Tech. Rep., Stanford University, 2007.
  21. I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
  22. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal on Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
  23. I. Daubechies, R. DeVore, M. Fornasier, and S. Güntürk, “Iteratively reweighted least squares minimization: proof of faster than linear rate for sparse recovery,” in Proceedings of the 42nd Annual Conference on Information Sciences and Systems, pp. 26–29, 2008.
  24. E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted $\ell_1$ minimization,” Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
  25. S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346–2356, 2008.
  26. S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Bayesian compressive sensing using Laplace priors,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 53–63, 2010.
  27. J. L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1570–1582, 2005.
  28. Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, vol. 22 of University Lecture Series, American Mathematical Society, 2001.
  29. R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872, April 2008.
  30. G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, Springer, New York, NY, USA, 2000.
  31. E. Esser, X. Zhang, and T. Chan, “A general framework for a class of first order primal-dual algorithms for TV minimization,” Tech. Rep., UCLA, 2009.
  32. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.