Mathematical Problems in Engineering

Volume 2018 (2018), Article ID 7171352, 15 pages

https://doi.org/10.1155/2018/7171352

## Content-Aware Compressive Sensing Recovery Using Laplacian Scale Mixture Priors and Side Information

^{1}School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China
^{2}School of Electronic and Information Engineering, Nanjing University of Information Science & Technology, Nanjing 210044, China

Correspondence should be addressed to Lihong Ma; eelhma@scut.edu.cn

Received 10 August 2017; Revised 3 November 2017; Accepted 20 November 2017; Published 29 January 2018

Academic Editor: Raffaele Solimene

Copyright © 2018 Zhonghua Xie et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Nonlocal methods have shown great potential in many image restoration tasks, including compressive sensing (CS) reconstruction, through use of the image self-similarity prior. However, they are still limited in recovering fine-scale details and sharp features when rich repetitive patterns cannot be guaranteed or the CS measurements are corrupted. In this paper, we propose a novel CS recovery algorithm that combines nonlocal sparsity with local and global priors, which soften and complement the self-similarity assumption for irregular structures. First, a Laplacian scale mixture (LSM) prior is utilized to model dependencies among similar patches. To achieve group sparsity, each singular value of a group of packed similar patches is modeled as a Laplacian distribution with a variable scale parameter. Second, a global prior and a compensation-based sparsity prior on local patches are designed to maintain the differences between packed patches. The former refers to a prediction that integrates the information from the independent processing stage and is used as side information, while the latter enforces a small (i.e., sparse) prediction error and is also modeled with the LSM model so as to obtain local sparsity. Afterward, we derive an efficient algorithm based on the expectation-maximization (EM) and approximate message passing (AMP) framework for the maximum a posteriori (MAP) estimation of the sparse coefficients. Numerical experiments show that the proposed method outperforms many existing CS recovery algorithms.

#### 1. Introduction

Compressive sensing (CS) [1, 2] allows us to reconstruct high-dimensional data from only a small number of random samples or measurements, provided that the original signal can be sparsely represented in some appropriate basis. Because image prior knowledge plays a critical role in the performance of compressive sensing reconstruction, much effort has been devoted to developing effective regularization terms or signal models that reflect such prior knowledge. Standard CS methods exploit the sparsity of the signal in some domain, such as DCT [3], wavelets [4, 5], total variation (TV) [6, 7], and learned dictionaries [8, 9]. Unfortunately, these methods are less appropriate for many imaging applications, because natural images do not have an exactly sparse representation in any of the above bases. These models favor piecewise constant image structures and hence tend to oversmooth image details.

More recently, the concept of sparsity has evolved into various sophisticated forms, including group sparsity [10, 11], tree sparsity [12–14], and nonlocal sparsity [15–19], where higher-order dependency among sparse coefficients is exploited. Among them, nonlocal sparsity, which refers to the fact that a patch often has many similar nonlocal patches elsewhere in the image, has been shown to be the most beneficial to CS image recovery. In [15], a nonlocal total variation (NLTV) regularization model for CS image recovery is proposed by using the self-similarity property in the gradient domain. In order to obtain an adaptive sparsity regularization term for the CS image recovery process, a local piecewise autoregressive model is designed in [16]. In [17], similar patches are grouped to form a two-dimensional data matrix that exhibits the low-rank property, leading to a CS recovery method via nonlocal low-rank regularization (NLR-CS). In [18, 19], a probabilistic graphical model is established, which uses collaborative filtering [20] to promote sparsity of the packed patches. Despite the steady progress of nonlocal methods, they still tend to smooth detailed image textures and degrade visual quality, because a lack of self-repetitive structures and corruption of the data are unavoidable.

To deal with this issue, local and global priors are designed to soften and complement the nonlocal sparsity for irregular structures so as to preserve image details. More specifically, the nonlocal sparsity is imposed only on a set of patches with limited influence from neighboring pixels. The global prior refers to a prediction, used as a reference, which integrates the outcomes of the independent processing stage and maintains the overall consistency of the image; in addition, a compensation-based constraint term on local patches enforces a small (i.e., sparse) prediction error. Both local sparsity and nonlocal sparsity are represented by Laplacian scale mixture (LSM) [21, 22] models, which force the coefficients, that is, the singular values of local patches and of packed similar patches, to be sparse. Each coefficient is modeled as a Laplacian distribution with a variable scale parameter, resulting in weighted singular value minimization problems in which the weights are adaptively assigned according to the signal-to-noise ratio. On the other hand, the reference image can be used as side information. Finally, we obtain a side information-aided LSM prior model for CS image reconstruction. To solve this model, the expectation-maximization (EM) [23] method is adopted, turning the CS recovery problem into a prior parameter estimation problem and a singular value minimization problem. In particular, owing to its promising performance and efficiency, we apply the approximate message passing (AMP) algorithm [24, 25], an iterative algorithm that reconstructs signals and images by performing denoising at each iteration, to solve the latter. Experimental results on natural images show that our approach achieves more accurate reconstruction than other competing approaches.
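The AMP iteration mentioned above can be illustrated with a minimal sketch. This is not the paper's LSM-based method: it substitutes a plain soft-threshold denoiser for the LSM denoising step, and the threshold rule `lam * sigma`, the function names, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_recover(y, A, n_iter=50, lam=2.0):
    """Basic AMP for y = A x: denoise at each iteration, with an Onsager
    correction term that keeps the effective noise approximately Gaussian."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                          # pseudo-data (noisy estimate of x)
        sigma = np.linalg.norm(z) / np.sqrt(M)   # effective noise-level estimate
        x_new = soft(r, lam * sigma)             # denoising step
        # Onsager correction: (N/M) * z * average derivative of the denoiser
        eta_prime = np.mean(np.abs(x_new) > 0)
        z = y - A @ x_new + (N / M) * eta_prime * z
        x = x_new
    return x
```

In the proposed method, the soft-threshold step would be replaced by the MAP denoiser derived from the LSM prior, while the measurement-update and correction structure stays the same.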

#### 2. Background

##### 2.1. CS Recovery Problem

The CS recovery problem aims to find the sparsest solution of the underdetermined linear system $y = \Phi x + n$, where $y \in \mathbb{R}^M$ is the measurement vector, $x \in \mathbb{R}^N$ is the original signal with $M \ll N$, $\Phi \in \mathbb{R}^{M \times N}$ is the measurement matrix, and $n$ denotes the additive noise. One can solve the following objective function:

$$\min_{x} \; \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda R(x).$$

The first term is the data fidelity term, which represents the closeness of the solution to the measurements. The second term, $R(x)$, is a regularization term that represents a priori sparse information about the original signal. $\lambda$ is a regularization parameter that balances the contribution of both terms. As mentioned in the Introduction, CS recovery methods exploit the sparsity of the signal in some domain, such as DCT [3], wavelets [4, 5], learned dictionaries [8, 9], and total variation (TV) [6, 7], leading to various forms of the regularizer, for example, $R(x) = \|\Psi x\|_1$ and $R(x) = \mathrm{TV}(x)$, where $\Psi$ denotes the corresponding sparsifying transform.
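For the $\ell_1$-regularized form of this objective, a standard solver is iterative soft-thresholding (ISTA), which alternates a gradient step on the fidelity term with a shrinkage step on the regularizer. The sketch below is a minimal illustration with $R(x) = \|x\|_1$ (identity sparsifying basis); the function name and parameter defaults are assumptions, not part of the paper.

```python
import numpy as np

def ista(y, Phi, lam=0.01, n_iter=500):
    """ISTA for min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ x - y)            # gradient of the fidelity term
        u = x - g / L                        # gradient descent step
        # shrinkage: proximal operator of (lam/L)*||.||_1
        x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
    return x
```

Swapping the shrinkage step for a different proximal operator yields solvers for the other regularizers (e.g., TV) mentioned above.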

##### 2.2. Nonlocal Sparsity

The abundance of self-repeating patterns in natural images (as shown in Figure 1) can be characterized by nonlocal sparsity [16, 17]. As shown in Figure 2, for each local patch we can find the most similar nonlocal patches to it. In practice, this is done by Euclidean-distance-based block matching within a sufficiently large local window. Let $x_i \in \mathbb{R}^n$ denote an exemplar patch located at the $i$th position. The patches similar to $x_i$, including $x_i$ itself, are collected to form a matrix $X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,m}] \in \mathbb{R}^{n \times m}$. Here, we suppose that $X_i$ is a low-rank matrix. An objective function that reflects the group sparsity of similar patches with a low-rank regularization term for CS recovery can be formulated as follows:

$$\min_{x} \; \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda \sum_{i=1}^{L} \left\| X_i \right\|_*,$$

where $L$ is the total number of similar patch groups and $\|X_i\|_*$ is the nuclear norm of $X_i$, that is, the sum of its singular values: $\|X_i\|_* = \sum_j \sigma_{i,j} = \|\sigma_i\|_1$. Here $\sigma_i$ denotes the singular value vector of $X_i$; that is, $X_i = U_i \Sigma_i V_i^T$, and $\sigma_i$ is the vector that contains the diagonal elements of $\Sigma_i$.
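The two building blocks of this formulation, Euclidean-distance block matching and the proximal step of the nuclear norm (singular value thresholding), can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function names and the patch/window/group sizes are assumptions.

```python
import numpy as np

def block_match(img, i, j, patch=8, window=20, m=16):
    """Collect the m patches most similar (Euclidean distance) to the
    reference patch at (i, j), searched within a local window, and stack
    them as columns of an n x m matrix X_i."""
    H, W = img.shape
    ref = img[i:i + patch, j:j + patch].ravel()
    cands = []
    for r in range(max(0, i - window), min(H - patch, i + window) + 1):
        for c in range(max(0, j - window), min(W - patch, j + window) + 1):
            p = img[r:r + patch, c:c + patch].ravel()
            cands.append((np.sum((p - ref) ** 2), p))
    cands.sort(key=lambda t: t[0])               # smallest distance first
    return np.stack([p for _, p in cands[:m]], axis=1)

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau*||X||_*,
    i.e., soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

An iterative low-rank CS solver such as NLR-CS alternates steps of this kind (group patches, threshold singular values) with a data-fidelity update against the measurements.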