Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 3567095, 11 pages

http://dx.doi.org/10.1155/2016/3567095

## $\ell_1$- and $\ell_2$-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

Southwest Jiaotong University, Chengdu, Sichuan 610031, China

Received 10 May 2016; Revised 7 July 2016; Accepted 10 July 2016

Academic Editor: Nazrul Islam

Copyright © 2016 Chanzi Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Many problems in signal processing and statistical inference involve finding a sparse solution to an underdetermined linear system of equations. This is also the setting of compressive sensing (CS), which can find the sparse solution from a number of measurements far smaller than the length of the original signal. In this paper, we propose an $\ell_1$- and $\ell_2$-norm joint regularization based reconstruction framework to approach the original $\ell_0$-norm based sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing the simple conjugate gradient algorithm, the new formulation provides an effective framework to deduce the solution of the original sparse signal reconstruction problem with the $\ell_0$-norm regularization term. Secondly, the upper reconstruction error limit is presented for the proposed sparse signal reconstruction framework, and it is unveiled that in most cases a smaller reconstruction error than that of $\ell_1$-norm relaxation approaches can be realized by the proposed scheme. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.

#### 1. Introduction

Compressive sensing or compressive sampling (CS) [1–3] is a novel technique that enables efficient sampling below the Nyquist rate with little or no sacrifice in reconstruction quality. CS converts a high-dimensional sparse signal into a significantly lower-dimensional measurement signal. More precisely, let $x \in \mathbb{R}^N$ be a sparse vector, let $A \in \mathbb{R}^{M \times N}$ ($M \ll N$) be a measurement matrix, and suppose the noisy observation vector $y \in \mathbb{R}^M$ is given by

$$y = Ax + n, \tag{1}$$

where $n$ is the noise vector. Here sparsity means that $\|x\|_0 = K$ and $K \ll N$; the $\ell_0$-norm $\|\cdot\|_0$ counts the number of nonzero entries of a vector. The goal is to obtain an estimate of $x$ given $y$ and $A$.
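As a concrete instance of this measurement model, the sketch below builds a $K$-sparse signal and takes noisy compressive measurements of it. The Gaussian measurement matrix, Gaussian noise, and the particular dimensions are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 64, 8                           # signal length, measurements, sparsity
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix

x = np.zeros(N)
support = rng.choice(N, K, replace=False)      # K randomly placed nonzero entries
x[support] = rng.standard_normal(K)            # so ||x||_0 = K << N

n = 0.01 * rng.standard_normal(M)              # additive noise vector
y = A @ x + n                                  # compressive measurements y = Ax + n
```

Note that $y$ has only $M = 64$ entries while $x$ has $N = 256$; recovering $x$ from $y$ is exactly the underdetermined problem discussed next.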

Determining the sparse signal $x$ from $M \ll N$ measurements is typically an underdetermined problem; obviously, there will be no unique solution without any prior knowledge or constraint imposed on the solution $x$. As the original signal is sparse, finding the desired solution can be phrased as an optimization problem whose objective is to minimize the discrepancy between the noisy observation vector $y$ and the measurement $Ax$ while enforcing the sparsity of $x$. As the sparsity of $x$ is reflected by the number of its nonzero entries, equivalently its so-called $\ell_0$-norm, in the noise-free case of (1) we can seek to solve the following problem:

$$\min_{x} \|x\|_0 \quad \text{subject to} \quad y = Ax. \tag{2}$$

Problem (2) can recover $x$ exactly if $x$ is sufficiently sparse and the matrix $A$ satisfies a requirement on the Restricted Isometry Constant (RIC) $\delta_K$, defined as the smallest constant such that $A$ satisfies the Restricted Isometry Property (RIP) of order $K$; namely,

$$(1 - \delta_K)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_K)\|x\|_2^2 \tag{3}$$

whenever $\|x\|_0 \le K$. However, the optimization problem in (2) is nondeterministic polynomial (NP) hard and difficult to solve in practice. A number of approaches have been proposed to solve (2); they can be briefly classified into the following three categories.
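To make the combinatorial nature of (2) concrete, the following illustrative sketch solves the noise-free $\ell_0$-problem by exhaustive search over candidate supports (the function name and test data are ours, purely for illustration). The number of supports grows as $\binom{N}{K}$, which is precisely why this direct approach is feasible only for tiny problems.

```python
import numpy as np
from itertools import combinations

def l0_brute_force(A, y, K, tol=1e-8):
    """Search every candidate support of size <= K and accept the first
    one whose least-squares fit reproduces y exactly (up to tol)."""
    M, N = A.shape
    if np.linalg.norm(y) <= tol:              # the zero vector already fits
        return np.zeros(N)
    for k in range(1, K + 1):                 # smallest supports first
        for support in combinations(range(N), k):
            cols = A[:, list(support)]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(cols @ coef - y) <= tol:
                x = np.zeros(N)
                x[list(support)] = coef
                return x
    return None                               # no K-sparse solution found

# usage on a tiny synthetic instance
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -0.5]
x_hat = l0_brute_force(A, A @ x_true, K=2)
```

Already for $K = 2$ and $N = 10$ this tests 55 supports; for the dimensions typical of CS the count is astronomically large, which motivates the tractable surrogates surveyed below.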

*(i) Greedy Pursuits*. Greedy algorithms attempt to determine the indices of the nonzero entries, based on the relationship between the columns of the measurement matrix and the signal (or the residual signal), and then to estimate the nonzero entry amplitudes by the least-squares method. Their objective is to pursue the minimization of the $\ell_0$-norm directly. Typical greedy approaches include Orthogonal Matching Pursuit (OMP) [4], Stagewise OMP (StOMP) [5], and Subspace Pursuit (SP) [6]. Basically, greedy algorithms can only seek an approximate solution to (2) and are sensitive to noise.
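The two-step structure just described (pick columns by correlation with the residual, then refit amplitudes by least squares) can be sketched for OMP as follows; the synthetic test data are ours, for illustration only.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily add the column most
    correlated with the residual, then refit by least squares."""
    M, N = A.shape
    residual = y.copy()
    support = []
    for _ in range(K):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
        if idx not in support:
            support.append(idx)
        # least-squares amplitudes on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # update residual
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

# usage: noiseless recovery of a 3-sparse signal
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 55]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, K=3)
```

In the noiseless case above OMP recovers the support exactly; with noise the correlation step can pick wrong columns, which is the sensitivity mentioned in the text.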

*(ii) Optimization*. This kind of approach replaces the $\ell_0$-norm minimization in (2) with the $\ell_1$- or $\ell_p$-norm ($0 < p < 1$), admitting tractable algorithms. These methods obtain the sparse solution by solving a reformulated problem as follows:

$$\min_x \|x\|_1 \quad \text{subject to} \quad y = Ax. \tag{4}$$

In the noise-free case, (4) can recover $x$ exactly as soon as the RIC of $A$ is sufficiently small. In the noisy case, the above convex relaxation leads to the following penalized least-squares problem [7, 8]:

$$\min_x \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1. \tag{5}$$

Later, it was shown in [9] that a reweighted reformulation, minimizing $\sum_i w_i |x_i|$ subject to $y = Ax$, can better approach the original $\ell_0$-norm problem in (2), where $w_i$ denotes the nonnegative weighting coefficient. The iterative reweighting scheme was further investigated in [10]. The $\ell_1$-magic algorithm in [11], the gradient projection for sparse reconstruction algorithm (GPSR) in [8], and their variants also belong to the convex relaxation category. The $\ell_p$-norm minimization problem can be recast as

$$\min_x \|x\|_p^p \quad \text{subject to} \quad y = Ax, \tag{6}$$

where $\|x\|_p^p = \sum_i |x_i|^p$ with $0 < p < 1$. Chartrand demonstrated that $K$-sparse vectors can be exactly recovered by solving (6) under a suitable RIP assumption on $A$ for some $0 < p < 1$ [12]. This kind of algorithm is capable of achieving better reconstruction quality and smaller error; however, it takes more time to solve the optimization problems.
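A standard way to solve penalized least-squares problems of the form (5) is iterative shrinkage-thresholding (ISTA); it is not one of the algorithms cited above, but it makes the mechanics of $\ell_1$ relaxation concrete. The sketch below, with illustrative synthetic data, alternates a gradient step on the quadratic term with elementwise soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage: the proximal operator of t*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=1000):
    """Iterative shrinkage-thresholding for
    min_x 0.5*||y - A@x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# usage: recover a 5-sparse signal from 60 noiseless measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = [1.0, -1.0, 2.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.02)
```

Because the soft-thresholding step sets small entries exactly to zero, the iterate stays sparse; the penalty also biases the recovered amplitudes slightly toward zero, which is one source of the reconstruction error analyzed later in the paper.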

*(iii) Bayesian Framework*. Given an *a priori* probability distribution for the unknown sparse coefficients, the Maximum A Posteriori (MAP) estimation mechanism can be employed to derive Bayesian framework based sparse signal recovery schemes [13, 14]. It is unveiled in [15] that this framework may lead to a new mixture penalty of $\ell_1$- and $\ell_2$-norm functions.

The $\ell_0$-norm sparsity-inducing term is very useful in image processing, for example, in image restoration, image denoising, and image superresolution. Since the $\ell_0$-norm is nonconvex, image processing typically replaces it with the $\ell_1$- and $\ell_2$-norm or the total variation (TV) norm and solves the resulting problems. Reference [16] studies a minimization problem whose objective includes a usual $\ell_2$-norm data-fidelity term and an overlapping group sparsity total variation regularizer, and a fast algorithm is proposed to solve it; this method avoids the staircase effect and preserves edges. Reference [17] proposes an $\ell_1$-norm fidelity term with a total variation regularizer to recover blurred images corrupted by salt-and-pepper impulse noise with higher visual quality. Reference [18] proposes an optimization model for noise and blur removal involving a generalized variation regularization term, a MAP (maximum a posteriori probability) based data-fitting term, and a quadratic penalty term based on the statistical property of the noise; this minimization problem can be solved by a primal-dual algorithm. Reference [19] studies TV regularization in deblurring and sparse unmixing of hyperspectral images. Reference [20] investigates the modified linearized Bregman algorithm (MLBA) for image deblurring problems with a proper treatment of the boundary artifacts and an $\ell_2$- plus $\ell_1$-norm penalty.

Motivated by the utility of sparsity and by the efforts to relax the $\ell_0$-norm function with tractable norm functions into either convex or nonconvex relaxation problems, in this paper we propose a novel $\ell_1$- and $\ell_2$-norm joint regularization based reconstruction framework to approach the original $\ell_0$-norm sparseness-inducing constraint based reconstruction problem. Although [21] has shown that the $\ell_1/\ell_2$ and $\ell_1 - \ell_2$ measures are theoretically better than the $\ell_1$-norm at promoting sparsity, our analysis shows that the proposed sparse signal reconstruction model can also solve the original problem. Moreover, the upper error limit of the proposed sparse signal recovery model is derived. It is unveiled that the proposed model achieves a tradeoff between $\ell_p$-norm ($p < 1$) relaxation and $\ell_1$-norm relaxation techniques: it exhibits an $\ell_0$-norm approximation capability similar to that of $\ell_p$-norm relaxation, while, like $\ell_1$-norm convex relaxation approaches, it allows us to resort to a variety of feasible optimization algorithms to derive the solution.

The remainder of this paper is organized as follows. The proposed sparse signal recovery model is introduced in Section 2. Next, we deduce the error bound of the sparse signal reconstruction in Section 3. The practical algorithm design and the experimental results are presented in Sections 4 and 5, respectively. Finally, we conclude this paper in Section 6.

#### 2. The Proposed Sparse Signal Recovery Model

##### 2.1. Problem Reformulation

The reconstruction algorithm plays a central role in sparse signal reconstruction. We propose the $\ell_1$- and $\ell_2$-norm joint regularization to approximate the $\ell_0$-norm, and the sparse signal recovery problem can be reformulated as follows:

$$\min_x \lambda_1\|x\|_1 + \lambda_2\|x\|_2 \quad \text{subject to} \quad y = Ax, \tag{7}$$

where $\lambda_1, \lambda_2 \ge 0$ are the regularization weights.

We now explain why the joint combination of $\ell_1$- and $\ell_2$-norm regularization provides a reasonable approximation to the $\ell_0$-norm, which can bring surprising benefits. Let us first show this from the geometric point of view. The linear constraint $y = Ax$ defines the feasible set of the problem. Geometrically, solving (4) (or (6)) amounts to inflating the $\ell_1$- (or $\ell_p$-) norm ball centered at the origin and stopping its inflation once it touches the feasible set. Figure 1 illustrates some examples with the different values $p = 2, 1.5, 1, 0.5$, and $0.1$, respectively. One can see that the norms with $p \le 1$ tend to touch the feasible set at the axes, which leads to sparse solutions. On the other hand, the $\ell_2$- or $\ell_{1.5}$-norm tends to yield a nonsparse solution. One can also observe from Figure 1 that, by using the joint $\ell_1$- and $\ell_2$-norm in (7) with a suitable ratio $\lambda_2/\lambda_1$, the intersection also takes place at the axes, which leads to a sparse solution as well. Namely, the proposed signal reconstruction model resembles the $\ell_0$-norm approximation capability of the $\ell_p$-norm ($p < 1$) relaxation scheme. The influences of different choices of $\lambda_1$ and $\lambda_2$ will be highlighted in the following discussions.
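The sparsity-promoting effect of the joint penalty can also be checked numerically. The paper's own solver (Section 4) is based on conjugate gradients; as a self-contained illustration we instead sketch a proximal-gradient iteration for a penalized variant of (7), $\min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda_1\|x\|_1 + \lambda_2\|x\|_2$, using the fact that the proximal operator of $\lambda_1\|\cdot\|_1 + \lambda_2\|\cdot\|_2$ is elementwise soft-thresholding followed by $\ell_2$ shrinkage. The function names and test data are ours.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_joint(v, t1, t2):
    """Proximal operator of t1*||x||_1 + t2*||x||_2: soft-threshold
    elementwise, then shrink the whole vector's l2 norm."""
    u = soft_threshold(v, t1)
    nrm = np.linalg.norm(u)
    if nrm <= t2:
        return np.zeros_like(u)
    return (1.0 - t2 / nrm) * u

def joint_l1_l2(A, y, lam1, lam2, iters=1000):
    """Proximal-gradient iteration for the penalized variant
    min_x 0.5*||y - A@x||_2^2 + lam1*||x||_1 + lam2*||x||_2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = prox_joint(x - grad / L, lam1 / L, lam2 / L)
    return x

# usage: the joint penalty still returns a sparse estimate
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = [1.5, -1.0, 2.0, -2.0, 1.0]
x_hat = joint_l1_l2(A, A @ x_true, lam1=0.02, lam2=0.02)
```

The soft-thresholding stage produces the exact zeros at the axes predicted by the geometric picture, while the $\ell_2$ shrinkage contributes only a global scaling; this matches the observation above that, for a suitable ratio $\lambda_2/\lambda_1$, the joint penalty still yields sparse solutions.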