Journal of Electrical and Computer Engineering

Volume 2017, Article ID 3458054, 7 pages

https://doi.org/10.1155/2017/3458054

## A New Generalized Orthogonal Matching Pursuit Method
Liquan Zhao and Yulong Liu

College of Information Engineering, Northeast Electric Power University, Jilin 132012, China

Correspondence should be addressed to Liquan Zhao; zhao_liquan@163.com

Received 26 January 2017; Revised 1 July 2017; Accepted 16 July 2017; Published 13 August 2017

Academic Editor: Jar Ferr Yang

Copyright © 2017 Liquan Zhao and Yulong Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

To improve the reconstruction performance of the generalized orthogonal matching pursuit (GOMP), an improved method is proposed. GOMP selects columns from the sensing matrix and adds their indices to the estimated support set to reconstruct a sparse signal. Those columns may contain error columns that reduce the reconstruction performance. Therefore, the proposed algorithm adds a backtracking process to remove the low-reliability columns from the selected column set. For any $K$-sparse signal, the proposed method first computes the correlation between the columns of the sensing matrix and the residual vector and then selects the $S$ columns that correspond to the $S$ largest correlations in magnitude, adding their indices to the estimated support set in each iteration. Secondly, the proposed algorithm projects the measurements onto the space spanned by the selected columns and calculates the projection coefficient vector. When the size of the support set is larger than $K$, the proposed method selects high-reliability indices from the support set using a search strategy. Finally, the proposed method updates the estimated support set using the selected high-reliability indices. Simulation results demonstrate that the proposed algorithm has better recovery performance.

#### 1. Introduction

In recent years, a new theory named compressive sensing (CS) [1] has surpassed the limits of the Nyquist sampling rate. Because CS can recover signals at a sampling frequency far lower than the Nyquist sampling rate, it has attracted tremendous interest over the past few years [2, 3]. CS differs from traditional Nyquist sampling theory and includes three procedures: sparse representation, nonrelated linear measurement, and signal reconstruction. The reconstruction algorithm aims to recover signals accurately from the measurements, and this step is one of the most important parts of CS.

Recently, many reconstruction algorithms have been proposed to obtain the original sparse signal from measurements. Two major classes of reconstruction algorithms are $\ell_1$-minimization and greedy pursuit algorithms. Common $\ell_1$-minimization approaches include basis pursuit (BP) [4], gradient projection for sparse reconstruction (GPSR) [5], iterative thresholding (IT) [6], and other algorithms. Those algorithms perform well in solving a convex minimization problem, but they have high computational complexity.

Greedy algorithms have received increasing attention for their excellent performance and low cost in recovering sparse signals from compressed measurements. An early greedy algorithm was the matching pursuit (MP) algorithm [7]. Building on MP, the orthogonal matching pursuit (OMP) algorithm [8] was proposed to optimize MP via orthogonalization over the estimated support set, and it has become a well-known greedy algorithm with wide application. The regularized orthogonal matching pursuit (ROMP) algorithm [9] refines the selected columns of the measurement matrix with a regularized rule to improve the speed of OMP. The stagewise orthogonal matching pursuit (StOMP) [10] selects multiple columns in each iteration via a preset threshold. The subspace pursuit (SP) [11] and compressive sampling matching pursuit (CoSaMP) [12] propose similar improvements: both are built on the idea of backtracking, and the difference is that SP selects $K$ columns from the sensing matrix in each iteration, while CoSaMP selects $2K$. The generalized orthogonal matching pursuit (GOMP) was proposed by Wang et al. [13, 14]. GOMP selects $S$ columns in each iteration; when $S = 1$, GOMP is identical to OMP. Compared with OMP, which selects only one column per iteration, GOMP increases the number of columns selected in each iteration to improve computational efficiency and recovery performance. GOMP has received increasing attention in recent years because it enhances the recovery performance of OMP, and several papers have analyzed its theoretical performance [13–17].

#### 2. Compressive Sensing Model

Compressive sensing requires that the target signal is a $K$-sparse signal; that is, if we regard the signal $x$ as an $N$-dimensional vector, there are at most $K$ nonzero elements in $x$. However, in practical applications, signals are not sparse in all cases. The target signal then has to be transformed into a sparse signal based on a set of sparse basis $\Psi$. In this case, $x$ can be defined as

$$x = \Psi s, \quad \|s\|_0 \le K, \tag{1}$$

where $\|\cdot\|_0$ denotes the number of nonzero elements in a vector. Thus, the signal $x$ is equivalently represented by the $K$-sparse vector $s$ under some linear transformation in some domain. The process of compressive sensing can be regarded as a technique that automatically selects relevant information from signals by a measurement. In the theory, $x$ is translated into the $M$-dimensional measurement vector $y$ via a matrix multiplication with $\Phi$:

$$y = \Phi x, \tag{2}$$

where $\Phi$ is defined as the measurement matrix with dimensions $M \times N$. Combining (1) with (2), we can obtain

$$y = \Phi \Psi s = A s, \tag{3}$$

where $A = \Phi \Psi$ is the sensing matrix.

In most scenarios, $M \ll N$. Thus, we can interpret that $x$ is compressed into $y$, with the dimension reduced from $N$ to $M$. Clearly, (3) is an underdetermined equation, and it is difficult to obtain an accurate solution based on it. That is to say, it is impossible to reconstruct the original signal accurately with traditional methods that compute the inverse of the matrix. In this case, we can obtain $s$ by solving the $\ell_p$-minimization problem

$$\min_{s} \|s\|_p \quad \text{subject to} \quad y = A s. \tag{4}$$

Several methods exist for solving this problem. When $p = 1$, (4) is an $\ell_1$-minimization problem, which can be solved by using a convex optimization algorithm. When $p = 0$, (4) is an $\ell_0$-minimization problem, which can be solved using a greedy algorithm. An appropriate condition for exact recovery is that the sensing matrix satisfies the restricted isometry property (RIP) condition [1].
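The measurement model in (2) can be sketched numerically. The sizes `N`, `M`, and `K` below are illustrative toy choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8  # signal length, number of measurements, sparsity (toy sizes)

# Build a K-sparse signal x: K random positions with Gaussian amplitudes.
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian measurement matrix Phi (M x N), scaled so E[||Phi x||^2] = ||x||^2.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Compressed measurements: y = Phi @ x is an underdetermined system since M << N.
y = Phi @ x
print(y.shape)  # (64,)
```

Because `y` has only 64 entries while `x` has 256, recovering `x` from `y` requires the sparsity prior and a solver such as those discussed above.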

*Definition 1.* A sensing matrix $A$ is said to satisfy the RIP condition with restricted isometry constant $\delta_K$, defined as the smallest number such that

$$(1 - \delta_K)\|s\|_2^2 \le \|A s\|_2^2 \le (1 + \delta_K)\|s\|_2^2 \tag{5}$$

holds for any $K$-sparse vector $s$, with $0 \le \delta_K < 1$.
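The RIP bounds in Definition 1 can be illustrated empirically: for a Gaussian matrix, the ratio $\|A s\|_2^2 / \|s\|_2^2$ concentrates around 1 for sparse vectors. The sketch below (with illustrative sizes of our own choosing) estimates a lower bound on $\delta_K$ from random trials; it is only a sanity check, since the true $\delta_K$ is a maximum over all $K$-sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, trials = 256, 64, 8, 200

# Gaussian sensing matrix scaled so E[||A s||^2] = ||s||^2.
A = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = np.empty(trials)
for t in range(trials):
    # Random K-sparse test vector.
    s = np.zeros(N)
    idx = rng.choice(N, size=K, replace=False)
    s[idx] = rng.standard_normal(K)
    ratios[t] = np.linalg.norm(A @ s) ** 2 / np.linalg.norm(s) ** 2

# Empirical lower bound on the restricted isometry constant delta_K:
# the largest observed deviation of the ratio from 1.
delta_est = np.max(np.abs(ratios - 1.0))
print(round(float(delta_est), 2))
```

A small `delta_est` is consistent with (5) holding with a usefully small constant; certifying the RIP exactly is combinatorial and not attempted here.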

#### 3. GOMP Algorithm

Greedy algorithms are widely used to recover signals in CS due to their simplicity and low computational complexity. Two methods exist for improving the OMP algorithm. The first is based on the idea of backtracking, as in the subspace pursuit (SP) algorithm [11]. The second selects more than one atom in each iteration, as in GOMP. The backtracking method has higher computational complexity, but it yields higher accuracy in most cases. In this section, we introduce the GOMP algorithm.

GOMP first computes the correlation between the columns of the sensing matrix $\Phi$ and the residual vector $r$ by $\Phi^{T} r$ and then selects the $S$ columns that correspond to the $S$ largest correlations in magnitude, adding their indices to the estimated support set $\Lambda$ in each iteration. Next, the projection coefficient vector $\hat{x}_{\Lambda}$ of the measurements $y$ onto the space spanned by the selected columns $\Phi_{\Lambda}$ is obtained using the least squares (LS) method. The residual is revised by subtracting $\Phi_{\Lambda}\hat{x}_{\Lambda}$ from $y$. These operations are repeated until either the iteration number reaches the maximum $\min(K, M/S)$ or the $\ell_2$-norm of the residual falls below a threshold $\varepsilon$. For ease of understanding, we describe the GOMP algorithm in Algorithm 1 according to [13, 14].
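The steps above can be sketched as follows. This is a minimal illustration of GOMP as summarized in this section, not the authors' reference implementation; the helper name `gomp` and the problem sizes in the usage example are our own choices:

```python
import numpy as np

def gomp(y, Phi, K, S=2, eps=1e-6):
    """Generalized OMP sketch: select S columns per iteration (S = 1 reduces to OMP)."""
    M, N = Phi.shape
    support = []
    r = y.copy()
    x_hat = np.zeros(0)
    for _ in range(min(K, M // S)):            # iteration cap as in [13, 14]
        corr = np.abs(Phi.T @ r)               # correlations with the residual
        idx = np.argsort(corr)[-S:]            # S largest correlations in magnitude
        support = sorted(set(support) | {int(i) for i in idx})
        # Least squares projection of y onto the space of the selected columns.
        x_hat, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_hat        # revise the residual
        if np.linalg.norm(r) < eps:            # l2-norm stopping rule
            break
    x = np.zeros(N)
    x[support] = x_hat
    return x

# Usage: recover a K-sparse signal from M random Gaussian measurements.
rng = np.random.default_rng(2)
N, M, K = 100, 50, 5
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true
x_rec = gomp(y, Phi, K, S=2)
```

Note that with $S > 1$ the final support can hold up to $S \cdot \min(K, M/S)$ indices, more than the $K$ true ones; the backtracking step proposed in this paper is precisely what prunes such low-reliability indices.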