Journal of Electrical and Computer Engineering

Volume 2018, Article ID 9130531, 11 pages

https://doi.org/10.1155/2018/9130531

## Improved Stochastic Gradient Matching Pursuit Algorithm Based on the Soft-Thresholds Selection

College of Information Engineering, Northeast Electric Power University, Jilin 132012, China

Correspondence should be addressed to Zhao Liquan; zhao_liquan@163.com

Received 19 February 2018; Accepted 11 July 2018; Published 24 September 2018

Academic Editor: Panajotis Agathoklis

Copyright © 2018 Zhao Liquan and Hu Yunfeng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In the stochastic gradient matching pursuit algorithm, the preliminary atom set contains redundant atoms, which affects the accuracy of the signal reconstruction and increases the computational complexity. To overcome this problem, an improved method is proposed. Firstly, a limited soft-threshold selection strategy is used to select the new atoms from the preliminary atom set, to reduce the redundancy of the preliminary atom set. Secondly, before finding the least squares solution of the residual, it is determined whether the number of columns of the measurement matrix is smaller than the number of rows. If this condition is satisfied, the least squares solution is calculated; otherwise, the loop is exited. Finally, if the length of the candidate atomic index set is less than the sparsity level, the current candidate atom index set is taken as the support atom set; otherwise, the support atom index set is determined by the least squares solution. Simulation results indicate that the proposed method achieves a higher reconstruction probability than the other methods and a shorter running time than the stochastic gradient matching pursuit algorithm.

#### 1. Introduction

Compressed sensing (CS) [1, 2] theory has three core problems: the sparse representation of signals, the design of the measurement matrix, and the design of reconstruction algorithms. The reconstruction algorithm directly determines the accuracy of the recovered signal and the convergence rate, and hence whether the theory is feasible in practice. The recovery problem is that of reconstructing a high-dimensional original signal from low-dimensional measurement vectors. At present, many reconstruction algorithms have been proposed, which fall into three classes: convex optimization methods, combinational methods, and greedy pursuit methods. Convex optimization methods approximate the signal by transforming the nonconvex problem into a convex one; they include basis pursuit (BP) [3], the gradient projection for sparse reconstruction (GPSR) [4] algorithm, iterative thresholding (IT) [5], the interior-point method [6], Bregman iteration (BT) [7], and total variation (TV) [8]. Although convex optimization algorithms require fewer observations, their higher computational complexity makes them unsuitable for practical applications.

Combinational methods include Fourier sampling [9, 10], chaining pursuit (CP) [11], and heavy hitters on steroids pursuit (HHSP) [12]. Combinational methods have low complexity; however, the accuracy of the recovered signal is not as good as that of convex optimization algorithms. Greedy algorithms have many advantages, such as a simple structure, fast convergence, and low complexity, which makes them the first choice for recovery algorithms.

To date, many greedy pursuit algorithms have been proposed, the earliest being the matching pursuit (MP) [13] algorithm. Based on this algorithm, the orthogonal matching pursuit (OMP) [14] algorithm was proposed to optimize MP via orthogonalization of the atoms of the support set. However, the OMP algorithm selects only one atom for the support set in each round of iteration, so its efficiency is low. Successors include the regularized OMP (ROMP) [15] algorithm, the subspace pursuit (SP) [16] algorithm, the compressive sampling matching pursuit (CoSaMP) [17, 18] algorithm, and the stagewise OMP (StOMP) [19] algorithm. The ROMP algorithm realizes an effective selection of atoms via a regularization rule, to improve the speed of OMP. The SP and CoSaMP algorithms are similar: both adopt a backtracking strategy. The difference between SP and CoSaMP is the number of atoms selected from the measurement matrix to compose the preliminary atomic set in each round of iteration: SP selects *s* atoms, while CoSaMP selects 2*s* atoms. The StOMP algorithm selects multiple atoms (columns of the measurement matrix) in each round of iteration via a threshold parameter. The greedy algorithms mentioned above all require the sparsity information of the signal, and their atom selection is inflexible, which affects the convergence speed, robustness, and reconstruction performance of the algorithms.
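To make the greedy selection and orthogonalization steps concrete, the following is a minimal NumPy sketch of OMP. The dimensions, random seed, and test signal are arbitrary choices for the demo, not values from the paper.

```python
import numpy as np

def omp(Phi, y, s):
    """Orthogonal matching pursuit sketch: in each round, pick the atom
    (column of Phi) most correlated with the residual, then re-fit all
    selected atoms by least squares (the orthogonalization step)."""
    residual = y.astype(float).copy()
    support = []
    coef = np.array([])
    for _ in range(s):
        # Correlate the residual with every atom and pick the best match.
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0          # never reselect an atom
        support.append(int(np.argmax(correlations)))
        # Least squares over the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Tiny demo on a 2-sparse signal measured by a Gaussian matrix.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((12, 24)) / np.sqrt(12)
x_true = np.zeros(24)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(Phi, Phi @ x_true, s=2)
```

Note how one atom is added per iteration, which is exactly the efficiency limitation that ROMP, SP, CoSaMP, and StOMP address by selecting several atoms per round.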

Because traditional greedy pursuit algorithms need to compute the inverse matrix of the sensing matrix, which requires a significant amount of computation time and storage space, their reconstruction efficiency is low. In recent years, researchers have proposed fast versions of the greedy algorithms that avoid computation of the inverse matrix [20]. The gradient pursuit (GP) algorithm uses the gradient idea of unconstrained optimization methods to replace the computation of the inverse matrix, reducing the computational complexity and storage space of traditional greedy algorithms [21]. However, the GP algorithm converges slowly and has lower efficiency; in addition, it cannot solve the problem of large-scale data recovery. To address these problems, the conjugate gradient pursuit (CGP) [22] algorithm and the approximate conjugate gradient pursuit (ACGP) [23] algorithm were proposed. Although the CGP and ACGP algorithms effectively reduce the computational complexity and storage space of traditional greedy algorithms, their reconstruction performance still needs improvement. Based on the GP algorithm, the stagewise weak gradient pursuit (SWGP) [24] algorithm was proposed to improve the reconstruction performance and convergence speed of GP, via the introduction of a conjugate direction and a weak selection strategy. Motivated by stochastic gradient descent methods, the stochastic gradient matching pursuit (StoGradMP) algorithm [25] was recently proposed for optimization problems with sparsity constraints. The fixed-number atomic selection in the StoGradMP algorithm (that is, selecting 2*s* atoms at the preliminary stage of each iteration) leads to redundant atoms in the preliminary atomic set.
When this set is merged with the candidate atomic set, the accuracy of the least squares solution and of the support atomic set estimation is reduced, which lowers the precision of the signal reconstruction and increases the computational complexity of the algorithm. In this study, we use a limited soft-threshold selection strategy to perform a second selection on the preliminary atom set after the preliminary stage, which improves the reconstruction accuracy. The combination with reliability verification conditions ensures the correctness and effectiveness of the proposed method.
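To illustrate what a threshold-based second selection could look like, here is a hypothetical NumPy sketch: from the 2*s* largest-magnitude gradient entries, it keeps only those within a factor *t* of the largest. The rule and the parameter *t* are illustrative assumptions for this sketch, not the exact strategy proposed in this paper.

```python
import numpy as np

def limited_soft_threshold_select(gradient, s, t=0.5):
    """Hypothetical second-selection step: from the 2s largest-magnitude
    gradient entries (the preliminary set), keep only those within a
    factor t of the largest, trimming weak, redundant atoms.
    Both the rule and t are illustrative assumptions."""
    magnitudes = np.abs(gradient)
    # Preliminary set: indices of the 2s largest components.
    preliminary = np.argsort(magnitudes)[::-1][:2 * s]
    # Second selection: drop atoms weak relative to the strongest one.
    threshold = t * magnitudes[preliminary[0]]
    return preliminary[magnitudes[preliminary] >= threshold]

g = np.array([0.05, 3.0, -0.1, 2.4, 0.2, -2.9, 0.01, 1.0])
idx = limited_soft_threshold_select(g, s=2, t=0.5)
```

With *s* = 2 the preliminary set has four atoms, but the weakest (magnitude 1.0, below half of 3.0) is discarded, leaving three atoms for the candidate set.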

#### 2. Compressed Sensing Theory

The recent work in the area of compressed sensing [26] demonstrated that it is possible to algorithmically recover sparse (and, more generally, compressible) signals from incomplete observations. The simplest model is an $N$-dimensional signal $x$ with a small number of nonzero entries under noise-free conditions:

$$x \in \mathbb{R}^{N}, \quad \|x\|_{0} = s \ll N, \quad (1)$$

where $\|x\|_{0}$ and $s$ are the number of nonzero entries and the sparsity level of the signal, respectively. Such signals are called $s$-sparse. Any signal in $\mathbb{R}^{N}$ can be represented in terms of $N$ basis vectors $\{\psi_{i}\}_{i=1}^{N}$. For simplicity, we assume that the basis is orthonormal. Forming the $N \times N$ basis matrix $\Psi = [\psi_{1}, \psi_{2}, \ldots, \psi_{N}]$ by stacking the vectors $\psi_{i}$ as columns, we can express any signal $x$ as

$$x = \Psi \alpha = \sum_{i=1}^{N} \alpha_{i} \psi_{i}, \quad (2)$$

where $\alpha$ is the $N \times 1$ column vector of projection coefficients and $\alpha_{i} = \langle x, \psi_{i} \rangle$ is the $i$th projection coefficient. Obviously, $x$ and $\alpha$ are equivalent representations of the same signal in different domains: $x$ is the signal in the time domain and $\alpha$ is the signal in the $\Psi$ domain.
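A small NumPy example of the representation $x = \Psi \alpha$, using an orthonormal basis built by QR decomposition (an arbitrary choice for the demo):

```python
import numpy as np

# A signal that is 2-sparse in an orthonormal basis Psi: x = Psi @ alpha.
N = 8
rng = np.random.default_rng(1)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal basis matrix
alpha = np.zeros(N)
alpha[[2, 5]] = [1.0, -0.7]      # only s = 2 nonzero projection coefficients
x = Psi @ alpha                  # the same signal, time-domain representation
alpha_back = Psi.T @ x           # orthonormality recovers the coefficients
```

Because $\Psi$ is orthonormal, $\alpha_{i} = \langle x, \psi_{i} \rangle$, i.e., `Psi.T @ x` returns exactly the coefficient vector: the two vectors are equivalent representations of one signal.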

When $x$ is $s$-sparse in the time domain, let $\Psi = I$, where $I$ is the $N \times N$ unit matrix, so that $x = \alpha$. We use an $M \times N$ ($M \ll N$) measurement matrix $\Phi$ to make a linear measurement of the projection coefficient vector $x$, obtaining an $M \times 1$ observation vector $y$, expressed as

$$y = \Phi x. \quad (3)$$

Equation (3) is a linear projection of the original signal $x$ on $\Phi$. Note that the measurement process is nonadaptive; that is, $\Phi$ does not depend in any way on the signal $x$. Clearly, the dimension $M$ of $y$ is much lower than the dimension $N$ of $x$; that is, this is an underdetermined problem, and Equation (3) has infinitely many solutions. It is difficult to recover the projection coefficient vector $x$ directly from the observation vector $y$. However, if the original signal $x$ is $s$-sparse and Equation (3) satisfies certain conditions, $x$ can be recovered by solving the $\ell_{0}$-minimization problem

$$\hat{x} = \arg\min \|x\|_{0} \quad \text{s.t.} \quad y = \Phi x, \quad (4)$$

where $\|x\|_{0}$ is the $\ell_{0}$-norm of the vector, representing the number of nonzero entries. Candes and Tao demonstrated that if the $s$-sparse signal $x$ is to be accurately recovered, the number of measurements $M$ (the dimension of the observation vector $y$) must satisfy $M \geq c\, s \log(N/s)$, and the measurement matrix $\Phi$ must satisfy the restricted isometry property (RIP) [27, 28].
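The $\ell_{0}$ problem in Equation (4) can be solved by brute force on a toy instance, which also shows why practical algorithms are needed: the search over supports is combinatorial. All sizes and the seed below are arbitrary demo choices.

```python
import numpy as np
from itertools import combinations

# Brute-force l0-minimization: try every 2-sparse support and keep the
# one whose least-squares residual vanishes.
rng = np.random.default_rng(2)
M, N, s = 6, 16, 2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
x = np.zeros(N)
x[[4, 9]] = [2.0, -1.0]
y = Phi @ x                                      # underdetermined: M << N

best_support, best_res = None, np.inf
for support in combinations(range(N), s):
    cols = list(support)
    coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    res = np.linalg.norm(y - Phi[:, cols] @ coef)
    if res < best_res:
        best_support, best_res = support, res
```

The search visits all $\binom{16}{2} = 120$ supports here, but the count grows exponentially with $N$ and $s$, which is exactly what greedy pursuit methods avoid.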

When $x$ is not $s$-sparse in the time domain, Equation (4) cannot be used directly for signal recovery. The signal $x$ can, however, be sparsely represented on the sparse basis matrix $\Psi$. Combining Equations (2) and (3), we obtain

$$y = \Phi x = \Phi \Psi \alpha = A \alpha, \quad (5)$$

where $A = \Phi \Psi$ is the sensing matrix [29]. According to [30], an equivalent condition of the RIP is that the measurement matrix $\Phi$ is not correlated with the sparse basis matrix $\Psi$. Note that if the sensing matrix $A$ also satisfies the RIP, the recovered signal (or projection coefficient vector) can be obtained by solving an optimal $\ell_{0}$-norm problem similar to Equation (4):

$$\hat{\alpha} = \arg\min \|\alpha\|_{0} \quad \text{s.t.} \quad y = A \alpha. \quad (6)$$

From Equation (5), we see that $\Psi$ is fixed; therefore, to ensure that the sensing matrix $A$ also satisfies the RIP, the measurement matrix $\Phi$ must meet certain conditions. It is shown in [31] that when the measurement matrix $\Phi$ is a Gaussian random matrix, the sensing matrix $A$ satisfies the RIP condition with high probability.
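The RIP behavior of a Gaussian matrix can be observed empirically: for random $s$-sparse vectors $\alpha$, the ratio $\|A\alpha\|_{2} / \|\alpha\|_{2}$ concentrates near 1. The sizes, seed, and trial count below are arbitrary demo choices; this is an illustration, not a proof of the RIP.

```python
import numpy as np

# Empirical near-isometry check for a Gaussian sensing matrix.
rng = np.random.default_rng(3)
M, N, s = 64, 256, 4
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
Psi = np.eye(N)                                  # time-domain sparsity: Psi = I
A = Phi @ Psi                                    # sensing matrix A = Phi @ Psi

ratios = []
for _ in range(200):
    alpha = np.zeros(N)
    alpha[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ alpha) / np.linalg.norm(alpha))
ratios = np.asarray(ratios)
```

Over the 200 random sparse vectors, the ratio stays within a narrow band around 1, which is the qualitative content of the RIP for this sparsity level.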

However, in most scenarios, the signal contains noise. In this case, the measurement process can be expressed as

$$y = A \alpha + e, \quad (7)$$

where the matrix $A$ is the sensing matrix and $e$ is the $M$-dimensional noise vector; $\alpha$ is the $s$-sparse representation of the signal in the $\Psi$ domain. In this study, for simplicity, we assume that the signal $x$ is *s*-sparse in the time domain, that is, $\Psi = I$ and $A = \Phi$. Therefore, Equation (7) can be written as

$$y = \Phi x + e, \quad (8)$$

and we minimize the following formula to recover $x$:

$$\hat{x} = \arg\min_{x} \|y - \Phi x\|_{2}^{2} \quad \text{s.t.} \quad \|x\|_{0} \leq s, \quad (9)$$

where $s$ controls the sparsity of the solution to Equation (9).
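A small NumPy sketch of the noisy model and its recovery objective: at the true signal the objective equals the noise energy $\|e\|_{2}^{2}$, while a blind guess scores far worse. The dimensions, noise level, and seed are arbitrary demo choices.

```python
import numpy as np

# Noisy measurements y = Phi @ x + e and the objective ||y - Phi w||_2^2,
# to be minimised subject to ||w||_0 <= s.
rng = np.random.default_rng(4)
M, N = 20, 50
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[[1, 7, 30]] = [1.0, -0.5, 2.0]                 # an s = 3 sparse signal
e = 0.01 * rng.standard_normal(M)                # small additive noise
y = Phi @ x + e

def objective(w):
    """Data-fidelity term of the recovery problem."""
    return np.linalg.norm(y - Phi @ w) ** 2
```

Because $y - \Phi x = e$, the best achievable objective value over the true support is essentially $\|e\|_{2}^{2}$; the sparsity constraint is what rules out the infinitely many dense vectors that also fit $y$.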

To analyze Equation (8), we combine it with Equation (2), where $\alpha$ is the vector of projection coefficients of the signal $x$. The signal is considered sparse with respect to an atom set $D$ if the sparsity level $s$ is relatively small compared to the ambient dimension $N$. Therefore, we can express the optimization (9) in the following form:

$$\min_{w} F(w) \quad \text{s.t.} \quad \|w\|_{0,D} \leq s, \quad (10)$$

where $w \in \mathbb{R}^{N}$ and $F(w) = \frac{1}{2}\|y - \Phi w\|_{2}^{2}$ is a smooth function that is allowed to be nonconvex in the general setting; $\|w\|_{0,D}$ is defined as the norm that captures the sparsity level of $w$ with respect to $D$. In particular, $\|w\|_{0,D}$ is the smallest number of atoms in $D$ such that $w$ can be represented as $w = \sum_{i} \alpha_{i} d_{i}$ with $d_{i} \in D$.

For sparse signal recovery, the atom set $D$ consists of the $N$ basis vectors, each of size $N$, in the Euclidean space. This problem can be seen as a special case of Equation (10), with $\|w\|_{0,D} = \|w\|_{0}$ and $s$ the sparsity level. We decompose the observation vector $y$ into $n$ nonoverlapping vectors with a size of $b$ and denote the $i$th of them as $y_{b_{i}}$; $\Phi_{b_{i}}$ is the corresponding $b \times N$ submatrix of the measurement matrix $\Phi$. According to Equation (9), the objective function is $F(w) = \frac{1}{2}\|y - \Phi w\|_{2}^{2}$. We then denote $F(w)$ as

$$F(w) = \frac{1}{n} \sum_{i=1}^{n} f_{i}(w), \quad f_{i}(w) = \frac{n}{2} \|y_{b_{i}} - \Phi_{b_{i}} w\|_{2}^{2}, \quad (11)$$

where $n = M/b$, which is a positive integer. Each function $f_{i}(w)$ represents a collection (or block) of observations of size $b$. Here, we split the function $F(w)$ into multiple subfunctions $f_{i}(w)$, or equivalently block the measurement matrix $\Phi$ into the submatrices $\Phi_{b_{i}}$, which is beneficial for the calculation of the gradient.
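The block decomposition can be sketched in NumPy as follows. The scaling $f_{i}(w) = \frac{n}{2}\|y_{b_{i}} - \Phi_{b_{i}} w\|_{2}^{2}$, chosen so that the average of the subfunctions equals $F(w)$, is an assumption of this sketch; dimensions and seed are arbitrary demo choices.

```python
import numpy as np

# Split F(w) = (1/2)||y - Phi w||^2 into n = M/b block subfunctions f_i
# built from disjoint row blocks; a stochastic step only touches one block.
rng = np.random.default_rng(5)
M, N, b = 12, 30, 4
n = M // b                                       # number of blocks
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[[2, 17]] = [1.0, -1.0]
y = Phi @ x

def f_block(w, i):
    """Subfunction f_i(w) = (n/2) ||y_bi - Phi_bi w||^2 on row block i."""
    rows = slice(i * b, (i + 1) * b)
    return 0.5 * n * np.linalg.norm(y[rows] - Phi[rows] @ w) ** 2

def grad_block(w, i):
    """Gradient of f_i: the only quantity a stochastic step evaluates."""
    rows = slice(i * b, (i + 1) * b)
    return -n * Phi[rows].T @ (y[rows] - Phi[rows] @ w)

w = rng.standard_normal(N)
F_full = 0.5 * np.linalg.norm(y - Phi @ w) ** 2
F_mean = np.mean([f_block(w, i) for i in range(n)])
grad_full = -Phi.T @ (y - Phi @ w)
grad_mean = np.mean([grad_block(w, i) for i in range(n)], axis=0)
```

With this normalization, the average of the block values and of the block gradients reproduces the full objective and its gradient, so a randomly chosen block gives an unbiased gradient estimate.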

#### 3. StoGradMP Algorithm

CoSaMP [17] is a popular algorithm for recovering a sparse signal from its linear measurements. The idea of the CoSaMP algorithm was generalized to obtain the GradMP algorithm, which solves a broader class of sparsity-constrained problems. In this section, we describe the stochastic version of GradMP, the StoGradMP [25] algorithm, in which each iteration requires only the evaluation of the gradient of one subfunction.

The StoGradMP algorithm is described in Algorithm 1 and consists of the following steps in each iteration:

- *Randomize*. The measurement matrix $\Phi$ is subject to random block processing, to determine the row block $i$ of the submatrix $\Phi_{b_{i}}$. Then, according to Equation (11), the subfunction $f_{i}(w)$ is calculated.
- *Proxy*. Compute the gradient of $f_{i}(w)$. Here, the gradient is an $N \times 1$ column vector.
- *Identify*. Select the column indexes of the submatrix $\Phi_{b_{i}}$ corresponding to the $2s$ largest-magnitude components of the gradient vector, forming the preliminary index set $\Gamma$.
- *Merge*. Merge the preliminary index set $\Gamma$ with the support index set $\Lambda$ of the previous iteration to form the candidate index set $\hat{\Gamma} = \Gamma \cup \Lambda$.
- *Estimation*. The estimate $b$ of the signal over the candidate set is determined by a suboptimization method, namely the least squares method. Generally, $b$ is a transition signal.
- *Prune*. Select the column indexes of the measurement matrix $\Phi$ corresponding to the $s$ largest-magnitude components of the estimate $b$, forming the support index set $\Lambda$.
- *Update*. Update the signal estimate $w$ on the support index set $\Lambda$.
- *Check*. If the residual is less than the tolerance error, stop the iteration. If the loop index is greater than the maximum number of iterations, the algorithm ends and the approximation of the signal is output. Otherwise, continue the iteration until the halting condition is met.
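The steps above can be sketched in NumPy as follows. The block size, iteration cap, tolerance, and demo problem are arbitrary choices; this is a reading aid for the step structure, not the authors' implementation.

```python
import numpy as np

def sto_grad_mp(Phi, y, s, b=4, max_iter=100, tol=1e-6, seed=0):
    """Sketch of StoGradMP: per iteration, pick one random row block,
    use its gradient as a proxy, merge the 2s proxy atoms with the
    current support, estimate by least squares on the candidate set,
    then prune back to the s largest entries."""
    rng = np.random.default_rng(seed)
    M, N = Phi.shape
    n = M // b                                   # number of row blocks
    w = np.zeros(N)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        i = int(rng.integers(n))                              # Randomize
        rows = slice(i * b, (i + 1) * b)
        grad = -Phi[rows].T @ (y[rows] - Phi[rows] @ w)       # Proxy
        preliminary = np.argsort(np.abs(grad))[::-1][:2 * s]  # Identify
        candidate = np.union1d(preliminary, support)          # Merge
        coef, *_ = np.linalg.lstsq(Phi[:, candidate], y,
                                   rcond=None)                # Estimation
        estimate = np.zeros(N)
        estimate[candidate] = coef
        support = np.argsort(np.abs(estimate))[::-1][:s]      # Prune
        w = np.zeros(N)
        w[support] = estimate[support]                        # Update
        if np.linalg.norm(y - Phi @ w) < tol:                 # Check
            break
    return w

# Demo: recover a 3-sparse signal from 24 Gaussian measurements.
rng = np.random.default_rng(6)
M, N, s = 24, 64, 3
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[[5, 20, 40]] = [1.0, -2.0, 1.5]
w_hat = sto_grad_mp(Phi, Phi @ x, s)
```

Note that the Identify step always takes a fixed 2*s* atoms from the gradient proxy; this fixed-size preliminary set is exactly where the proposed soft-threshold second selection intervenes.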