Mathematical Problems in Engineering

Volume 2017 (2017), Article ID 4903791, 7 pages

https://doi.org/10.1155/2017/4903791

## Support Recovery of Greedy Block Coordinate Descent Using the Near Orthogonality Property

College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China

Correspondence should be addressed to Haifeng Li

Received 23 November 2016; Accepted 15 March 2017; Published 27 April 2017

Academic Editor: Bogdan Dumitrescu

Copyright © 2017 Haifeng Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this paper, using the near orthogonality property, we analyze the performance of the greedy block coordinate descent (GBCD) algorithm when both the measurements and the measurement matrix are perturbed by errors. An improved sufficient condition is presented which guarantees that the support of the sparse matrix is recovered exactly. A counterexample is provided to show that GBCD can fail; it improves the existing result. Experiments further indicate that GBCD is robust under these perturbations.

#### 1. Introduction

The greedy block coordinate descent (GBCD) algorithm was presented in [1] for direction of arrival (DOA) estimation. In [1], DOA estimation is treated as a multiple measurement vectors (MMV) problem, in which a common support shared by multiple unknown vectors is recovered from multiple measurements. The authors provided a sufficient condition, based on mutual coherence, to guarantee that GBCD exactly recovers the nonzero support from noiseless measurements.

Recently, the work of [2] discussed the following totally perturbed model:
$$\hat{Y} = AX + N, \qquad \hat{A} = A + E, \tag{1}$$
with inputs $\hat{Y}$ and $\hat{A}$. Here $N$ denotes the measurement noise and $E$ denotes the system perturbation. The perturbations $E$ and $N$ are quantified with the following relative bounds:
$$\frac{\|E\|_2^{(K)}}{\|A\|_2^{(K)}} \le \varepsilon_A, \qquad \frac{\|N\|_F}{\|AX\|_F} \le \varepsilon_Y, \tag{2}$$
where $\|A\|_2^{(K)}$ and $\|AX\|_F$ are nonzero. Here, $\|A\|_2^{(K)}$ denotes the largest spectral norm taken over all $K$-column submatrices of $A$. Throughout the paper, we are only interested in the case where $\varepsilon_A$ and $\varepsilon_Y$ are far less than 1. In (1), $X$ is a $K$-group sparse matrix; that is, it has no more than $K$ nonzero rows, $|\{i : X^i \neq 0\}| \le K$, where $X^i$ is the $i$th row of $X$. It is assumed that all columns of $A$ are normalized to be of unit norm [3]. Both $Y$ and $A$ are totally perturbed in (1). This case can be found in source separation [4], radar [5], remote sensing [6], and countless other problems. In addition, total perturbations have also been discussed in [7–9].
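The totally perturbed setup described above can be sketched numerically. In the sketch below, all variable names, problem sizes, and perturbation levels are illustrative assumptions, and plain Frobenius norms stand in for the $K$-restricted spectral norms:

```python
import numpy as np

# Illustrative sketch of the totally perturbed MMV model (not the paper's code).
rng = np.random.default_rng(0)
m, n, d, K = 20, 40, 3, 4           # measurements, columns, snapshots, sparsity

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)      # unit-norm columns, as assumed above

# K-group sparse X: at most K nonzero rows, shared by all d columns
X = np.zeros((n, d))
support = rng.choice(n, size=K, replace=False)
X[support] = rng.standard_normal((K, d))

E = 0.01 * rng.standard_normal((m, n))   # system perturbation
N = 0.01 * rng.standard_normal((m, d))   # measurement noise

Y_hat = A @ X + N                   # perturbed measurements
A_hat = A + E                       # perceived measurement matrix

# Relative perturbation levels (far less than 1, as required)
eps_A = np.linalg.norm(E) / np.linalg.norm(A)
eps_Y = np.linalg.norm(N) / np.linalg.norm(A @ X)
```

A recovery algorithm such as GBCD only ever sees the pair `(Y_hat, A_hat)`; the clean quantities `A`, `X` are used solely to measure recovery quality.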

One of the most commonly used conditions is the restricted isometry property (RIP). A matrix $A$ satisfies the RIP of order $K$ if there exists a constant $\delta \in [0, 1)$ such that
$$(1 - \delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\|x\|_2^2 \tag{3}$$
for all $K$-sparse vectors $x$. In particular, the minimum of all constants $\delta$ satisfying (3) is called the restricted isometry constant (RIC) $\delta_K$.
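For intuition, the RIC can be computed by brute force on tiny matrices: it is the largest deviation of the eigenvalues of any $K$-column Gram submatrix from 1. The search is exponential in the number of columns, so this sketch is purely illustrative:

```python
import numpy as np
from itertools import combinations

def ric(A, K):
    """Brute-force restricted isometry constant delta_K of A.

    Exponential in the number of columns; only usable on tiny
    illustrative examples, never on realistic problem sizes.
    """
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), K):
        # Eigenvalues of the Gram matrix of the K-column submatrix A_S
        eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
A /= np.linalg.norm(A, axis=0)      # unit-norm columns, so delta_1 = 0
```

With unit-norm columns `ric(A, 1)` is zero up to rounding, and the constant grows monotonically with the order $K$.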

There are many papers [8, 10–14] discussing sufficient conditions for orthogonal matching pursuit (OMP), one of the most widely used greedy algorithms for sparse recovery. In [3], using the near orthogonality property, the authors improved the sufficient condition of OMP. As noted in [3], the near orthogonality property further develops the orthogonality characterization of the columns of the measurement matrix; it plays a fundamental role in the study of signal reconstruction performance in compressed sensing. In the noiseless case, the work of [15] analyzed the performance of GBCD using the near orthogonality property and improved the results in [2].

In this paper, under total perturbations, we use the near orthogonality property to improve the theoretical guarantee for the GBCD algorithm, sharpening the RIC-based sufficient condition stated in [2]. We also present a counterexample showing that GBCD can fail; this example is superior to the one in [2]. Under total perturbations, the robustness of GBCD is demonstrated by experiments.

Now we give some notation that will be used in this paper. $A_i$ denotes the $i$th column of a matrix $A$, and $A^T$ denotes the transpose of $A$. $I$ denotes an identity matrix. The symbol $\mathrm{vec}(\cdot)$ denotes the vectorization operator obtained by stacking the columns of a matrix one underneath the other. The cardinality of a finite set $S$ is denoted by $|S|$. The support of $X$ is denoted by $\mathrm{supp}(X) = \{i : X^i \neq 0\}$, where $X^i$ is the $i$th row of $X$. $\|A\|_2^{(K)}$ denotes the largest spectral norm taken over all $K$-column submatrices of $A$. Let $\max_i \|X^i\|_2$ denote the maximum norm of the rows of $X$. We write $A_S$ for the column submatrix of $A$ whose column indices are listed in the set $S$ and $X^S$ for the row submatrix of $X$ whose row indices are listed in the set $S$. $e_i$ denotes the $i$th unit standard vector.

#### 2. Problem Formulation

Analogous to [1], (1) can be rewritten as
$$\min_{X} \; \frac{1}{2}\big\|\hat{Y} - \hat{A}X\big\|_F^2 + \lambda \sum_{i=1}^{n} \|X^i\|_2. \tag{4}$$

Assume that $y = \mathrm{vec}(\hat{Y})$ and $x = \mathrm{vec}(X)$. The objective function in (4) can then be written as $f(x) = \frac{1}{2}\|y - \Psi x\|_2^2 + \lambda \sum_{i=1}^{n} \|X^i\|_2$, where $\Psi = I \otimes \hat{A}$ with $\otimes$ denoting the Kronecker product. Combining the quadratic approximation of the smooth term with the standard BCD algorithm, the solution to the $i$th subproblem is given by a soft-thresholding operator. The authors in [1] update only the block that yields the greatest descent distance. We now list the GBCD algorithm (Algorithm 1).
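The greedy block update described above can be sketched as follows. This is a minimal illustration of greedy block coordinate descent with row-wise soft thresholding under the unit-norm-column assumption, not the paper's exact Algorithm 1; the function name, step count, and demo sizes are illustrative:

```python
import numpy as np

def gbcd(A_hat, Y_hat, lam=0.1, n_iter=100):
    """Sketch of greedy block coordinate descent for
    min_X 0.5 * ||Y - A X||_F^2 + lam * sum_i ||X^i||_2,
    assuming the columns of A_hat have unit norm."""
    n = A_hat.shape[1]
    X = np.zeros((n, Y_hat.shape[1]))
    R = Y_hat.copy()                          # residual Y - A X
    for _ in range(n_iter):
        C = A_hat.T @ R + X                   # candidate pre-threshold rows
        norms = np.linalg.norm(C, axis=1)
        shrink = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        X_new = shrink[:, None] * C           # row-wise soft thresholding
        i = int(np.argmax(np.linalg.norm(X_new - X, axis=1)))
        # Update only the block with the greatest descent distance
        R += np.outer(A_hat[:, i], X[i] - X_new[i])
        X[i] = X_new[i]
    return X

# Tiny noiseless demo: two active rows should dominate the estimate
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 16))
A /= np.linalg.norm(A, axis=0)
X0 = np.zeros((16, 3))
X0[2], X0[9] = 3.0, -3.0
Xr = gbcd(A, A @ X0, lam=0.1, n_iter=200)
recovered = np.argsort(np.linalg.norm(Xr, axis=1))[-2:]
```

Maintaining the residual incrementally keeps each iteration at one matrix-vector product per block scan, which is the point of the block coordinate structure.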