Mathematical Problems in Engineering

Volume 2018, Article ID 9547934, 9 pages

https://doi.org/10.1155/2018/9547934

## An Adaptive Gradient Projection Algorithm for Piecewise Convex Optimization and Its Application in Compressed Spectrum Sensing

¹School of Physical and Mathematical Science, Nanjing Tech University, Nanjing 211816, China
²College of Computer Science and Technology, Nanjing Tech University, Nanjing 211816, China

Correspondence should be addressed to Tianjing Wang; wangtianjing@njtech.edu.cn

Received 18 November 2017; Revised 20 March 2018; Accepted 2 April 2018; Published 14 May 2018

Academic Editor: Paula L. Zabala

Copyright © 2018 Tianjing Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Signal sparse representation has attracted much attention in a wide range of application fields. A central aim of signal sparse representation is to find a sparse solution with the fewest nonzero entries from an underdetermined linear system, which leads to various optimization problems. In this paper, we propose an Adaptive Gradient Projection (AGP) algorithm to solve the piecewise convex optimization in signal sparse representation. To find a sparser solution, AGP provides an adaptive stepsize to move the iteration solution out of the attraction basin of a suboptimal sparse solution and enter the attraction basin of a sparser solution. Theoretical analyses are used to show its fast convergence property. The experimental results of real-world applications in compressed spectrum sensing show that AGP outperforms the traditional detection algorithms in low signal-to-noise-ratio environments.

#### 1. Introduction

The marked advances in signal processing in recent years have been driven by the emergence of new signal models and their applications. Signal sparse representation is an effective model for solving real-world problems, such as brain signal processing [1], face recognition [2], compressed spectrum sensing [3], and singing voice separation [4].

Given a signal $y \in \mathbb{R}^m$, signal sparse representation aims to identify the sparsest solution of an underdetermined linear system $y = Ax$, where $A \in \mathbb{R}^{m \times n}$ ($m < n$) is a full row rank matrix. The sparsity of a solution can be measured by the $\ell_0$-norm, which leads to the following optimization problem:
$$\min_{x} \|x\|_0 \quad \text{s.t.} \quad Ax = y. \tag{1}$$
Unfortunately, problem (1) is NP-hard [5]. Many methods (e.g., the greedy algorithm [6], $\ell_1$-norm minimization [7], $\ell_p$-norm minimization [8, 9], and the Bayesian method [10]) have been used to find sparse solutions. Because the $\ell_p$-norm ($0 < p < 1$) is an effective measure of sparsity, some researchers are interested in $\ell_p$-norm minimization:
$$\min_{x} \|x\|_p^p = \sum_{i=1}^{n} |x_i|^p \quad \text{s.t.} \quad Ax = y, \quad 0 < p < 1. \tag{2}$$
The piecewise convex optimization (2) can be solved using existing algorithms, including the Focal Underdetermined System Solver (FOCUSS) [11], the Affine Scaling Transformation (AST) method [12], Iteratively Reweighted $\ell_1$ minimization (IRL1) [13], and the Iterative Thresholding Method (ITM) [14]. The solutions they obtain, however, may be suboptimal sparse solutions.
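To make the problem setup concrete, the following NumPy sketch builds an underdetermined system $y = Ax$ with a known sparse signal and evaluates the $\ell_0$ count and the $\ell_p$ quasi-norm from problem (2). The dimensions $m$, $n$, the sparsity level $k$, and $p = 0.5$ are illustrative choices, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system y = A x: more unknowns (n) than equations (m).
m, n = 20, 50
A = rng.standard_normal((m, n))   # a Gaussian matrix is full row rank with probability 1

# Ground-truth sparse signal with k nonzero entries.
k = 4
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

y = A @ x_true

# l0 "norm" (number of nonzeros) and the lp quasi-norm from problem (2).
p = 0.5
l0 = np.count_nonzero(x_true)
lp = np.sum(np.abs(x_true) ** p)
print(l0, lp)
```

Any $x$ satisfying $Ax = y$ is feasible; the optimization problems above search this (infinite) feasible set for the element with the smallest sparsity measure.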

In this paper, we propose a novel Adaptive Gradient Projection (AGP) algorithm for the piecewise convex optimization (2). This algorithm moves the iteration solution out of the attraction basin of a suboptimal sparse solution and finds a sparser solution in another attraction basin. The convergence analysis reveals that AGP performs better than AST when finding the global optimal sparse solution. The experimental results show that the detection performances of compressed spectrum sensing based on AGP are greatly improved compared to other algorithms.

The remainder of the paper is organized as follows. In Section 2, we derive an Adaptive Gradient Projection algorithm that can find a sparser solution than AST. Section 3 presents the application of AGP to compressed spectrum sensing. The detection performances based on AGP are compared to the traditional spectrum sensing method. Finally, conclusions are presented in Section 4.

#### 2. Adaptive Gradient Projection Algorithm for Piecewise Convex Optimization

##### 2.1. Description of Affine Scaling Transformation Method

For convenience, problem (2) can be rewritten as
$$\min_{x} f(x) = \sum_{i=1}^{n} |x_i|^p \quad \text{s.t.} \quad Ax = y. \tag{3}$$
Note that the objective function $f(x)$ is nondifferentiable at zero components. AST uses an affine scaling transformation to solve problem (3). For the $k$th iteration, it defines a symmetric scaling matrix $W_k = \operatorname{diag}\bigl(|x_k(i)|^{1-p/2}\bigr)$ and a scaled variable $q_k = W_k^{+} x_k$. Thus, problem (3) in $x$ is transformed into a problem in the scaled variable, where $A_k = A W_k$.
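A small numerical check clarifies why this scaling is useful. Assuming the standard FOCUSS/AST scaling $W_k = \operatorname{diag}(|x_k(i)|^{1-p/2})$, the scaled variable satisfies $q_k(i) = \operatorname{sign}(x_k(i))\,|x_k(i)|^{p/2}$, so the smooth quantity $\|q_k\|_2^2$ equals the nonsmooth $\ell_p$ objective at the current iterate:

```python
import numpy as np

p = 0.5
x_k = np.array([0.5, -2.0, 1.5])           # illustrative iterate with no zero entries

# Affine scaling W_k = diag(|x_k(i)|^{1 - p/2}); here elementwise since W_k is diagonal.
w = np.abs(x_k) ** (1.0 - p / 2.0)
q_k = x_k / w                               # q_k(i) = sign(x_k(i)) |x_k(i)|^{p/2}

lp_value = np.sum(np.abs(x_k) ** p)         # the lp objective f(x_k)
print(np.sum(q_k ** 2), lp_value)           # the two values coincide
```

This identity is the reason the transformed problem can be attacked with smooth (gradient-based) machinery in the scaled coordinates.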

Given a search direction $d_k$, the new solution is
$$x_{k+1} = x_k + \mu\, W_k \bigl(I - A_k^{+} A_k\bigr) d_k,$$
where $I$ is an identity matrix, $A_k^{+}$ is the Moore-Penrose pseudoinverse of $A_k$, and $\mu$ is a stepsize.

Using a fixed stepsize $\mu$, AST is summarized as follows.
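As a concrete illustration of this family of iterations, the following sketch implements the classical FOCUSS fixed-point form of the affine-scaling update, $x_{k+1} = W_k (A W_k)^{+} y$ with $W_k = \operatorname{diag}(|x_k(i)|^{1-p/2})$; the stepsize-controlled AST variant discussed here moves along the same scaled directions but controls the step length. All dimensions and tolerances are illustrative assumptions:

```python
import numpy as np

def focuss(A, y, p=0.5, iters=50, eps=1e-10):
    """FOCUSS-style affine-scaling iteration for min ||x||_p^p s.t. Ax = y.

    W_k = diag(|x_k(i)|^{1 - p/2}) rescales the variables; entries that are
    small at one iteration are compressed further at the next, so the
    iterates converge to a sparse feasible solution.
    """
    x = A.T @ np.linalg.pinv(A @ A.T) @ y         # minimum l2-norm feasible start
    for _ in range(iters):
        w = np.abs(x) ** (1.0 - p / 2.0)
        AW = A * w                                 # A @ diag(w), columnwise scaling
        x_new = w * (np.linalg.pinv(AW) @ y)       # x_{k+1} = W_k (A W_k)^+ y
        if np.linalg.norm(x_new - x) < eps:        # stop once the iterates settle
            x = x_new
            break
        x = x_new
    return x

# Demo on a random underdetermined system with a 3-sparse ground truth.
rng = np.random.default_rng(1)
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.standard_normal(k)
y = A @ x_true

x_hat = focuss(A, y)
```

Note how the iterate stays feasible at every step (each update exactly solves $A W_k q = y$ in the least-squares sense), which matches the constrained formulation of problem (3).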

The convergence theorem of AST is as follows.

Theorem 1. *Starting from an initial point $x_0$, AST generates a sequence $\{x_k\}$ converging to a sparse solution $x^{*}$ of problem (3).*

From (9), we see that some small entries of the iteration solution $x_k$ converge to zero, because they are sequentially compressed by the scaling elements in $W_k$. Thus, the sequence of iteration solutions of AST converges to a sparse solution $x^{*}$, which may be close to the initial point $x_0$. However, this solution may not be the sparsest solution of problem (3). Making the iteration solution enter the attraction basin of another sparse solution is therefore essential to reduce the effect of the initial point. Furthermore, Theorem 1 shows that AST obtains $x^{*}$ only in the limit of infinitely many iterations, which affects the convergence rate. How to enhance the convergence speed of AST is another problem to be solved.

##### 2.2. Derivation of Adaptive Gradient Projection Algorithm

To solve the above two problems, we first consider the convergence behavior of AST when an iteration solution has some zero entries.

Lemma 2. *The Moore-Penrose pseudoinverse of a block matrix can be expressed in terms of a Schur complement of its blocks.*

Lemma 3. *If an iteration solution $x_k$ of problem (3) has some zero entries, then these zero entries will not change in the remaining iterations.*

For simplicity, let the front entries of $x_k$ be zero. Then, $x_{k+1}$ can be computed as in (11). Partitioning $A_k$, we calculate the matrix in (11) in the block form (12), and its Moore-Penrose pseudoinverse matrix is given by (13), whose blocks are defined in (14).

Because $A$ is full row rank, the partitioned matrix in (12) is also full rank and thus has an invertible submatrix. For convenience, assume that the leading block is invertible; otherwise, we repartition to obtain an invertible submatrix according to Lemma 2. With the block quantities defined accordingly, we obtain (15).

Substituting (12) and (15) into (13), we obtain (16). Equation (11) then becomes (17). The front entries of $x_{k+1}$ in (17) are still zero, and they cannot become nonzero in the remaining iterations.

Lemma 3 implies that an unknown sparse solution can be identified in a smaller and smaller subspace. This motivates us to accelerate convergence by sequentially shrinking the search range. Meanwhile, the choice of subspace should not be limited by the initial point; that is, the iteration solution must be able to enter a subspace that does not contain the initial point. AST uses a fixed stepsize, so it cannot move the iteration solution from one octant to another distant octant; its iterates concentrate in the attraction basin of a suboptimal sparse solution. Moving the iteration solutions out of the current attraction basin is the goal of the Adaptive Gradient Projection algorithm.

To find the search direction at an iteration solution $x_k$, we define the gradient componentwise as
$$[\nabla f(x_k)]_i = p\,\operatorname{sign}\bigl(x_k(i)\bigr)\,|x_k(i)|^{p-1} \quad \text{for } x_k(i) \neq 0, \tag{18}$$
where the partial derivative is not computed at $x_k(i) = 0$, and the corresponding component is deleted from the gradient. Definition (18) avoids the nondifferentiability of the $\ell_p$-norm minimization, so the gradient can be calculated in a subspace. We provide the algorithm derivation below.
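The restricted gradient of definition (18) can be sketched directly. The helper name and the support threshold `tol` below are illustrative assumptions; the formula itself is the standard componentwise derivative of $\sum_i |x_i|^p$ on the nonzero support:

```python
import numpy as np

def lp_gradient_on_support(x, p=0.5, tol=1e-12):
    """Gradient of f(x) = sum_i |x_i|^p restricted to the nonzero support.

    At x_i = 0 the lp objective is nondifferentiable, so (as in definition
    (18)) that component is simply dropped: the gradient is computed only
    in the subspace spanned by the currently nonzero entries.
    """
    support = np.flatnonzero(np.abs(x) > tol)
    g = p * np.sign(x[support]) * np.abs(x[support]) ** (p - 1.0)
    return g, support
```

For example, at $x = (0, 4, -1)$ with $p = 0.5$ the zero component is skipped, and the two remaining partial derivatives are $0.5 \cdot 4^{-1/2} = 0.25$ and $-0.5 \cdot 1^{-1/2} = -0.5$.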

Initially, we reset the initial point $x_0$, and an index set records the locations of its nonzero entries.

For the $k$th iteration, let $\Gamma_k$ be the index set recording the locations of the nonzero entries of $x_k$, and let $A_{\Gamma_k}$ be the submatrix whose column vectors are selected from $A$ according to $\Gamma_k$. The gradient $g_k$ of (18) is computed on $\Gamma_k$, and the search direction is its projection onto the null space of $A_{\Gamma_k}$:
$$d_k = -\bigl(I - A_{\Gamma_k}^{+} A_{\Gamma_k}\bigr) g_k,$$
where $I$ is an identity matrix. In the span of this null space, the new solution is defined, following (6), as
$$x_{k+1}(\Gamma_k) = x_k(\Gamma_k) + \mu_k d_k,$$
where $\mu_k$ is a stepsize.
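The projection step above can be sketched as follows. The function name and the support threshold are assumptions for illustration; the key property, which the sketch preserves, is that the direction lies in the null space of the support submatrix, so moving along it keeps $Ax = y$ satisfied:

```python
import numpy as np

def projected_direction(A, x, p=0.5, tol=1e-12):
    """Negative lp-gradient projected onto the null space of the columns of
    A on the current support, so that a step along it leaves A @ x fixed."""
    s = np.flatnonzero(np.abs(x) > tol)            # current support indices
    As = A[:, s]                                   # columns of A on the support
    g = p * np.sign(x[s]) * np.abs(x[s]) ** (p - 1.0)
    P = np.eye(len(s)) - np.linalg.pinv(As) @ As   # projector onto null(As)
    d = np.zeros_like(x)
    d[s] = -P @ g                                  # off-support entries stay zero
    return d
```

Because $A_{\Gamma}(I - A_{\Gamma}^{+} A_{\Gamma}) = 0$ by the pseudoinverse identity $A_{\Gamma} A_{\Gamma}^{+} A_{\Gamma} = A_{\Gamma}$, any stepsize along this direction preserves feasibility.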

The major challenge in solving problem (3) is to identify the most appropriate subspace in which the sparse solution is located. Equation (22) shows that the stepsize is crucial for finding an appropriate subspace; thus, a function of the stepsize $\mu$ is defined to investigate its properties:
$$\varphi(\mu) = \sum_{i \in \Gamma_k} \bigl|x_k(i) + \mu\, d_k(i)\bigr|^{p}.$$
Figure 1(b) shows that the piecewise convex function $\varphi(\mu)$ has nonunique minimum points, which correspond to stepsizes at which entries of $x_k + \mu d_k$ become zero. We can compute these extreme points.
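The candidate minimum points just described are easy to enumerate: a component $x_i + \mu d_i$ crosses zero at $\mu_i = -x_i / d_i$, and for $0 < p < 1$ the local minima of the piecewise function lie at these breakpoints. The sketch below lists the forward ($\mu > 0$) candidates and evaluates $\varphi$ at them; restricting to forward steps and the function names are assumptions for illustration:

```python
import numpy as np

def breakpoint_stepsizes(x, d, tol=1e-12):
    """Candidate stepsizes at which a component of x + mu*d crosses zero.

    For phi(mu) = sum_i |x_i + mu*d_i|^p with 0 < p < 1, the local minima
    of this piecewise convex function sit at these breakpoints, so an
    adaptive stepsize can be chosen by comparing phi at each candidate."""
    mask = np.abs(d) > tol              # components actually moved by d
    mus = -x[mask] / d[mask]
    return np.sort(mus[mus > 0])        # forward steps only (assumption)

def phi(x, d, mu, p=0.5):
    """The lp objective along the search direction d."""
    return np.sum(np.abs(x + mu * d) ** p)
```

Evaluating $\varphi$ at each breakpoint and picking the best one is what lets the iterate jump past the nearest local basin into the basin of a sparser solution, instead of creeping toward it with a fixed stepsize.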