Abstract

Signal sparse representation has attracted much attention in a wide range of application fields. A central aim of signal sparse representation is to find the sparse solution with the fewest nonzero entries of an underdetermined linear system, which leads to various optimization problems. In this paper, we propose an Adaptive Gradient Projection (AGP) algorithm to solve the piecewise convex optimization arising in signal sparse representation. To find a sparser solution, AGP provides an adaptive stepsize that moves the iteration solution out of the attraction basin of a suboptimal sparse solution and into the attraction basin of a sparser solution. Theoretical analysis shows its fast convergence property. Experimental results on a real-world application, compressed spectrum sensing, show that AGP outperforms traditional detection algorithms in low signal-to-noise ratio (SNR) environments.

1. Introduction

The marked advances in signal processing in recent years have been driven by the emergence of new signal models and their applications. Signal sparse representation is an effective model for solving real-world problems, such as brain signal processing [1], face recognition [2], compressed spectrum sensing [3], and singing voice separation [4].

Given a signal $y \in \mathbb{R}^m$, signal sparse representation aims to identify the sparsest solution $x \in \mathbb{R}^n$ of an underdetermined linear system $y = Ax$, where $A \in \mathbb{R}^{m \times n}$ ($m < n$) is a full row rank matrix. The sparsity of a solution can be measured by the $\ell_0$-norm (the number of nonzero entries), which leads to the following optimization problem:

$$\min_{x} \|x\|_0 \quad \text{s.t.} \quad Ax = y. \qquad (1)$$

Unfortunately, problem (1) is NP-hard [5]. Many methods (e.g., the greedy algorithm [6], the $\ell_1$-norm minimization [7], the $\ell_p$-norm minimization [8, 9], and the Bayesian method [10]) have been used to find sparse solutions. Because the $\ell_p$-norm ($0 < p < 1$) is the most effective measurement of sparsity, some researchers are interested in $\ell_p$-norm minimization:

$$\min_{x} \|x\|_p^p = \sum_{i=1}^{n} |x(i)|^p \quad \text{s.t.} \quad Ax = y. \qquad (2)$$

The piecewise convex optimization (2) can be solved using existing algorithms, including the FOcal Underdetermined System Solver (FOCUSS) [11], the Affine Scaling Transformation (AST) method [12], the Iteratively Reweighted $\ell_1$ minimization (IRL1) [13], and the Iterative Thresholding Method (ITM) [14]. The solutions they obtain, however, may be suboptimal sparse solutions.
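For intuition, the short Python snippet below (an illustration added here, not part of the paper's experiments) evaluates the two sparsity measures on a toy vector:

```python
import numpy as np

x = np.array([0.0, 2.0, 0.0, -3.0])
p = 0.5
l0 = np.count_nonzero(x)        # ||x||_0: number of nonzero entries
lpp = np.sum(np.abs(x) ** p)    # ||x||_p^p with 0 < p < 1
print(l0, lpp)                  # 2  3.146...
```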

In this paper, we propose a novel Adaptive Gradient Projection (AGP) algorithm for the piecewise convex optimization (2). The algorithm moves the iteration solution out of the attraction basin of a suboptimal sparse solution and finds a sparser solution in another attraction basin. The convergence analysis reveals that AGP performs better than AST in finding the global optimal sparse solution. The experimental results show that the detection performance of compressed spectrum sensing based on AGP is greatly improved compared to that of other algorithms.

The remainder of the paper is organized as follows. In Section 2, we derive an Adaptive Gradient Projection algorithm that can find a sparser solution than AST. Section 3 presents the application of AGP to compressed spectrum sensing, where the detection performance based on AGP is compared to that of traditional spectrum sensing methods. Finally, conclusions are presented in Section 4.

2. Adaptive Gradient Projection Algorithm for Piecewise Convex Optimization

2.1. Description of Affine Scaling Transformation Method

For convenience, problem (2) can be rewritten as

$$\min_{x} f(x) = \sum_{i=1}^{n} |x(i)|^p \quad \text{s.t.} \quad Ax = y, \quad 0 < p < 1. \qquad (3)$$

Note that the objective function is nondifferentiable at zero components. AST uses an affine scaling transformation to solve problem (3). For the $k$-th iteration, it defines a symmetric scaling matrix and a scaled variable

$$W_k = \operatorname{diag}\left(|x_k(i)|^{1 - p/2}\right), \quad x = W_k q. \qquad (4)$$

Thus, problem (3) in $x$ is transformed to the problem in $q$:

$$\min_{q} \|q\|_2^2 \quad \text{s.t.} \quad \tilde{A}_k q = y, \qquad (5)$$

where $\tilde{A}_k = A W_k$.

Given a search direction $d_k = -(I - \tilde{A}_k^{+}\tilde{A}_k) q_k$, the new solution is

$$x_{k+1} = W_k (q_k + \alpha d_k) = W_k \left((1-\alpha) q_k + \alpha \tilde{A}_k^{+} y\right), \qquad (6)$$

where $I$ is an identity matrix, $\tilde{A}_k^{+} = \tilde{A}_k^T (\tilde{A}_k \tilde{A}_k^T)^{-1}$ is a Moore-Penrose pseudoinverse matrix, and $\alpha$ is a stepsize.

Using a fixed stepsize $\alpha = 1$, AST is summarized as

$$W_k = \operatorname{diag}\left(|x_k(i)|^{1 - p/2}\right), \qquad (7)$$
$$\tilde{A}_k = A W_k, \qquad (8)$$
$$x_{k+1} = W_k \tilde{A}_k^{+} y. \qquad (9)$$
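As a concrete reference, the following minimal Python/NumPy sketch implements the fixed-stepsize iteration (7)-(9); the initial point (the minimum $\ell_2$-norm solution) and the stopping test are our illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def ast_focuss(A, y, p=0.5, n_iter=50, tol=1e-10):
    """Fixed-stepsize AST iteration: x_{k+1} = W_k (A W_k)^+ y (a sketch)."""
    x = np.linalg.pinv(A) @ y                 # assumed initial point
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)      # scaling elements of W_k  (7)
        Ak = A * w                            # A @ diag(w) = A W_k      (8)
        x_new = w * (np.linalg.pinv(Ak) @ y)  # W_k (A W_k)^+ y          (9)
        if np.linalg.norm(x_new - x) < tol:   # assumed stopping test
            break
        x = x_new
    return x
```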

The convergence theorem of AST is as follows.

Theorem 1. Starting from an initial point $x_0$, AST generates a sequence $\{x_k\}$ converging to a sparse solution $x^*$ of problem (3).

From (9), we see that some small entries of the iteration solution converge to zero, because they are sequentially compressed by the scaling elements in $W_k$. Thus, the sequence of iteration solutions of AST will converge to a sparse solution $x^*$, which may be close to the initial point $x_0$. However, this solution may not be the sparsest solution of problem (3). Making the iteration solution enter the attraction basin of another sparse solution is very important to reduce the effect of the initial point. Furthermore, Theorem 1 shows that AST obtains $x^*$ only within an infinite number of iterations, which affects the convergence rate. How to enhance the convergence speed of AST is another problem to be solved.

2.2. Derivation of Adaptive Gradient Projection Algorithm

To solve the above two problems, we first consider the convergence process of AST if an iteration solution has some zero entries.

Lemma 2. Given a block matrix $M = \begin{pmatrix} P & Q \\ R & S \end{pmatrix}$ with $P$ invertible, we get

$$M^{-1} = \begin{pmatrix} P^{-1} + P^{-1} Q \Delta^{-1} R P^{-1} & -P^{-1} Q \Delta^{-1} \\ -\Delta^{-1} R P^{-1} & \Delta^{-1} \end{pmatrix}, \qquad (10)$$

where $\Delta = S - R P^{-1} Q$ is a Schur complement.
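A quick numerical check of this block-inversion identity (our illustration, with randomly generated blocks):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # invertible upper-left block
Q = rng.standard_normal((3, 2))
R = rng.standard_normal((2, 3))
S = rng.standard_normal((2, 2)) + 3 * np.eye(2)

M = np.block([[P, Q], [R, S]])
Pinv = np.linalg.inv(P)
D = S - R @ Pinv @ Q                              # Schur complement of P
Dinv = np.linalg.inv(D)

# Block-inversion identity of Lemma 2.
Minv = np.block([
    [Pinv + Pinv @ Q @ Dinv @ R @ Pinv, -Pinv @ Q @ Dinv],
    [-Dinv @ R @ Pinv, Dinv],
])
print(np.allclose(Minv, np.linalg.inv(M)))        # True
```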

Lemma 3. If an iteration solution $x_k$ of problem (3) has zero entries, then these zero entries will not change in the remaining iterations.

For simplicity, let the front $t$ entries of $x_k$ be zero (i.e., $x_k = (0, \ldots, 0, x_k(t+1), \ldots, x_k(n))^T$), where $1 \le t < n$ and $x_k(i) \neq 0$ for $i = t+1, \ldots, n$. Then, $x_{k+1}$ can be computed as follows:

$$x_{k+1} = W_k \tilde{A}_k^{+} y, \qquad (11)$$

where $W_k = \begin{pmatrix} 0 & 0 \\ 0 & \bar{W}_k \end{pmatrix}$ with $\bar{W}_k = \operatorname{diag}\left(|x_k(t+1)|^{1-p/2}, \ldots, |x_k(n)|^{1-p/2}\right)$. Partitioning $A = (A_1, A_2)$, we calculate $\tilde{A}_k = A W_k$ in (11) to be

$$\tilde{A}_k = (0, A_2 \bar{W}_k), \qquad (12)$$

and its Moore-Penrose pseudoinverse matrix is

$$\tilde{A}_k^{+} = \begin{pmatrix} 0 \\ (A_2 \bar{W}_k)^{+} \end{pmatrix}, \qquad (13)$$

where

$$(A_2 \bar{W}_k)^{+} = (A_2 \bar{W}_k)^T \left(A_2 \bar{W}_k^2 A_2^T\right)^{-1}. \qquad (14)$$

Because $A$ is full row rank, $\tilde{A}_k$ is also full rank, which has an invertible submatrix. For convenience, assume that $A_2 \bar{W}_k^2 A_2^T$ is an invertible submatrix. Otherwise, we partition $A_2 \bar{W}_k^2 A_2^T$ into blocks and obtain its inverse

$$\left(A_2 \bar{W}_k^2 A_2^T\right)^{-1} \qquad (15)$$

from an invertible submatrix according to Lemma 2.

Substituting (12) and (15) into (13), we have

$$\tilde{A}_k^{+} = \begin{pmatrix} 0 \\ \bar{W}_k A_2^T \left(A_2 \bar{W}_k^2 A_2^T\right)^{-1} \end{pmatrix}, \qquad (16)$$

where the symmetry of $\bar{W}_k$ is used. Equation (11) then becomes

$$x_{k+1} = W_k \tilde{A}_k^{+} y = \begin{pmatrix} 0 \\ \bar{W}_k^2 A_2^T \left(A_2 \bar{W}_k^2 A_2^T\right)^{-1} y \end{pmatrix}. \qquad (17)$$

The front $t$ entries of $x_{k+1}$ in (17) are still zero, and the reverse cannot occur in the remaining iterations.

Lemma 3 implies that an unknown sparse solution can be identified in smaller and smaller subspaces. This motivates us to accelerate convergence by sequentially shrinking the solving range. Meanwhile, the choice of subspace should not be limited by the initial point; that is, the iteration solution must be able to enter a subspace that does not contain the initial point. AST chooses a fixed stepsize, so it cannot move the iteration solution from one octant to another distant octant; these solutions concentrate in the attraction basin of the suboptimal sparse solution. Moving the iteration solutions out of the current attraction basin is the goal of the Adaptive Gradient Projection algorithm.

To find the search direction at an iteration solution $x_k$, we define the gradient

$$g_k(i) = p \operatorname{sgn}(x_k(i)) \, |x_k(i)|^{p-1}, \quad x_k(i) \neq 0, \qquad (18)$$

where the partial derivative is not computed for $x_k(i) = 0$, and the corresponding component is deleted from $g_k$. Definition (18) avoids the nondifferentiability of $\ell_p$-norm minimization, so that the gradient can be calculated in a subspace. We provide the algorithm derivation below.

Initially, we reset an initial point as

$$\bar{x}_0 = x_0(S_0), \qquad (19)$$

where the index set

$$S_0 = \{ i : x_0(i) \neq 0 \} \qquad (20)$$

records the locations of the nonzero entries of $x_0$. For example, if $x_0 = (0, 2, 0, -3)^T$, then $S_0 = \{2, 4\}$ and $\bar{x}_0 = (2, -3)^T$.

For the $k$-th iteration, let $\bar{x}_k$ denote the reduced iteration solution and $\bar{A}_k$ the reduced matrix; the gradient and the search direction are

$$\bar{g}_k(i) = p \operatorname{sgn}(\bar{x}_k(i)) \, |\bar{x}_k(i)|^{p-1}, \quad \bar{d}_k = -\left(I - \bar{A}_k^{+} \bar{A}_k\right) \bar{g}_k, \qquad (21)$$

where $S_k$ records the locations of the nonzero entries, the column vectors of $\bar{A}_k$ are selected from $A$ due to $S_k$, and $I$ is an identity matrix. Along this direction, the new solution is defined due to (6):

$$\bar{x}_{k+1} = \bar{x}_k + \alpha \bar{d}_k, \qquad (22)$$

where $\alpha$ is a stepsize.
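A minimal sketch of this step in Python (assuming, as in the derivation, that every entry of the reduced solution is nonzero; names are ours):

```python
import numpy as np

def search_direction(A_bar, x_bar, p=0.5):
    """Gradient (18) and projected search direction (21) in the subspace.

    Assumes zero entries have already been removed together with the
    corresponding columns of A.
    """
    g = p * np.sign(x_bar) * np.abs(x_bar) ** (p - 1.0)            # (18)
    proj = np.eye(A_bar.shape[1]) - np.linalg.pinv(A_bar) @ A_bar  # null-space projector
    return -proj @ g                                               # (21)
```

Because the direction lies in the null space of $\bar{A}_k$, every point $\bar{x}_k + \alpha \bar{d}_k$ remains feasible.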

The major challenge in solving problem (3) is to identify the most appropriate subspace where the sparse solution is located. Equation (22) states that the stepsize $\alpha$ is important to find an appropriate subspace; thus, a function with respect to $\alpha$ is defined to investigate the property of the stepsize:

$$f(\alpha) = \left\| \bar{x}_k + \alpha \bar{d}_k \right\|_p^p = \sum_i \left| \bar{x}_k(i) + \alpha \bar{d}_k(i) \right|^p. \qquad (23)$$

Figure 1(b) shows that the piecewise convex function $f(\alpha)$ has nonunique minimum points that correspond to the zero entries of $\bar{x}_k + \alpha \bar{d}_k$. We can compute these extreme points.

Without loss of generality, let the $i$-th entry of $\bar{x}_k + \alpha \bar{d}_k$ equal zero (i.e., $\bar{x}_k(i) + \alpha \bar{d}_k(i) = 0$). Then, the extreme point is computed as

$$\alpha_i = -\frac{\bar{x}_k(i)}{\bar{d}_k(i)}, \qquad (24)$$

where $\bar{d}_k(i) \neq 0$ and $i \in S_k$. An adaptive stepsize is chosen to find the minimal objective function value

$$\alpha_k = \arg\min_{\alpha_i} f(\alpha_i), \qquad (25)$$

and the new solution is written as

$$\bar{x}_{k+1} = \bar{x}_k + \alpha_k \bar{d}_k. \qquad (26)$$

The minimum point $\alpha_k$, marked in Figure 1(b), is determined by comparing three extreme values. Using the adaptive stepsize to obtain an iteration solution is beneficial to accelerate convergence. In contrast, AST cannot obtain a minimum point along the search direction in (6), and the corresponding point is marked as “+” in Figure 1(a); using the fixed stepsize makes the iteration solutions gather in the adjacent region of the initial point. On the other hand, we set an entry of $\bar{x}_{k+1}$ to zero when its magnitude falls below a threshold $\varepsilon$. AGP can then determine multiple zero entries at each iteration, so that it quickly identifies the subspace where the sparse solution is located.
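The stepsize rule (24)-(25) reduces to comparing finitely many candidates; a short sketch (our naming, assuming $\bar{d}_k$ has at least one nonzero entry):

```python
import numpy as np

def adaptive_stepsize(x_bar, d_bar, p=0.5):
    """Adaptive stepsize (25): evaluate f at each extreme point (24)
    and keep the minimizer."""
    f = lambda a: np.sum(np.abs(x_bar + a * d_bar) ** p)   # objective (23)
    nz = d_bar != 0.0
    candidates = -x_bar[nz] / d_bar[nz]                    # extreme points (24)
    return min(candidates, key=f)
```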

After some entries of $\bar{x}_{k+1}$ become zero, the set $T_k$ of their indices is deleted from $S_k$ (i.e., $S_{k+1} = S_k \setminus T_k$). We define $\bar{x}_{k+1}(S_{k+1})$, which represents that the entries of $\bar{x}_{k+1}$ located in $S_{k+1}$ are retained. Thus, (26) can be restated as

$$\bar{x}_{k+1} = \left( \bar{x}_k + \alpha_k \bar{d}_k \right)(S_{k+1}). \qquad (27)$$

For example, if $\bar{x}_k + \alpha_k \bar{d}_k = (0, 2, -3)^T$ and $S_k = \{1, 2, 4\}$, then $S_{k+1} = \{2, 4\}$ and $\bar{x}_{k+1} = (2, -3)^T$. The iterations are performed until the needed support set is determined.

Returning the final solution $\bar{x}_K$ to the $n$-dimensional space is convenient to obtain the sparse solution $x^*$. By presetting $x^* = 0 \in \mathbb{R}^n$, the assignment $x^*(S_K) = \bar{x}_K$ represents that the entries of $x^*$ located in $S_K$ are assigned to $\bar{x}_K$. For example, if $\bar{x}_K = (2, -3)^T$ and $S_K = \{2, 4\}$, then $x^* = (0, 2, 0, -3)^T$.

As discussed above, AGP is summarized as follows.

Algorithm 4 (Adaptive Gradient Projection Algorithm).
Step 1 (initialization). Given a threshold $\varepsilon$, compute an initial point $\bar{x}_0$ due to (19), and set the index set $S_0$ due to (20) and the iteration index $k = 0$.
Step 2 (iteration).
while stopping criterion not met do
    Compute $\bar{g}_k$ and $\bar{d}_k$ due to (21);
    Choose an adaptive stepsize $\alpha_k$ by (25);
    Compute a solution $\bar{x}_{k+1} = \bar{x}_k + \alpha_k \bar{d}_k$;
    Update the solution $\bar{x}_{k+1} = \bar{x}_{k+1}(S_{k+1})$ with $S_{k+1} = S_k \setminus T_k$;
    Increase iteration index $k = k + 1$.
end while
Step 3 (solution). An optimal sparse solution $x^*$ with $x^*(S_K) = \bar{x}_K$.
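The sketch below assembles Algorithm 4 end to end in Python. It reflects our minimal reading of the steps above: the initial point, the tolerance $\varepsilon$, and the termination test are assumptions, not the paper's exact choices.

```python
import numpy as np

def agp(A, y, p=0.5, eps=1e-8, max_iter=100):
    """Adaptive Gradient Projection: a minimal sketch of Algorithm 4."""
    n = A.shape[1]
    x0 = np.linalg.pinv(A) @ y                     # assumed initial point, cf. (19)
    support = np.flatnonzero(np.abs(x0) > eps)     # index set S_0, cf. (20)
    x_bar = x0[support]
    for _ in range(max_iter):
        A_bar = A[:, support]                      # columns selected by S_k
        g = p * np.sign(x_bar) * np.abs(x_bar) ** (p - 1.0)              # (18)
        d = -(np.eye(len(support)) - np.linalg.pinv(A_bar) @ A_bar) @ g  # (21)
        nz = np.abs(d) > eps
        if not nz.any():                           # no descent direction left
            break
        cands = -x_bar[nz] / d[nz]                 # extreme points (24)
        alpha = min(cands, key=lambda a: np.sum(np.abs(x_bar + a * d) ** p))
        x_bar = x_bar + alpha * d                  # update (26)
        keep = np.abs(x_bar) > eps                 # zeroed entries leave S_k
        support, x_bar = support[keep], x_bar[keep]
    x = np.zeros(n)                                # return to n-dim space
    x[support] = x_bar
    return x
```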

2.3. Convergence Analysis

The convergence property of AGP is discussed as follows.

Lemma 5. AGP can determine at least one zero entry at each iteration.

Due to (26), there are three cases in which AGP determines zero entries at each iteration. The first case is that one entry of $\bar{x}_{k+1}$ is set to zero, if the iteration solution moves from one octant to the coordinate surface of another distant octant. In the second case, more than one entry of $\bar{x}_{k+1}$ becomes zero when the new iteration solution locates exactly on a coordinate axis of another distant octant. In the third case, if $\bar{x}_{k+1}$ locates beside a coordinate axis of another distant octant, we set its entries below the threshold $\varepsilon$ to zero. Therefore, AGP can determine at least one zero entry at each iteration.

Theorem 6. Let $x^*$ with $K$ nonzero entries be a sparse solution of problem (3). The number of iterations of AGP is no more than $n - K$.

If $x^*$ has $K$ nonzero entries, then the locations of $n - K$ zero entries need to be determined. According to Lemma 5, AGP obtains one or more zero entries at each iteration, and these zero entries do not change in the remaining iterations based on the definition in (20). Therefore, the number of iterations of AGP is no more than $n - K$.

Remark 7. Theorem 6 shows that AGP obtains a sparse solution within a finite number of iterations, while Theorem 1 shows that AST requires an infinite number of iterations to obtain a sparse solution. In theory, the number of iterations of AGP is therefore less than that of AST.

Figure 2 gives an example to show the improved convergence performance of AGP compared to AST. There are three sparse solutions, one of which is the global optimal sparse solution. To clearly display the iteration process of AGP, we return all iteration solutions $\bar{x}_k$ to the three-dimensional space, which forms a sequence $\{x_k\}$ with $x_k(S_k) = \bar{x}_k$; that is, the entries of $x_k$ located in $S_k$ are assigned to $\bar{x}_k$.

Starting from the initial point $x_0$, the sequence solved by AST converges to a suboptimal sparse solution in Figure 2(a). Figure 2(c) shows that all iteration solutions concentrate in the attraction basin of this suboptimal solution in the contour map. The second iteration solution of AGP in Figure 2(b), however, moves away from the suboptimal solution and reaches another octant. In Figure 2(d), the iteration solution moves out of the attraction basin of the suboptimal solution and enters the attraction basin of the global optimal solution, in which the adaptive stepsize plays an important role. This example verifies that AGP can find a sparser solution than AST by calculating the minimum point along the search direction at each iteration.

2.4. Comparison of Convergence Performance

To compare the convergence performance of AST and AGP, we conduct an experiment on the $\ell_p$-norm problem in (29). There exist two sparse solutions, of which one is the global optimal sparse solution, and both algorithms start from the same initial point. Comparing the results in Table 1 with Table 2, we see that AGP quickly finds the global optimal solution in smaller and smaller subspaces, while AST, limited by the initial point, obtains only the suboptimal solution. On the other hand, the computing times of AST and AGP are 0.0063 s and 0.0089 s, respectively. AGP spends some time finding an adaptive stepsize at each iteration, but it obtains the global minimizer. Obviously, it is more important to find the global optimal sparse solution of problem (3).
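The flavor of this comparison can be reproduced on a toy problem. The snippet below (random data, our assumptions throughout, using the agp() function sketched after Algorithm 4) shows the typical call pattern; for the nonconvex problem (3), recovery of the true support is typical but not guaranteed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 20, 10, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = agp(A, y, p=0.5)                       # agp() from the sketch above
print(np.flatnonzero(np.abs(x_hat) > 1e-6))    # recovered support
print(np.flatnonzero(x_true))                  # true support
```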

3. Application of Adaptive Gradient Projection Algorithm in Compressed Spectrum Sensing

Compressive Spectrum Sensing (CSS) is considered for this study because it performs the same task as signal sparse representation. In [15, 16], the model of CSS is formulated as follows:

$$v = \Phi r = \Phi (s + w), \qquad (30)$$

where $v \in \mathbb{R}^M$ is a measurement, $s \in \mathbb{R}^N$ is a Primary User (PU) signal, $w$ is a Gaussian noise, $r = s + w$ is a Secondary User (SU) received signal, and $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) is a Gaussian random matrix. Assume that $s$ and $r$ can be represented on the discrete cosine basis $\Psi$ (i.e., $s = \Psi \theta$ and $r = \Psi \theta_r$), where $\theta$ and $\theta_r$ are spectrum coefficients. Let $A = \Phi \Psi$; the model in (30) can be reformulated as

$$v = A \theta_r. \qquad (31)$$

CSS intends to reconstruct $\theta_r$ from $v$, so the reconstruction error between the reconstructed signal $\hat{r} = \Psi \hat{\theta}_r$ and $r$ is used to evaluate the reconstruction performance.
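To make the model concrete, here is a short NumPy sketch that generates an instance of (30)-(31); the dimensions, noise level, and orthonormal DCT-II construction are our illustrative assumptions:

```python
import numpy as np

def dct_basis(N):
    """Orthonormal DCT-II basis matrix; column k is the k-th cosine vector."""
    t = np.arange(N)
    Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * t[:, None] + 1) * t[None, :] / (2 * N))
    Psi[:, 0] /= np.sqrt(2.0)
    return Psi

rng = np.random.default_rng(2)
N, M, K = 128, 64, 6
Psi = dct_basis(N)
theta = np.zeros(N)                               # K-sparse PU coefficients
theta[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
s = Psi @ theta                                   # PU signal s = Psi theta
w = 0.1 * rng.standard_normal(N)                  # Gaussian noise
r = s + w                                         # SU received signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # Gaussian random matrix
v = Phi @ r                                       # measurement (30)
A = Phi @ Psi                                     # sensing model (31): v = A theta_r
```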

Corresponding to $s$ and $r$ in Figure 3(a), Figure 3(b) shows that $\theta$ has $K$ nonzero entries, while $\theta_r$ is distributed throughout the frequency domain. The reconstructed spectrum coefficient $\hat{\theta}_r$ is obtained by solving the following problem:

$$\min_{\theta_r} \|\theta_r\|_p^p \quad \text{s.t.} \quad A \theta_r = v, \qquad (32)$$

where $0 < p < 1$. After 21 iterations, $\hat{\theta}_r$ solved by AST in Figure 3(d) is unsatisfactory, while $\hat{\theta}_r$ solved by AGP converges to $\theta_r$ after 17 iterations. The corresponding computing times of AST and AGP are 0.0154 s and 0.0227 s, respectively. The reconstructed signals are shown in Figure 3(c), and the reconstruction errors are 27.22% and 9.99%, respectively. At the cost of a little computing time, the reconstruction performance of AGP is improved compared to AST in a noisy environment, so AGP exhibits better noise suppression than AST.
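The error figures above are consistent with a relative $\ell_2$ error; a helper like the following (our assumption about the exact metric, which the paper does not spell out) computes it:

```python
import numpy as np

def reconstruction_error(r_hat, r):
    """Relative l2 reconstruction error, expressed as a fraction
    (multiply by 100 for the percentages quoted in the experiments)."""
    return np.linalg.norm(r_hat - r) / np.linalg.norm(r)
```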

Note that this noise-suppressing characteristic can greatly improve the detection performance of CSS, especially when the number of nonzero entries is not given in advance. We can reconstruct the SU received signal by choosing the larger nonzero entries of $\hat{\theta}_r$ with a threshold $\eta$. With this setting, Figure 4(b) shows that the denoised spectrum coefficients are sparser than the coefficients in Figure 3(d). The corresponding reconstructed SU received signals are shown in Figure 4(a). The reconstruction errors reduce to 21.71% and 8.50%, and the variance of the white noise is also reduced.
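The denoising step itself is a simple hard threshold; a sketch (the threshold $\eta$ is a tuning parameter, and the function name is ours):

```python
import numpy as np

def denoise_coefficients(theta_hat, eta):
    """Keep only entries whose magnitude reaches the threshold eta;
    smaller entries are treated as noise and set to zero."""
    return np.where(np.abs(theta_hat) >= eta, theta_hat, 0.0)
```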

Next, we consider the detection performance using the reconstructed SU signal with denoised coefficients. Let $P_f$ be a false alarm probability and $\lambda$ be a judgment threshold. Then, a binary hypothesis testing problem is used to determine whether the PU is present:

$$E \underset{H_0}{\overset{H_1}{\gtrless}} \lambda, \qquad (33)$$

where $E$ denotes the energy of the detection signal $\hat{r}$, $H_1$ means the PU is present, and $H_0$ means it is absent. Monte Carlo experiments are performed to estimate the detection probability $P_d = N_d / N_{mc}$, where $N_d$ is the number of trials accurately detecting the PU signal and $N_{mc}$ is the total number of trials. Given the false alarm probability and the number of trials, Figure 5 shows that the Energy Detection (ED) method [17] exhibits high mistaken-detection probabilities in low signal-to-noise ratio (SNR) environments. However, the detection probabilities of AST, AGP, IRL1, and ITM are greatly enhanced when the reconstructed SU received signals are solved from the denoised spectrum coefficients. For example, when the SNR equals −5 dB, the detection probabilities using AST, AGP, IRL1, and ITM improve by 75.47%, 79.25%, 58.49%, and 47.17%, respectively, compared with ED. Furthermore, when the SNR changes from −15 dB to −1 dB, AGP shows better detection performance than AST, IRL1, and ITM because of its improved reconstruction performance. Figure 6 shows the corresponding computing times of the five reconstruction algorithms, in which the time consumption of AGP increases by at most 43.66% and 23.39% compared with ITM and AST, respectively. By spending a little more computing time, AGP attains the best spectrum sensing result, especially in low SNR environments.
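A skeletal Monte Carlo harness for the test (33) looks as follows; `simulate_rhat` is a hypothetical placeholder for drawing one noisy measurement and reconstructing the detection signal with any of the algorithms, and the threshold calibration from $P_f$ is left outside the sketch:

```python
import numpy as np

def detect_pu(r_hat, lam):
    """Hypothesis test (33): decide H1 (PU present) if the energy of the
    detection signal exceeds the judgment threshold lam, else H0."""
    return float(np.sum(r_hat ** 2)) > lam

def detection_probability(n_trials, lam, simulate_rhat):
    """Monte Carlo estimate P_d = N_d / N_mc, where N_d counts trials in
    which the PU signal is detected; simulate_rhat() returns one
    reconstructed detection signal with the PU present."""
    hits = sum(detect_pu(simulate_rhat(), lam) for _ in range(n_trials))
    return hits / n_trials
```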

Note that the sparsity of the spectrum coefficient vector has an impact on the reconstruction performance of $\ell_p$-norm minimization. Given an SNR of −7 dB, Figure 7 displays that the detection probabilities of AST, AGP, IRL1, and ITM descend when $K$ changes from 2 to 10. This is because the measurement vector in (32) is not able to capture the whole information of an SU received signal when $K$ is larger than 6. Therefore, the unsatisfactory reconstruction results of $\ell_p$-norm minimization greatly affect the detection performance of CSS via AST, IRL1, and ITM, while the performance degradation of CSS via AGP is slower than that of the other three reconstruction algorithms. Meanwhile, AST, AGP, IRL1, and ITM cost more computing time to find the sparse solution, as shown in Figure 8. The above results reveal that the detection performance of CSS needs to be improved when the number of measurements is insufficient. ED determines the state of the PU by measuring the energy of an SU received signal, so the sparsity has little impact on its detection result and computing time.

4. Conclusions

Signal sparse representation has become a fundamental tool that is embedded into various application systems. One of its fundamental problems is finding a sparse coefficient vector. In this paper, we develop a novel AGP algorithm to solve the $\ell_p$-norm minimization. Theoretical analysis demonstrates that AGP can find a sparser solution than AST, because it prevents the iteration solutions from concentrating in the attraction basin of a suboptimal sparse solution. Applied to compressed spectrum sensing, AGP obtains better detection performance than ED, AST, IRL1, and ITM at the cost of a little more computing time. Future research will extend AGP to more scenarios.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (nos. 61501224, 61501223, and 61502230), the Natural Science Foundation of Jiangsu Province (no. BK20150960), and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (no. 17KJB510024).