Abstract

Nonlinear compressive sensing (NCS) is an extension of classical compressive sensing (CS), and the iterative hard thresholding (IHT) algorithm is a popular greedy-type method for solving CS. Normalized iterative hard thresholding (NIHT) is a modification of IHT that is more effective than IHT. In this paper, we propose an approximately normalized iterative hard thresholding (ANIHT) algorithm for NCS, which combines an approximately optimal stepsize with an Armijo stepsize rule at each iteration. Under conditions similar to the restricted isometry property (RIP), we analyze when the iterative support sets are identified in a finite number of iterations. Numerical experiments show the good performance of the new algorithm for NCS.

1. Introduction

Compressed sensing (CS) [1, 2] deals with the problem of recovering sparse signals from underdetermined linear measurements. In recent years, it has attracted considerable attention in areas of signal processing, electrical engineering, computer science, and applied mathematics; see [3, 4]. However, many real-life applications in physics and the biomedical sciences carry some strongly nonlinear structure, so that the linear model is not suited anymore, even as an approximation. It is of utmost interest to investigate compressive sensing with nonlinear measurements, which is called nonlinear compressive sensing (NCS) [5, 6]. So, it is necessary to consider the following NCS problem: find a vector $x \in \mathbb{R}^n$ from the observations $y \in \mathbb{R}^m$ given by
$$y = F(x) + e, \qquad \|x\|_0 \le s, \qquad (1)$$
where $F: \mathbb{R}^n \to \mathbb{R}^m$ is nonlinear, $e$ is some noise term, $m < n$, and $\|x\|_0$ is the $\ell_0$-norm of $x$, which refers to the number of nonzero elements in the vector $x$. The optimization problem associated with (1) is
$$\min_{x \in \mathbb{R}^n} \; f(x) := \|y - F(x)\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \le s, \qquad (2)$$
where $\|\cdot\|_2$ is the $\ell_2$-norm and $r(x) = y - F(x)$ is the residual function. Let $F$ be continuously differentiable and let $\nabla F(x)$ denote its Jacobian matrix. Then $\nabla r(x) = -\nabla F(x)$. Clearly, if $F$ is a linear function $F(x) = Ax$, model (1) and the optimization problem (2) reduce to the classical CS model
$$y = Ax + e, \qquad \|x\|_0 \le s, \qquad (3)$$
and the problem
$$\min_{x \in \mathbb{R}^n} \; \|y - Ax\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \le s, \qquad (4)$$
respectively, where $A \in \mathbb{R}^{m \times n}$.

Greedy methods have already proven useful and efficient for tackling (4) [7]. A variety of greedy methods have been proposed to solve (4), such as matching pursuit (MP) [8], orthogonal MP (OMP) [9], compressive sampling matching pursuit (CoSaMP) [10], subspace pursuit (SP) [11], hard thresholding pursuit (HTP) [12], and conjugate gradient iterative hard thresholding (CGIHT) [13].

Another greedy method for problem (4) is the iterative hard thresholding (IHT) algorithm, which was proposed by Blumensath and Davies in [14, 15]. When the matrix $A$ has full row rank and the spectral norm satisfies $\|A\|_2 < 1$, IHT converges to a local minimum of (4) [14]. In [16], the authors showed that the numerical behavior of IHT is not very promising and that the algorithm often fails to converge when these conditions fail. They then gave normalized IHT (NIHT), with an adaptive stepsize and line search, and proved that it converges to a local minimum if $A$ has full row rank and is $s$-regular, where $s$-regular means that any $s$ columns of $A$ are linearly independent [17]. Cartis and Thompson [18] showed that NIHT converges to a local minimum if the matrix $A$ is $s$-regular. Blumensath [5] showed that IHT can recover signals from NCS under conditions similar to those required in CS.

Inspired by these works, we propose an approximately normalized IHT (ANIHT) algorithm to solve the NCS problem (2). Since problem (2) is in general a nonconvex program, we can only expect to find a stationary point rather than a local minimizer, which is different from NIHT for CS. We then show that any accumulation point of the ANIHT iterates is a stationary point. Under the nondegeneracy and strict complementarity of the stationary point, the support sets of the sequence are identified in a finite number of iterations. Finally, we present several numerical experiments to demonstrate the effectiveness of the algorithm.

This paper is organized as follows. Section 2 gives the ANIHT algorithm for (2) and proves its convergence properties. Numerical results are given in Section 3. The last section makes some concluding remarks.

2. Algorithm

In this section, we present the approximately normalized iterative hard thresholding (ANIHT) algorithm for (2) and then analyze its convergence properties. Denote $S_s = \{x \in \mathbb{R}^n : \|x\|_0 \le s\}$ and recall the objective $f(x) = \|y - F(x)\|_2^2$ of (2). For $x \in \mathbb{R}^n$ and an index set $\Gamma \subseteq \{1, \dots, n\}$, the projector onto $\Gamma$ is given by $(P_\Gamma x)_i = x_i$ if $i \in \Gamma$ and $(P_\Gamma x)_i = 0$ otherwise. Note that the projection onto the sparse set $S_s$, written as $H_s(\cdot)$, sets all but the $s$ largest absolute value components of its argument to zero. The definition of $L$-stationarity was proposed in [17] based on the notion of a fixed-point equation.
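For concreteness, here is a minimal MATLAB sketch of the projector $H_s$ (the function name hard_threshold is ours, not the paper's):

% Minimal sketch of H_s: keep the s largest-in-magnitude entries of x
% and set the rest to zero (ties are broken arbitrarily by sort).
function xs = hard_threshold(x, s)
[~, idx] = sort(abs(x), 'descend');   % order indices by magnitude
xs = zeros(size(x));
xs(idx(1:s)) = x(idx(1:s));           % retain the s largest entries
end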

Definition 1. Given $L > 0$, the point $x^*$ is called a stationary point ($L$-stationary point) of problem (2) if it holds that
$$x^* \in H_s\left(x^* - \frac{1}{L} \nabla f(x^*)\right).$$

Note that $x^*$ is a stationary point of problem (2) if and only if $\|x^*\|_0 \le s$ and [17]
$$|\nabla_i f(x^*)| \begin{cases} \le L\, M_s(x^*), & i \notin \Gamma^*, \\ = 0, & i \in \Gamma^*, \end{cases}$$
where $M_s(x^*)$ denotes the $s$th largest element in absolute value of $x^*$ and $\Gamma^* = \operatorname{supp}(x^*)$.
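As an illustration, this characterization can be checked numerically; the following sketch is our own helper (not from the paper) and assumes the gradient g of $f$ at $x$ and the constant $L$ are available:

% Sketch of a numerical test of the stationarity characterization above.
% g: gradient of f at x; L: the constant in the characterization.
function ok = is_stationary(x, g, s, L, tol)
xs = sort(abs(x), 'descend');
Ms = xs(s);                    % s-th largest absolute entry of x
on = (x ~= 0);                 % support of x
ok = all(abs(g(on)) <= tol) && all(abs(g(~on)) <= L*Ms + tol);
end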

In NIHT for CS [16], to guarantee a sufficient decrease of the objective function per iteration, the authors added a stepsize strategy based on the restricted isometry property (RIP) [1]. In ANIHT for NCS (2), we use an approximately optimal stepsize to accelerate the convergence and an Armijo-type stepsize to enforce a sufficient decrease of the objective function directly, without RIP. Here $x_\Gamma$ (respectively, $A_\Gamma$) is the subvector (submatrix) obtained by discarding all but the elements (columns) indexed by $\Gamma$. The framework of ANIHT is described as follows.

Step 1. Initialize $x^0 \in S_s$, $\Gamma^0 = \operatorname{supp}(x^0)$, parameters $\sigma, \beta \in (0, 1)$, and set $k = 0$.

Step 2. Let $g^k = \nabla F(x^k)^T r(x^k)$ and compute $\tilde{x}^k = H_s(x^k + \mu_k g^k)$, where
$$\mu_k = \frac{\|g^k_{\Gamma^k}\|_2^2}{\|\nabla F(x^k)_{\Gamma^k}\, g^k_{\Gamma^k}\|_2^2}.$$

Step 3. If $\operatorname{supp}(\tilde{x}^k) = \Gamma^k$, then set $x^{k+1} = \tilde{x}^k$, $\Gamma^{k+1} = \Gamma^k$, and $k = k + 1$; else compute $x^{k+1} = H_s(x^k + \beta^{i_k} \mu_k g^k)$ and $\Gamma^{k+1} = \operatorname{supp}(x^{k+1})$, where $i_k$ is the smallest positive integer such that the sufficient decrease condition
$$f(x^{k+1}) \le f(x^k) - \frac{\sigma}{\beta^{i_k} \mu_k} \|x^{k+1} - x^k\|_2^2 \qquad (14)$$
holds, and set $k = k + 1$.

Step 4. If the stopping criterion is met, then stop; otherwise, go to Step 2.

Remark 2. We now briefly illustrate the algorithm. (i) In Step 2, a proposal point $\tilde{x}^k$ is calculated; whether it is accepted depends on the relationship between its support set and the previous point's; see Step 3. In addition, to accelerate the rate of convergence, we utilize the approximately optimal initial stepsize $\mu_k$ to compute $\tilde{x}^k$. The linear approximation of $F$ at $x^k$ is $F(x) \approx F(x^k) + \nabla F(x^k)(x - x^k)$, and $\Gamma^k$ is the support of the best $s$-term approximation to $x^k$ at the current iteration. So we obtain that
$$\mu_k = \arg\min_{\mu \ge 0} \|r(x^k) - \mu\, \nabla F(x^k)_{\Gamma^k} g^k_{\Gamma^k}\|_2^2 = \frac{\|g^k_{\Gamma^k}\|_2^2}{\|\nabla F(x^k)_{\Gamma^k}\, g^k_{\Gamma^k}\|_2^2}.$$
This stepsize is in accordance with the optimal stepsize in NIHT for CS in [16]. Furthermore, by Assumption 4, when the stepsize is relatively small, the error introduced in the linear approximation is small; then the objective function decreases if the support set is not changed. (ii) The Armijo-type stepsize rule in Step 3 chooses the stepsize and the support set adaptively and meanwhile ensures a sufficient decrease of the objective function per iteration. It is well defined by Lemma 6. A code sketch of the whole iteration follows.
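Putting Steps 1-4 together, the following MATLAB sketch illustrates one possible realization of ANIHT; the Armijo parameters sigma and beta and the exact form of the acceptance test are our assumptions rather than the paper's displayed conditions, and hard_threshold is the helper sketched above:

% Illustrative ANIHT sketch (assumptions as stated in the text).
% F, JF: handles returning F(x) and its Jacobian; y: observations.
function x = aniht(y, F, JF, s, x0, maxit, tol)
sigma = 1e-4; beta = 0.5;                 % assumed Armijo parameters
x = hard_threshold(x0, s);
for k = 1:maxit
    r = y - F(x);  J = JF(x);
    g = J'*r;                             % (scaled) negative gradient of f
    Gam = find(x);                        % current support set
    if isempty(Gam), Gam = (1:s)'; end    % fallback for the zero vector
    mu = norm(g(Gam))^2 / norm(J(:,Gam)*g(Gam))^2;  % approx. optimal stepsize
    xt = hard_threshold(x + mu*g, s);
    % accept if the support is unchanged; otherwise backtrack (Armijo)
    while ~isequal(find(xt), Gam) && ...
          norm(y - F(xt))^2 > norm(r)^2 - (sigma/mu)*norm(xt - x)^2
        mu = beta*mu;
        xt = hard_threshold(x + mu*g, s);
    end
    if norm(xt - x) <= tol, x = xt; return; end
    x = xt;
end
end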

The following assumptions are chosen to ensure the descent property (14) of the objective function $f$.

Assumption 3. There exists a constant $M > 0$ such that $\|\nabla F(x)_\Gamma\|_2 \le M$ for all $x \in S_s$ and all index sets $\Gamma$ with $|\Gamma| \le s$.

We also need the assumption that the Jacobian of the residual $r$ is restricted Lipschitz continuous on $S_s$.

Assumption 4. There exists a constant $L_r > 0$ such that $\|\nabla r(x) - \nabla r(z)\|_2 \le L_r \|x - z\|_2$ for all $x, z \in S_s$.

Lemma 5. Suppose that Assumptions 3 and 4 hold. If the stepsize satisfies $\mu \in (0, \bar{\mu}]$ for some threshold $\bar{\mu} > 0$ and the iterative point $x^k$ of ANIHT satisfies $x^k \in S_s$ and $f(x^k) \le f(x^0)$, then the descent property (14) holds, where $\bar{\mu}$ depends only on $M$ and $L_r$ and $x^0$ is the initial point of ANIHT.

Proof. We first show an intermediate bound (15) on the linearization error. Since the ANIHT algorithm generates monotonically decreasing function values, $f(x^k) \le f(x^0)$ for all $k$. A direct calculation based on Assumptions 3 and 4 then yields (15). It follows from (15) that (14) holds, which completes the proof.

Lemma 6. Suppose that Assumptions 3 and 4 hold and $x^k$ is the iterative point of ANIHT with $\operatorname{supp}(\tilde{x}^k) \ne \Gamma^k$. Then there is a constant $c > 0$, independent of $k$, such that
$$f(x^{k+1}) \le f(x^k) - c\, \|x^{k+1} - x^k\|_2^2. \qquad (18)$$
Therefore, $i_k$ is well defined.

Proof. According to the computation of $\mu_k$ in Step 2, we have the optimality of $\mu_k$ for the linearized residual on $\Gamma^k$, which implies that the linearized objective does not increase; that is, (21) holds. If $\operatorname{supp}(\tilde{x}^k) = \Gamma^k$, then the above inequality and the monotonicity of $f$ yield the decrease (18). Otherwise, by the Armijo-type stepsize rule and the monotonicity of $f$, the stepsize is reduced; then $\beta^{i_k} \mu_k$ can be smaller than the threshold $\bar{\mu}$ of Lemma 5.
By Lemma 5 and (21), we get the decrease (18). If $i_k = 1$, then (18) is immediate. Otherwise, by letting $i$ increase to $i_k$, we can obtain the desired result by the definition of $i_k$.

2.1. Convergence

Combining Assumptions 3 and 4 with Lemma 6, we establish the convergence of ANIHT in this subsection.

Theorem 7. Let Assumptions 3 and 4 hold and let $\{x^k\}$ be generated by ANIHT. Then (i) $\lim_{k \to \infty} \|x^{k+1} - x^k\|_2 = 0$; (ii) any accumulation point of $\{x^k\}$ is a stationary point of (2).

Proof. (i) It follows from (18) that $f(x^{k+1}) \le f(x^k) - c \|x^{k+1} - x^k\|_2^2$, where $c > 0$ is the constant in (18). Then $\sum_{k=0}^{\infty} \|x^{k+1} - x^k\|_2^2 \le f(x^0)/c < \infty$, which signifies $\lim_{k \to \infty} \|x^{k+1} - x^k\|_2 = 0$.
It follows from Assumption 3 that $\|\nabla F(x^k)_{\Gamma^k} g^k_{\Gamma^k}\|_2 \le M \|g^k_{\Gamma^k}\|_2$ and then $\mu_k \ge 1/M^2$. By Lemma 6 and the definition of the stepsize in the algorithm, the accepted stepsizes admit a uniform lower bound. Therefore, the stepsize is bounded from below by a positive constant. We can conclude that the decrease (18) holds along the whole sequence.
(ii) Suppose that $x^*$ is an accumulation point of the sequence $\{x^k\}$; then there exists a subsequence $\{x^{k_j}\}$ converging to $x^*$, and $\lim_{j \to \infty} \|x^{k_j+1} - x^{k_j}\|_2 = 0$ by (i). For simplicity of notation, denote $\{x^{k_j}\}$ by $\{x^k\}$, which converges to $x^*$ by its boundedness. Based on whether the support set changes in Step 2, we consider two cases.
Case 1 ($\operatorname{supp}(\tilde{x}^k) = \Gamma^k$). The convergence of $\{x^k\}$ and of the function values guarantees that, for some set $\Gamma$, $\Gamma^k = \Gamma$ for all sufficiently large $k$. The definition of the projection onto $S_s$ shows that the fixed-point relation of Definition 1 holds along the sequence. Taking $k \to \infty$, we have that $x^*$ satisfies this relation.
Case 2 ($\operatorname{supp}(\tilde{x}^k) \ne \Gamma^k$). If there exists a $K$ such that, for all $k \ge K$, the support set changes, then the projection implies the corresponding fixed-point inequality. Letting $k \to \infty$ and exploiting the continuity of the function $\nabla f$, we obtain that $x^*$ satisfies the stationarity condition. On the other hand, if there exists an infinite number of indices $k$ for which the support set is unchanged, by the same proof as in Case 1 it follows that $x^*$ satisfies the fixed-point relation. Since the stepsize is bounded from below by a positive constant, we have that $x^*$ is a stationary point of (2).

We are now ready to show that, under suitable conditions, the support set of a point is identified in a finite number of iterations. We can easily verify that if $\|x^*\|_0 = s$, then the support set of any point of $S_s$ in a sufficiently small neighborhood of $x^*$ coincides with $\operatorname{supp}(x^*)$, so the support set is identified. For the remaining case, we introduce the concept of strict complementarity to identify the support set.

Definition 8. The point $x^*$ is called nondegenerate if $\|x^*\|_0 = s$. The condition
$$\nabla_i f(x^*) \ne 0 \quad \text{for all } i \notin \Gamma^* \qquad (34)$$
is called the strict complementarity condition of (2), where $\Gamma^* = \operatorname{supp}(x^*)$.

Theorem 9. For any sequence $\{x^k\} \subseteq S_s$ converging to a stationary point $x^*$ of (2), we have the following: (i) if $x^*$ is a nondegenerate point, then
$$\Gamma^k = \Gamma^* \qquad (35)$$
for all sufficiently large $k$ and
$$\lim_{k \to \infty} \nabla_{\Gamma^k} f(x^k) = 0, \qquad (36)$$
where $\Gamma^k = \operatorname{supp}(x^k)$ and $\Gamma^* = \operatorname{supp}(x^*)$; (ii) if $x^*$ satisfies strict complementarity, then (35) holds if and only if (36) holds.

Proof. (i) Suppose that $x^*$ is a nondegenerate point. We have (35) when $x^k$ is sufficiently close to $x^*$. By (35) and the continuity of $\nabla f$, we can easily get (36). (ii) Suppose that $x^*$ satisfies the strict complementarity condition. The "only if" part can be obtained by a proof similar to that in (i). Let (36) hold. Since $x^*$ is a stationary point, $\nabla_{\Gamma^*} f(x^*) = 0$. Assume that there is an infinite subsequence $\{x^{k_j}\}$ and an index $i \notin \Gamma^*$ such that $i \in \Gamma^{k_j}$ for all $j$. By (36), $\nabla_i f(x^*) = \lim_{j \to \infty} \nabla_i f(x^{k_j}) = 0$, while the strict complementarity condition (34) implies that $\nabla_i f(x^*) \ne 0$. This contradiction proves (35).

3. Numerical Experiments

In this part, the sensor localization problem and the phase retrieval problem will be simulated. In both examples, the stopping criterion is that $\|x^{k+1} - x^k\|_2 \le \epsilon$, where the tolerance $\epsilon$ is small and chosen differently in different cases, or that the maximum number of iterations, 5000, is reached.

The sensor localization problem can be described as follows: given $m$ known anchors $a_1, \dots, a_m \in \mathbb{R}^n$, the purpose is to find a sensor $x \in \mathbb{R}^n$ satisfying
$$d_i = \|x - a_i\|_2^2 + e_i, \quad i = 1, \dots, m,$$
where $e_i$ is the noise (which obeys the normal distribution with zero expectation and variance $\sigma^2$ here). The problem of finding an $s$-sparse $x$ satisfying the above equalities is the same as finding an optimal solution to the optimization problem (2) with $F_i(x) = \|x - a_i\|_2^2$. We first test the algorithm on an example with given $m$, $n$, and $s$. Each component of the anchor vectors was randomly and independently generated from a standard normal distribution. The true vector $x^*$ and the observation vector $d$ are then designed by the MATLAB code sketched after this paragraph. For each value of $s$, we ran the algorithm from 100 different and randomly generated initial data sets. The numbers of runs out of 100 in which the method found the "correct" solution are given in Table 1. Here, a "correct" solution means that $x = x^*$, or that $x$ and $x^*$ are very close, say within a small tolerance. As can be clearly seen from the results in the table, ANIHT performs well in terms of the success probability. In more detail, when the "true" solutions are quite sparse compared to the dimension $n$, ANIHT can recover almost all of the "true" solutions, while the performance worsens as $s$ rises.
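The generation code itself was not recoverable, so the following is a plausible MATLAB reconstruction (our assumption) of what the text describes, together with handles F and JF that feed the residual model into ANIHT; the variable names and the noise level sig are ours, and m, n, s are assumed given:

% Plausible reconstruction (assumption) of the data generation above;
% requires implicit expansion (MATLAB R2016b or later).
A = randn(m, n);                        % row i holds the anchor a_i'
x_true = zeros(n, 1);
T = randperm(n, s);                     % random support of size s
x_true(T) = randn(s, 1);                % Gaussian nonzero entries
d = sum((x_true' - A).^2, 2) + sig*randn(m, 1);  % noisy squared distances
F  = @(x) sum((x' - A).^2, 2);          % F_i(x) = ||x - a_i||_2^2
JF = @(x) 2*(x' - A);                   % Jacobian: row i is 2*(x - a_i)'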

Then we run the algorithm on a higher dimensional data set, where $m$, $n$, and $s$ are all larger. For each data set, we run 40 times and record the average results (in which the unsuccessful recoveries are excluded). Figure 1 shows the performance of ANIHT when addressing this problem.

Phase retrieval is to recover a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of the Fourier phase information, the problem is generally ill-posed. The phase retrieval problem can be described as follows: given $m$ known measurement vectors $a_1, \dots, a_m \in \mathbb{R}^n$, the purpose is to reconstruct a signal $x \in \mathbb{R}^n$ satisfying
$$y_i = |\langle a_i, x \rangle|^2 + e_i, \quad i = 1, \dots, m,$$
where $a_i$ is the $i$th column of a general matrix or of the discrete Fourier transform (DFT) matrix and $e_i$, $i = 1, \dots, m$, is the noise (which obeys the normal distribution with zero expectation and variance $\sigma^2$ here). Also, this problem is equivalent to recovering an optimal solution of the optimization problem (2) with $F_i(x) = |\langle a_i, x \rangle|^2$, $i = 1, \dots, m$.
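For this quadratic measurement model, the residual map and its Jacobian take a simple closed form; a short sketch in our notation (rows of A are the measurement vectors $a_i^T$; implicit expansion requires MATLAB R2016b or later):

% Sketch of the phase retrieval map F_i(x) = (a_i'*x)^2 and its Jacobian.
F  = @(x) (A*x).^2;         % quadratic measurements
JF = @(x) 2*(A*x).*A;       % row i of the Jacobian is 2*(a_i'*x)*a_i'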

There are some other methods for sparse phase retrieval whose codes are available, so we can compare our algorithm with them. We first compare the algorithm with the partial sparse-simplex (PSS) method and the greedy sparse-simplex (GSS) method in [17], with $m$, $n$, and $s$ identical to those in [17]. The true vector $x^*$ and the measurement vectors are generated as in the sensor localization problem; $y$ is designed by the MATLAB code sketched after this paragraph. For each value of $s$, we ran the algorithm from 100 different and randomly generated initial data sets. The numbers of runs out of 100 in which the methods found the "correct" solution are given in Table 2. As can be clearly seen from the results in the table, ANIHT outperforms PSS and GSS in terms of the success probability. What is more, the success counts depend markedly on the sparsity level $s$.
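Since the displayed code was lost, here is a plausible reconstruction (our assumption) of the measurement generation, matching the model above:

% Plausible reconstruction (assumption): noisy quadratic measurements
% of the s-sparse true vector x_true, with noise level sig.
y = (A*x_true).^2 + sig*randn(m, 1);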

We also compare our algorithm with the method in [19] to recover a signal from the magnitude of its Fourier transform. Namely, the task is to find a real-valued discrete time signal $x$ from the magnitude-squared values of its $N$-point discrete Fourier transform (DFT). We denote by $W$ the $N \times N$ DFT matrix; then its elements are $W_{kj} = e^{-2\pi \mathrm{i}(k-1)(j-1)/N}$ and the measurements are $y = |Wx|^2$, where $|\cdot|^2$ denotes the element-wise absolute-squared value. We get $y$ by pseudo MATLAB code of the form sketched after this paragraph.
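A plausible reconstruction (our assumption) of the lost pseudo code; fft's second argument zero-pads the signal to length N:

% Plausible reconstruction (assumption): magnitude-squared N-point DFT
% measurements of the real-valued signal x_true; add noise as needed.
y = abs(fft(x_true, N)).^2;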

To see the accuracy of the solutions and the speed of these two methods, we run them for $N$ increasing from 512 to 3072, keeping $m$ and $s$ fixed. We also test them in the noiseless case and under two noise levels. From Table 3, we can see that ANIHT outperforms the method in [19] in terms of both average CPU time and average relative error for large $N$.

4. Conclusion

Nonlinear CS (NCS) is not only of academic interest but may also be important in many real-world applications where the measurements cannot be designed to be perfectly linear. In this paper, we have proposed an ANIHT algorithm for NCS and studied its convergence. We have shown that any accumulation point of the algorithm is a stationary point. The support set of the sequence can be identified under the assumptions of nondegeneracy and strict complementarity of the stationary point. The numerical experiments show that the ANIHT algorithm is effective for NCS. In the future, we will further consider other methods for the nonlinear least squares problem to improve the rate of convergence, such as the Levenberg-Marquardt (L-M) method or cubic regularization methods [20].

Competing Interests

The author declares that there are no competing interests regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (11271233) and the Shandong Province Natural Science Foundation (ZR2012AM016).