Algorithms for Compressive Sensing Signal Reconstruction with Applications
Xunzhi Zhu, "Approximately Normalized Iterative Hard Thresholding for Nonlinear Compressive Sensing", Mathematical Problems in Engineering, vol. 2016, Article ID 2594752, 8 pages, 2016. https://doi.org/10.1155/2016/2594752
Approximately Normalized Iterative Hard Thresholding for Nonlinear Compressive Sensing
Abstract
Nonlinear compressive sensing (NCS) is an extension of classical compressive sensing (CS), and the iterative hard thresholding (IHT) algorithm is a popular greedy-type method for solving CS. The normalized iterative hard thresholding (NIHT) algorithm is a modification of IHT and is more effective than IHT. In this paper, we propose an approximately normalized iterative hard thresholding (ANIHT) algorithm for NCS that uses an approximately optimal stepsize combined with an Armijo stepsize rule per iteration. Under a condition similar to the restricted isometry property (RIP), we analyze conditions under which the iterative support sets are identified in a finite number of iterations. Numerical experiments show the good performance of the new algorithm for NCS.
1. Introduction
Compressed sensing (CS) [1, 2] deals with the problem of recovering sparse signals from underdetermined linear measurements. In recent years, it has attracted considerable attention in signal processing, electrical engineering, computer science, and applied mathematics; see [3, 4]. However, many real-life applications in physics and the biomedical sciences carry a strongly nonlinear structure, so that the linear model is no longer suitable, even as an approximation. It is therefore of great interest to investigate compressive sensing with nonlinear measurements, which is called nonlinear compressive sensing (NCS) [5, 6]. So, it is necessary to consider the following NCS problem: find a vector x from the observations given by y = f(x) + e, (1) where f is nonlinear, e is some noise term, ||x||_0 <= s, and ||x||_0 is the l_0 "norm" of x, which refers to the number of nonzero elements in the vector x. The optimization problem associated with (1) is min_x ||r(x)||_2^2 subject to ||x||_0 <= s, (2) where ||.||_2 is the Euclidean norm and r(x) = f(x) - y is the residual function. Let r be continuously differentiable and let J(x) denote its Jacobian matrix; then the gradient of ||r(x)||_2^2 is 2J(x)^T r(x). Clearly, if f is a linear function, f(x) = Ax, model (1) and the optimization problem (2) reduce to classical CS, y = Ax + e, (3) and the problem min_x ||Ax - y||_2^2 subject to ||x||_0 <= s, (4) respectively, where A is the measurement matrix.
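To make the objective and its gradient concrete, the following sketch numerically verifies that the gradient of ||r(x)||_2^2 is 2 J(x)^T r(x), where J is the Jacobian of r (equivalently of f). The quadratic measurement map f used here is an assumption chosen for illustration, not the paper's specific measurement model.

```python
import numpy as np

# Toy nonlinear measurement map (an illustrative assumption):
# f(x)_j = (a_j^T x)^2, with rows a_j of A.
rng = np.random.default_rng(0)
n, m = 5, 8
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

f = lambda x: (A @ x) ** 2
jac = lambda x: 2 * (A @ x)[:, None] * A   # Jacobian of f: d f_j / d x_k = 2 (a_j^T x) a_jk

x = rng.standard_normal(n)
r = f(x) - y
grad = 2 * jac(x).T @ r                    # gradient of ||f(x) - y||_2^2

# Central finite differences of the objective, component by component
eps = 1e-6
fd = np.array([(np.sum((f(x + eps * np.eye(n)[i]) - y) ** 2)
                - np.sum((f(x - eps * np.eye(n)[i]) - y) ** 2)) / (2 * eps)
               for i in range(n)])
print(np.allclose(grad, fd, atol=1e-4))
```

The same identity is what ANIHT exploits below: every gradient evaluation reduces to one Jacobian-transpose product with the current residual.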
Greedy methods have already proven useful and efficient to tackle (4) [7]. A variety of greedy methods have been proposed to solve (4), such as matching pursuit (MP) [8], orthogonal MP (OMP) [9], compressive sampling matching pursuit (CoSaMP) [10], subspace pursuit (SP) [11], hard thresholding pursuit (HTP) [12], and conjugate gradient iterative hard thresholding (CGIHT) [13].
There is another greedy method, the iterative hard thresholding (IHT) algorithm for problem (4), which was proposed by Blumensath and Davies in [14, 15]. When the matrix A is row full-rank and its spectral norm satisfies ||A||_2 < 1, IHT converges to a local minimum of (4) [14]. In [16], the authors showed that the numerical results of IHT are not very promising and that the algorithm often fails to converge when these conditions fail. They then gave the normalized IHT (NIHT) with an adaptive stepsize and line search and proved that it converges to a local minimum if A is row full-rank and regular, where regular means that any s columns of A are linearly independent [17]. Cartis and Thompson [18] showed that NIHT converges to a local minimum if the matrix A is regular. Blumensath [5] showed that IHT can recover signals from NCS under conditions similar to those required in CS.
Inspired by these works, we propose an approximately normalized IHT (ANIHT) algorithm to solve the NCS problem (2). Since problem (2) is in general a nonconvex program, we can only expect to find a stationary point rather than a local minimizer, which is different from NIHT for CS. We then show that any accumulation point of the ANIHT iterates is a stationary point. Under nondegeneracy and strict complementarity of the stationary point, the support sets of the sequence are identified in a finite number of iterations. Finally, we present several experiments to demonstrate the effectiveness of the algorithm.
This paper is organized as follows. Section 2 gives the ANIHT algorithm for (2) and proves its convergence properties. Numerical results are given in Section 3. The last section makes some concluding remarks.
2. Algorithm
In this section, we present the approximately normalized iterative hard thresholding (ANIHT) algorithm for (2) and then analyze its convergence properties. For an index set Γ, the projection of a vector onto Γ keeps the components indexed by Γ and sets the others to zero. The projection onto the sparse set C_s = {x : ||x||_0 <= s}, written H_s(x), sets all but the s largest-in-absolute-value components of x to zero. The definition of stationarity was proposed in [17] based on the notion of a fixed-point equation.
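The hard thresholding projection H_s described above can be sketched as follows; `np.argpartition` selects the indices of the s largest-magnitude entries without a full sort (ties are broken arbitrarily, as the set-valued projection allows).

```python
import numpy as np

def hard_threshold(x, s):
    """Projection onto the sparse set C_s: keep the s largest-magnitude
    entries of x and set all the others to zero."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    if s > 0:
        idx = np.argpartition(np.abs(x), -s)[-s:]
        out[idx] = x[idx]
    return out

print(hard_threshold([3.0, -1.0, 0.5, -4.0], 2))
```

On the example above, the two largest-magnitude entries, 3.0 and -4.0, survive and the rest are zeroed.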
Definition 1. The point x* is called a stationary point of problem (2) if it satisfies the fixed-point inclusion x* ∈ H_s(x* - μ∇||r(x*)||_2^2) for a stepsize μ > 0.
Note that x* is a stationary point of problem (2) if and only if [17] the components of the gradient of ||r(x)||_2^2 at x* vanish on supp(x*) and are bounded in absolute value by μ^{-1} times the s-th largest element in absolute value of x* elsewhere, where supp(x) = {i : x_i ≠ 0}.
In NIHT for CS [16], to guarantee a sufficient decrease of the objective function per iteration, the authors added a stepsize strategy based on the restricted isometry property (RIP) [1]. In ANIHT for NCS (2), we use the approximately optimal stepsize to accelerate convergence and an Armijo-type stepsize to enforce a sufficient decrease of the objective function directly, without RIP. Here x_Γ (J_Γ) denotes the subvector (submatrix) obtained by discarding all but the elements (columns) indexed by Γ. The framework of ANIHT is described as follows.
Step 1. Initialize x^0, set Γ^0 = supp(H_s(x^0)), choose the parameters, and set n = 0.
Step 2. Let g^n = J(x^n)^T r(x^n) and compute the proposal point H_s(x^n - μ_n g^n), where μ_n = ||g^n_{Γ^n}||_2^2 / ||J(x^n)_{Γ^n} g^n_{Γ^n}||_2^2.
Step 3. If the support of the proposal point equals Γ^n, then accept it as x^{n+1}, set Γ^{n+1} = Γ^n, and set n = n + 1; else compute x^{n+1} = H_s(x^n - β^{l_n} μ_n g^n) and Γ^{n+1} = supp(x^{n+1}), where β ∈ (0, 1) and l_n is the smallest positive integer such that the Armijo-type sufficient decrease condition holds, and set n = n + 1.
Step 4. If the stopping criterion is met, then stop.
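The four steps above can be sketched as follows. This is a schematic reading of ANIHT, not a faithful reproduction: the backtracking constants `beta` and `sigma` and the exact form of the sufficient decrease test are assumptions, since the paper's displayed formulas are not reproduced here.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -s)[-s:]
    out[idx] = v[idx]
    return out

def aniht(f, jac, y, x0, s, max_iter=500, tol=1e-12, beta=0.5, sigma=1e-4):
    """Schematic ANIHT sketch. f: measurement map, jac: its Jacobian,
    y: observations, s: sparsity level. beta, sigma are illustrative."""
    x = hard_threshold(np.asarray(x0, dtype=float), s)
    supp = set(np.flatnonzero(x))
    for _ in range(max_iter):
        r = f(x) - y
        g = jac(x).T @ r                     # gradient of (1/2)||f(x) - y||^2
        idx = list(supp)
        gS = np.zeros_like(g)
        gS[idx] = g[idx]
        Jg = jac(x) @ gS
        denom = Jg @ Jg
        mu = (gS @ gS) / denom if denom > 0 else 1.0   # approx. optimal stepsize
        x_new = hard_threshold(x - mu * g, s)
        # Armijo-type backtracking when the support changes and the step
        # does not yield a sufficient decrease of the objective
        while (set(np.flatnonzero(x_new)) != supp
               and np.sum((f(x_new) - y) ** 2)
                   > np.sum(r ** 2) - sigma * mu * (gS @ gS)
               and mu > 1e-16):
            mu *= beta
            x_new = hard_threshold(x - mu * g, s)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
        supp = set(np.flatnonzero(x))
    return x

# Sanity check on the linear special case f(x) = Ax (classical CS), where
# the iteration behaves like NIHT.
rng = np.random.default_rng(0)
m, n, s = 40, 20, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)
y = A @ x_true
x_rec = aniht(lambda x: A @ x, lambda x: A, y, np.zeros(n), s)
print(np.allclose(x_rec, x_true, atol=1e-6))
```

In the linear case the Jacobian is the constant matrix A, so the approximately optimal stepsize reduces to the NIHT stepsize of [16].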
Remark 2. We now briefly illustrate the algorithm: (i) In Step 2, a proposal point is calculated; whether it is accepted depends on the relationship between its support set and the previous point's; see Step 3. In addition, to accelerate the rate of convergence, we utilize the approximately optimal initial stepsize to compute the proposal. Linearizing r at x^n on Γ^n, the support of the best s-term approximation at the current iteration, and minimizing the resulting model over the stepsize gives the stepsize in Step 2. This stepsize is in accordance with the optimal stepsize in NIHT for CS in [16]. Furthermore, by Assumption 4, when the stepsize is relatively small, the error introduced in the linear approximation is small; the objective function then decreases if the support set is not changed. (ii) The Armijo-type stepsize rule in Step 3 chooses the stepsize and the support set adaptively and, at the same time, enforces a sufficient decrease of the objective function per iteration. It is well defined by Lemma 6.
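The approximately optimal stepsize discussed in (i) can be computed in a few lines. In this sketch the linear case J(x) = A is used, so the value coincides with the NIHT stepsize of [16]; all variable names and problem sizes are illustrative.

```python
import numpy as np

# mu = ||g_S||_2^2 / ||J(x)_S g_S||_2^2, with gradient g = J(x)^T r(x)
# and S the support of the current best s-term approximation.
rng = np.random.default_rng(1)
m, n, s = 30, 10, 3
A = rng.standard_normal((m, n))          # linear case: J(x) = A
x = np.zeros(n)
x[:s] = 1.0                              # current s-sparse iterate
y = rng.standard_normal(m)

g = A.T @ (A @ x - y)                    # gradient of (1/2)||Ax - y||^2
S = np.argpartition(np.abs(x), -s)[-s:]  # support of best s-term approximation
gS = g[S]
mu = (gS @ gS) / np.linalg.norm(A[:, S] @ gS) ** 2
print(mu > 0)
```

Restricting both the gradient and the Jacobian columns to S is what makes the stepsize "approximately" optimal: it is exact for the objective restricted to the current support.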
The following assumptions are chosen to ensure the descent property (14) of the objective function .
Assumption 3. There exists a constant such that for .
We also need the assumption that the Jacobian of the residual is restricted Lipschitz continuous.
Assumption 4. There exists a constant such that for .
Lemma 5. Suppose that Assumptions 3 and 4 hold. If there exists for and the iterative point of ANIHT satisfies and , then the following estimate holds, where and x^0 is the initial point of ANIHT.
Proof. We first show (15). Since the ANIHT algorithm generates monotonically decreasing function values, for all . Direct calculation yields the bound above. It then follows from (15) that the desired estimate holds, which completes the proof.
Lemma 6. Suppose that Assumptions 3 and 4 hold and is the iterative point of ANIHT with . Then the following inequality holds; therefore, the stepsize in Step 3 is well defined.
Proof. According to the computation in Step 2, we have which implies that that is,If , then the above inequality and the monotonicity of yield that Otherwise, by the Armijo-type stepsize rule and the monotonicity of , we have Then can be smaller than .
By Lemma 5 and (21), we get that If , , then . Otherwise, by letting , we can obtain the desired result by the definition of .
2.1. Convergence
Combining Assumptions 3 and 4 and Lemma 6, the convergence of ANIHT can be established in this subsection.
Theorem 7. Let Assumptions 3 and 4 hold and let the sequence be generated by ANIHT with . Then (i) ; (ii) any accumulation point of the sequence is a stationary point of (2).
Proof. (i) It follows from (18) that , where . Then which signifies .
It follows from Assumption 3 that and then . By Lemma 6 and the definition of in the algorithm, we have the following. Therefore, is bounded from below by a positive constant (). We can conclude the desired limit. (ii) Suppose that is an accumulation point of the sequence ; then there exists a subsequence converging to and by (i). For simplicity of notation, denote this subsequence as by its boundedness. Based on in Step 2, we consider two cases.
Case 1 (). The convergence of and guarantees that, for some , , for all . The definition of the projection on shows the following. Taking , we have .
Case 2 (). If there exists an such that, for all , , the projection implies the following. Letting and exploiting the continuity of the function , we obtain the desired relation. On the other hand, if there exists an infinite number of indices for , then, by the same argument as in Case 1, it follows that . Since is bounded from below by a positive constant, we have the stated limit, which means is a stationary point of (2).
We are now ready to show that, under suitable conditions, the support set of a point is identified in a finite number of iterations. We can easily verify that if , then the support set of any point in a sufficiently small neighborhood of is identified. For , we introduce the concept of strict complementarity to identify the support set.
Definition 8. The point is called nondegenerate if . The condition is called the strict complementarity condition of (2), where .
Theorem 9. For any sequence converging to , we have the following: (i) if is a nondegenerate point, then (35) holds for all sufficiently large and (36) holds, where and ; (ii) if satisfies strict complementarity, then (35) holds if and only if (36) holds.
Proof. (i) Suppose that is a nondegenerate point. We have (35) when is sufficiently close to . By (35) and the continuity of , we can easily get (36). (ii) Suppose that satisfies the strict complementarity condition. The "only if" part can be obtained by a proof similar to that in (i). Let (36) hold. Since , . Assume that there is an infinite subsequence and an index such that and for all . By (36), the following holds, while the strict complementarity condition (34) implies that . This contradiction proves that .
3. Numerical Experiments
In this part, the sensor localization problem and the phase retrieval problem will be simulated. In both examples, the stopping criterion is that either , where is chosen small in the different cases, or the maximum number of iterations, equal to 5000, is reached.
The sensor localization problem can be described as follows: given known anchors , the purpose is to find a sensor location satisfying where , , is the noise (which here obeys the normal distribution with zero expectation and variance ). The problem of finding an satisfying the above equalities is the same as finding an optimal solution of the optimization problem (2) with the corresponding residual. We first test ANIHT on an example with , , and . Each component of the vectors was randomly and independently generated from a standard normal distribution. Then the true vector and are designed by the following MATLAB code:For each value of , we ran the algorithm from 100 different, randomly generated initial data sets. The numbers of runs out of 100 in which the method found the "correct" solution are given in Table 1. Here, a "correct" solution means that , or that and are very close, say . As can be clearly seen from the results in the table, ANIHT performs well in terms of the success probability. In more detail, when the "true" solutions are quite sparse compared to the dimension , ANIHT can recover almost all of the "true" solutions, while the performance worsens as the sparsity level rises.
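A hypothetical instance of the sensor localization residual can be built as follows. The squared-range formulation r_i(x) = ||x - a_i||_2^2 - d_i^2 is a standard smooth choice and is an assumption here; the paper's exact expressions are not reproduced, and all problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 10, 25, 2
anchors = rng.standard_normal((m, n))     # known anchor positions a_i (rows)
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)       # s-sparse "true" sensor location
d = np.linalg.norm(x_true - anchors, axis=1)   # noiseless range measurements

def residual(x):
    """Squared-range residual: r_i(x) = ||x - a_i||^2 - d_i^2."""
    return np.sum((x - anchors) ** 2, axis=1) - d ** 2

print(np.allclose(residual(x_true), 0.0))
```

Minimizing ||residual(x)||_2^2 over s-sparse x is then exactly an instance of problem (2), with a Jacobian row 2(x - a_i)^T per anchor.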

Then we run the algorithm on a higher-dimensional data set, where , , and . For each data set, we run 40 times and record the average results (excluding the unsuccessful recoveries). Figure 1 shows the performance of ANIHT when addressing this problem.
Phase retrieval is to recover a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of Fourier phase information, the problem is generally ill-posed. The phase retrieval problem can be described as follows: given known measurement vectors , the purpose is to reconstruct a signal satisfying where is the th column of a general matrix or of the discrete Fourier transform (DFT) and , , is the noise (which here obeys the normal distribution with zero expectation and variance ). This problem is equivalent to recovering an optimal solution of the optimization problem (2) with , where , .
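A sketch of the quadratic-measurement phase retrieval residual, r_i(x) = (a_i^T x)^2 - y_i, is given below. This standard form is an assumption for illustration; names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 16, 32, 3
Avecs = rng.standard_normal((m, n))       # measurement vectors a_i (rows)
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)       # s-sparse true signal
y = (Avecs @ x_true) ** 2                 # magnitude-squared measurements

def residual(x):
    """Phase retrieval residual: r_i(x) = (a_i^T x)^2 - y_i."""
    return (Avecs @ x) ** 2 - y

# The data carry a global sign ambiguity: x_true and -x_true fit equally well,
# one reason the problem is ill-posed without further constraints.
print(np.allclose(residual(-x_true), 0.0))  # True
```

Each Jacobian row of this residual is 2(a_i^T x) a_i^T, so ANIHT's gradient steps cost two matrix-vector products per iteration.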
There are other methods for sparse phase retrieval whose codes are available, so we can compare our algorithm with them. We first compare the algorithm with the partial sparse-simplex method and the greedy sparse-simplex method in [17] with , , and , identical to those in [17]. The true vector and the measurement vectors are generated as in the sensor localization problem; is designed by the following MATLAB code:For each value of , we ran each algorithm from 100 different, randomly generated initial data sets. The numbers of runs out of 100 in which the methods found the "correct" solution are given in Table 2. As can be clearly seen from the results in the table, ANIHT outperforms the two sparse-simplex methods in terms of the success probability. What is more, the success counts in the rows for ANIHT are higher than those for the other two methods.

We also compare our algorithm with GESPAR [19] to recover a signal from the magnitude of its Fourier transform. Namely, the task is to find a real-valued discrete-time signal from the magnitude-squared values of an N-point discrete Fourier transform (DFT). We denote by the DFT matrix; then the measurements are the element-wise absolute-squared values of the transformed signal. We generate the data by the following pseudo MATLAB code:
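The Fourier phase retrieval data described above can be generated in a few lines; the signal length and sparsity below are illustrative assumptions. The check at the end demonstrates one source of the ill-posedness noted earlier: the magnitude-squared DFT is invariant to circular shifts of the signal.

```python
import numpy as np

rng = np.random.default_rng(4)
N, s = 64, 4
x = np.zeros(N)
idx = rng.choice(N, size=s, replace=False)
x[idx] = rng.standard_normal(s)           # real-valued s-sparse signal

y = np.abs(np.fft.fft(x)) ** 2            # element-wise absolute-squared N-point DFT

# Circularly shifting x only changes the phase of its DFT,
# so the magnitude-squared data y are unchanged.
x_shift = np.roll(x, 5)
print(np.allclose(np.abs(np.fft.fft(x_shift)) ** 2, y))
```

Recovering x from y subject to ||x||_0 <= s is then the sparse Fourier phase retrieval instance of problem (2).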
To compare the accuracy of the solutions and the speed of the two methods, we run them for N increasing from 512 to 3072, keeping and fixed. We also test them under noiseless data and two noise levels, and . From Table 3, we can see that ANIHT outperforms the algorithm from [19] in terms of both average CPU time and average relative error for large N.

4. Conclusion
Nonlinear CS (NCS) is not only of academic interest but may also be important in many real-world applications where the measurements cannot be designed to be perfectly linear. In this paper, we have proposed the ANIHT algorithm for NCS and studied its convergence. We have shown that any accumulation point of the algorithm is a stationary point. The support set of the sequence can be identified under the assumptions of nondegeneracy and strict complementarity of the stationary point. The numerical experiments show that the ANIHT algorithm is effective for NCS. In the future, we will further consider other methods for the nonlinear least-squares problem to improve the rate of convergence, such as the Levenberg-Marquardt (LM) method or cubic regularization methods [20].
Competing Interests
The author declares that there are no competing interests regarding the publication of this paper.
Acknowledgments
This research was supported by National Natural Science Foundation of China (11271233) and Shandong Province Natural Science Foundation (ZR2012AM016).
References
E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.
D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications, Cambridge University Press, Cambridge, UK, 2012.
S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, New York, NY, USA, 2013.
T. Blumensath, "Compressed sensing with nonlinear observations and related nonlinear optimization problems," IEEE Transactions on Information Theory, vol. 59, no. 6, pp. 3466-3474, 2013.
H. Ohlsson, A. Y. Yang, R. Dong, and S. S. Sastry, "Nonlinear basis pursuit," in Proceedings of the 47th Asilomar Conference on Signals, Systems and Computers, pp. 115-119, Pacific Grove, Calif, USA, November 2013.
V. N. Temlyakov and P. Zheltov, "On performance of greedy algorithms," Journal of Approximation Theory, vol. 163, no. 9, pp. 1134-1145, 2011.
S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, 1993.
G. Davis, S. Mallat, and M. Avellaneda, "Adaptive greedy approximations," Constructive Approximation, vol. 13, no. 1, pp. 57-98, 1997.
D. Needell and J. A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301-321, 2009.
W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230-2249, 2009.
S. Foucart, "Hard thresholding pursuit: an algorithm for compressive sensing," SIAM Journal on Numerical Analysis, vol. 49, no. 6, pp. 2543-2563, 2011.
J. D. Blanchard, J. Tanner, and K. Wei, "CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion," Information and Inference: A Journal of the IMA, vol. 4, no. 4, pp. 289-327, 2015.
T. Blumensath and M. E. Davies, "Iterative thresholding for sparse approximations," Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 629-654, 2008.
T. Blumensath and M. E. Davies, "Iterative hard thresholding for compressed sensing," Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265-274, 2009.
T. Blumensath and M. E. Davies, "Normalized iterative hard thresholding: guaranteed stability and performance," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 298-309, 2010.
A. Beck and Y. C. Eldar, "Sparsity constrained nonlinear optimization: optimality conditions and algorithms," SIAM Journal on Optimization, vol. 23, no. 3, pp. 1480-1509, 2013.
C. Cartis and A. Thompson, "A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing," IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 2019-2042, 2015.
Y. Shechtman, A. Beck, and Y. C. Eldar, "GESPAR: efficient phase retrieval of sparse signals," IEEE Transactions on Signal Processing, vol. 62, no. 4, pp. 928-938, 2014.
C. Cartis, N. I. Gould, and P. L. Toint, "On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization," SIAM Journal on Optimization, vol. 23, no. 3, pp. 1553-1574, 2013.
Copyright
Copyright © 2016 Xunzhi Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.