Abstract
Sparse signals can be recovered from a reduced set of samples by using compressive sensing algorithms. In common compressive sensing methods the signal is recovered in the sparsity domain. A method for the reconstruction of sparse signals which reconstructs the missing/unavailable samples/measurements has recently been proposed. This method can be efficiently used in signal processing applications where a complete set of signal samples exists. The missing samples are considered as the minimization variables, while the available samples are fixed. Reconstruction of the unavailable signal samples/measurements is performed using a gradient-based algorithm in the time domain, with an adaptive step. The performance of this algorithm with respect to the step-size and its convergence are analyzed, and a criterion for the step-size adaptation is proposed in this paper. The step adaptation is based on the gradient direction angles. Illustrative examples and a statistical study are presented. The computational efficiency of this algorithm is compared with two other commonly used gradient algorithms that reconstruct the signal in the sparsity domain. Uniqueness of the recovered signal is checked using a recently introduced theorem. The application of the algorithm to the reconstruction of highly corrupted images is presented as well.
1. Introduction
A signal is sparse in a transformation domain if the number of nonzero coefficients is much lower than the number of signal samples. For linear signal transforms the signal samples can be considered as linear combinations (measurements) of the signal transform coefficients. Sparse signals can be fully recovered from a reduced set of samples/measurements. Compressive sensing theory deals with sparse signal reconstruction [1–22]. In common signal processing problems, for signals sparse in a transformation domain, the signal samples can be considered as observations, since they are linear combinations of the transformation domain coefficients. The goal of compressive sensing is to reduce the set of signal measurements while preserving complete information about the whole signal. In some applications unavailable signal samples result from physical constraints of the system, including the inability to perform all signal measurements. Signals may also contain arbitrarily positioned samples that are heavily corrupted. In these cases it is better to remove such samples and consider them as unavailable in the signal recovery [6]. Reconstruction algorithms based on compressive sensing theory may also be used when the unavailable signal samples are not a result of an intentional strategy to compress the data. Note that one of the initial successes of compressive sensing theory in applications (computed tomography reconstruction) was not related to an intentional compressive sensing strategy but to physical problem constraints restricting the set and positions of the available data.
Various algorithms for the reconstruction of sparse signals from a reduced set of observations have been introduced [8–17]. Two of the most important approaches are based either on the gradient of the sparsity measure [8] or on orthogonal matching pursuit methods [16].
The topic of this paper is the reconstruction of signals with some arbitrarily positioned unavailable samples. These samples may result from a compressive sensing strategy or may be unavailable for various other reasons. A method for the reconstruction of the unavailable signal samples, considering them as variables, has been proposed in [22]. In contrast to the other common methods that recover sparse signals in their sparsity domain, this method reconstructs the missing samples/measurements to make the set of samples/measurements complete. The available samples are fixed (invariant) during the reconstruction process. An analysis of the algorithm performance is presented in this paper. A relation between the reconstructed signal bias and the algorithm step-size is derived and statistically confirmed. A new criterion for the algorithm step-size adaptation is proposed. The improved version of this algorithm is compared with two other common gradient-based algorithms, demonstrating its accuracy and computational efficiency. The discrete Fourier transform (DFT) domain is used as a case study, although the algorithm application is not restricted to this transform [23]. The solution uniqueness is checked after signal reconstruction by using a recently proposed theorem [24].
The paper is organized as follows. After the definitions in the next section, the adaptive gradient algorithm is presented. In the comments to the algorithm the step-size and bias are considered and illustrated. A new criterion for the algorithm parameter adaptation is proposed in Section 3 as well. Reconstruction is illustrated on examples and the uniqueness of the obtained solution is analyzed. Statistical performance is examined in Section 4. An illustration of the algorithm application to image reconstruction is given as well.
2. Gradient-Based Reconstruction of Missing Samples
A discrete-time signal x(n), n = 0, 1, ..., N − 1, will be considered (with vector notation x). Its transform coefficients will be denoted by X(k). If the number K of nonzero coefficients X(k) ≠ 0 is such that K ≪ N, then the signal is sparse in this transformation domain. Next assume that only M < N signal samples are available in the time domain, at the discrete-time positions

n ∈ M = {n_1, n_2, ..., n_M}. (1)

Under some conditions, studied within the compressive sensing theory [1–3], full signal reconstruction is possible based on the reduced set of available samples if the signal is sparse in a transformation domain. In signal theory this problem can be formulated as a full recovery of the unavailable samples as well. In general, a reconstruction algorithm is based on sparsity measure minimization. Counting the nonzero transform coefficients is the simplest sparsity measure. Counting can be achieved by the mathematical form ||X||_0 = card{k : X(k) ≠ 0}, referred to as the ℓ0-"norm" [1, 20]. The signal reconstruction problem statement is then

min ||X||_0 subject to y = AX. (2)

Vector y contains the available signal samples, while the elements of the transform vector X are the unknown transform coefficients. Matrix A is the measurement matrix. For common signal transforms it corresponds to the inverse transformation matrix with the rows corresponding to the unavailable samples omitted: if W denotes the full inverse transformation matrix, then A is the M × N submatrix of W containing its rows corresponding to the discrete-time instants (1).
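As a small numerical sketch of this measurement model (the values N = 8, the sparse transform vector, and the set of available instants are assumed for illustration; the DFT is used as the transform), the available samples are exactly the rows of the inverse DFT matrix applied to the sparse transform vector:

```python
import numpy as np

N = 8                                    # total number of samples
X = np.zeros(N, dtype=complex)
X[2] = 4.0                               # a K = 1 sparse transform vector
x = np.fft.ifft(X)                       # full time-domain signal

avail = [0, 2, 3, 5, 6]                  # hypothetical available instants (M = 5)
# inverse DFT matrix W, so that x = W @ X
W = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / N
A = W[avail, :]                          # measurement matrix: kept rows only

y = A @ X                                # measurements = available signal samples
assert np.allclose(y, x[avail])
```

The assertion confirms that y = AX reproduces the available time-domain samples, which is the sense in which signal samples are "measurements" of the transform coefficients.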
Although there are some approaches to reconstruct the signal using the ℓ0-norm based formulation [25], in principle this is an NP-hard combinatorial optimization problem. This is the reason why the closest convex norm, the ℓ1-norm of the signal transform, is used instead of the ℓ0-"norm" in the minimization process:

min ||X||_1 subject to y = AX, (3)

where ||X||_1 = Σ_k |X(k)|. Under the conditions defined using the restricted isometry property (RIP) [3, 4], minimization of the ℓ1-norm can produce the same result as (2) [1, 22, 26]. Common gradient-based algorithms reformulate the problem defined by (3) into a form suitable for the application of standard minimization tools. Minimization is done in the sparsity domain. One such method is briefly reviewed in the Appendix.
2.1. Algorithm
A gradient-based algorithm that minimizes the sparsity measure by varying the missing sample values is presented next. In this gradient-based minimization approach the missing samples are considered as variables [22]. The missing sample values are varied in an iterative way to produce lower sparsity measure values and finally to approach the minimum of the convex ℓ1-norm based sparsity measure (3) with an acceptable accuracy. Note that if the reconstruction conditions are met for the ℓ1-norm [3], then the position of the ℓ1-norm minimum will correspond to the true values of the missing samples.
The signal

y(n) = x(n) for n ∈ M, y(n) = 0 for n ∉ M (4)

will be used as the initial estimate. Note that this signal would follow as a result of the ℓ2-norm based minimization of the signal transform. The set of missing sample positions is Q = {n : n ∉ M}. Values of the available samples are fixed and will be considered as constants. The missing sample values are varied through iterations. If we denote by y^(m) the vector of signal values reconstructed after m iterations, then the goal is to achieve y^(m)(n) → x(n) for n ∈ Q. In order to find the position of the function minimum, the relation for the iterative calculation of the missing samples (variables) is defined by using a gradient-based descent on the sparsity measure as

y^(m+1)(n) = y^(m)(n) − μ g^(m)(n), n ∈ Q. (6)

Vector y^(m) contains the minimization variables (missing signal sample values) in the mth iteration. The gradient vector coordinates can be estimated, in the mth iteration, by using finite differences of the sparsity measure. They are calculated, for each missing sample at the instants n_i ∈ Q, as

g^(m)(n_i) = (||Y+^(m)||_1 − ||Y−^(m)||_1) / (2Δ), (7)

where Y±^(m) are the transforms of the perturbed signals y^(m)(n) ± Δδ(n − n_i). Again, for n ∈ M the available signal values are not changed, g^(m)(n) = 0. All gradient vector values, including the zero and nonzero coordinates, in the mth iteration are denoted by g^(m).
Through the statistical study in [22] it has been concluded that an appropriate step parameter μ in (6) is proportional to the finite-difference step Δ.
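The iteration above can be sketched in NumPy as follows (an illustration, not the authors' code; the scaling μ = Δ/N, the 170-degree oscillation threshold, the reduction factor of 10, and the step floor are assumptions of this sketch):

```python
import numpy as np

def sparsity_measure(y):
    """l1-norm of the DFT: the convex sparsity measure being minimized."""
    return np.sum(np.abs(np.fft.fft(y)))

def reconstruct(y, missing, n_iter=5000, angle_deg=170.0):
    """Gradient descent on the missing-sample values, in the style of [22]."""
    y = y.copy().astype(float)
    N = len(y)
    delta = np.max(np.abs(y))            # initial step of the order of the amplitude
    g_prev = None
    for _ in range(n_iter):
        # finite-difference gradient estimate, one coordinate per missing sample
        g = np.zeros(len(missing))
        for j, n in enumerate(missing):
            yp = y.copy(); yp[n] += delta
            ym = y.copy(); ym[n] -= delta
            g[j] = (sparsity_measure(yp) - sparsity_measure(ym)) / (2 * delta)
        y[missing] -= (delta / N) * g    # update only the missing samples
        # step adaptation: a near-reversal of the gradient direction signals
        # oscillation around the (biased) stationary point
        if g_prev is not None:
            d = np.linalg.norm(g) * np.linalg.norm(g_prev)
            if d > 0:
                c = np.clip(np.dot(g, g_prev) / d, -1.0, 1.0)
                if np.degrees(np.arccos(c)) > angle_deg:
                    delta /= 10.0
                    if delta < 1e-12:
                        break
                    g_prev = None
                    continue
        g_prev = g
    return y
```

For a short sparse test signal with two missing samples, the reconstructed values approach the original ones as the step is repeatedly reduced.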
3. Comments on the Algorithm
(i) The algorithm inputs are the signal length N, the set of available sample positions M, and the available signal values x(n), n ∈ M.
(ii) For the DFT, instead of computing the transforms of the perturbed signals y^(m)(n) ± Δδ(n − n_i) from scratch for each n_i ∈ Q, the perturbation transform values Δe^(−j2πkn_i/N) can be used directly. These values do not depend on the signal or on the iteration number m, so they can be calculated only once. A similar simplification can be done for any linear signal transform.
3.1. Adaptive Step
The influence of the step-size on the precision of the solution (the position of the measure minimum) is analyzed next. Assume that the exact solution for the missing samples has been obtained. The change of the sparsity measure is tested by changing a signal sample value by ±Δ. For a signal of sparsity K = 1 in the DFT domain, the signals used for the gradient estimate calculation are x(n) + Δδ(n − n_i) and x(n) − Δδ(n − n_i). In the worst case, the perturbation transform amplitudes are in phase with the signal transform values, so the two sparsity measures differ and the finite-difference gradient estimate is nonzero even at the exact solution. This is an important fact leading to the conclusion that the stationary state of the iterative algorithm is biased: the algorithm moves the solution away from the exact value in order to produce a zero-valued gradient in the algorithm stationary point. Repeating the worst-case calculation around the stationary point shows that the bias limit is proportional to the step-size Δ. Obviously, the bias limit can be reduced by using small values of the step Δ. However, calculation with a very small step would be inefficient and time consuming, with an extremely large number of iterations. An efficient way to use the gradient-based algorithm is to adapt its step-size. In the initial iteration a step-size of the order of the signal amplitude is used. When the algorithm reaches a stationary point with this step, the bias will dominate and the mean squared error will be almost constant. The algorithm can then improve the solution precision by reducing the step-size.
This simple relation between the bias and the step-size has been confirmed through statistical analysis on more complex cases, with a large number of missing samples.
3.2. Adaptation Criterion
The improvement of the gradient-based algorithm convergence by changing the step is analyzed in detail in [22]. A criterion that can efficiently detect the event that the algorithm has reached the stationary point with respect to the mean squared error (the vicinity of the sparsity measure minimum defined by the bias), described in the previous subsection, is proposed next. The presented criterion is based on the change of the gradient vector direction. When the vicinity of the sparsity measure minimum is reached, within the region defined by the bias value, the gradient estimate oscillates and changes its direction by almost 180 degrees. The angle β between two successive gradient vector estimates g^(m−1) and g^(m) can be calculated from

cos β = ⟨g^(m−1), g^(m)⟩ / (||g^(m−1)||_2 ||g^(m)||_2).

If this angle is, for example, above 170 degrees, then we can conclude that the missing signal samples (variables) have reached an oscillatory state around the position of the minimal sparsity measure. When such angle values are detected, the algorithm step is reduced, for example, to Δ/10. The iterations are continued starting from the reconstructed signal values reached in the previous step.
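The angle criterion can be computed as follows (a sketch; the function name and the 170-degree threshold mentioned in the comment are illustrative choices):

```python
import numpy as np

def gradient_angle_deg(g_prev, g_curr):
    """Angle, in degrees, between two successive gradient estimates."""
    c = np.dot(g_prev, g_curr) / (np.linalg.norm(g_prev) * np.linalg.norm(g_curr))
    # clip guards against rounding outside [-1, 1] before arccos
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# If gradient_angle_deg(...) exceeds, e.g., 170 degrees, the variables are
# oscillating around the measure minimum and the step should be reduced.
```

Opposite gradients give an angle of 180 degrees, orthogonal ones 90 degrees, so a near-180-degree reading is a reliable oscillation indicator.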
When the minimum of the sparsity measure is reached with a sufficiently small Δ, the value of Δ can also be used as an indicator of the solution precision. For example, if the signal amplitudes are of order 1, taking a step of the same order in the first iteration will produce a solution with a precision better than 20 [dB]. If the step is then reduced by a factor of 10, a precision better than 40 [dB] will be obtained, and so on. The value of Δ can be used as the algorithm stopping criterion.
3.3. Stopping Criterion
The precision of the result in iterative algorithms is commonly estimated based on the change of the result values in the last iterations. An average of the changes over a large number of variables is therefore a good estimate of the achieved precision. The value of

T = 10 log10( Σ_{n∈Q} |y_p(n) − y(n)|² / Σ_{n∈Q} |y(n)|² )

can be used as a rough estimate of the reconstruction-error-to-signal ratio. In this relation y_p(n) is the signal reconstructed before the step reduction and y(n) is the reconstructed signal after the iterations done with the reduced step. The value of T can be used as the algorithm stopping criterion. If the value of T is above the required precision threshold in [dB], the calculation procedure should be continued with reduced values of the step Δ.
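The stopping measure T can be sketched as a small helper (illustrative, not from the paper's code; the function name is assumed):

```python
import numpy as np

def precision_db(y_prev, y_curr, missing):
    """Rough reconstruction-error-to-signal ratio, in dB, over the missing positions."""
    err = np.sum(np.abs(y_prev[missing] - y_curr[missing]) ** 2)
    sig = np.sum(np.abs(y_curr[missing]) ** 2)
    return 10.0 * np.log10(err / sig)
```

For instance, if the missing-sample values change by 1% of their magnitude between step reductions, T evaluates to about −40 dB.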
3.4. Large Step Influence
A possible divergence of gradient-based algorithms is related to large step values Δ. For a transform coefficient X and a large real-valued Δ, it can be shown from the problem geometry that the finite difference of the measure is bounded, with the lower limit obtained when X is imaginary-valued and the upper limit when X is real-valued. If Δ is large, the missing signal values will therefore be adapted by an amount that is almost independent of X. The missing sample values will oscillate within the range of the original signal values until the step Δ is reduced through the iterations. The variable values will change arbitrarily, within the order of the signal amplitude, as long as Δ is too large. This will not influence the further convergence of the algorithm once the step assumes appropriately small values.
A pseudocode of this algorithm is presented in Algorithm 1.

Illustrative Example 1. The effects analyzed within the comments on the algorithm will be illustrated on a simple sparse signal with two missing samples. Reconstruction of the signal is done through iterations. The initial value of the step is of the order of the signal amplitude, and zero initial values of the two missing samples are used. The missing sample values in the first iterations are given in Figure 1(a) by dots (connected by a line). After a number of iterations the algorithm does not significantly change the missing sample values with the given step-size (the lower subplot within the figure presents the changes of the variables on a highly zoomed scale). When the vicinity of the stationary point is reached, the gradient estimate is almost zero-valued and changes its direction by almost 180 degrees. The measure values lie on a contour with approximately the same value (circles on the contour plot). The algorithm resumes its fast approach toward the exact missing sample values after the algorithm step is reduced. When a new stationary state is reached, the step is reduced again and the approach to the correct missing sample values is continued.
The bias upper limit in the stationary state is proportional to the step Δ, according to the analysis in Section 3.1. After each reduction of the step-size from Δ to Δ/10 the bias-caused MSE will be reduced by almost 20 [dB]. The results of the signal reconstruction and the obtained MSE for the calculated missing values are presented in Figure 1.
Illustrative Example 2. The reconstruction process is repeated using a second sparse signal with a different set of missing sample positions. The reconstruction results for this signal are presented in Figure 2. After the signal is reconstructed, the problem is to establish whether this reconstruction is unique, in the sense that there is no other signal (set of missing sample values) with the same or lower sparsity. A very simple check of the solution uniqueness, using the missing sample positions and the positions of the reconstructed signal's nonzero components in the sparsity domain, is given in [24] as follows.
Theorem 1. Consider a signal that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is N samples and that the samples at the instants n ∈ Q are missing. Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity K, with the nonzero values at the positions k_1, k_2, ..., k_K. The reconstruction result is unique if the sparsity K satisfies an inequality whose bound is calculated from the signal length N, the number of missing samples, and integers determined by the positions of the missing samples; the precise statement and the calculation of these integers are given in [24].
Signal-independent uniqueness corresponds to the worst-case signal form.
The answer is obtained almost immediately, since the computational complexity of evaluating the theorem's condition is low. The proof is given in [24].
For the set of missing samples considered in the previous example, the condition of the theorem is easily evaluated. The signal-independent bound guarantees uniqueness of the reconstructed signal, for the considered missing sample positions, for any signal parameters, as long as the sparsity is below this bound. When a sparse signal is reconstructed, for some sets of missing sample positions uniqueness may be guaranteed for higher values of the sparsity K. Additional details and examples may be found in [24].
4. Statistical Analysis and Comparison
Consider a general real-valued form of a signal sparse in the DFT domain,

x(n) = Σ_{i=1}^{K} A_i cos(2πk_i n/N + φ_i).

The total number of signal samples is N. The signal sparsity K is varied over a range of values. The signal amplitudes A_i, frequency indices k_i, and phases φ_i are random variables. The signal amplitudes are Gaussian random variables with unit variance; the frequency indices are random integers within the available frequency grid. The phases are uniform random variables on [0, 2π), in each signal realization. The signal is reconstructed in a large number of independent realizations for each sparsity K and for random positions of the missing samples. The reconstructed signals are denoted by x_r(n). The reconstruction results are presented in Figure 3(a). The signal-to-reconstruction-error ratio (SRR), given in [dB], is

SRR = 10 log10( Σ_n |x(n)|² / Σ_n |x(n) − x_r(n)|² ).
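The SRR can be computed with a simple helper (a sketch; the function name is illustrative):

```python
import numpy as np

def srr_db(x, x_rec):
    """Signal-to-reconstruction-error ratio in dB."""
    return 10.0 * np.log10(np.sum(np.abs(x) ** 2) /
                           np.sum(np.abs(x - x_rec) ** 2))
```

A reconstruction that is exact up to machine precision yields a very large SRR, while an error of the order of the signal itself yields an SRR near 0 dB.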
Bright colors indicate the region where the algorithm fully recovered the missing samples (compared to the original samples) in all realizations, while dark colors indicate the region where the algorithm could not recover the missing samples in any realization. In the transition region between these two we have both cases when the signal recovery is not achieved and cases of full signal recovery. A stopping criterion with an accuracy of 120 [dB] is used. It corresponds to a precision in the recovered signal of the same order as in the input samples, if they were acquired by a 20-bit A/D converter. The analysis is repeated with an additive input Gaussian noise such that the input signal-to-noise ratio is 20 [dB] in each realization (Figure 3(b)). The reconstruction error in this case is limited by the input signal-to-noise ratio.
The average reconstruction error in the noise-free cases is related to the number of full-recovery events. The number of full-recovery events is therefore checked and presented in Figures 4(a) and 4(b). The average number of algorithm iterations needed to produce the required precision, as a function of the number of missing samples and the signal sparsity K, is presented as well (Figure 4(c)), along with the corresponding average computation time (in seconds) on a Windows PC with an Intel Dual Core processor (Figure 4(d)). The average computation time is proportional to the average number of iterations multiplied by the number of missing samples (variables).
The efficiency of the presented algorithm is compared with standard routines used for ℓ1-norm based minimization problems. The performance of the proposed algorithm is compared with an algorithm that recasts the recovery problem (3) into a regression framework (LASSO-ISTA) and with a linear programming framework using the primal-dual interior point method (the L1-magic code in MATLAB). A short review of the LASSO-ISTA algorithm is presented in the Appendix. All three algorithms are run using 100 sparse signals with random parameters. The algorithm parameters are set in such a way that the average computational time is of the same order in all cases. The results are presented in Table 1. In the table, K denotes the sparsity, the number of missing samples is given for each case, and MAE stands for the mean absolute error. The calculation time using MATLAB is presented in all cases.
We can conclude that in most of the considered cases the proposed algorithm outperforms the other two algorithms in both calculation time and accuracy.
Example 3. The algorithm can be applied with other signal transforms as well [23]. An example with missing (salt-and-pepper noise) pixels in an image, with the 2D-DCT as its sparsity domain, will be presented. Consider the standard MATLAB test image canoe.tif (Figure 5) and assume that 50% of the samples are corrupted (noisy image). The pixels with extreme values are removed and considered as unavailable. Since the 2D-DCT coefficients converge fast, we will assume that the sparsity condition is satisfied (without imposing it in an explicit way by setting some 2D-DCT coefficients to zero). The image is reconstructed by minimizing the 2D-DCT sparsity measure and varying the missing pixel values according to the presented algorithm. The original, corrupted, and reconstructed images after the 4th and 50th iterations are shown in Figure 5.
5. Conclusion
The performance of the recently proposed gradient-based signal reconstruction algorithm operating in the domain of the signal samples/measurements is studied. The application of this algorithm is very efficient in signal processing problems with a reduced set of data, since it is easy to define the set of missing signal samples/measurements. The missing signal sample values are considered as variables, while the available samples are fixed and not changed by the algorithm. An efficient criterion, based on the gradient directions, is proposed for the algorithm step-size adaptation. Statistical analysis confirms the computational efficiency. The algorithm can easily be adapted to the case of nonuniform signal samples, when only a reduced set of linear combinations of the signal samples is available.
Appendix
Gradient-Based Reconstruction in the Sparsity Domain
One way to formulate the minimization problem (3), in order to find its solution, is based on the Lagrangian form

F(X) = ||y − AX||_2² + λ||X||_1. (A.1)

The minimization function is denoted by F(X). The signal reconstruction in the sparsity domain is then formulated as [9]

min_X { ||y − AX||_2² + λ||X||_1 }. (A.2)

This problem can be solved using the least absolute shrinkage and selection operator (LASSO) minimization. The parameter λ defines a balance between the squared absolute error and the sparsity.
The ℓ1-norm based reconstruction algorithms do not have a closed-form solution. In order to define an iterative procedure, a nonnegative term

(X − X_s)ᵀ(αI − AᵀA)(X − X_s),

with α not smaller than the largest eigenvalue of AᵀA, is added. It has a zero value at the position of the problem solution, denoted by X_s. This term is added to the function F(X), and the new minimization function is minimized by setting its gradient with respect to X to zero. The resulting coordinate-wise equation, of the form x + λ sign(x) = b, can be solved by using the soft-thresholding rule

soft(b, λ) = sign(b) max(|b| − λ, 0).

The same rule is applied to each coordinate of the sparsity-domain vector. The iterative form for the signal reconstruction in the sparsity domain is

X^(k+1) = soft( X^(k) + (1/α) Aᵀ(y − AX^(k)), λ/(2α) ).

This is the iterative soft-thresholding algorithm (ISTA) for the minimization of problem (A.2) [25]. The initial estimate is X^(0) = Aᵀy, corresponding to the transform of the initial signal (4).
Competing Interests
The authors declare that they have no competing interests.