Abstract

An approach to the reconstruction of sparse signals that treats the missing measurements/samples as variables has recently been proposed. The number and positions of the missing samples determine the uniqueness of the solution. It is assumed that the analyzed signals are sparse in the discrete Fourier transform (DFT) domain. A theorem for a simple uniqueness check is proposed. Two forms of the theorem are presented: one for an arbitrary sparse signal and one for an already reconstructed signal. The results are demonstrated on illustrative and statistical examples.

1. Introduction

In many engineering applications, an incomplete set of samples/measurements arises due to physical system constraints. In some cases, randomly positioned samples/measurements are heavily corrupted, so it is better to omit them and consider them as unavailable [1, 2]. In these applications, the reduction of the considered dataset is not a result of an intentional compressive sensing strategy [3–8]. Nevertheless, the primary goal is signal recovery, as in compressive sensing theory [8–15]. Recently, an adaptive-step gradient-based method for the reconstruction of sparse signals with missing/omitted samples has been proposed [16]. This method reconstructs the missing samples/measurements in order to complete the set of samples/measurements, in contrast to the common reconstruction methods that recover the signal in its sparsity domain. The final result of all these algorithms is the same: full recovery of the signal.

In general, the uniqueness of the reconstructed signal is guaranteed if the restricted isometry property is used and checked with appropriate isometry constants [5]. However, two problems exist in the implementation of this approach. For a specific measurement matrix, it produces quite conservative bounds, so in practice it produces a large number of false alarms for nonuniqueness. In addition, a uniqueness check based on the restricted isometry property requires a combinatorial approach, which is an NP-hard problem (like the solution of the problem itself using the zero-norm in the minimization). In the adaptive gradient-based method, the missing samples/measurements are considered as the minimization variables. The values of the available samples are known and fixed. Obviously, the number of variables in the minimization process is equal to the number of missing samples/measurements in the observation domain. This approach is possible when the signal sparsity domain is a common signal transform. Then the missing and available samples/measurements together form a complete set of samples/measurements.
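To make the role of the missing samples as minimization variables concrete, the following Python sketch outlines a gradient-style fill-in of the missing positions. It uses the l1-norm of the DFT as the sparsity measure and estimates its gradient with respect to each missing sample by a symmetric perturbation; the function name, parameter values, and the simple step/delta schedule are illustrative assumptions, not the exact adaptive-step algorithm of [16].

import numpy as np

def gradient_fill(y, missing, delta=1.0, mu=0.2, n_iter=200):
    # y       : signal with an initial guess (e.g., zeros) at the missing positions
    # missing : indices of the missing samples (the minimization variables)
    # The l1-norm of the DFT serves as the sparsity measure to be minimized.
    y = np.asarray(y, dtype=complex).copy()
    for _ in range(n_iter):
        g = np.zeros(len(missing))
        for i, n in enumerate(missing):
            yp, ym = y.copy(), y.copy()
            yp[n] += delta
            ym[n] -= delta
            g[i] = (np.abs(np.fft.fft(yp)).sum() -
                    np.abs(np.fft.fft(ym)).sum()) / (2 * delta)
        y[missing] -= mu * g        # descend on the estimated gradient
        delta *= 0.99               # crude substitute for the adaptive step of [16]
    return y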

The discrete Fourier transform (DFT), as the most important signal transform, is considered in this paper as the sparsity domain of the signal. A theorem for the uniqueness of the reconstructed solution, based on the missing sample variations, is presented. Two forms of the theorem are given: one stating the uniqueness condition for a given missing sample transformation matrix and the other providing a uniqueness check when a sparse signal has already been recovered using a reconstruction algorithm. The solution is unique in the sense that the variation of the missing sample values cannot produce another signal of the same or lower sparsity. The theorems provide an easy and computationally efficient uniqueness check.

The paper is organized as follows. After the introduction, the uniqueness theorems and corollaries are defined and illustrated with examples. The proofs are presented in Section 3. The worst-case signal is derived and related to the group delay function in Section 4. The theoretical results are demonstrated on simple illustrative and statistical examples as well.

2. On the Reconstructed Signal Uniqueness

Consider a signal with . Assume that of its samples at the positions are missing/omitted. The signal is sparse in the DFT domain, with sparsity . The reconstruction goal is to get , for all , using the available samples at . We will consider a new signal of the form , where for the available signal positions and may take arbitrary values at the positions of missing samples . The DFT of this signal is . The positions of nonzero values in are , with . In the minimization process, the values of the missing samples of for are considered as variables. The goal of the reconstruction process is to get or for all . This goal should be achieved by minimizing the sparsity of the signal transform . The existence of a unique solution of this problem depends on the number of missing samples, their positions, and the signal itself.
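The bookkeeping behind this formulation is simple: the available samples stay fixed, trial values are placed at the missing positions, and each candidate is judged by the sparsity of its DFT. A minimal sketch of that bookkeeping (the names, the tolerance, and the dictionary representation of the available samples are assumptions made here for illustration):

import numpy as np

def dft_sparsity(y, tol=1e-10):
    # Number of DFT coefficients with magnitude above a small tolerance.
    return int(np.sum(np.abs(np.fft.fft(y)) > tol))

def assemble(available, missing, trial_values, N):
    # available    : dict {position: fixed sample value}
    # missing      : positions of the missing samples (the variables)
    # trial_values : values currently assigned to the missing positions
    y = np.zeros(N, dtype=complex)
    for n, v in available.items():
        y[n] = v
    for n, v in zip(missing, trial_values):
        y[n] = v
    return y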

First, assume that the signal can take any form, including the worst possible one. Then only the number of missing samples and their positions are considered. Uniqueness, in this case, means that if a signal with a transform of sparsity is obtained using a reconstruction method, with a given set of missing samples, then there is no other signal of the same or lower sparsity that satisfies the given set of available sample values, using the same set of missing samples as variables.

Theorem 1. Consider a signal that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is samples and that samples are missing at the instants . Assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity . The reconstruction result is unique if the inequality holds. The integers are calculated as

Example. Consider a signal with and missing samples at . Using the theorem, we will find the sparsity limit below which we can claim that the reconstructed sparse signal is unique for any signal form.

(i) For , we use and get .

(ii) For , the number is the greater of the values , that is, the maximal number of missing samples at even or at odd positions. Thus with .

(iii) Next, is calculated as the maximal number of missing samples whose mutual distance is a multiple of . For various initial counting positions, the numbers of missing samples with distances being multiples of are , and , respectively. Then with

(iv) For the numbers of missing samples at distances being multiple of are found for various . The value of is with .

(v) Finally we have two samples at distance (samples at the positions and ) producing with .

The reconstructed signal of sparsity is unique if or

The theorem considers a general signal form. It includes the case when the amplitudes of the signal components are related to each other and to the missing sample positions. The specific signal form required for the theorem bound to be reached is analyzed in Section 4. Since this kind of relation is a zero-probability event, we define a corollary neglecting the probability that the signal values are dependent on each other and related to the missing sample positions at the same time.

Corollary 2. Consider a signal that is sparse in the DFT domain. Assume that the signal length is samples and that samples are missing at the instants . Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity . Assume that the amplitudes of the signal components are arbitrary, with arbitrary phases, so that the case when all of them are related to the values defined by the missing sample positions is a zero-probability event. The reconstruction result is not unique if the inequality holds. The integers are calculated in the same way as in Theorem 1.

Example. Consider a signal with and missing samples at . The sparsity limit when we are able to claim that the reconstructed sparse signal is not unique is

The pseudocode for the uniqueness check according to Theorem 1 and Corollary 2 is presented in Algorithm 1. Note that Corollary 2 uses the condition that the reconstruction result is nonunique (instead of the condition that the reconstruction result is unique, as in Theorem 1), since the zero-probability events are included here.

Require:
 (i) Set of missing sample positions
 (ii) Total number of signal samples ,
(1)
(2)
(3) for do
(4)  for do
(5)    and
(6)  end for
(7)  
(8)  if then                 Theorem 1 check
(9)  
(10) end if
(11)  if then                 Corollary 2 check
(12)  
(13)  end if
(14) end for
Output: ,
 (i) Every solution with sparsity is unique.
 (ii) Solution with sparsity is unique with probability one, excluding zero-probability event
   (when the amplitudes of signal components are related to each other with a relation defined by
   the missing sample positions).
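A runnable counterpart of the counting performed in Algorithm 1 can be sketched as follows. For every divisor s of N it returns M_s, the maximal number of missing samples whose mutual distances are multiples of s (equivalently, that share the same residue modulo s); these are the counts that are then compared against the Theorem 1 and Corollary 2 bounds. The function names and the example positions are illustrative assumptions.

def divisors(N):
    return [s for s in range(1, N + 1) if N % s == 0]

def missing_counts(missing, N):
    # M_s: maximal number of missing samples sharing the same residue modulo s,
    # i.e., whose mutual distances are all multiples of s.
    return {s: max(sum(1 for n in missing if n % s == r) for r in range(s))
            for s in divisors(N)}

# Hypothetical missing-sample positions for a signal of length N = 16
print(missing_counts([1, 5, 6, 13], 16))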

Corollary 2 provides the uniqueness test for the given positions of unavailable samples. In the cases with , it exploits the periodic structure of the transformation matrix of the missing samples. The periodic form assumes that the positions of possible zero values in do not interfere with the positions of the nonzero signal values. This is possible in the worst-case analysis. For example, with two missing samples at the positions and , the reconstruction process assumes that there are zero values in and that the same number of zero values is preserved in . This event can occur if we assume that all nonzero values of have the same structure as . In this specific case, it means that all of the nonzero signal coefficients are either at odd or at even positions in the frequency domain.
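As a quick numerical illustration of this two-sample case (positions and values chosen here purely for illustration): a variation placed at two samples N/2 apart, with opposite values, has a DFT that vanishes on all even bins and stays nonzero on all odd bins.

import numpy as np

N, n1 = 16, 3
z = np.zeros(N)
z[n1] = 1.0             # variation at the first missing sample
z[n1 + N // 2] = -1.0   # opposite value at the sample N/2 away
Z = np.fft.fft(z)
print(np.abs(Z[0::2]).max())   # ~0: every even-indexed DFT bin vanishes
print(np.abs(Z[1::2]).min())   # 2.0: every odd-indexed DFT bin remains nonzero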

Numerical Example. A signal with samples is considered. The signal sparsity is varied from to . For each signal sparsity, the number of missing samples is varied from to . For each pair, 1000 trials are performed with randomly positioned missing samples. Uniqueness is checked by Theorem 1 and Corollary 2. The percentage of trials in which uniqueness is guaranteed is presented in Figure 1. We can clearly see two regions: one where uniqueness was achieved in every trial () and another where the solution was always nonunique (). In the transition between these two regions, the uniqueness highly depends on the missing sample positions, producing .

We see that there is a sharp transition, for example, at (for ), from when to for (marked with white dots in Figure 1(a)). It means that the difference of the probabilities is almost . Let us explain this effect. Consider the Theorem 1 condition for . If , the solution will be unique for and nonunique for . This equality is satisfied for , , , and . It means that at least one of the following holds: (1) there are samples at distance , (2) there are samples at a distance being a multiple of , (3) there are samples at a distance being a multiple of , and (4) there are samples at a distance being a multiple of . The probability that, among samples out of , there are samples at the distance is . Since the other events have lower probabilities, this one is sufficient to explain the sharp change in . We may write .
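The sharp transition is thus traced back to the probability that a random set of M missing positions contains two positions exactly N/2 apart. A small Monte Carlo sketch that estimates this probability empirically (the values of N, M, and the number of trials are illustrative; the closed-form expression used in the text is not reproduced here):

import numpy as np

def prob_pair_at_half_period(N, M, trials=20000, seed=0):
    # Estimate the probability that M randomly chosen missing positions
    # (out of N) contain a pair of positions at distance exactly N/2.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        pos = set(rng.choice(N, size=M, replace=False).tolist())
        if any((p + N // 2) % N in pos for p in pos):
            hits += 1
    return hits / trials

print(prob_pair_at_half_period(128, 48))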

After the signal reconstruction, we are in a position to additionally specify the uniqueness requirements using the reconstructed signal. When a sparse signal is reconstructed, we want to check the uniqueness of this signal. It means that the signal , with its transform which is nonzero at , is obtained, and we want to check whether there is another signal of the same or lower sparsity, where is the DFT of arbitrary values of the samples at the missing sample positions. The positions of nonzero values in are not arbitrary, while the positions of zero and nonzero values in could change to produce the minimal possible sparsity of . In the previous example with two missing samples and , when the signal is already recovered, it means that we cannot assume that all are either odd or even. They are given, since we have already reconstructed a sparse signal.

Theorem 3. Consider a signal that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is samples and that samples are missing at the instants . Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity . Assume that the positions of the reconstructed nonzero values in the DFT are . The reconstruction result is unique if the inequality holds. The integers and are calculated as , where .

Note that for this theorem reduces to Theorem 1. For DFT values equally distributed over all positions, this theorem produces a result close to

Corollary 4. Consider a signal that is sparse in the DFT domain with unknown sparsity. Assume that the signal length is samples and that samples are missing at the instants . Also assume that the reconstruction is performed and that the DFT of the reconstructed signal is of sparsity . Assume that the positions of the reconstructed nonzero values in the DFT are . Assume that the amplitudes of the signal components are arbitrary, with arbitrary phases, so that the case when all of them are related to the values defined by the missing sample positions is a zero-probability event. The reconstruction result is not unique if the inequality holds. The integers and are calculated as in Theorem 3. The case when all of the signal components are related to the values defined by the missing sample positions is not considered here.

Pseudocode for uniqueness check according to Theorem 3 and Corollary 4 is presented in Algorithm 2.

Require:
 (i) Set of missing sample positions
 (ii) Set of nonzero values in the reconstructed DFT
 (iii) Total number of signal samples ,
(1)
(2)
(3)
(4) for do
(5) for do
(6)    and
(7) end for
(8)  
(9) for do
(10)   and
(11) end for
(12) Sort array in non-decreasing order
(13) 
(14) if then                  Theorem 3 check
(15)  
(16) end if
(17) if then                  Corollary 4 check
(18)  
(19) end if
(20) end for
(21)
Output:
 (i) when the considered solution is unique.
 (ii) when the considered solution is unique with probability one excluding zero-probability event
  (when amplitudes of the signal components are related to each other with a relation defined by
  missing sample positions).
 (iii) when the considered solution is not unique.

Example. Consider a signal with and missing samples at . Assume that with these missing samples we have reconstructed signals with nonzero DFT values at the positions (a) and (b) . By testing these two signals, we get the following decisions. According to Theorem 1, we cannot claim uniqueness in either of these cases, since in the first case and in the second case; both are greater than the Theorem 1 bound . The same holds for Corollary 2, since both are . By testing these results with Theorem 3, we get that in case (a) the solution is nonunique. This is due to the very specific form of the reconstructed signal, with all components located at odd frequency positions. Since the sparsity was defined by the periodicity in , variations of the two signal samples and can produce a signal with the same sparsity as the reconstructed signal. These two samples, as variables, are able to produce many zero values in , at either odd or even positions in frequency (Section 4). In this case, they are at the even positions of . However, in signal (b) that is not the case: the nonzero values are distributed over both even and odd frequency positions. Although the sparsity of this signal is , the reconstruction is unique. The distribution of nonzero values in the reconstructed is such that, by varying the two samples and , we cannot produce a signal of the same or lower sparsity with nonzero and . The limit, in this case, is defined by the lower periodicity in than . Thus, if we obtain this signal using a reconstruction algorithm, the solution is unique.
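The distinction between cases (a) and (b) comes down to how the reconstructed nonzero DFT positions are distributed over the residue classes tested by Theorem 3 (here, even versus odd bins). A small sketch of that counting, with hypothetical support sets since the exact positions of this example are not repeated here:

def support_per_class(k_positions, s):
    # Count how many reconstructed nonzero DFT positions fall into each
    # residue class modulo s (s = 2 separates even and odd frequency bins).
    counts = [0] * s
    for k in k_positions:
        counts[k % s] += 1
    return counts

# Hypothetical supports for N = 16:
print(support_per_class([1, 3, 5, 7, 9, 11, 13, 15], 2))   # all odd -> degenerate, case (a)-like
print(support_per_class([1, 2, 5, 7, 10, 11, 13, 14], 2))  # mixed   -> case (b)-like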

Example. Consider a signal with and missing samples at . The reconstructed signal is at the frequencies (a) , (b) . We can easily check that in all cases, with Theorem 1, Corollary 2, and Theorem 3, the reconstruction is nonunique, although or is much smaller than the number of available samples . The answer is obtained almost immediately, since the computational complexity of Theorem 1, Corollary 2, and Theorem 3 is of order .

Numerical Example. A signal with samples is considered. The signal sparsity is varied from to . For each signal sparsity, the number of missing samples is varied from to . For each pair, 1000 trials are performed with randomly positioned nonzero DFT values and randomly positioned missing samples. In each trial the uniqueness is checked by Theorem 3 and Corollary 4. The percentage of trials in which uniqueness is guaranteed is presented in Figure 2. We can clearly see two regions: one where uniqueness was achieved in every trial () and another where the solution was nonunique in every trial (). The transition between the regions is quite sharp. In this transition region, for a given , the uniqueness highly depends on the positions of the DFT values and missing samples, producing . The transition region for Theorem 3 is wider than that for Corollary 4.

In Figure 3, the regions where the probability is higher than 99% are presented. The first region is defined by Theorem 3, which guarantees uniqueness for any signal. The second region is defined by Corollary 4, and it guarantees uniqueness with high probability (this region is signal dependent). These regions are combined in the third subplot.

3. Proofs of the Theorems and Corollaries

3.1. Proof of Theorem 1

Consider the DFT of a signal with samples. For simplicity of presentation we assume the common case , although the results can be generalized to any . Assume that the available samples are at the instants and that the missing samples are at the instants , . Assume that the reconstruction process is done and that the obtained signal is sparse in the DFT domain. Its sparsity is , with nonzero coefficients . Under this assumption the signal is . The amplitudes of the signal components are arbitrary, with arbitrary phases .

Let us form a signal , where for and takes arbitrary values at the positions of the missing samples . The DFT of this signal is . Denote the number of nonzero values in by . The DFT of the signal is . Denote the number of nonzero values in by . The aim of the minimization of is to produce the smallest possible value of . If this is possible in the trivial case only, then our solution is unique. If there exists a nontrivial solution for with , then we have two different solutions and with the same sparsity, meaning that our solution is not unique.

The DFT of signal can be written in a matrix form as

Since we look for zeros in , without loss of generality we can rewrite system (22), normalized with the first column corresponding to , as , where

In general, during the minimization, we have variables ( degrees of freedom). First assume that there is no common period (smaller than ) in the columns of the transform matrix for the missing sample positions . This case corresponds to pairwise coprime numbers . Then the variables can be used to produce zeros in at the positions where there is no signal and, in the worst possible case, to cancel out all nonzero coefficients in . Therefore, the largest possible number of zero values in is , and the lowest possible number of nonzero values in is . If , then the considered solution is unique, since only the trivial solution results in sparsity and every nontrivial solution results in . If we obtain the reconstructed signal with sparsity , then the solution is not unique. However, if , it still does not mean that the solution is unique, since we assumed that there is no periodicity in the transform matrix (23). Next we will include all possible cases when some of the columns of the transform matrix in (23) may have a common period lower than .
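The degrees-of-freedom argument can be checked numerically. With M missing samples and no periodic structure, a variation supported on those positions can be chosen, nontrivially, so that its DFT vanishes on M - 1 selected bins, but in general not on more. The positions and bins below are illustrative; the null-space construction simply solves the M - 1 homogeneous conditions.

import numpy as np

N, missing = 16, [0, 1, 3, 7]      # illustrative missing-sample positions (M = 4)
zero_bins = [2, 5, 11]             # M - 1 DFT bins to be forced to zero

# Rows of the DFT matrix at the selected bins, restricted to the missing columns.
A = np.exp(-2j * np.pi * np.outer(zero_bins, missing) / N)   # (M-1) x M system

# Any null-space vector of A gives a nontrivial variation, supported on the
# missing positions, whose DFT vanishes on the selected bins.
_, _, Vh = np.linalg.svd(A)
z = np.zeros(N, dtype=complex)
z[missing] = Vh[-1].conj()
print(np.round(np.abs(np.fft.fft(z))[zero_bins], 12))        # ~0 at the selected bins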

The matrix of coefficients cannot be periodic with a period smaller than . Thus, with regard to the period of the whole matrix, the worst case is a transformation matrix that repeats after exactly rows, where is a divisor of . Denote the number of repetitions by .

For a periodic structure of the transformation matrix in (23), with such a period that the rows are repeated times within , the largest number of zero values in is now . In addition, the nonzero values in can, in the worst possible case, cancel all nonzero DFT values of . Thus, in this case, the lowest possible number of nonzero values in is . For a unique solution it should be greater than the signal sparsity; that is, or . This is the result of the uncertainty principle in the DFT, presented in Section 3.2.

The process does not end here. In the minimization, we must also consider all subsets of the missing samples . Namely, it can happen that the worst case regarding the maximal number of zero values in is not the case with the full set of variables . Some subsets of the variables may produce a higher number of zero values in than the whole set of variables. It means that the reconstruction algorithm may find some variables to be zero-valued and vary only a subset of the remaining variables .

Subsets of Missing Samples. We have concluded that the periodicity reduces the sparsity of by increasing the number of possible zero values in . Consider a general set of missing sample positions . Then the algorithm for uniqueness should check the periods for all possible subsets of missing samples. Assuming, as in Theorem 1 and without loss of generality, the cases are as follows.

(1) Using all missing samples, and assuming that there is no common period for all of them (in the sense of (23)), the unique solution is obtained if . The cases with a periodic matrix structure will be included in the steps that follow.

(2) Consider the minimal possible number of periods for . It corresponds to repetitions in the matrix of coefficients (23), with period . Any subset of containing only even or only odd positions can be considered as a set of minimization variables with period . The number of missing samples at even positions can be written as . The same should be done for the odd positions in : . Since we look for the worst case in our reconstruction algorithm, we choose the set with more variables (more degrees of freedom). It is . Since the number of periods in the matrix of coefficients is for these two subsets of missing samples, at most zero values can be produced in . This can be larger than . Therefore, besides the condition considered in step (1), we also have to check and use the worst case as the limit for . It means that, at this point, we should compare and .

Note that the largest possible is . In this case, the solution is unique if . This corresponds to the case when all even (or all odd) signal samples are missing. Then we cannot uniquely reconstruct a signal even for sparsity .

(3) This analysis should be continued for all possible periods in . For , the next possible period is , with the number of periods . The coefficients in matrix (23) are periodic with if the distances between missing samples are multiples of four. Thus, we should divide the set of all missing sample positions into subsets where the distances between are multiples of , that is, where is a constant. There are such subsets, obtained for , where . In the same way as in step (2), denote the cardinality of the largest subset by . If we find such that , then in we can consider as variables (nonzero values) only the samples from the set containing such that , producing . Then the unique solution is obtained only if . The worst case is when the positions of all missing samples are such that . Then the solution is unique if . In the worst case, with as many as available samples (), we may reconstruct only signals with sparsity .

(4) The next possible period for is . The periodicity with period should be considered for any subset of , calculating . If , then the sparsity limit further reduces to . The worst case is when the positions of all missing samples are such that ; then the solution is unique for . In this case, even with of the available samples, we may reconstruct only signals with sparsity .

(5) The process should continue for all possible periods , . We should calculate . If it is such that , then the sparsity constraint for uniqueness is

(6) Summing up all the cases we get the theorem result that the uniqueness condition is

In the final stage, if just two special samples are missing, then , with the solution being nonunique if . Thus, the special positions of two missing samples reduce the maximal number of components that can be uniquely detected to .

(7) This kind of proof can be generalized to any signal length , with the possible periods of the matrix obtained as divisors of .

3.2. Uncertainty Principle and Theorem 1 Bounds

The bounds for sparsity are compared with the results following from the uncertainty relation for the DFT, where the number of nonzero values in is denoted by and the number of nonzero values in its DFT by . In the worst case, for a given and for the worst possible distribution of positions , we have . Consider first the case when we cannot exclude the possibility that the signal DFT assumes values related to the missing sample positions, with the worst possible distribution of positions . The maximal number of nonzero values is . In the worst case, of the nonzero signal DFT values can cancel nonzero values in , meaning that the minimal number of nonzero values, in the worst case, is . Since it should not be the solution of our minimization problem, the case with all , producing the signal , and its sparsity should be lower: or

For an arbitrary signal whose values cannot be described by the missing sample positions, and for the worst possible distribution of positions , we get or

These two cases are obtained through the previous analysis as the special worst cases of Theorem 1 and Corollary 2 as well.
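The uncertainty relation invoked above (the product of the numbers of nonzero samples in time and in frequency is at least N) can be verified numerically; equality is attained by a periodic impulse train. A minimal check, with N and the comb period chosen for illustration:

import numpy as np

def support_product(x, tol=1e-10):
    # ||x||_0 * ||X||_0 for the DFT pair (x, X), counted with a small tolerance.
    X = np.fft.fft(x)
    return int(np.sum(np.abs(x) > tol)) * int(np.sum(np.abs(X) > tol))

N, s = 16, 4
comb = np.zeros(N)
comb[::s] = 1.0                                  # N/s nonzero samples in time
print(support_product(comb), ">=", N)            # equality case: (N/s) * s = N
print(support_product(np.random.default_rng(0).standard_normal(N)), ">=", N)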

3.3. Proof of Corollary 2

In Theorem 1 we assumed that takes the maximal possible number of zero values and that, at the same time, the remaining nonzero values of cancel out all signal components . This assumption is very unlikely to hold, since after assuming the maximal possible number of zeros in , we have only one remaining degree of freedom. It means that we can cancel out one signal component with the one remaining variable, and we have to assume that all other components have such specific values that they will also be canceled out. Consider now the case when the signal components are not adjusted to these fixed values of . Then, in reality, we can expect to cancel out only one component of with one variable. Repeating the proof of Theorem 1 with for nonuniqueness, instead of for uniqueness, produces the proof of Corollary 2.

3.4. Proof of Theorem 3

Here we start with two missing samples and at the distance . The worst case is , with the uniqueness condition . These special positions of two missing samples reduce the maximal number of components that can be uniquely detected to . However, in this case the positions of the nonzero DFT values must be very specific: all of them should be at either even or odd positions, producing . If that is not the case, then will reduce the number of zero values that can be achieved in the worst case in . Since of the nonzero signal coefficients are not at the positions of the nonzero values of , then, in the best possible case, only out of nonzero values of the DFT can be canceled out by the predetermined missing sample values (as in Theorem 1). In addition to the nonzero values of at the nonzero positions of , there are nonzero values of positioned at the zero-value positions of . They cannot be canceled by . The uniqueness condition for will therefore require . This correction of the uniqueness condition with should be done only if . Denote by the sorted array . The correction with can be calculated as . For (when there are no two samples at the distance ), the upper summation limit will be and this kind of correction will not be done; . Note that the sum of all , , is equal to the signal sparsity .

In the same way, for (missing samples at distances being multiples of ), the period of is . In the worst case, when , the maximal number of nonzero values of at distances being multiples of 4 can be canceled out (with the nonzero values of ). The remaining nonzero values in the DFT of the signal will be at the zero positions of and cannot be canceled out. Note that if , then two nonzero values in will remain (with 3 variables we can make only 2 zero values in each period of ). The total number of nonzero values of that can be canceled is the sum of the two largest numbers in defined by , where . The number of remaining nonzero DFT values that cannot be canceled out is . Note that if , then three nonzero values exist in one period of (only one zero value of can be obtained within a period), meaning that the total number of signal values that can be canceled out is . The number of remaining nonzero values of is .

The uniqueness condition is that , with and for . The same analysis is done for all , using (58)-(59). The value of satisfying (58) for all produces the Theorem 3 statement.

3.5. Proof of Corollary 4

This corollary follows from Theorem 3 by neglecting the probability that several DFT coefficients can be canceled out by the predetermined values of the missing samples. Then, instead of in (58), we have just one sample, and is used for nonuniqueness instead of the uniqueness condition

4. The Worst Case Signal Form

For the worst case, defined by Theorem 1, the set of possible amplitudes and phases of the signal components is related to the missing sample positions. For the missing sample positions used in the minimization process, the worst case is when the period of the transformation matrix is such that it repeats immediately after samples, in the sense described in the proof of Theorem 1. Then the minimal number of nonzero values in the DFT is . It means that the variables can be determined such that the values of within are zero-valued and only one is nonzero. In the worst case, this scenario repeats immediately after values (it repeats times). The maximal number of zero values in is then . The number of nonzero values (the sparsity) of is .

Now we will investigate the form of the signal DFT such that the Theorem 1 sparsity bound holds with the equality sign. The worst case assumes the existence of the maximal number of zeros in and one nonzero value of in each period . It also assumes that all signal components can be canceled out by this nonzero value of and the corresponding periodic nonzero values . The maximal number of zeros in within one period defines the values of all nonzero variables in the time domain (with a possibility to cancel out one signal component of within the considered period).

Let us consider the subsets of equations defined by (22). The first subset is written for the frequencies , the second for , and so on, until the last one for . Assume that the signal DFT coefficient (the one that we want to use, along with the zero values of , to calculate the variables ) is within the subset of equations for . Then the solution of will produce the maximal sparsity for this frequency range; that is, for all considered frequencies . The solution for the missing sample values is . The values of obtained from this system do not change for other nonzero signal DFT values , . With the previous system of equations and its solution for the missing samples, we have used all degrees of freedom of the -dimensional variable . More zero values in the DFT of the resulting signal (lower sparsity of this signal) can be obtained only if the remaining signal DFT values , , are canceled out, by chance. In the worst case, assumed by Theorem 1, all remaining nonzero DFT coefficients should be canceled out with the already determined missing sample values . Since the transformation matrix is periodic, this will happen only if all remaining signal DFT coefficients assume very specific positions and specific values , since the periodicity is established with respect to .

For a given set of missing sample positions, the event that all components of a measured signal assume the specific positions, with the specific amplitudes and phases (related to the missing sample positions) defined by (64), is a zero-probability event.

4.1. Group Delay and Missing Sample Positions

Consider a signal , periodic with period and defined for as , where is the remainder after is divided by . Note that the worst case requires that all missing sample positions are at positions being multiples of . It means that the remainder after division of by is a constant, denoted by . The DFT of the considered signal is

It is interesting to note that the signal DFT values , defined with (64), are obtained as a subset of values from ; that is,

In the worst case, the DFT values of the signal should be samples of the full DFT of a periodic signal whose group delay coincides with the missing sample positions , since in this case. It means that if the missing samples have a periodic structure, then the signal values should follow this structure as well.
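A small numerical illustration of this group-delay connection (all values chosen for illustration): an impulse train with spacing 4 shifted by r has a DFT supported on every fourth bin, with a linear phase whose group delay equals the shift. The shift is kept smaller than half the spacing so that the phase unwrapping across the nonzero bins is unambiguous; for larger shifts the estimate is determined only modulo the spacing.

import numpy as np

N, s, r = 16, 4, 1                 # impulse spacing s, shift r (r < s/2 for clarity)
x = np.zeros(N)
x[r::s] = 1.0                      # impulses at n = r, r + s, r + 2s, ...
X = np.fft.fft(x)
nz = np.flatnonzero(np.abs(X) > 1e-10)                  # nonzero bins: multiples of N/s
phase = np.unwrap(np.angle(X[nz]))
gd = -np.diff(phase) * N / (2 * np.pi * np.diff(nz))    # group delay estimate
print(nz, np.round(gd, 10))        # constant group delay equal to the shift r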

Example. For the signal with and missing samples at , the limit for the sparsity (when we can claim that the reconstructed sparse signal is unique, assuming that all signal amplitudes may be related to the missing sample positions) is . In this example, we will show what properties a signal must satisfy in the limit case so that the solution is not unique. To simplify the notation, assume that one DFT value of the reconstructed signal is .

The limit of sparsity is obtained in the first example with and . As explained, it corresponds to the missing sample positions and . It means that the values of the other missing samples will be adjusted to their correct zero values and only and will assume nonzero values. In this case, the set of missing sample variables reduces to with . The DFT of such a signal is equal to . In the worst case, should have the maximal possible number of zeros. We conclude that either or should hold; otherwise the sparsity of would be 32. In addition, should cancel out all signal components, including the assumed . Since would produce , it would not be able to cancel . Therefore we must use , with . It means that and

In order to cancel all nonzero values of , they must be located at odd positions (where is nonzero) and must be of opposite sign and equal amplitude to the corresponding (determined) values of , resulting in

In this case the sparsity of is 8, the same as the sparsity of . The two solutions of our minimization problem are the signal and , where . Both of these signals have the same sparsity and satisfy the same set of available samples.

However, if the sampled signal is not of the very specific form (74), then the solution of sparsity is unique for the given set of available samples. In that case, is not able to cancel all 8 DFT values of the signal , and the sparsity of is 8 only for , producing a unique solution.

The signal is . It is periodic with period . The group delay of this signal is , with period . Therefore, within , the group delays and of correspond to the missing sample positions. The signal must have the form , with corresponding to , producing .

5. Conclusion

The reconstruction of a sparse signal using the recently proposed gradient-based method is performed by considering the missing samples as variables. Theorems for the uniqueness of the solution obtained by varying the missing samples, in the cases of an arbitrary signal and of an already reconstructed signal, are stated and proved. The computational complexity of the proposed uniqueness checks is low. The theory is illustrated on numerical and statistical examples.

The proposed approach can be extended to other signal transforms, including nonredundant bases (dictionaries). One of the possible redundant bases closely related to the presented DFT is the short-time Fourier transform with overlapping windows [17, 18].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.