Abstract

Compressed sensing (CS) is a theory that exploits the sparsity of the original signal in signal sampling and coding: by solving an optimization problem, the original sparse signal can be reconstructed accurately. In this paper, a new Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm is presented, built on the tree model in the wavelet domain. The algorithm converts the wavelet tree structure into corresponding relations among candidate atoms without any prior information about the signal sparsity. Thus, the atom selection process becomes more structured and the search space is narrowed. Moreover, through the backtracking process, the reliability of previously chosen atoms is checked and unreliable atoms are deleted at each iteration, which ultimately leads to an accurate reconstruction of the signal. Simulation results show the proposed algorithm's superior performance compared with several other OMP-type algorithms.

1. Introduction

Compressive sensing (CS) [1, 2] aims to recover a sparse or compressible signal from a small number of measurements with high probability. It breaks the traditional rule of the Nyquist sampling theorem, which states that a signal's information is preserved if it is uniformly sampled at a rate at least twice its Fourier bandwidth. With this state-of-the-art signal compression and processing theory, the signal sampling frequency, the processing time, and the costs of data storage and transmission can be greatly reduced.

For a given orthogonal basis $\Psi = [\psi_1, \psi_2, \ldots, \psi_N]$, a signal $x \in \mathbb{R}^N$ can be represented in terms of its coefficient vector $s$ as

$$x = \Psi s. \quad (1)$$

The corresponding inverse transformation is $s = \Psi^T x$, with $\Psi^T \Psi = \Psi \Psi^T = I$. Here, $I$ is the identity matrix. We say that $x$ is $K$-sparse under the orthogonal basis $\Psi$ if only $K$ coefficients of $s$ are nonzero.

Usually, the signal $x$ itself is not sparse, but its coefficient vector $s$ can be considered sparse or compressible after some transformation, such as the wavelet transform.

Suppose that a matrix $\Phi \in \mathbb{R}^{M \times N}$ represents the measurement process. Sampling is accomplished by collecting a measurement vector $y = \Phi x$ of dimension $M$ with $M < N$. Then (1) becomes

$$y = \Phi \Psi s = \Theta s, \quad (2)$$

where $\Theta = \Phi \Psi$ is called the CS measurement matrix and its columns are called atoms. The matrix $\Theta$ is rank deficient and hence loses information in general. The CS reconstruction problem wishes to recover the coefficient vector $s$ from the set of linear measurements $y$. Since $M < N$, the reconstruction of $s$ from $y$ is generally ill-posed.
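To make the measurement model concrete, the following Python sketch (with illustrative sizes $N$, $M$, $K$ that are our own choices, not values from the paper) builds a $K$-sparse signal and compresses it into $M < N$ Gaussian measurements.

```python
# A minimal sketch of the CS measurement model y = Phi @ x.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                      # ambient dim, measurements, sparsity

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian measurement matrix
y = Phi @ x                                      # M linear measurements

print(y.shape)  # (64,) -- far fewer numbers than the N = 256 samples of x
```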

The two major algorithmic approaches to sparse recovery are methods based on $\ell_1$ minimization and iterative methods (matching pursuits). We now briefly describe these methods.

1.1. $\ell_1$ Minimization

The sparse recovery problem in this approach can be stated as finding the sparsest coefficient vector $s$ consistent with the given measurements $y$:

$$\min_s \|s\|_0 \quad \text{subject to} \quad \Theta s = y. \quad (3)$$

Donoho and his associates advocated the principle that for some measurement matrices $\Theta$, the highly nonconvex combinatorial optimization problem (3) should be equivalent to its convex relaxation

$$\min_s \|s\|_1 \quad \text{subject to} \quad \Theta s = y. \quad (4)$$

Reference [3] showed that if the measurement matrix satisfies the restricted isometry property (RIP)

$$(1 - \delta_K)\|s\|_2^2 \le \|\Theta s\|_2^2 \le (1 + \delta_K)\|s\|_2^2 \quad (5)$$

for all $K$-sparse vectors $s$, then a $K$-sparse signal can be recovered exactly; the smallest such $\delta_K$ is called the restricted isometry constant of $\Theta$. It has been shown that $\ell_1$ minimization can recover a sparse signal exactly under various conditions on the restricted isometry constants; see [4, 5]. The convex problem (4) can then be solved using methods of convex and even linear programming.
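As an illustration of the last point, problem (4) can be cast as a linear program by splitting $s$ into its positive and negative parts, $s = u - v$ with $u, v \ge 0$. The sketch below uses SciPy's general-purpose LP solver; it is a minimal example of this reduction, not the solver used in the paper.

```python
# Basis pursuit: min ||s||_1  s.t.  Theta @ s = y, solved as a linear program.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    M, N = Theta.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([Theta, -Theta])  # equality constraint: Theta @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v                       # recovered coefficient vector s
```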

1.2. Orthogonal Matching Pursuit (OMP)

An alternative approach to sparse recovery is via iterative algorithms, which find the support $\Lambda$ of the $K$-sparse signal progressively. Once $\Lambda$ is found correctly, it is easy to compute the signal from its measurements as $s_\Lambda = \Theta_\Lambda^\dagger y$, where $\Theta_\Lambda$ denotes the measurement matrix restricted to the columns indexed by $\Lambda$ and $\dagger$ denotes the pseudoinverse.

A basic iterative algorithm is Orthogonal Matching Pursuit (OMP) [6]. OMP recovers the support of $s$, one index at a time, in $K$ steps. Under the hypothetical assumption that $\Theta$ is an isometry, that is, the columns of $\Theta$ are orthonormal, the signal could be exactly recovered from its measurements as $s = \Theta^T y$.

The problem is that the matrix $\Theta$ is never an isometry in the interesting range where the number of measurements $M$ is smaller than the ambient dimension $N$. Even though the matrix is not an isometry, one can still use the notion of coherence for the recovery of sparse signals. In that setting, greedy algorithms are used with incoherent dictionaries to recover such signals; see [7, 8]. In our setting, for the commonly used random matrices, one expects the columns to be approximately orthogonal and the observation vector $u = \Theta^T y$ to be a good approximation to the original signal $s$.
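For reference, a minimal OMP implementation following the description above (greedy atom selection plus a least squares re-fit on the current support) might look as follows; the function name and stopping rule are our own choices.

```python
# A minimal OMP sketch: at each step, pick the column of Theta most correlated
# with the residual, then re-solve least squares on the selected support.
import numpy as np

def omp(Theta, y, k_max, tol=1e-6):
    M, N = Theta.shape
    support, r = [], y.copy()
    for _ in range(k_max):
        j = int(np.argmax(np.abs(Theta.T @ r)))   # best-matching atom
        if j not in support:
            support.append(j)
        s_hat, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        r = y - Theta[:, support] @ s_hat          # orthogonalized residual
        if np.linalg.norm(r) < tol:
            break
    s = np.zeros(N)
    s[support] = s_hat
    return s
```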

Tropp and Gilbert [6] analyzed the performance of OMP for Gaussian measurement matrices $\Theta$; a similar result holds for general subgaussian matrices. They proved that, for every fixed $K$-sparse $N$-dimensional signal $s$ and a random Gaussian measurement matrix $\Theta$, OMP recovers (the support of) $s$ from the measurements $y = \Theta s$ correctly with high probability, provided the number of measurements is $M = O(K \log N)$.

The $\ell_1$-minimization method has the strongest known guarantees of sparse recovery. Once the measurement matrix satisfies the Restricted Isometry Condition, this method works correctly for all sparse signals $s$. $\ell_1$-minimization is based on linear programming, which has its advantages and disadvantages. One may think of linear programming as a black box, so that any development of fast solvers will reduce the running time of the sparse recovery method. On the other hand, it is not very clear what this running time is, as there is no strongly polynomial-time algorithm for linear programming yet. All known solvers take time polynomial not only in the dimension of the program but also in certain condition numbers of the program. While for some classes of random matrices the expected running time of linear programming solvers can be bounded, estimating condition numbers is hard for specific matrices. For example, there is no result yet showing that the Restricted Isometry Condition implies that the condition numbers of the corresponding linear program are polynomial in $N$.

OMP is quite fast, both theoretically and experimentally. It makes $K$ iterations, where each iteration amounts to a multiplication by the matrix $\Theta^T$ (computing the observation vector $u$) and solving a least squares problem in dimension at most $K$. This yields a strongly polynomial running time. In practice, OMP is observed to run faster than, and is easier to implement than, $\ell_1$-minimization; for more details, see [6]. OMP is also quite transparent: at each iteration, it selects a new coordinate from the support of the signal in a very specific and natural way. In contrast, the known $\ell_1$-minimization solvers, such as the simplex method and interior point methods, compute a path toward the solution. However, the geometry of $\ell_1$-minimization is clear, whereas the analysis of greedy algorithms can be difficult simply because they are iterative.

On the other hand, OMP has weaker guarantees of exact recovery. Unlike $\ell_1$-minimization, the guarantees of OMP are nonuniform: for each fixed sparse signal $s$, and not for all signals simultaneously, the algorithm performs correctly with high probability. Rauhut has shown that uniform guarantees for OMP are impossible for natural random measurement matrices [9].

Moreover, OMP’s condition on measurement matrices given in [6] is more restrictive than the Restricted Isometry Condition. In particular, it is not known whether OMP succeeds in the important class of partial Fourier measurement matrices.

These open problems about OMP, first stated in [6] and often reverberated in the compressed sensing community, motivated recent work on modified OMP algorithms, such as model-based compressive sensing [10], Tree-Based Orthogonal Matching Pursuit [11], Compressive Sampling Matching Pursuit (CoSaMP) [12], Regularized Orthogonal Matching Pursuit (ROMP) [13], and Backtracking-Based Adaptive Orthogonal Matching Pursuit (BAOMP) [14]. ROMP and CoSaMP require the sparsity level $K$ as an input parameter. However, in most practical applications, this information may not be known before reconstruction. Although the sparsity level is not required by OMP and BAOMP, these algorithms do not use the characteristics of the sparse representation, such as the tree structure of the wavelet transform. In this paper, a new Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm is presented based on the tree model in the wavelet domain. Our algorithm converts the wavelet tree structure into corresponding relations among candidate atoms without prior information about the signal sparsity level. Also, by combining it with the backtracking strategy, unreliable atoms can be deleted. Compared with the OMP, ROMP, and BAOMP algorithms, the atom selection process becomes more traceable and structured.

2. Tree-Based Backtracking Orthogonal Matching Pursuit (TBOMP) Algorithm

In this section, we first review the wavelet tree structure; then, the proposed TBOMP algorithm is presented in detail.

2.1. Wavelet Tree Structure

Consider a signal $x$ of length $N = 2^L$. After an $L$-level wavelet transform, the set of $K$-tree sparse signals is defined as

$$\mathcal{T}_K = \Big\{ x = v_0\,\phi + \sum_{l=0}^{L-1} \sum_{j=0}^{2^l-1} w_{l,j}\,\psi_{l,j} :\ w|_{\Omega^c} = 0,\ |\Omega| = K,\ \Omega \text{ forms a connected subtree} \Big\}, \quad (6)$$

where $\phi$ is the scaling function and $\psi_{l,j}$ is the wavelet function at scale $l$ and offset $j$. The wavelet transform consists of the scaling coefficient $v_0$ and the wavelet coefficients $w_{l,j}$ at scale $l$, $0 \le l \le L-1$, and position $j$, $0 \le j \le 2^l - 1$.

Suppose that $s$ is the vector of the scaling and wavelet coefficients of $x$ with maximum decomposition level $L$, and that $\Omega$ is a set of wavelet coefficient indices that forms a connected subtree [10]. The set $\Omega$ defines a subspace of signals whose support is contained in $\Omega$, which means that all wavelet coefficients outside $\Omega$ are approximately zero. The nested structure of wavelet coefficients creates a parent/child relationship between wavelet coefficients at different scales. We say that $w_{l-1,\lfloor j/2 \rfloor}$ (where $\lfloor \cdot \rfloor$ denotes rounding down) is the parent of $w_{l,j}$. Also, $w_{l+1,2j}$ and $w_{l+1,2j+1}$ are the children of $w_{l,j}$. These relations can be expressed graphically by the wavelet coefficient tree in Figure 2(a). In terms of indices, the offset of a parent node at one level is half (rounded down) the offset of its child nodes at the next level.
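In code, these parent/child relations reduce to simple index arithmetic. The following hypothetical helpers (our notation, not the paper's) express them in (scale, offset) coordinates.

```python
# Helpers for the parent/child index relations of wavelet coefficients w[l][j].
def parent(l, j):
    """Parent of w[l][j] is w[l-1][j // 2] (offset rounded down)."""
    return (l - 1, j // 2) if l > 0 else None

def children(l, j, L):
    """Children of w[l][j] are w[l+1][2j] and w[l+1][2j+1], if level l+1 exists."""
    return [(l + 1, 2 * j), (l + 1, 2 * j + 1)] if l + 1 < L else []

# Example: the node at scale 2, offset 5 has parent (1, 2) and
# children (3, 10) and (3, 11) in a tree with L >= 4 levels.
```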

A kind of tree structure, the greedy tree, was proposed in [15]. For the greedy tree, if a coefficient is significant, then its children and grandchildren are likely significant as well [11]. Figure 1 depicts two cases of greedy tree approximation. The number at each node is the wavelet coefficient modulus; unlabeled nodes denote zeros. In the first case, the wavelet coefficients decay monotonically along the tree branches toward the leaves. Suppose that the wavelet tree contains $N$ wavelet coefficients. The $K$-term greedy tree approximation (here we assume $K < N$) proceeds in three steps: (1) find the $k$ largest wavelet coefficient terms, starting with a small $k \le K$; (2) form the smallest connected rooted subtree $\Omega$ that contains all of these coefficients; (3) increase $k$ until $|\Omega| = K$.

Initializing $k = 2$, the two coefficients 10 and 8 will be found, and they form a minimal connected subtree $\Omega$. Gradually increasing $k$ until $|\Omega| = K = 4$, the greedy tree approximation forms the connected rooted subtree 10-8-4-3 with 4 nodes, which maximizes the sum of the wavelet coefficient moduli in the subtree. This process is shown in Figure 1(a); the approximation error is small. Another case is shown in Figure 1(b): when the wavelet coefficients do not decay monotonically along the tree branches toward the leaves, an isolated significant coefficient far from the root will be selected, together with all of its ancestor coefficients. These ancestor coefficients may be very small, which will increase the approximation error. For example, initializing $k = 2$, the two coefficients 10 and 8 will again be found, but the resulting subtree is 10-0-0-8 with $K = 4$. Obviously, the error is large.
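The three steps above can be sketched directly in code. The version below assumes a heap-style layout in which node $i$ has parent $i\,//\,2$ and the root is node 1; this layout is our illustrative choice, not the paper's exact data structure.

```python
# Greedy K-term tree approximation: take the k largest coefficients, connect
# them to the root, and grow k until the subtree has K nodes.
import numpy as np

def greedy_tree(coeffs, K):
    """coeffs[1..N] are wavelet coefficient moduli; coeffs[0] is unused."""
    order = np.argsort(-np.abs(coeffs[1:])) + 1   # node indices, largest first
    k = 1
    while True:
        subtree = set()
        for node in order[:k]:                    # step 1: k largest terms
            while node >= 1 and node not in subtree:
                subtree.add(int(node))            # step 2: connect to the root
                node //= 2
        if len(subtree) >= K or k == len(order):  # step 3: grow k until |Omega| = K
            return subtree
        k += 1
```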

We can see that the process of greedy tree approximation is simple, but when the tree includes isolated large coefficients far from the tree root, the approximation error increases. Thus, backtracking is introduced to delete the wrong nodes selected by the greedy tree. This will be illustrated in Section 2.2.

2.2. Tree-Based Backtracking Orthogonal Matching Pursuit (TBOMP) Algorithm

Our proposed Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) is as follows.

Algorithm 1 (TBOMP).
Symbol Description.
$w$ — wavelet high-frequency coefficient vector;
$\hat{w}$ — reconstructed wavelet high-frequency coefficient vector;
$\Phi$ — measurement matrix, $\Phi \in \mathbb{R}^{M \times N}$;
$\phi_j$ — the $j$th column vector of $\Phi$, $j \in \Gamma$;
$\mu_1, \mu_2$ — threshold parameters, $0 < \mu_1, \mu_2 \le 1$;
$\Gamma = \{1, 2, \ldots, N\}$ — the index set of all columns of the matrix $\Phi$;
$t_{\max}$ — number of maximum iterations allowed;
$D_t$ — atom-deleting set in the $t$th iteration;
$C_t$ — candidate set of the root atoms in the $t$th iteration;
$F_t$ — family set that consists of the subtrees corresponding to the root nodes in $C_t$.
Initialization. $r_0 = y$ (initial residual), $\Lambda_0 = \emptyset$, $\hat{w} = 0$, and $t = 1$.
Loop. (1) Initial selection: select the candidate set $C_t$ of root atoms whose absolute correlations with the residual satisfy
$$C_t = \big\{ j \in \Gamma : |\langle \phi_j, r_{t-1} \rangle| \ge \mu_1 \max_{i \in \Gamma} |\langle \phi_i, r_{t-1} \rangle| \big\}. \quad (7)$$
(2) According to the 2-times relationship of wavelet tree node indices, find the wavelet tree rooted at each node in $C_t$. Then the family set $F_t$, consisting of the atoms indexed by $C_t$ and all of their families, can be found. For example, assume that $C_t = \{j_1, j_2\}$; then the wavelet subtrees rooted at $j_1$ and $j_2$ will be found, respectively, in this step. The index sets of these trees are denoted as $T_{j_1}$ and $T_{j_2}$.
(3) Compute $\hat{w}_{T_j} = \arg\min_w \|y - \Phi_{T_j} w\|_2$ for each subtree index set $T_j$, $j \in C_t$.
(4) Find the optimal subtree $T_t^{*}$ minimizing the residual as follows:
$$T_t^{*} = \arg\min_{T_j,\, j \in C_t} \|y - \Phi_{T_j} \hat{w}_{T_j}\|_2. \quad (8)$$
(5) Select the atom-deleting index set $D_t$ satisfying
$$D_t = \big\{ j \in \Lambda_{t-1} \cup T_t^{*} : |\hat{w}_j| \le \mu_2 \max_i |\hat{w}_i| \big\}. \quad (9)$$
(6) Set $\Lambda_t = (\Lambda_{t-1} \cup T_t^{*}) \setminus D_t$, recompute $\hat{w}_{\Lambda_t} = \arg\min_w \|y - \Phi_{\Lambda_t} w\|_2$, and update the residual as follows:
$$r_t = y - \Phi_{\Lambda_t} \hat{w}_{\Lambda_t}. \quad (10)$$
(7) If $\|r_t\|_2 \le \varepsilon$ or $t \ge t_{\max}$, quit the iteration; otherwise, set $t = t + 1$ and go to step (1).
End Loop.

Output. The estimated support set $\Lambda_t$ and the nonzero values $\hat{w}_{\Lambda_t}$.

As seen in the above algorithm, we combined the characteristics of the tree structure with the BAOMP algorithm. In the first step, TBOMP selects the candidate set $C_t$ of atoms whose correlations with the residual are not smaller than $\mu_1$ times the maximum correlation. Here, the constant $\mu_1$ adaptively decides how many atoms are chosen at each iteration. Then the atoms corresponding to the elements of $C_t$ are set as the root nodes of subtrees. As mentioned in Section 2.1, due to the 2-times relationship between the indices of parent and child nodes, the subtree of each atom corresponding to an index in $C_t$ can be found, forming the family set $F_t$, which consists of the indices of the family atoms of each subtree. In the third step, the least squares method is applied to obtain the reconstructed wavelet high-frequency coefficients corresponding to the atoms indexed by each subtree. Then the optimal subtree indexed by $T_t^{*}$ is selected according to step (4). At this point, there may exist insignificant atoms in $T_t^{*}$, because we applied only the simple 2-times relationship in the subtree search. Thus, the backtracking deletion method is introduced in the algorithm to identify the true support set of $w$. The backtracking deletion set $D_t$ consists of the indices corresponding to all the reconstructed coefficients satisfying (9). Then the index set is updated as $\Lambda_t = (\Lambda_{t-1} \cup T_t^{*}) \setminus D_t$ at this iteration. Using the atoms corresponding to the indices in the set $\Lambda_t$, the reconstruction coefficients can be computed. Finally, the residual is updated by (10) and the next iteration begins. If $\|r_t\|_2 \le \varepsilon$ or $t \ge t_{\max}$, the iteration stops. A compact code sketch of this loop is given below.
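The following condensed Python sketch mirrors the seven steps of Algorithm 1 under the notation assumed above ($\Phi$, $\mu_1$, $\mu_2$, heap-style child indices $2j$ and $2j+1$). It omits edge-case handling and is illustrative rather than the authors' reference implementation.

```python
# A condensed TBOMP sketch: threshold-selected root atoms, subtree expansion,
# per-subtree least squares, backtracking deletion, residual update.
import numpy as np

def subtree_indices(root, N):
    """All descendants of `root` under the 2-times index relation (1-based)."""
    nodes, stack = [], [root]
    while stack:
        j = stack.pop()
        if j <= N:
            nodes.append(j)
            stack += [2 * j, 2 * j + 1]
    return nodes

def tbomp(Phi, y, mu1=0.6, mu2=0.3, t_max=30, tol=1e-6):
    M, N = Phi.shape
    support, r = set(), y.copy()
    for _ in range(t_max):
        corr = np.abs(Phi.T @ r)
        roots = np.flatnonzero(corr >= mu1 * corr.max()) + 1  # step 1 (1-based)
        best_T, best_w, best_res = None, None, np.inf
        for root in roots:                                    # steps 2-4
            T = sorted(set(subtree_indices(int(root), N)) | support)
            cols = [j - 1 for j in T]
            w, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            res = np.linalg.norm(y - Phi[:, cols] @ w)
            if res < best_res:
                best_T, best_w, best_res = T, w, res
        # step 5: backtracking -- delete atoms with small reconstructed values
        keep = [j for j, wj in zip(best_T, best_w)
                if abs(wj) > mu2 * np.abs(best_w).max()]
        support = set(keep)                                   # step 6
        cols = [j - 1 for j in sorted(support)]
        w, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        r = y - Phi[:, cols] @ w                              # residual update
        if np.linalg.norm(r) < tol:                           # step 7
            break
    w_hat = np.zeros(N)
    w_hat[cols] = w
    return w_hat
```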

In TBOMP, the process of tree node selection is shown in Figure 2; the first step of the algorithm is to select the candidate set by (7). For example, suppose that a root node, say node $j$, was chosen at the first iteration. The nodes of subtree ② rooted at $j$ and the family nodes rooted at $j$ are the significant coefficients that need to be found. According to the 2-times relationship of wavelet tree node indices and Figure 2(b), nodes $2j$ and $4j$ are the child and grandchild of node $j$. Thus, subtree ① rooted at node $j$ will be found in the first iteration.

Now we assume that subtree ① is the optimal tree selected in this iteration. At the end of the iteration, the backtracking step will remove the wrongly selected node according to step (5) of the TBOMP algorithm described above. In the remaining iterations, the correct node will be chosen as the child node of its parent in the tree. Ultimately, subtree ② will be found accurately. Analogously, the search process for a subtree rooted at any other candidate node is the same, and it can proceed simultaneously.

These characteristics of the tree structure provide a new way to design reconstruction algorithms. When the signal is sparsely represented by the wavelet transform, the tree structure of the wavelet coefficients provides a clue for the selection of atoms in the reconstruction algorithm, which greatly improves the reliability of atom selection.

The coefficients of a wavelet decomposition include low-frequency (scaling) coefficients and high-frequency (wavelet) coefficients. The more levels of wavelet decomposition, the fewer low-frequency coefficients there are, and the more important information is carried by the high-frequency coefficients. Compared with the number of high-frequency coefficients, the number of low-frequency coefficients is much smaller if the decomposition level is large enough. Since the low-frequency coefficients play an important role in the wavelet reconstruction, in our proposed algorithm only the high-frequency coefficients are measured by the measurement matrix. For the reconstruction, we combine the reconstructed high-frequency coefficients with the unprocessed low-frequency coefficients. Then the inverse wavelet transform is applied to obtain a reconstruction $\hat{x}$ of the original signal $x$.
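Assuming the PyWavelets package for the transforms, the measurement scheme just described can be sketched as follows; the function `recover` stands in for any CS solver (for instance, the TBOMP sketch above), and all names here are our own.

```python
# Measure only the concatenated high-frequency (detail) coefficients; the
# low-frequency coefficients bypass the measurement and are reused directly.
import numpy as np
import pywt

def cs_wavelet_pipeline(x, recover, M, wavelet="sym8", level=5, seed=0):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    cA, details = coeffs[0], coeffs[1:]
    lengths = [len(d) for d in details]
    w = np.concatenate(details)                 # high-frequency coefficients
    Phi = np.random.default_rng(seed).standard_normal((M, len(w))) / np.sqrt(M)
    y = Phi @ w                                 # measure only the details
    w_hat = recover(Phi, y)                     # e.g., w_hat = tbomp(Phi, y)
    # reassemble: untouched cA + recovered details, then inverse transform
    split = np.split(w_hat, np.cumsum(lengths)[:-1])
    return pywt.waverec([cA] + list(split), wavelet)
```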

3. Simulation Results

In this section, several experiments are given for the TBOMP algorithm. In the first experiment, the original signal is a one-dimensional Blocks signal of length $N$. It was recovered from $M$ measurements by using a Gaussian random measurement matrix. The wavelet decomposition level is 4 and the wavelet function is Db1. Figure 3 shows the reconstruction result after 7 iterations of the TBOMP algorithm.

In the first iteration of the TBOMP algorithm, according to the parent-child relations of the wavelet tree, some unreliable atoms are chosen, which leads to a wrong intermediate reconstruction, as marked by the circles in Figure 4(a). Then, by the backtracking deletion method, the wrongly selected atoms are removed. After the second and third iterations, some atoms are still not found. After the 7th iteration, the reconstruction result (Figure 3(c)) of the TBOMP algorithm is exactly the same as the original wavelet coefficients shown in Figure 3(b).

Similar results can be obtained for other signals. Reconstruction results for the Doppler and Heavysine signals using our TBOMP algorithm are shown in Figure 5. Here, we compared our reconstruction results with those of the classical OMP algorithm.

In the next experiment, we compare TBOMP with some popular algorithms, namely OMP, ROMP, and BAOMP. Here, only the high-frequency coefficients are measured; the low-frequency coefficients are not processed [16]. The wavelet function is chosen as "sym8" in MATLAB, and the decomposition level is 5 for all four algorithms. Define $\mathrm{SNR} = 20\log_{10}\big(\mathrm{std}(x)/\mathrm{std}(x - \hat{x})\big)$, where std denotes the standard deviation. Because of the randomness of the sensing matrix, the numerical results differ from run to run; hereafter, we use the same sensing matrix within one experiment for all four algorithms.
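For completeness, this SNR figure of merit is a one-line helper (assuming std denotes the sample standard deviation, as stated above):

```python
# SNR in dB between an original signal x and its reconstruction x_hat.
import numpy as np

def snr_db(x, x_hat):
    return 20 * np.log10(np.std(x) / np.std(x - x_hat))
```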

We use the Bumps signal of length $N = 2048$ and change the value of $M$ simultaneously for all algorithms in order to guarantee the same experimental conditions. After 5 levels of wavelet decomposition, there are 64 low-pass coefficients in the 5th decomposition level and 1984 high-pass coefficients in total over the 5 decomposition levels. In order to obtain a fair comparison, in Figure 6 the number of measurements used in all four algorithms is the same. For the sake of simplicity, when we mention $M$ measurements for TBOMP, we mean that $M$ is the sum of the number of low-pass coefficients and the number of measurements of the high-pass coefficients.

At a compression ratio of about 1/4, the reconstruction results of the Bumps signal with TBOMP and BAOMP are shown in Figure 6. The SNR of TBOMP is about 1.8 dB higher than that of BAOMP.

Since ROMP requires the sparsity level to be known for exact recovery, in the experiments the best sparsity value of the wavelet coefficients was estimated through repeated trials and then used in the simulations. Figure 7 shows the SNR comparison results for different values of $M$. The values of $M$ are 200, 500, 800, 1100, and 1400, respectively. For each $M$, we conduct 10 independent trials and calculate the average SNR. It is obvious that the reconstruction result of the TBOMP algorithm is superior to the others.

4. Conclusion

Sparse reconstruction algorithm design is one of the three core problems of CS (signal sparse representation, measurement matrix design, and reconstruction algorithm design). Existing sparse reconstruction algorithms such as ROMP and CoSaMP employ the sparsity level as prior knowledge for exact recovery, which limits their use in realistic applications. On the other hand, although the sparsity level is not required by the OMP and BAOMP algorithms, they do not use the characteristics of a special sparse basis to improve performance. In this paper, a new Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm was proposed based on the tree model in the wavelet domain. Our algorithm converts the wavelet tree structure into corresponding relations among candidate atoms without any prior information about the signal sparsity level. Moreover, unreliable atoms can be deleted through the backtracking step. Compared with other compressive sensing algorithms (OMP, ROMP, and BAOMP), the signal reconstruction results of TBOMP outperform those of the above-mentioned algorithms.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grants nos. 61272028, 61104078, 61073079, and 61273274); the Fundamental Research Funds for the Central Universities of China (Grant no. 2013JBZ003); the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grants nos. 20110162120045 and 20120009110008); the Program for New Century Excellent Talents in University (Grant no. NCET-12-0768); the National Key Technology R&D Program of China (Grant no. 2012BAH01F03); the National Basic Research (973) Program of China (Grant no. 2011CB302203); and the Foundation of Key Laboratory of System Control and Information Processing (Grant no. SCIP2011009).