Research Article  Open Access
Wavelet Compressive Sampling Signal Reconstruction Using an Upside-Down Tree Structure
Abstract
This paper proposes an upside-down tree-based orthogonal matching pursuit (UDTOMP) method for compressive sampling signal reconstruction in the wavelet domain. An upside-down tree for the wavelet coefficients of the signal is constructed, and an improved version of orthogonal matching pursuit is presented. The proposed algorithm reconstructs the compressively sampled signal by exploiting the upside-down tree structure of its wavelet coefficients in addition to its sparsity in the wavelet basis. Compared with the conventional greedy pursuit algorithms, orthogonal matching pursuit (OMP) and tree-based orthogonal matching pursuit (TOMP), the signal-to-noise ratio (SNR) achieved by UDTOMP is significantly improved.
1. Introduction
Compressive sampling (CS) [1, 2] is an emerging signal processing theory that has received considerable attention. CS exploits the sparse structure of a signal, enabling its reconstruction from a small number of random samples. A variety of recovery algorithms have been proposed to reconstruct sparse signals. Generally, they fall into two classes: convex optimization and greedy pursuit algorithms. Although convex optimization methods, such as basis pursuit (BP) [3], are powerful for sparse signal reconstruction, they can be computationally burdensome. Considering computational complexity and ease of implementation, greedy pursuit algorithms, especially matching pursuit (MP) [4] and orthogonal matching pursuit (OMP) [5], are attractive for engineering problems.
The MP algorithm is computationally efficient and often performs well; however, when the dictionary is not an orthogonal basis, MP cannot find the best approximation of the original signal [6]. As an alternative, OMP orthogonalizes each selected column vector associated with the maximum projection, and it does not suffer from this flaw. In the CS field, although there is much work on the theoretical analysis and practical implementation of OMP, these recovery algorithms are generic in the sense that they do not exploit any particular structure in the signal beyond its sparsity in some basis. However, for some signals there is additional a priori information that can be exploited to improve recovery performance. For example, piecewise smooth signals, which are common in practice, are not only sparse in the wavelet domain, but their significant coefficients also form a connected subtree.
In this paper, we present an improved OMP signal recovery algorithm that employs an upside-down tree structure of the signal in the wavelet domain (we refer to this tree-based algorithm as UDTOMP). The proposed algorithm is evaluated using the signal-to-noise ratio (SNR) as a measure of reconstruction quality. We compare the performance of UDTOMP with that of OMP and tree-based orthogonal matching pursuit (TOMP), plotting SNR as a function of the number of measurements.
2. Compressive Sampling (CS) Background
CS is a novel sampling paradigm that goes against the common wisdom in data acquisition. CS states that a sparse or compressible signal can be recovered from a small set of salient random projections. Two fundamental premises make this possible [7]: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. Sparsity expresses the idea that the "information rate" [8] of a high-dimensional signal may be much lower than the maximum frequency present in the signal. Consider an N-dimensional signal x that can be sparsely represented in an appropriate basis Ψ; the transform coefficients of x are given by s = Ψ^T x. If there are no more than K nonzero entries in s, the signal is called K-sparse. Incoherence, by contrast, means that the rows of the measurement matrix Φ have a dense representation in the basis Ψ, and Φ is independent of x.
CS also extends to so-called compressible signals that are not exactly sparse but can be closely approximated as such (e.g., the wavelet coefficients of signals and images). Compressible signals have coefficients that, when sorted in decreasing magnitude, decay according to a power law, |s_(i)| ≤ C i^(−1/p), for some positive constants C and p; the smaller the decay exponent p, the faster the decay and the better the recovery performance we can expect from CS. In practice, most man-made and natural signals are sparse or compressible in the sense that they have concise representations when expressed in an appropriate basis, such as the Fourier or wavelet basis. For a K-sparse signal x, we can take M = O(K log(N/K)) [9] linear measurements y = Φx, where Φ is the measurement matrix of size M × N, and the signal can be reconstructed by solving the following inverse problem:

min ||s||_0  subject to  y = ΦΨs,  (2.1)

where the ℓ_0 norm used here simply counts the number of nonzero entries in s.
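As a minimal illustration of this measurement model (the sizes N, M, K and the identity sparsifying basis are our own choices for the sketch, not values from the paper), the following draws a Gaussian measurement matrix and measures a K-sparse vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8           # signal length, measurements, sparsity

# K-sparse coefficient vector s (sparsifying basis taken as the identity here)
s = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
s[support] = rng.standard_normal(K)

# random Gaussian measurement matrix Phi of size M x N, with M << N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ s                    # M linear measurements of the sparse vector
print(y.shape)                 # (64,)
```

The recovery problem (2.1) then asks for the sparsest s consistent with y and Phi.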
Since M < N, (2.1) is an ill-posed problem with many possible solutions. The original signal x can be reconstructed from y by exploiting its sparse representation; that is, among all possible s that satisfy y = ΦΨs, we seek the sparsest. Solving problem (2.1) exactly is known to be NP-hard, and different suboptimal strategies, such as BP and OMP, are adopted in practice.
3. Orthogonal Matching Pursuit (OMP) Algorithm
The MP approximation is improved by orthogonalizing the directions of projection with a Gram-Schmidt procedure; the result is known as orthogonal matching pursuit (OMP). This orthogonalization was introduced in [6] and intensively studied in [5]. The improved precision of the orthogonalization comes at the cost of the higher computation of the Gram-Schmidt procedure. In CS, OMP selects column vectors from the dictionary D = ΦΨ that minimize the difference, called the residual r, between the measurement and the approximation. Specifically, starting with r_0 = y, the OMP algorithm searches for the kth atom with maximum projection as

λ_k = arg max_i |⟨r_{k−1}, d_i⟩|,  (3.1)

and it updates the residual as

r_k = y − P_k y.  (3.2)

Here, d_i is the ith column vector of D, and P_k denotes the orthogonal projection onto the subspace span{d_{λ_1}, ..., d_{λ_k}}. The OMP algorithm applies Gram-Schmidt orthogonalization to the chosen atoms for efficient computation of the projections. It can be formulated as a subset selection problem in which a minimal subset of columns of D is chosen to approximate the observation vector in the least-squares sense; OMP successively chooses an additional column of D to reduce the approximation error. Equivalently, the OMP method begins with a tentative solution s having a single nonzero entry and gradually adds nonzero entries one by one until the approximation error of y meets a predetermined criterion. The accuracy of the approximation increases with the number of iterations; however, the number of iterations must not exceed the number of random measurements, since we cannot expect to recover a signal of higher dimension than that of the measurements.
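The iteration (3.1)–(3.2) can be sketched as follows. This is our own minimal implementation, not the paper's code; it replaces the explicit Gram-Schmidt update with an equivalent least-squares solve over the selected columns:

```python
import numpy as np

def omp(D, y, K, tol=1e-10):
    """Sketch of orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual (3.1), then recompute the
    residual as y minus its orthogonal projection onto the span of the
    selected columns (3.2), here via least squares."""
    _, N = D.shape
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        # (3.1): index of the column with maximum projection onto the residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # (3.2): orthogonal projection of y onto span{d_i : i in support}
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    s_hat = np.zeros(N)
    s_hat[support] = coef
    return s_hat

# demo: recover a 5-sparse vector from 80 Gaussian measurements
rng = np.random.default_rng(1)
M, N, K = 80, 256, 5
D = rng.standard_normal((M, N)) / np.sqrt(M)
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
s_hat = omp(D, D @ s, K)
```

With this many measurements relative to the sparsity, the greedy selection recovers the support and the least-squares step recovers the values.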
4. UDTOMP Algorithm
4.1. Tree Structure in Wavelet Domain
In CS, both BP and OMP reconstruct the signal based on sparsity alone, without considering any other structure that may exist in the signal. The wavelet transform of a piecewise smooth signal (many real-world phenomena give rise to such signals), an important subclass of sparse signals, yields a sparse, structured representation: the significant coefficients tend to form a connected subtree of the wavelet coefficient tree. Figure 1 shows an example of the wavelet representation of such a signal. Only a few wavelet coefficients are significant, and they form sparse subtrees in the multiscale wavelet transform. The number of such subtrees equals the number of discontinuities in the signal.
In this work, we focus on 1-D signals; similar arguments apply to 2-D and higher-dimensional signals. Consider a signal x of length N. Given a band-pass wavelet function ψ(t) and a low-pass scaling function φ(t), the discrete wavelet transform (DWT) represents x in terms of shifted versions of φ(t) and shifted and dilated versions of ψ(t). The wavelet representation of x is given by [10]

x(t) = Σ_i u_i φ_{L,i}(t) + Σ_{j=L}^{J} Σ_i w_{j,i} ψ_{j,i}(t),  (4.1)

where j denotes the scale of analysis and scale L indicates the coarsest scale or lowest resolution of analysis. N_j is the number of coefficients at scale j, and i is the position, 0 ≤ i < N_j. In matrix notation, x has the representation x = Ψs, where Ψ is an N × N matrix containing the scaling and wavelet functions as columns, and the vector s of scaling and wavelet coefficients is as follows:

s = [u_0, ..., u_{N_L−1}, w_{L,0}, ..., w_{L,N_L−1}, ..., w_{J,0}, ..., w_{J,N_J−1}]^T.  (4.2)
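The dyadic band structure of (4.1) can be illustrated with a toy single-filter transform. The sketch below is a plain Haar DWT (not the Daubechies-8 transform used later in the experiments), chosen only because it fits in a few lines; it shows how the N samples split into one coarse scaling band and wavelet bands that double in length toward finer scales:

```python
import numpy as np

def haar_dwt(x, levels):
    """Toy orthonormal Haar DWT. Returns (scaling_coeffs, wavelet_bands),
    with wavelet bands ordered from coarsest to finest; len(x) must be
    divisible by 2**levels."""
    approx = np.asarray(x, dtype=float)
    bands = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        bands.append((even - odd) / np.sqrt(2.0))   # wavelet (detail) band
        approx = (even + odd) / np.sqrt(2.0)        # scaling (approx) band
    return approx, bands[::-1]                      # coarsest band first

x = np.arange(16, dtype=float)
u, w = haar_dwt(x, 3)
print(len(u), [len(b) for b in w])   # 2 [2, 4, 8]
```

Stacking u and the bands of w in this order yields exactly the coefficient vector s of (4.2).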
According to statistical analysis in the wavelet domain, wavelet coefficients have the following two properties [11]. Compression: the wavelet transforms of real-world signals decay exponentially as the scale becomes finer, so the coefficients tend to be sparse (as depicted in Figure 1(b)). Tree structure: the significant wavelet coefficients propagate across scales and are well organized in a tree structure (as depicted in Figure 1(c)).
This tree structure was exploited by previous CS reconstruction algorithms, namely iterative reweighted ℓ_1 norm minimization with the wavelet hidden Markov tree model (HMT-IRWL1) [12] and TOMP [13]. HMT-IRWL1 integrates the HMT model to enforce the wavelet coefficient structure during IRWL1; it updates the weight values with the state probabilities of the HMT model and therefore depends heavily on the accuracy of the model. If an accurate Markov model is available, HMT-IRWL1 can be powerful for recovering sparse signals. In practical CS applications, however, only a small set of random measurements is available, from which an accurate Markov model cannot be obtained. TOMP, an improved OMP recovery algorithm, evaluates the sums of projections along the wavelet coefficients connected by a subtree. Both HMT-IRWL1 and TOMP are based on an upright subtree, in which one parent node connects to two child nodes, as depicted in Figure 1(d).
4.2. UDTOMP Algorithm
In previous tree-based CS reconstruction algorithms, trees are assumed to be upright, meaning that significant coefficients propagate from coarser scales to finer scales. Examining the tree structure in Figure 1(c), we notice that the significant coefficients not only organize into subtrees, but those subtrees are also upside down: from finer to coarser scales, wavelet coefficients become larger, and there are more significant coefficients at coarser scales. In this paper, we propose an improved version of OMP that weights an upside-down tree (UDTOMP).
The input of UDTOMP is a dictionary D of size M × N, a measurement vector y of length M, an upward extending coefficient d, a weighting value w, and an iteration number T. UDTOMP returns a reconstructed sparse vector ŝ of length N such that y ≈ Dŝ.
UDTOMP evaluates the projection onto each single column vector and searches for the maximum projection. From the column associated with the maximum projection, UDTOMP constructs an upside-down subtree of depth d extending upward toward coarser scales; the corresponding columns are weighted in the next iteration of the search for the maximum projection.
From (4.2), we note that the wavelet transform of a signal can be divided into two parts: the scaling coefficients u and the wavelet coefficients w [13], where u contains all the scaling coefficients and w contains all the wavelet coefficients of the signal x. Since the scaling coefficients are significant, we aim to recover all of the scaling coefficients and the significant wavelet coefficients in w. In particular, we recover u and w separately and revise (2.1) as follows:

min ||w||_0  subject to  y = Φ(Ψ_u u + Ψ_w w),  (4.3)

where Ψ_u and Ψ_w denote the columns of Ψ associated with the scaling and wavelet coefficients, respectively.
The UDTOMP algorithm consists of two steps. We first limit the search space to the columns associated with the scaling coefficients and then to the columns associated with the wavelet coefficients. Let Λ_k be the set of columns of the dictionary D selected in the first k iterations, and let U_k be the Gram-Schmidt orthogonalized version of Λ_k. According to (3.1) and (3.2), OMP searches for the columns with the maximum projections, which are associated with the significant coefficients. First, we set the selected sets Λ_0 and U_0 to be empty. In step 1, we select all columns of D corresponding to the scaling coefficients, since these coefficients are significant. By the construction of the wavelet basis Ψ, the wavelet transform level (also called the scale of analysis) is deterministic, and so are the positions of the scaling coefficients (as depicted in (4.2)):

Λ_0 ← {1, 2, ..., N_L}.  (4.4)
All vectors in Λ_0 are sequentially orthogonalized using Gram-Schmidt and stored in U_0. At the end of step 1, the residual is updated by

r_0 = y − P_0 y,  (4.5)

where P_0 denotes the orthogonal projection onto span(U_0). In step 2, we focus on recovering the wavelet coefficients, which have a sparse tree structure. Step 2 is a repetition of T iterations. Let c be the candidate weight vector, which weights the next search for the maximum projection. We initialize all elements of c to 1:

c_0(i) = 1, i = 1, ..., N.  (4.6)
In the first iteration, we initialize the counter k = 1. UDTOMP then investigates the weighted projections of the current residual onto all the column vectors of D:

λ_k = arg max_i c_{k−1}(i) |⟨r_{k−1}, d_i⟩|.  (4.7)
According to the maximum projection position λ_k, we construct the kth upside-down subtree with depth d (d is an integer, greater than 1 and not greater than the level of the wavelet transform). At each finer scale, every four consecutive nodes connect to two nodes at the nearest coarser scale, which forms the upside-down subtree. For example, if w_{j,2} (as depicted in Figure 1(d)) yields the current maximum projection, then w_{j−1,0} and w_{j−1,1} (both connected with the four consecutive nodes w_{j,0}, w_{j,1}, w_{j,2}, and w_{j,3}) are its child nodes at the nearest coarser scale, and w_{j−2,0} and w_{j−2,1} are the child nodes at the next coarser scale. The candidate weights of the columns in this subtree are increased:

c_k(i) = w for every node i in the subtree of λ_k, and c_k(i) = c_{k−1}(i) otherwise.  (4.8)

Since we want to favor the selected columns, the weighting value w should be greater than 1. We add the newly chosen node to our selected set:

Λ_k = Λ_{k−1} ∪ {λ_k},  U_k = GS(U_{k−1}, d_{λ_k}),  (4.9)

where GS(·) denotes orthogonalization using Gram-Schmidt. The residual is updated as follows:

r_k = y − P_k y.  (4.10)

After T iterations of (4.7)–(4.10), the significant coefficients are determined by the columns in Λ_T and are represented as follows:

ŝ_{Λ_T} = (D_{Λ_T}^T D_{Λ_T})^{−1} D_{Λ_T}^T y,  (4.11)

and the approximation of the original signal is

x̂ = Ψŝ.  (4.12)
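The upside-down child indexing described above can be sketched as follows. This is a minimal sketch under our own index convention (within each wavelet band, nodes are indexed 0, 1, 2, ...; the four consecutive fine-scale nodes 4m, ..., 4m+3 share the coarser pair 2m and 2m+1):

```python
def udt_children(i):
    """Two coarser-scale 'children' of fine-scale node i in the upside-down
    tree: the pair of coarser nodes shared by the group of four consecutive
    fine-scale nodes containing i (our own index convention)."""
    base = 2 * (i // 4)
    return [base, base + 1]

def upside_down_subtree(i, depth):
    """Per-scale index lists of the upside-down subtree rooted at fine-scale
    node i, finest scale first, extending `depth` scales upward toward
    coarser scales."""
    levels = [[i]]
    for _ in range(depth - 1):
        coarser = sorted({c for node in levels[-1] for c in udt_children(node)})
        levels.append(coarser)
    return levels

# node 2 at the finest scale, upside-down subtree of depth 3
print(upside_down_subtree(2, 3))   # [[2], [0, 1], [0, 1]]
```

In UDTOMP, the candidate weights c of all columns whose global indices fall in this subtree would then be set to the weighting value w > 1 before the next search (4.7).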
The pseudocode of UDTOMP is given in Algorithm 1.

For simplicity, the sparsity level K (assumed known) can be used as the halting criterion here. If K is unknown, we can modify the iteration above by letting k run from 1 to M and adding a threshold on the residual norm ||r_k|| below which the iteration terminates.
5. Experiment Result
To demonstrate the advantage of the upside-down tree structure, we evaluated the UDTOMP algorithm by comparing the performance of OMP, TOMP, and UDTOMP on a piecewise smooth test signal. A 4-level Daubechies-8 wavelet transform was applied to sparsify the test signal, the upward extending coefficient d and the weighting value w (a natural number greater than 1) were fixed, and samples were obtained using a measurement matrix with i.i.d. Gaussian entries.
In the first experiment, we reconstructed the signal from 300 random measurements using OMP, TOMP, and UDTOMP, respectively. Figure 2 depicts the reconstructions: the SNR of the OMP reconstruction is 27.63 dB, the SNR of the TOMP reconstruction is 34.57 dB, and the UDTOMP reconstruction achieves an SNR of 39.62 dB. In this experiment, UDTOMP thus gains about 12 dB over OMP and about 5 dB over TOMP, clearly demonstrating the advantage of the UDTOMP method.
In the second experiment, we reconstructed the test signal from different numbers of measurements using OMP, TOMP, and UDTOMP. Numbers of measurements ranging from 100 to 400 in increments of 30 were tested; 100 trials were run for each value, and the average reconstruction SNR is plotted in Figure 3. By exploiting the upside-down sparse tree structure, the UDTOMP method outperforms OMP and TOMP in piecewise smooth signal reconstruction.
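The reconstruction SNR reported in these experiments can be computed with the standard definition (the paper does not state its formula explicitly, so this is the usual convention):

```python
import numpy as np

def reconstruction_snr_db(x, x_hat):
    """SNR in dB between a signal x and its reconstruction x_hat:
    20 * log10(||x|| / ||x - x_hat||)."""
    err = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(x_hat, dtype=float))
    return 20.0 * np.log10(np.linalg.norm(x) / err)

# a reconstruction off by 1% in every sample gives a 40 dB SNR
x = np.array([1.0, 2.0, 3.0, 4.0])
print(round(reconstruction_snr_db(x, x * 1.01), 1))   # 40.0
```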
6. Conclusions
This paper introduced an upside-down tree-structure weighting scheme for the OMP algorithm in wavelet-domain CS signal reconstruction. UDTOMP weights the nodes connected in the subtree rooted at each significant coefficient. Unlike the tree structures used in previous CS recovery algorithms, UDTOMP constructs the tree with an upside-down rather than an upright structure, weighting the projections that are expected to have large coefficients. The experimental results show that our method outperforms OMP and TOMP and achieves a more accurate approximation in piecewise smooth signal reconstruction. In this paper, we considered only a constant weight value; different weight values could also be adopted at different scales.
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (no. 60827001) and the China Scholarship Council (no. 2009607046).
References
[1] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[2] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[3] J. A. Tropp, "Algorithms for simultaneous sparse approximation. Part II: convex relaxation," Signal Processing, vol. 86, no. 3, pp. 589–602, 2006.
[4] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
[5] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[6] R. A. DeVore and V. N. Temlyakov, "Some remarks on greedy algorithms," Advances in Computational Mathematics, vol. 5, pp. 173–187, 1996.
[7] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
[8] T. Blu, P. L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, "Sparse sampling of signal innovations: theory, algorithms, and performance bounds," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 31–40, 2008.
[9] R. G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–121, 2007.
[10] S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, 3rd edition, Elsevier/Academic Press, Amsterdam, The Netherlands, 2009.
[11] M. S. Crouse, R. D. Nowak, and R. G. Baraniuk, "Wavelet-based statistical signal processing using hidden Markov models," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 886–902, 1998.
[12] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk, "Wavelet-domain compressive signal reconstruction using a hidden Markov tree model," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), Las Vegas, NV, USA, pp. 5137–5140, April 2008.
[13] C. La and M. N. Do, "Signal reconstruction using sparse tree representations," in Wavelets XI, vol. 5914 of Proceedings of SPIE, San Diego, CA, USA, pp. 1–11, August 2005.
Copyright
Copyright © 2011 Yijiu Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.