Abstract

In solving the partial Fourier Multiple Measurement Vectors (FMMV) problem, existing greedy pursuit algorithms such as Simultaneous Orthogonal Matching Pursuit (SOMP), Simultaneous Subspace Pursuit (SSP), Hybrid Matching Pursuit (HMP), and Forward-Backward Pursuit (FBP) either suffer from low recovery ability or require the sparsity as prior information. This paper combines SOMP and FBP to propose a Hybrid Orthogonal Forward-Backward Pursuit (HOFBP) algorithm. As an iterative algorithm, each iteration of HOFBP consists of two stages. In the first stage, α indices selected by SOMP are added to the support set. In the second stage, the support set is shrunk by removing β indices. The choice of α and β is critical to the performance of this algorithm. The simulation results show that, with proper parameters, HOFBP performs better than other greedy pursuit algorithms, at the expense of more computing time in some cases. HOFBP does not need the sparsity as prior knowledge.

1. Introduction

Compressed Sensing (CS) [1–3] is a new information acquisition theory which breaks through the Nyquist sampling theorem. The Multiple Measurement Vector (MMV) problem [4] is a natural extension of CS that deals with multiple signals. It has potential in many practical problems, such as underwater acoustic channel estimation [5], source localization [6], Direction-Of-Arrival (DOA) estimation [7], image super-resolution [8], face recognition [9], and image reconstruction [10]. In MMV problems, if a partial Fourier measurement matrix is used to measure the signals, we have a partial Fourier MMV problem.

Many algorithms have been proposed to solve the MMV problem, such as Multiple Signal Classification algorithms [10–13], Bayesian learning algorithms [14–16], and greedy pursuit (GP) algorithms [17–20], as well as the algorithms of [6, 21]. Generally, GP algorithms are easy to implement and have low computational complexity. GP algorithms are based on the iterative selection of the strongest components of the measurements projected onto the measurement matrix columns. Different from the Gaussian random measurement matrix, the partial Fourier matrix has the property that its adjoining atoms (columns of the measurement matrix) are strongly correlated [17, 22]. Because of this, GP algorithms have some problems in solving the partial Fourier MMV problem. Simultaneous Orthogonal Matching Pursuit (SOMP) [18] lacks a mechanism to reevaluate the selected indices: once a wrong index is selected, there is no chance to remove it. Simultaneous Subspace Pursuit (SSP) [19] and Simultaneous Compressive Sampling Matching Pursuit (SCoSaMP) [20] select multiple atoms in each selection, which makes them weak in distinguishing adjoining atoms. G. Li and R. J. Burkholder [17] combined SOMP and SSP to propose the Hybrid Matching Pursuit (HMP) algorithm. However, in our simulations, HMP's performance is not always good enough. Moreover, SSP, SCoSaMP, and HMP need the sparsity as prior information, which is impossible to know a priori in most practical applications. Forward-Backward Pursuit (FBP) [23] does not need the sparsity as prior information; however, it is also weak in distinguishing adjoining atoms. As stated in [24], Bayesian-based CS reconstruction algorithms run in the opposite direction to GP algorithms: in Bayesian reconstruction, the sparsity is reduced from its maximal value by using hyperparameters. In this case, if a component is missed, the signal cannot be recovered. While the inclusion of a nonexisting component only influences the noise in the resulting reconstruction, the omission of an existing component prevents correct reconstruction.

In this paper, we combine SOMP and FBP to propose a Hybrid Orthogonal Forward-Backward Pursuit (HOFBP) algorithm. Inheriting the advantages of both SOMP and FBP, HOFBP has high resolution in distinguishing adjoining atoms as well as a mechanism to reevaluate the selected indices, and it does not require the sparsity as prior information. Simulation results show that, with proper parameters, HOFBP performs better than other GP algorithms, at the expense of more computing time in some cases.

The rest of this paper is organized as follows. In Section 2, we introduce some notation and concepts. In Section 3, we introduce related GP algorithms, including SOMP, SSP, HMP, and FBP. In Section 4, we propose the HOFBP algorithm. In Section 5, the computational complexity of HOFBP is analyzed. In Section 6, we provide the simulation results. Conclusions are stated in Section 7.

2. Notation and Concepts

In this paper, we use the following notation. L denotes the number of signals. Ω = {1, 2, …, N} denotes the set of indices which correspond to the rows of the joint signal X. For a matrix A, a_j represents the j-th column of A, and a^(i) represents the i-th row of A. For a set T ⊆ Ω, |T| denotes the cardinality of T, A_T represents the matrix composed of the columns a_j, j ∈ T, and X^T represents the matrix composed of the rows x^(i), i ∈ T. T* denotes the support set, which records the indices of the nonzero rows of X. If |T*| = K, X is called jointly K-sparse, and K is called the sparsity. In this paper, we assume K ≪ N. A^H represents the conjugate transpose of A. A^† = (A^H A)^{−1} A^H represents the pseudoinverse of A. ||A||_F denotes the Frobenius norm of A. span(A) denotes the space spanned by the columns of A.

2.1. Single Measurement Vector (SMV)

The measurement process of CS can be denoted as y = Ax + n, where x ∈ C^N is a sparse signal, A ∈ C^{M×N} (M < N) is the measurement matrix, y ∈ C^M is the measurement vector, and n is the additive noise. Given y and A, estimating x from y is called the Single Measurement Vector (SMV) problem [1–3, 25]. ℓ1 minimization algorithms [26–28] and GP algorithms have been proposed to solve the SMV problem. There are many GP algorithms, such as Orthogonal Matching Pursuit (OMP) [29], Subspace Pursuit (SP) [30], Compressive Sampling Matching Pursuit (CoSaMP) [31], and Forward-Backward Pursuit (FBP) [23], as well as Regularized Orthogonal Matching Pursuit (ROMP) [32], Stagewise Orthogonal Matching Pursuit (StOMP) [33], and Generalized Orthogonal Matching Pursuit (gOMP) [34].

2.2. Multiple Measurement Vector (MMV)

The MMV problem is an extension of the SMV problem. In the MMV problem, the measurement matrix A measures L signals, which can be denoted as

y_i = A x_i + n_i,  i = 1, 2, …, L,  (1)

where x_i ∈ C^N represents the i-th signal, y_i ∈ C^M denotes the i-th measurement vector, A ∈ C^{M×N} denotes the measurement matrix, and n_i denotes the additive noise. The measurement process can be rewritten as Y = AX + N, where Y = [y_1, y_2, …, y_L], X = [x_1, x_2, …, x_L], and N = [n_1, n_2, …, n_L]. If only K rows of X are nonzero vectors, X is called jointly K-sparse and the sparsity is K. Supposing X is jointly K-sparse, the problem of recovering X from Y, which is also called the MMV problem, can be formulated as

min |supp(X)|  subject to  ||Y − AX||_F ≤ ε,  (2)

where supp(X) denotes the set of indices of the nonzero rows of X and ε is the error bound.
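
To make the measurement model (1) concrete, the following minimal Python/NumPy sketch (our illustration with arbitrary sizes, not code from the paper) generates a jointly K-sparse joint signal X and its measurements Y = AX + N:

    import numpy as np

    # Illustrative dimensions (not the paper's values)
    M, N, L, K = 32, 128, 4, 5
    rng = np.random.default_rng(0)

    # Jointly K-sparse X: the same K rows are nonzero in every column
    X = np.zeros((N, L), dtype=complex)
    support = rng.choice(N, size=K, replace=False)
    X[support, :] = rng.standard_normal((K, L))

    A = rng.standard_normal((M, N)) / np.sqrt(M)  # placeholder measurement matrix
    noise = 0.01 * rng.standard_normal((M, L))    # additive noise term
    Y = A @ X + noise                             # Y = AX + N, as in (1)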

2.3. Partial Fourier Measurement Matrix

In MMV, many measurement matrices can be used to measure the signals, such as the Gaussian random matrix [1], the partial Fourier matrix [2, 3], and the Bernoulli random matrix [2], as well as deterministic measurement matrices [35, 36].

If a Gaussian random matrix is used to measure the signals, we call the problem a Gaussian random MMV problem. If a partial Fourier measurement matrix is used, we call it a partial Fourier MMV problem.

The elements of the Gaussian random measurement matrix follow a Gaussian distribution with mean 0 and variance 1/M. The Gaussian random measurement matrix can be defined as

A = (a_{ij}),  a_{ij} ~ N(0, 1/M),  i = 1, …, M, j = 1, …, N.  (3)

The Gaussian random measurement matrix is a random projection matrix [37–39], which is widely used in machine learning and dimensionality reduction.

The N × N discrete Fourier transform (DFT) matrix F is defined as

F_{jk} = (1/√N) exp(−2πi·jk/N),  j, k = 0, 1, …, N − 1.  (4)

The partial Fourier measurement matrix is formed by randomly choosing M rows from F, where M < N.

The elements of the Gaussian random measurement matrix follow a Gaussian distribution, and its adjoining atoms are weakly correlated. The measurement matrix used for radar imaging is typically a Fourier-like measurement matrix. As illustrated in [17, 22], compared with the Gaussian random measurement matrix, the adjoining atoms of a Fourier-like measurement matrix are strongly correlated.
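
The following small numerical check (ours) illustrates the contrast. For the Fourier-like case we assume, as in radar imaging, that the M selected rows are consecutive frequency samples; with fully random row selection the effect is weaker:

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 32, 128

    # Fourier-like matrix: M consecutive rows of the N x N DFT matrix
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    A_fourier = F[:M, :]
    A_fourier = A_fourier / np.linalg.norm(A_fourier, axis=0)  # unit-norm atoms

    # Gaussian matrix with unit-norm atoms, for comparison
    A_gauss = rng.standard_normal((M, N))
    A_gauss = A_gauss / np.linalg.norm(A_gauss, axis=0)

    def mean_adjacent_correlation(A):
        # Average |<a_k, a_(k+1)>| over all pairs of neighbouring atoms
        inner = np.sum(A[:, :-1].conj() * A[:, 1:], axis=0)
        return np.abs(inner).mean()

    print(mean_adjacent_correlation(A_fourier))  # close to 1 (about 0.9 here)
    print(mean_adjacent_correlation(A_gauss))    # small (roughly 0.1-0.2 here)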

Because GP algorithms iteratively select the atoms most correlated with the measurement signal, it is harder for them to distinguish the adjoining atoms of a partial Fourier measurement matrix than those of a Gaussian random measurement matrix. Our algorithm addresses this problem by combining SOMP and FBP.

It should be pointed out that, because the Gaussian random measurement matrix does not have the same problem as the partial Fourier measurement matrix, our algorithm has no advantage over some GP algorithms in solving the Gaussian random MMV problem. The experimental results in the Appendix illustrate this point.

3. Greedy Pursuit Algorithms

In all greedy pursuit algorithms, a two-step strategy is adopted to solve problem (2). First, the support set T is estimated. Then, the nonzero elements of the signal are estimated by the least squares method, and the solution of problem (2) can be calculated as

X̂^T = A_T^† Y,  X̂^{T^c} = 0,  (5)

where A_T^† = (A_T^H A_T)^{−1} A_T^H and T^c = Ω \ T.
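
As a concrete rendering of (5), the following sketch (ours; the helper name is our own) computes the least squares estimate once a support set has been chosen. It is reused by the algorithm sketches below:

    def ls_estimate(A, Y, support):
        # Least squares estimate of X on the estimated support, as in (5)
        N, L = A.shape[1], Y.shape[1]
        X_hat = np.zeros((N, L), dtype=complex)
        A_T = A[:, support]
        X_hat[support, :] = np.linalg.pinv(A_T) @ Y  # A_T^dagger Y; rows off
        return X_hat                                 # the support stay zero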

To simplify the description of the following algorithms, we define two helper functions. The function max_indices(v, k) returns the k indices corresponding to the largest-amplitude entries of the vector v, and the function min_indices(v, k) returns the k indices corresponding to the smallest-amplitude entries of v.
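
In code, these helpers can be realized as follows (our sketch, with np referring to NumPy as imported above):

    def max_indices(v, k):
        # Indices of the k largest-amplitude entries of v
        return np.argsort(np.abs(v))[::-1][:k]

    def min_indices(v, k):
        # Indices of the k smallest-amplitude entries of v
        return np.argsort(np.abs(v))[:k]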

SOMP is summarized in Algorithm 1. In SOMP, the support set expands iteratively. In each iteration, the index corresponding to the atom most correlated with the residual vectors is added to the support set. If the sum of all the residual vectors' ℓ2 norms is smaller than the termination threshold, the iteration process terminates, the support set T is output, and the signals are estimated by the least squares method as defined in (5); otherwise, the iteration process continues.

Input: the measurement vectors y_i, where
i = 1, 2, …, L, the measurement matrix A, the
termination threshold ε.
Initialization: Let the residual vectors r_i^0 = y_i,
where i = 1, 2, …, L, the support set T^0 = ∅, and the
number of iterations t = 1.
Iteration:
(1) Select λ = argmax_{j ∈ Ω} Σ_{i=1}^{L} |⟨r_i^{t−1}, a_j⟩| and let
T^t = T^{t−1} ∪ {λ}.
(2) Update the residual vectors r_i^t = y_i − A_{T^t} A_{T^t}^† y_i,
where i = 1, 2, …, L.
(3) If Σ_{i=1}^{L} ||r_i^t||_2 < ε, terminate the
iteration; otherwise, let t = t + 1 and go back to step (1).
Output: Compute the estimated signals X̂ by (5).
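
A compact Python rendering of Algorithm 1 (our sketch, reusing ls_estimate from above; A is M x N, Y is M x L):

    def somp(A, Y, eps, max_iter=None):
        # Simultaneous OMP: grow the support one index per iteration
        M, N = A.shape
        max_iter = M if max_iter is None else max_iter
        support, R = [], Y.copy()
        for _ in range(max_iter):
            # Step (1): atom most correlated with all residual vectors
            corr = np.abs(A.conj().T @ R).sum(axis=1)
            corr[support] = 0                    # never reselect an index
            support.append(int(np.argmax(corr)))
            # Step (2): residual update by projection onto the chosen atoms
            A_T = A[:, support]
            R = Y - A_T @ (np.linalg.pinv(A_T) @ Y)
            # Step (3): stop once the summed residual norms fall below eps
            if np.linalg.norm(R, axis=0).sum() < eps:
                break
        return ls_estimate(A, Y, support), support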

SSP is summarized in Algorithm 2; it searches the support set iteratively. Each iteration is divided into an index selection stage and a backtracking stage. In the index selection stage, the K indices corresponding to the atoms most correlated with all the residual vectors are added to the support set. In this way, the size of the support set expands to 2K (except in the first iteration). In the backtracking stage, the measurement vectors are individually projected onto the subspace spanned by the 2K atoms. Then, the support set is refined by selecting the K indices which correspond to the largest entries of the fused projection coefficients. Once the sum of all the residual vectors' ℓ2 norms no longer decreases, the iteration process is terminated and the signals are estimated by (5).

Input: The measurement matrix A, the
measurement vectors y_i, where i = 1, 2, …, L, the
sparsity K.
Initialization: Let the residual vectors r_i^0 = y_i,
where i = 1, 2, …, L, the support set T^0 = ∅, and the
number of iterations t = 1.
Iteration:
(1) Let the temporary support set
T' = T^{t−1} ∪ max_indices(Σ_{i=1}^{L} |A^H r_i^{t−1}|, K).
(2) Let T^t be the K indices in T' corresponding to the
largest entries of Σ_{i=1}^{L} |A_{T'}^† y_i|.
(3) The residual vectors are updated as
r_i^t = y_i − A_{T^t} A_{T^t}^† y_i, where i = 1, 2, …, L.
(4) If Σ_{i=1}^{L} ||r_i^t||_2 < Σ_{i=1}^{L} ||r_i^{t−1}||_2, let t = t + 1
and go back to step (1); otherwise, terminate the
iteration and let the final support set T = T^{t−1}.
Output: Compute the estimated signals X̂ by (5).
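
A corresponding sketch of Algorithm 2 (ours; note the backtracking step and the sparsity K given as input):

    def ssp(A, Y, K, max_iter=50):
        # Simultaneous Subspace Pursuit: select K atoms, then backtrack
        support = np.array([], dtype=int)
        R, prev = Y.copy(), np.inf
        for _ in range(max_iter):
            corr = np.abs(A.conj().T @ R).sum(axis=1)
            T_tmp = np.union1d(support, max_indices(corr, K))      # step (1)
            coef = np.abs(np.linalg.pinv(A[:, T_tmp]) @ Y).sum(axis=1)
            new_support = T_tmp[max_indices(coef, K)]              # step (2)
            A_T = A[:, new_support]
            R_new = Y - A_T @ (np.linalg.pinv(A_T) @ Y)            # step (3)
            res = np.linalg.norm(R_new, axis=0).sum()
            if res >= prev:                                        # step (4)
                break                # residual stopped decreasing: keep old set
            support, R, prev = new_support, R_new, res
        return ls_estimate(A, Y, support), support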

Similar to SSP, FBP is also a two-stage algorithm. It has two parameters: the forward step size α and the backward step size β, where β < α. In the first stage, α indices are added to the support set. In the second stage, the β indices corresponding to the atoms which contribute least to the projection coefficients are removed from the support set. Because β < α, the support set expands with step size α − β. When the residual energy is less than a threshold, the iteration process is terminated. The choices of α and β are critical to the performance of FBP. According to the simulation results in [23], β = α − 1 leads to the best performance. In case of a failure, in order to avoid too many iterations, the size of the estimated support set is limited by the parameter K_max, which is usually set in proportion to the measurement number M.

HMP combines the strengths of SOMP and SSP. It also searches the support set iteratively, and each iteration is divided into an index selection stage and a backtracking stage. In the index selection stage, it utilizes the strategy of SOMP to select indices. First, L solutions are obtained by individually applying OMP to the L residual vectors. Then, the K indices corresponding to the largest-amplitude entries of the fused solutions are added to the support set. In the backtracking stage, HMP utilizes the mechanism of SSP to reevaluate the selected indices. First, L projection coefficient vectors are obtained by individually projecting the measurement vectors onto the subspace spanned by A_T, where T is the support set. Then, the support set is refined by selecting the K indices which correspond to the largest entries of the fused projection coefficients. Once the sum of all the residual vectors' ℓ2 norms no longer decreases, the iteration process is terminated and the signals are estimated by (5); otherwise, the iteration process continues.

4. HOFBP

4.1. Strengths and Drawbacks of SOMP, SSP, FBP, and HMP

From the description of SOMP and SSP, we can see that both algorithms have strengths and drawbacks in solving the partial Fourier MMV problem. SOMP selects one index in each iteration, which gives it a strong ability to distinguish adjoining atoms. However, it lacks a mechanism to reevaluate the selected indices: once a wrong index is selected, it stays in the support set forever.

SSP has a mechanism to reevaluate the selected indices: if the selected indices are incorrect, they can be removed from the support set in the next iteration. However, because the adjoining atoms of the partial Fourier matrix are strongly correlated and SSP selects multiple atoms at once, it has a weak ability to distinguish adjoining atoms. We observe that SSP performs worse than SOMP in solving the partial Fourier MMV problem.

HMP combines the strengths of SOMP and SSP. On the one hand, HMP selects atoms by individually applying OMP to the local residual vectors (Algorithm 4), which ensures that the atoms are selected one by one. On the other hand, in the backtracking stage, it utilizes the mechanism of SSP to reevaluate the selected indices, which removes wrongly selected indices from the support set. However, HMP has drawbacks. First, our simulation results demonstrate that it does not always perform better than other algorithms; e.g., it performs worse than SOMP in the case of relatively few signals. Moreover, it needs the sparsity as prior information, which is almost impossible to know in most practical applications. Although G. Li and R. J. Burkholder [17] proposed a method to estimate the sparsity, it is not always effective: once the estimated sparsity is smaller than the real sparsity, the recovery process fails.

Being a two-stage algorithm, FBP has a mechanism to reevaluate the selected atoms, and it does not need the sparsity as prior information (Algorithm 3). However, in the forward step it selects multiple atoms at once, so it is still weak in distinguishing adjoining atoms.

Input: The measurement vectors y_i, where
i = 1, 2, …, L, the measurement matrix A, the
forward step size α, the backward step size β, the
termination threshold ε, the limit K_max on the size of
the estimated support set.
Initialization: Let the residual vectors r_i^0 = y_i, where
i = 1, 2, …, L, the support set T^0 = ∅, and the
number of iterations t = 1.
Iteration:
(1) Let the temporary support set
T' = T^{t−1} ∪ max_indices(Σ_{i=1}^{L} |A^H r_i^{t−1}|, α).
(2) Let T^t = T' \ J, where J is the set of β indices in T'
corresponding to the smallest entries of Σ_{i=1}^{L} |A_{T'}^† y_i|.
(3) Update the residual vectors as
r_i^t = y_i − A_{T^t} A_{T^t}^† y_i, where i = 1, 2, …, L.
(4) If Σ_{i=1}^{L} ||r_i^t||_2 < ε or |T^t| ≥ K_max, terminate
the iteration and let the final support set T = T^t;
otherwise, let t = t + 1 and go back to step (1).
Output: Compute the estimated signals X̂ by (5).
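
A Python sketch of the MMV version of FBP in Algorithm 3 (ours, reusing the helpers defined above):

    def fbp(A, Y, alpha, beta, eps, K_max):
        # Forward-Backward Pursuit: add alpha atoms, then remove beta atoms
        support = np.array([], dtype=int)
        R = Y.copy()
        while True:
            corr = np.abs(A.conj().T @ R).sum(axis=1)
            corr[support] = 0
            T_tmp = np.union1d(support, max_indices(corr, alpha))  # forward step
            coef = np.abs(np.linalg.pinv(A[:, T_tmp]) @ Y).sum(axis=1)
            support = np.delete(T_tmp, min_indices(coef, beta))    # backward step
            A_T = A[:, support]
            R = Y - A_T @ (np.linalg.pinv(A_T) @ Y)                # residual update
            if np.linalg.norm(R, axis=0).sum() < eps or len(support) >= K_max:
                return ls_estimate(A, Y, support), support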
Input: The measurement matrix A, the
measurement vectors y_i, where i = 1, 2, …, L, the
sparsity K.
Initialization: Let the residual vectors r_i^0 = y_i,
where i = 1, 2, …, L, the support set T^0 = ∅, and the
number of iterations t = 1.
Iteration:
(1) Let the temporary support set
T' = T^{t−1} ∪ max_indices(Σ_{i=1}^{L} |x̂_i|, K),
where x̂_i is the signal recovered
by applying OMP to the i-th residual vector.
(2) Let T^t be the K indices in T' corresponding to the
largest entries of Σ_{i=1}^{L} |A_{T'}^† y_i|.
(3) The residual vectors are updated as
r_i^t = y_i − A_{T^t} A_{T^t}^† y_i, where i = 1, 2, …, L.
(4) If Σ_{i=1}^{L} ||r_i^t||_2 < Σ_{i=1}^{L} ||r_i^{t−1}||_2, let t = t + 1
and go back to step (1); otherwise, terminate the
iteration and let the final support set T = T^{t−1}.
Output: Compute the estimated signals X̂ by (5).

4.2. Description of HOFBP

Inspired by the GP algorithms described above, we combine the strengths of SOMP and FBP to propose a Hybrid Orthogonal Forward-Backward Pursuit (HOFBP) algorithm.

HOFBP is summarized in Algorithm 5. In the initialization, the residual vectors are initialized as the measurement vectors and the support set is initialized as the empty set. In HOFBP, the support set expands iteratively. In the first step, as in HMP, L recovered signals are obtained by individually applying OMP to the L residual vectors; then, the α indices corresponding to the largest-amplitude entries of the fused recovered signals are added to the support set. In the second step, L projection coefficient vectors are obtained by individually projecting the measurement vectors onto the space spanned by A_T, where T is the support set; then, the β indices which correspond to the minimum entries of the fused projection coefficients are removed from the support set. In the third step, the residual vectors are updated. In the fourth step, the termination condition is checked: if the sum of all residual vectors' ℓ2 norms is smaller than the termination threshold ε, or the size of the estimated support set exceeds the limit K_max, the iteration process is terminated, the support set obtained in the last iteration is taken as the final support set, and the signals are estimated by the least squares method as stated in (5). We should point out that, because the sparsity is assumed unknown in HOFBP, we take α as the sparsity in the OMP algorithm.

Input: The measurement vectors y_i, where
i = 1, 2, …, L, the measurement matrix A, the
forward step size α, the backward step size β, the
termination threshold ε, the limit K_max on the size of
the estimated support set.
Initialization: Let the residual vectors r_i^0 = y_i, where
i = 1, 2, …, L, the support set T^0 = ∅, and the
number of iterations t = 1.
Iteration:
(1) Let the temporary support set
T' = T^{t−1} ∪ max_indices(Σ_{i=1}^{L} |x̂_i|, α),
where x̂_i is the signal recovered
by applying the OMP algorithm (with sparsity α) to the
i-th residual vector.
(2) Let T^t = T' \ J, where J is the set of β indices in T'
corresponding to the smallest entries of Σ_{i=1}^{L} |A_{T'}^† y_i|.
(3) Update the residual vectors as
r_i^t = y_i − A_{T^t} A_{T^t}^† y_i, where i = 1, 2, …, L.
(4) If Σ_{i=1}^{L} ||r_i^t||_2 < ε or |T^t| ≥ K_max, terminate
the iteration and let the final support set T = T^t;
otherwise, let t = t + 1 and go back to step (1).
Output: Compute the estimated signals X̂ by (5).
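
The following Python sketch (ours) renders Algorithm 5, including a plain OMP helper used for the index selection stage; as in the algorithm, OMP is run with α in place of the unknown sparsity:

    def omp(A, y, k):
        # Plain OMP on a single vector, run for k iterations
        support, r = [], y.copy()
        for _ in range(k):
            corr = np.abs(A.conj().T @ r)
            corr[support] = 0
            support.append(int(np.argmax(corr)))
            A_T = A[:, support]
            r = y - A_T @ (np.linalg.pinv(A_T) @ y)
        x = np.zeros(A.shape[1], dtype=complex)
        x[support] = np.linalg.pinv(A[:, support]) @ y
        return x

    def hofbp(A, Y, alpha, beta, eps, K_max):
        # Hybrid Orthogonal Forward-Backward Pursuit (Algorithm 5)
        support = np.array([], dtype=int)
        R = Y.copy()
        while True:
            # Step (1): fuse the L per-vector OMP solutions, add top-alpha indices
            fused = sum(np.abs(omp(A, R[:, i], alpha)) for i in range(Y.shape[1]))
            fused[support] = 0
            T_tmp = np.union1d(support, max_indices(fused, alpha))
            # Step (2): remove the beta indices with the weakest fused projections
            coef = np.abs(np.linalg.pinv(A[:, T_tmp]) @ Y).sum(axis=1)
            support = np.delete(T_tmp, min_indices(coef, beta))
            # Step (3): residual update
            A_T = A[:, support]
            R = Y - A_T @ (np.linalg.pinv(A_T) @ Y)
            # Step (4): halting conditions
            if np.linalg.norm(R, axis=0).sum() < eps or len(support) >= K_max:
                return ls_estimate(A, Y, support), support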

The index selection stage of HOFBP utilizes SOMP. Because OMP is a simplified version of SOMP for one signal, HOFBP also seems closely related to OMP. HMP combines SOMP and SSP, and HOFBP combines SOMP and FBP; because SSP and FBP are similar to each other, HOFBP also seems closely related to HMP. Although HOFBP seems closely related to OMP and HMP, because SOMP and FBP are the basic algorithms of HOFBP, we still describe HOFBP as an algorithm combining SOMP and FBP.

The Halting Conditions. The termination threshold ε should be set as a very small positive number. In the noiseless case, we set ε close to zero. In noisy cases, ε is set according to the noise level, which is determined by the signal-to-noise ratio (SNR). Moreover, in case of a failure, in order to avoid too many iterations, K_max is used to limit the size of the support set. In order to ensure that all correct atoms are found before the support set size reaches K_max, K_max should be set larger than the real sparsity K. Because K is unknown, K_max can simply be set large enough, for example, in proportion to the measurement number M. If the sum of all residual vectors' ℓ2 norms is smaller than the termination threshold or the size of the support set exceeds K_max, the iteration process is terminated.

The Choice of the Forward and Backward Step Sizes. The forward step size α and the backward step size β are two important parameters of HOFBP, and their choices are critical to the performance of the algorithm. α should be set larger than β and smaller than the measurement number M; according to the research results of [18], a proper choice of α leads to the best performance. By the definition of HOFBP, β must be smaller than α. The smaller the value of α − β, the more carefully the indices are selected; when β = α − 1, HOFBP has the best performance.

4.3. Relations to Other GP Algorithms

We can see that HOFBP has several strengths over other greedy pursuit algorithms. First, compared with SOMP, it has a mechanism to reevaluate the selected indices. Second, compared with SSP, it can distinguish adjoining atoms well by individually applying OMP to the local residual vectors; moreover, it does not need the sparsity as prior information. Third, compared with HMP, the support set expands iteratively, so it does not need the sparsity as prior information. Last, compared with FBP, the forward step selects atoms one by one, so it can distinguish adjoining atoms well.

We note the existence of two improved versions of FBP: Iterative Forward-Backward Pursuit (IFBP) [40] and Adaptive Forward-Backward Orthogonal Matching Pursuit (AFBOMP) [41]. However, our algorithm is very different from them. IFBP embeds FBP into an iterative framework, and its performance improvement is mainly due to that framework. AFBOMP proposes an adaptive method of selecting α and β, which improves FBP's performance. Different from both, the main innovation of our algorithm is the atom selection strategy; the innovations of these three algorithms thus come from different aspects.

5. Computational Complexity Analysis

SOMP and SSP are extended versions of OMP and SP, respectively. According to [30], both OMP and SP have a computational complexity of O(KMN). Because there are L measurement vectors in the MMV problem, both SOMP and SSP have a computational complexity of O(LKMN). According to [17], the computational complexity of HMP mainly comes from applying OMP to the L local residual vectors. Thus, each iteration of HMP has a computational complexity of O(LKMN). Since HMP performs some number of iterations, say I, its total computational complexity is O(I·LKMN).

In each iteration of HOFBP, the computational complexity is mainly composed of two parts. Part 1 comes from applying OMP to the L residual vectors. Because α is taken as the sparsity, applying OMP to each residual vector has a complexity of O(αMN); therefore, applying OMP to the L residual vectors has a computational complexity of O(LαMN). Part 2 comes from projecting the L measurement vectors onto the space corresponding to the support set in steps (2) and (3) of the iteration. The size of the support set is at most K_max, so these projections have a complexity on the order of O(L·K_max·M). Generally speaking, K_max ≪ N, so in comparison with the complexity of Part 1, the complexity of Part 2 is marginal. Therefore, each iteration of HOFBP has a computational complexity of O(LαMN). HOFBP runs for at most K_max/(α − β) iterations, so the total computational complexity of HOFBP is bounded by O((K_max/(α − β))·LαMN).

6. Simulation Results and Analysis

In this part, we conduct numerical simulations to evaluate the performance and efficiency of HOFBP in comparison with HMP, SSP, SOMP, IFBP, and AFBOMP. SOMP, SSP, and HMP are traditional GP algorithms; IFBP and AFBOMP are improved versions of FBP. For IFBP, AFBOMP, and HOFBP, the forward step size α is set to a common fixed value. Both noisy and noiseless cases are considered.

We use the following hypotheses in the simulations.
(I) The measurement matrix A is constructed by randomly selecting M rows from an N × N discrete Fourier matrix. Every column of A is normalized to unit norm.
(II) We randomly generate L signals with the same nonzero element positions. The K indices of the nonzero positions are uniformly chosen from the set {1, 2, …, N}. The nonzero elements of each signal are independently drawn from the standard normal distribution.
(III) All curves in the following figures are obtained by averaging the results over independent trials. In each trial, the signals and the measurement matrix are generated independently.
(IV) Perfect recovery percentages are used to evaluate the algorithms' performance. In a trial, if the relative error between the original joint signal X and the recovered joint signal X̂ is below a small threshold, we declare the trial a perfect recovery. Supposing there are n perfect recoveries in P trials, the perfect recovery percentage is n/P × 100%. (A code sketch of hypotheses (I), (II), and (IV) follows this list.)
(V) Average run times are used to evaluate the algorithms' efficiency. They are computed by averaging the run times over the independent trials.
(VI) The parameters of the algorithms are set as follows: K_max limits the size of the estimated support set in HOFBP, IFBP, and AFBOMP; the termination threshold ε of HOFBP, SOMP, IFBP, and AFBOMP is set as a very small positive number; the stopping conditions of SSP and HMP use the real sparsity K.
(VII) The simulations are performed in Matlab R2014a running on a computer with a 3.7 GHz Intel Xeon E3 processor, 16.0 GB of RAM, and the Windows 7 system.
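
A hedged Python rendering of hypotheses (I), (II), and (IV) (the sizes, trial count, step sizes, and recovery threshold below are our own illustrative choices, not the paper's exact values; hofbp and np are defined in the earlier sketches):

    rng = np.random.default_rng(0)
    M, N, L, K = 32, 128, 4, 8     # illustrative sizes
    P = 100                        # number of independent trials

    def gen_partial_fourier(M, N, rng):
        # (I): M random rows of the N x N DFT matrix, unit-norm columns
        F = np.fft.fft(np.eye(N)) / np.sqrt(N)
        A = F[rng.choice(N, size=M, replace=False), :]
        return A / np.linalg.norm(A, axis=0)

    def gen_signals(N, L, K, rng):
        # (II): L jointly K-sparse signals sharing one support
        X = np.zeros((N, L), dtype=complex)
        support = rng.choice(N, size=K, replace=False)
        X[support, :] = rng.standard_normal((K, L))
        return X

    successes = 0
    for _ in range(P):
        A = gen_partial_fourier(M, N, rng)
        X = gen_signals(N, L, K, rng)
        Y = A @ X                                    # noiseless measurements
        X_hat, _ = hofbp(A, Y, alpha=2, beta=1, eps=1e-6, K_max=M // 2)
        # (IV): declare perfect recovery via a relative-error test
        if np.linalg.norm(X_hat - X) / np.linalg.norm(X) < 1e-4:
            successes += 1
    print("perfect recovery percentage:", 100.0 * successes / P)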

6.1. Robustness against the Step Size

First, we analyze the influence of the step sizes on the performance and efficiency of HOFBP, IFBP, and AFBOMP. Figure 1 shows the perfect recovery percentages and average run times of these three algorithms as the sparsity K varies in the noiseless case. We fix the other parameters M, N, and L, and let the net step size α − β take several values.

For these three algorithms, we can see that α − β = 1 leads to the best performance. As the value of α − β increases, the performance of all three algorithms declines; this is because the smaller the value of α − β, the more carefully the indices are selected. For efficiency, we can see that the smaller the value of α − β, the more time the three algorithms consume. The reason is that a smaller α − β requires more iterations for the algorithms to converge.

We can see that, for the same α − β, HOFBP has the highest recovery percentage. In terms of efficiency, HOFBP consumes more time than AFBOMP, but much less time than IFBP.

6.2. Robustness against Sparsity

Figure 2 shows the perfect recovery percentages and average run times of all algorithms as the sparsity K varies in the noiseless case, with the other parameters M, N, and L fixed. HMP performs better than SOMP at the beginning; as the sparsity increases, HMP begins to perform worse than SOMP. SSP performs almost consistently worse than SOMP, which is consistent with our previous analysis. HOFBP performs better than the other algorithms. IFBP has performance similar to HOFBP but consumes much more time. In terms of efficiency, HOFBP consumes time similar to HMP and AFBOMP while performing better than both.

6.3. Robustness against the Measurement Number

Figure 3 shows the perfect recovery percentages and average run times of all algorithms as the measurement number M varies in the noiseless case, with the other parameters K, N, and L fixed. As the measurement number increases, the perfect recovery percentages of all algorithms increase. HMP and SSP perform consistently worse than SOMP. HOFBP has the best performance among all algorithms. IFBP performs similarly to HOFBP but consumes much more time. In terms of efficiency, HOFBP is similar to AFBOMP and HMP, but with better performance. Moreover, HOFBP needs the smallest measurement number to reach perfect recovery.

6.4. Robustness against the Number of Signals

Figure 4 shows the perfect recovery percentages and average run times of all algorithms as the number of signals L varies in the noiseless case, with the other parameters K, M, and N fixed. As L increases, the perfect recovery percentages of all algorithms increase. This is because the more measurement vectors there are, the more correlation the algorithms can exploit. We can see that HMP performs worse than SOMP when L is small, which illustrates that HMP has fewer strengths than SOMP in the case of relatively few signals. HOFBP performs better than the other algorithms; moreover, it needs fewer signals to reach perfect recovery. For efficiency, we have results similar to those of Sections 6.2 and 6.3.

6.5. Recovery from Noisy Observation

In this part, we test the algorithms' performance against noise. We assume the noise is i.i.d. Gaussian noise. The signal-to-noise ratio (SNR) is used to measure the relation between the measurement signal and the noise, and is defined as

SNR = 10 log10( ||AX||_F^2 / ||N||_F^2 ) dB.  (6)
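
A small sketch (ours) of adding i.i.d. Gaussian noise at a prescribed SNR according to (6):

    def add_noise(Y_clean, snr_db, rng):
        # Scale the noise so that 10*log10(||AX||_F^2 / ||N||_F^2) equals snr_db
        noise = rng.standard_normal(Y_clean.shape)
        scale = np.linalg.norm(Y_clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
        return Y_clean + scale * noise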

It should be pointed out that, in the noisy case, the termination thresholds of SOMP, HOFBP, IFBP, and AFBOMP are set according to the noise level, while the stopping conditions of SSP and HMP still use the real sparsity.

Figure 5 shows the perfect recovery percentages and average run times of all algorithms as the SNR varies, with the other parameters K, M, N, and L fixed. As the SNR increases, the perfect recovery percentages of all algorithms increase. SSP and HMP perform consistently worse than SOMP. AFBOMP performs worst, which means it is not suitable for the noisy situation. HOFBP performs consistently better than the other algorithms. For efficiency, we have results similar to those of Sections 6.2 and 6.3.

7. Conclusion

The strengths and drawbacks of previous GP algorithms have been analyzed. On this basis, we combined SOMP and FBP to propose the HOFBP algorithm. The choice of parameters is critical to the performance of this algorithm; the simulation results show that, with proper parameters, HOFBP performs better than other GP algorithms, at the expense of more computing time in some cases. Moreover, HOFBP does not need the sparsity as prior information. It would be valuable to derive exact expressions for the mean square error of the reconstruction produced by our algorithm; this will be investigated in our future work. Moreover, in our opinion, AFBOMP, IFBP, and HOFBP could be combined to further improve performance, which can also be researched in future work.

Appendix

As illustrated in Section 2.3, the Gaussian random measurement matrix is very different from the partial Fourier measurement matrix: its adjoining atoms are not highly correlated with each other. In order to demonstrate this point, we conduct some experiments.

IFBP, AFBOMP, and our algorithm are all improved versions of FBP. In Section 6, our algorithm shows better performance than IFBP and AFBOMP in solving the partial Fourier MMV problem. In this part, our algorithm is compared with them in solving the Gaussian random MMV problem.

The experimental design is the same as in Section 6, except the measurement matrix has Gaussian random entries.

Figures 6(a), 6(b), and 6(c) respectively show the performance of the three algorithms against the change of the sparsity K, the measurement number M, and the number of signals L. It can be seen that our algorithm has no advantage over IFBP and AFBOMP in solving a Gaussian random MMV problem.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by National Natural Science Foundation of China (no. 51574232 and no. 61379100).