Abstract

In this paper, we propose a modified version of the hard thresholding pursuit algorithm, called modified hard thresholding pursuit (MHTP), using a convex combination of the current and previous points. The convergence analysis, finite termination properties, and stability of the MHTP are established under the restricted isometry property of the measurement matrix. Simulations are performed in noiseless and noisy environments using synthetic data, in which the successful frequencies, average runtime, and phase transition of the MHTP are considered. Standard test images are also used to test the reconstruction capability of the MHTP in terms of the peak signal-to-noise ratio. Numerical results indicate that the MHTP is competitive with several mainstream thresholding and greedy algorithms, such as hard thresholding pursuit, compressive sampling matching pursuit, subspace pursuit, generalized orthogonal matching pursuit, and Newton-step-based hard thresholding pursuit, in terms of recovery capability and runtime.

1. Introduction

Compressed sensing has been applied in many industrial fields, such as image processing [1, 2], wireless channel estimation [3], group testing [4], and sparse signal recovery [5–7]. One of the key components in compressed sensing is the reconstruction of a signal from a number of measurements smaller than the signal length. The underlying assumption that makes this possible is that the signal is sparse. The corresponding mathematical model can be expressed as the following sparse optimization problem:

$$\min_{x \in \mathbb{R}^n} \|y - Ax\|_2^2 \quad \text{subject to} \quad \|x\|_0 \le s,$$

where $\|x\|_0$ represents the total number of nonzero entries of $x$, $s$ is the sparsity level, $A \in \mathbb{R}^{m \times n}$ is the measurement matrix with $m \ll n$, and $y \in \mathbb{R}^m$ is the measurement vector.

Many different types of algorithms exist to solve the aforementioned sparse optimization problem. These algorithms fall into three categories: optimization [8–12], greedy [13–17], and thresholding [18–23]. In principle, which algorithm should be selected depends on the actual situation. For example, runtime is one of the criteria. Basis pursuit is one of the optimization methods, and its runtime depends on the algorithm chosen for the $\ell_1$-minimization. If the sparsity level $s$ is much smaller than the signal length $n$, the greedy orthogonal matching pursuit (OMP) is extremely fast. A major advantage of hard thresholding algorithms is that they are almost unaffected by the sparsity level. In addition, the theoretical analysis tools for these algorithms may be different as well; see [24] for more discussion. In this study, we focus on a popular thresholding method, the hard thresholding pursuit (HTP) algorithm [21–23]. HTP is a modified version of the iterative hard thresholding (IHT) algorithm that adds an orthogonal projection/debiasing step. IHT [18] combines a gradient descent step with the hard-thresholding operator. Although the gradient descent direction is used, the value of the objective function may not decrease because of the hard-thresholding operator, which enforces the sparsity of signals without simultaneously considering the reduction of the objective value. This drawback is overcome by a new thresholding operator called optimal $k$-thresholding, which selects the components that minimize the objective function [25, 26]. However, this modification causes a significant increase in runtime because the resulting subproblem is not easy to solve. Recently, using a binary regularization and linearization technique, Zhao and Luo [27] proposed the natural thresholding algorithm to reduce the computational cost. In addition, Liu and Barber introduced a new operator called reciprocal thresholding [28], which lies between hard and soft thresholding. As mentioned earlier, an advantage of IHT is that its runtime is almost uninfluenced by the sparsity level. Along with its simple structure, this property makes IHT considerably attractive. Moreover, the orthogonal projection finds the vector that best fits the measurements on the identified support; combining it with IHT alleviates the instability of IHT and leads to the HTP algorithm. Additional information on hard thresholding algorithms and their applications can be found in [5, 29–35].

The main contributions of this study are given as follows:

(1) It should be noted that many algorithms for solving compressed sensing only use the gradient information of the current point $x^k$, such as IHT [18], HTP [22], compressive sampling matching pursuit (CoSaMP) [15], and subspace pursuit (SP) [13]. In the design of an algorithm, it is possible to improve efficiency by utilizing information provided by previous iteration points. Motivated by this, we use the information of the current point $x^k$ and the previous point $x^{k-1}$ to improve HTP performance. Notably, compressed sensing has a combinatorial structure; hence, it does not strictly belong to the field of continuous optimization. This motivated us to study a convex combination rather than the (extra) momentum technique, because the hard-thresholding operator would break down the advantage that the (extra) momentum technique enjoys in standard optimization problems. In other words, we present a novel idea of using a convex combination of $x^k$ and $x^{k-1}$ to generate an intermediate point $u^k$ and then performing the gradient descent step at $u^k$ (not $x^k$). Based on this idea and using orthogonal projection, we propose a new algorithm called the modified hard thresholding pursuit (MHTP) algorithm.

(2) To guarantee the convergence of our algorithm from a theoretical perspective, we discuss the restricted isometry property (RIP) of the measurement matrix $A$. More precisely, if the restricted isometry constant (RIC), $\delta_{3s}$, of the measurement matrix satisfies $\delta_{3s} < \frac{\sqrt{3}}{3}$, the iterative sequence generated by the MHTP converges. This bound is the same as that for HTP [22]; indeed, the best RIP-based bound for HTP at present is $\delta_{3s} < \frac{\sqrt{3}}{3}$ [22]. The algorithm proposed in this paper includes HTP as a special case; hence, the RIP-based bound cannot be larger than $\frac{\sqrt{3}}{3}$. We establish the convergence analysis, finite termination, and stability of the MHTP without decreasing this upper bound.

(3) To demonstrate the effectiveness of the MHTP for sparse signal recovery, numerical experiments are conducted comparing it with several mainstream algorithms, such as HTP, Newton-step-based hard thresholding pursuit (NSHTP) [30], generalized orthogonal matching pursuit (gOMP) [36], CoSaMP, and SP, on both synthetic data and real images. For synthetic data, simulations are conducted in noiseless and noisy environments, wherein the successful frequency, average runtime, and phase transition of the algorithms are considered. The numerical results indicate that the MHTP outperforms the other algorithms in terms of the recovery capability of sparse signals and consumes less time than NSHTP, CoSaMP, SP, and gOMP. In real-world experiments, the reconstruction capability of the algorithms on several standard test images is compared in terms of the peak signal-to-noise ratio (PSNR) and total runtime. Simulations show that the MHTP is competitive with the other algorithms in terms of image reconstruction quality and runtime.

The notations frequently used in this study are given as follows. For a given index set $S$, $|S|$ denotes the cardinality of $S$ and $\overline{S}$ denotes the complement of $S$. For a fixed vector $x \in \mathbb{R}^n$, $x_S$ is obtained by retaining the elements of $x$ indexed in $S$ and zeroing out the rest of the elements. The notation $L_s(x)$ is an index set whose elements correspond to the $s$ largest absolute entries of $x$, and $\mathcal{H}_s(x)$ represents the hard thresholding of $x$, that is, $\mathcal{H}_s(x) = x_{L_s(x)}$. Let $\mathrm{supp}(x)$ be the support of $x$, that is, $\mathrm{supp}(x) = \{i : x_i \neq 0\}$. For a given real number $a$, $\lceil a \rceil$ denotes the ceiling function of $a$. The remainder of this paper is organized as follows. The HTP (Algorithm 1) and MHTP (Algorithm 2) algorithms are described in Section 2. Section 3 presents a theoretical analysis of the MHTP. Numerical experiments are presented in Section 4. Finally, conclusions are drawn in Section 5.
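As an illustration of the operators just defined, $L_s(\cdot)$ and $\mathcal{H}_s(\cdot)$ can be realized in a few lines of NumPy; this is a minimal sketch, and the function names are ours, not from the paper:

```python
import numpy as np

def largest_indices(x, s):
    """L_s(x): index set of the s largest-magnitude entries of x."""
    return np.argsort(np.abs(x))[-s:]

def hard_threshold(x, s):
    """H_s(x): keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = largest_indices(x, s)
    out[idx] = x[idx]
    return out

# Example: H_2 keeps the two entries of largest magnitude, 3 and -4.
print(hard_threshold(np.array([3.0, -1.0, 0.5, -4.0]), 2))  # [ 3.  0.  0. -4.]
```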

Algorithm 1: HTP.
Input: a measurement matrix $A$, a measurement vector $y$, a sparsity level $s$, and a step size $\mu > 0$. Perform the following steps:
 S1. Start with an $s$-sparse vector $x^0 \in \mathbb{R}^n$, typically $x^0 = 0$.
 S2. Repeat
   $S^{k+1} = L_s\big(x^k + \mu A^T (y - A x^k)\big)$,
   $x^{k+1} = \arg\min\{\|y - A z\|_2 : \mathrm{supp}(z) \subseteq S^{k+1}\}$,
  until a stopping criterion is met.
Output: the $s$-sparse vector $x^{k+1}$.
Algorithm 2: MHTP.
Input: a measurement matrix $A$, a measurement vector $y$, a sparsity level $s$, and two parameters $\mu > 0$ and $\alpha \in [0, 1]$. Perform the following steps:
 S1. Start with $s$-sparse vectors $x^0 = x^1$, typically $x^0 = x^1 = 0$.
 S2. Repeat
   $u^k = \alpha x^k + (1 - \alpha) x^{k-1}$,
   $S^{k+1} = L_s\big(u^k + \mu A^T (y - A u^k)\big)$,
   $x^{k+1} = \arg\min\{\|y - A z\|_2 : \mathrm{supp}(z) \subseteq S^{k+1}\}$,
  until a stopping criterion is met.
Output: the $s$-sparse vector $x^{k+1}$.

2. Algorithm: MHTP

As mentioned earlier, the classical HTP uses the negative gradient at the current point $x^k$ as the search direction and then resorts to a hard thresholding operator to ensure feasibility. Hence, the new iterate significantly depends on the information at $x^k$.

To improve the performance of HTP, we attempt to use the information of the current point $x^k$ and the previous point $x^{k-1}$ to construct a new search direction. First, we introduce an intermediate point $u^k = \alpha x^k + (1 - \alpha) x^{k-1}$, which is a convex combination of $x^k$ and $x^{k-1}$, where $\alpha \in [0, 1]$. Replacing $x^k$ in the HTP by the intermediate point $u^k$ leads to the MHTP; clearly, the MHTP reduces to HTP when $\alpha = 1$.
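For concreteness, the following is a minimal NumPy sketch of the MHTP iteration described above; the stopping rule, default parameters, and function name are our own illustrative choices rather than the paper's specification:

```python
import numpy as np

def mhtp(A, y, s, mu=1.0, alpha=0.5, max_iter=50, tol=1e-6):
    """Modified hard thresholding pursuit (sketch).

    A: (m, n) measurement matrix; y: (m,) measurements; s: sparsity level;
    mu: step size; alpha: convex-combination coefficient (alpha = 1 is HTP).
    """
    n = A.shape[1]
    x_prev = np.zeros(n)  # x^{k-1}
    x_curr = np.zeros(n)  # x^k
    for _ in range(max_iter):
        # Intermediate point: convex combination of current and previous iterates.
        u = alpha * x_curr + (1.0 - alpha) * x_prev
        # Gradient step at u, then keep the support of the s largest entries.
        g = u + mu * (A.T @ (y - A @ u))
        support = np.argsort(np.abs(g))[-s:]
        # Debiasing: least-squares fit of y restricted to the selected support.
        x_next = np.zeros(n)
        x_next[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        x_prev, x_curr = x_curr, x_next
        if np.linalg.norm(x_curr - x_prev) <= tol * max(np.linalg.norm(x_prev), 1.0):
            break
    return x_curr
```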

3. Theoretical Analysis

A variety of tools is available for analyzing compressed sensing, such as coherence, the RIP, null-space properties, and range-space properties. Their differences and relationships are described in [24]. In this section, the RIP is used to establish the convergence analysis, finite termination, and stability of the MHTP. Hence, we first define the RIC and RIP.

Definition 1 ([9, 24]). Let $A$ be an $m \times n$ matrix with $m < n$ and let $s$ be a positive integer. The restricted isometry constant (RIC) of order $s$, denoted by $\delta_s$, is the smallest number $\delta \ge 0$ such that
$$(1 - \delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\|x\|_2^2$$
for any $s$-sparse vector $x$. The matrix $A$ is said to satisfy the RIP of order $s$ if $\delta_s < 1$.
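Computing $\delta_s$ exactly requires examining every support of size $s$ and is intractable in general; a crude Monte Carlo lower bound on $\delta_s$ can nevertheless be sketched as follows (our illustration, not part of the paper):

```python
import numpy as np

def ric_lower_bound(A, s, trials=2000, rng=None):
    """Monte Carlo LOWER bound on the order-s RIC of A: sample random size-s
    supports and record the worst deviation of the squared singular values of
    the corresponding submatrix from 1. The true delta_s maximizes over ALL
    supports, so sampling can only underestimate it."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(n, size=s, replace=False)
        sv = np.linalg.svd(A[:, S], compute_uv=False)
        worst = max(worst, abs(sv.max() ** 2 - 1.0), abs(sv.min() ** 2 - 1.0))
    return worst

# Example: a normalized Gaussian matrix with m = 64, n = 256.
A = np.random.default_rng(0).normal(size=(64, 256)) / np.sqrt(64)
print(ric_lower_bound(A, s=4))
```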

Note that the RIC is nondecreasing in its order, that is, $\delta_s \le \delta_t$ whenever $s \le t$. The following are some basic inequalities related to the RIC, which are frequently used in the theoretical analysis.

Lemma 1 ([22, 24]). Let $u \in \mathbb{R}^n$ and $e \in \mathbb{R}^m$ be vectors, let $S \subseteq \{1, \ldots, n\}$ be an index set, and let $t$ be a positive integer. The following statements hold.
(i) If $|S \cup \mathrm{supp}(u)| \le t$, then $\|\big((I - A^T A)u\big)_S\|_2 \le \delta_t \|u\|_2$.
(ii) If $|S| \le t$, then $\|(A^T e)_S\|_2 \le \sqrt{1 + \delta_t}\, \|e\|_2$.

Because $u^k$ is produced using the current point $x^k$ and the previous point $x^{k-1}$, the following three-term recursive inequality plays a key role.

Lemma 2 ([32]). Suppose that a nonnegative sequence $\{a_k\}_{k \ge 0}$ with $a_1 = a_0$ satisfies
$$a_{k+1} \le b\, a_k + c\, a_{k-1} + d, \quad k \ge 1,$$
where $b \ge 0$, $c \ge 0$, $d \ge 0$, and $b + c < 1$. Then
$$a_{k+1} \le \rho^k a_1 + \frac{d}{1 - \rho},$$
with
$$\rho = \frac{b + \sqrt{b^2 + 4c}}{2} < 1.$$

The following result estimates the error at the intermediate point $u^k$ in terms of the errors at $x^k$ and $x^{k-1}$, where $u^k = \alpha x^k + (1 - \alpha) x^{k-1}$ and $\alpha \in [0, 1]$.

Lemma 3. Suppose that the sequence $\{x^k\}$ is generated by the MHTP with an inaccurate measurement $y' = Ax + e'$. Let $\alpha \in [0, 1]$ and $\mu > 0$. Then the error estimate in Equation (7) holds, with nonnegative coefficients determined by $\alpha$, $\mu$, and the RIC of $A$.

Proof. Note that $x - u^k = \alpha (x - x^k) + (1 - \alpha)(x - x^{k-1})$. Since $\alpha \ge 0$ and $1 - \alpha \ge 0$, it follows from Lemma 1 that each of the two terms can be bounded separately. This together with Equation (8) leads to the desired estimate, where $\alpha \ge 0$ and $1 - \alpha \ge 0$ are used to ensure that the coefficients above are nonnegative. It yields Equation (7), and the proof is complete.

The following property of the orthogonal projection is essential in the analysis of the MHTP; see, for example, [22, Equation (3.21)] and Zhao [25, p. 49].

Lemma 4. Given a vector $x$ and its corresponding index set $S = \mathrm{supp}(x)$ with $|S| \le s$, let $y'$ be the inaccurate measurement of $x$, i.e., $y' = Ax + e'$, and let $T$ be an index set such that $|T| \le s$. If $z$ satisfies
$$z = \arg\min\{\|y' - A w\|_2 : \mathrm{supp}(w) \subseteq T\},$$
then
$$\|x - z\|_2 \le \frac{1}{\sqrt{1 - \delta_{2s}^2}}\, \|(x - z)_{\overline{T}}\|_2 + \frac{\sqrt{1 + \delta_{2s}}}{1 - \delta_{2s}}\, \|e'\|_2,$$
where $\overline{T}$ denotes the complement of $T$.

HTP is a mainstream thresholding algorithm for sparse signal recovery. The best RIP-based bound for HTP is $\delta_{3s} < \frac{\sqrt{3}}{3}$ [22]. The MHTP algorithm proposed in this paper includes HTP as a special case; hence, the RIP-based bound cannot exceed $\frac{\sqrt{3}}{3}$. The following theorem provides an affirmative answer as to whether the convergence of the MHTP can be established without decreasing this upper bound.

Theorem 1. Suppose that the RIC, $\delta_{3s}$, of the measurement matrix $A$ and the step size $\mu$ satisfy the condition in Equation (14), where $\alpha \in [0, 1]$. Then, the sequence $\{x^k\}$ generated by the MHTP with $y' = Ax + e'$ satisfies the error bound in Equation (15), where the contraction factor $\rho < 1$, the coefficients $b$ and $c$, and the constant $C$ are given in Equations (16) and (17).

Proof. For $T = S^{k+1}$ and $z = x^{k+1}$ in the MHTP, we have $\mathrm{supp}(x^{k+1}) \subseteq S^{k+1}$. Replacing $T$ and $z$ in Lemma 4 accordingly obtains Equation (18). Note that $y' - A u^k = A(x - u^k) + e'$, where $u^k$ is given in Lemma 3. The remaining term can be estimated as in Equation (19). Since $x$ is $s$-sparse and $S^{k+1}$ is the index set of the $s$-largest absolute elements of $u^k + \mu A^T(y' - A u^k)$, Equation (20) holds, where the last equality is due to the definition of $L_s$. It follows from Equations (19) and (20) that Equation (21) holds. Using Lemma 3, Equation (21) becomes Equation (22), where the coefficients are nonnegative. Inserting Equation (22) into Equation (18) yields Equation (23), where $b$ and $c$ are given in Equation (17).
Next, let us check whether Lemma 2 is applicable to Equation (23) under the condition in Equation (14). From Equation (17), we see that $b \ge 0$ and $c \ge 0$. Hence, it only remains to prove that $b + c < 1$. Based on the condition in Equation (14), we obtain Equation (24), which implies Equation (25). Treating the two parameter cases separately yields Equations (26) and (27), where the first equality in Equation (26) is given by Equation (17). It follows that $b + c < 1$ under Equation (14). Applying Lemma 2 to Equation (23), we obtain Equation (15) with the constant $C$ given in Equation (28).

Remark 1. From the above argument, the requirement on the RIC $\delta_{3s}$ is a key factor in algorithm convergence. It is well known that if the measurement matrix is taken as a standard Gaussian random matrix, then the restricted isometry property is attained with high probability. More precisely, for a standard Gaussian matrix $A \in \mathbb{R}^{m \times n}$, if the number of measurements $m$ is on the order of $\delta^{-2}\, s \ln(n/s)$, then the restricted isometry constant of $\frac{1}{\sqrt{m}} A$ satisfies $\delta_s \le \delta$ with probability at least $1 - \varepsilon$, where $\varepsilon$ enters the required number of measurements only logarithmically. Please see [24, Chapter 8] for more information.
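In simulations, such a matrix is typically realized by drawing i.i.d. Gaussian entries with variance $1/m$; a one-function sketch (our illustration):

```python
import numpy as np

def gaussian_measurement_matrix(m, n, rng=None):
    """Gaussian matrix with N(0, 1/m) entries, i.e., A = G / sqrt(m) for a
    standard Gaussian G, so that E[||A x||_2^2] = ||x||_2^2 for every x."""
    rng = np.random.default_rng(rng)
    return rng.normal(size=(m, n)) / np.sqrt(m)
```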

The following result shows that, in the case of the accurate measurement $y = Ax$, the sequence generated by the MHTP converges to the $s$-sparse signal $x$.

Corollary 1. Suppose that the RIC, $\delta_{3s}$, of the measurement matrix $A$ and the step size $\mu$ satisfy Equation (14). Then the sequence $\{x^k\}$ generated by the MHTP with the accurate measurement $y = Ax$ and $e' = 0$ satisfies the noiseless version of the bound in Equation (15), where $\rho$ and $C$ are given in Theorem 1. Moreover, $x^k$ converges to $x$ as $k \to \infty$.

The following results establish the finite termination properties of the MHTP. It suffices to consider the case where $x$ is nonzero; otherwise, $x$ is recovered at the first iteration because the initial points are taken as $x^0 = x^1 = 0$ in the MHTP.

Theorem 2. Suppose that the RIC, $\delta_{3s}$, of the measurement matrix $A$ and the step size $\mu$ satisfy Equation (14). Then the MHTP can recover the nonzero $s$-sparse vector $x$ from the accurate measurement $y = Ax$ in at most the number of iterations given in Equation (32), where $\rho < 1$ is given in Theorem 1 and the remaining constant is defined in Equation (33).

Proof. For $k \ge 1$, we claim that the condition in Equation (34) is sufficient to ensure $S^{k+1} = \mathrm{supp}(x)$. Indeed, since $\mathrm{supp}(x^{k+1}) \subseteq S^{k+1}$ and $x^{k+1}$ is the least-squares solution in the subspace $\{z : \mathrm{supp}(z) \subseteq S^{k+1}\}$, that is, Equation (35) holds, then $x^{k+1} = x$ in this case. To establish the finite termination property, we just need to show the validity of Equation (34) for $k$ given by Equation (32). Since $\rho < 1$, the error bound tends to zero as $k \to \infty$. From Lemma 1(i), one has Equation (36). This together with Equation (8) yields Equation (37). Since $y = Ax$ and $e' = 0$, it follows from Equations (26) and (27) that Equation (38) holds. Combining Equations (37) and (38) and Corollary 1, we obtain Equation (39), where the last inequality is ensured by $\rho < 1$ and the constant given by Equation (33). It follows that Equation (40) holds. Since $x$ is sparse, by using the triangle inequality, we get Equation (41) for any index in $\mathrm{supp}(x)$. Because the initial points are $x^0 = x^1 = 0$, the estimate applies from the first iteration. For $k$ as in Equation (32), it follows from Equations (40) and (41) that Equation (42) holds, where the second inequality is ensured by Equation (32) and $\rho < 1$. This shows the validity of Equation (34), and the proof is complete.

Given two integers $s \ge 1$ and $p \ge 1$, define
$$\sigma_s(x)_p = \inf\{\|x - z\|_p : z \in \mathbb{R}^n \text{ is } s\text{-sparse}\},$$
which describes the error of the best $s$-term approximation of a vector $x$ with respect to the $\ell_p$-norm [24, Definition 2.2]. Recall in the following lemma a useful inequality associated with $\sigma_s(x)_2$ and $\|x\|_1$, which is used to analyze the stability of the MHTP.

Lemma 5 ([24, Theorem 2.5]). For a given $x \in \mathbb{R}^n$ and a positive integer $s$, we have
$$\sigma_s(x)_2 \le \frac{1}{2\sqrt{s}}\, \|x\|_1.$$
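A quick numerical sanity check of this inequality, using the fact that the best $s$-term approximation in any $\ell_p$-norm is obtained by hard thresholding (our illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
s = 20
# The best s-term approximation error is attained by hard thresholding,
# so sigma_s(x)_2 is the l2-norm of the n - s smallest-magnitude entries.
sigma_s_2 = np.linalg.norm(np.sort(np.abs(x))[:-s])
print(sigma_s_2 <= np.linalg.norm(x, 1) / (2 * np.sqrt(s)))  # True
```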

The stability of MHTP is given in the following result.

Theorem 3. Suppose that the RIC, $\delta_{3s}$, of the measurement matrix $A$ and the step size $\mu$ satisfy Equation (14). Then, for a given $x \in \mathbb{R}^n$ and $y' = Ax + e'$, the sequence $\{x^k\}$ generated by the MHTP satisfies the stability estimate in Equation (45), where $\rho$, $b$, and $C$ are given in Theorem 1 and the remaining constant is defined alongside Equation (45).
Moreover, every cluster point of the sequence $\{x^k\}$, say $x^*$, satisfies the bound in Equation (46).

Proof. Using the triangle inequality, we obtain Equation (47). Denote the index sets as in Equation (48), where the positive integers involved satisfy the stated cardinality constraints. Then Equation (49) holds, where the equalities can be obtained from Equation (43) and the inequality follows from Lemma 5. By Theorem 1 and the triangle inequality, one obtains Equation (50). It follows from [24, Lemma 6.10] that Equation (51) holds. Combining Equation (3) with Equation (51) yields Equation (52), where the last inequality is due to the fact stated in Equation (53). Combining Equations (47)–(49) with Equation (52) yields Equation (45).
From Equation (45) and $\rho < 1$, we conclude that the sequence $\{x^k\}$ is bounded, and hence there exists at least one cluster point. Assume without loss of generality that $x^*$ is a cluster point of $\{x^k\}$. Formula (46) follows by taking the limit along the corresponding subsequence in Equation (45).

4. Numerical Experiments

All experiments are conducted on a personal computer with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 4 GB of memory. To demonstrate the effectiveness of the MHTP for sparse signal recovery, we compare it with several mainstream algorithms in synthetic and real-world experiments. For convenience, the parameter descriptions and abbreviations of the compared algorithms are given in Table 1. A total of six algorithms are involved.

Let $x^0 = x^1 = 0$ be the initial points for the MHTP and $x^0 = 0$ be the initial point for the other algorithms. Because the dimensions in this study are larger than those in [30], the algorithmic parameters in the NSHTP are tuned through numerical tests to acquire better recovery capability; the tuned values involve the largest and smallest singular values of the realized sensing matrix. The maximum number of iterations for the gOMP [36] is determined by its selection parameter, that is, the number of indices with the largest correlations in magnitude chosen at each step. For the other algorithms, the maximum number of iterations is set to 50.

4.1. Synthetic Data

In this subsection, we discuss the performance of these algorithms using synthetic data. All measurement matrices $A \in \mathbb{R}^{m \times n}$ and sparse vectors $x \in \mathbb{R}^n$ are randomly generated, with elements independently and identically distributed. Specifically, $A$ is taken as a standard Gaussian random matrix, and the nonzero entries of $x$ and their positions follow standard normal and uniform distributions, respectively. The experiments are performed with the accurate measurement $y = Ax$ and the inaccurate measurement $y' = Ax + \epsilon \eta$, where $\epsilon$ is the noise level and $\eta$ is a standard Gaussian noise vector. Let the relative error $\|x^k - x\|_2 / \|x\|_2$ falling below a prescribed tolerance be the criterion for a successful recovery.
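A sketch of this synthetic setup follows; the tolerance in `is_success` is an assumed placeholder, since the paper's exact threshold is not reproduced here:

```python
import numpy as np

def make_instance(m, n, s, noise_level=0.0, rng=None):
    """Gaussian A, s-sparse x (standard-normal nonzeros, uniform support),
    and the (possibly noisy) measurement y."""
    rng = np.random.default_rng(rng)
    A = rng.normal(size=(m, n))                     # standard Gaussian matrix
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)  # uniformly random support
    x[support] = rng.normal(size=s)                 # standard-normal nonzeros
    y = A @ x + noise_level * rng.normal(size=m)    # y' = Ax + eps * eta
    return A, x, y

def is_success(x_hat, x, tol=1e-3):
    """Relative-error success criterion; the tolerance is an assumed value."""
    return np.linalg.norm(x_hat - x) <= tol * np.linalg.norm(x)
```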

4.1.1. Successful Frequencies and Average Recovery Time

In this section, we compare the recovery capability of these algorithms with fixed $m$ and $n$, where the sparsity level $s$ ranges from 1 to 296 with a step size of 5. For each given $s$, 100 random instances are used to test the successful frequencies. We first consider the influence of the parameter $\alpha$ on the recovery capability of the MHTP with a fixed step size $\mu$. The corresponding results are shown in Figures 1 and 2, which correspond to the accurate and inaccurate measurements, respectively, the latter with a fixed noise level. The novel idea of the MHTP is the combination of the current point $x^k$ and the previous point $x^{k-1}$, where the parameter $\alpha$ is the coefficient of this convex combination. In general, different values of $\alpha$ represent different weights of the information used from $x^k$ and $x^{k-1}$. It is preferable that $\alpha$ is not selected too close to 0 or 1, which helps make more use of the information provided by the two points $x^k$ and $x^{k-1}$. The numerical experiments illustrate this phenomenon: the recovery capability of the MHTP weakens as $\alpha$ approaches either endpoint, whether in the absence or presence of noise. However, the recovery capability of the MHTP is strong and does not change significantly when $\alpha$ lies in an intermediate subinterval of $(0, 1)$. Based on this observation, we fix $\alpha$ and $\mu$ for the MHTP in the remainder of this subsection; a sketch of the frequency experiment follows.
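This is a minimal sketch of the success-frequency loop, reusing `make_instance`, `is_success`, and `mhtp` from the earlier sketches; the dimensions, trial count, and $\alpha$ grid below are illustrative only:

```python
def success_frequency(m, n, s, alpha, mu, trials=100, noise_level=0.0):
    """Fraction of random instances recovered by MHTP at sparsity level s."""
    hits = 0
    for t in range(trials):
        A, x, y = make_instance(m, n, s, noise_level=noise_level, rng=t)
        x_hat = mhtp(A, y, s, mu=mu, alpha=alpha)
        hits += int(is_success(x_hat, x))
    return hits / trials

# Sweep the convex-combination coefficient at one sparsity level.
for alpha in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(alpha, success_frequency(m=400, n=1000, s=100, alpha=alpha, mu=1.0))
```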

To explain this behavior theoretically, regard the contraction factor $\rho$ of Theorem 1 as a function of $\alpha$ under the condition in Equation (14). Differentiating this function shows that $\rho$ strictly decreases on the relevant interval of $\alpha$. In theory, therefore, under the convergence condition in Equation (14), $\rho$ decreases as $\alpha$ increases, which corresponds to a better recovery effect. This is consistent with the phenomenon shown in Figures 1 and 2 for moderate values of $\alpha$. However, checking the condition in Equation (14) is intractable because of the difficulty of computing the RIC $\delta_{3s}$. By contrast, the theoretical analysis above is not consistent with the numerical results shown in Figures 1 and 2 when $\alpha$ is close to 1, which may be because the range of $\mu$ in Equation (14) is not satisfied or because of other numerical limitations, such as the maximum number of iterations. This is not clear at present and will be discussed further in the future. Note that there often exists a gap between theoretical and numerical results for sparse optimization algorithms, including the MHTP, owing to the theoretical analysis technique or numerical limitations.

In Figure 3, the performances of the algorithms, including MHTP, NSHTP, HTP, SP, gOMP, and CoSaMP, are compared under noiseless and noisy settings. The step size for the HTP and the selection parameter for the gOMP are chosen via numerical tests based on the recovery capabilities of the algorithms. The results reveal that the MHTP performs better than the other algorithms, even in the noisy settings, and all algorithms except CoSaMP are stable under small perturbations.

Corresponding to Figure 3(a), comparisons of the average number of iterations and the average recovery time (CPU) of these algorithms in noiseless environments are shown in Figure 4. From Figure 4(a), it is clearly observed that the average number of iterations of gOMP depends on the sparsity level, whereas the average number for the other algorithms is not more than 50. An algorithm is regarded as failing to recover the sparse vector when its average number of iterations reaches the maximum. Figure 4(a) indicates that the average numbers of all algorithms except the MHTP reach their maximums beyond a certain sparsity level; that is, only the MHTP can recover the sparse signal in that range, which is consistent with Figure 3(a).

A comparison of the average recovery time for these algorithms is shown in Figure 4(b). The average recovery time is closely related to the computational complexity of the algorithm. Given a measurement matrix $A \in \mathbb{R}^{m \times n}$ and an $s$-sparse vector $x$, the complexity of these algorithms after $k$ iterations is shown in Table 2. It is not difficult to see that the complexities of HTP, MHTP, CoSaMP, and SP are of the same order, whereas that of NSHTP is the largest and that of gOMP is the smallest (because the number of iterations for gOMP depends on its selection parameter and is smaller). In Figure 4(b), NSHTP requires more time than the other algorithms, which is consistent with the results in Table 2. Figure 4(b) also shows that the average time consumed by gOMP and SP is much longer than that consumed by MHTP and HTP, and HTP always requires less time than MHTP; recall from Figure 4(a) that the average number of iterations of the MHTP is greater than that of the HTP for successful recovery at large sparsity levels. Moreover, the average time of the MHTP is almost the same as that of the CoSaMP for small sparsity levels, but the latter increases quickly as the sparsity level grows. In summary, the MHTP is competitive with the other algorithms in terms of recovery capability and average recovery time.

4.1.2. Phase Transition Curves

In the following experiments, we display the performance of these algorithms in terms of phase transition curves (PTC) [32, 37], using logistic regression curves with a 50% success rate. Different from the logistic regression model used in [32, 37], we apply the 'glmfit' function in Matlab to obtain the PTC directly. Let $\delta = m/n$ be the undersampling rate and $\rho = s/m$ be the oversampling rate. The $(\delta, \rho)$-plane is often called the phase transition plane, where the region below the PTC corresponds to the successful recovery region and the region above corresponds to the failure region. Detailed information regarding the PTC and the recovery regions can be found in [37]. To generate the PTC, we collect 13 groups of $\delta$ by equally dividing the corresponding interval and determine the values of $\rho$ on the PTC using the 'glmfit' function.
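The 50% crossover of a fitted logistic curve can be extracted as follows; this sketch uses scikit-learn in place of Matlab's 'glmfit', and the success data are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ptc_point(rhos, successes):
    """Fit P(success | rho) with logistic regression and return the rho at
    which the fitted curve crosses 50%, i.e., the phase-transition point."""
    X = np.asarray(rhos, dtype=float).reshape(-1, 1)
    y = np.asarray(successes, dtype=int)        # 1 = recovered, 0 = failed
    clf = LogisticRegression(C=1e6).fit(X, y)   # large C ~ unpenalized fit
    # The fitted probability equals 0.5 where intercept + coef * rho = 0.
    return -clf.intercept_[0] / clf.coef_[0, 0]

# Illustrative outcomes: recovery becomes rarer as rho = s/m grows.
rhos      = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
successes = [1,   1,   1,   1,   0,   1,   0,   0]
print(ptc_point(rhos, successes))
```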

Let us further discuss the influence of the step size $\mu$ on the MHTP from a PTC perspective. We take 10 different values of $\mu$ to generate the PTCs in Figure 5, where Figures 5(a) and 5(b) correspond to two fixed values of $\alpha$. In Figure 5(a), we find that the PTC is lower than the others when $\mu$ is small, which indicates that the recovery capability of the MHTP in these cases is weaker. For larger $\mu$, the corresponding PTC does not change significantly. This phenomenon shows that the MHTP admits a relatively larger step size with a suitable $\alpha$ compared with the HTP, which is a special case of the MHTP with $\alpha = 1$. However, as shown in Figure 5(b), the recovery performance of the MHTP is insensitive to changes in $\mu$ in that setting. Fixing $\alpha$, we select different $\mu$ to generate the PTCs shown in Figure 6, which indicate that the recovery capability of the MHTP with a larger step size is stronger. That is, a stronger recovery capability of the MHTP can be obtained by increasing the step size appropriately. However, acquiring the optimal step size is intractable in practice. Subsequently, we discuss the relationship between the step size $\mu$ and the signal length $n$ based on the recovery capability.

It is well known that the step size plays a vital role in the gradient descent method; hence, the choice of the step size $\mu$ in the MHTP is crucial for the recovery of sparse signals. Based on recovery capability, we select the appropriate parameter $\mu$ for the MHTP through numerical experiments as $n$ ranges from 200 to 1,400 with a step size of 200, and record the recommended value of $\mu$ for each $n$.

Using Lagrange interpolation, we establish an approximate relation between $\mu$ and $n$ from the above data, given in Equation (58); the corresponding curve is shown in Figure 7(a). Similarly, the recommended step size for HTP is given in Equation (59), which results in the relation in Equation (60); see Figure 7(b).
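The interpolation step can be reproduced with SciPy; the $(n, \mu)$ pairs below are hypothetical placeholders, since the paper's recommended values are not reproduced here:

```python
import numpy as np
from scipy.interpolate import lagrange

# Hypothetical (n, mu) pairs standing in for the recommended step sizes.
n_grid  = np.array([200, 400, 600, 800, 1000, 1200, 1400])
mu_grid = np.array([3.2, 2.6, 2.2, 2.0, 1.9, 1.8, 1.75])  # placeholders

poly = lagrange(n_grid, mu_grid)  # degree-6 interpolating polynomial mu(n)
print(poly(500))                  # interpolated step size at n = 500
```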

In the following experiments, the step sizes for the MHTP and HTP are obtained from Equations (58) and (60), respectively. The corresponding curves are presented in Figure 7, which show that the recommended step size is a nonlinear function of $n$ and decreases as $n$ increases. The trends for the MHTP and HTP are similar. Numerically, the recommended step size of the MHTP is almost twice that of the HTP for a fixed $n$. A comparison of the PTCs is shown in Figure 8, where Figure 8(a) corresponds to accurate measurements and Figure 8(b) corresponds to inaccurate measurements with a small noise level. Figure 8 shows that the PTC of the MHTP is the highest in both the noiseless and noisy settings, indicating that the recovery capability of the MHTP is stronger than that of the other algorithms. Comparing Figures 8(a) and 8(b), we find that all algorithms, except CoSaMP, are stable under a small disturbance.

In the common recovery region of the algorithms, we compare their average recovery time in noiseless environments; see Figures 9–11, where the undersampling rate $\delta$ ranges from 0.02 to 0.8 with a step size of 0.02. For a given $(\delta, \rho)$, 10 random instances are tested, and the average runtime for recovery is recorded when the success rate is greater than or equal to 50%. The algorithm with the shortest average recovery time is selected, as shown in Figure 9, which is often called an algorithm selection map [32, 37]. It should be noted that only three algorithms, namely MHTP, HTP, and CoSaMP, are shown in Figure 9 because the other algorithms are always slower than one of them. Figure 9 shows that the MHTP is the fastest algorithm when $\rho$ is large, whereas HTP and CoSaMP are the fastest algorithms when $\rho$ is medium and small, respectively.

The average runtime for successful recovery of the fastest algorithm is shown in Figure 10, which reveals that the average time consumed by the fastest algorithm is less than one second over most of the plane, whereas it increases significantly in the regions where $\delta$ and $\rho$ are large. The ratios of the average runtime of the other algorithms to that of the fastest algorithm are summarized in Figure 11. The ratio for the MHTP is less than 1.5 in most areas and is close to 1 for large $\rho$, which implies that the advantage of the MHTP lies in the region with larger $\rho$. Figure 11(c) shows that the average runtime of HTP is close to that of the fastest algorithm almost everywhere. Figure 11(e) shows that the ratio for CoSaMP is close to 1 in the regions with a small sparsity level, which corresponds to small $\rho$, or to large $\delta$ combined with small $\rho$. However, the ratio lies in the interval [2, 6] as $\delta$ and $\rho$ increase. From Figure 11(b), we see that NSHTP takes more time than the other algorithms because its ratio is greater than 20 in many areas. Figures 11(d) and 11(f) show that the ratios corresponding to SP and gOMP are nearly 2–4 and 4–7, respectively. These phenomena indicate that MHTP and HTP are faster than NSHTP, SP, CoSaMP, and gOMP in most cases.

4.2. Reconstruction of Real Images

In the following real-world experiments, we use standard test images, including the Lena, Goldhill, Peppers, and Barbara images, to evaluate the performances of these algorithms; all test images are of the same size. For a given image $X$, using the discrete wavelet transform, we obtain the representation $X = \Psi S \Psi^{T}$, where $S$ is a sparse or compressible coefficient matrix and $\Psi$ is an orthonormal wavelet matrix. Taking a normalized standard Gaussian matrix $A$ as the measurement matrix of $X$, where the number of rows of $A$ is determined by the sampling rate $r$, we obtain the measurement $Y = AX$. It follows that $Y$ depends linearly on the sparse coefficient matrix $S$. We now aim to recover $S$ from the linear measurements $Y$ and then reconstruct the original image $X$.
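A simplified sketch of this pipeline follows, recovering an image column by column with the `mhtp` sketch from Section 2; for self-containedness it uses an orthonormal DCT as the sparsifying transform in place of the paper's 'sym8'/'coif5' wavelets, and it measures the coefficient vector directly:

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal 1-D transforms with norm="ortho"

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def reconstruct_image(X, sampling_rate=0.5, s=50, rng=0):
    """Compress and recover a grayscale image column by column with MHTP."""
    N = X.shape[0]
    m = int(sampling_rate * N)
    A = np.random.default_rng(rng).normal(size=(m, N)) / np.sqrt(m)
    X_hat = np.zeros(X.shape)
    for j in range(X.shape[1]):
        coeffs = dct(X[:, j].astype(float), norm="ortho")  # sparse-ish column
        y = A @ coeffs                                     # compressed column
        coeffs_hat = mhtp(A, y, s, mu=1.0, alpha=0.5)      # sketch from Sec. 2
        X_hat[:, j] = idct(coeffs_hat, norm="ortho")
    return X_hat

# Usage: X = 2-D grayscale array with values in [0, 255]; then
# X_hat = reconstruct_image(X); print(psnr(X.astype(float), X_hat))
```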

Comparisons of the peak signal-to-noise ratio (PSNR) and runtime of these algorithms under several sampling rates $r$ are shown in Tables 3 and 4, wherein the "sym8" and "coif5" wavelets are used to acquire the sparse representation, respectively. In the experiments, the sparsity level and the gOMP selection parameter are fixed, and the step sizes for the HTP and MHTP are set according to the sampling rate.

For a given image and a fixed sampling rate, the largest PSNR value is shown in bold and the minimum runtime is recorded; we measure how far the PSNR and runtime of the MHTP deviate from these best values. Thus, the reconstruction capability of the MHTP increases as its PSNR gap decreases. Similarly, the speed of the MHTP increases as its runtime ratio decreases; in particular, the MHTP is the fastest algorithm when this ratio equals one. From the PSNR values (in bold), we observe that the PSNR of the MHTP is the largest in most cases, whether the coiflet or symlet wavelet transform is used; that is, the MHTP outperforms the other algorithms in terms of reconstruction quality (compare Tables 3 and 4). In particular, at the smallest sampling rate, the MHTP consumes more time than the gOMP; however, the MHTP is faster than the gOMP in the other situations. Moreover, the MHTP always requires less time than the NSHTP, SP, and CoSaMP for any given sampling rate, whereas the time consumed by the MHTP is close to that of the HTP. By comparing Tables 3 and 4, it is found that the source of the best PSNR differs for "Goldhill". Specifically, in Table 3, the best PSNR comes from NSHTP at only one sampling rate; in Table 4, for every sampling rate, the best PSNR comes from NSHTP. Hence, the sparsity of the same image in the discrete wavelet transform domain differs with respect to different wavelets (such as "sym8" and "coif5"), which in turn affects the reconstruction capability of the algorithm.

Corresponding to the first row in Table 3, the performance of these algorithms in terms of visual quality on the image "Lena" is shown in Figures 12 and 13. From Figure 12, the reconstruction quality of the MHTP improves remarkably as the sampling rate increases over the tested interval. A comparison of the images reconstructed using these algorithms at a fixed sampling rate is shown in Figure 13, which indicates that the reconstruction quality of all algorithms is similar, except for CoSaMP; one possible reason for the CoSaMP failure is the high sparsity level. These experiments show that the MHTP is competitive with the other mainstream algorithms in terms of reconstruction quality and runtime.

5. Conclusion

A new algorithm for solving the sparse optimization problem, called the MHTP, is proposed using the information of the current point $x^k$ and the previous point $x^{k-1}$ to produce the new iterate $x^{k+1}$. Numerical experiments show that the recovery capability of the MHTP is stronger than that of several mainstream algorithms, such as HTP, NSHTP, CoSaMP, SP, and gOMP. The runtime of the MHTP is close to that of HTP and is often less than that of the other four algorithms. Two interesting but challenging research topics merit further investigation. One is to study other acceleration techniques, such as the Nesterov accelerated gradient method, to improve the efficiency of the algorithms. In addition, the MHTP reduces to the standard HTP when the coefficient of the convex combination takes $\alpha = 1$; hence, another topic is whether the efficiency of the MHTP can be further improved by dynamically adjusting the coefficient of the convex combination during the iterative process.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Li-Ping Geng: Methodology, Writing–original draft. Jin-Chuan Zhou: Investigation, Supervision. Zhong-Feng Sun: Investigation, Visualization. Jing-Yong Tang: Methodology, Software.

Acknowledgments

The second author’s work is supported by National Natural Science Foundation of China (12371305 and 11771255), Shandong Province Natural Science Foundation (ZR2023MA020 and ZR2021MA066), and Young Innovation Teams of Shandong Province (2019KJI013). The fourth author’s work is supported by Natural Science Foundation of Henan Province (222300420520) and Key Scientific Research Projects of Higher Education of Henan Province (22A110020).