Abstract
Sparse phase retrieval requires solving a problem that is both nonconvex and nonsmooth. Aiming at the nonsmoothness of sparse phase retrieval, we propose a smoothing algorithm called sparse smoothed amplitude flow (SPSAF). The proposed SPSAF algorithm is an amplitude-based nonconvex sparse smoothing phase retrieval algorithm. First, the original phase retrieval loss function is smoothed, so that no modification of the gradient is needed in the gradient refinement stage, which reduces the computational complexity of the overall algorithm. Second, the support of the original signal is estimated by analyzing the gaps between per-coordinate statistics, and an initialization is obtained through a carefully designed method based on this support. Finally, a sparse estimate is obtained by gradient descent with hard thresholding. Numerical experiments show that the proposed SPSAF algorithm achieves significant improvements in recovery performance, convergence speed, and sampling complexity, and that it remains stable in noisy environments.
1. Introduction
In real-world applications, especially in channel estimation and image processing, the underlying signal is often naturally sparse or admits a sparse representation after some known deterministic linear transformation [1–5]. When the measurements are undersampled, phase retrieval must introduce a sparsity assumption or an explicit sparsifying transformation for the signal [6–11]. For instance, the sparsely distributed stars observed in astronomy and the sparsely distributed atoms observed in crystallography can both be regarded as applications of sparse phase retrieval. This approach to recovering sparse signals is called sparse phase retrieval or compressive phase retrieval, and it is of fundamental significance in signal recovery and reconstruction [12, 13].
A sparse prior is essential for solving sparse phase retrieval problems, and many methods have been developed. Building on the PhaseLift algorithm, the CPRL algorithm [14] adds a sparsity-promoting norm regularization term, lifts the one-dimensional (1D) signal to a matrix, and finally obtains a sparse solution. To adapt the PhaseMax algorithm to the sparse setting, the SparsePhaseMax algorithm was proposed in [15, 16] by utilizing norm regularization. Based on alternating minimization, [17] proposed the Sparse AltMinPhase algorithm, which alternates between estimating the missing phase information and the candidate solution. [18] developed a probabilistic phase retrieval algorithm based on generalized approximate message passing. [19] proposed a greedy sparse phase retrieval algorithm (GESPAR) that combines the damped Gauss-Newton method with a two-element optimization method: the two-element optimization dynamically updates the support of the signal, and the damped Gauss-Newton method solves for the optimum over the current support. [20] proposed a soft-thresholded TWF algorithm for sparse phase retrieval. Qiu and Palomar developed the C-PRIME and SC-PRIME algorithms based on the majorization-minimization method [21], which approximately replace the objective function with a convex surrogate. [22] proposed the sparse truncated amplitude flow algorithm (SPARTA), a sparse phase retrieval algorithm based on the TAF algorithm; SPARTA improves recovery performance by introducing a truncation procedure. Zhang et al. proposed compressive reweighted amplitude flow (CRAF) based on the RAF algorithm in [23], making the gradient descent direction of sparse phase retrieval more effective and accurate by introducing a reweighting scheme.
When phase retrieval adopts a noise-free Gaussian random measurement model, the CPRL algorithm can accurately recover any sparse signal from the measurements, with the computational complexity given in [24]. The Sparse AltMinPhase and soft-thresholded TWF algorithms need at least as many measurements. The SPARTA algorithm accurately recovers the signal from random Gaussian measurements, and its total running time is proportional to the time required to read the data. The CRAF algorithm can likewise accurately recover the signal from random Gaussian measurements.
Amplitude-based phase retrieval methods outperform intensity-based ones in both numerical and experimental verification, and the same holds in the sparse setting [25, 26]. The soft-thresholded TWF algorithm in [20] addresses the smoothing problem in the intensity-based sparse phase retrieval model. It adopts an adaptive hard-threshold iteration from compressed sensing: in the gradient optimization stage, each iteration retains only the indices of the largest entries.
On the other hand, the Sparse AltMinPhase algorithm in [17] estimates the support of the original signal and then solves the sparse phase retrieval problem by alternating minimization under a resampling condition, performing a matrix inversion in each iteration. Numerical experiments show that resampling is necessary for the Sparse AltMinPhase algorithm; that is, many measurements are needed to estimate the support accurately. The importance of support recovery for signal recovery, and for phase retrieval as a whole, is thus self-evident.
Among the above methods, the most notable amplitude-based sparse phase retrieval algorithms are the SPARTA algorithm [22] and the CRAF algorithm [23]. The SPARTA algorithm applies the TAF algorithm in the sparse setting and is divided into two stages: initialization and gradient refinement. First, a reasonable rule is used to estimate the support, and a few simple power iterations produce a sparse initialization. The estimate is then continually updated through a series of truncated gradient iterations with a hard threshold at each iteration, which sets all entries except the largest ones to zero, where the number of retained entries equals the signal sparsity. The CRAF algorithm further considers structured sparsity patterns on top of the SPARTA and RAF algorithms and formulates the amplitude-based (block) sparse phase retrieval problem. In the initialization stage, CRAF develops a new sparse spectral initialization method that judiciously assigns a negative or positive weight to each sample; with this method, the expectation of the initialization matrix constructed by CRAF yields improved performance. In the gradient refinement stage, CRAF then uses reweighted gradients to gradually improve the initialization through hard-threshold iterations.
Building on the SAFPR algorithm and the CRAF algorithm, we propose a new sparse phase retrieval algorithm called sparse smoothed amplitude flow (SPSAF). The SPSAF algorithm is an amplitude-based nonconvex sparse smoothing phase retrieval algorithm with two stages: initialization and gradient refinement. In the initialization stage, we estimate the support by a reasonable rule; the initial estimate is then obtained by a carefully designed initialization method based on this support. In the gradient refinement stage, a hard-thresholded gradient descent method updates the initial estimate. Numerical experiments show that the SPSAF algorithm is robust to additive noise for signals of finite support. Compared with existing typical algorithms, the recovery performance and speed of the SPSAF algorithm are significantly improved.
The rest of this paper is organized as follows. Section 2 formulates the sparse smoothing phase retrieval problem. Section 3 introduces the two stages of the SPSAF algorithm, initialization and gradient refinement. Numerical experiments are provided in Section 4. Finally, Section 5 concludes the paper.
2. Sparse Phase Retrieval Problem
2.1. AmplitudeBased Sparse Phase Retrieval
The amplitude-based sparse phase retrieval problem is the problem of reconstructing a sparse signal from phaseless measurements [19, 20, 22]. Mathematically, the sparse phase retrieval problem can be described as a set of phaseless quadratic equations, namely (1), where the sparsity level is measured by the zero-norm operator, i.e., the number of nonzero elements. The signal is sparse, consisting of at most the given number of nonzero elements, with the remaining elements equal to zero. The goal of amplitude-based sparse phase retrieval is to reconstruct the sparse signal (up to a global sign or phase) from the given measurements.
For the theoretical analysis, suppose the sparsity is known a priori. In the real case, a minimum number of measurements is required for sparse signal recovery; [27] pointed out that sparse signals can be reconstructed by selecting measurements in general position in real space. Similarly, in the complex case, sparse phase retrieval has a corresponding lower bound on the number of measurements [28]. Due to the lack of phase information in a noisy environment, stable sparse or compressive phase retrieval needs as many measurements as the related compressed sensing problem. Therefore, as with compressed sensing, stable sparse phase retrieval requires a minimum number of measurements [29]. [30] proves a measurement bound for stably recovering sparse signals in the real case.
We adopt a real-valued Gaussian model to analyze sparse phase retrieval in this paper: a sparse signal measured with i.i.d. standard Gaussian sensing vectors. Nevertheless, the proposed algorithm also applies to the complex-valued Gaussian phase retrieval model, i.e., a complex signal with i.i.d. standard complex Gaussian sensing vectors. Given the data, and assuming that the system of equations has a unique sparse solution, our goal is to develop a simple and efficient algorithm that can recover any sparse signal from as few amplitude measurements as possible in (1).
Using the least-squares criterion, the problem of reconstructing a sparse solution from the phaseless quadratic equations (1) can be naturally transformed into minimizing the amplitude-based empirical loss function (2) over the support of the signal at the given sparsity level. The objective function in (2) is nonconvex and nonsmooth and is subject to a combinatorial zero-norm constraint; the sparse phase retrieval optimization problem is therefore NP-hard and computationally difficult to solve [31–33]. The main ingredients for solving it are as follows:
(1) In the initialization stage, the support of the signal needs to be estimated accurately, since accurate support information is what allows the original sparse signal to be recovered correctly. The choice of initialization method is also critical; existing methods include spectral initialization, orthogonality-promoting initialization, and reweighted maximal correlation initialization, all of which can be applied to nonconvex sparse phase retrieval models.
(2) In the gradient refinement stage, inspired by the iterative hard-thresholding algorithms of compressed sensing [34, 35], an adaptive hard-threshold step that retains only the largest-magnitude entries in each iteration has been proved effective.
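The adaptive hard-threshold step in item (2) can be sketched as follows; this is a minimal NumPy illustration (the function name `hard_threshold` and the toy vector are ours, not the paper's):

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z and zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]  # indices of the k largest magnitudes
    out[idx] = z[idx]
    return out

z = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(hard_threshold(z, 2))  # keeps only -3.0 and 2.0
```

Unlike soft thresholding, this projection keeps the surviving entries unchanged, which is why it pairs naturally with a fixed, known sparsity level.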
2.2. Smoothing Scheme of Sparse Phase Retrieval
Because of the modulus in the sparse phase retrieval objective function (2), the gradient changes abruptly and discontinuously wherever the linear measurement of the iterate crosses zero, which makes plain gradient descent ineffective. The sparse optimization problem in (2) is therefore nonconvex and nonsmooth. The usual way to handle such a nonsmooth function is to work around it, as the SPARTA and CRAF algorithms do. [22] points out that when a certain quantity falls below a threshold, the SPARTA algorithm removes the corresponding “bad” gradient components through a truncation procedure. The CRAF algorithm [23] follows the same principle, but instead of simply removing these “bad” gradient components, it reweights the gradient components according to the size of this quantity, which works better than SPARTA. However, both methods introduce additional computation for the gradient, changing the search direction and thus affecting the update of the whole gradient direction.
In recent years, there has been much research on nonconvex phase retrieval algorithms, and considerable progress has been made on optimizing the nonsmooth term in the empirical loss function. [36] proposed a hybrid optimization method that modifies the empirical loss function and solves the nonsmooth problem through the classical proximal method. Pinilla et al. [37] proposed smoothing the phase retrieval problem: they introduced a special smoothing function to replace the nonsmooth term in the original empirical loss function and solved the resulting problem using a projected conjugate gradient method. [38] adopted the same smoothing strategy, replacing the nonsmooth term with a smooth approximation function to obtain a new smoothed empirical loss function. [39] introduced two smoothing functions for the nonconvex phase retrieval loss function and proposed two smoothing algorithms. [40] further improved the smoothing of the loss function and proposed a faster stochastic smoothing phase retrieval algorithm.
By introducing a smoothing function, we approximate and replace the absolute value function in (2) to obtain an approximate sparse smoothing phase retrieval empirical loss function. As a result, no additional operation on the gradient is needed during the iterative updates, which reduces the computational complexity. There are many smooth substitutes for the absolute value function; we mainly use the following one to smooth the phase retrieval loss function (2).
We first define the concept of smoothing function to be used.
Definition 1. Define the absolute value function . If is smooth for any real constant in and for any fixed then is the smoothing function of .
By Definition 1, the smooth substitution function of the absolute value function is defined as where the relaxation factor is a real constant and the smoothing parameter .
As a smooth approximation of the absolute value function, the smoothing function must have the following properties:
(1) it is a Lipschitz continuous function;
(2) it is an even function;
(3) it converges uniformly to the absolute value function.
Proof. (1) By Definition 1, the smoothing function is smooth for any real constant, from which (5) follows. Since the relaxation factor is a real constant, (6) holds. According to (5) and (6), the difference quotient of the smoothing function is bounded. Therefore, it is a Lipschitz continuous function with a bounded first-order derivative, and its Lipschitz constant is 1.
(2) By the definition of the absolute value, it is obvious that the smoothing function is even.
(3) According to Definition 1 and (4), the approximation error can be bounded; by the Minkowski inequality, the bound is uniform. Therefore, the smoothing function converges uniformly to the absolute value function, and the error depends only on the value of the smoothing parameter.
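To make the smoothing idea concrete, here is a minimal sketch of one common smooth surrogate of the absolute value, sqrt(x^2 + mu^2). We do not claim this is the paper's function (4); it is only an example satisfying the three properties above (even, 1-Lipschitz, uniform convergence as the smoothing parameter shrinks):

```python
import numpy as np

def smooth_abs(x, mu=1e-3):
    """Illustrative smooth surrogate of |x| (an assumption, not the paper's (4)):
    sqrt(x^2 + mu^2) is even, 1-Lipschitz, and satisfies
    0 <= smooth_abs(x) - |x| <= mu for every x."""
    return np.sqrt(x**2 + mu**2)

x = np.linspace(-2.0, 2.0, 1001)
gap = float(np.max(smooth_abs(x, mu=1e-3) - np.abs(x)))
print(gap < 1.01e-3)  # uniform approximation error is about mu, attained at x = 0
```

The uniform bound is what lets the smoothed loss share the original loss's minimizers up to an error controlled solely by the smoothing parameter.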
Let , according to (2) and (4), the smoothing sparse phase retrieval loss function can be expressed as
To ensure that the introduction of does not affect the global minimum of the original nonconvex loss function (2), the measurement adopts the same smoothing strategy, i.e.,
Since the smoothing vector is positively correlated with the amplitude of the signal, we approximate it as follows, where the coefficient is a constant.
Combining (10) and (11), we can obtain the amplitudebased sparse smoothing phase retrieval leastsquare optimization problem:
The smoothed function is an approximation of the original loss function, and the two share the same global minimum. In the limit of the smoothing relaxation factor or the smoothing parameter, the smoothed function reduces to the original function.
3. The Proposed SPSAF Algorithm
To solve the smoothed sparse phase retrieval least-squares optimization problem (13), we propose a new smoothing sparse phase retrieval algorithm named SPSAF. It consists of two stages, initialization and gradient refinement, which are described in detail below.
3.1. Sparse Weighted Spectrum Initialization
Based on the weighted initialization method proposed in [41], the SPSAF algorithm uses an improved sparse weighted initialization. Compared with the orthogonality-promoting initialization used in the SPARTA algorithm, this method abandons the truncation procedure and adopts a reweighting strategy so that the information in all samples is used effectively. We divide the initialization of SPSAF into three parts: a general initialization method, a support recovery method, and a sparse initialization method.
3.1.1. General Initialization Method
First of all, assume, without loss of generality, that the signal is normalized. It can be seen from [25, 41] that there is a correlation between each sensing vector and the unknown solution: the larger the amplitude of their inner product, the higher the correlation, and hence the more useful directional information the sample carries. Similar to the weighted maximal correlation method in [41], the samples can be sorted by amplitude and the unknown solution then estimated.
Let the index set denote the indices of the sensing vectors picked to participate in computing the initialization. The sensing vectors corresponding to the largest measurements have the largest correlation with the true solution, where the size of the set is an integer fraction of the number of measurements. Therefore, we can estimate the true solution from the selected indices, that is,
Secondly, to further strengthen the ability of the estimate to point toward the true solution, the initialization method uses a reweighting strategy that assigns different weights to different samples. The index set is sorted according to the size of the measurements: larger measurements correspond to larger weights, and smaller ones to smaller or even negative weights, where each weight corresponds to a different index. According to the size of the measurements, we define the weights as follows.
When the number of measurements grows large, utilizing the strong law of large numbers and the rotation invariance of the Gaussian sampling vectors, we can obtain the initialization.
From Theorem 1 in [23], we can see that given a constant , there exist constants and , and when , the error between the initial estimate and the real solution meets with probability exceeding .
Finally, it is worth emphasizing that since the signal is sparse and , the initialization method in (15) becomes by utilizing regularization to represent sparse prior information.
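The general weighted initialization above can be sketched as follows. The selection rule (top fifth of the samples) and the linear weights are illustrative stand-ins for the paper's exact choices, and a plain power iteration extracts the leading eigenvector of the weighted correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 2500
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                     # ground-truth direction, ||x|| = 1
A = rng.standard_normal((m, n))            # i.i.d. Gaussian sensing vectors
y = np.abs(A @ x)                          # amplitude measurements

# Keep the samples with the largest amplitudes (most correlated with x) and
# weight them by their normalized size; SPSAF's exact rule and weights are
# given in the paper -- these are illustrative.
keep = np.argsort(y)[-(m // 5):]
w = y[keep] / y[keep].max()

Y = (A[keep].T * w) @ A[keep] / keep.size  # weighted correlation matrix
v = rng.standard_normal(n)
for _ in range(100):                       # plain power iteration
    v = Y @ v
    v /= np.linalg.norm(v)

print(abs(v @ x) > 0.8)                    # leading eigenvector aligns with x
```

The sign of v is undetermined, which is consistent with the global sign ambiguity of phase retrieval; a norm estimate then scales the direction to the right length.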
3.1.2. Accurate Support Recovery
When the measurements are undersampled, the number of measurements is less than the dimension of the signal; that is, . At this time, the sparse assumption of the signal needs to be introduced.
Without loss of generality, we set the support of unknown sparse solution , and . According to [22], we define random variables:
For the normalized Gaussian sensing vector, let the given exponent be a positive integer; we can then obtain the stated moment identity, where the multiplication is of second order (element-wise). According to the rotation invariance of the Gaussian distribution, it can be proved that the identity holds for all indices, where the reduced vector is the sensing vector after removing the corresponding element.
If index which means the corresponding is a nonzero term, that is, , then (22) can be converted to
Conversely, if which means , we can get
According to formulas (23) and (24), the expected values of the statistic exhibit a gap between indices inside and outside the support. As long as this gap is large enough, the support can be recovered exactly in this way [22]: the index set corresponding to the largest values recovers the support of the original signal. However, since the expectation is not actually available, it is replaced by its independent realizations; by the strong law of large numbers, the sample mean approaches the expectation.
To estimate the support of the original signal, we first calculate the sample mean as an empirical estimate of the expectation. The larger this statistic, the greater the probability that the corresponding element is nonzero. Therefore, we collect the indices corresponding to the largest values, which form an estimated support that can be expressed as
[22] proves that to improve the probability of accurately recovering the support , measurements are needed. At the same time, in order to ensure the separation of the index of the support, the smallest nonzero item of the signal is approximately ; that is, for some constants , there are
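The support-recovery rule can be sketched as follows: compute a per-coordinate sample mean of the squared data weighted by the squared sensing entries and keep the k largest. The statistic below follows the SPARTA-style rule (the paper's exact statistic may differ in constants), and the problem sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 5000, 5
supp = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[supp] = 2.0 * rng.choice([-1.0, 1.0], size=k)  # nonzeros bounded away from 0
A = rng.standard_normal((m, n))
y = np.abs(A @ x)

# Per-coordinate sample mean Z_j = (1/m) sum_i y_i^2 * a_{ij}^2.  In the real
# Gaussian model E[Z_j] = ||x||^2 + 2*x_j^2, so support coordinates stand out
# by a gap of 2*x_j^2 -- the "gap" exploited in the text.
Z = ((y**2)[:, None] * A**2).mean(axis=0)
S_hat = set(np.argsort(Z)[-k:])            # indices of the k largest values

print(S_hat == set(supp.tolist()))
```

This also illustrates why the smallest nonzero entry must be bounded away from zero: the gap shrinks with the square of that entry, and too small a gap is swamped by sampling noise.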
3.1.3. Sparse Initialization Method
After obtaining the support estimate of the original signal, we can compute the initialization based on it. Specifically, for all samples we rewrite the measurements in reduced form, where one block contains the elements of the sensing vector whose indices lie in the estimated support, and the other block contains the elements whose indices lie outside it.
Apply the general weighted initialization method in (19) to the reduced data , that is,
By zero-filling the elements whose indices lie outside the estimated support, we can construct the sparse initialization of full dimension. When the sample size is large enough, the final sparse initialization can be obtained through the norm estimation in (17).
3.2. Thresholded Gradient Stage
After obtaining an accurate sparse initialization, we refine it continually through gradient iterations. The SPSAF algorithm introduces a smoothing strategy for the amplitude-based phase retrieval loss function, which avoids the nondifferentiability of the objective function. This significantly simplifies the convergence analysis of SPSAF and eliminates the need for the extra gradient operations required by other algorithms.
The method we adopt is to refine the estimate iteratively through a series of gradient steps based on a sparse hard threshold, with an iteration counter and a constant step size. The sparse hard-threshold operator sets all elements except the largest ones to zero, thus transforming a dense vector into a sparse one, i.e.,
Combining (13), the Wirtinger gradient of the SPSAF algorithm can be expressed as
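Putting the pieces together, one thresholded gradient iteration on a smoothed amplitude loss can be sketched as follows. The surrogate sqrt(u^2 + mu^2) and the step size are illustrative assumptions, not the paper's exact gradient in (13), and the initialization stage is skipped by starting near the truth:

```python
import numpy as np

def step_smoothed(z, A, y, k, mu=1e-2, eta=0.5):
    """One hard-thresholded gradient step on a smoothed amplitude loss.
    sqrt(u^2 + mu^2) is an illustrative smooth surrogate of |u|; the paper's
    exact smoothing function and gradient differ in detail."""
    u = A @ z
    s = np.sqrt(u**2 + mu**2)              # smoothed amplitude, never zero
    grad = A.T @ ((s - y) * u / s) / len(y)
    z = z - eta * grad
    out = np.zeros_like(z)                 # hard threshold: keep k largest
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

rng = np.random.default_rng(2)
n, m, k = 60, 800, 4
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
A = rng.standard_normal((m, n))
y = np.abs(A @ x)

z = x + 0.1 * rng.standard_normal(n)       # start near the truth (init stage omitted)
for _ in range(300):
    z = step_smoothed(z, A, y, k)
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(err < 1e-2)
```

Note that the gradient needs no truncation or reweighting: because the smoothed amplitude never vanishes, the quotient u/s is well defined everywhere, which is exactly the computational saving the smoothing strategy buys.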
It is worth emphasizing that, for the smoothed sparse phase retrieval model, the choice of the smoothing parameter affects both the recovery performance and the computational efficiency of the algorithm. Specifically, a larger value yields better recovery performance but slower convergence, while a smaller value speeds up convergence at the cost of recovery performance. We select the value according to the ratio of the number of measurements to the sparsity: when the ratio is below a threshold in the real or complex case, one value is used; otherwise, the other. The specific SPSAF algorithm is described as follows.

4. Experimental Results
This section presents the numerical results for the SPSAF algorithm. To demonstrate its superiority, we compare it with the SPARTA algorithm [22] and the CRAF algorithm [23], which are representative state-of-the-art sparse phase retrieval methods. The parameters of all algorithms use their recommended values. All simulations are averaged over 100 independent Monte Carlo trials. In each trial, the number of initialization power iterations and the number of gradient refinement iterations are set to 100 for all algorithms.
In all experiments, the signal to be recovered is a sparse signal in the real or complex case. When sparse phase retrieval adopts the real-valued Gaussian model, the sparse signal and sensing vectors are real Gaussian; when it adopts the complex-valued Gaussian model, they are complex Gaussian. In addition, the other parameters of the SPSAF algorithm are chosen empirically: the learning rate (one value in the real case, another in the complex case) and the smoothing parameter.
We use the relative error as the performance metric, defined via the Euclidean distance from the estimate to the true solution, taken up to the global sign or phase ambiguity. When the relative error falls below the threshold, the original sparse signal is considered successfully recovered, and the trial counts as a success.
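The relative error with the unavoidable global sign ambiguity can be computed as follows (real case; `relative_error` is an illustrative helper, not from the paper):

```python
import numpy as np

def relative_error(z, x):
    """Relative Euclidean distance to the truth, up to the global sign that
    amplitude measurements cannot determine (real case; in the complex case
    the minimum is taken over all unit-modulus phases)."""
    dist = min(np.linalg.norm(z - x), np.linalg.norm(z + x))
    return dist / np.linalg.norm(x)

x = np.array([1.0, -2.0, 0.0, 3.0])
print(relative_error(-x, x))  # 0.0: -x is an equally valid solution
```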
4.1. Recovery Success Rate with Unknown Sparsity
In some practical applications, the sparsity of the signal may be unknown, so it is necessary to test the behavior of the algorithms in this case, namely, their stability. We define an estimate of the sparsity and set it to the theoretical upper bound of the sparsity level; Theorem 1 in [22] gives this upper bound when the estimate is approximately equal to the true sparsity.
When the phase retrieval model is in the real case, the recovery success rates of the SPSAF, SPARTA, and CRAF algorithms are compared in Figure 1, where the number of measurements increases from to . These curves show that SPSAF recovers with higher precision than the comparison algorithms. When the sparsity is unknown, the SPSAF algorithm shows higher stability, reaching a high recovery rate earlier and achieving accurate recovery with fewer measurements.
When the phase retrieval model is in the complex case, the recovery success rates of the SPSAF, SPARTA, and CRAF algorithms are shown in Figure 2, where the number of measurements ranges from to . Figure 2 shows that the SPSAF algorithm has a slight advantage in recovery ability over the other algorithms and remains stable when only an estimate of the sparsity is available. It is worth emphasizing that the SPSAF algorithm is the first to reach a high success rate as the number of measurements grows, showing excellent recovery ability.
4.2. Recovery Success Rate with Known Sparsity
In this section, we compare the signal recovery success rates of the algorithms when the sparsity is known. The comparison in the real case is shown in Figure 3, where the number of measurements increases from to . Figure 3 shows that the recovery performance of the SPSAF algorithm is significantly higher than that of the other two algorithms, demonstrating excellent recovery ability. In addition, comparing Figures 1 and 3, the recovery performance of the SPSAF algorithm is significantly improved when the sparse prior is known.
Figure 4 depicts the success rate of signal recovery in the complex case, where the number of measurements varies from to and the sparsity is known a priori. Figure 4 shows that the SPSAF algorithm is significantly better than the other two algorithms. Moreover, comparing Figures 2 and 4, the recovery performance of each algorithm is slightly improved when the sparsity is known.
Figure 5 depicts the relationship between the success rate of signal recovery and the sparsity in the real case, where the step-size parameter of each algorithm takes its optimal value. As the sparsity increases, the SPSAF algorithm can still accurately recover the signal and maintains a high success rate, whereas the success rates of the CRAF and SPARTA algorithms at the same sparsity drop considerably.
4.3. Comparison of Convergence Consumption
This section compares the SPSAF algorithm with the other algorithms in terms of convergence speed and time cost. We adopt the real Gaussian model and the complex Gaussian model with their respective numbers of measurements. The prior sparsity is the same for all models, and a relative error below the threshold is regarded as accurate recovery. The convergence curves in the real and complex cases are shown in Figures 6 and 7, which show that the number of iterations required for the SPSAF algorithm to converge is significantly smaller than for the other two algorithms.
On the other hand, Table 1 compares convergence speed and time cost under the noise-free Gaussian model, where boldface indicates the best value. The results show that the convergence speed and time cost of the SPSAF algorithm are better than those of the other algorithms in both the real and complex cases, demonstrating excellent performance.
4.4. Noise Robustness
To demonstrate the stability of the SPSAF algorithm in the presence of additive noise, we plot the relative mean square error (MSE) as a function of the signal-to-noise ratio (SNR) in dB. Amplitude measurements with noise can be expressed as follows, where the noise level is determined by the SNR.
In this section, we adopt the real-valued Gaussian model with a sparse prior. Figure 8 depicts the average relative error of the three algorithms. It can be seen that the SPSAF algorithm provides the most accurate estimation under additive noise.
We also plot the relative error as a function of SNR for different settings to verify the stability of the SPSAF algorithm; Figure 9 depicts the results. We use a real-valued Gaussian model with a sparse prior, with the SNR varied over a range in dB. The relative error of the SPSAF algorithm decreases approximately linearly as the SNR increases in both the real and complex cases, which shows that the proposed SPSAF algorithm is stable in noisy environments.
5. Conclusion
In this paper, we proposed the SPSAF algorithm to solve the sparse phase retrieval problem. The proposed SPSAF algorithm is an amplitude-based nonconvex sparse smoothing phase retrieval algorithm divided into two stages: initialization and gradient refinement. The complexity of our algorithm is reduced by smoothing the phase retrieval loss function, avoiding the truncation or reweighting of the gradient. The SPSAF algorithm first estimates the support of the original signal by a reasonable rule, then obtains the initial estimate by a carefully designed initialization method based on this support, and finally obtains the sparse estimate by a hard-thresholded gradient descent method. Numerical experiments show that, compared with existing typical algorithms, the SPSAF algorithm achieves significantly improved recovery performance and speed and has good robustness.
Data Availability
All research data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgments
This research was funded by the Natural Science Research Project of Fuyang Normal University (2021FSKJ02ZD), Natural Science Research Project of Anhui Province (KJ2020A0539, KJ2021A0662, and KJ2021A0682), Science Research and Innovation Team of Fuyang Normal University (kytd202004 and TDJC2021008), and Fuyang Normal University Young Talents Key Project (rcxm202006 and rcxm202004).