Abstract and Applied Analysis
Volume 2013 (2013), Article ID 129652, 9 pages
http://dx.doi.org/10.1155/2013/129652
Research Article

Two New Efficient Iterative Regularization Methods for Image Restoration Problems

School of Mathematical Sciences/Institute of Computational Science, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Received 4 March 2013; Revised 2 June 2013; Accepted 10 June 2013

Academic Editor: Marco Donatelli

Copyright © 2013 Chao Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Iterative regularization methods are efficient regularization tools for image restoration problems. The IDR($s$) and LSMR methods are state-of-the-art iterative methods for solving large linear systems, and they have recently attracted considerable attention. However, little is known about their behavior as iterative regularization methods for image restoration. In this paper, we study the regularization properties of the IDR($s$) and LSMR methods for image restoration problems. Comparative numerical experiments show that, in some situations, IDR($s$) can give a satisfactory solution with much less computational cost than the classic LSQR method when the discrepancy principle is used as a stopping criterion. Compared to LSQR, LSMR usually produces a more accurate solution when the L-curve method is used to choose the regularization parameter.

1. Introduction

Image deblurring is one of the most classic linear inverse problems. Blurring in images can arise from many sources, such as limitations of the optical system, camera and object motion, astigmatism, and environmental effects. The blurring process of an image can be formulated as a Fredholm integral equation of the first kind, which has the following classic form:

$$ g(x, y) = \iint h(x, y; s, t)\, f(s, t)\, ds\, dt, \qquad (1) $$

where $f$ and $g$ are the original image and the blurred image, respectively, and $h$ is a given point spread function (PSF). The PSF describes the blurring and the resulting image of a point source, and the degree of blurring of the point object is a measure of the quality of an imaging system. More information about different PSFs can be found in [1].

Equation (1) can be discretized to form a linear system

$$ A x = b, \qquad (2) $$

where the matrix $A$ is ill-conditioned since it has many singular values close to zero [1]. For simplicity, we assume in this paper that $A$ is nonsingular.

However, in many practical applications of image restoration the right-hand side $b$ is not available because of contamination by noise, so the linear system (2) is reformulated as

$$ A x = \bar{b} = b + e, \qquad (3) $$

where $e$ represents the noise.

Our goal is to obtain a good approximation of the original image by solving the system (3) instead of the system (2), since $b$ is not known. However, the solution of (3) is not a good approximation of the solution of (2), because $A$ is ill-conditioned. In fact, there is a quite remarkable disparity between the corresponding solutions of (3) and (2) even if the norm of $e$ is small.
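To make this noise amplification concrete, the following minimal sketch (our own illustration in Python/NumPy, not part of the paper's experiments; a 1-D Gaussian blurring matrix stands in for the 2-D operator $A$) solves (3) directly and shows that the naive solution is useless even though the perturbation is tiny:

    import numpy as np

    # Hypothetical toy problem: a 1-D Gaussian blurring matrix as a stand-in for A.
    n = 64
    t = np.arange(n)
    A = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)   # symmetric, severely ill-conditioned

    x_true = np.sin(2 * np.pi * t / n) + 1.0            # smooth "image"
    b = A @ x_true                                      # exact data: A x = b, as in (2)

    rng = np.random.default_rng(0)
    e = 1e-6 * rng.standard_normal(n)                   # tiny noise, ||e|| << ||b||
    x_naive = np.linalg.solve(A, b + e)                 # naive solution of (3)

    print("cond(A)         =", np.linalg.cond(A))
    print("||e|| / ||b||   =", np.linalg.norm(e) / np.linalg.norm(b))
    print("rel. error of x =", np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true))

Because cond($A$) is enormous, the relative error of the naive solution is many orders of magnitude larger than the relative noise level.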

The singular value decomposition (SVD) is a powerful tool for the analysis of discrete ill-posed problems. The SVD of $A$ takes the form

$$ A = U \Sigma V^T = \sum_{i=1}^{n} u_i \sigma_i v_i^T, $$

where $\Sigma$ is a diagonal matrix with the singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$. For most image deblurring problems, the discrete Picard condition is satisfied: for all singular values larger than the norm of the noise, the corresponding coefficients $|u_i^T b|$ decay faster than the $\sigma_i$.
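The discrete Picard condition can be inspected numerically by comparing $\sigma_i$ with $|u_i^T b|$. A sketch on the toy problem above (again our own illustration; the ratios stay bounded only at the indices where the Picard condition holds):

    import numpy as np

    # Rebuild the toy 1-D blur from the previous sketch.
    n = 64
    t = np.arange(n)
    A = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)
    b = A @ (np.sin(2 * np.pi * t / n) + 1.0)

    U, s, Vt = np.linalg.svd(A)
    coeffs = np.abs(U.T @ b)                 # |u_i^T b|
    for i in range(0, n, 8):                 # sample a few indices
        print(f"i={i:3d}  sigma_i={s[i]:9.2e}  |u_i^T b|={coeffs[i]:9.2e}  "
              f"ratio={coeffs[i] / s[i]:9.2e}")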

Regularization methods are used to produce a better solution. One of the most famous regularization methods is the truncated SVD (TSVD) [2], which yields the solution

$$ x_k = \sum_{i=1}^{k} \frac{u_i^T b}{\sigma_i} v_i . $$

Another regularization method is Tikhonov regularization, which consists of solving the minimization problem

$$ \min_x \left\{ \|A x - b\|_2^2 + \lambda^2 \|L x\|_2^2 \right\}, $$

where $L$ is a regularization operator, often chosen to be the identity matrix ([3, Chapter 5]), and $\lambda$ is the regularization parameter that controls the weighting between the two terms. However, the previously mentioned techniques are usually computationally expensive for large-scale problems like image deblurring. Iterative methods, especially Krylov subspace iterative methods, are used to solve these problems due to their inexpensive computational cost and easy implementation (see [4] for more details about Krylov subspace methods).
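In terms of the SVD, both regularized solutions are spectral filtering methods: TSVD keeps the first $k$ components, while Tikhonov (with $L = I$) damps component $i$ by the filter factor $\sigma_i^2 / (\sigma_i^2 + \lambda^2)$. A hedged sketch on the toy problem (the choices $k = 20$ and $\lambda = 10^{-4}$ are ours, for illustration only):

    import numpy as np

    # Toy 1-D blur with a little noise.
    n = 64
    t = np.arange(n)
    A = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)
    x_true = np.sin(2 * np.pi * t / n) + 1.0
    b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)

    U, s, Vt = np.linalg.svd(A)
    beta = U.T @ b                            # coefficients u_i^T b

    k = 20                                    # TSVD truncation (illustrative)
    x_tsvd = Vt[:k].T @ (beta[:k] / s[:k])

    lam = 1e-4                                # Tikhonov parameter (illustrative)
    f = s**2 / (s**2 + lam**2)                # Tikhonov filter factors, L = I
    x_tik = Vt.T @ (f * beta / s)

    for name, x in (("TSVD", x_tsvd), ("Tikhonov", x_tik)):
        print(name, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))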

When we apply a Krylov subspace method to solve (3), the semiconvergence property is observed: the method produces a sequence of iterates that initially approach the exact solution but then diverge from it again in later stages, because the influence of the noise starts to dominate the solution. In this situation, the iteration number $k$ can be considered as a regularization parameter. We also introduce different methods for choosing an effective $k$ in this paper.

The conjugate gradient (CG) method is a classical Krylov subspace iterative method for solving linear systems with a symmetric positive definite (SPD) matrix, and its regularizing effects are well known (see [5] and [3, Chapter 6] for more details). When $A$ is not SPD, the method can be applied to the normal equations $A^T A x = A^T b$. CGLS is the most stable implementation of the CG algorithm for the normal equations ([3, Chapter 6], [6, Chapter 6]). Other Krylov subspace methods have also been used to solve image restoration problems. The regularizing properties of GMRES have been studied in [7]. The regularizing properties of BiCG and QMR have been tested in [8], which shows that BiCG and QMR are much faster than CGLS. A comparison of the regularizing properties of the Krylov subspace methods CGLS, GMRES, CGS, BiCG, QMR, and BiCGSTAB has been carried out in [9]: CGS, BiCG, and BiCGSTAB are not sufficiently consistent with the discrepancy principle (a method is said to be consistent with the discrepancy principle if the residual norm obtained with the discrepancy principle as the stopping rule depends linearly on the noise norm [9]; see Section 3.1 for details of the discrepancy principle); the CGLS method is both efficient and consistent with the discrepancy principle, but it needs a large number of iterations; GMRES and QMR show intermediate features.
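A minimal CGLS implementation makes the semiconvergence described above visible: the relative error first drops and then stagnates or grows as the noise takes over. This sketch is our own (the paper's experiments use LSQR, which is mathematically equivalent to CGLS; the toy problem and noise level are illustrative, and the onset of divergence depends on both):

    import numpy as np

    def cgls(A, b, iters):
        # Minimal CGLS: CG applied implicitly to A^T A x = A^T b, with x_0 = 0.
        x = np.zeros(A.shape[1])
        r = b.copy()                     # r_k = b - A x_k
        s = A.T @ r                      # s_k = A^T r_k
        p = s.copy()
        gamma = s @ s
        iterates = []
        for _ in range(iters):
            q = A @ p
            alpha = gamma / (q @ q)
            x = x + alpha * p
            r = r - alpha * q
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
            iterates.append(x.copy())
        return iterates

    # Toy 1-D blur with noise; watch the error decrease and eventually grow.
    n = 64
    t = np.arange(n)
    A = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)
    x_true = np.sin(2 * np.pi * t / n) + 1.0
    b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

    for k, xk in enumerate(cgls(A, b, 200), 1):
        if k % 20 == 0:
            print(k, np.linalg.norm(xk - x_true) / np.linalg.norm(x_true))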

Recently, the IDR($s$) [10] and LSMR [11] iterative methods have been proposed for solving large linear systems. IDR($s$) is a new family of iterative methods that is fast and requires low memory; it has attracted considerable attention, and different variants have been proposed in [12, 13]. LSMR is similar to LSQR and is obtained by applying the MINRES method to the normal equations. The regularizing effects of MINRES have been analysed in [14, 15]. It has been proved that both $\|r_k\|_2$ and $\|A^T r_k\|_2$ decrease monotonically for LSMR, while only $\|r_k\|_2$ is monotonic for LSQR. So, it is safer to terminate LSMR early.

Many methods have been proposed for choosing the regularization parameter, but a robust method appropriate for all situations has yet to be found. Two methods are used in this paper: the discrepancy principle (DP) and the discrete L-curve criterion. The DP is the most famous and frequently used method. It has distinct advantages and disadvantages: it is simple and easy to use, but it requires the norm of the noise to be known in advance. The discrete L-curve criterion chooses the regularization parameter from the curve of the points $(\|A x_k - b\|_2, \|x_k\|_2)$ plotted in log-log scale. For more information about methods for choosing the regularization parameter, we suggest [6, Chapter 5] and the references therein.

Inspired by the advantages of IDR($s$) and LSMR, we apply them to image restoration problems. We compare the performance of the IDR($s$) method with that of the LSQR method for solving (3) when the iteration is terminated by the DP method. We also compare the LSMR method with the LSQR method when the discrete L-curve criterion is used to choose the regularization parameter.

The paper is organized as follows. In Section 2, we briefly review the IDR($s$) and LSMR algorithms and present an analysis of their regularizing properties. In Section 3, methods for choosing the regularization parameter, especially the DP method and the discrete L-curve criterion, are introduced. The IDR($s$) and LSMR methods are applied to image restoration problems and compared with LSQR in Section 4. Section 5 presents our conclusions.

2. IDR($s$) and LSMR

2.1. The IDR($s$) Method

The IDR($s$) methods are a new family of iterative methods based on the induced dimension reduction (IDR) method proposed by Sonneveld in 1980. The IDR($s$) algorithm generates a sequence of nested subspaces in which the corresponding residuals are forced to lie. The dimension of these nested subspaces is monotonically decreasing, so in exact arithmetic the solution is obtained within a finite, and not very large, number of iterations.

The nested subspaces are defined by

$$ \mathcal{G}_0 = \mathcal{K}_n(A, r_0), \qquad \mathcal{G}_j = (I - \omega_j A)\left(\mathcal{G}_{j-1} \cap \mathcal{S}\right), \quad j = 1, 2, \ldots, $$

where $\mathcal{S}$ is a fixed proper subspace of $\mathbb{C}^n$ and the $\omega_j$ are nonzero scalars. The space $\mathcal{S}$ is usually taken to be the left null space of an $n \times s$ matrix $P = [p_1, p_2, \ldots, p_s]$:

$$ \mathcal{S} = \mathcal{N}(P^H). $$

The columns of $P$ are chosen as the orthogonalization of a set of $s$ random vectors.
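In code, this choice of $P$ is one QR factorization away (a NumPy sketch of our own; the sizes are hypothetical):

    import numpy as np

    n, s = 4096, 4                                     # illustrative sizes
    rng = np.random.default_rng(0)
    P, _ = np.linalg.qr(rng.standard_normal((n, s)))   # n x s, orthonormal columns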

Suppose that after $n$ iterations we have the residual $r_n$, vectors $g_{n-1}, \ldots, g_{n-s} \in \mathcal{G}_j$, and corresponding vectors $u_{n-1}, \ldots, u_{n-s}$ with $g_i = A u_i$. Define the matrices

$$ G_n = [\, g_{n-s}, \ldots, g_{n-1} \,], \qquad U_n = [\, u_{n-s}, \ldots, u_{n-1} \,]. $$

The algorithm produces residuals $r_n$ in $\mathcal{G}_j$; then the next residuals can be found in $\mathcal{G}_{j+1}$. Two steps are needed to obtain these residuals.

Step 1. Compute the first residual in $\mathcal{G}_{j+1}$: calculate $c \in \mathbb{R}^s$ from the small system $P^H G_n c = P^H r_n$, set $v_n = r_n - G_n c$, and compute $r_{n+1} = (I - \omega_{j+1} A) v_n$.

Step 2. Compute the remaining residuals in $\mathcal{G}_{j+1}$, for $k = 1, \ldots, s$: calculate $c$ from $P^H G_{n+k} c = P^H r_{n+k}$, set $v = r_{n+k} - G_{n+k} c$ and $u_{n+k} = U_{n+k} c + \omega_{j+1} v$, and compute $g_{n+k} = A u_{n+k}$; select $\alpha$ and $\beta$; update $r_{n+k}$ and $x_{n+k}$ to get $r_{n+k+1}$ and $x_{n+k+1}$; update $G_{n+k}$ and $U_{n+k}$ to get $G_{n+k+1}$ and $U_{n+k+1}$.

Different choices of $\alpha$ and $\beta$ result in different variants of IDR($s$). We adopt the same notation as in [12]: IDR($s$)-proto denotes the original algorithm in [10], and IDR($s$)-biortho denotes the variant of IDR($s$) in [12]. IDR($s$)-proto simply computes the vectors from differences of consecutive residuals and iterates; that is, $g_i = -\Delta r_i$, $u_i = \Delta x_i$, and $g_i = A u_i$.

IDR($s$)-biortho selects $\alpha$ and $\beta$ to construct vectors that satisfy the biorthogonality conditions

$$ g_{n+k} \perp p_i, \quad i = 1, \ldots, k-1, \qquad r_{n+k+1} \perp p_i, \quad i = 1, \ldots, k. $$

The IDR($s$)-biortho algorithm is more stable and more accurate than the IDR($s$)-proto algorithm, so we apply the IDR($s$)-biortho algorithm to solve the problems in this paper.

As described in [10, 16], after $n$ iterations we have the expression

$$ r_n = \Phi_n(A) r_0, $$

where $\Phi_n$ is a polynomial of degree $n$. $\Phi_n$ can be explicitly written as a product of two polynomials,

$$ \Phi_n(t) = \Omega_j(t)\, \Psi_{n-j}(t), \qquad \Omega_j(t) = \prod_{i=1}^{j} (1 - \omega_i t), $$

where $j$ is the number of spaces $\mathcal{G}_j$ traversed so far and $\Psi_{n-j}$ is a Lanczos-type polynomial of degree $n-j$. The factor $\Omega_j$ is called the damping factor, and $\Psi_{n-j}$ is called the Lanczos factor. The damping factor is the residual polynomial of the damped Richardson iterative method, which is used as an iterative regularizing method in [17] and is sometimes referred to as the Van Cittert method in the literature; compare [18]. According to the conclusions of [16], the Lanczos factor is determined completely by the choice of $P$, whereas the damping factor depends on the choice of the parameters $\omega_i$. Actually, in our experiments, different choices of the parameter $s$ (the number of columns of $P$) did not show a notable influence on the performance in solving image restoration problems. So, the damping factor plays the role of an iterative regularizing method.

2.2. The LSMR Method

The LSMR algorithm is equivalent to MINRES applied to the normal equations $A^T A x = A^T b$. The algorithm is implemented via the Lanczos bidiagonalization algorithm; one form of the bidiagonalization procedure is the Golub-Kahan process:

$$ \beta_1 u_1 = b, \qquad \alpha_1 v_1 = A^T u_1, $$
$$ \beta_{k+1} u_{k+1} = A v_k - \alpha_k u_k, \qquad \alpha_{k+1} v_{k+1} = A^T u_{k+1} - \beta_{k+1} v_k, \qquad k = 1, 2, \ldots, $$

where the scalars $\alpha_k$ and $\beta_k$ are chosen so that the norms of $u_k$ and $v_k$ are both equal to one.

We define $U_{k+1} = [u_1, \ldots, u_{k+1}]$, $V_k = [v_1, \ldots, v_k]$, and

$$ B_k = \begin{bmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \ddots & \ddots & \\ & & \beta_k & \alpha_k \\ & & & \beta_{k+1} \end{bmatrix} \in \mathbb{R}^{(k+1) \times k}. $$

Then, the following relations hold:

$$ A V_k = U_{k+1} B_k, \qquad A^T U_{k+1} = V_k B_k^T + \alpha_{k+1} v_{k+1} e_{k+1}^T. $$
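These relations are easy to verify numerically. The following Golub-Kahan sketch is our own code (no reorthogonalization, so it is suitable only for small illustrations):

    import numpy as np

    def golub_kahan(A, b, k):
        # k steps of Golub-Kahan bidiagonalization.
        # Returns U (m x (k+1)), V (n x k), B ((k+1) x k) with A V = U B.
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
        beta = np.linalg.norm(b); U[:, 0] = b / beta
        v = A.T @ U[:, 0]; alpha = np.linalg.norm(v); V[:, 0] = v / alpha
        B[0, 0] = alpha
        for j in range(k):
            u = A @ V[:, j] - alpha * U[:, j]
            beta = np.linalg.norm(u); U[:, j + 1] = u / beta
            B[j + 1, j] = beta
            if j + 1 < k:
                v = A.T @ U[:, j + 1] - beta * V[:, j]
                alpha = np.linalg.norm(v); V[:, j + 1] = v / alpha
                B[j + 1, j + 1] = alpha
        return U, V, B

    # Sanity check of A V_k = U_{k+1} B_k on a random matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 20)); b = rng.standard_normal(30)
    U, V, B = golub_kahan(A, b, 5)
    print(np.linalg.norm(A @ V - U @ B))    # ~ machine precision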

While LSQR computes at each iteration the vector $x_k = V_k y_k$ that minimizes the residual norm $\|r_k\|_2 = \|b - A x_k\|_2$, LSMR chooses $y_k$ differently, trying to minimize $\|A^T r_k\|_2$. Since

$$ r_k = b - A V_k y_k = U_{k+1} \left( \beta_1 e_1 - B_k y_k \right), \qquad A^T r_k = V_{k+1} \left( \bar{\beta}_1 e_1 - \begin{bmatrix} B_k^T B_k \\ \bar{\beta}_{k+1} e_k^T \end{bmatrix} y_k \right), $$

with $\bar{\beta}_1 = \alpha_1 \beta_1$ and $\bar{\beta}_{k+1} = \alpha_{k+1} \beta_{k+1}$, we have the following two least squares problems:

$$ \text{LSQR:} \quad \min_{y_k} \left\| \beta_1 e_1 - B_k y_k \right\|_2, \qquad \text{LSMR:} \quad \min_{y_k} \left\| \bar{\beta}_1 e_1 - \begin{bmatrix} B_k^T B_k \\ \bar{\beta}_{k+1} e_k^T \end{bmatrix} y_k \right\|_2. $$

Both $\|r_k\|_2$ and $\|A^T r_k\|_2$ decrease monotonically for the LSMR algorithm, while only $\|r_k\|_2$ is monotonic for the LSQR algorithm. So, compared to LSQR, it is safer to terminate LSMR early.
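SciPy ships reference implementations of both solvers, which makes the contrast easy to observe in practice. The following usage is our illustration (lsmr reports $\|A^T r_k\|_2$ as normar, and lsqr reports it as arnorm; the matrix and iteration cap are hypothetical):

    import numpy as np
    from scipy.sparse.linalg import lsqr, lsmr

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 400))
    b = rng.standard_normal(500)

    out_lsqr = lsqr(A, b, iter_lim=50)      # returns (x, istop, itn, ..., arnorm, ...)
    out_lsmr = lsmr(A, b, maxiter=50)       # returns (x, istop, itn, normr, normar, ...)

    print("LSQR ||A^T r|| =", out_lsqr[7])
    print("LSMR ||A^T r|| =", out_lsmr[4])  # decreases monotonically in k for LSMR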

LSMR works on the normal equations, so a simple analysis of its regularizing properties can be carried out, similar to the analysis of the CGLS method in [15]. Indeed, LSMR and LSQR share the same Krylov subspace:

$$ \mathcal{K}_k(A^T A, A^T b) = \mathrm{span}\left\{ A^T b, (A^T A) A^T b, \ldots, (A^T A)^{k-1} A^T b \right\}. $$

We insert the SVD of $A$ into this expansion. The Krylov vectors for LSMR are

$$ (A^T A)^i A^T b = V \Sigma^{2i+1} U^T b, \qquad i = 0, 1, \ldots, k-1. $$

The diagonal elements of $\Sigma^{2i+1}$ and the coefficients $u_j^T b$ decay, due to the discrete Picard condition. So, the Krylov subspace can be considered as an approximation of the subspace spanned by the first $k$ right singular vectors. Hence, the LSMR iterate $x_k$ can be considered as an approximation of the TSVD solution.

From another perspective, LSMR is equivalent to the MINRES method applied to the normal equations. The regularizing properties of MINRES have been analysed in [14, 15], so LSMR evidently inherits regularizing properties.

3. Choosing the Regularization Parameter

A reliable, efficient, and robust method for choosing the regularization parameter has not yet been found. Many techniques have been proposed that choose a suitable regularization parameter under certain conditions, and new methods are still being developed.

Several well-known methods, including the DP [19], the L-curve [20], generalized cross validation (GCV) [21], and the NCP method, are described in Hansen's book [6], together with a comparison of these methods. The DP is often used to choose the regularization parameter of Krylov subspace methods. When $\|x_k\|_2$ and $\|b - A x_k\|_2$ show monotonic behavior, the discrete L-curve criterion can be utilized. Hnětynková et al. recently proposed using information from the Golub-Kahan iterative bidiagonalization to estimate the norm of the noise, which can be used to construct efficient stopping criteria [22]. For more information about choosing the regularization parameter, we suggest [23] and the references therein.

Two frequently used methods are utilized to choose the regularization parameter in this paper: the DP and the discrete L-curve criterion.

3.1. The Discrepancy Principle

One simple and frequently used method for choosing the regularization parameter is the DP ([14, Chapter 3.3], [6, Chapter 5.2]). It suggests choosing the parameter such that the residual norm equals the discrepancy, which is the norm of the noise $e$. In general, we terminate the iteration as soon as we have a solution $x_k$ such that

$$ \|b - A x_k\|_2 \le \tau \delta, $$

where $\delta$ is the norm of $e$ and $\tau \ge 1$ is a safety factor.
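A hedged sketch of the DP as a stopping rule, wrapped around a CGLS iteration on the toy problem from Section 1 (the function name cgls_dp, the safety factor tau, and all parameter values are our assumptions; the paper does not specify them):

    import numpy as np

    def cgls_dp(A, b, delta, tau=1.01, max_iter=500):
        # Run CGLS and stop as soon as ||b - A x_k|| <= tau * delta (the DP).
        x = np.zeros(A.shape[1]); r = b.copy()
        s = A.T @ r; p = s.copy(); gamma = s @ s
        for k in range(1, max_iter + 1):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            if np.linalg.norm(r) <= tau * delta:      # discrepancy principle
                return x, k
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x, max_iter

    n = 64
    t = np.arange(n)
    A = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)
    x_true = np.sin(2 * np.pi * t / n) + 1.0
    e = 1e-3 * np.random.default_rng(1).standard_normal(n)
    b = A @ x_true + e

    x, k = cgls_dp(A, b, delta=np.linalg.norm(e))     # delta known exactly here
    print("stopped at k =", k,
          " REN =", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))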

The main disadvantage of the DP method is that $\|e\|_2$ is often not available in advance. Because the choice of the regularization parameter is sensitive to the accuracy of the estimate of $\|e\|_2$, it is not easy to obtain a good parameter from an inaccurate estimate.

3.2. The Discrete L-Curve Criterion

The discrete L-curve criterion is more complicated than the DP. The L-curve obtains the regularization parameter from the behavior of the norm of the regularized solution $\|x_k\|_2$ and the corresponding residual norm $\|b - A x_k\|_2$. The L-curve is a log-log plot of $\|x_k\|_2$ versus $\|b - A x_k\|_2$, and it has two different parts, an approximately horizontal part and an approximately vertical part. (1) When $k$ is small, the residual norm decreases quickly while the solution norm grows slowly; this gives the horizontal part. (2) When $k$ is large enough, the solution is dominated by the error caused by the noise; the solution norm then grows rapidly while the residual norm levels off near the noise norm, which gives the vertical part.

The latter observation also explains why the DP is effective for choosing the regularization parameter.

The effective regularization parameter can be determined by the "corner" that separates the two parts. For continuous L-curves, the corner is the point of maximum curvature, and the corner of a discrete L-curve is determined similarly. However, finding the corner is not straightforward, because the discrete L-curve may have many local corners. In this paper, we adopt the adaptive pruning algorithm of [24] for the discrete L-curve criterion, which uses a sequence of pruned L-curves at different scales to locate the global corner. A MATLAB implementation of this algorithm is available at http://www.mathworks.com/matlabcentral/fileexchange/52-regtools/content/regu/corner.m.
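To illustrate the idea (not the pruning algorithm itself), the following simplified corner finder picks the point of maximum discrete curvature of the log-log L-curve; the function name lcurve_corner and the Menger-curvature heuristic are our own choices, and robust codes should use the algorithm of [24]:

    import numpy as np

    def lcurve_corner(res_norms, sol_norms):
        # Index of maximum discrete curvature of the log-log L-curve.
        # Simplified stand-in for the adaptive pruning algorithm of [24].
        x = np.log(np.asarray(res_norms)); y = np.log(np.asarray(sol_norms))
        best_k, best_curv = 1, -np.inf
        for k in range(1, len(x) - 1):
            # Menger curvature of the circle through three consecutive points.
            a = np.hypot(x[k] - x[k-1], y[k] - y[k-1])
            b = np.hypot(x[k+1] - x[k], y[k+1] - y[k])
            c = np.hypot(x[k+1] - x[k-1], y[k+1] - y[k-1])
            cross = abs((x[k] - x[k-1]) * (y[k+1] - y[k-1])
                        - (x[k+1] - x[k-1]) * (y[k] - y[k-1]))
            curv = 2.0 * cross / (a * b * c) if a * b * c > 0 else 0.0
            if curv > best_curv:
                best_k, best_curv = k, curv
        return best_k

    # Usage: record (||b - A x_k||_2, ||x_k||_2) for k = 1, 2, ... during the
    # iteration, then take the iterate at index lcurve_corner(res, sol).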

The discrete L-curve criterion does not need information about the norm of the noise, which is an advantage over the DP. The L-curve also has its limitations [25-27]: it will fail when the solution is very smooth, and the regularization parameter computed by the L-curve may not behave consistently as the problem size increases.

The discrete L-curve criterion requires that $\|x_k\|_2$ increases monotonically and $\|b - A x_k\|_2$ decreases monotonically as $k$ increases. CGLS and LSQR are mathematically equivalent, and both of them can use the L-curve criterion as a stopping rule. The monotonicity of the residual norm in CGLS and LSQR is apparent, and, starting from the zero vector, the monotonicity of the solution norm in CGLS and LSQR has been proved in [3] and [28]. The monotonicity of both norms in LSMR has been proved in [28] accordingly. So, the L-curve may be used to choose the regularization parameter when LSMR is used as an iterative regularization method.

In practical implementations, we need stopping criteria to terminate the iteration (after the corner appears); then we can determine the regularization parameter from the information collected. The iteration for choosing the regularization parameter needs an effective stopping criterion. The frequently used criterion $\|r_k\|_2 / \|b\|_2 \le \mathrm{tol}$ is not a good one: the iteration may be terminated before the "corner" of the L-curve appears if tol is too big, while the number of iterations may be very large if tol is too small, because of the ill-conditioning of $A$. We instead choose a criterion based on $\|A^T r_k\|_2$, thanks to its monotonicity for LSMR. This stopping criterion is used together with other stopping criteria: the residual-based criterion above and $k \le k_{\max}$, where $k_{\max}$ is an explicit limit on the iteration number.

4. Numerical Experiments

In this section, we present numerical examples comparing the IDR($s$) and LSMR methods with the LSQR method. All examples are restorations of blurred and noisy images. The blurring operator is constructed by the function blur in the Regularization Tools package by Hansen [29]; it is a block Toeplitz with Toeplitz blocks matrix that models the blurring of an image by a Gaussian PSF. This section includes two subsections: the first compares the IDR($s$) method with the LSQR method, where the DP is used to terminate the iteration; the second compares the LSMR method with the LSQR method, where the regularization parameters are chosen by the discrete L-curve criterion. The relative error norm (REN) is defined by

$$ \mathrm{REN} = \frac{\|x_k - x\|_2}{\|x\|_2}, $$

where $x$ denotes the true image.
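The paper's operator comes from Hansen's MATLAB function blur. For readers working in Python, a rough matrix-free analogue (our assumption, not the same matrix that blur produces) applies a Gaussian PSF with scipy.ndimage and exposes it as a LinearOperator, which is all that Krylov solvers such as lsqr and lsmr require:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.sparse.linalg import LinearOperator, lsmr

    N, sigma = 64, 2.0                     # image size and Gaussian PSF width

    def blur_op(N, sigma):
        # Matrix-free Gaussian blur acting on vectorized N x N images; with
        # zero padding the operator is symmetric, so rmatvec equals matvec.
        def matvec(x):
            return gaussian_filter(x.reshape(N, N), sigma, mode='constant').ravel()
        return LinearOperator((N * N, N * N), matvec=matvec, rmatvec=matvec)

    A = blur_op(N, sigma)
    x_true = np.zeros((N, N)); x_true[16:48, 16:48] = 1.0   # simple test image
    x_true = x_true.ravel()
    b = A.matvec(x_true)
    rng = np.random.default_rng(0)
    b = b + 1e-3 * np.linalg.norm(b) / np.sqrt(b.size) * rng.standard_normal(b.size)

    x_k = lsmr(A, b, maxiter=50)[0]
    print("REN =", np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))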

All experiments presented in this section are implemented on a computer with an Intel(R) Pentium(R) D CPU (3.00 GHz) and 1 GB of RAM using MATLAB R2010a.

4.1. The Comparison of IDR($s$) and LSQR

In this section, the IDR($s$)-biortho method is used because of its stability and accuracy.

Example 1. This example is the restoration of a noisy and blurred "Lena" image. The true image is shown in Figure 1(a). The blurring matrix is generated by blur, whose parameters are the bandwidth band and the Gaussian width sigma; the larger sigma is, the more ill-conditioned the problem is. Gaussian white noise at several noise levels is added to the noise-free image in our experiments.

Figure 1: (a) True image, (b) blurred and noisy image, (c) restored image with LSQR, and (d) restored image with IDR(1).

The IDR($s$) method is applied to solve this problem, and the popular iterative regularization method LSQR is used for comparison. Several values of the parameter $s$ in IDR($s$) are tested (see Table 1). The DP method is used to terminate the iteration.

The cost of each IDR($s$) step is dominated by one matrix-vector product, while each LSQR step needs two matrix-vector products. On the other hand, IDR($s$) needs more memory and computation as $s$ increases. More information about the cost of IDR($s$) can be found in [12].

The data in Table 1 indicate that IDR($s$) can produce a satisfying solution with far fewer iterations than LSQR ("Iter" is the number of iterations and "MV" is the number of matrix-vector products). Moreover, the parameter $s$ has only a small influence on the convergence and the number of iterations. This suggests using IDR(1), rather than IDR($s$) with $s > 1$, to save storage and computational load.

Table 1: Image deblurring using LSQR and IDR($s$) for "Lena."

However, IDR($s$) does not give a desirable solution when the noise level is high: for the larger noise levels, IDR($s$) cannot give a good enough solution compared to LSQR, and the performance of IDR($s$) worsens as the noise level grows. Another shortcoming of IDR($s$) is that it cannot obtain a desirable solution, or may even fail to converge, for very ill-conditioned problems, which are obtained with a larger sigma. The data in Table 2 show the performance of LSQR and IDR(1) for Example 1 with a larger sigma, for which $A$ is far more ill-conditioned. Although IDR($s$) can obtain the solution much faster than LSQR, these methods should be used carefully. In our experiments, IDR($s$) always gave a satisfactory solution when $A$ was not very ill-conditioned and the noise level was low.

Table 2: Image deblurring using LSQR and IDR($s$) for "Lena" with a larger sigma.

Figure 1 shows the true image, the blurred and noisy image, and the corresponding images restored by LSQR and IDR(1).

4.2. The Comparison of LSMR and LSQR

Example 2. The true image is constructed by the MATLAB command phantom, which creates a head phantom image. In this experiment, the image size and the bandwidth of blur are fixed; the L-curve is suitable for choosing the regularization parameter in this situation. Gaussian white noise is added to the true image. The true image is shown in Figure 3(a), and the blurred and noisy image in Figure 3(b).

Example 3. The true image is obtained from the Regularization Tools package by Hansen [29]. The remaining parameters are the same as in Example 2. The true image and the blurred and noisy image are shown in Figures 5(a) and 5(b).

The LSMR method is applied to solve these problems, with the LSQR method used for comparison. The discrete L-curve criterion is employed to choose the regularization parameter. We generate 100 instances of Gaussian white noise at a fixed noise level. For each instance, we use the discrete L-curve criterion to get the regularization parameters of LSMR and LSQR; then we obtain the solutions by running the corresponding LSMR and LSQR algorithms. The comparison of the relative error norms is shown in Figure 2 (Example 2) and Figure 4 (Example 3). It is evident that LSMR is more likely to produce a better solution than LSQR. Figure 3 shows one instance of Example 2, including the true image, the blurred and noisy image, and the images restored by LSQR and LSMR. Similarly, Figure 5 shows one instance of Example 3. Furthermore, if we compare the optimal solutions produced by LSMR and LSQR, the REN of the solution produced by LSMR is usually the smaller one.

Figure 2: Comparison of the REN produced by LSQR and LSMR.
Figure 3: (a) True image, (b) blurred and noisy image, (c) restored image with LSQR (Iter = 104, REN = 0.5004), and (d) restored image with LSMR (Iter = 128, REN = 0.4919).
Figure 4: Comparison of the REN produced by LSQR and LSMR.
Figure 5: (a) True image, (b) blurred and noisy image, (c) restored image with LSQR (Iter = 95, REN = 0.3519), and (d) restored image with LSMR (Iter = 129, REN = 0.3417).

The L-curve works well for large noise levels. Actually, we found that the regularization parameter obtained from the L-curve is almost optimal when the noise level is large. But the L-curve may fail, or require a large number of iterations to find the corner, when the noise level is small.

In addition, we have compared LSMR and LSQR with the regularization parameter chosen by the DP. The LSMR method attains nearly the same REN as the LSQR method, while the LSQR method can be terminated somewhat sooner. The memory and computational costs of each LSMR step are also slightly higher than those of LSQR [11].

5. Conclusion

In this paper, we briefly introduced the IDR($s$) method and the LSMR method and applied them to image deblurring problems. These methods show some advantages over the classic iterative regularization method LSQR.

When the DP is used to terminate the iteration, IDR($s$) can give a satisfactory solution at a low computational cost when $A$ is not very ill-conditioned and the noise level is low. The parameter $s$ has little influence on the performance of IDR($s$), so IDR(1) can be used to save storage and computational cost.

When the DP is used to terminate the iteration, LSMR performs as well as LSQR but at a higher computational cost. When the L-curve method is utilized to choose the regularization parameter, LSMR is more likely to produce a more accurate solution than LSQR.

Acknowledgments

This research is supported by the 973 Program (2013CB329404), NSFC (61170311), the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and the Sichuan Province Science and Technology Research Project (2012GZX0080). The authors would like to thank the referees and the editor for their many helpful suggestions for revising this paper.

References

  1. P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2006.
  2. R. J. Hanson, "A numerical method for solving Fredholm integral equations of the first kind using singular values," SIAM Journal on Numerical Analysis, vol. 8, pp. 616–622, 1971.
  3. P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1998.
  4. Y. Saad, Iterative Methods for Sparse Linear Systems, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2nd edition, 2003.
  5. R. Plato, "Optimal algorithms for linear ill-posed problems yield regularization methods," Numerical Functional Analysis and Optimization, vol. 11, no. 1-2, pp. 111–118, 1990.
  6. P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2010.
  7. D. Calvetti, B. Lewis, and L. Reichel, "On the regularizing properties of the GMRES method," Numerische Mathematik, vol. 91, no. 4, pp. 605–625, 2002.
  8. D. Calvetti, B. Lewis, and L. Reichel, "Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration," in Advanced Signal Processing: Algorithms, Architectures and Implementations XI, vol. 4474 of Proceedings of SPIE, pp. 224–233, The International Society for Optical Engineering, Bellingham, Wash, USA, August 2001.
  9. P. Brianzi, P. Favati, O. Menchi, and F. Romani, "A framework for studying the regularizing properties of Krylov subspace methods," Inverse Problems, vol. 22, no. 3, pp. 1007–1021, 2006.
  10. P. Sonneveld and M. B. van Gijzen, "IDR(s): a family of simple and fast algorithms for solving large nonsymmetric systems of linear equations," SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 1035–1062, 2008.
  11. D. C.-L. Fong and M. Saunders, "LSMR: an iterative algorithm for sparse least-squares problems," SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2950–2971, 2011.
  12. M. B. van Gijzen and P. Sonneveld, "An elegant IDR(s) variant that efficiently exploits bi-orthogonality properties," Tech. Rep. 08-21, Department of Applied Mathematical Analysis, Delft University of Technology, Delft, The Netherlands, 2008.
  13. V. Simoncini and D. B. Szyld, "Interpreting IDR as a Petrov-Galerkin method," SIAM Journal on Scientific Computing, vol. 32, no. 4, pp. 1898–1912, 2010.
  14. M. Hanke, Conjugate Gradient Type Methods for Ill-Posed Problems, vol. 327 of Pitman Research Notes in Mathematics Series, Longman, Harlow, UK, 1995.
  15. T. K. Jensen and P. C. Hansen, "Iterative regularization with minimum-residual methods," BIT, vol. 47, no. 1, pp. 103–120, 2007.
  16. P. Sonneveld, "On the convergence behaviour of IDR(s)," Tech. Rep. 10-08, Department of Applied Mathematical Analysis, Delft University of Technology, Delft, The Netherlands, 2010.
  17. M. Donatelli and S. Serra-Capizzano, "On the regularizing power of multigrid-type algorithms," SIAM Journal on Scientific Computing, vol. 27, no. 6, pp. 2053–2076, 2006.
  18. M. Donatelli and S. Serra-Capizzano, "Filter factor analysis of an iterative multilevel regularizing method," Electronic Transactions on Numerical Analysis, vol. 29, pp. 163–177, 2007/08.
  19. V. A. Morozov, "On the solution of functional equations by the method of regularization," Soviet Mathematics Doklady, vol. 7, pp. 414–417, 1966.
  20. P. C. Hansen, "Analysis of discrete ill-posed problems by means of the L-curve," SIAM Review, vol. 34, pp. 658–672, 1992.
  21. G. Wahba, "Practical approximate solutions to linear operator equations when the data are noisy," SIAM Journal on Numerical Analysis, vol. 14, no. 4, pp. 651–667, 1977.
  22. I. Hnětynková, M. Plešinger, and Z. Strakoš, "The regularizing effect of the Golub-Kahan iterative bidiagonalization and revealing the noise level in the data," BIT, vol. 49, no. 4, pp. 669–696, 2009.
  23. L. Reichel and G. Rodriguez, "Old and new parameter choice rules for discrete ill-posed problems," Numerical Algorithms, vol. 63, no. 1, pp. 65–87, 2013.
  24. P. C. Hansen, T. K. Jensen, and G. Rodriguez, "An adaptive pruning algorithm for the discrete L-curve criterion," Journal of Computational and Applied Mathematics, vol. 198, no. 2, pp. 483–492, 2007.
  25. P. C. Hansen, "The L-curve and its use in the numerical treatment of inverse problems," in Computational Inverse Problems in Electrocardiology, P. Johnston, Ed., pp. 119–142, WIT Press, Southampton, UK, 2001.
  26. C. R. Vogel, "Non-convergence of the L-curve regularization parameter selection method," Inverse Problems, vol. 12, no. 4, pp. 535–547, 1996.
  27. M. Hanke, "Limitations of the L-curve method in ill-posed problems," BIT, vol. 36, no. 2, pp. 287–301, 1996.
  28. D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares Using Golub-Kahan Bidiagonalization [Ph.D. thesis], Stanford University, Stanford, Calif, USA, 2011.
  29. P. C. Hansen, "Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems," Numerical Algorithms, vol. 6, no. 1-2, pp. 1–35, 1994.