Mathematical Problems in Engineering
Volume 2017, Article ID 1375716, 8 pages
https://doi.org/10.1155/2017/1375716
Research Article

Vector Extrapolation Based Landweber Method for Discrete Ill-Posed Problems

1School of Mathematical Sciences/Research Center for Image and Vision Computing, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
2School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China
3Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, Nijenborgh 9, P.O. Box 407, 9700 AK Groningen, Netherlands

Correspondence should be addressed to Xian-Ming Gu; guxianming@live.cn

Received 10 June 2017; Revised 12 September 2017; Accepted 18 September 2017; Published 16 November 2017

Academic Editor: Qingling Zhang

Copyright © 2017 Xi-Le Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Landweber method is one of the classical iterative methods for solving linear discrete ill-posed problems. However, it generally converges very slowly. In this paper, we present a vector extrapolation based Landweber method, which exhibits fast and stable convergence behavior. Moreover, a restarted version of the vector extrapolation based Landweber method is proposed for practical considerations. Numerical results are given to illustrate the benefits of the vector extrapolation based Landweber method.

1. Introduction

Ill-posed inverse problems arise in many important applications, including astronomy, medical tomography, geophysics, sound source detection, and image deblurring (refer, e.g., to [1–6] and references therein). In this paper, we consider linear discrete ill-posed problems of the form

b = A x̄ + e,  (1)

where b is an observed vector, A is a severely ill-conditioned matrix, x̄ is the unknown true solution, and e is an unknown additive noise. Popular methods for computing an approximation of x̄ are iterative regularization methods, which are very well suited for large-scale problems [3, 7]. Owing to their fast convergence behavior, Krylov subspace methods (e.g., CGLS [8, pp. 288–293], LSQR [9], and LSMR [10]) are usually considered superior to the Landweber method (refer to Brianzi et al. [11] for an excellent overview of the regularizing properties of Krylov subspace methods). However, the semiconvergence behavior of Krylov subspace methods is much more pronounced than that of the Landweber method, so without a good stopping criterion one runs a high risk of computing a poor solution.

Owing to its simplicity and stability, the Landweber method is still competitive with Krylov subspace methods in some real applications [12, 13]. Its main practical disadvantage is slow convergence. Recently, the adaptive Landweber method, which updates the parameter at each iteration, was proposed to speed up the convergence of the Landweber method [14]. However, without a good preset parameter, the adaptive Landweber method performs almost the same as the standard Landweber method. Vector extrapolation methods are often effective at accelerating the convergence of vector sequences. This motivates us to combine vector extrapolation methods with the Landweber method to accelerate its convergence.

The rest of this paper is organized as follows. In Section 2, a brief review of popular vector extrapolation methods is provided. To accelerate the convergence of the Landweber method, the vector extrapolation based Landweber method is established in Section 3. Moreover, a restarted version of the vector extrapolation based Landweber method is proposed for practical considerations. Numerical experiments on image deblurring are reported in Section 4 to show the performance of the vector extrapolation based Landweber method. Finally, the paper closes with conclusions in Section 5.

2. Vector Extrapolation Methods

In this section, we briefly review popular vector extrapolation methods. Surveys and practical applications of vector extrapolation methods can be found in [15–19]. Most popular vector extrapolation methods can be classified into two categories: the polynomial-type methods and the ε-algorithms [16]. In general, the polynomial-type methods are more economical than the ε-algorithms for solving large-scale systems of algebraic equations, in terms of both computation and storage requirements. In the current study, we use the minimal polynomial extrapolation (MPE) method, one of the most efficient polynomial-type methods.

Let {x_m} be the vector sequence generated via

x_{m+1} = F(x_m), m = 0, 1, 2, ...,

where x_0 is an initial vector. The kth-order MPE approximation s_{n,k} to s, the desired limit or antilimit of {x_m}, is defined as

s_{n,k} = Σ_{j=0}^{k} γ_j x_{n+j},  (3)

with Σ_{j=0}^{k} γ_j = 1. Define the differences

u_i = x_{i+1} − x_i, i = 0, 1, 2, ....

Then the coefficients γ_0, γ_1, ..., γ_k are determined as follows:

(i) Compute the least-squares solution c of the overdetermined linear system

U c = −u_{n+k},

where U = [u_n, u_{n+1}, ..., u_{n+k−1}] and c = (c_0, c_1, ..., c_{k−1})ᵀ.

(ii) Set c_k = 1, and compute

γ_j = c_j / Σ_{i=0}^{k} c_i, j = 0, 1, ..., k,

provided that Σ_{i=0}^{k} c_i ≠ 0.

Based on the QR decomposition and the LU decomposition, two fast and numerically stable implementations of the MPE method were presented in [20, 21].
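The two steps above can be sketched in Python/NumPy (an illustrative translation, not the authors' Matlab implementation; the function name `mpe` and the dense least-squares solve are our choices):

```python
import numpy as np

def mpe(X):
    """MPE approximation from the iterates x_0, ..., x_{k+1}.

    X has shape (k+2, d), one iterate per row.  Following steps (i)-(ii):
    form the differences u_i = x_{i+1} - x_i, solve the overdetermined
    system [u_0 ... u_{k-1}] c = -u_k in the least-squares sense, append
    c_k = 1, normalize to weights gamma, and return sum_j gamma_j x_j.
    """
    U = np.diff(X, axis=0)                                # rows u_0, ..., u_k
    k = U.shape[0] - 1
    c, *_ = np.linalg.lstsq(U[:k].T, -U[k], rcond=None)   # step (i)
    c = np.append(c, 1.0)                                 # step (ii): c_k = 1
    gamma = c / c.sum()                                   # weights summing to 1
    return gamma @ X[:k + 1]
```

For a linear fixed-point iteration whose error is spanned by k distinct nonzero eigenvalues, the kth-order MPE approximation reproduces the limit exactly up to rounding, in line with Theorem 1 below.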

The following theorem in [22] illustrates the accelerating effect of the MPE method.

Theorem 1. Let the vector sequence {x_m} be such that

x_m = s + Σ_{i=1}^{p} v_i λ_i^m,

where v_1, ..., v_p are linearly independent vectors and λ_1, ..., λ_p are distinct nonzero scalars satisfying λ_i ≠ 1 and ordered such that |λ_1| ≥ |λ_2| ≥ ⋯ ≥ |λ_p|. Assume for some integer k < p that |λ_k| > |λ_{k+1}|. Then s_{n,k} exists for all sufficiently large n, and

s_{n,k} − s = O(|λ_{k+1}|^n) as n → ∞.

Remark 2. When the matrix T is diagonalizable and I − T is nonsingular, vector sequences generated by the linear iteration x_{m+1} = T x_m + d satisfy the conditions of Theorem 1 [17]. In this case, all eigenvalues of T are different from 1, the scalars λ_i are some or all of the distinct nonzero eigenvalues of T, and the vectors v_i are corresponding eigenvectors.

3. Vector Extrapolation Based Landweber Method

The Landweber method is one of the simplest iterative regularization methods for solving linear discrete ill-posed problems (1). It takes the form

x_{m+1} = x_m + ω Aᵀ(b − A x_m), m = 0, 1, 2, ...,  (12)

where x_0 is an initial vector and ω is a real parameter satisfying 0 < ω < 2/σ_1², with σ_1 the largest singular value of A. The filter factors for (12) are

f_i^{(m)} = 1 − (1 − ω σ_i²)^m,

in which the iteration number m plays the role of a regularization parameter [3].
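For concreteness, the iteration (12) can be sketched as follows (a Python/NumPy illustration with a dense matrix; the actual experiments use fast transforms instead of explicit matrix products, as Remark 3 explains):

```python
import numpy as np

def landweber(A, b, omega, iters, x0=None):
    """Plain Landweber iteration x <- x + omega * A^T (b - A x).

    Converges to a least-squares solution when 0 < omega < 2 / sigma_1^2,
    with sigma_1 the largest singular value of A.
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        x = x + omega * (A.T @ (b - A @ x))
    return x
```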

Remark 3. At each iteration, the leading computational cost of the Landweber method is two matrix-vector multiplications. Under suitable boundary conditions, these matrix-vector multiplications can be implemented efficiently via the corresponding fast transforms [23]. In this paper, we employ Neumann boundary conditions and the corresponding fast two-dimensional discrete cosine transform. The choice of boundary conditions may slightly affect the accuracy of image deblurring; for further discussion of this topic, we refer the interested reader to [23].

Remark 4. When the regularized linear system (AᵀA + μI)x = Aᵀb is solved by the Landweber method, the coefficient matrix AᵀA + μI is nonsingular if μ > 0. Moreover, if the point spread function (PSF) is symmetric, the matrices A, Aᵀ, and AᵀA can all be diagonalized by the two-dimensional discrete cosine transform. Thus, from Remark 2, we can conclude that the vector sequences generated by (12) satisfy the conditions of Theorem 1 when solving the above regularized linear system.

Although it exhibits stable convergence behavior, the Landweber method converges too slowly to be practical on its own. Recently, the adaptive Landweber method, which adaptively updates the parameter, was proposed to speed up the convergence of the Landweber method [14]. Numerical experiments showed that, without a good preset parameter, the adaptive Landweber method performs almost the same as the standard Landweber method. An economical way to accelerate the convergence of vector sequences is to apply vector extrapolation methods. This motivates us to combine the Landweber method with vector extrapolation methods for practical use. The vector extrapolation based Landweber method for the ill-posed problem (1) is proposed as follows.

Algorithm 5.
(1) Choose the integers n and k, the discrepancy principle factor τ > 1, the initial vector x_0, and the parameter ω satisfying 0 < ω < 2/σ_1².
(2) Generate x_1, x_2, ..., x_{n+k} by (12).
(3) Generate the next iterate by (12), and compute the MPE approximation s_{n,k} from the most recent k + 2 iterates by (3).
(4) If ‖b − A s_{n,k}‖_2 ≤ τ‖e‖_2, stop. Otherwise set x_i ← x_{i+1} for each iterate in the window, and go to step (3).
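One plausible reading of Algorithm 5 as code (a Python/NumPy sketch under our assumptions: a sliding window of k + 2 iterates, one new Landweber step per MPE solve, and a known noise norm for the discrepancy test; the original window update is not fully specified here):

```python
import numpy as np

def mpe(X):
    # kth-order MPE approximation from the k+2 iterates in the rows of X.
    U = np.diff(X, axis=0)
    k = U.shape[0] - 1
    c, *_ = np.linalg.lstsq(U[:k].T, -U[k], rcond=None)
    c = np.append(c, 1.0)
    return (c / c.sum()) @ X[:k + 1]

def lw_step(A, b, x, omega):
    # One Landweber iteration (12).
    return x + omega * (A.T @ (b - A @ x))

def extrapolated_landweber(A, b, omega, k, n, tol, max_steps=500):
    # Step (2): n warm-up Landweber iterations from the zero vector,
    # then k+1 more to fill a window of k+2 consecutive iterates.
    x = np.zeros(A.shape[1])
    for _ in range(n):
        x = lw_step(A, b, x, omega)
    window = [x]
    for _ in range(k + 1):
        window.append(lw_step(A, b, window[-1], omega))
    for _ in range(max_steps):
        s = mpe(np.array(window))                 # step (3)
        if np.linalg.norm(b - A @ s) <= tol:      # step (4): discrepancy test
            return s
        # Slide the window: drop the oldest iterate, append a new one.
        window = window[1:] + [lw_step(A, b, window[-1], omega)]
    return s
```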

Remark 6. The discrepancy principle for parameter choice is well suited to iterative regularization methods [7]. The stable convergence behavior of the vector extrapolation based Landweber method makes it easy to terminate via the simplified discrepancy principle ‖b − A s_{n,k}‖_2 ≤ τ‖e‖_2. Recent developments in parameter-choice methods (i.e., stopping rules) for Landweber-type methods are discussed in [24].

Remark 7. For a kth-order MPE approximation, an overdetermined linear system needs to be solved (see [8] for more details) and k + 2 vectors need to be stored. Thus, one has to take the computational and storage requirements into account when choosing the order k. In practice, we also observed that it is useful to perform some plain Landweber iterations before applying vector extrapolation methods.

It is not computationally desirable to solve an overdetermined linear system after every single Landweber iteration, as in Algorithm 5. For practical purposes, we can instead compute the kth-order MPE approximation after n Landweber iterations and then restart this procedure using the MPE approximation as the initial vector. The restarted version of the vector extrapolation based Landweber method is proposed as follows.

Algorithm 8.
(1) Choose the integers n and k, the discrepancy principle factor τ > 1, the initial vector x_0, and the parameter ω satisfying 0 < ω < 2/σ_1².
(2) Generate x_1, x_2, ..., x_n by (12).
(3) Generate x_{n+1}, ..., x_{n+k+1} by (12), and compute the MPE approximation s_{n,k} by (3).
(4) If ‖b − A s_{n,k}‖_2 ≤ τ‖e‖_2, stop. Otherwise set x_0 ← s_{n,k}, and go to step (2).
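A sketch of the restarted variant under the same assumptions (Python/NumPy; after each block of n + k + 1 Landweber steps, the MPE approximation of the last k + 2 iterates becomes the new initial vector):

```python
import numpy as np

def mpe(X):
    # kth-order MPE approximation from the k+2 iterates in the rows of X.
    U = np.diff(X, axis=0)
    k = U.shape[0] - 1
    c, *_ = np.linalg.lstsq(U[:k].T, -U[k], rcond=None)
    c = np.append(c, 1.0)
    return (c / c.sum()) @ X[:k + 1]

def lw_step(A, b, x, omega):
    # One Landweber iteration (12).
    return x + omega * (A.T @ (b - A @ x))

def restarted_extrapolated_landweber(A, b, omega, k, n, tol, max_restarts=50):
    x = np.zeros(A.shape[1])
    for _ in range(max_restarts):
        iterates = [x]
        for _ in range(n + k + 1):               # n + k + 1 Landweber steps
            iterates.append(lw_step(A, b, iterates[-1], omega))
        s = mpe(np.array(iterates[-(k + 2):]))   # MPE on the last k+2 iterates
        if np.linalg.norm(b - A @ s) <= tol:     # simplified discrepancy test
            return s
        x = s                                    # restart from the MPE vector
    return s
```

Compared with the sliding-window variant, this version solves only one least-squares problem per n + k + 1 Landweber steps, which matches the reduced number of MPE approximations reported in Example 11.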

4. Numerical Experiments

In this section, we illustrate the performance of the vector extrapolation based Landweber method on typical linear discrete ill-posed problems (1) arising from image deblurring. In all tests, 1% white noise is added to the blurred images. The relative error (ReErr), defined as

ReErr = ‖x̂ − x̄‖_2 / ‖x̄‖_2,

where x̂ and x̄ are the computed solution and the true solution, respectively, is used to measure the quality of the solutions. All tests were carried out on a ThinkPad laptop with an Intel(R) Core(TM) i3 CPU at 2.27 GHz and 2 GB of RAM, under the Windows 7 operating system. Moreover, all experimental results were obtained with a Matlab v7.10 (R2010a) implementation in double-precision floating-point arithmetic.
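The relative error can be computed in one line (a Python/NumPy sketch; the function name is ours):

```python
import numpy as np

def relative_error(x_computed, x_true):
    """ReErr = ||x_computed - x_true||_2 / ||x_true||_2."""
    return np.linalg.norm(x_computed - x_true) / np.linalg.norm(x_true)
```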

Example 9. The first example is a test problem from RestoreTools package [25]. The original image, the point spread function (PSF), and the blurred and noisy image are illustrated in Figure 1. Convergence histories of CGLS, Landweber method, and Algorithm 5 in Figure 2 show that Algorithm 5 converges faster than Landweber method and Algorithm 5 also has a more stable performance than CGLS. Moreover, restored images by Landweber method and Algorithm 5 in Figures 3 and 4 vividly illustrate that Algorithm 5 is superior to Landweber method.

Figure 1: The true image (a), the point spread function (b), and the blurred and noisy image (c).
Figure 2: Convergence histories of CGLS, Landweber method, and Algorithm 5 with ω = 1.7.
Figure 3: Restored images by Landweber method and Algorithm 5.
Figure 4: The non-strongly symmetric PSF.

Example 10. The aim of the second example is to compare the adaptive Landweber method with Algorithm 5. Two original images, as well as the corresponding blurred and noisy images with the PSF of Gaussian blur and the non-strongly symmetric PSF (see Figure 4), are shown in Figure 5. The convergence histories in Figure 6 clearly show that Algorithm 5 converges faster than both the Landweber method and the adaptive Landweber method, and that, without a good preset parameter, the adaptive Landweber method performs almost the same as the Landweber method. The corresponding restored images by the Landweber method, the adaptive Landweber method, and Algorithm 5 are given in Figure 7. We can safely conclude that Algorithm 5 outperforms the Landweber method and the adaptive Landweber method in terms of both convergence speed and quality of recovery.

Figure 5: The original image (a) and the corresponding blurred and noisy image with the PSF of Gauss blur (b). The original image (c) and the corresponding blurred and noisy image with the non-strongly symmetric PSF (d).
Figure 6: Convergence histories of Landweber method, the adaptive Landweber method, and Algorithm 5 with ω = 1.7 (a) and ω = 0.7 (b).
Figure 7: The corresponding restored images by Landweber method ((a) and (d)), the adaptive Landweber method ((b) and (e)), and Algorithm 5 ((c) and (f)) at iteration 35, with ω = 1.7 for (a), (b), and (c) and ω = 0.7 for (d), (e), and (f), respectively.

Example 11. In the third example, we compare the performance of the vector extrapolation based Landweber method with its restarted version. For the test problem of Example 9, the vector extrapolation based Landweber method reaches a relative error of 0.3747 after 100 iterations and 97 MPE approximations, while its restarted version reaches a relative error of 0.3539 after 100 iterations and only 31 MPE approximations. The corresponding restored image by the restarted version of the vector extrapolation based Landweber method (100 Landweber iterations and 31 MPE approximations) is given in Figure 8. These experiments confirm the favorable performance of the restarted version both visually and in terms of relative error and the number of MPE approximations.

Figure 8: Restored image by the restarted version of the vector extrapolation based Landweber method with ω = 1.7.

5. Conclusions

In this paper, we have developed a vector extrapolation technique for accelerating the Landweber method, a classical iterative method for solving ill-posed problems (1). Moreover, both the properties and the practical implementation of the proposed method (including its restarted version) were investigated. Numerical experiments show that the proposed methods are feasible, robust, and efficient.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

All authors contributed equally and significantly in writing this article. All authors wrote, read, and approved the final manuscript.

Acknowledgments

This research is supported by NSFC (61370147, 61402082, and 61772003) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132 and ZYGX2016J138).

References

  1. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, CRC Press, Boca Raton, FL, USA, 1998.
  2. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Mathematics and Its Applications, Springer Netherlands, Dordrecht, The Netherlands, 1996.
  3. P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM Monographs on Mathematical Modeling and Computation, SIAM, Philadelphia, PA, USA, 1998.
  4. J.-J. Mei, Y. Dong, T.-Z. Huang, and W. Yin, “Cauchy noise removal by nonconvex ADMM with convergence guarantees,” Journal of Scientific Computing, pp. 1–24, 2017.
  5. X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, and R. J. Plemmons, “Deblurring and sparse unmixing for hyperspectral images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 7, pp. 4045–4058, 2013.
  6. X.-L. Zhao, F. Wang, and M. K. Ng, “A new convex optimization model for multiplicative noise and blur removal,” SIAM Journal on Imaging Sciences, vol. 7, no. 1, pp. 456–475, 2014.
  7. P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, SIAM, Philadelphia, PA, USA, 2010.
  8. Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, USA, 1996.
  9. C. C. Paige and M. A. Saunders, “LSQR: an algorithm for sparse linear equations and sparse least squares,” ACM Transactions on Mathematical Software, vol. 8, no. 1, pp. 43–71, 1982.
  10. D. C. Fong and M. Saunders, “LSMR: an iterative algorithm for sparse least-squares problems,” SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2950–2971, 2011.
  11. P. Brianzi, P. Favati, O. Menchi, and F. Romani, “A framework for studying the regularizing properties of Krylov subspace methods,” Inverse Problems, vol. 22, no. 3, pp. 1007–1021, 2006.
  12. C. Vonesch and M. Unser, “A fast thresholded Landweber algorithm for wavelet-regularized multidimensional deconvolution,” IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 539–549, 2008.
  13. H. Zhang and L. Z. Cheng, “Projected Landweber iteration for matrix completion,” Journal of Computational and Applied Mathematics, vol. 235, no. 3, pp. 593–601, 2010.
  14. L. Liang and Y. Xu, “Adaptive Landweber method to deblur images,” IEEE Signal Processing Letters, vol. 10, no. 5, pp. 129–132, 2003.
  15. X.-M. Gu, T.-Z. Huang, H.-B. Li, S.-F. Wang, and L. Li, “Two CSCS-based iteration methods for solving absolute value equations,” Journal of Applied Analysis and Computation, vol. 7, no. 4, pp. 1336–1356, 2017.
  16. K. Jbilou and H. Sadok, “Vector extrapolation methods. Applications and numerical comparison,” Journal of Computational and Applied Mathematics, vol. 122, no. 1-2, pp. 149–165, 2000.
  17. A. Sidi, “Vector extrapolation methods with applications to solution of large systems of equations and to PageRank computations,” Computers & Mathematics with Applications, vol. 56, no. 1, pp. 1–24, 2008.
  18. D. A. Smith, W. F. Ford, and A. Sidi, “Extrapolation methods for vector sequences,” SIAM Review, vol. 29, no. 2, pp. 199–233, 1987.
  19. H.-F. Zhang, T.-Z. Huang, C. Wen, and Z.-L. Shen, “FOM accelerated by an extrapolation method for solving PageRank problems,” Journal of Computational and Applied Mathematics, vol. 296, pp. 397–409, 2016.
  20. K. Jbilou and H. Sadok, “LU implementation of the modified minimal polynomial extrapolation method for solving linear and nonlinear systems,” IMA Journal of Numerical Analysis, vol. 19, no. 4, pp. 549–561, 1999.
  21. A. Sidi, “Efficient implementation of minimal polynomial and reduced rank extrapolation methods,” Journal of Computational and Applied Mathematics, vol. 36, no. 3, pp. 305–337, 1991.
  22. A. Sidi, “Convergence and stability properties of minimal polynomial and reduced rank extrapolation algorithms,” SIAM Journal on Numerical Analysis, vol. 23, no. 1, pp. 197–209, 1986.
  23. M. Donatelli and S. Serra-Capizzano, “Anti-reflective boundary conditions and re-blurring,” Inverse Problems, vol. 21, no. 1, pp. 169–182, 2005.
  24. T. Elfving and T. Nikazad, “Stopping rules for Landweber-type iteration,” Inverse Problems, vol. 23, no. 4, pp. 1417–1432, 2007.
  25. J. G. Nagy, K. Palmer, and L. Perrone, “Iterative methods for image deblurring: a Matlab object-oriented approach,” Numerical Algorithms, vol. 36, no. 1, pp. 73–93, 2004.