Mathematical Problems in Engineering

Volume 2017 (2017), Article ID 1375716, 8 pages

https://doi.org/10.1155/2017/1375716

## Vector Extrapolation Based Landweber Method for Discrete Ill-Posed Problems

^{1}School of Mathematical Sciences/Research Center for Image and Vision Computing, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
^{2}School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China
^{3}Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, Nijenborgh 9, P.O. Box 407, 9700 AK Groningen, Netherlands

Correspondence should be addressed to Xian-Ming Gu

Received 10 June 2017; Revised 12 September 2017; Accepted 18 September 2017; Published 16 November 2017

Academic Editor: Qingling Zhang

Copyright © 2017 Xi-Le Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The Landweber method is one of the classical iterative methods for solving linear discrete ill-posed problems. However, it generally converges very slowly. In this paper, we present a vector extrapolation based Landweber method, which exhibits fast and stable convergence behavior. Moreover, a restarted version of the vector extrapolation based Landweber method is proposed for practical considerations. Numerical results are given to illustrate the benefits of the vector extrapolation based Landweber method.

#### 1. Introduction

Ill-posed inverse problems arise in many important applications, including astronomy, medical tomography, geophysics, sound source detection, and image deblurring (refer, e.g., to [1–6] and the references therein). In this paper, we consider linear discrete ill-posed problems of the form
$$ b = Ax + e, \tag{1} $$
where $b \in \mathbb{R}^{m}$ is an observed vector, $A \in \mathbb{R}^{m \times n}$ is a severely ill-conditioned matrix, $x \in \mathbb{R}^{n}$ is the desired solution, and $e \in \mathbb{R}^{m}$ is an unknown additive noise vector. Popular methods for computing an approximation of $x$ are iterative regularization methods, which are very well suited to large-scale problems [3, 7]. Owing to their fast convergence, Krylov subspace methods (e.g., CGLS [8, pp. 288–293], LSQR [9], and LSMR [10]) are usually considered superior to the Landweber method (refer to Brianzi et al. [11] for an excellent overview of the regularizing properties of Krylov subspace methods). However, the semiconvergence behavior of Krylov subspace methods is much more pronounced than that of the Landweber method, so without a good stopping criterion we run a high risk of computing a poor solution with a Krylov subspace method.

Owing to its simplicity and stability, the Landweber method is still competitive with Krylov subspace methods in some real applications [12, 13]. Its slow convergence is its main practical disadvantage. Recently, the adaptive Landweber method, which updates its parameters at each iteration, was proposed to speed up the convergence of the Landweber method [14]. Without a good preset parameter, however, the adaptive Landweber method performs almost the same as the Landweber method. Vector extrapolation methods are often effective tools for accelerating the convergence of vector sequences. This motivates us to combine vector extrapolation methods with the Landweber method in order to accelerate its convergence.

The rest of this paper is organized as follows. In Section 2, a brief review of popular vector extrapolation methods is provided. To accelerate the convergence of the Landweber method, the vector extrapolation based Landweber method is established in Section 3. Moreover, a restarted version of the vector extrapolation based Landweber method is proposed for practical considerations. Numerical experiments on image deblurring are reported in Section 4 to show the performance of the vector extrapolation based Landweber method. Finally, the paper closes with conclusions in Section 5.

#### 2. Vector Extrapolation Methods

In this section, we briefly review popular vector extrapolation methods. Surveys and practical applications of vector extrapolation methods can be found in [15–19]. Most popular vector extrapolation methods fall into two categories: the polynomial-type methods and the $\epsilon$-algorithms [16]. In general, the polynomial-type methods are more economical than the $\epsilon$-algorithms for solving large-scale systems of algebraic equations, in terms of both computation and storage requirements. In the current study, we use the minimal polynomial extrapolation (MPE) method, one of the most efficient polynomial-type methods.

Let $\{x_j\}_{j=0}^{\infty}$ be the vector sequence generated via
$$ x_{j+1} = T x_j + d, \quad j = 0, 1, 2, \ldots, $$
where $x_0$ is an initial vector. The MPE approximation $s_{n,k}$ to $s$, the desired limit or antilimit, is defined as
$$ s_{n,k} = \sum_{j=0}^{k} \gamma_j x_{n+j}, \tag{3} $$
with
$$ \sum_{j=0}^{k} \gamma_j = 1. $$
Define the first differences
$$ u_j = x_{j+1} - x_j, \quad j = 0, 1, 2, \ldots. $$
Then the coefficients $\gamma_0, \gamma_1, \ldots, \gamma_k$ are determined as follows:

(i) Compute the least-squares solution of the overdetermined linear system
$$ U_{n,k-1} \, c = -u_{n+k}, $$
where $U_{n,k-1} = [\, u_n \mid u_{n+1} \mid \cdots \mid u_{n+k-1} \,]$ and $c = (c_0, c_1, \ldots, c_{k-1})^{T}$.

(ii) Set $c_k = 1$, and compute $\gamma_0, \gamma_1, \ldots, \gamma_k$ by
$$ \gamma_j = \frac{c_j}{\sum_{i=0}^{k} c_i}, \quad j = 0, 1, \ldots, k. $$
Based on the QR decomposition and the LU decomposition, two fast and numerically stable implementations of the MPE method were presented in [20, 21].
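As a minimal, self-contained NumPy sketch of steps (i) and (ii) above (illustrative only: the function name `mpe` and its calling convention are ours, and this plain least-squares formulation is not the fast QR-based implementation of [20, 21]):

```python
import numpy as np

def mpe(X):
    """MPE approximation s_{n,k} from the k+2 iterates x_n, ..., x_{n+k+1}.

    Implements steps (i) and (ii): solve the overdetermined system
    U_{n,k-1} c = -u_{n+k} in the least-squares sense, set c_k = 1,
    normalize the coefficients so that they sum to 1, and form the
    weighted combination of the iterates.
    """
    X = np.asarray(X, dtype=float)                 # shape (k+2, N)
    U = np.diff(X, axis=0).T                       # columns u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                          # step (ii): c_k = 1
    gamma = c / c.sum()                            # gamma_j = c_j / sum_i c_i
    return gamma @ X[:-1]                          # s_{n,k} = sum_j gamma_j x_{n+j}
```

For a linearly generated sequence $x_{j+1} = Tx_j + d$, taking $k$ equal to the degree of the minimal polynomial of $T$ with respect to $x_0 - s$ recovers the limit $s$ exactly in exact arithmetic, which is the basis of the acceleration.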

The following theorem in [22] illustrates the accelerating effect of the MPE method.

Theorem 1. *Let the vector sequence $\{x_m\}$ be such that
$$ x_m = s + \sum_{i=1}^{p} v_i \lambda_i^{m}, $$
where $v_1, \ldots, v_p$ are linearly independent vectors and $\lambda_1, \ldots, \lambda_p$ are distinct nonzero scalars satisfying $\lambda_i \neq 1$, ordered such that $|\lambda_1| \geq |\lambda_2| \geq \cdots \geq |\lambda_p|$. Assume for some integer $k < p$ that $|\lambda_k| > |\lambda_{k+1}|$. Then $s_{n,k}$ exists for all large $n$, and $s_{n,k} - s = O(\lambda_{k+1}^{n})$ as $n \to \infty$.*

*Remark 2.* When the matrix $T$ is diagonalizable and $I - T$ is nonsingular, vector sequences generated by $x_{j+1} = T x_j + d$ satisfy the conditions of Theorem 1 [17]. In this case, all eigenvalues of the matrix $T$ are different from 1, the scalars $\lambda_i$ are some or all of the distinct nonzero eigenvalues of $T$, and the vectors $v_i$ are the corresponding eigenvectors.

#### 3. Vector Extrapolation Based Landweber Method

The Landweber method is one of the simplest iterative regularization methods for solving linear discrete ill-posed problems (1). It takes the form
$$ x_{k+1} = x_k + \omega A^{T} (b - A x_k), \quad k = 0, 1, 2, \ldots, \tag{12} $$
where $x_0$ is an initial vector and $\omega$ is a real parameter satisfying $0 < \omega < 2 / \|A^{T} A\|_2$. The filter factors for (12) are
$$ f_j^{(k)} = 1 - (1 - \omega \sigma_j^{2})^{k}, $$
where $\sigma_j$ denote the singular values of $A$; the iteration number $k$ plays the role of the regularization parameter [3].
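As an illustrative dense-matrix sketch of iteration (12) in NumPy (in the image deblurring experiments, the products with $A$ and $A^{T}$ would instead be carried out via fast transforms, as discussed in Remark 3):

```python
import numpy as np

def landweber(A, b, omega, iters, x0=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k).

    The step size omega must satisfy 0 < omega < 2 / ||A^T A||_2, and
    the number of iterations acts as the regularization parameter.
    """
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x + omega * (A.T @ (b - A @ x))
    return x
```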

*Remark 3.* At each iteration, the leading computational cost of the Landweber method is two matrix-vector multiplications. Under suitable boundary conditions, these matrix-vector multiplications can be implemented efficiently via the corresponding fast transforms [23]. In this paper, we employ Neumann boundary conditions and the corresponding fast two-dimensional discrete cosine transform. The choice of boundary conditions may slightly affect the accuracy of image deblurring; for further discussion of this topic, we refer the interested reader to [23].

*Remark 4.* When the Landweber method is applied to the regularized linear system $(A^{T} A + \mu I) x = A^{T} b$, the matrix $A^{T} A + \mu I$ is nonsingular if $\mu > 0$. Moreover, if the point spread function (PSF) is symmetric, the matrices $A$, $A^{T}$, and $A^{T} A + \mu I$ can be diagonalized by the two-dimensional discrete cosine transform. Thus, from Remark 2, we can conclude that the vector sequences generated by (12) satisfy the conditions of Theorem 1 when solving this regularized linear system.

Although it exhibits stable convergence behavior, the Landweber method converges too slowly to be practical. Recently, the adaptive Landweber method, which adaptively updates its parameters, has been proposed to speed up the convergence of the Landweber method [14]. Numerical experiments showed that, without a good preset parameter, the adaptive Landweber method performs almost the same as the Landweber method. One economical way to accelerate the convergence of vector sequences is to apply vector extrapolation methods. This motivates us to combine the Landweber method with vector extrapolation methods to accelerate its convergence for practical use. The vector extrapolation based Landweber method for ill-posed problems (1) is proposed as follows.

*Algorithm 5.* (1) Choose the integers $k$ and $n_0$, the discrepancy principle factor $\tau$, the initial vector $x_0$, and the parameter $\omega$ satisfying $0 < \omega < 2/\|A^{T}A\|_2$. (2) Generate $x_1, \ldots, x_{n_0+k}$ by (12); set $j = n_0$. (3) Generate $x_{j+k+1}$ by (12), and compute the MPE approximation $s_{j,k}$ by (3). (4) If $\|b - A s_{j,k}\| \leq \tau \|e\|$, stop. Otherwise set $x_{j+i} := x_{j+i+1}$ for $i = 0, 1, \ldots, k$, and go to step (3).
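The steps above can be sketched in NumPy as follows; this is illustrative only (the names `ve_landweber`, `n0`, and `noise_norm` are ours, not the paper's), and a small plain least-squares `mpe` helper is restated so that the sketch is self-contained:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation of the iterates in X."""
    X = np.asarray(X, dtype=float)
    U = np.diff(X, axis=0).T                       # columns u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                          # set c_k = 1
    return (c / c.sum()) @ X[:-1]                  # weighted combination of iterates

def ve_landweber(A, b, omega, k, n0, tau, noise_norm, max_iter=500):
    """Vector extrapolation based Landweber method (sketch of Algorithm 5):
    slide a window of k+2 Landweber iterates, replace the current estimate
    by its MPE extrapolation, and stop once ||b - A s|| <= tau * noise_norm."""
    step = lambda v: v + omega * (A.T @ (b - A @ v))
    x = np.zeros(A.shape[1])
    for _ in range(n0):                            # warm-up Landweber iterations
        x = step(x)
    window = [x]
    for _ in range(k + 1):                         # fill the extrapolation window
        window.append(step(window[-1]))
    for _ in range(max_iter):
        s = mpe(window)                            # MPE approximation s_{j,k}
        if np.linalg.norm(b - A @ s) <= tau * noise_norm:
            break
        window = window[1:] + [step(window[-1])]   # one new Landweber iterate
    return s
```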

*Remark 6.* The discrepancy principle is a parameter-choice rule well suited to iterative regularization methods [7]. The stable convergence behavior of the vector extrapolation based Landweber method makes it easy to terminate via the simplified discrepancy principle $\|b - A s_{j,k}\| \leq \tau \|e\|$. Recent developments of parameter-choice methods (i.e., stopping rules) for Landweber-type methods are discussed in [24].

*Remark 7.* For a $k$th-order MPE approximation, an overdetermined linear system with $k$ unknowns needs to be solved (see [8] for more details), and $k+2$ vectors need to be stored. Thus, one has to take the computational and storage requirements into account when choosing the order $k$. We also observed that, in practice, it is useful to perform some Landweber iterations before applying vector extrapolation.

It is not computationally desirable to solve an overdetermined linear system after every single Landweber iteration, as in Algorithm 5. For practical purposes, we can instead compute the $k$th-order MPE approximation after every $k+1$ Landweber iterations and then restart the procedure using the MPE approximation as the initial vector. The restarted version of the vector extrapolation based Landweber method is proposed as follows.

*Algorithm 8.* (1) Choose the integers $k$ and $n_0$, the discrepancy principle factor $\tau$, the initial vector $x_0$, and the parameter $\omega$ satisfying $0 < \omega < 2/\|A^{T}A\|_2$. (2) Generate $x_1, \ldots, x_{n_0}$ by (12); set $j = n_0$. (3) Generate $x_{j+1}, \ldots, x_{j+k+1}$ by (12), and compute the MPE approximation $s_{j,k}$ by (3). (4) If $\|b - A s_{j,k}\| \leq \tau \|e\|$, stop. Otherwise set $x_j := s_{j,k}$, and go to step (3).
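A simplified NumPy sketch of the restarted variant follows (illustrative only: it omits the warm-up iterations, and the names `restarted_ve_landweber` and `noise_norm` are ours); the plain least-squares `mpe` helper is restated so the sketch is self-contained:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation of the iterates in X."""
    X = np.asarray(X, dtype=float)
    U = np.diff(X, axis=0).T                       # columns u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                          # set c_k = 1
    return (c / c.sum()) @ X[:-1]                  # weighted combination of iterates

def restarted_ve_landweber(A, b, omega, k, tau, noise_norm, max_restarts=200):
    """Restarted vector extrapolation based Landweber method (sketch of
    Algorithm 8): run k+1 Landweber steps from the current starting vector,
    extrapolate with MPE, and restart from the extrapolated vector until
    the discrepancy principle ||b - A x|| <= tau * noise_norm is met."""
    x0 = np.zeros(A.shape[1])
    for _ in range(max_restarts):
        window = [x0]
        for _ in range(k + 1):                     # k+1 Landweber iterations
            window.append(window[-1] + omega * (A.T @ (b - A @ window[-1])))
        x0 = mpe(window)                           # restart from the MPE approximation
        if np.linalg.norm(b - A @ x0) <= tau * noise_norm:
            break
    return x0
```

Compared with Algorithm 5, each restart amortizes one small least-squares solve over $k+1$ Landweber iterations instead of one.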

#### 4. Numerical Experiments

In this section, we illustrate the performance of the vector extrapolation based Landweber method on typical linear discrete ill-posed problems (1) arising from image deblurring. In all tests, 1% white noise is added to the blurred images. The relative error (ReErr), defined as
$$ \mathrm{ReErr} = \frac{\|\hat{x} - x\|_2}{\|x\|_2}, $$
where $\hat{x}$ and $x$ are the computed solution and the true solution, respectively, is used to measure the quality of the solutions. All tests were carried out on a ThinkPad laptop with an Intel(R) Core(TM) i3 CPU at 2.27 GHz and 2 GB of RAM, under the Windows 7 operating system. Moreover, all experimental results were obtained using a Matlab v7.10 (R2010a) implementation with double-precision floating-point arithmetic.

*Example 9.* The first example is a test problem from the RestoreTools package [25]. The original image, the point spread function (PSF), and the blurred and noisy image are shown in Figure 1. The convergence histories of CGLS, the Landweber method, and Algorithm 5 in Figure 2 show that Algorithm 5 converges faster than the Landweber method and also behaves more stably than CGLS. Moreover, the images restored by the Landweber method and by Algorithm 5 in Figures 3 and 4 vividly illustrate that Algorithm 5 is superior to the Landweber method.