Mathematical Problems in Engineering


Research Article | Open Access

Volume 2019 | Article ID 1912535 | 18 pages | https://doi.org/10.1155/2019/1912535

Approximate Schur-Block ILU Preconditioners for Regularized Solution of Discrete Ill-Posed Problems

Academic Editor: Thomas Schuster
Received: 03 Jan 2019
Revised: 27 Feb 2019
Accepted: 21 Mar 2019
Published: 24 Apr 2019

Abstract

High order iterative methods with a recurrence formula for approximate matrix inversion are proposed such that the matrix multiplications and additions in the calculation of the matrix polynomials of the hyperpower methods of orders of convergence p = 4k + 3, where k ≥ 1 is an integer, are reduced through factorizations and nested loops in which the iterations are defined using a recurrence formula. Therefore, the computational cost is lowered from p to k + 4 matrix multiplications per step. An algorithm is proposed to obtain the regularized solution of ill-posed discrete problems with noisy data by constructing an approximate Schur-Block Incomplete LU (Schur-BILU) preconditioner and by preconditioning the one step stationary iterative method. From the proposed methods of approximate matrix inversion, the methods of orders p = 4k + 3 are applied for approximating the Schur complement matrices. This algorithm is applied to solve two problems of the Fredholm integral equation of the first kind. The first example is the harmonic continuation problem and the second example is Phillips' problem. Furthermore, an experimental study on some nonsymmetric linear systems of coefficient matrices with strong indefinite symmetric components from the Harwell-Boeing collection is also given. Numerical analysis for the regularized solutions of the considered problems is given, and numerical comparisons with methods from the literature are provided through tables and figures.

1. Introduction

The numerical solution of many scientific and engineering problems requires the solution of large linear systems of equations of the form

    Ax = b,    (1)

where x, b ∈ R^n and A is an n × n nonsingular and usually unsymmetric and unstructured matrix. When A is large and sparse, the solution of (1) is generally obtained through Krylov subspace methods such as simple iteration, generalized minimal residual (GMRES), biconjugate gradient (BICG), Quasi-Minimal Residual (QMR), conjugate gradient squared (CGS), biconjugate gradient stabilized (BICGSTAB), and, for Hermitian positive-definite problems, conjugate gradient (CG) and, for Hermitian indefinite problems, minimum residual (MINRES). These methods converge very rapidly when the coefficient matrix is close to the identity matrix [1]. Unfortunately, in many practical applications A is not close to the identity matrix, as in the discretization of the Fredholm integral equation of the first kind [2]. The robustness and efficiency of Krylov subspace methods can be improved dramatically by using a suitable preconditioner [1, 3]. Basically, there are two classes of preconditioners: implicit and explicit preconditioners [4].

In implicit preconditioning methods, typically an approximate factorization of A, that is, A = LU - R, is first computed, where R is the defect matrix, and it is used in an implicit way, as in the incomplete LU (ILU) factorization of A [4]. An experimental study of ILU preconditioners for indefinite matrices is given in [5] to gain a better practical understanding of ILU preconditioners and to help improve their reliability, because besides fatal breakdowns due to zero pivots, the major causes of failure are inaccuracy and instability of the triangular solves. Another example of implicit preconditioning is the algorithm BBI (Band Block Inverse), which was presented to compute the blocks of the exact inverse of a block band matrix within a band consisting of a fixed number of blocks to the left and to the right of the diagonal in each block row; it is practically viable only when the bandwidths are small and the entries are scalars or the block matrices have small order (see Chapter 8 of [4]).

The explicit preconditioning methods compute an approximation C of the inverse A^{-1} in explicit form and use it to form a preconditioning matrix in explicit form. That is, for solving (1), the system CAx = Cb is considered by iteration, where C and CA are explicitly available [4]. As an example, an approximate inverse preconditioner for Toeplitz systems with multiple right-hand sides is given in [6]. The study of factorized sparse approximate inverse (FSAI) preconditioning for solving linear algebraic systems with symmetric positive-definite coefficient matrices is proposed theoretically in [7], whereas the corresponding iterative construction of preconditioners is given in [8]. "For matrices with irregular structure, it is unclear how to choose the nonzero pattern of the FSAI preconditioner at least in a nearly optimal way and the serial costs of constructing this preconditioner can be very high if its nonzero pattern is sufficiently dense" [9], and the authors proposed two new approaches to raise the efficiency. Most recently, in [10] two classes of iterative methods for matrix inversion were proposed, and these methods are used as approximate inverse preconditioners to precondition the BICG method for solving linear systems (1) with the same coefficient matrix and multiple right-hand sides.

There are also hybrid preconditioning methods that form the preconditioner in a two-stage way, where an incomplete factorization (implicit) method is used for a matrix partitioned in blocks and a direct (explicit) method is used to approximate the inverses of the pivot blocks that occur at each stage of the factorization. As an example, in Chapter 7 of [4], when A is an n × n matrix, the methods are constructed by recursively applying a block partitioning of the matrix A in which elimination is to take place and then computing an approximation of its Schur complement. Using an approximate version of Young's "Property A" for symmetric positive-definite linear systems with a block matrix, it is proved in [11] that the condition number of the Schur complement is smaller than the condition number obtained by block-diagonal preconditioning.
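For reference, for a matrix partitioned into 2 × 2 blocks with nonsingular pivot block A_{11}, the Schur complement that such methods approximate is

    S = A_{22} - A_{21} A_{11}^{-1} A_{12},

which is obtained by block elimination of the first block row of A = [A_{11} A_{12}; A_{21} A_{22}].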

When the matrix A is large and sparse, a technique based on exploiting the idea of successive independent sets has been used to develop preconditioners that are akin to multilevel preconditioners [12, 13], called ILU with multielimination (ILUM). It is well known that block ILU preconditioners usually perform better than their point counterparts, and block versions of the ILUM factorization preconditioning technique (BILUM) are proposed in [14]. The robustness of the BILUM preconditioner is improved in [15] by adding interlevel acceleration in the form of solving interlevel Schur complement matrices approximately and by using two implementation strategies to utilize the Schur complement technique in multilevel recursive ILU preconditioning techniques (RILUM). For problems arising from practical applications, such as those from computational fluid dynamics, the coefficient matrices are often irregularly structured, ill-conditioned, and of very large size. If the underlying PDEs are discretized by high order finite elements on unstructured domains, the coefficient matrices may have many nonzeros in each row. Furthermore, some large blocks may be ill-conditioned or nearly singular, and the standard techniques used to invert these blocks may produce unstable or inaccurate LU factors, causing less accurate preconditioners. Because of these drawbacks, the simple strategies used in the standard BILUM technique proposed in [14] may become inefficient [16].

In order to determine a meaningful approximation of the solution of (1) when the coefficient matrix A is severely ill-conditioned or nearly singular, one typically replaces the linear system (1) by a nearby system that is less sensitive to perturbations of the right-hand side b and the coefficient matrix A. This replacement is commonly referred to as regularization. Discrete ill-posed problems where both the coefficient matrix and the right-hand side are contaminated by noise appear in a variety of engineering applications [17-19]. The Tikhonov regularization method [20-23], the Singular Value Decomposition (SVD) method [24, 25], the well-posed stochastic extensions method [26], extrapolation techniques [27], the iterative method yielding the best accessible estimate of the solution of the integral equation [28], and a technique for the numerical solution of certain integral equations of the first kind [29] are the most popular regularization methods.
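As an illustration of the first of these approaches, Tikhonov regularization in its standard form (stated here generically, not as the specific variant used later in this paper) replaces (1) by the penalized least squares problem

    min_x { ||Ax - b||_2^2 + λ^2 ||x||_2^2 },    equivalently    (A^T A + λ^2 I) x_λ = A^T b,

where λ > 0 is the regularization parameter balancing the data fit against the size of the solution.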

The motivation of this study is, firstly, to propose high order iterative methods for the approximate matrix inversion of a real nonsingular matrix A such that the matrix multiplications and additions in the calculation of the matrix polynomials of the hyperpower methods of orders of convergence p = 4k + 3, where k ≥ 1 is an integer, are reduced through factorizations and nested loops in which the iterations are defined using a recurrence formula, and, secondly, to give an algorithm that constructs a block incomplete LU decomposition based on an approximate Schur complement for the nonsingular n × n coefficient matrix (when n is even) of the algebraic linear system of equations arising from ill-posed discrete problems with noisy data. In this algorithm the methods of orders p = 4k + 3 are applied for approximating the Schur complement matrices, and the obtained preconditioners are used to precondition the one step stationary iterative method. Section 6 of the study is devoted to numerical examples. Algorithm 6, based on Algorithm 1 in [10], and the newly proposed algorithm, called Algorithm 11, are applied to find regularized solutions of algebraic linear systems obtained by discretization of two problems of the Fredholm integral equation of the first kind with noisy data, of which the first example is the harmonic continuation problem [2, 26] and the second example is Phillips' problem [29]. For numerical comparisons, the same problems are solved by Algorithms 6 and 11 with the method of order 7 proposed in [30] and the methods of orders 7, 11, and 15 proposed in [31]. Furthermore, experiments with both algorithms are conducted on some nonsymmetric linear systems of coefficient matrices with strong indefinite symmetric components from the Harwell-Boeing collection. Analysis and investigations are provided based on the obtained numerical results. In the last section, concluding remarks are given based on the theoretical and numerical analysis.

2. Hyperpower Iterative Methods for Approximate Matrix Inversion

Exhaustive studies have been published investigating iterative methods for computing the approximate inverse of a nonsingular square matrix (see [10, 32-34] and references therein), generalized inverses [30, 35], outer inverses [31], and Moore-Penrose inverses [36], based on matrix multiplication and addition. Let I denote the n × n unit matrix and A be an n × n nonsingular matrix. We denote the approximate inverse of A at the m-th iteration by X_m and the residual matrix by E_m = I - AX_m. The best known iterative methods for obtaining the approximate inverse of A are the p-th order hyperpower methods [35]

    X_{m+1} = X_m (I + E_m + E_m^2 + ... + E_m^{p-1}),    (3)

which require p matrix-by-matrix multiplications for each iteration.
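In code, one unfactored step of (3) can be sketched as follows (a direct transcription given purely for illustration; hyperpower_iteration is a hypothetical helper name, and the factorized forms discussed next reduce the multiplication count below p):

    import numpy as np

    def hyperpower_iteration(X, A, p):
        # One step of the p-th order hyperpower method (3):
        # X_{m+1} = X_m (I + E + E^2 + ... + E^(p-1)), with E = I - A X.
        n = A.shape[0]
        E = np.eye(n) - A @ X          # residual matrix E_m (1 multiplication)
        S = np.eye(n) + E              # accumulates I + E + ... + E^(p-1)
        P = E
        for _ in range(p - 2):         # p - 2 further multiplications
            P = P @ E
            S = S + P
        return X @ S                   # final multiplication: p in total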

When A is an m × n matrix of rank r with real or complex elements, equation (3) may be used to find generalized inverses such as the Moore-Penrose inverse, based on the choice of the initial inverse X_0; see [35]. One of the recent studies includes the factorization (4) of (3) given in [30] for computing generalized inverses. The method in (4) performs five matrix-by-matrix multiplications per iteration. Another study is [31], in which several systematic algorithms for factorizations of the hyperpower iterative family of arbitrary orders are proposed for computing outer inverses. Among these methods, the order 7 method (5), the order 11 method (6), and the order 15 method (7), requiring six, seven, and seven matrix-by-matrix multiplications per iteration, respectively, are stated by the authors. In [10] two classes of iterative methods for matrix inversion are proposed, and these methods are used as approximate inverse preconditioners to precondition the BICG method for solving linear systems (1) of the same coefficient matrix with multiple right-hand sides. For a given integer parameter s, the iterations are built from matrix-valued functions of the residual E_m = I - AX_m, where X_m is the approximate inverse of A at the m-th iteration. Class 1 methods are generated by one family of such functions and Class 2 methods by another, with orders of convergence increasing with s. These classes are given by (8) and (9), respectively.
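A simple and well-known instance of such a factorization, given here only to illustrate the idea (it is not the specific factorization of [30] or [31]), uses repeated squaring: since

    I + E + E^2 + ... + E^{2^q - 1} = (I + E)(I + E^2)(I + E^4) ... (I + E^{2^{q-1}}),

the hyperpower step of order p = 2^q can be evaluated with about 2q matrix multiplications instead of the 2^q required by the unfactored polynomial.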

3. A New Family of Methods with Recurrence Formula

Let I denote the n × n unit matrix and A be an n × n nonsingular matrix. We define matrix-valued functions consisting of matrix multiplications and additions, and we call the matrix-valued function given in (14) the Family Generator Function. We say that the first level of (14) is 1 nested loop and the next level is 2 nested loops. Using this presentation, we express (14) in Horner's form with nested loops, which can be given by the recurrence formula (16). For an integer k ≥ 1 and the corresponding Family Generator Function in (16), we present a family of iterative methods (17) for the approximate matrix inversion of A.
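The precise Family Generator Function is the one defined in (16); purely to illustrate the nested-loop (Horner) idea on the polynomial of (3), one can write (horner_hyperpower_step is a hypothetical helper, not the recurrence (16) itself):

    import numpy as np

    def horner_hyperpower_step(X, A, p):
        # Nested (Horner) evaluation of the hyperpower polynomial in (3):
        # I + E(I + E(I + ... (I + E) ...)) with p - 1 nested levels.
        n = A.shape[0]
        I = np.eye(n)
        E = I - A @ X
        S = I + E                  # innermost level
        for _ in range(p - 2):     # each further level costs one multiplication
            S = I + E @ S
        return X @ S

The recurrence (16) goes further by grouping powers of E inside the nested loops, so that the k loops of the order p = 4k + 3 method cost only k + 4 multiplications in total (Theorem 5), rather than the p multiplications of the plain Horner form above.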

Theorem 1. Let A be an n × n nonsingular matrix. The family of methods (17) is a factorization of (3) with k nested loops for the orders p = 4k + 3, k ≥ 1 an integer.

Proof. When k = 1, from (16), formula (17) can be written as the order p = 7 method (3). Assume that the claim holds true for k; then for k + 1, unfolding one more nested loop in the recurrence (16) shows that (17) coincides with the order p = 4(k + 1) + 3 method (3).

Theorem 2 (see [34]). Let A be an n × n nonsingular matrix. For a given integer k ≥ 1, if the method (17) is used, the necessary and sufficient condition for the convergence of (17) to A^{-1} is that ρ(E_0) < 1 holds, where ρ is the spectral radius, E_0 = I - AX_0, and X_0 is the initial approximation. The corresponding sequence of residuals satisfies E_{m+1} = E_m^p, p = 4k + 3.

Proof. Using (21), on the basis of Theorem 4 in Section 7 of Chapter 7 of [35], and using (17), at iteration m we have E_m = E_0^{p^m} (22). Denoting the error by T_m = A^{-1} - X_m and using (21), we get T_m = A^{-1}E_m (23). Now using (22) and (23) we obtain A^{-1} - X_m = A^{-1}E_0^{p^m} (24). If ρ(E_0) < 1, then E_0^{p^m} converges to the zero matrix as m → ∞. That is, the proposed family of methods (17) converges to A^{-1} with order p = 4k + 3 for k ≥ 1 an integer.
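For completeness, the one-step residual identity behind this argument follows directly from (3):

    E_{m+1} = I - A X_{m+1} = I - (I - E_m)(I + E_m + ... + E_m^{p-1}) = I - (I - E_m^p) = E_m^p,

and iterating this identity gives E_m = E_0^{p^m}.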

Lemma 3. Let A be an n × n nonsingular matrix. For a given integer k ≥ 1, if the method from the family (17) is applied to find the approximate inverse of A with an initial choice X_0 satisfying ||E_0|| < 1, where E_0 = I - AX_0, then ||E_m|| ≤ ||E_0||^{p^m} < 1 for all m ≥ 0.

Proof. The proof follows by induction using the proposed family of methods (17).

4. Computational Complexity

Let x_m = X_m b and r_m = b - Ax_m be the corresponding approximate solution of (1) and the residual error, respectively [10]. Let θ(p) be the number of matrix-by-matrix multiplications per iteration of the p-th order hyperpower method given in factorized form. Let κ(p) be the number of matrix-by-matrix additions other than additions by the identity, and let κ_I(p) be the number of matrix additions by the identity per step. For obtaining an error ||E_m|| ≤ ε by an iterative method obtained by factorization of the hyperpower method (3) using nested loops, the authors of [10] showed that a total of m* iterations is required, where m* ≈ ln(ln ε / ln ||E_0||) / ln p. Therefore, as a criterion to compare iterative methods originating from (3), the authors of [10] defined the Asymptotic Convergence Factor θ(p)/ln p, since this factor occurs in the product of θ(p) with the iteration number m*. When two factorizations of the p-th order hyperpower method have the same θ(p)/ln p values, then it is necessary to use another quantity which measures the efficiency of the hyperpower method with respect to both matrix-by-matrix multiplications and matrix-by-matrix additions.
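For instance, assuming the asymptotic convergence factor has the form θ(p)/ln p described above, the order p = 7 comparison between a five-multiplication and a six-multiplication factorization reads

    5 / ln 7 ≈ 2.57    and    6 / ln 7 ≈ 3.08,

so the five-multiplication method is asymptotically cheaper per digit of accuracy gained.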

Definition 4. We define the asymptotic convergence values by (27), where the components of the asymptotic convergence values are denoted as in (28); see also [34].

Theorem 5. Let A be an n × n nonsingular matrix. For a given integer k ≥ 1, if the method from the family (17) with convergence order p = 4k + 3 is used to compute the approximate inverse of A with an initial approximation X_0 satisfying ||E_0|| < 1, where E_0 = I - AX_0, then θ(p) = k + 4, κ(p) = 2, and κ_I(p) = k + 2.

Proof. Consider the proposed family of methods (17). If k = 1, then (16) gives the order p = 7 method. Thus, one iteration requires five matrix-by-matrix multiplications, two matrix-by-matrix additions, and three additions by identity; therefore, θ(7) = 5, κ(7) = 2, and κ_I(7) = 3 hold true per step. Assume that the proposition is true for k, that is, the method with k nested loops requires θ = k + 4, κ = 2, and κ_I = k + 2 per step. Then for k + 1 we have a method with k + 1 nested loops in which the additional loop requires one further multiplication and one further addition by identity, so θ = k + 5, κ = 2, and κ_I = k + 3 hold true for k + 1.

We present the asymptotic convergence values from (27) together with the total multiplication count and total addition count per iteration required by the proposed methods (17) for k = 1, 2, 3, 4 in Table 1, where the cost of an addition with I or a subtraction from I is counted separately as an addition by identity. In what follows, we refer to the proposed methods in (17) by their orders and to the comparison methods by their equation numbers: (4) [30] and (5)-(7) [31]. Table 2 shows the rounded asymptotic convergence values in (27) of the proposed methods for the orders 7, 11, 15, and 19, respectively, of the methods (4)-(7), and of the methods (8) and (9) from [10] (see also [34]). This table demonstrates that the proposed iterative method (17) of convergence order 7 has a lower asymptotic convergence value, defined in (28), than the method (5) of the same order given in [31]. Moreover, the proposed methods of orders 7, 11, and 15 possess lower values from (28) than the methods (5)-(7) of the same orders, respectively, given in [31].


Table 1. Number of matrix-by-matrix multiplications (θ), matrix-by-matrix additions, and additions by identity per iteration.

Order 7:  proposed method (17): 5, 2, 3;  method (4) [30]: 5, 2, 3;  method (5) [31]: 6, 1, 4
Order 11: proposed method (17): 6, 2, 4;  method (6) [31]: 7, 3, 4
Order 15: proposed method (17): 7, 2, 5;  method (7) [31]: 7, 3, 4
Order 19: proposed method (17): 8, 2, 6;  method (8) [10]: 8, 3, 4;  method (9) [10]: 10, 2, 5

5. Algorithms for Numerical Regularized Solution

Discrete ill-posed problems arise from the discretization of ill-posed problems such as the Fredholm integral equation of the first kind with a smooth kernel [37]. If the discretization leads to the linear system

    Ax = b,    (31)

then all the singular values of A, as well as the SVD components of the solution, on the average decay gradually to zero, and we say that a discrete Picard condition is satisfied; see [37]. In these cases the solution is very sensitive to perturbations in the data, such as measurement or approximation errors, and thus regularization methods are essential for computing meaningful approximate solutions. In this study we consider the problem of computing stable approximations of discrete ill-posed problems (31) where the data obey the following assumptions: (a) problem (31) is consistent, i.e., b belongs to the column space of A; (b) instead of A and b we consider a noisy matrix Ã and a noisy vector b̃ with available noise levels δ_A and δ_b, respectively, such that

    ||Ã - A|| ≤ δ_A,    ||b̃ - b|| ≤ δ_b.    (32)

Discrete ill-posed problems with data satisfying (32) appear in a number of applications such as inverse scattering [17] and potential theory [18]. If Ã is square and nonsingular, the obtained perturbed algebraic linear system

    Ãx = b̃    (33)

may be solved using Algorithms 6 and 11 given in the following subsections. When Ã is rectangular, denote the set of all least squares solutions of (33) by (34); then x belongs to this set if and only if the orthogonality condition

    Ã^T (b̃ - Ãx) = 0    (35)

is satisfied. On the basis of Theorem 1.1.3 in [38], if Ã has full column rank, then the unique regularized least squares solution is

    x = (Ã^T Ã)^{-1} Ã^T b̃.    (36)

In this case we may apply Algorithms 6 and 11 to the normalized system (35).
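As a small illustration of assumption (32) (hypothetical data, not one of the test problems of this paper), one can form a severely ill-conditioned coefficient matrix, a consistent right-hand side, and perturbations of known levels:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Hilbert-like matrix: a classic severely ill-conditioned example.
    i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
    A = 1.0 / (i + j - 1.0)
    x_true = np.ones(n)
    b = A @ x_true                      # consistent right-hand side, assumption (a)

    delta_A, delta_b = 1e-8, 1e-8      # available noise levels, assumption (b)
    E = rng.standard_normal((n, n)); E *= delta_A / np.linalg.norm(E, 2)
    e = rng.standard_normal(n);      e *= delta_b / np.linalg.norm(e)
    A_noisy, b_noisy = A + E, b + e    # noisy data satisfying (32)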

5.1. Algorithm 6 and Convergence Analysis

Let Ã be an n × n nonsingular matrix. We denote the error of the approximation of the inverse matrix by T_m = Ã^{-1} - X_m, where X_m is the approximate inverse of Ã obtained at the m-th iteration using the proposed methods (17) for integers k ≥ 1. Thus,

    x_m = X_m b̃,    (37)
    r_m = b̃ - Ãx_m    (38)

are the corresponding numerical regularized solution and the regularized residual error, respectively; also we have r_m = (I - ÃX_m) b̃ = E_m b̃. In this section and the following sections the norm of a matrix is considered to be a subordinate matrix norm. For a given integer k ≥ 1 and a given prescribed accuracy ε, Algorithm 6 finds the approximate solution x_M given in (37) for the perturbed system (33) using the methods (17) by performing M iterations to reach the accuracy ||r_M|| ≤ ε.

Algorithm 6. This algorithm finds the numerical regularized solution of the perturbed system (33), Ãx = b̃, by using the proposed methods (17) for k ≥ 1 and is given on the basis of Algorithm 1 in [10].
Step 1. Choose an initial matrix X_0 such that ||E_0|| = ||I - ÃX_0|| < 1 and an integer k ≥ 1.
Step 2 (let m = 0). Evaluate x_0 and r_0 using (37), (38), respectively. Then, calculate ||r_0||.
Do Step 3-Step 5 until ||r_m|| ≤ ε.
Step 3. Evaluate E_m = I - ÃX_m.
Step 4. Apply the iteration of the corresponding method in (17) to find X_{m+1}.
Step 5. Set m = m + 1, and evaluate x_m and r_m using (37), (38), respectively. Then, calculate ||r_m||.
Step 6. If M is the number of iterations performed, then x_M is the approximate regularized solution satisfying ||r_M|| ≤ ε.
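A minimal sketch of Algorithm 6 in this spirit, assuming the classical initial guess X_0 = Ã^T / (||Ã||_1 ||Ã||_∞) (which guarantees ρ(E_0) < 1) and using the unfactored order-p hyperpower step from Section 2 in place of the factorized iteration (17):

    import numpy as np

    def algorithm6(A, b, eps=1e-10, p=7, max_iter=50):
        # Refine the approximate inverse X_m by an order-p hyperpower step
        # and stop when ||r_m|| = ||b - A x_m|| <= eps.
        n = A.shape[0]
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # Step 1
        for m in range(max_iter):
            x = X @ b                          # (37), Steps 2 and 5
            r = b - A @ x                      # (38)
            if np.linalg.norm(r) <= eps:       # stopping test of Steps 3-5
                return x, m                    # Step 6: x_M with ||r_M|| <= eps
            E = np.eye(n) - A @ X              # Step 3: residual matrix E_m
            S, P = np.eye(n) + E, E            # Step 4: order-p hyperpower step
            for _ in range(p - 2):
                P = P @ E
                S = S + P
            X = X @ S
        return X @ b, max_iter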

Remark 7. When the methods from (17) of orders p = 4k + 3, for k > 1, are to be applied in Algorithm 6, it should be noted that as k increases, the value θ(p) also increases. Therefore, to lower the total computational cost, in Step 4 a few iterations of one of the proposed methods from (17) of lower order, say with k = 1, can be used first to find an approximate inverse. Next, this approximate inverse may be used as the initial approximate inverse for the considered method of higher order k > 1.

Theorem 8. Let Ã be an n × n nonsingular matrix and let Algorithm 6 be applied for the solution of the linear perturbed system (33). If the chosen initial approximation X_0 satisfies ||E_0|| = ||I - ÃX_0|| < 1, then the sequences of the norms of the errors ||T_m||, ||r_m||, and ||Ã^{-1}b̃ - x_m|| converge to zero as m → ∞, and the following error estimates are obtained at the m-th iteration:

    ||r_m|| ≤ ||E_0||^{p^m} ||b̃||,    (40)
    ||Ã^{-1}b̃ - x_m|| ≤ ||Ã^{-1}|| ||E_0||^{p^m} ||b̃||,    (41)

where p = 4k + 3, E_0 = I - ÃX_0, and T_m = Ã^{-1} - X_m.

Proof. The proof is analogous to the proof of Theorem 3 in [10]. From Theorem 2, using (22) and (24) in a subordinate matrix norm, we get ||E_m|| ≤ ||E_0||^{p^m} and ||T_m|| ≤ ||Ã^{-1}|| ||E_0||^{p^m}, respectively, where p = 4k + 3. Using the residual error and the approximate solution at the m-th step, as x_m = X_m b̃ and r_m = b̃ - Ãx_m, we get r_m = (I - ÃX_m) b̃ = E_m b̃, and from (22) we obtain the bound (40). Using norm inequalities it follows that ||r_m|| ≤ ||E_m|| ||b̃||, and because ||E_0|| < 1 the sequence of the norms of the residual errors converges to zero as m → ∞. Furthermore, Ã^{-1}b̃ - x_m = Ã^{-1} r_m; using (24) yields

    ||Ã^{-1}b̃ - x_m|| ≤ ||Ã^{-1}|| ||r_m||,    (47)

and hence we get (41) by using (40) in (47). The sequence of the norms of the errors ||Ã^{-1}b̃ - x_m|| converges to zero as m → ∞ since ||E_0|| < 1.

Theorem 9. For the linear systems (1) and (33), where Ã is an n × n nonsingular matrix, assume that ||Ã - A|| ≤ δ_A and ||b̃ - b|| ≤ δ_b, and that the noise levels are small enough for Ã to remain nonsingular. If Algorithm 6 is applied for the solution of the linear perturbed system (33) by performing M iterations to compute an approximate inverse of Ã with a chosen initial approximation X_0 satisfying ||E_0|| < 1, then the following normwise error bound is obtained:

    ||x - x_M|| ≤ ||Ã^{-1}|| ( δ_A ||x|| + δ_b + ||E_0||^{p^M} ||b̃|| ).

Here x_M is the approximate solution obtained at the M-th iteration, x is the exact solution of (1), and p = 4k + 3 is the order of the method (17) used in Algorithm 6.

Proof. The proof is obtained using equation (47) of Theorem 8 and is analogous to the proof of Theorem 5 in [10].

5.2. Algorithm 11 and Convergence Analysis

When Ã is an n × n nonsingular matrix and n is even, we consider a block partitioning of Ã as

    Ã = [ Ã_11  Ã_12 ]
        [ Ã_21  Ã_22 ],

where each block has size n/2 × n/2. When Ã_11 and the Schur complement of Ã_11 are nonsingular, we propose a splitting Ã ≈ LU of the perturbed matrix Ã, where L and U are block lower and block upper triangular matrices, respectively, as follows:

    L = [ I          0 ]        U = [ Ã_11  Ã_12 ]
        [ Ã_21 X_11  I ],           [ 0     S̃   ].

Here I and 0 are of size n/2 × n/2 and represent the identity and zero matrices, respectively. S̃ = Ã_22 - Ã_21 X_11 Ã_12 is the approximation of the Schur complement S = Ã_22 - Ã_21 Ã_11^{-1} Ã_12 of Ã_11 [39], and X_11 is the approximate inverse of Ã_11 obtained by using the proposed methods from (17) of order p = 4k + 3, for k ≥ 1, with an initial approximate inverse satisfying ||I - Ã_11 X_0|| < 1, to a prescribed accuracy by performing the required number of iterations.
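A minimal sketch of this construction, assuming the approximate block LU form displayed above and substituting an unfactored hyperpower iteration for the factorized methods (17) (approximate_inverse and schur_bilu are hypothetical helper names):

    import numpy as np

    def approximate_inverse(A, tol=1e-12, p=7, max_iter=100):
        # Order-p hyperpower iteration (3) with the classical initial guess
        # X_0 = A^T / (||A||_1 ||A||_inf), which ensures rho(I - A X_0) < 1.
        n = A.shape[0]
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        for _ in range(max_iter):
            E = np.eye(n) - A @ X
            if np.linalg.norm(E) <= tol:
                break
            S, P = np.eye(n) + E, E
            for _ in range(p - 2):
                P = P @ E
                S = S + P
            X = X @ S
        return X

    def schur_bilu(A):
        # Approximate Schur-Block ILU splitting A ~ L U for even n:
        # X11 approximates inv(A11), and S = A22 - A21 X11 A12
        # approximates the Schur complement of A11.
        n = A.shape[0] // 2
        A11, A12 = A[:n, :n], A[:n, n:]
        A21, A22 = A[n:, :n], A[n:, n:]
        X11 = approximate_inverse(A11)
        S = A22 - A21 @ X11 @ A12
        I, Z = np.eye(n), np.zeros((n, n))
        L = np.block([[I, Z], [A21 @ X11, I]])
        U = np.block([[A11, A12], [Z, S]])
        return L, U

Applying the resulting preconditioner then amounts to one block lower and one block upper triangular solve with L and U per iteration of the preconditioned method.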