Abstract

High-order iterative methods with a recurrence formula for approximate matrix inversion are proposed, in which the matrix multiplications and additions needed to evaluate the matrix polynomials of the hyperpower methods of integer convergence order p are reduced through factorizations and nested loops whose iterations are defined by a recurrence formula. Consequently, the computational cost per step is lowered from the p matrix multiplications of the standard hyperpower method to roughly half that number. An algorithm is proposed to obtain the regularized solution of ill-posed discrete problems with noisy data by constructing an approximate Schur-Block Incomplete LU (Schur-BILU) preconditioner and by preconditioning the one-step stationary iterative method. From the proposed methods of approximate matrix inversion, the high-order members are applied to approximate the Schur complement matrices. This algorithm is applied to solve two problems of the Fredholm integral equation of the first kind: the first example is the harmonic continuation problem, and the second example is Phillips' problem. Furthermore, an experimental study on some nonsymmetric linear systems whose coefficient matrices have strongly indefinite symmetric parts, taken from the Harwell-Boeing collection, is also given. Numerical analysis of the regularized solutions of the considered problems is given, and numerical comparisons with methods from the literature are provided through tables and figures.

1. Introduction

The numerical solution of many scientific and engineering problems requires the solution of large linear systems of equations of the form Ax = b, (1) where A is a nonsingular and usually nonsymmetric, unstructured n × n matrix. When A is large and sparse, the solution of (1) is generally obtained through Krylov subspace methods such as simple iteration, the generalized minimal residual method (GMRES), the biconjugate gradient method (BICG), the quasi-minimal residual method (QMR), the conjugate gradient squared method (CGS), the biconjugate gradient stabilized method (BICGSTAB), and, for Hermitian positive-definite problems, the conjugate gradient method (CG), or, for Hermitian indefinite problems, the minimum residual method (MINRES). These methods converge very rapidly when the coefficient matrix is close to the identity matrix [1]. Unfortunately, in many practical applications A is not close to the identity matrix, for example, in the discretization of the Fredholm integral equation of the first kind [2]. The robustness and efficiency of Krylov subspace methods can be improved dramatically by using a suitable preconditioner [1, 3]. Basically, there are two classes of preconditioners: implicit and explicit preconditioners [4].

In implicit preconditioning methods, typically an approximate factorization of A, say A ≈ LU with a defect matrix E measuring the factorization error, is first computed, and the factors are used in an implicit way through triangular solves, as in the incomplete LU (ILU) factorization of A [4]. An experimental study of ILU preconditioners for indefinite matrices is given in [5] to gain a better practical understanding of ILU preconditioners and to help improve their reliability, since, besides fatal breakdowns due to zero pivots, the major causes of failure are inaccuracy and instability of the triangular solves. Another example of implicit preconditioning is the BBI (Band Block Inverse) algorithm, which was presented to compute the blocks of the exact inverse of a block band matrix within a band consisting of a fixed number of blocks to the left and to the right of the diagonal in each block row; it is practically viable only when the bandwidths are small and the entries are scalars or block matrices of small order (see Chapter 8 of [4]).
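As a concrete illustration of the implicit approach, the following minimal SciPy sketch factors a small matrix approximately and applies the factors only through triangular solves inside a Krylov method. It is not tied to the papers cited above; the test matrix, drop tolerance, and fill factor are arbitrary placeholders.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu, LinearOperator, gmres

# Implicit preconditioning: factor A approximately (ILU) and apply the factors
# only through triangular solves inside a Krylov method (here GMRES).
A = csc_matrix(np.array([[4., 1., 0.],
                         [1., 4., 1.],
                         [0., 1., 4.]]))
b = np.ones(3)
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # A ~ LU with dropping
M = LinearOperator(A.shape, matvec=ilu.solve)   # M ~ A^{-1}, applied implicitly
x, info = gmres(A, b, M=M)                      # preconditioned Krylov solve
```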

The explicit preconditioning methods compute an approximation B of the inverse in explicit form and use it to form a preconditioning matrix, also in explicit form. That is, for solving (1), the preconditioned system BAx = Bb is solved by iteration when A and B are explicitly available [4]. As an example, an approximate inverse preconditioner for Toeplitz systems with multiple right-hand sides is given in [6]. Factorized sparse approximate inverse (FSAI) preconditioning for solving linear algebraic systems with symmetric positive-definite coefficient matrices is studied theoretically in [7], whereas the corresponding iterative construction of the preconditioners is given in [8]. "For matrices with irregular structure, it is unclear how to choose the nonzero pattern of the FSAI preconditioner at least in a nearly optimal way and the serial costs of constructing this preconditioner can be very high if its nonzero pattern is sufficiently dense" [9], and the authors of [9] proposed two new approaches to raise the efficiency. Most recently, two classes of iterative methods for matrix inversion were proposed in [10] and used as approximate inverse preconditioners for the BICG method when solving linear systems (1) with the same coefficient matrix and multiple right-hand sides.

There are also hybrid preconditioning methods that form the preconditioner in a two-stage way: an incomplete factorization (implicit) method is used for a matrix partitioned in blocks, and a direct (explicit) method is used to approximate the inverses of the pivot blocks that occur at each stage of the factorization. As an example, in Chapter 7 of [4], for an n × n matrix, methods are constructed by recursively applying a block partitioning of the matrix in which elimination is to take place and then computing an approximation of its Schur complement. Using an approximate version of Young's "Property A" for symmetric positive-definite linear systems with a block matrix, it is proved in [11] that the condition number of the Schur complement is smaller than the condition number obtained by block-diagonal preconditioning.

When the matrix is large and sparse, a technique based on exploiting the idea of successive independent sets has been used to develop preconditioners akin to multilevel preconditioners [12, 13], called ILU with multielimination (ILUM). It is well known that block ILU preconditioners usually perform better than their point counterparts, and block versions of the ILUM factorization preconditioning technique (BILUM) are proposed in [14]. The robustness of the BILUM preconditioner is improved in [15] by adding interlevel acceleration, in the form of solving interlevel Schur complement matrices approximately, and by using two implementation strategies to utilize the Schur complement technique in multilevel recursive ILU preconditioning techniques (RILUM). For problems arising from practical applications, such as those from computational fluid dynamics, the coefficient matrices are often irregularly structured, ill-conditioned, and of very large size. If the underlying PDEs are discretized by high-order finite elements on unstructured domains, the coefficient matrices may have many nonzeros in each row. Furthermore, some large blocks may be ill-conditioned or nearly singular, and the standard techniques used to invert these blocks may produce unstable or inaccurate LU factors, yielding less accurate preconditioners. Because of these drawbacks, the simple strategies used in the standard BILUM technique proposed in [14] may become inefficient [16].

In order to determine a meaningful approximation of the solution of (1) when the coefficient matrix is severely ill-conditioned or nearly singular, one typically replaces the linear system (1) by a nearby system that is less sensitive to perturbations of the right-hand side and the coefficient matrix. This replacement is commonly referred to as regularization. Discrete ill-posed problems in which both the coefficient matrix and the right-hand side are contaminated by noise appear in a variety of engineering applications [17–19]. The Tikhonov regularization method [20–23], the Singular Value Decomposition (SVD) method [24, 25], the method of well-posed stochastic extensions [26], extrapolation techniques [27], the iterative method yielding the best accessible estimate of the solution of the integral equation [28], and a technique for the numerical solution of certain integral equations of the first kind [29] are among the most popular regularization methods.

The motivation of this study is, firstly, to propose high-order iterative methods for the approximate inversion of a real nonsingular matrix A such that the matrix multiplications and additions in the evaluation of the matrix polynomials of the hyperpower methods of integer convergence order p are reduced through factorizations and nested loops whose iterations are defined by a recurrence formula, and, secondly, to give an algorithm that constructs a block incomplete LU decomposition based on an approximate Schur complement for the nonsingular coefficient matrix (of even order n) of the algebraic linear systems of equations arising from ill-posed discrete problems with noisy data. In this algorithm the proposed high-order methods are applied to approximate the Schur complement matrices, and the obtained preconditioners are used to precondition the one-step stationary iterative method. Section 6 of the study is devoted to numerical examples. Algorithm 6, based on Algorithm 1 in [10], and the newly proposed Algorithm 11 are applied to find regularized solutions of algebraic linear systems obtained by discretization of two problems of the Fredholm integral equation of the first kind with noisy data, of which the first example is the harmonic continuation problem [2, 26] and the second example is Phillips' problem [29]. For numerical comparison the same problems are solved by Algorithms 6 and 11 with the method proposed in [30] and the methods of the corresponding orders proposed in [31]. Furthermore, experiments with both algorithms are conducted on some nonsymmetric linear systems whose coefficient matrices have strongly indefinite symmetric parts, taken from the Harwell-Boeing collection. Analysis and investigation of the obtained numerical results are provided. In the last section, concluding remarks are given based on the theoretical and numerical analysis.

2. Hyperpower Iterative Methods for Approximate Matrix Inversion

Exhaustive studies have been published investigating iterative methods for computing the approximate inverse of a nonsingular square matrix (see [10, 32–34] and references therein), generalized inverses [30, 35], outer inverses [31], and Moore-Penrose inverses [36], based on matrix multiplication and addition. Let I denote the unit matrix and A be a nonsingular n × n matrix. We denote the approximate inverse of A at the i-th iteration by X_i, and the residual matrix by R_i = I − AX_i. The best known iterative methods for obtaining the approximate inverse of A form the hyperpower method of order p [35], X_{i+1} = X_i (I + R_i + R_i^2 + ⋯ + R_i^{p−1}), p ≥ 2, (3) which requires p matrix-by-matrix multiplications per iteration.
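For concreteness, the following minimal NumPy sketch realizes the hyperpower step of order p with the polynomial evaluated in Horner form; it is an illustration, not the paper's implementation. The initial guess X_0 = A^T/(‖A‖_1 ‖A‖_∞) is a classical scaling for which the spectral radius of I − AX_0 is less than 1 for any nonsingular A.

```python
import numpy as np

def hyperpower_step(A, X, p):
    """One hyperpower step of order p: X <- X(I + R + R^2 + ... + R^{p-1}),
    with the polynomial evaluated in Horner form (p matrix multiplications)."""
    n = A.shape[0]
    I = np.eye(n)
    R = I - A @ X                 # residual matrix R_i = I - A X_i
    P = I + R                     # innermost factor
    for _ in range(p - 2):        # Horner nesting: I + R(I + R(...))
        P = I + R @ P
    return X @ P

def hyperpower_inverse(A, p=2, tol=1e-12, max_iter=100):
    """Iterate until ||I - A X|| is below tol; the scaled transpose is a
    classical initial guess guaranteeing rho(I - A X0) < 1."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    n = A.shape[0]
    for _ in range(max_iter):
        X = hyperpower_step(A, X, p)
        if np.linalg.norm(np.eye(n) - A @ X) < tol:
            break
    return X
```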

When A is a rectangular matrix of rank r with real or complex elements, equation (3) may also be used to find generalized inverses such as the Moore-Penrose inverse, based on the choice of the initial inverse X_0; see [35]. One of the recent studies, [30], gives a factorization of (3), stated here as (4), for computing generalized inverses; the method in (4) performs fewer matrix-by-matrix multiplications than the unfactorized hyperpower method of the same order. Another study is [31], in which several systematic algorithms for factorizations of the hyperpower iterative family of arbitrary orders are proposed for computing outer inverses. Among these, the authors state three methods of increasing orders, given here as (5), (6), and (7), each again requiring fewer matrix-by-matrix multiplications than the corresponding unfactorized hyperpower method. In [10] two classes of iterative methods for matrix inversion are proposed, and these methods are used as approximate inverse preconditioners to precondition the BICG method for solving linear systems (1) with the same coefficient matrix and multiple right-hand sides. For a given integer parameter, the methods in [10] are built from matrix-valued functions of the residual R_i = I − AX_i, where X_i is the approximate inverse of A at the i-th iteration; Class 1 methods are generated by one family of such functions, and Class 2 methods, of higher orders of convergence, are generated by another. These classes are given by (8) and (9).

3. A New Family of Methods with Recurrence Formula

Let I denote the unit matrix and A be a nonsingular n × n matrix. We define matrix-valued functions of the residual R = I − AX consisting of matrix multiplications and additions, and we call the matrix-valued function given in (14) the Family Generator Function, where the inner factors are built recursively: the innermost factor is a 1-nested loop, the next one a 2-nested loop, and so on. Using this presentation we express (14) in Horner's form with nested loops, which can be written by the recurrence formula (16). For an integer k ≥ 1 and the corresponding Family Generator Function, we present a family of iterative methods for the approximate matrix inversion of A as (17).
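Since the explicit displays (14)–(17) do not survive in this version of the text, the following LaTeX fragment illustrates, on the classical hyperpower polynomial, the kind of Horner nesting with a recurrence that the family uses; it is an illustration of the principle, not the paper's exact generator.

```latex
% Horner's nested form of the hyperpower polynomial of order p:
% the sum I + R + R^2 + ... + R^{p-1} is evaluated by the recurrence
%   P_1 = I + R,   P_{j+1} = I + R P_j,   j = 1, ..., p-2,
% so that X_{i+1} = X_i P_{p-1}, and each nesting level costs one
% matrix multiplication.
\[
  X_{i+1} \;=\; X_i\bigl(I + R_i\bigl(I + R_i\bigl(\cdots\bigl(I + R_i\bigr)\cdots\bigr)\bigr)\bigr),
  \qquad R_i = I - A X_i .
\]
```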

Theorem 1. Let A be a nonsingular n × n matrix. For every integer k ≥ 1, the family of methods (17) with k nested loops is a factorization of the hyperpower method (3) of the corresponding order.

Proof. When k = 1, expanding the recurrence (16) shows that formula (17) can be written as the hyperpower method (3) of the corresponding order. Assume the claim holds true for some k ≥ 1; then for k + 1 the recurrence adds one further nested loop, and expanding it again yields the hyperpower method (3) of the next order in the family. By induction the claim holds for every integer k ≥ 1.

Theorem 2 (see [34]). Let A be a nonsingular n × n matrix. For a given integer k ≥ 1, if the method (17) of order p is used, the necessary and sufficient condition for the convergence of (17) to A^{-1} is that ρ(R_0) < 1 holds, where ρ denotes the spectral radius, R_0 = I − AX_0, and X_0 is the initial approximation. The corresponding sequence of residuals satisfies R_{i+1} = R_i^p, i = 0, 1, 2, ….

Proof. Using (21), on the basis of Theorem 4 in Section 7 of Chapter 7 of [35], and using (17), at iteration i we have R_{i+1} = I − AX_{i+1} = R_i^p. (22) Denoting the error by E_i = A^{-1} − X_i and using (21), we get E_i = A^{-1}R_i. (23) Now using (22) and (23) we obtain ‖E_i‖ ≤ ‖A^{-1}‖ ‖R_0^{p^i}‖. (24) If ρ(R_0) < 1, then R_0^{p^i} converges to the zero matrix as i → ∞. That is, the proposed family of methods (17) converges to A^{-1} with order p for every integer k ≥ 1.

Lemma 3. Let A be a nonsingular n × n matrix. For a given integer k ≥ 1, if the method from the family (17) is applied to find the approximate inverse of A with an initial choice X_0 satisfying ‖R_0‖ = q < 1, where R_0 = I − AX_0, then ‖R_i‖ ≤ q^{p^i} < 1 for all i ≥ 0.

Proof. The proof follows by induction using the proposed family of methods (17).

4. Computational Complexity

Let x_i = X_i b and r_i = b − Ax_i be the corresponding approximate solution of (1) and the residual error, respectively [10]. Let M denote the number of matrix-by-matrix multiplications (MMs) per iteration of a hyperpower method of order p given in factorized form, let N_A denote the number of matrix-by-matrix additions (MAs) other than additions with the identity, and let N_I denote the number of matrix additions with the identity per step. For obtaining an error ‖R_s‖ ≤ ε by an iterative method obtained by factorization of the hyperpower method (3) using nested loops, the authors of [10] showed that a total of s iterations is required, where s ≥ log(log ε / log ‖R_0‖) / log p. Therefore, as a criterion to compare iterative methods originating from (3), the authors of [10] defined the asymptotic convergence factor, which appears in the product of M with the iteration number s, as in (27). When two factorizations of a hyperpower method of the same order have the same value of this factor, it is necessary to use another quantity, which measures the efficiency of the hyperpower method with respect to both matrix-by-matrix multiplications and matrix-by-matrix additions.

Definition 4. We define the asymptotic convergence values by (27), where the components of the corresponding vector are denoted as in (28); see also [34].
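The following minimal sketch illustrates how such a cost comparison works in practice, using the iteration-count estimate above. The per-step costs and the order pairing in the example are hypothetical placeholders, not the paper's tabulated values; the five-multiplication cost echoes the k = 1 base case established in the proof of Theorem 5 below.

```python
import math

def total_mults(M, p, r0_norm, eps):
    """Total matrix-by-matrix multiplications to reach ||R_s|| <= eps for a
    method of convergence order p costing M multiplications per step,
    assuming ||R_0|| = r0_norm < 1."""
    s = math.ceil(math.log(math.log(eps) / math.log(r0_norm), p))  # iterations needed
    return M * s

# Hypothetical comparison: a plain order-7 hyperpower step costs 7 MMs,
# while a factorized step of the same order may cost only 5.
for M, label in ((7, "plain order-7"), (5, "factorized order-7")):
    print(label, total_mults(M, 7, 0.9, 1e-14))
```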

Theorem 5. Let A be a nonsingular n × n matrix. For a given integer k ≥ 1, if the method from the family (17) with the corresponding convergence order is used to compute the approximate inverse of A with an initial approximation X_0 satisfying ‖R_0‖ < 1, where R_0 = I − AX_0, then the per-step numbers of matrix-by-matrix multiplications, of matrix-by-matrix additions other than with the identity, and of matrix additions with the identity are as established by induction in the proof below; in particular, for k = 1 they are 5, 2, and 3, respectively.

Proof. Consider the proposed family of methods (17). If k = 1, then the recurrence (16) produces a single nested loop, and the iteration requires five MMs, two MAs, and three additions with the identity; hence the claimed counts hold per step for k = 1. Assume that the proposition is true for some k ≥ 1, that is, the method with k nested loops requires the claimed numbers of MMs, MAs, and identity additions per step. Then for k + 1 the recurrence adds one further nested loop, which increases each count by a fixed amount per level, and so the claimed counts hold for k + 1. By induction the result follows for every integer k ≥ 1.

We present in Table 1 the total multiplication count and the total addition count per iteration required by the proposed methods (17) for integer k ≥ 1, where the cost of an addition with the identity, or of a subtraction from the identity, is counted separately. We compare the proposed methods in (17) with the method of the corresponding order in (4) from [30] and with the methods of the corresponding orders in (5), (6), and (7) from [31]. Table 2 shows the rounded asymptotic convergence values (27) of the proposed methods, of the methods (4)–(7) from [30, 31], and of the methods (8) and (9) from [10] (see also [34]). This table demonstrates that the proposed iterative method (17) has a lower value of the measure defined in (28) than the method (5) of the same order given in [31]. Moreover, the proposed methods of the higher orders also possess lower values of (28) than the methods (5)–(7) of the same orders, respectively, given in [31].

5. Algorithms for Numerical Regularized Solution

Discrete ill-posed problems arise from the discretization of ill-posed problems such as Fredholm integral equations of the first kind with a smooth kernel [37]. If the discretization leads to the linear system Ax = b, (31) then all the singular values of A, as well as the SVD components of the solution, on average decay gradually to zero, and we say that a discrete Picard condition is satisfied; see [37]. In this case the solution is very sensitive to perturbations in the data, such as measurement or approximation errors, and thus regularization methods are essential for computing meaningful approximate solutions. In this study we consider the problem of computing stable approximations of discrete ill-posed problems (31) where the data obey the following assumptions: (a) problem (31) is consistent, i.e., b belongs to the column space of A; (b) instead of A and b we are given a noisy matrix Ã and a noisy vector b̃ with available noise levels δ and ε, respectively, such that ‖Ã − A‖ ≤ δ and ‖b̃ − b‖ ≤ ε. (32) Discrete ill-posed problems with data satisfying (32) appear in a number of applications such as inverse scattering [17] and potential theory [18]. If Ã is nonsingular, the obtained perturbed algebraic linear system Ãx = b̃ (33) may be solved using Algorithms 6 and 11 given in the following subsections. When Ã is not nonsingular, denote the set of all least squares solutions of (33) by (34); then a vector belongs to this set if and only if the orthogonality condition, namely that the residual b̃ − Ãx̃ is orthogonal to the column space of Ã, is satisfied. On the basis of Theorem 1.1.3 in [38], if Ã has full column rank, then the unique regularized least squares solution is obtained from the normalized system (35). In this case we may apply Algorithms 6 and 11 to the normalized system (35).
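A minimal sketch of the data model (32) follows; the noise construction and the fallback to the normal equations are illustrative assumptions consistent with the text, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(A, b, delta, eps):
    """Return noisy data (A~, b~) with ||A~ - A||_2 <= delta and
    ||b~ - b||_2 <= eps, as in assumption (32)."""
    E = rng.standard_normal(A.shape)
    e = rng.standard_normal(b.shape)
    A_t = A + delta * E / np.linalg.norm(E, 2)
    b_t = b + eps * e / np.linalg.norm(e)
    return A_t, b_t

# If A~ is nonsingular, the perturbed system A~ x = b~ of (33) is passed to
# Algorithm 6 or 11; otherwise one may work with the normal equations
# A~^T A~ x = A~^T b~ (our reading of the normalized system (35)), whose
# matrix is square and, when A~ has full column rank, nonsingular.
```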

5.1. Algorithm 6 and Convergence Analysis

Let Ã be a nonsingular n × n matrix. We denote the error of the approximation of the inverse matrix by Ẽ_i = Ã^{-1} − X̃_i, where X̃_i is the approximate inverse of Ã obtained at the i-th iteration using the proposed methods (17) for integers k ≥ 1. Thus, x̃_i = X̃_i b̃ (37) and r̃_i = b̃ − Ãx̃_i (38) are the corresponding numerical regularized solution and the regularized residual error, respectively; also we have r̃_i = ÃẼ_i b̃. In this section and the following sections, the norm of a matrix is taken to be a subordinate matrix norm. For a given integer k ≥ 1 and a prescribed accuracy ϵ, Algorithm 6 finds the approximate solution given in (37) for the perturbed system (33) using the methods (17), performing as many iterations as needed to reach the accuracy ϵ.

Algorithm 6. This algorithm finds the numerical regularized solution of the perturbed system Ãx = b̃ by using the proposed methods (17) for a chosen integer k ≥ 1 and is given on the basis of Algorithm 1 in [10].
Step 1. Choose an initial matrix X̃_0 such that ‖I − ÃX̃_0‖ < 1 and an integer k ≥ 1.
Step 2 (set i = 0). Evaluate x̃_0 and r̃_0 using (37) and (38), respectively. Then calculate ‖r̃_0‖.
Do Steps 3-5 until ‖r̃_i‖ ≤ ϵ.
Step 3. Evaluate R̃_i = I − ÃX̃_i.
Step 4. Apply the iteration of the corresponding method in (17) to find X̃_{i+1}.
Step 5. Set i = i + 1 and evaluate x̃_i and r̃_i using (37) and (38), respectively. Then calculate ‖r̃_i‖.
Step 6. If ℓ is the number of iterations performed, then x̃_ℓ is the approximate regularized solution satisfying ‖r̃_ℓ‖ ≤ ϵ. A code sketch of this loop is given below.
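The following hedged NumPy sketch mirrors the steps above. The plain order-p Horner sweep stands in for the paper's family (17), which achieves the same order at a lower per-step cost; the initial scaling X̃_0 = Ã^T/(‖Ã‖_1 ‖Ã‖_∞) is a classical choice guaranteeing ρ(I − ÃX̃_0) < 1 for any nonsingular Ã.

```python
import numpy as np

def algorithm6(A_t, b_t, p=7, eps=1e-8, max_iter=100):
    """Sketch of Algorithm 6: iterate an order-p hyperpower-type method on the
    perturbed system A~ x = b~ until ||r~_i|| <= eps (illustrative only)."""
    n = A_t.shape[0]
    I = np.eye(n)
    # Step 1: initial guess (classical scaling choice)
    X = A_t.T / (np.linalg.norm(A_t, 1) * np.linalg.norm(A_t, np.inf))
    x = X @ b_t                              # Step 2: (37)
    r = b_t - A_t @ x                        #         (38)
    it = 0
    while np.linalg.norm(r) > eps and it < max_iter:
        R = I - A_t @ X                      # Step 3
        P = I + R
        for _ in range(p - 2):               # Step 4: one order-p sweep (Horner)
            P = I + R @ P
        X = X @ P
        x = X @ b_t                          # Step 5: update (37), (38)
        r = b_t - A_t @ x
        it += 1
    return x, X, it                          # Step 6
```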

Remark 7. When the higher-order members of the family (17) are to be applied in Algorithm 6, it should be noted that as k increases, the per-step cost also increases. Therefore, to lower the total computational cost of Step 4, a few iterations of one of the lower-order members of (17) can be used first to find an approximate inverse; this approximate inverse may then be used as the initial approximate inverse for the considered higher-order method, as sketched below.
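A sketch of this warm-start strategy, reusing hyperpower_step from the sketch in Section 2; the sweep counts and orders are illustrative assumptions.

```python
import numpy as np

# Assumes hyperpower_step(A, X, p) from the Section 2 sketch is in scope.
def warm_started_inverse(A, p_high=7, cheap_sweeps=3):
    """Run a few cheap order-2 (Newton-Schulz) sweeps first, then switch to
    the expensive high-order sweep, as suggested in Remark 7."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(cheap_sweeps):
        X = hyperpower_step(A, X, 2)      # low-order, low cost per step
    return hyperpower_step(A, X, p_high)  # high-order step from a good start
```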

Theorem 8. Let Ã be a nonsingular n × n matrix and let Algorithm 6 be applied for the solution of the linear perturbed system (33). If the chosen initial approximation X̃_0 satisfies ‖R̃_0‖ = ‖I − ÃX̃_0‖ < 1, then the sequences of the norms of the errors ‖Ẽ_i‖ and ‖x̃ − x̃_i‖ and of the residuals ‖r̃_i‖ converge to zero as i → ∞, and the error estimates (40) and (41) are obtained at the i-th iteration, in terms of ‖R̃_0‖, ‖Ã^{-1}‖, and ‖b̃‖.

Proof. The proof is analogous to the proof of Theorem 3 in [10]. From Theorem 2, using (22) and (24) in a subordinate matrix norm, we obtain the corresponding bounds on ‖R̃_i‖ and ‖Ẽ_i‖, respectively. Using the residual error and the approximate solution at the i-th step, as x̃_i = X̃_i b̃ and r̃_i = b̃ − Ãx̃_i, we get r̃_i = (I − ÃX̃_i) b̃ = R̃_i b̃, and from (22) we obtain ‖r̃_i‖ ≤ ‖R̃_0‖^{p^i} ‖b̃‖. Using norm inequalities it follows that the sequence of the norms of the residual errors converges to zero as i → ∞, because ‖R̃_0‖ < 1. Furthermore, using (24) yields ‖x̃ − x̃_i‖ = ‖Ẽ_i b̃‖ ≤ ‖Ã^{-1}‖ ‖R̃_0‖^{p^i} ‖b̃‖, and hence we get (47); estimate (41) results by substitution in (47). The sequence of the norms of the errors converges to zero as i → ∞ since ‖R̃_0‖ < 1.

Theorem 9. For the linear systems (1) and (33), where Ã is a nonsingular n × n matrix, assume that the data satisfy the noise bounds (32) and that the noise levels are small enough that ‖A^{-1}‖δ < 1. If Algorithm 6 is applied for the solution of the linear perturbed system (33) by performing ℓ iterations to compute an approximate inverse of Ã with a chosen initial approximation X̃_0 satisfying ‖I − ÃX̃_0‖ < 1, then the normwise error bounds (48) and (49) are obtained. Here x̃_ℓ is the approximate solution obtained at the ℓ-th iteration, x is the exact solution of (1), and p is the order of the method (17) used in Algorithm 6.

Proof. The proof is obtained using equation (47) of Theorem 8 and is analogous to the proof of Theorem 5 in [10].

5.2. Algorithm 11 and Convergence Analysis

When Ã is a nonsingular n × n matrix and n is even, we consider a block partitioning of Ã as in (50), where each block Ã_{lj}, l, j = 1, 2, has size n/2 × n/2. When Ã_{11} and its Schur complement are nonsingular, we propose a splitting of the perturbed matrix Ã based on the approximate factorization Ã ≈ L̃Ũ, where L̃ and Ũ are block lower and block upper triangular matrices, respectively, as given in (51). Here the identity and zero matrices appearing in (51) are of size n/2 × n/2; S̃ is an approximation of the Schur complement S = Ã_{22} − Ã_{21}Ã_{11}^{-1}Ã_{12} of Ã [39], and X̃_{11} is an approximate inverse of Ã_{11} obtained by using the proposed methods from (17) of the chosen order, with an initial approximate inverse satisfying ‖I − Ã_{11}X̃_{11,0}‖ < 1, by performing enough iterations to reach a prescribed accuracy.
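The following display records the standard Schur-complement block factorization that (50)-(51) build on. This is one common arrangement of the factors; the paper's exact L̃ and Ũ may distribute the blocks differently.

```latex
% Block partitioning (50) and a Schur-complement block factorization:
% S~ approximates the Schur complement, and X11 approximates A11^{-1}
% (computed with the family (17)).  The factorization is exact when
% X11 = A11^{-1} and S~ = A22 - A21 A11^{-1} A12.
\[
\tilde{A} =
\begin{pmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{pmatrix},
\qquad
\tilde{L} =
\begin{pmatrix} \tilde{A}_{11} & 0 \\ \tilde{A}_{21} & \tilde{S} \end{pmatrix},
\qquad
\tilde{U} =
\begin{pmatrix} I & \tilde{X}_{11}\tilde{A}_{12} \\ 0 & I \end{pmatrix},
\]
\[
\tilde{S} \;\approx\; \tilde{A}_{22} - \tilde{A}_{21}\,\tilde{A}_{11}^{-1}\tilde{A}_{12}.
\]
```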

Lemma 10. Let Ã be a nonsingular n × n matrix and n be even. Assume that Ã_{11} and the Schur complement S are nonsingular, where Ã_{11} is the leading block of Ã as partitioned in (50), and let X̃_{11} be an approximate inverse of Ã_{11} obtained by the proposed method from (17) of the chosen order, performing enough iterations that ‖I − Ã_{11}X̃_{11}‖ is at most a prescribed accuracy. If S̃ is taken as in (51), then the defect matrix Ẽ_S = L̃Ũ − Ã satisfies the bound (52), and Ẽ_S converges to the zero matrix as the number of iterations grows, where L̃ and Ũ are as given in (51). In addition, this incomplete decomposition of Ã is a convergent splitting if ρ((L̃Ũ)^{-1}Ẽ_S) < 1.

Proof. Assume that Ã_{11} and S are nonsingular. Hence L̃Ũ is given by (53). On the basis of Lemma 3, the residual matrix R̃_{11} = I − Ã_{11}X̃_{11} satisfies (54). From Theorem 2 we have ‖R̃_{11}‖ ≤ ‖R̃_{11,0}‖^{p^ℓ}, where ℓ is the number of iterations performed, and substituting into (54) gives (52). Since ‖R̃_{11,0}‖ < 1, it follows that Ẽ_S converges to the zero matrix as ℓ → ∞. Using (52) and (53), we obtain the expression (55) for the iteration matrix (L̃Ũ)^{-1}Ẽ_S. The eigenvalues of this matrix are the roots of the equation det((L̃Ũ)^{-1}Ẽ_S − λI) = 0, where det denotes the determinant of a square matrix. On the basis of the determinant formula for lower block triangular matrices in [40], we obtain (56) and hence (57). Therefore, on the basis of the convergence of the one-step stationary iterative method (see Theorem 5.3, Chapter 5 in [4]), if ρ((L̃Ũ)^{-1}Ẽ_S) < 1 then the splitting is a convergent splitting.

Algorithm 11. This algorithm constructs the approximate block Schur-BILU decomposition of Ã, using the proposed methods (17) to approximate the Schur complement matrix, and then finds the regularized solution of the perturbed system (33) by using the constructed Schur-BILU decomposition of Ã to precondition the one-step stationary iterative method.
Set the stage number to 1 and take the subblock Ã_{11} as partitioned in (50).
Step 1. Set i = 0 and choose an initial matrix X̃_0 such that ‖I − Ã_{11}X̃_0‖ < 1,
and choose the method from (17) for the corresponding integer k ≥ 1. Evaluate ‖I − Ã_{11}X̃_0‖.
Do Steps 2 and 3 until the prescribed accuracy is reached.
Step 2. Apply the iteration of the corresponding method in (17) to find X̃_{i+1}.
Step 3 (set i = i + 1). Evaluate R̃_i = I − Ã_{11}X̃_i and ‖R̃_i‖.
Step 4. Let ℓ_1 be the total number of iterations performed in Steps 2 and 3 and denote the approximate inverse of Ã_{11} at ℓ_1 iterations by X̃_{11}. Find the approximate Schur complement S̃ of (51). (i) An approximate inverse preconditioner for S̃ can be constructed using Steps 1-3 by taking S̃ in place of Ã_{11} and repeating Steps 1-3; if ℓ_2 is the total number of iterations performed, the obtained approximate inverse preconditioner of S̃ is denoted by X̃_S.
Step 5. Construct the lower block triangular matrix L̃ and the upper block triangular matrix Ũ as defined in (51).
Step 6. Set m = 0, choose an initial solution x̃_0, and find r̃_0 = b̃ − Ãx̃_0.
Step 7. Solve the system L̃Ũz̃_0 = r̃_0 by solving the block lower and block upper triangular systems. The block subsystems that arise may be solved using Algorithm 6; also, the preconditioned BICG method or the preconditioned GMRES method may be used to solve these block subsystems, for which the approximate inverse preconditioners X̃_{11} and X̃_S obtained in Step 4 can be used as preconditioners. Next, denote the approximate solution by x̃_1 = x̃_0 + z̃_0, find r̃_1 = b̃ − Ãx̃_1, and increase the value of m to 1.
Do Steps 8 and 9 until ‖r̃_m‖ ≤ ϵ.
Step 8. Calculate r̃_m = b̃ − Ãx̃_m and ‖r̃_m‖.
Step 9. Solve L̃Ũz̃_m = r̃_m, set x̃_{m+1} = x̃_m + z̃_m, find r̃_{m+1}, and set m = m + 1.
Step 10. If ℓ is the number of iterations performed in Steps 8 and 9, then x̃_ℓ is the approximate solution satisfying ‖r̃_ℓ‖ ≤ ϵ. A code sketch of the whole procedure is given below.
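The following hedged NumPy sketch mirrors Algorithm 11 under the block arrangement displayed after (50)-(51) above. The even split into two equal blocks, the stand-in inversion routine, and the exact form of the triangular solves are assumptions for illustration.

```python
import numpy as np

def hyperpower_inv(B, p=7, tol=1e-10, max_iter=100):
    """Approximate B^{-1} by order-p hyperpower sweeps (stand-in for (17))."""
    n = B.shape[0]
    I = np.eye(n)
    X = B.T / (np.linalg.norm(B, 1) * np.linalg.norm(B, np.inf))
    for _ in range(max_iter):
        R = I - B @ X
        if np.linalg.norm(R) < tol:
            break
        P = I + R
        for _ in range(p - 2):
            P = I + R @ P
        X = X @ P
    return X

def algorithm11(A_t, b_t, p=7, tol=1e-10, eps=1e-8, max_iter=200):
    """Sketch of Algorithm 11: Schur-BILU preconditioned one-step stationary
    iteration x <- x + (LU)^{-1} r."""
    n = A_t.shape[0]; m = n // 2
    A11, A12 = A_t[:m, :m], A_t[:m, m:]
    A21, A22 = A_t[m:, :m], A_t[m:, m:]
    X11 = hyperpower_inv(A11, p, tol)            # Steps 1-3
    S_t = A22 - A21 @ (X11 @ A12)                # Step 4: approximate Schur complement
    XS = hyperpower_inv(S_t, p, tol)             # Step 4(i)
    def solve_LU(r):                             # Steps 5, 7, 9: block solves
        y1 = X11 @ r[:m]                         # lower solve with A11 block
        y2 = XS @ (r[m:] - A21 @ y1)             # lower solve with S~ block
        z2 = y2                                  # upper solve (unit upper factor)
        z1 = y1 - X11 @ (A12 @ z2)
        return np.concatenate([z1, z2])
    x = np.zeros(n)                              # Step 6
    for it in range(max_iter):                   # Steps 7-9
        r = b_t - A_t @ x                        # Step 8
        if np.linalg.norm(r) <= eps:
            break
        x = x + solve_LU(r)                      # Step 9
    return x, it
```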

Theorem 12. Let e_m = x̃ − x̃_m be the error between the exact solution x̃ of (33) and the m-th iterate of the one-step stationary iterative method, where the block subsystems in Steps 7-9 of Algorithm 11 are solved exactly. If the conditions of Lemma 10 are satisfied, then e_m and the residual r̃_m converge to the zero vector as the inner iteration counts ℓ_1 and ℓ_2 grow. Furthermore, the bounds (58) and (59) hold, and if condition (60) is satisfied, then e_m and r̃_m converge to zero for any initial solution as m → ∞, where the iteration matrix is as given in (55).

Proof. On the basis of Section 5.2.1 in [4] the proof is as follows. Let T = (L̃Ũ)^{-1}Ẽ_S be the iteration matrix from (55). From Steps 6-9 of Algorithm 11 and using (55) it follows that e_{m+1} = T e_m, and by recursion at the m-th iteration we obtain e_m = T^m e_0. Furthermore, from (61) and (63) we obtain r̃_m = Ã e_m. If ρ(T) < 1, then the splitting is a convergent splitting of Ã, and we get that e_m and r̃_m converge to the zero vector as m → ∞. Using norm inequalities we get inequalities (58) and (59). Furthermore, if condition (60) is satisfied, then ‖T‖ < 1, and e_m and r̃_m converge to zero for any initial solution as m → ∞.

Remark 13. Let x̃_m be as given in Theorem 12 and let x̂_m be the approximate solution obtained by Algorithm 11; the difference between them arises from floating-point errors and from the preconditioned iterative method used to solve the block subsystems in the backward block substitution in Steps 7-9 of Algorithm 11.

6. Numerical Results

The experimental investigation of the proposed algorithms is carried out on two examples of the Fredholm integral equation of the first kind and on some nonsymmetric linear systems with strongly indefinite symmetric parts originating from the simulation of computer systems and from a nuclear reactor core model. All computations are performed on a personal computer with an AMD Ryzen 7 1800X eight-core 3.60 GHz processor. The calculations are carried out by Fortran programs in double precision. In this section the figures and the tables report the following quantities: the total solution cost in seconds of Algorithm 6 for Steps 1-6; the total solution cost in seconds of Algorithm 6 under successive perturbations; the cost in seconds of constructing the preconditioner in Steps 1-5 of Algorithm 11; the cost in seconds of the total iterations performed by the preconditioned one-step stationary iterative method in Steps 6-9 of Algorithm 11; the total solution cost in seconds of Algorithm 11, that is, the sum of the two preceding costs; TBMMs, the total number of block matrix-by-matrix multiplications; and TMMs, the total number of matrix-by-matrix multiplications.

Application 14 (harmonic continuation problem). The first application is the harmonic continuation problem [2, 26]. Given a harmonic function u in the unit circle with known values on an inner circle of radius r_1 < 1, find its values on a circle of radius r_2 with r_1 < r_2 < 1. The values of u on the two circles are related by the Poisson integral u(r_1, θ) = (1/2π) ∫_0^{2π} [(r_2^2 − r_1^2) / (r_2^2 − 2 r_1 r_2 cos(θ − φ) + r_1^2)] u(r_2, φ) dφ, (65) so that recovering the outer values from the inner ones is a Fredholm integral equation of the first kind. We use the same data used by Franklin [26], for which the answer is known from the real part of an analytic function, given piecewise on the relevant ranges. Using equidistant quadrature nodes and evaluating (65) at the corresponding points, the discretization of (65) gives an algebraic linear system Ax = b. In all applications given in this section we take Ã and b̃ satisfying (32), with the matrix perturbation taken proportional to the identity matrix. The trace of the exact solution at the grid points is denoted by x. The condition number of the matrix A is denoted by κ(A), and for Application 14 the values of κ(A) for the considered mesh sizes are given in Table 3. The sparsity pattern of the coefficient matrix of Application 14 is shown in Figure 1, illustrating that the coefficient matrix is very dense. The total solution costs needed to achieve the desired accuracy for Application 14 with respect to the perturbation parameter are given in Figure 2.
This figure shows that, for small values of the perturbation, the proposed method (17) requires the minimum total solution cost. Table 4 presents the number of iterations performed, the TMMs needed to achieve the desired accuracy, and the relative norms of the errors using the compared methods for Application 14. Next, we use the compared methods in Step 4 of Algorithm 6 by applying successive perturbations, solving the perturbed systems Ã_j x = b̃_j for j = 1, 2, …, with decreasing perturbation parameters, such that the approximate inverse obtained for one perturbed system is used as the initial approximate inverse for the next. Let x̃_j be the approximate solution of the j-th perturbed system obtained by Algorithm 6 for the prescribed accuracy. Table 5 shows the total number of iterations performed, the TMMs, and the relative norms of the errors using the compared methods with Algorithm 6 for Application 14. Thus, Tables 4 and 5 show that the proposed methods used with Algorithm 6 give solutions at a lower total solution cost in time than the other methods of the same orders. The total solution costs with respect to the perturbations obtained by using the compared methods in Step 2 of Algorithm 11 are given in Figure 3 for Application 14. This figure shows that, for the considered values of the perturbation, the proposed method (17) requires the minimum total solution cost to achieve the desired accuracy. The preconditioner-construction and iteration costs needed to achieve the desired accuracy, the iteration numbers and TBMMs, and the maximum and relative norms of the errors with respect to the perturbations for the compared methods with Algorithm 11 for Application 14 are presented in Table 6.
Moreover, in [2] the best error of the numerical solution obtained by singular value decomposition for the harmonic continuation problem (Application 14) was achieved using single precision, whereas the relative maximum errors using the proposed methods for the corresponding discretization by Algorithms 6 and 11 are of higher accuracy. Furthermore, we solved the harmonic continuation problem by direct LU with pivoting and recorded the accuracy of the resulting solution; when this problem is solved for the same data by Algorithm 6 using the proposed methods, the relative norm of the errors is smaller for the third successive perturbation, as shown in column seven of Table 5.

Application 15 (Phillips' problem). We take the following Fredholm integral equation of the first kind, discussed in [29]. This problem has recently been reconsidered by the authors of [41, 42] and solved using a new L-curve technique and a variant of the L-curve technique, respectively, to estimate the Tikhonov regularization parameter for regularizing the obtained algebraic linear system: ∫_{−6}^{6} K(s, t) x(t) dt = g(s), −6 ≤ s ≤ 6. (69) Its solution, kernel, and right-hand side are given by x(t) = φ(t), K(s, t) = φ(s − t), with φ(u) = 1 + cos(πu/3) for |u| < 3 and φ(u) = 0 otherwise, and g(s) = (6 − |s|)(1 + (1/2)cos(πs/3)) + (9/(2π)) sin(π|s|/3), (70) respectively. Taking equidistant quadrature nodes and evaluating at the corresponding points, the discretization of (69)-(70) gives the algebraic linear system Ax = b. The trace of the exact solution at the grid points is denoted by x. The condition numbers κ(A) of Application 15 for the considered mesh sizes are given in Table 7. The total solution cost with respect to the perturbations obtained by using the compared methods for Phillips' problem is given in Figure 4. This figure demonstrates that the proposed method (17) requires the minimum total solution cost in seconds with respect to the perturbations by Algorithm 6 to achieve the desired accuracy for Application 15.
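A minimal sketch of the discretization used above follows; the midpoint quadrature rule and the grid size are illustrative choices, not the authors' Fortran implementation.

```python
import numpy as np

def phi(u):
    """phi(u) = 1 + cos(pi*u/3) on |u| < 3, else 0 (Phillips' problem)."""
    return np.where(np.abs(u) < 3, 1 + np.cos(np.pi * u / 3), 0.0)

def phillips_system(n=128):
    """Nystrom (midpoint-rule) discretization of (69)-(70) on [-6, 6]."""
    h = 12.0 / n
    t = -6 + (np.arange(n) + 0.5) * h          # midpoint quadrature nodes
    A = h * phi(t[:, None] - t[None, :])       # A[i, j] = h * K(s_i, t_j)
    x = phi(t)                                  # exact solution trace
    s = np.abs(t)
    g = (6 - s) * (1 + 0.5 * np.cos(np.pi * t / 3)) \
        + 9 / (2 * np.pi) * np.sin(np.pi * s / 3)
    return A, g, x

A, g, x = phillips_system()
print(np.linalg.cond(A))    # severe ill-conditioning, cf. Table 7
```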
Table 8 presents the number of iterations performed, the TMMs for the prescribed accuracy, and the relative norms of the errors using the compared methods with Algorithm 6 for Application 15; the best relative norm error attained is reported there. To get a smoother solution, we use the compared methods in Step 4 of Algorithm 6 by applying successive perturbations, solving the perturbed systems Ã_j x = b̃_j for j = 1, 2, …, with decreasing perturbation parameters, such that the approximate inverse obtained for one perturbed system is used as the initial approximate inverse for the next. Let x̃_j be the approximate solutions obtained by Algorithm 6 for the considered perturbed systems. The total number of iterations performed, the costs and TMMs, and the relative norms of the errors for the last three perturbed systems using the compared methods with Algorithm 6 for Application 15 are presented in Table 9. This table shows that the best relative norm error is obtained after the fifth successive perturbation. Thus, the proposed methods used with Algorithm 6 give solutions at a lower total solution cost in seconds than the methods of the same orders for Application 15. The total solution costs with respect to the perturbations by Algorithm 11 applied with the compared methods for Application 15 are illustrated in Figure 5. Figure 5 shows that, for the considered values of the perturbation, the proposed method (17) requires the minimum total solution cost by Algorithm 11 to achieve the desired accuracy for Application 15. The preconditioner-construction and iteration costs, the iteration numbers and TBMMs, and the maximum and relative norms of the errors with respect to the perturbations for the compared methods with Algorithm 11 for Application 15 are shown in Table 10.
Moreover, to compare our results with the existing ones from the literature, we consider the same discretization size for Phillips' problem. The best relative error of the numerical solution reported in [42], obtained by the Galerkin method with the code phillips, is compared with the relative maximum errors obtained using the proposed methods; the errors obtained by Algorithm 6 and by Algorithm 11 are found to be smaller.

Application 16 (selected matrices from the Harwell-Boeing collection). Test matrices are selected from the sets SMTAPE, GRENOBLE, and NUCL, originating from simplex method basis matrices, simulation of computer systems, and a nuclear reactor core model, respectively, in the Harwell-Boeing collection, which is available from "Matrix Market", a repository organized by the National Institute of Standards and Technology. In all these problems the right-hand side is generated from a known solution vector of all ones. We use the proposed method in Step 4 of Algorithm 6 by applying successive perturbations, solving the perturbed systems Ã_j x = b̃_j for j = 1, 2, …, with decreasing perturbation parameters, such that the approximate inverse obtained for one perturbed system is used as the initial approximate inverse for the next. Let x̃_j be the approximate solutions obtained by Algorithm 6 for the considered perturbed systems. The minimal values of the relative norms of the errors with respect to the perturbations using the proposed method by Algorithm 6 for the test problems, together with the total number of iterations performed and the costs, are presented in Table 11. The numerical solution of the problems GRE216A, GRE216B, GRE343, GRE512, and GRE1107 was also studied in [43], where the authors provided results obtained by the balance scheme in Table III of [43]. The experimental study of ILU preconditioners for the problems BP1000, GRE1107, and NNC1374 is given in [5], and the authors classify these problems as hard to solve. We summarize the numerical results of these test problems from the studies [5, 43] in Table 12, in which Pnc means that the problem is not considered, F denotes failure, and RNEng means that the relative norm error (RNE) obtained by the method cited in the corresponding reference is not given. In Table 12, the second column presents the relative norm errors (RNE) and the iteration numbers (iter) of the problems from the GRENOBLE set by the balance scheme, taken from Table III of [43], in which the authors mention that the preconditioned GMRES failed to converge for any of these problems. The table also gives the iteration numbers of the preconditioned GMRES method, or the possible reasons for its failure, for the solution of the problems BP1000, GRE1107, and NNC1374 with the ILU preconditioner taken from Table 3 and the ILUTP preconditioners taken from Tables 5 and 6 of [5], presented in the third, fourth, and fifth columns of Table 12, respectively. Furthermore, we present in Table 13 the solutions of some problems from the GRENOBLE set in Application 16, namely the minimal values of the relative norms of the errors with respect to the perturbations achieving the desired accuracy using the proposed method by Algorithm 11. We conclude that the proposed algorithms give stable and highly accurate solutions of the considered test problems when compared with the results in Table 12. Moreover, taking into consideration the fact that the authors of [5] classified the problem NNC1374 as a very hard problem to solve, Algorithm 6 gave a stable and fairly accurate solution of this problem.

7. Concluding Remarks

In this study, for any integer k ≥ 1, we propose a family of methods with a recursive structure for computing the approximate matrix inverse of a real nonsingular square matrix with high order of convergence. It is proven that these methods require fewer matrix-by-matrix multiplications per iteration than the standard hyperpower method of the same order, together with a correspondingly small number of matrix-by-matrix additions other than additions with the identity matrix and of matrix additions with the identity per iteration. An algorithm is proposed that constructs a block ILU decomposition based on an approximate Schur complement (Schur-BILU) for the coefficient matrix of the algebraic linear systems of equations arising from ill-posed discrete problems with noisy data. From the proposed methods of approximate matrix inversion, the high-order members are applied to approximate the Schur complement matrices, and the obtained approximate Schur-BILU preconditioner is used to precondition the one-step stationary iterative method. This economy in computation is useful in many practical problems, such as the solution of Fredholm integral equations of the first kind, and the numerical results justify the given theoretical results. Furthermore, the cost of constructing the approximate Schur-BILU preconditioner can be amortized over systems with the same coefficient matrix and different right-hand sides, since the preconditioner can be reused several times. Table 14 shows that Algorithm 11 is at least two times faster than Algorithm 6 used with the compared methods in both considered problems, for the reported mesh sizes of Applications 14 and 15.

Data Availability

Data from "Matrix Market", a repository organized by the National Institute of Standards and Technology, are used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.