Abstract

This paper presents a new iterative method for computing the approximate inverse of nonsingular matrices. An analytical discussion of the method is included to demonstrate its convergence behavior; in particular, it is proven that the suggested scheme possesses at least tenth order of convergence. Finally, its performance is illustrated by numerical examples on different matrices.

1. Introduction

The solution of linear algebraic systems of the form $Ax = b$, where $A$ is an $n \times n$ matrix (all matrices in this paper are of the same dimension, unless clearly stated otherwise) and $b$ is a given vector, is central to many numerical simulations and is often the most time-consuming part of a computation. Direct methods, which work essentially by finding the inverse of the coefficient matrix $A$, are very robust, and they tend to require a predictable amount of resources in terms of time and storage. Their main drawback lies in their massive demands on time and memory, which normally puts them out of consideration, especially when $A$ is sparse.

At this point, iterative methods can be taken into account. The relaxation and Krylov subspace methods are of this type and are reliable in solving large scale sparse systems [1]. On the other hand, there is another class of iterative methods, called Schulz-type iterations in the literature (see, e.g., [2]). These techniques are based on finding robust approximate inverses of a given matrix.

The oldest technique of this type is the Schulz method [3], defined by
\[ V_{n+1} = V_n(2I - AV_n), \qquad n = 0, 1, 2, \ldots, \tag{1} \]
where $I$ is the identity matrix. In the general case, it is known to converge provided that $\rho(I - AV_0) < 1$, where $V_0$ is the initial approximation and $\rho(\cdot)$ denotes the spectral radius. Such schemes are also useful for sensitivity analysis, when accurate approximate inverses are needed, for both square and rectangular matrices. Notice that, for rectangular matrices, one may obtain their generalized inverse using such iterative methods [4].
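For illustration, one step of (1) can be realized in MATHEMATICA along the following lines (a minimal sketch; the function and variable names schulzStep, a, and v are ours):

    (* one step of the Schulz iteration (1): V -> V (2I - A V) *)
    schulzStep[a_, v_] := v.(2 IdentityMatrix[Length[a]] - a.v);

Iterating schulzStep from a suitable initial guess squares the residual $I - AV_n$ at every pass, which is exactly the second-order behavior mentioned above.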

These solvers are also the method of choice in certain areas such as circuits, power system networks, and chemical plant modeling [5]. A practical application of Schulz-type methods, which has recently attracted some numerical analysts [6], is preconditioning. In fact, starting from an initial value $V_0$ [7], one is able to produce an approximate inverse preconditioner up to the desired accuracy and then solve the preconditioned linear system as rapidly as possible.

Hence, in this work we are interested in finding a new iterative method, belonging to the class of Schulz-type methods, for computing approximate inverses.

Remark 1. We use the matrix 2-norm in all subsequent derivations and discussions, unless other norms are clearly stated.

The rest of this paper is organized as follows. In the next section, we briefly review some of the existing Schulz-type methods and provide a new mathematical proof for one of the unproved iterative methods. Another contribution of this paper is presented in Section 3, which is also devoted to the analysis of convergence. Section 4 covers the numerical simulations, along with some notes for obtaining the best performance in practical implementations. Finally, concluding remarks are drawn in Section 5.

2. Preliminaries

Li et al. in [6] presented the third-order method
\[ V_{n+1} = V_n\big(3I - 3AV_n + (AV_n)^2\big), \qquad n = 0, 1, 2, \ldots, \tag{2} \]
and also proposed another third-order iterative method for finding $A^{-1}$, as comes next (right-product form):
\[ V_{n+1} = \Big[I + \tfrac{1}{4}\big(I - V_nA\big)\big(3I - V_nA\big)^2\Big]V_n. \tag{3} \]

W. Li and Z. Li in [8] proposed the following fourth-order iteration method:
\[ V_{n+1} = V_n\big(4I - 6AV_n + 4(AV_n)^2 - (AV_n)^3\big). \tag{4} \]

In fact, a whole family of such methods was developed in [8]. We draw attention to the point that the iterative methods (2) and (4) can also be found in the textbook [9]. Moreover, we suggest that the iterative method (3) can be rewritten as (left-product form)
\[ V_{n+1} = \tfrac{1}{4}\,V_n\Big(13I - AV_n\big(15I - AV_n(7I - AV_n)\big)\Big). \tag{5} \]

This structure is preferable due to its use of Horner-like nested multiplications; subsequently, it lowers round-off errors by avoiding the explicit computation of matrix powers, which is costly. It should be remarked that the authors in [6] did not provide a mathematical proof for (3) or its other form (5). Thus, herein we first prove its order of convergence, so as to complete the paper [6].
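For completeness, the left-product form (5) can be implemented with only nested matrix products, for example as follows (a sketch; mmStep is our naming):

    (* one step of (5) in Horner-like form; four matrix-matrix products *)
    mmStep[a_, v_] := Module[{id = IdentityMatrix[Length[a]], av},
      av = a.v;
      (1/4) v.(13 id - av.(15 id - av.(7 id - av)))];

Note that no explicit matrix power appears; the product av is reused at each nesting level.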

Theorem 2. Assume that $A$ is an invertible matrix with real or complex components. If the initial value $V_0$ satisfies
\[ \|E_0\| = \|I - AV_0\| < 1, \tag{6} \]
then the iteration (5) converges cubically to $A^{-1}$.

Proof. In order to prove the convergence of (5), we consider first that $E_0 = I - AV_0$, $E_n = I - AV_n$, and $\varepsilon_n = A^{-1} - V_n$. For (5), we get that
\[ E_{n+1} = I - AV_{n+1} = \tfrac{3}{4}E_n^3 + \tfrac{1}{4}E_n^4. \tag{7} \]
Thus, we obtain $\|E_{n+1}\| \le \tfrac{3}{4}\|E_n\|^3 + \tfrac{1}{4}\|E_n\|^4 \le \|E_n\|^3$. Moreover, since $\|E_0\| < 1$ and $\|E_{n+1}\| \le \|E_n\|^3$, we get that
\[ \|E_{n+1}\| \le \|E_n\|^3 \le \|E_{n-1}\|^{3^2} \le \cdots \le \|E_0\|^{3^{n+1}}, \tag{8} \]
where (8) tends to zero when $n \to \infty$. That is, $I - AV_n \to 0$ when $n \to \infty$, and thus for (5), we attain $V_n \to A^{-1}$, as $n \to \infty$. This shows the convergence.

Now, we must show its third order. Toward this end, we denote by $\varepsilon_n = A^{-1} - V_n$ the error matrix of the iterative procedure (5). We have (using a similar methodology as in [10])
\[ \varepsilon_{n+1} = A^{-1} - V_{n+1} = A^{-1}(I - AV_{n+1}), \tag{9} \]
\[ I - AV_{n+1} = \tfrac{3}{4}E_n^3 + \tfrac{1}{4}E_n^4. \tag{10} \]
We can now obtain
\[ E_n = I - AV_n = A(A^{-1} - V_n) = A\varepsilon_n. \tag{11} \]
Equation (11) results in
\[ \varepsilon_{n+1} = A^{-1}\Big(\tfrac{3}{4}(A\varepsilon_n)^3 + \tfrac{1}{4}(A\varepsilon_n)^4\Big) \tag{12} \]
and subsequently
\[ \varepsilon_{n+1} = \tfrac{3}{4}\varepsilon_n(A\varepsilon_n)^2 + \tfrac{1}{4}\varepsilon_n(A\varepsilon_n)^3, \tag{13} \]
which yields, by taking norms of both sides,
\[ \|\varepsilon_{n+1}\| \le \tfrac{3}{4}\|A\|^2\|\varepsilon_n\|^3 + \tfrac{1}{4}\|A\|^3\|\varepsilon_n\|^4 \tag{14} \]
and consequently
\[ \|\varepsilon_{n+1}\| \le \Big(\tfrac{3}{4}\|A\|^2 + \tfrac{1}{4}\|A\|^3\|\varepsilon_n\|\Big)\|\varepsilon_n\|^3. \tag{15} \]
This reveals that the iterative method (5) converges to $A^{-1}$ with at least third order of convergence. The proof is complete.

3. A Novel Method

Let $I$ be the identity matrix and $E_n = I - AV_n$. We aim at constructing an iterative method in which the sequence of iterates $\{V_n\}_{n \ge 0}$ converges to $A^{-1}$ for an appropriate initial guess $V_0$. We suggest our proposed method as follows:
\[ V_{n+1} = V_n\Big[(I + E_n)(I + E_n^2)(I + E_n^4) + E_n^8(I + E_n)\Big], \qquad n = 0, 1, 2, \ldots, \tag{16} \]
and its derivation will be pointed out in Section 4. In numerical mathematics, it is essential to know the theoretical behavior of an approximate method. In what follows, we prove the convergence order of (16).
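In code, one step of (16) may be sketched as follows (pmStep is our naming; the grouping mirrors the bracketing of (16)):

    (* one step of (16): E = I - A V; eight matrix-matrix products per step *)
    pmStep[a_, v_] := Module[{id = IdentityMatrix[Length[a]], e, e2, e4, e8},
      e = id - a.v;
      e2 = e.e; e4 = e2.e2; e8 = e4.e4;
      v.((id + e).(id + e2).(id + e4) + e8.(id + e))];

The products are a.v, the three squarings e2, e4, e8, the two products forming $(I+E)(I+E^2)(I+E^4)$, the product $E^8(I+E)$, and the final multiplication by $V_n$, eight in total.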

Theorem 3. Assume that $A$ is an invertible matrix with real or complex entries. If the initial guess $V_0$ satisfies
\[ \|E_0\| = \|I - AV_0\| < 1, \tag{17} \]
then the iteration (16) converges to $A^{-1}$ with at least tenth convergence order.

Proof. In order to prove the convergence of (16), we consider the same assumptions and notations as in the proof of Theorem 2. Since $(I + E_n)(I + E_n^2)(I + E_n^4) + E_n^8(I + E_n) = \sum_{i=0}^{9}E_n^i$ and hence $AV_{n+1} = (I - E_n)\sum_{i=0}^{9}E_n^i = I - E_n^{10}$, we then have
\[ E_{n+1} = I - AV_{n+1} = E_n^{10}. \tag{18} \]
Thus, we obtain
\[ \|E_{n+1}\| \le \|E_n\|^{10}. \tag{19} \]
Moreover, since $\|E_0\| < 1$ and $\|E_{n+1}\| \le \|E_n\|^{10}$, we attain
\[ \|E_{n+1}\| \le \|E_n\|^{10} \le \|E_{n-1}\|^{10^2} \le \cdots \le \|E_0\|^{10^{n+1}}, \tag{20} \]
where (20) tends to zero when $n \to \infty$. That is, $AV_n \to I$ when $n \to \infty$, and thus for (16), we obtain
\[ \lim_{n \to \infty} V_n = A^{-1}. \tag{21} \]
We must now illustrate the tenth order of convergence for (16). To this aim, we denote by $\varepsilon_n = A^{-1} - V_n$ the error matrix of the iterative procedure (16). We have (using (18))
\[ A\varepsilon_{n+1} = I - AV_{n+1} = (I - AV_n)^{10}. \tag{22} \]
Equation (22) yields
\[ \varepsilon_{n+1} = A^{-1}(I - AV_n)^{10}, \tag{23} \]
in which
\[ I - AV_n = A(A^{-1} - V_n) = A\varepsilon_n. \tag{24} \]
Using (24), we attain
\[ \varepsilon_{n+1} = A^{-1}(A\varepsilon_n)^{10} = \varepsilon_n(A\varepsilon_n)^{9}, \tag{25} \]
which simplifies, by taking norms of both sides, to the following:
\[ \|\varepsilon_{n+1}\| \le \|\varepsilon_n\|\,\|A\varepsilon_n\|^{9} \tag{26} \]
and consequently
\[ \|\varepsilon_{n+1}\| \le \|A\|^{9}\,\|\varepsilon_n\|^{10}. \tag{27} \]
This shows that the method (16) converges to $A^{-1}$ with at least tenth order of convergence. The proof is now complete.

A simple corollary of Theorems 2 and 3 is that, under the same conditions and the same initial values, higher order methods arrive at the convergence phase faster than lower order methods, and this reduces the required number of iterations.

Note that the sequence $\{V_n\}$ of (16) may be applied not only to the left preconditioned linear system $V_nAx = V_nb$ but also to the right preconditioned linear system $AV_ny = b$, where $x = V_ny$, provided that the initial matrix satisfies $AV_0 = V_0A$.

The iterative methods discussed so far are sensitive to the choice of the initial guess that starts the process. As a matter of fact, the high accuracy and efficiency of such iterative algorithms are guaranteed only if the initial value satisfies the appropriate conditions given in Theorems 2 and 3.

Thus, in order to preserve the convergence order, we remind the reader that an efficient way of producing $V_0$, as given in [11], is as follows:
\[ V_0 = \frac{A^{*}}{\|A\|_1\,\|A\|_\infty}, \tag{28} \]
where $A^{*}$ denotes the conjugate transpose of $A$. Another adaptive way is $V_0 = \alpha I$, where $I$ is the identity matrix and the parameter $\alpha$ should be determined so that $\|I - \alpha A\| < 1$.
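In MATHEMATICA, for instance, the rule (28) reads in one line (with our placeholder names):

    (* initial guess (28): V0 = A*/(||A||_1 ||A||_inf) *)
    v0 = ConjugateTranspose[a]/(Norm[a, 1] Norm[a, Infinity]);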

Remark 4. The new iteration (16) reaches tenth order by using 8 matrix-matrix multiplications, while the schemes (1), (2), and (4) reach second, third, and fourth orders, respectively, by consuming 2, 3, and 4 matrix-matrix multiplications. Hence, the contributed method has lower computational cost than its competitors. This superiority will be clarified in Section 4. It should also be remarked that iterations of any convergence order for nonsingular square matrices are constructed in Section 6 of Chapter 2 of the book [12], whereas the general approach for rectangular matrices is discussed in Chapter 5 of [9] and the recent paper [13]. In fact, in those constructions a convergence order $p$ will always be attained by $p$ matrix-matrix products, such as (1), which reaches order 2 using two matrix-matrix multiplications.

Remark 5. Two important matters must be mentioned at this point, to ease the perception of why a higher order (efficient) method such as (16), with 8 matrix-matrix products to reach at least tenth convergence order, is practical. First, following the comparative informational efficiency index of inverse-finders [14], defined by $E = p/\theta$, wherein $p$ and $\theta$ stand for the convergence order and the number of matrix-matrix products, respectively, the informational efficiency of (16), that is, $E = 10/8 = 1.25$, beats those of its competitors: $E = 1$ for (1), $E = 1$ for (2) and (4), and $E = 0.75$ for (3). Second, the significance of the new scheme shows up in the implementation of such schemes. To illustrate further, such iterations are totally dependent on their initial matrices. Though there are certain efficient ways for finding $V_0$, in general such initial approximations require a high number of iterations to arrive at the convergence phase. On the other hand, each cycle of the implementation of such Schulz-type methods includes one stopping criterion based on the use of a matrix norm, and this imposes a further burden, in general, on the low order methods in contrast to high order methods such as (16). Because the computation of a matrix norm (usually $\|\cdot\|_2$ for dense matrices and $\|\cdot\|_F$ for large sparse matrices) takes considerable time, a higher number of steps/iterations (which is the result of lower order methods) will be costlier than a lower number of steps/iterations.

Remark 6. The index defined above is different from the classical efficiency index defined in [15]. Traub in [15] discussed why the informational efficiency index is needed; in fact, the kind of efficiency index a user applies depends on the situation being dealt with. Note that the cost of each iteration (step) is governed by the number of matrix-matrix products, the order, and the computation of a stopping criterion. Thus, the informational index is meaningful in this case, because it measures the gain brought each time a matrix-matrix product, along with the order and the stopping criterion, is computed.

4. Numerical Reports

In this section, some experiments are presented to demonstrate the capability of the suggested method. The computer algebra system MATHEMATICA 8 [16, 17] has been used. For numerical comparisons, we have used the methods (1), (2), (5), and (16). We have also used double precision in our calculations. The computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU, 3.20 GHz, and 4 GB of RAM.

Experiment 1. In this test, 10 sparse random complex matrices of the dimension 2500 are considered; they are generated in MATHEMATICA by SparseArray constructions whose bands (specified by rules of the form Band[...] -> values) are filled with random complex numbers.

In this test, the stopping criterion is a matrix-norm-based residual test, and the maximum number of iterations allowed is set to 100. Note that in this test the initial choice has been constructed by the rule (28). We also plot the condition numbers of the 10 test matrices in Figure 1.
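A driver loop in the spirit of this experiment could look as follows (a sketch under our naming; tol and the Frobenius-norm residual test are illustrative placeholders rather than the exact settings of the experiment):

    (* iterate a chosen step function until the residual test or the cap is met *)
    approxInverse[a_, v0_, tol_, maxIter_] :=
      Module[{v = v0, id = IdentityMatrix[Length[a]], k = 0},
        While[Norm[id - a.v, "Frobenius"] > tol && k < maxIter,
          v = pmStep[a, v]; (* or schulzStep, mmStep *)
          k++];
        v];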

The results of comparisons for this test problem are presented in Figures 2 and 3. We have distinguished the curves by various symbols (circle, triangle, and so forth) alongside different colors. The attained results confirm the robustness of the proposed iterative method (16) by a clear reduction in the number of iterations and the elapsed time. Note that, in the figures of this paper, Schulz, KSM, MM, and PM (proposed method) stand for (1), (2), (5), and (16), respectively.

In general, iterative Schulz-type methods are very useful for large sparse matrices possessing sparse inverses, or when only an approximate inverse is needed.

After a few iterations of such methods, the computed approximate inverse may become dense, and thus the whole procedure might be slow. To remedy this, a threshold can be imposed in the implemented algorithm; in MATHEMATICA, this can be done by, for example, the command Chop, which replaces entries smaller than a given tolerance by zero. In Experiment 1, we have applied such a threshold to the computed approximate inverses. This implementation technique is also fruitful for preconditioning.
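In code, the thresholding amounts to one extra call per cycle (eps is a placeholder tolerance):

    (* drop tiny entries so that the iterate stays sparse *)
    v = SparseArray[Chop[pmStep[a, v], eps]];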

We also point out that the construction of (16) is based on applying a tenth-order nonlinear equation solver (in a similar spirit to [3]) to the matrix equation $f(V) := V^{-1} - A = 0$, where the solver is built on two-point divided differences of the form $[x_n, w_n; f] = (f(w_n) - f(x_n))/(w_n - x_n)$; this gives us the aimed tenth-order method (16).
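To see the flavor of such derivations on the simplest member of the class, note that applying Newton's method to $f(V) = V^{-1} - A$, whose Fréchet derivative acts as $f'(V)[H] = -V^{-1}HV^{-1}$, gives
\[
V_{n+1} = V_n + V_n\big(V_n^{-1} - A\big)V_n = V_n(2I - AV_n),
\]
which is exactly the Schulz iteration (1); higher order solvers lead, in the same manner, to higher order inverse-finders.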

Remark 7. As discussed before, an important application of such solvers is to provide robust preconditioners for solving linear systems of equations. To illustrate further, we have carefully chosen the following example, a practical problem in which the GMRES solver fails to solve the resulting discretized system when the maximum number of steps allowed for GMRES is 2000.

Experiment 2. We consider solving a boundary value problem (BVP) using finite difference discretization. Let us solve the two-point BVP (29), a second-order problem subject to Dirichlet boundary conditions at both ends of the range, discretized on a uniform grid of nodes over the range.
The implementation for finding the solution up to the prescribed tolerance in the MATHEMATICA environment proceeds by discretizing the range, replacing the derivatives by finite difference rules, and collecting the resulting algebraic equations.
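As an illustration of this pipeline (not the exact code of the experiment, whose problem data are those of (29); g, alpha, beta, and m are placeholders for a linear model problem y''(x) = g(x) with y(0) = alpha and y(1) = beta):

    (* central differences on a uniform grid of m interior nodes over [0, 1] *)
    m = 100; h = 1./(m + 1); nodes = h Range[m];
    aMatrix = SparseArray[{Band[{1, 1}] -> -2., Band[{1, 2}] -> 1., Band[{2, 1}] -> 1.}, {m, m}]/h^2;
    bVector = g /@ nodes;
    bVector[[1]] -= alpha/h^2; bVector[[-1]] -= beta/h^2;

After this step, aMatrix and bVector play the roles of the coefficient matrix and the right-hand side below.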
Now the coefficient matrix $A$ and the right-hand side vector $b$ for finding the solution of the BVP (29) can be deduced from these equations in sparse form.
Just like the other problems, the most time-consuming step is to solve the sparse system $Ax = b$. Herein, we choose the Krylov subspace method GMRES to solve the system, both without preconditioning and with approximate inverse preconditioners $V$ produced from the iterative methods (1), (2), (5), and (16), to solve the left preconditioned system $VAx = Vb$.
In this way, the system becomes well behaved, and we expect to find the solution of the BVP (29) in less computational time than with the nonpreconditioned GMRES. The timing results for this purpose are given in Table 1. We should remark that in this test $V_0 = \operatorname{diag}(1/a_{11}, 1/a_{22}, \ldots)$ has been chosen as the initial approximation, based on [18], where $a_{ii}$ is the $i$th diagonal entry of $A$.
As is obvious from Table 1, the nonpreconditioned GMRES fails to converge even after 2000 iteration steps, while the required accuracy could be attained using preconditioning. The best time, beating all the other approaches, comes from the new scheme (16). The left preconditioned system resulting from (16), denoted by (16)-PGMRES, is reliable in solving ill-conditioned linear systems.
Note that the time reported for PGMRES is the whole time of constructing the initial matrix, obtaining the preconditioner $V$, and solving the left preconditioned system $VAx = Vb$ by GMRES.
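For instance, with the built-in Krylov solver, this whole pipeline can be sketched as follows (v0, precTol, and the number of preconditioner steps are placeholders):

    (* a few steps of (16) build V; then solve the left preconditioned system *)
    v = approxInverse[aMatrix, v0, precTol, 10];
    x = LinearSolve[v.aMatrix, v.bVector, Method -> {"Krylov", Method -> "GMRES"}];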

Experiment 3. In order to compare the preconditioners obtained from the new method with famous preconditioners from the literature resulting from incomplete LU factorizations [1], we consider solving linear sparse systems $Ax = b$ of dimension 841 using BiCGSTAB. The matrix $A$ has been chosen from the MatrixMarket database as A = ExampleData["Matrix", "YOUNG1C"], while the right-hand side vector is $b$; the solution would then be $x = A^{-1}b$. Figure 4 shows the plot of the matrix $A$.
The left preconditioned systems using $V$ of Schulz, $V$ of KSM, and $V$ of the proposed method PM, along with the well-known preconditioning techniques ILU0, ILUT, and ILUTP, have been tested, while the initial vector has been chosen automatically in all cases by MATHEMATICA 8. The timing comparisons for different tolerances (residual norms) are listed in Figure 5.
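A sketch of the corresponding call, passing the computed approximate inverse as a preconditioning function to BiCGSTAB (again with our placeholder names), is:

    (* v is a sparse approximate inverse built by (16); apply it as a preconditioner *)
    x = LinearSolve[a, b, Method -> {"Krylov", Method -> "BiCGSTAB", "Preconditioner" -> (v.# &)}];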
Note that, after a few iterations, the computed preconditioners of Schulz-type methods may become dense. As before, we must choose a strategy to control the sparsity of the preconditioner; here this can again be done by applying a drop threshold to our obtained approximate inverses.

5. Conclusions

In this work, we have developed an iterative method for finding approximate inverses of complex matrices, belonging to the well-known class of Schulz-type methods.

We have proposed a simpler form (5) of the recently published method (3) and proved analytically that the scheme possesses third-order convergence.

We have shown that the suggested method (16) reaches tenth order of convergence. We have moreover discussed that (16) can be applied to the left and right preconditioned systems under a certain condition. The efficiency of the new scheme was illustrated numerically using the computer programming package MATHEMATICA.

One may note that the approximate inverse obtained per step of algorithm (5) or (16) can also easily be taken into account as a preconditioner, in order to reduce the ill-conditioning of a system and let users apply iterative methods such as GMRES or BiCGSTAB for efficiently solving large scale sparse linear systems of algebraic equations.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.