International Journal of Mathematics and Mathematical Sciences
Volume 2012 (2012), Article ID 134653, 11 pages
http://dx.doi.org/10.1155/2012/134653
Research Article

A Rapid Numerical Algorithm to Compute Matrix Inversion

F. Soleymani

Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran

Received 22 March 2012; Revised 1 June 2012; Accepted 1 June 2012

Academic Editor: Taekyun Kim

Copyright © 2012 F. Soleymani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of the present work is to suggest and establish a numerical algorithm based on matrix multiplications for computing approximate inverses. It is shown theoretically that the scheme possesses seventh-order convergence, and thus it converges rapidly. Some discussion on the choice of the initial value to preserve the convergence rate is given, and it is also shown through numerical examples that the proposed scheme can readily be used to provide robust preconditioners.

1. Introduction

Let us consider the square matrix $A_{N\times N}$ with real or complex elements which is nonsingular. It is well known that its inverse exists and can be found by direct methods such as the LU or QR decompositions; see, for example, [1]. When the matrix inverse is to be computed, the method of choice should probably be Gaussian elimination with partial pivoting (GEPP). The resulting residual bounds and possible backward errors may be much smaller in this case; see [2] (subsection on the “Use and abuse of the matrix inverse”).

An effective tool for computing approximate inverses of the matrix $A$ is iteration-type methods, which are based on matrix multiplications and are of great interest and accuracy when implemented on parallel machines. In fact, one way is to construct iterative methods of high order of convergence to find the matrix inverse numerically for all types of matrices (especially ill-conditioned ones).

A clear use of such schemes is that one may apply them to find $A^{-1}$ and then, by an easy matrix-vector multiplication, compute the solution of the linear system of equations $Ax=b$. However, another use is in constructing approximate inverse preconditioners; that is, a very robust approximate preconditioner can easily be constructed using one, two, or three steps of such iterative methods, and the resulting left preconditioned system would be
$\tilde{A}x=\tilde{b},$ (1.1)
wherein $\tilde{A}=P^{-1}A$, $\tilde{b}=P^{-1}b$, and $P^{-1}\approx A^{-1}$.
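As a simple illustration of this idea, the following MATHEMATICA sketch builds an approximate inverse by a few steps of the classical Hotelling-Bodewig iteration recalled in Section 2 and uses it as a left preconditioner; the test matrix here is an arbitrary diagonally dominant one chosen only for illustration and is not among the matrices used later in Section 4.

    n = 50;
    A = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n]; (* illustrative test matrix *)
    b = ConstantArray[1., n];
    Id = IdentityMatrix[n];
    P1 = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);      (* initial guess for A^-1 *)
    Do[P1 = P1.(2 Id - A.P1), {3}];                         (* three Hotelling-Bodewig steps *)
    At = P1.A; bt = P1.b;                                   (* left preconditioned system (1.1) *)
    {Norm[Id - At], Norm[A.LinearSolve[At, bt] - b]}        (* quality of P1 and final residual *)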

Such approximate inverse preconditioners could be robust competitors to classical or modern methods such as AINV or FAPINV; see, for example, [3, 4]. The approximate inverse (AINV) and the factored approximate inverse (FAPINV) are two well-known algorithms in the field of preconditioning of linear systems of equations. Both of these algorithms compute a sparse approximate inverse of the matrix $A$ in factored form and are based on computing two sets of vectors which are $A$-biconjugate.

In this paper, in order to deal with very ill-conditioned matrices, or to find the preconditioner $P^{-1}$ in a smaller number of iterations and with high accuracy, we propose an efficient iterative method for finding $A^{-1}$ numerically. Theoretical analysis and numerical experiments show that the new method is more effective than the existing ones for constructing approximate inverse preconditioners.

The rest of the paper is organized as follows. Section 2 is devoted to a brief review of the available literature. The main contribution of this paper is given in Section 3. Subsequently, the method is examined in Section 4. Finally, concluding remarks are presented in Section 5.

2. Background

Several methods of various orders have been proposed for approximating (rectangular or square) matrix inverses, such as those based on the minimum residual iterations [5] and the Hotelling-Bodewig algorithm [6].

The Hotelling-Bodewig algorithm [6] is defined as
$V_{n+1}=V_n(2I-AV_n),\quad n=0,1,2,\ldots,$ (2.1)
where $I$ is the identity matrix. Note that throughout this paper we consider matrices of the same dimension unless otherwise stated.
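A minimal MATHEMATICA sketch of iteration (2.1) may look as follows; the test matrix is arbitrary (not taken from this paper), and the residual norm $\|I-AV_n\|$, printed at every step, roughly squares from one iteration to the next.

    n = 10;
    A = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n];  (* illustrative test matrix *)
    Id = IdentityMatrix[n];
    V = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);        (* initial guess, cf. WAY 2 in Section 3 *)
    Do[V = V.(2 Id - A.V); Print[Norm[Id - A.V]], {8}]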

In 2011, Li et al. [7] presented the following third-order method:
$V_{n+1}=V_n\big(3I-AV_n(3I-AV_n)\big),\quad n=0,1,2,\ldots,$ (2.2)
and also proposed another third-order iterative method for approximating $A^{-1}$ as follows:
$V_{n+1}=\left[I+\tfrac{1}{4}(I-V_nA)(3I-V_nA)^2\right]V_n,\quad n=0,1,2,\ldots.$ (2.3)

It is interesting to mention that the method (2.2) can be found in Chapter 5 of [8].

As another method from the existing literature, Krishnamurthy and Sen suggested the following sixth-order iteration method [8] for the above purpose:
$V_{n+1}=V_n(2I-AV_n)\big(3I-AV_n(3I-AV_n)\big)\big(I-AV_n(I-AV_n)\big)=V_n\sum_{k=0}^{5}(I-AV_n)^k,$ (2.4)
where $n=0,1,2,\ldots$.
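It may be useful to recall that (2.1), (2.2), and (2.4) are precisely the members of orders $p=2$, $3$, and $6$ of the classical hyperpower family,
$$V_{n+1}=V_n\sum_{k=0}^{p-1}(I-AV_n)^k,\qquad n=0,1,2,\ldots,$$
for which $I-AV_{n+1}=(I-AV_n)^p$, whereas (2.3) does not belong to this family.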

For further reading refer to [9, 10].

3. An Accurate Seventh-Order Method

This section contains a new high-order algorithm for finding $A^{-1}$ numerically. In order to deal with very ill-conditioned linear systems, to find efficient preconditioners rapidly, or to find robust approximate inverses, we suggest the following matrix multiplication-based iterative method:
$V_{n+1}=\frac{1}{16}V_n\left(120I+AV_n(-393I+AV_n(735I+AV_n(-861I+AV_n(651I+AV_n(-315I+AV_n(93I+AV_n(-15I+AV_n)))))))\right),$ (3.1)
for any $n=0,1,2,\ldots$, wherein $I$ is the identity matrix, and the sequence of iterates $\{V_n\}_{n=0}^{\infty}$ converges to $A^{-1}$ for a good initial guess.
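The scheme can be tried directly in MATHEMATICA; the following sketch (with an arbitrary well-conditioned test matrix, chosen only for illustration) applies a few steps of (3.1) and records the residual norms $\|I-AV_n\|$, which collapse toward machine precision within two or three iterations.

    n = 8;
    A = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n];  (* illustrative test matrix *)
    Id = IdentityMatrix[n];
    step[W_] := (1/16) W.(120 Id + A.W.(-393 Id + A.W.(735 Id + A.W.(-861 Id +
        A.W.(651 Id + A.W.(-315 Id + A.W.(93 Id + A.W.(-15 Id + A.W))))))))
    V0 = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);        (* initial guess, cf. WAY 2 below *)
    Norm[Id - A.#] & /@ NestList[step, V0, 3]                 (* residual norms of V0, V1, V2, V3 *)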

Theorem 3.1. Assume that $A=[a_{i,j}]_{N\times N}$ is an invertible matrix with real or complex elements. If the initial guess $V_0$ satisfies
$\|I-AV_0\|<1,$ (3.2)
then the iteration (3.1) converges to $A^{-1}$ with at least seventh order of convergence.

Proof. In order to prove the convergence behavior of (3.1), we assume that $\|I-AV_0\|<1$, $E_0=I-AV_0$, and $E_n=I-AV_n$. We then have
$E_{n+1}=I-AV_{n+1}$
$=I-\frac{1}{16}AV_n\left(120I+AV_n(-393I+AV_n(735I+AV_n(-861I+AV_n(651I+AV_n(-315I+AV_n(93I+AV_n(-15I+AV_n)))))))\right)$
$=I-\frac{1}{16}AV_n\left(120I-393AV_n+735(AV_n)^2-861(AV_n)^3+651(AV_n)^4-315(AV_n)^5+93(AV_n)^6-15(AV_n)^7+(AV_n)^8\right)$
$=\frac{1}{16}(4I-AV_n)^2(I-AV_n)^7=\frac{1}{16}\big(3I+(I-AV_n)\big)^2(I-AV_n)^7=\frac{1}{16}(3I+E_n)^2E_n^7$
$=\frac{1}{16}(9I+6E_n+E_n^2)E_n^7=\frac{1}{16}\left(E_n^9+6E_n^8+9E_n^7\right).$ (3.3)
Hence, according to the above simplifications, it is now easy to obtain
$\|E_{n+1}\|=\frac{1}{16}\left\|E_n^9+6E_n^8+9E_n^7\right\|\leq\frac{1}{16}\left(\|E_n\|^9+6\|E_n\|^8+9\|E_n\|^7\right).$ (3.4)
Furthermore, since $\|E_0\|<1$ and $\|E_1\|\leq\|E_0\|^7<1$, we get
$\|E_{n+1}\|\leq\|E_n\|^7\leq\|E_{n-1}\|^{7^2}\leq\cdots\leq\|E_0\|^{7^{n+1}}<1,$ (3.5)
where the right-hand side of (3.5) tends to zero as $n\to\infty$. That is to say,
$\|I-AV_n\|\to 0,$ (3.6)
as $n\to\infty$, and thus, by (3.6), we obtain
$V_n\to A^{-1},\quad\text{as } n\to\infty.$ (3.7)
We must now establish the seventh order of convergence for (3.1), given that, by the above discussion, (3.1) converges to $A^{-1}$ under the assumption made in Theorem 3.1. To this aim, we denote by $\varepsilon_n=V_n-A^{-1}$ the error matrix in the iterative procedure (3.1). We have
$I-AV_{n+1}=\frac{1}{16}\left((I-AV_n)^9+6(I-AV_n)^8+9(I-AV_n)^7\right).$ (3.8)
Equation (3.8) yields
$A\left(A^{-1}-V_{n+1}\right)=\frac{1}{16}\left(A^9(A^{-1}-V_n)^9+6A^8(A^{-1}-V_n)^8+9A^7(A^{-1}-V_n)^7\right),$ (3.9)
$A^{-1}-V_{n+1}=\frac{1}{16}\left(A^8(A^{-1}-V_n)^9+6A^7(A^{-1}-V_n)^8+9A^6(A^{-1}-V_n)^7\right).$ (3.10)
Using (3.10), we attain
$\varepsilon_{n+1}=\frac{1}{16}\left(A^8\varepsilon_n^9-6A^7\varepsilon_n^8+9A^6\varepsilon_n^7\right),$ (3.11)
which, by taking norms of both sides, simplifies to
$\|\varepsilon_{n+1}\|\leq\frac{1}{16}\left(\|A\|^8\|\varepsilon_n\|^9+6\|A\|^7\|\varepsilon_n\|^8+9\|A\|^6\|\varepsilon_n\|^7\right),$ (3.12)
and consequently
$\|\varepsilon_{n+1}\|\leq\frac{1}{16}\left(9\|A\|^6+6\|A\|^7\|\varepsilon_n\|+\|A\|^8\|\varepsilon_n\|^2\right)\|\varepsilon_n\|^7.$ (3.13)
This shows that the method (3.1) converges to $A^{-1}$ with at least seventh order of convergence. This concludes the proof.

Remark 3.2. From the above Theorem 3.1, we can draw the following conclusions. (1) From (3.4), one knows that the condition (3.2) may be weakened. In fact, we only need the spectral radius of $I-AV_0$ to be less than one for the convergence of the new method (3.1). In this case, the choice of $V_0$ may be guided by estimate formulas for the spectral radius $\rho(I-AV_0)$ (see, e.g., [11]); a small numerical check of this condition is sketched below. (2) In some experiments, and to reduce the computational cost, the matrix multiplications may be carried out on vector and parallel processors. (3) Finally, for the choice of $V_0$, there exist many different possibilities. We will discuss this issue after Theorem 3.3, based on the literature.
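For instance (an illustrative check only, not part of the algorithm itself), the weakened condition of item (1) can be verified directly for a given pair $(A,V_0)$ as in the following sketch, although in practice one would rely on cheap upper bounds for the spectral radius such as those in [11]; the test matrix is arbitrary.

    n = 8;
    A = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n];   (* illustrative test matrix *)
    V0 = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);
    rho = Max[Abs[Eigenvalues[IdentityMatrix[n] - A.V0]]]    (* must be < 1 for convergence *)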
We now give a property of the scheme (3.1). This property shows that $\{V_n\}_{n=0}^{\infty}$ of (3.1) may be applied not only to the left preconditioned linear system $V_nAx=V_nb$ but also to the right preconditioned linear system $AV_ny=b$, where $x=V_ny$.

Theorem 3.3. Let again $A=[a_{i,j}]_{N\times N}$ be a nonsingular real or complex matrix. If
$AV_0=V_0A$ (3.14)
is valid, then, for the sequence $\{V_n\}_{n=0}^{\infty}$ of (3.1), one has that
$AV_n=V_nA$ (3.15)
holds for all $n=1,2,\ldots$.

Proof. Mathematical induction is employed here. First, since $AV_0=V_0A$, we have
$AV_1=\frac{1}{16}AV_0\left(120I+AV_0(-393I+AV_0(735I+AV_0(-861I+AV_0(651I+AV_0(-315I+AV_0(93I+AV_0(-15I+AV_0)))))))\right)$
$=\frac{1}{16}V_0A\left(120I+V_0A(-393I+V_0A(735I+V_0A(-861I+V_0A(651I+V_0A(-315I+V_0A(93I+V_0A(-15I+V_0A)))))))\right)$
$=\frac{1}{16}V_0\left(120I+V_0A(-393I+V_0A(735I+V_0A(-861I+V_0A(651I+V_0A(-315I+V_0A(93I+V_0A(-15I+V_0A)))))))\right)A$
$=V_1A.$ (3.16)
Equation (3.16) shows that (3.15) is true for $n=1$. Now suppose that $AV_n=V_nA$ holds; then a straightforward calculation along the lines of (3.16) shows that, for all $n\geq 1$,
$AV_{n+1}=\frac{1}{16}AV_n\left(120I+AV_n(-393I+AV_n(735I+AV_n(-861I+AV_n(651I+AV_n(-315I+AV_n(93I+AV_n(-15I+AV_n)))))))\right)$
$=\frac{1}{16}V_n\left(120I+V_nA(-393I+V_nA(735I+V_nA(-861I+V_nA(651I+V_nA(-315I+V_nA(93I+V_nA(-15I+V_nA)))))))\right)A$
$=V_{n+1}A.$ (3.17)
This concludes the proof.

Note that, according to the literature [7, 9, 12, 13], in order to find an initial value $V_0$ that preserves the convergence order of such iterative methods, we need to fulfill at least the condition given in Remark 3.2 or Theorem 3.1. We list some ways for this purpose in what follows (a short sketch of all three choices is given after this list).
WAY 1. If a matrix is strictly diagonally dominant, then choose $V_0=\mathrm{diag}(1/a_{11},1/a_{22},\ldots,1/a_{nn})$, where $a_{ii}$ are the diagonal elements of $A$.
WAY 2. For the matrix $A$, choose $V_0=A^T/(\|A\|_1\|A\|_\infty)$, wherein $T$ stands for transpose, $\|A\|_1=\max_j\{\sum_{i=1}^n|a_{ij}|\}$, and $\|A\|_\infty=\max_i\{\sum_{j=1}^n|a_{ij}|\}$.
WAY 3. If ways 1-2 fail, then use $V_0=\alpha I$, where $I$ is the identity matrix and $\alpha\in\mathbb{R}$ should adaptively be determined such that $\|I-\alpha A\|<1$.
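In MATHEMATICA, the three ways can be realized, for instance, as in the following sketch. The test matrix is arbitrary and diagonally dominant, so all three choices are expected to satisfy $\|I-AV_0\|<1$ here; in WAY 3 the value $\alpha=1/\|A\|_\infty$ is only an illustrative choice that happens to work for this particular matrix and would in general have to be tuned adaptively.

    n = 8;
    A = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n];    (* diagonally dominant test matrix *)
    Id = IdentityMatrix[n];
    V0way1 = DiagonalMatrix[1./Diagonal[A]];                  (* WAY 1 *)
    V0way2 = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);     (* WAY 2 *)
    V0way3 = (1./Norm[A, Infinity]) Id;                       (* WAY 3 with alpha = 1/||A||_inf *)
    Norm[Id - A.#] & /@ {V0way1, V0way2, V0way3}              (* typically all below 1 for this matrix *)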

4. Numerical Testing

In this section, experiments are presented to demonstrate the capability of the suggested method. For solving a square linear system of equations of the general form $Ax=b$, wherein $A$ is $N\times N$, we can now propose the following efficient algorithm: $x_{n+1}=V_{n+1}b$, $n=0,1,2,\ldots$, where $V_{n+1}$ is produced by (3.1) (a short sketch of this procedure is given below).
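A direct MATHEMATICA transcription of this algorithm, with an arbitrary dense test matrix and an illustrative stopping tolerance (neither is taken from the test problems below), could read as follows.

    n = 50;
    A = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n];   (* illustrative test matrix *)
    b = ConstantArray[1., n];
    Id = IdentityMatrix[n];
    step[W_] := (1/16) W.(120 Id + A.W.(-393 Id + A.W.(735 Id + A.W.(-861 Id +
        A.W.(651 Id + A.W.(-315 Id + A.W.(93 Id + A.W.(-15 Id + A.W))))))))
    V = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);          (* initial guess, WAY 2 *)
    x = V.b;
    While[Norm[b - A.x] > 10.^-10, V = step[V]; x = V.b];     (* iterate until the residual is small *)
    Norm[b - A.x]                                             (* final residual norm *)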

The programming package MATHEMATICA 8, [14, 15], has been used in this section. For numerical comparisons, we have used the methods (2.1), (2.2), the sixth-order method of Krishnamurthy and Sen (2.4), and the new algorithm (3.1).

As noted in Section 1, such high-order schemes are very fruitful in providing robust preconditioners for linear systems. In fact, we believe that only one full iteration of the scheme (3.1), even for large sparse systems, and with SparseArray[mat] used to reduce the computational load of the matrix multiplications, is enough to find acceptable preconditioners for the linear systems. In any case, using parallel computations with simple commands for this purpose in MATHEMATICA 8 may reduce the computational burden even further.

Test Problem 1
Consider the linear system $Ax=b$, wherein $A$ is the large sparse complex matrix defined by
    n = 1000;
    A = SparseArray[{Band[{1, 120}] -> -2., Band[{950, 1}] -> 2. - I,
        Band[{-700, 18}] -> 1., Band[{1, 1}] -> 23.,
        Band[{1, 100}] -> 0.2, Band[{214, -124}] -> 1.,
        Band[{6, 800}] -> 1.1}, {n, n}, 0.];
The dimension of this matrix is 1000 with complex elements in its structure. The right-hand side vector is considered to be $b=(1,1,\ldots,1)^T$. Although page limitations do not allow us to provide the full form of such matrices, the structure of such matrices can easily be drawn. Figure 1 illustrates the plot and also the array plot of this matrix.

Figure 1: The matrix plot (a) and the array plot (b) of the coefficient complex matrix in test problem 1.

Now, we expect to find robust approximate inverses for $A$ in fewer iterations by the high-order iterative methods. Furthermore, as described in the previous sections, the approximate inverses could be used for left or right preconditioned systems.

Table 1 clearly shows the efficiency of the proposed iterative method (3.1) in finding approximate inverses, by listing the number of iterations and the obtained residual norm. When working with sparse matrices, an important factor that clearly affects the computational cost of the iterative method is the number of nonzero elements. The considered sparse complex matrix $A$ in test problem 1 has 3858 nonzero elements at the beginning. We list the number of nonzero elements for the different matrix multiplication-based iterative schemes in Table 1. It is obvious that the new scheme is much better than (2.1) and (2.2); when compared to (2.4), its number of nonzero elements is higher, but this is completely acceptable in view of the better residual norm obtained. In fact, if one lets (2.4) run longer in order to reach a residual norm that is correct up to seven decimal places, then the resulting number of nonzero elements will exceed the corresponding one for (3.1).

Table 1: Results of comparisons for the test problem 1.

At this time, if the user is satisfied with the obtained residual norm for solving the large sparse linear system, the process can be stopped; otherwise, one can use the attained approximate inverse as a left or right preconditioner and solve the resulting preconditioned linear system, which has a low condition number, using the LinearSolve command. Note that the code written to implement test problem 1 for the method (3.1) is given as follows:
    b = SparseArray[Table[1, {k, n}]];
    DA = Diagonal[A];
    B = SparseArray[Table[(1/DA[[i]]), {i, 1, n}]];
    Id = SparseArray[{{i_, i_} -> 1}, {n, n}];
    X = SparseArray[DiagonalMatrix[B]];
    Do[
      X1 = SparseArray[X];
      AV = Chop[SparseArray[A.X1]];
      X = Chop[(1/16) X1.SparseArray[(120 Id + AV.(-393 Id + AV.(735 Id +
          AV.(-861 Id + AV.(651 Id + AV.(-315 Id + AV.(93 Id +
          AV.(-15 Id + AV))))))))]];
      Print[X];
      L[i] = Norm[b - SparseArray[A.(X.b)]];
      Print["The residual norm of the linear system solution is:"
        Column[{i}, Frame -> All, FrameStyle -> Directive[Blue]]
        Column[{L[i]}, Frame -> All, FrameStyle -> Directive[Blue]]];,
      {i, 1}];

In what follows, we try to examine the matrix inverse-finding iterative methods of this paper on a dense matrix which is of importance in applications.

Test Problem 2
Consider the linear system $Ax=b$, wherein $A$ is the dense matrix defined by
    n = 40;
    A = Table[Sin[(x*y)]/(x + y) - 1., {x, n}, {y, n}];
The right-hand side vector is again considered to be $b=(1,1,\ldots,1)^T$. Figure 2 illustrates the plot and the array plot of this matrix. Since such plots are not very illustrative for dense matrices, we have also drawn, in Figure 3, the 3-D plot of the coefficient dense matrix of test problem 2 using zero-order interpolation, alongside its approximate inverse obtained from the method (3.1) after 11 iterations. We further expect the product of $A$ and its approximate inverse to behave like the identity matrix, and this is clear in part (c) of Figure 3 (a sketch of the relevant plotting commands is given after the figures).

Figure 2: The matrix plot (a) and the array plot (b) of the coefficient dense matrix in test problem 2.
Figure 3: The 3D plot of the structure of the coefficient matrix (a) in the Test Problem 2, its approximate inverse (b) by the method (3.1) and their multiplication to obtain a similar output to the identity matrix (c).
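Figures of this kind can be produced with standard MATHEMATICA plotting commands; a sketch is given below, where A is assumed to hold the coefficient matrix of test problem 2 and V the approximate inverse returned by the iteration (the exact options used for the published figures are not stated in the text and are therefore omitted).

    MatrixPlot[A]                               (* matrix plot, as in Figure 2(a) *)
    ArrayPlot[A]                                (* array plot, as in Figure 2(b) *)
    ListPlot3D[A, InterpolationOrder -> 0]      (* 3-D plot with zero-order interpolation, Figure 3(a) *)
    ListPlot3D[V, InterpolationOrder -> 0]      (* approximate inverse, Figure 3(b) *)
    ListPlot3D[A.V, InterpolationOrder -> 0]    (* identity-like product, Figure 3(c) *)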

Table 2 shows the number of iterations and the obtained residual norm for the different methods, in order to reveal the efficiency of the proposed iteration. There is a clear reduction in computational steps for the proposed method (3.1) in contrast to the other well-known methods of various orders in the literature. Table 2 further confirms the use of such iterative inverse finders for constructing robust approximate inverse preconditioners: the condition number of the coefficient matrix in test problem 2 is 18137.2, which is quite high for a 40×40 matrix, but, as can be seen in Table 2, the condition number of the preconditioned matrix after the specified number of iterations is very small.

Table 2: Results of comparisons for the test problem 2 with condition number 18137.2.

We should note here that the computational time required to implement all the methods in Tables 1 and 2 for Test Problems 1 and 2 is less than one second, and for this reason we have not listed timings herein. For the second test problem, we have used $V_0=A^T/(\|A\|_1\|A\|_\infty)$.

5. Concluding Remarks

In the present paper, the author, along with the helpful suggestions of the reviewers, has developed an iterative method for matrix inversion. Note that such high-order iterative methods are very efficient for very ill-conditioned linear systems or for finding robust approximate inverse preconditioners. We have shown analytically that the suggested method (3.1) reaches seventh order of convergence. Moreover, the efficacy of the new scheme was illustrated numerically in Section 4 by applying it to a sparse matrix and an ill-conditioned dense matrix. All the numerical results confirm the theoretical aspects and show the efficiency of (3.1).

Acknowledgments

The author would like to take this opportunity to sincerely acknowledge the many valuable suggestions made by the two anonymous reviewers, especially the third suggestion of reviewer 1, which substantially improved the quality of this paper.

References

1. T. Sauer, Numerical Analysis, Pearson, 2nd edition, 2011.
2. N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, 2nd edition, 2002.
3. D. K. Salkuyeh and H. Roohani, “On the relation between the AINV and the FAPINV algorithms,” International Journal of Mathematics and Mathematical Sciences, vol. 2009, Article ID 179481, 6 pages, 2009.
4. M. Benzi and M. Tuma, “Numerical experiments with two approximate inverse preconditioners,” BIT, vol. 38, no. 2, pp. 234–241, 1998.
5. E. Chow and Y. Saad, “Approximate inverse preconditioners via sparse-sparse iterations,” SIAM Journal on Scientific Computing, vol. 19, no. 3, pp. 995–1023, 1998.
6. G. Schulz, “Iterative Berechnung der reziproken Matrix,” Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, pp. 57–59, 1933.
7. H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, “Chebyshev-type methods and preconditioning techniques,” Applied Mathematics and Computation, vol. 218, no. 2, pp. 260–270, 2011.
8. E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms—Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 1986.
9. G. Codevico, V. Y. Pan, and M. Van Barel, “Newton-like iteration based on a cubic polynomial for structured matrices,” Numerical Algorithms, vol. 36, no. 4, pp. 365–380, 2004.
10. W. Li and Z. Li, “A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix,” Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433–3442, 2010.
11. H. B. Li, T. Z. Huang, S. Shen, and H. Li, “A new upper bound for spectral radius of block iterative matrices,” Journal of Computational Information Systems, vol. 1, no. 3, pp. 595–599, 2005.
12. V. Y. Pan and R. Schreiber, “An improved Newton iteration for the generalized inverse of a matrix, with applications,” SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109–1130, 1991.
13. K. Moriya and T. Nodera, “A new scheme of computing the approximate inverse preconditioner for the reduced linear systems,” Journal of Computational and Applied Mathematics, vol. 199, no. 2, pp. 345–352, 2007.
14. S. Wagon, Mathematica in Action, Springer, 3rd edition, 2010.
15. S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.