
Journal of Applied Mathematics

Volume 2013 (2013), Article ID 185648, 10 pages

http://dx.doi.org/10.1155/2013/185648

## Computation of the c-Table Related to the Padé Approximation

^{1}Uniwersytet Technologiczno-Humanistyczny im. K. Pułaskiego, ul. Malczewskiego 20a, 26-600 Radom, Poland

^{2}Aix Marseille Université, CPT CNRS, UMR 7332, Luminy, Case 907, 13288 Marseille Cedex 9, France

^{3}Université de Toulon, CNRS, CPT, UMR 7332, 83957 La Garde, France

Received 27 June 2013; Revised 13 September 2013; Accepted 24 September 2013

Academic Editor: Nicola Mastronardi

Copyright © 2013 Radosław Jedynak and Jacek Gilewicz. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The aim of this paper is to give a complete and practical method for the numerical application of Padé approximation with the help of c-table analysis. We present an exhaustive list of useful formulas for computing the c-table related to a formal power series. Some of these formulas are not widely known, because they appeared in publications of limited circulation. Others were never published, such as three symmetric Paszkowski-like formulas for overcoming the blocks in a c-table and an extension of the local error formula for Padé approximants in the blocks. All formulas are given in two versions: in terms of Toeplitz determinants (c-table) and in terms of Hankel determinants (C-table). We compare the theory with numerical observations by reproducing different computational aspects of software producing c-tables in the presence of blocks, and their evolution following the evolution of the computing environment.

#### 1. Introduction

The Padé approximation method is widely used to solve many problems of numerical analysis, such as convergence acceleration, analytic continuation of complex functions, moment problems, and numerical integration [1–3] and, more generally, to approximate functions of a complex variable represented by a truncated power series and to detect their zeros and poles. The method is also commonly applied to numerous problems in physical modeling [4]. Some historical background and the fundamental concepts of the method can be found in [5] and in the recently published book by Trefethen [6]. In [7] the authors present methods for constructing matrix Padé approximants when the coefficient matrices of the input matrix polynomial are triangular; the extended Euclidean algorithm is applied to solve this problem. The application of Padé approximants to computational problems frequently starts with the computation of an auxiliary table, called the c-table [8]. The entries of the c-table are the Toeplitz determinants of the matrices of the linear systems defining the denominators of the Padé approximants. The square blocks of zeros in the c-table indicate the existence of corresponding blocks in the table of Padé approximants. The so-called valleys in the c-table are the lines of minimal absolute values of entries, which indicate the lines of best Padé approximants (BPA) in the Padé table [9]. Because the computation of the c-table is simpler, it is recommended to begin with it, obtaining global preliminary information about the interesting Padé approximants before computing the approximants themselves.

Let us recall some definitions and results related to Padé approximation.

Let be a formal power series of a function . It can be a Maclaurin series of an analytic function at or an asymptotic series, such as a Stieltjes series having a zero radius of convergence [10, 11].

*Definition 1. *Let
denote in general a rational function. The Padé table (-table) of the formal power series (1) is a doubly infinite array of irreducible rational functions called *reduced Padé forms*, determined in such a manner that the Maclaurin expansion of agrees with as far as possible.

The first column in Table 1 contains the truncated series (1). Because a reduced Padé form can be normalized by dividing all coefficients by one coefficient , it contains only unknowns, which can be determined by the first coefficients of the series or, equivalently, by the first derivatives at :

This is usually expressed by the condition which means that the power series expansion of matches the power series expansion of at at least up to the power .

*Definition 2 (see Gilewicz [11, page 171]). *If the reduced Padé form satisfies (5), then it is a Padé approximant and is denoted by : .

*Definition 3. *The power series and the associated Padé table are said to be normal if every element of the table is different from any other element. Each Padé approximant then reproduces the series exactly up to the term in .

Multiplying (5) by we obtain which leads to the separated linear system for the coefficients and ( if ):

The system (8) of equations for unknowns always has a nontrivial solution (Frobenius), which determines the reduced Padé form. Let us note that (6) is equivalent to (5) only if is invertible, that is, if . Following this remark, Baker presented an equivalent definition of the Padé approximant as follows.
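Concretely, the denominator system and the numerator substitution can be sketched in exact rational arithmetic as follows (a minimal illustration using the conventional normalization b_0 = 1; the function names are ours):

```python
from fractions import Fraction

def solve(A, rhs):
    """Gauss-Jordan elimination over Fractions (exact, no rounding)."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant of sum c[k] z^k, with b_0 normalized to 1.

    Returns (a, b): numerator coefficients a_0..a_m and denominator
    coefficients b_0..b_n.  The denominator solves the Toeplitz system
    sum_{j=0}^{n} b_j c_{m+k-j} = 0 for k = 1..n; the numerator follows
    by substitution, a_i = sum_j b_j c_{i-j}.
    """
    cc = [Fraction(x) for x in c]
    get = lambda k: cc[k] if 0 <= k < len(cc) else Fraction(0)
    A = [[get(m + k - j) for j in range(1, n + 1)] for k in range(1, n + 1)]
    rhs = [-get(m + k) for k in range(1, n + 1)]
    b = [Fraction(1)] + solve(A, rhs)
    a = [sum(b[j] * get(i - j) for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b
```

For the exponential series 1 + z + z²/2 + z³/6 + … this yields the classical [1/1] approximant (1 + z/2)/(1 − z/2).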

*Definition 4 (see Baker [10, page 5]). *The Padé approximant is a solution of linear systems (8) and (7) with the condition ( if ):

The power series converges only inside its circle of convergence; the Padé approximants reproduce the zeros and poles of meromorphic functions and give a good approximation of these functions outside the circle of convergence as well. Moreover, in the particular case where the series represents a rational function, the Padé approximation method finds this function automatically. In other cases the BPA algorithms allow finding excellent approximations. A Padé approximant is the best local rational approximation of the function in the vicinity of the point of expansion of the power series. Of course, we are not limited to the point , and if we are interested in other regions we can compute the Padé approximants at an arbitrary point of analyticity, knowing the coefficients of the series there.

The determinant of the matrix of the system (10) is the Toeplitz determinant: where we put if . Defining these determinants for all indices, we can build an infinite table of them, called the c-table.

The second column of Table 2 contains the coefficients of the power series . The first row contains the powers of , and the second row can be computed recursively by expanding :

The c-table was first introduced by Gragg [8]. Baker [10] used an alternative definition, permuting the rows in (11), calling the resulting determinants Hankel determinants and the corresponding table a C-table. An obvious relation between the two is
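As an illustration of the Hankel version, the following sketch computes the determinants H_n^{(m)} = det[c_{m+i+j}], i, j = 0..n−1, directly (one common convention; the helper names are ours). The Catalan numbers are a classical test case: all their Hankel determinants H_n^{(0)} equal 1.

```python
from fractions import Fraction

def det(M):
    """Exact determinant by fraction-preserving Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)          # singular matrix
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return sign * d

def hankel_det(c, m, n):
    """H_n^{(m)} = det [c_{m+i+j}], i, j = 0..n-1 (H_0 = 1 by convention)."""
    if n == 0:
        return Fraction(1)
    return det([[c[m + i + j] for j in range(n)] for i in range(n)])

# Catalan numbers: every Hankel determinant H_n^{(0)} equals 1.
catalan = [1, 1, 2, 5, 14, 42, 132, 429]
table = [hankel_det(catalan, 0, n) for n in range(4)]
```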

Interest in the c-table is due to its direct relation to the Padé table [8, 11] and to the particular ease of its computation. The c-table contains information about the block structure of the Padé table: each square block of zeros in the c-table defines a corresponding block in the Padé table. Moreover, the dominant term of the error of a Padé approximant located outside a block is given by a ratio of two Toeplitz determinants:

A theory of valleys in the c-table [9], leading to a numerical algorithm for the choice of the best Padé approximant [11], is based on property (14).

In general a c-table presents a square block structure. We denote by a square block of all equal entries with north-west corner . In each block the entries are Padé approximants satisfying , but the reduced forms located under the antidiagonal of a block are not Padé approximants, because (see (16)). If the Padé table presents a block, the c-table presents a block of zeros with north-west corner at position , . Now the following property becomes evident.

*Property 1. *The Padé table is normal if all entries of the c-table are different from zero.

Let us complete the error formula (14) by the corresponding formula for belonging to the block : where

This formula, given in [10, 12], is expressed here in terms of Toeplitz and Hankel determinants. An extension of these representations, using other determinants around the zeros block in the c-table, will be presented in the next section.

The existence of an infinite block means that the series represents a rational function. If a c-table presents a block, then the Padé approximant computed with the coefficients located on the antidiagonal in the c-table is clearly the best Padé approximant (BPA) on this antidiagonal, because it is the only approximant which reproduces the series exactly up to the power . A number of algorithms for the choice of the BPA are presented in [11]. The detailed modern theory of Padé approximants and an exhaustive list of algorithms for their calculation are presented in [10, 11]. The entries of the Padé table can be calculated directly by solving the linear system (10) ((9) reduces to mere substitutions) or by more efficient recursive algorithms adapted to Toeplitz systems. One of these is based on the Wynn formula, which permits computing each element from four others (ascending algorithm). This method is laborious, and so it is simpler to discover the block structure of the Padé table by analyzing the block structure of the c-table. In numerical practice we are always interested in the BPA, and the quickest way to determine the position of the BPA passes through the analysis of the c-table.
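The claim that a rational function produces an infinite block can be checked directly. For the geometric series c_k = 2^k, generated by the rational function 1/(1 − 2z), every Hankel determinant of order n ≥ 2 vanishes (a small self-contained check; the helper names are ours):

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion (fine for the small orders used here)."""
    if not M:
        return Fraction(1)
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def H(c, m, n):
    """Hankel determinant H_n^{(m)} of the coefficient sequence c."""
    return det([[Fraction(c[m + i + j]) for j in range(n)] for i in range(n)])

# f(z) = 1/(1 - 2z) is rational: c_k = 2^k.  All Hankel determinants of
# order n >= 2 vanish, i.e. the C-table ends in an infinite block of zeros.
c = [2 ** k for k in range(10)]
zeros = [H(c, m, n) for m in range(3) for n in range(2, 4)]
```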

#### 2. Computation of the c-Table in the Normal Case: Numerical Recommendations

The particular form of Toeplitz (resp., Hankel) determinants permits computing them recursively using the Sylvester formula, avoiding the laborious direct computation by (11) (resp., (13)). Starting from the first two columns, one can compute the “East” elements by the *ascending algorithm* [13]:

Alternatively, starting from the first two rows one can compute the “South” elements by the *descending algorithm*:

The above order of arithmetic operations diminishes the risk of overflow or underflow. In [13] we studied the complexity of both algorithms, counting the multiplications and divisions (including those of (12) needed for the descending algorithm) necessary to obtain the determinants from the normalized sequence , that is, with . The cost of the normalization of each coefficient in the second column of the c-table is equal to . The costs in the second row are then given by (12). The cost of an element computed by (22) is then , where , , , and are the previous costs of the four elements entering the formula. In fact this is the cost of calculating all the determinants disposed in the triangle pointed on the last element, if it is situated under or on the diagonal. Analogously, the global cost of calculating all determinants ending with the last one by the descending algorithm is . The following separation line optimizes the cost of each algorithm.
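In the Hankel version, the underlying Sylvester relation is the Desnanot–Jacobi determinant identity, which yields each new row of the table from the two previous ones as long as the divisor stays nonzero (the normal case). A sketch of the row-by-row computation, with our function names:

```python
from fractions import Fraction

def sylvester_table(c, max_n):
    """Hankel determinants H_n^{(m)} = det[c_{m+i+j}] built recursively.

    Row n = 0 is all ones (empty determinant) and row n = 1 is the
    coefficient sequence itself; each further entry follows from the
    Desnanot-Jacobi (Sylvester) identity
        H_n^{(m)} * H_{n-2}^{(m+2)} = H_{n-1}^{(m)} * H_{n-1}^{(m+2)} - H_{n-1}^{(m+1)}**2,
    valid while the divisor H_{n-2}^{(m+2)} is nonzero (normal case).
    """
    cc = [Fraction(x) for x in c]
    rows = {0: [Fraction(1)] * len(cc), 1: cc[:]}
    for n in range(2, max_n + 1):
        prev, pprev = rows[n - 1], rows[n - 2]
        rows[n] = [(prev[m] * prev[m + 2] - prev[m + 1] ** 2) / pprev[m + 2]
                   for m in range(len(cc) - 2 * (n - 1))]
    return rows

# c_k = k!: a Stieltjes-like sequence whose determinants are all nonzero.
rows = sylvester_table([1, 1, 2, 6, 24, 120, 720], 3)
```

Each entry then costs two multiplications and one division, against the O(n³) of a fresh determinant; the descending variant solves the same identity for the other unknown.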

In the following example we give the number of operations needed by the descending algorithm, followed by the number needed by the ascending one:

The precision of an element computed by (22) is, crudely estimating, proportional to the global cost , where , , , and are the previous global costs. We count and twice, because each occurrence of an element in the arithmetic expression introduces its error. On the other hand, if we wish to optimize the precision, we must take into account all arithmetic operations, including additions and subtractions. The errors of differences play a dominant role, but they are not considered here; consequently the separation line would be modified. The precision observed numerically using multiprecision calculus (see details in [14]) showed that the separation line moves up a few places, giving a clear advantage to the ascending algorithm.

*Example 5. *A fragment of the 13th column of the c-table of the series of the Stieltjes function, computed by the ascending algorithm (AA) and the descending algorithm (DA) in double precision (see Table 3).

The underlined elements (Example 5) computed by the ascending algorithm, situated in the region of accuracy of the descending algorithm, are a little better than those computed by the descending algorithm. The exact values were determined by multiprecision calculus with 32 digits.

#### 3. Overflow, Underflow, and Detection of Blocks: Numerical Recommendations

No problems arise in a numerical computation if all coefficients are integers [8, 12]: if a block exists, integer arithmetic produces its zeros exactly. Unfortunately, in the real case the computational problem is generally much more difficult. Numerical experiments show that the determinants decrease or increase quite rapidly. The risk of overflow or underflow appears approximately for , where is the number of decimal digits of the floating-point representation ([11, page 344]). The Fortran subroutines given in [13, 15] use masks allowing the determination of the exponents of the real constants composing the arithmetic expression (22) or (23) before its evaluation. The program then checks whether the global exponent is inside the authorized range and authorizes the evaluation or not. Knowing the computer used, it is possible to overcome overflows without stopping the execution of the program, but the use of this technique compromises the portability of the software.
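The masking technique is Fortran-specific, but the idea ports easily: estimate the binary exponent of the result from the exponents of the operands before evaluating the expression. A sketch (the function name, the expression shape (a·b − c²)/d, and the bound are our illustrative assumptions):

```python
import math

def safe_to_evaluate(a, b, c, d, max_exp=1000):
    """Cheap overflow/underflow guard for the expression (a*b - c*c) / d.

    Instead of masking bits as the original Fortran subroutines do, we add
    the binary exponents (math.frexp) of the operands and refuse to evaluate
    when the result's magnitude could leave the representable range.  The
    bound max_exp is an assumption tuned to IEEE double (max exponent 1023).
    """
    exps = [math.frexp(x)[1] if x != 0 else 0 for x in (a, b, c, d)]
    worst = max(exps[0] + exps[1], 2 * exps[2]) - exps[3]
    return abs(worst) < max_exp
```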

The underflow problem is related to another important question: is the computed value of a determinant equal to zero or not? The Vignes permutation-perturbation method for detecting the so-called “zéro informatique” [16] gives a satisfactory statistical estimation, allowing one to decide whether a determinant vanishes, but in our practical computational problems its cost is too great. The crude estimation is the following: where denotes the base of representation of real constants in the computer (essentially , or ). The second method, purely empirical, is due to Guziński [17], who follows the monotonicity of the ascending or descending sequences: , or , respectively. Using 32-bit arithmetic (single precision), he estimates that the true value of the determinant (let us call it “new”) is equal to zero if the previous monotonicity changes significantly, that is, if the new determinant satisfies the following inequality with respect to the previous one:
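The exact inequality (26) is not reproduced here; the following sketch conveys the spirit of Guziński's test, flagging a computed determinant as a true zero when it falls below the geometric extrapolation of its predecessors by a large factor (the threshold is our assumption and must be tuned to the arithmetic in use):

```python
def looks_like_zero(prev2, prev, new, jump=1e8):
    """Empirical 'informatic zero' test in the spirit of Guzinski's criterion.

    The expected magnitude of the next entry is extrapolated geometrically
    from the two predecessors; `new` is declared a true zero when it breaks
    that monotonicity by more than the (assumed) factor `jump`.
    """
    expected = abs(prev) * (abs(prev) / abs(prev2))  # geometric extrapolation
    return abs(new) < expected / jump
```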

We recommend this efficient method, which allows the detection of the zeros blocks in c-tables very quickly. Of course, it must be readapted to the real-constant representation in use. The two criteria, (25) and (26), are well illustrated by Example 6, where the second table represents a fragment of the c-table for a series produced by a rational function:

The numerical constants are denoted shortly by their decimal exponents: for instance, is denoted by . The first table represents the approximative limits (25) for the elements. The underlined elements correspond to the beginning of the infinite block of zeros starting at element . As shown here, both criteria work: the underlined elements on the right (c-table) are less than the corresponding limits on the left, and the Guziński criterion is also satisfied for these elements.

*Example 6 (zeros block in the c-table). *If a first zero is detected by the ascending algorithm, it corresponds to the north-west corner of a block of zeros. We then stop the ascending computation of this antidiagonal and pass to the next antidiagonal to verify the value of the determinant situated under this first zero. Suppose that we find zeros starting from , which indicate the existence of the block . The next section is devoted to the strategy for continuing the calculation.

The “normal” monotonicity, as opposed to the jumps (26), can be tested by reference to a normal c-table corresponding, for instance, to the Stieltjes function (see Table 4).

Example 7 presents a fragment of the c-table for a series produced by the mentioned function.

*Example 7 (fragment of the c-table of the Stieltjes function ). *The minima on the antidiagonals are underlined. They correspond to the positions of the best Padé approximants on the corresponding antidiagonals in the Padé table. Joining these minima, we obtain the valley structure in the c-table mentioned in Section 6 (see Table 5).

Using 64-bit arithmetic (double precision), the risk of overflow or underflow is rather marginal [14]. Both of the mentioned representations of numerical values are specified by the IEEE Standard for Floating-Point Arithmetic (IEEE 754).

#### 4. Nonnormal Case: Froissart-Gilewicz and Paszkowski Identities to Overcome Blocks

The proof of the geometric progression of the elements surrounding the zeros blocks in the c-table, and an analytic proof of the identity allowing the computation of the east or south elements of these blocks when the Sylvester identity fails, are presented in [11]. A purely algebraic proof of an equivalent identity is given by Paszkowski in [12]. Three new Paszkowski-like identities and an extension of formula (19) are proved and given in this section. Figure 2 illustrates the c-table corresponding to the block , with two “shells” adjacent to the zeros block.

The elements surrounding the blocks of zeros form geometric sequences (attention: the direction of the west and north arrows and the superscripts on the south and east sides are inverted with respect to [11, 12] and now correspond to the direction of calculation of the elements of the c-table by the presented algorithms) running from the corners as indicated by arrows, with respective ratios , , , and . It then suffices to know four entries: , , (these three are already computed before the detection of the block) and (the first nonzero element after the zeros on the column, which identifies the block) to calculate all the elements surrounding the block of zeros in the following way:
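The reconstruction of a block border is then a one-line geometric extrapolation: the corner entry and the ratio of one side determine every entry of that side (a trivial but convenient helper; the name is ours):

```python
def block_border(corner, ratio, length):
    """Geometric sequence of border entries running away from a block corner.

    Per the text, the nonzero entries along each side of a zeros block form
    a geometric progression from the corner, so the corner entry and the
    ratio determine the whole side.
    """
    vals = [corner]
    for _ in range(length - 1):
        vals.append(vals[-1] * ratio)
    return vals
```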

Two additional relations were proved in [11]: where , , , and . The relations (31) below are analogous to (29), expressed in terms of Hankel determinants: where now , , , and . Using the above relations, we can easily extend the formulas for in (18) to other ratios of Toeplitz determinants located around the zeros block. Let us simplify the notation of the left part of formula (19) as follows: and denote the next determinants by , , , and , as indicated in Figure 3, corresponding to the block .

Then we have the following equivalent formulas: where denotes the remainder of the division of by .

The ascending Sylvester algorithm fails for the second east column because of division by zero, that is, for with . The descending algorithm fails for the second south row, that is, for with . This creates the “east shadow” region and the “south shadow” region, where the elements cannot be computed by the respective algorithms. These regions are shown in Figure 4.

If no block exists in the way of the descending algorithm, then the east shadow region can be computed by this algorithm. Analogously, the ascending algorithm can be used to compute the south shadow region. The two combined algorithms then allow the easy computation of many c-tables with blocks, without the use of the special identities presented in this section. Of course, these identities can be used to compute the black east column or the black south row, which allows continuing with the respective ascending or descending algorithm. In fact, the Sylvester algorithms fail only in the rare situations illustrated in Figure 5. Let be the intersection of an east shadow region produced by a block and a south shadow region produced by the block . Let be the intersection of with the second east column on the right side of the block, and let be the intersection of with the second south row at the bottom of the block. The combined Sylvester algorithm fails only if both and (in general: and ) are nonempty.

In this particular case it is necessary to compute by the Froissart-Gilewicz or Paszkowski identity, in order to be able to continue the recursive computation of the c-table by the combined Sylvester algorithms. If not, we can compute the south black elements at the bottom of the blocks of zeros and continue the computation by the Sylvester algorithms. The last strategy is recommended if , that is, in the opposite situation to the one seen in Figure 5.

However, this “theoretical” strategy based on an extensive application of the combined Sylvester algorithm can only be used near the diagonal of a c-table. Indeed, the complexity diagram (Figure 1, Section 2) shows that outside this region the cost of computing the needed elements is much lower if they are computed by the Froissart-Gilewicz or Paszkowski formulas.

The first identity allows going around a block in the c-table (i.e., for ). It was proved by Froissart and Gilewicz [11]. Using the notation of Figure 2, it can be presented in the following determinantal form:

After computing the elements and/or , one can continue the computation using the Sylvester formulas. This identity, written in terms of Hankel determinants, is

The Paszkowski identity for , equivalent to the previous one, also gives and/or :

Paszkowski presented this identity in the elegant form of a determinant:

Following (32), we have and , which give three new Paszkowski-like identities:

In terms of Hankel determinants these identities become

Achuthan and Ponnuswamy published [18] a detailed analysis of a table of approximants of McCabe fractions in the nonnormal case. To this so-called table there corresponds a table of Toeplitz (or Hankel) determinants, like the c-table (or C-table). These tables have a square block structure analogous to the one presented in this work. Consequently, all our formulas can be applied to compute the elements around the blocks in the table of Hankel determinants related to this table.

#### 5. Computation of Padé Approximants around the Blocks: Numerical Recommendations

Our goal consists of indicating the location of the blocks in the Padé table and then giving some information about the location of the BPA by means of the analysis of the structure of the c-table. The next step of the approximation work consists of computing the relevant Padé approximant. Many methods of computation of Padé approximants use the relation between the convergents of continued fractions and Padé approximants [19]. However, if we wish to compute an individual Padé approximant by solving the linear system (10) followed by the substitutions (9), we can use the Cholesky method, whose cost is

The cost of the Cholesky method, or of other methods adapted to this particular symmetric matrix (see [11, page 370] or [20]), represents one-half of the cost of the solution by the Gauss method. For instance, the cost of the computation of by Cholesky is , but by the Longman algorithm it is . Using this last algorithm we compute all Padé approximants located in the triangle ended by the antidiagonal . If we need to know the whole Padé table, it is recommended to use an efficient recursive algorithm (see [10, 11, 20]) computing the so-called triangle in the Padé table from the first coefficients of the power series. To do this it is also useful to know the generalization of the Wynn cross identity (20), obtained by Florent Cordellier (see [11, page 369]), for the Padé approximants surrounding a block in the Padé table: where , , , , and occupy symmetric positions around the block as indicated in Figure 6.

#### 6. Best Padé Approximant (BPA)

The Padé approximant to a function is the best local approximation of that function with respect to the norm of uniform convergence. However, in practice we are commonly interested in the nonlocal quality of this approximation, for instance as an analytic continuation of the function. Knowing the first coefficients of the power expansion (1), we can compute all Padé approximants with , that is, the so-called triangle in the Padé table. The entries on the last antidiagonal contain the complete initial information. The natural question is: where is the BPA on each antidiagonal? The answer is given in [11], where four empirical algorithms for the choice of the BPA, based on numerical experiments and justified for a few classes of functions, are proposed:

(i) method of the sequence : analysis of its behavior, which also allows the elimination of badly computed coefficients;

(ii) method of valleys: the minima of on the antidiagonals indicate the positions of the BPA and/or the positions of the blocks (see Example 7);

(iii) method of coefficients of the PA: estimation of the coefficients which can be neglected, allowing the detection of the blocks;

(iv) method of the Gram matrix: analysis of the eigenvalues of , where is the matrix of the system (10); this shows whether is equal to zero or not.
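The method of valleys is straightforward to automate: scan each antidiagonal of the c-table for its entry of minimal absolute value (a sketch with our naming; `table[m][n]` is assumed to hold the entry with row index m and column index n):

```python
def valley(table):
    """Positions of the minimal |entry| on each antidiagonal of a c-table.

    `table` is a rectangular list of lists.  Joining the returned (m, n)
    positions traces the 'valley' that indicates the best Pade approximants
    on the corresponding antidiagonals of the Pade table.
    """
    rows, cols = len(table), len(table[0])
    positions = []
    for s in range(rows + cols - 1):          # antidiagonal m + n = s
        cells = [(m, s - m) for m in range(rows) if 0 <= s - m < cols]
        positions.append(min(cells, key=lambda mn: abs(table[mn[0]][mn[1]])))
    return positions
```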

In numerous scientific papers applying the Padé approximation method, the authors automatically use the diagonal Padé approximants without any justification. This frequently leads to erroneous conclusions, such as those of Van Dyke (reported in [11, page 414]) who, after selecting the PA for his problem, claimed that the PA method is not satisfactory in his case. Curiously, that PA is the worst choice among all possible elements of the Padé table, and the BPA gives a result ten times better than the best approximation proposed by Van Dyke!

We repeat that our goal is to help authors wishing to apply the Padé approximation method correctly by detecting the BPA quickly.

#### 7. Numerical Observations and Computational Suggestions

In numerical practice the coefficients used for the computation of Padé approximants are first computed numerically or determined experimentally. They are therefore affected by some noise and some errors, in general increasing with the index. The method of choice of the best Padé approximant (BPA) [11] allows the elimination of the last, badly computed coefficients. This empirical method and the above recommendation do not require computing the approximants. Many papers (see [11, 21–24]) have been devoted to analyzing the effect of noise on Padé approximants. The so-called *Froissart doublets* (zero-pole doublets) appearing in the Padé approximants are created by the noise. Unfortunately, no efficient method of filtering the noise from the coefficients has been obtained so far: the simple elimination of the Froissart doublets spoils the quality of the resulting fraction obtained by this cleaning of the Padé approximant.
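A tiny experiment shows why noise blurs the block structure: for the geometric series c_k = 2^k the 2×2 Hankel determinants vanish exactly, while relative noise of size eps on the coefficients turns these exact zeros into entries of order eps (the noise model and magnitudes are our illustrative assumptions):

```python
import random

def det2(a, b, c, d):
    """2x2 determinant | a b ; c d |."""
    return a * d - b * c

random.seed(0)
eps = 1e-10
c = [2.0 ** k for k in range(4)]
noisy = [x * (1 + eps * random.uniform(-1, 1)) for x in c]
exact = det2(c[0], c[1], c[1], c[2])                    # 1*4 - 2*2 = 0 exactly
blurred = det2(noisy[0], noisy[1], noisy[1], noisy[2])  # of order eps, not 0
```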

#### 8. Conclusion

This text contains a complete list of useful identities and empirical numerical rules. Some of these are new or were never published in widely available journals. We hope that the presented strategy of analyzing the c-table before the calculation of the BPA or of the Padé table will improve the efficient use of the Padé approximation method.

#### Acknowledgments

The second author wishes to thank Professor Luc Wuytack, who encouraged him many years ago to write the unpublished note [25] which is at the origin of this review. The authors would like to thank the referees for their very valuable corrections.

#### References

[1] D. Bahuguna, A. Ujlayan, and D. N. Pandey, “A comparative study of numerical methods for solving an integro-differential equation,” *Computers & Mathematics with Applications*, vol. 57, no. 9, pp. 1485–1493, 2009.

[2] R. Jedynak and J. Gilewicz, “Approximation of the integrals of the Gaussian distribution of asperity heights in the Greenwood-Tripp contact model of two rough surfaces revisited,” *Journal of Applied Mathematics*, vol. 2013, Article ID 459280, 7 pages, 2013.

[3] R. Jedynak and J. Gilewicz, “Approximation of smooth functions by weighted means of N-point Padé approximants,” *Ukrainian Mathematical Journal*, vol. 65, no. 10, pp. 1410–1419, 2013.

[4] D. Roy, “Global approximation for some functions,” *Computer Physics Communications*, vol. 180, no. 8, pp. 1315–1337, 2009.

[5] A. I. Aptekarev, V. I. Buslaev, A. Martines-Finkel'shteĭn, and S. P. Suetin, “Padé approximants, continued fractions, and orthogonal polynomials,” *Russian Mathematical Surveys*, vol. 66, no. 6, pp. 37–122, 2011.

[6] L. N. Trefethen, *Approximation Theory and Approximation Practice*, SIAM, Philadelphia, Pa, USA, 2013.

[7] M. Kaliyappan, S. Ponnusamy, and S. Sundar, “Recursive formulation of the matrix Padé approximation in packed storage,” *Computers & Mathematics with Applications*, vol. 59, no. 4, pp. 1532–1540, 2010.

[8] W. B. Gragg, “The Padé table and its relation to certain algorithms of numerical analysis,” *SIAM Review*, vol. 14, pp. 1–16, 1972.

[9] J. Gilewicz and A. Magnus, “Valleys in $c$-table,” in *Padé Approximation and Its Applications (Proc. Conf., Univ. Antwerp, Antwerp, 1979)*, L. Wuytack, Ed., vol. 765 of *Lecture Notes in Mathematics*, pp. 135–149, Springer, Berlin, Germany, 1979.

[10] G. A. Baker, Jr., *Essentials of Padé Approximants*, Academic Press, New York, NY, USA, 1975.

[11] J. Gilewicz, *Approximants de Padé*, vol. 667 of *Lecture Notes in Mathematics*, Springer, Berlin, Germany, 1978.

[12] S. Paszkowski, “Evaluation of a $C$-table,” *Journal of Computational and Applied Mathematics*, vol. 44, no. 2, pp. 219–233, 1992.

[13] J. Gilewicz and E. Leopold, “Subroutine CTABLE for best Padé approximant detection,” preprint CPT-81/P.1324, 1981.

[14] J. Gilewicz and M. Pindor, “Computation of *c*-table and detection of valleys,” *Ukrainian Journal of Mathematics*, vol. 62, pp. 762–772, 2010.

[15] M. H. Lallemand, *Logiciel pour la détection du Meilleur Approximant de Padé [Ph.D. thesis]*, DEA, CPT-CNRS, Marseille, France, 1985.

[16] J. Vignes, “Review on stochastic approach to round-off error analysis and its applications,” *Mathematics and Computers in Simulation*, vol. 30, no. 6, pp. 481–491, 1988.

[17] W. Guziński, *PADELIB: Program Library for Padé Approximation*, Inst. of Nuclear Research, Warsaw, Poland, 1978 (in Polish).

[18] P. Achuthan and S. Ponnuswamy, “On general two-point continued fraction expansions and Padé tables,” *Journal of Approximation Theory*, vol. 64, no. 3, pp. 291–314, 1991.

[19] J. Gilewicz and R. Jedynak, “Compatibility of continued fraction convergents with Padé approximants,” in *Approximation and Computation. In Honor of Gradimir V. Milovanović*, W. Gautschi, G. Mastroianni, and Th. M. Rassias, Eds., vol. 42 of *Springer Optimization and Its Applications*, pp. 135–144, Springer, New York, NY, USA, 2011.

[20] C. Brezinski, “Computation of Padé approximants and continued fractions,” *Journal of Computational and Applied Mathematics*, vol. 2, no. 2, pp. 113–123, 1976.

[21] B. Beckermann, D. Bessis, L. Perotti, and D. Vrinceanu, “Computing high precision matrix Padé approximants,” *Numerical Algorithms*, vol. 61, no. 2, pp. 189–208, 2012.

[22] D. Bessis and L. Perotti, “Universal analytic properties of noise: introducing the $J$-matrix formalism,” *Journal of Physics A*, vol. 42, no. 36, Article ID 365202, 15 pages, 2009.

[23] J. Gilewicz and M. Pindor, “Padé approximants and noise: a case of geometric series,” *Journal of Computational and Applied Mathematics*, vol. 87, no. 2, pp. 199–214, 1997.

[24] J. Gilewicz and M. Pindor, “Padé approximants and noise: rational functions,” *Journal of Computational and Applied Mathematics*, vol. 105, no. 1-2, pp. 285–297, 1999.

[25] J. Gilewicz, “Computation of the $c$-table related to the Padé approximation,” preprint CPT-91/P.2901, 1991.