#### Abstract

We give an example of an infinite-order rational transformation that leaves a linear differential equation covariant. This example can be seen as a nontrivial, but still simple, illustration of an exact representation of the renormalization group.

#### 1. Introduction

There is no need to underline the success of the renormalization group revisited by Wilson [1, 2], which is nowadays seen as a fundamental symmetry in lattice statistical mechanics or field theory. It contributed to promoting 2d conformal field theories and/or scaling limits of second-order phase transitions in lattice statistical mechanics.^{1} If one does not take into account most of the subtleties of the renormalization group, its simplest sketch corresponds to Migdal-Kadanoff decimation calculations, where the new coupling constants created at each step of the (real-space) decimation are forced^{2} to stay in some (slightly arbitrary) finite-dimensional parameter space. This drastic projection may be justified by the hope that the basin of attraction of the fixed points of the corresponding (renormalization) transformation in the parameter space is “large enough.”

One heuristic example is always given because it is one of the very few examples of *exact* renormalization: the renormalization of the one-dimensional Ising model without a magnetic field. It is a straightforward undergraduate exercise to show that, performing various decimations summing over every two, three, or, more generally, k spins, one gets *exact generators of the renormalization group* reading T_k : t → t^k, where t is (with standard notations) the high-temperature variable t = tanh(K). It is easy to see that these transformations T_k, depending on the integer k, commute together. Such an *exact symmetry* is associated with a covariance of the partition function per site. In this particular case one recovers the (very simple) expression of the partition function per site, z(t) = 2·(1 − t²)^{−1/2}, as an infinite product of the action of (for instance) T₂ on the cofactor 1 + t. In this very simple case, this corresponds to using the identity (valid for |t| < 1): 1/(1 − t) = ∏_{n=0}^{∞} (1 + t^{2^n}).
For T₃ one must use the identity 1/(1 − t) = ∏_{n=0}^{∞} (1 + t^{3^n} + t^{2·3^n}),
and for T_k a similar identity where the 3 in the exponents is changed into k (each factor then containing the k terms 1, t^{k^n}, …, t^{(k−1)·k^n}).
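These commutation and product identities can be checked in a few lines (a minimal sketch with the standard notations t = tanh K and decimation generators T_k : t → t^k):

```python
# Decimation generators of the 1d Ising renormalization group (without field),
# acting on the high-temperature variable t = tanh(K):  T_k : t -> t^k.

def T(k, t):
    """Decimation summing over blocks of k spins."""
    return t ** k

t = 0.37

# The generators commute and compose multiplicatively: T_n o T_m = T_{n*m}.
assert abs(T(2, T(3, t)) - T(3, T(2, t))) < 1e-15
assert abs(T(2, T(3, t)) - T(6, t)) < 1e-15

# Identity behind the infinite-product expression of the partition function
# per site (valid for |t| < 1):  1/(1 - t) = prod_{n>=0} (1 + t^(2^n)).
prod2 = 1.0
for n in range(12):
    prod2 *= 1.0 + t ** (2 ** n)
assert abs(prod2 - 1.0 / (1.0 - t)) < 1e-12

# Analogous identity for T_3:  1/(1 - t) = prod_{n>=0} (1 + t^(3^n) + t^(2*3^n)).
prod3 = 1.0
for n in range(8):
    prod3 *= 1.0 + t ** (3 ** n) + t ** (2 * 3 ** n)
assert abs(prod3 - 1.0 / (1.0 - t)) < 1e-12
```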

Another simple heuristic example is the one-dimensional Ising model *with a magnetic field*. Straightforward calculations enable one to obtain an infinite number of exact generators of the corresponding renormalization group, represented as *rational* transformations^{3}
where the first two transformations and read in terms of the two (low-temperature well-suited and fugacity-like) variables and :
One simply verifies that these rational transformations of two (complex) variables commute. This can be checked by formal calculations for any two integers up to some bound, and one can easily verify a fundamental property expected for renormalization group generators, namely T_n · T_m = T_{n·m},
where the “dot” denotes the *composition* of two transformations. The infinite number of these rational transformations of two (complex) variables (1.3) thus forms a *rational representation of the positive integers together with their product*. Such rational transformations can be studied “per se” as discrete dynamical systems, the iteration of any of these various exact generators corresponding to an orbit of the renormalization group.

Of course these two examples of exact representations of the renormalization group are extremely degenerate since they correspond to one-dimensional models.^{4} Migdal-Kadanoff decimations quite systematically yield *rational*^{5} transformations similar to (1.3) in two, or more, variables.^{6} Consequently, they are never (except for “academical” self-similar models) exact representations of the renormalization group. The purpose of this paper is to provide simple (but *nontrivial*) examples of *exact* renormalization transformations that are not degenerate like the previous transformations on one-dimensional models.^{7} In several papers [3, 4] on Yang-Baxter integrable models with a canonical genus-one parametrization [5, 6] (elliptic functions of modulus k), we underlined that the *exact* generators of the renormalization group must necessarily identify with the various isogenies which amount to multiplying or dividing τ, the ratio of the two periods of the elliptic curves, by an integer. The simplest example is the *Landen transformation* [4], k → 2·√k/(1 + k), which corresponds to multiplying (*or dividing*, because of the modular group symmetry) the ratio τ of the two periods by 2.
The other transformations^{8} correspond to multiplying this ratio as τ → N·τ, for various integers N. In the (transcendental) variable τ, it is clear that they satisfy relations like (1.5). However, in the natural variables of the model (algebraic variables like k, not transcendental variables like τ), these transformations are *algebraic* transformations corresponding in fact to the *fundamental modular curves*. For instance, (1.6) corresponds to the *genus zero fundamental modular curve*
or
which relates the two Hauptmoduls:
One easily verifies that (1.7) is indeed satisfied by these two Hauptmoduls.
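A minimal numerical sketch of the Landen covariance and of the induced flow, using the ascending Landen transformation k → 2·√k/(1 + k) in its standard normalization (which may differ from the normalization of (1.6)) and the arithmetic-geometric mean evaluation of the complete elliptic integral K(k):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean (quadratically convergent)."""
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def K(k):
    """Complete elliptic integral of the first kind, K(k) = pi/(2*AGM(1, k'))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

def landen(k):
    """Ascending Landen transformation of the modulus: k -> 2*sqrt(k)/(1+k)."""
    return 2.0 * math.sqrt(k) / (1.0 + k)

# Covariance of the period under the transformation: K(landen(k)) = (1+k)*K(k).
k = 0.3
assert abs(K(landen(k)) - (1.0 + k) * K(k)) < 1e-10

# Iterating the transformation maps a generic 0 < k < 1 onto the fixed point
# k = 1 -- the behaviour expected of a renormalization-group flow toward
# criticality (for the 2D Ising model, k = 1 corresponds to T = Tc).
for _ in range(8):
    k = landen(k)
assert abs(k - 1.0) < 1e-12
```

The quadratic convergence of the iterates toward k = 1 mirrors the quadratic convergence of the AGM itself.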

The selected values of the modulus k of the elliptic functions, namely k = 0, 1, ∞, are actually *fixed points of the Landen transformations*. The Kramers-Wannier duality maps k onto 1/k. For the Ising (resp. Baxter) model these selected values of k correspond to the three selected subcases of the model (T = 0, T = ∞, and the critical temperature T = T_c), for which the elliptic parametrization of the model degenerates into a rational parametrization [4]. We have the same property for all the other algebraic modular curves corresponding to τ → N·τ. This is certainly the main property most physicists expect from an exact representation of a *generator of the renormalization group*, namely, that it maps a generic point of the parameter space onto the critical manifold (fixed points). Modular transformations are, in fact, the only transformations compatible with all the other symmetries of the Ising (resp. Baxter) model like, for instance, the gauge transformations, some extended symmetry [7], and so forth. It has also been underlined in [3, 4] that seeing (1.6) as a transformation on *complex variables* (instead of real variables) provides two other complex fixed points, which actually correspond to *complex multiplication* for the elliptic curve and are, actually, fundamental new singularities^{9} discovered on the linear ODEs [8–10]. In general, this underlines the deep relation between the renormalization group and the theory of elliptic curves, namely *isogenies of elliptic curves, Hauptmoduls,*^{10} *modular curves and modular forms*.

Note that an algebraic transformation like (1.6) or (1.8) cannot be obtained from any *local* Migdal-Kadanoff transformation, which naturally yields *rational* transformations: an exact renormalization group transformation like (1.6) can only be deduced from *nonlocal* decimations. The emergence of modular transformations as representations of exact generators of the renormalization group explains, in a quite subtle way, the difficult problem of how renormalization group transformations can be compatible with *reversibility*^{11} (iteration forwards and backwards). An algebraic modular transformation like (1.8) corresponds to τ → 2τ *and* τ → τ/2 *at the same time, as a consequence of the modular group symmetry* τ ↔ 1/τ.

A simple rational parametrization^{12} of the genus zero modular curve (1.8) reads:
Note that the previously mentioned reversibility is also associated with the fact that the modular curve (1.8) is invariant under the permutation of its two Hauptmoduls, and, within the previous rational parametrization (1.10), with the fact that permuting the two arguments corresponds^{13} to an Atkin-Lehner involution.

For many Yang-Baxter integrable models of lattice statistical mechanics, the physical quantities (partition function per site, correlation functions, etc.) are solutions of selected^{14} linear differential equations. For instance, the partition function per site of the square (resp. triangular, etc.) Ising model is an integral of an elliptic integral of the third kind. It would be too complicated to show the precise covariance of these physical quantities with respect to (algebraic) modular transformations like (1.8). Instead, let us give, here, an illustration of the nontrivial action of the renormalization group on some elliptic function that actually occurs in the 2D Ising model: a weight-one modular form. This modular form actually, and remarkably, emerged [11] in a second-order linear differential operator factor occurring in [8], and the reader can think of it as a physical quantity, solution of a particular linear ODE, replacing the too complicated integral of an elliptic integral of the third kind. Let us consider the second-order linear differential operator:
which has the (modular form) solution
Do note that the two pull-backs in the arguments of the *same* hypergeometric function are *actually related by the modular curve relation* (1.8) (see (1.10)). The covariance (1.12) is thus the very expression of a modular form property with respect to the modular transformation τ → 2τ corresponding to the algebraic modular curve (1.8).

The hypergeometric function on the rhs of (1.12) is a solution of the second-order linear differential operator which is the transform of the previous operator under the Atkin-Lehner duality and is, also, a conjugate of it:

Along this line we can also recall that the (modular form) function^{15}
verifies:

A relation like (1.12) is a straight generalization of the covariance we had in the one-dimensional model, which basically amounts to seeing the partition function per site as some “automorphic function” with respect to the renormalization group, with the simple renormalization group transformation t → t² being replaced by the algebraic modular transformation (1.8) corresponding to τ → 2τ (i.e., the Landen transformation (1.6)).

We have here all the ingredients for the identification of exact algebraic representations of the renormalization group with the modular curve structures that we tried so many times to promote (preaching in the desert) in various papers [3, 4]. However, even if there are no difficulties, just subtleties, these Ising-Baxter examples of exact algebraic representations of the renormalization group already require some serious knowledge of modular curves, modular forms, and Hauptmoduls in the theory of elliptic curves, mixed with the subtleties naturally associated with the various branches of such algebraic (multivalued) transformations.

The purpose of this paper is to present an elliptic hypergeometric function, together with other, much simpler (Gauss hypergeometric) second-order linear differential operators, covariant under infinite-order rational transformations.

The replacement of *algebraic (modular) transformations* by simple *rational* transformations will enable us to display a complete *explicit description of an exact representation of the renormalization group* that any graduate student can fully master.

#### 2. Infinite Number of Rational Symmetries on a Gauss Hypergeometric ODE

Keeping in mind modular form expressions like (1.12), let us recall a particular Gauss hypergeometric function introduced by Vidunas in [12]: ₂F₁([1/4, 1/2], [5/4]; x).
This hypergeometric function corresponds to the integral of a holomorphic form on a *genus-one* curve: one has x^{1/4} · ₂F₁([1/4, 1/2], [5/4]; x) = (1/4) · ∫₀^x t^{−3/4} · (1 − t)^{−1/2} dt, and the change of variable t = s⁴ reduces the integrand to ds/(1 − s⁴)^{1/2}, a holomorphic differential on the genus-one curve y² = 1 − s⁴:
Note that the function x^{1/4} · ₂F₁([1/4, 1/2], [5/4]; x), which is exactly an integral of an algebraic function, has an extremely simple covariance property with respect to the *infinite-order rational* transformation R(x) = −4·x/(1 − x)²:
The occurrence of this specific infinite-order transformation is reminiscent of Kummer's quadratic relation ₂F₁(a, b; 1 + a − b; x) = (1 − x)^{−a} · ₂F₁(a/2, (1 + a)/2 − b; 1 + a − b; −4·x/(1 − x)²), but it is crucial to note that relation (2.4) does not relate two different functions: it is an “automorphy” relation on the *same function*, since the parameter set (1/4, 1/2; 5/4) reproduces itself under the quadratic transformation.
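This automorphy is easy to check numerically. The sketch below takes the parameter set (1/4, 1/2; 5/4) and the degree-two rational map R(x) = −4x/(1 − x)² as the assumed explicit forms of this illustration, and verifies that Kummer's relation closes on the same hypergeometric function:

```python
def hyp2f1(a, b, c, x, nterms=500):
    """Truncated Gauss series for 2F1(a, b; c; x), valid for |x| < 1."""
    s, term = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (1.0 + n)) * x
        s += term
    return s

def F(x):
    # assumed parameter set of the covariant hypergeometric function
    return hyp2f1(0.25, 0.5, 1.25, x)

def R(x):
    # assumed infinite-order (non-periodic), degree-two rational transformation
    return -4.0 * x / (1.0 - x) ** 2

# Kummer's quadratic relation maps the parameter set (1/4, 1/2; 5/4) onto
# itself, hence an "automorphy" on the same function:
#     F(R(x)) = (1 - x)**(1/2) * F(x)
for x in (0.05, 0.10, 0.15):          # keep |R(x)| < 1 so the series converges
    assert abs(F(R(x)) - (1.0 - x) ** 0.5 * F(x)) < 1e-10
```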

It is clear from the previous paragraph that we want to see such functions as “ideal” examples of physical functions covariant by an exact (here, rational) generator of the renormalization group. The function (2.3) is actually a solution of the second-order linear differential operator (D denotes d/dx): D² + A(x)·D, with A(x) = (1/4) · d ln(x³·(1 − x)²)/dx = 3/(4x) − 1/(2(1 − x)).
From the previous expression of A(x), involving a log-derivative of a rational function, it is obvious that this second-order linear differential operator has two solutions: the constant function and an integral of an algebraic function. Since these two solutions behave very simply under the infinite-order rational transformation, it is natural to see how the linear differential operator transforms under the rational change of variable x → R(x) (which amounts to seeing how its two order-one factors transform). It is a straightforward calculation to see that, introducing the cofactor 1/R'(x), the inverse of the derivative of R(x), these two order-one operators transform under the rational change of variable as
Since the rational transformation is of infinite order, the second-order linear differential operator (2.6) has *an infinite number of rational symmetries* (isogenies):

Once we have found a second-order linear differential operator (written in a unitary or monic form) covariant by the infinite-order rational transformation, it is natural to seek higher-order linear differential operators also covariant by it. One easily verifies that the successive symmetric powers are (of course) also covariant. The symmetric square,
factorizes into simple order-one operators
and, more generally, the symmetric Nth power^{16} reads
The covariance of such expressions is a straight consequence of the fact that the order-one factors
transform very simply under :
More generally, let us consider a rational transformation , the corresponding cofactor , and the order-one operator . We have the identity
The change of variable on reads
We want to impose that this rhs expression can be written (see (2.8)) as
which, because of (2.15), occurs if
yielding a “Rota-Baxter-like” [13, 14] functional equation relating A(x) and R(x): A(R(x)) · R'(x)² = A(x) · R'(x) + R''(x).
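For a monic operator D² + A(x)·D, requiring that f(R(x)) solve the same equation as f(x) yields, by the chain rule, the condition A(R(x))·R'(x)² = A(x)·R'(x) + R''(x). The sketch below checks this condition numerically; the specific A(x) = 3/(4x) − 1/(2(1 − x)) (the log-derivative of the fourth root of x³(1 − x)²) and R(x) = −4x/(1 − x)² are assumptions of this illustration. It also checks stability under self-composition and the involution x → 1/x:

```python
def A(x):
    # assumed ODE coefficient: A(x) = (1/4) * d/dx log(x**3 * (1 - x)**2)
    return 3.0 / (4.0 * x) - 1.0 / (2.0 * (1.0 - x))

def R(x):   return -4.0 * x / (1.0 - x) ** 2        # assumed transformation
def dR(x):  return -4.0 * (1.0 + x) / (1.0 - x) ** 3
def d2R(x): return -8.0 * (x + 2.0) / (1.0 - x) ** 4

def defect(r, dr, d2r, x):
    """lhs - rhs of the covariance condition A(r(x))*r'(x)**2 = A(x)*r'(x) + r''(x)."""
    return A(r(x)) * dr(x) ** 2 - A(x) * dr(x) - d2r(x)

for x in (0.07, 0.2, 0.31):
    assert abs(defect(R, dR, d2R, x)) < 1e-9

# The condition is stable under self-composition: R2 = R o R also satisfies it.
def R2(x):   return R(R(x))
def dR2(x):  return dR(R(x)) * dR(x)                          # chain rule
def d2R2(x): return d2R(R(x)) * dR(x) ** 2 + dR(R(x)) * d2R(x)

for x in (0.07, 0.13):
    assert abs(defect(R2, dR2, d2R2, x)) < 1e-7

# The involution S(x) = 1/x solves the same functional equation.
def S(x):   return 1.0 / x
def dS(x):  return -1.0 / x ** 2
def d2S(x): return 2.0 / x ** 3

for x in (0.3, 1.7):
    assert abs(defect(S, dS, d2S, x)) < 1e-9
```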

*Remark 2.1.* Coming back to the initial Gauss hypergeometric differential operator, the covariance becomes a conjugation. Let us start with the Gauss hypergeometric differential operator for (2.1):
It is transformed by into
then by into
and more generally for

##### 2.1. A Few Remarks on the “Rota-Baxter-Like” Functional Equation

The functional equation^{17} (2.19) is the (necessary and sufficient) condition for the second-order operator to be covariant by the transformation.

Using the chain rule for derivatives of composed functions, one can show that, for A(x) fixed, the “Rota-Baxter-like” functional equation (2.19) is invariant under the composition of the transformation with itself. This result can be generalized to any composition of various transformations satisfying (2.19). This is in agreement with the fact that, since (2.19) is the condition for the operator to be covariant, it must be invariant under composition of the transformations (for A(x) fixed).

Note that we have not used here the fact that, for globally nilpotent [11] operators, coefficients such as A(x) are necessarily log-derivatives of Nth roots of rational functions; here N = 4. The existence of the underlying fourth root in (2.25), a consequence of the global nilpotence of the order-one differential operator, can however be seen in the following remark on the zeros of the lhs and rhs terms of the functional equation (2.19). When the transformation is a rational function (e.g., our transformation or any of its iterates), the lhs and rhs of (2.19) are rational expressions, and their zeros are roots of the numerators of these rational expressions. Because of (2.25), the functional equation (2.19) can be rewritten (after a division) as (2.26). One easily verifies, in our example, that the zeros of the rhs of (2.26) come from the zeros of the derivative of the transformation (and not from the zeros of the other term in the lhs of (2.26)). The zeros of the log-derivative rhs of (2.26) correspond to the points where the underlying rational function takes a constant value to be found. Let us consider the nth iterates of the transformation. A straightforward calculation shows that the zeros associated with these iterates (or with their derivatives) actually correspond to the general closed formula (2.27). More precisely, these zeros verify (2.27); in other words, the corresponding numerator divides the numerator of the lhs of (2.27).

In another case, for the transformation given by (2.45), which also verifies (2.19) (see below), the relation (2.27) is replaced by a similar closed formula. More generally, for a rational function obtained by an arbitrary composition of the two transformations, we would have an analogous relation with a correspondingly modified constant.

##### 2.2. Symmetries of , Solutions to the “Rota-Baxter-Like” Functional Equation

Let us now analyze all the symmetries of the linear differential operator by analyzing all the solutions of (2.19) for a given A(x). For simplicity we will restrict ourselves to the case corresponding to our infinite-order transformation and all its iterates (2.9). Let us first seek other (more general) solutions that are *analytic at x = 0*:
It is a straightforward calculation to get, order by order from (2.19), the successive coefficients in (2.31) as polynomial expressions (with rational coefficients) in the first coefficient, which we denote λ, with
where the nth coefficient is a polynomial with integer coefficients of degree n in λ. Since we have here a series depending on one parameter λ, we will denote it R_λ(x); it is a quite remarkable series.^{18} One can easily verify that this series actually reduces (as it should!) to the successive iterates (2.9) of the rational transformation for the corresponding integer values of λ. In other words, this one-parameter family of “functions” actually reduces to rational functions for an infinite number of selected values of λ.

Furthermore, one can also verify a quite essential property that we expect for a representation of the renormalization group, namely, that two R_λ's for different values of λ commute, their composition corresponding to the product of these two λ's: R_λ(R_μ(x)) = R_μ(R_λ(x)) = R_{λ·μ}(x). The neutral element necessarily corresponds to λ = 1, which is actually the identity transformation x. We have an “absorbing” element corresponding to λ = 0. Performing the inverse of R_λ (with respect to the composition of functions) amounts to changing λ into its inverse 1/λ. Let us explore some “reversibility” property of our exact representation of a renormalization group with the inverse of the rational transformations (2.9). However, a straight calculation of the inverse of the first rational transformation gives a multivalued function or, if one prefers, two functions, which are the two roots of a simple quadratic relation, and it is clear that the product of these two functions is equal to 1. The radius of convergence of the root analytic at the origin is 1.
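A minimal sketch of these two inverse branches, assuming the degree-two map R(x) = −4x/(1 − x)² as the explicit form of the transformation (so that the inverse y solves the quadratic x·y² + (4 − 2x)·y + x = 0):

```python
import math

def R(x):
    return -4.0 * x / (1.0 - x) ** 2    # assumed transformation

def inverse_branches(x):
    """The two branches of R^(-1): roots of x*y**2 + (4 - 2*x)*y + x = 0."""
    disc = (4.0 - 2.0 * x) ** 2 - 4.0 * x * x   # = 16*(1 - x): branch point at x = 1
    yp = ((2.0 * x - 4.0) + math.sqrt(disc)) / (2.0 * x)   # analytic at the origin
    ym = ((2.0 * x - 4.0) - math.sqrt(disc)) / (2.0 * x)
    return yp, ym

x = -0.12
yp, ym = inverse_branches(x)
assert abs(R(yp) - x) < 1e-9 and abs(R(ym) - x) < 1e-9   # both are preimages of x
assert abs(yp * ym - 1.0) < 1e-9                         # product of the two roots is 1
assert abs(yp - (-x / 4.0)) < 1e-2                       # analytic branch: y ~ -x/4 + ...
```

The discriminant 16(1 − x) vanishes at x = 1, which is why the analytic branch has radius of convergence 1.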

Because of our choice to seek functions analytic at x = 0, our renormalization group representation “chooses” the unique root that is analytic at the origin. For the next iterate in (2.9) the inverse transformation corresponds to the roots of a polynomial equation of degree four:
which yields four roots, one of which is analytic at x = 0 and corresponds to the expected value of λ in our one-parameter family of (renormalization) transformations:
its (multiplicative) inverse:
and two (formal) Puiseux series:
Many of these results are better understood when one keeps in mind that there is a special transformation, the involution S(x) = 1/x, which is *also a solution of* (2.19) and verifies many compatibility relations with these transformations (I(x) denotes the identity transformation x):
where the dot corresponds, here, to the composition of functions. These symmetries of the linear differential operator correspond to isogenies of the elliptic curve (2.2).

It is clear that we have another one-parameter family, corresponding to the (multiplicative) inverses 1/R_λ(x), with an expansion of the same type. For the selected values of λ, this family reduces to the (multiplicative) inverses of the successive rational functions displayed in (2.9), which can also be written in a form where we discover some “additive structure” of these successive rational functions.

In fact, due to the specificity of this elliptic curve (the occurrence of *complex multiplication*), we have another remarkable rational transformation, solution of (2.19), preserving the operator covariantly. Let us introduce this rational transformation: we also have the remarkable covariance [12], which can be rewritten in a simpler way on (2.3) (see (2.4)).

It is a straightforward matter to see that this transformation actually belongs to the one-parameter family:

As far as the reduction of (2.32) to a rational function is concerned, it is straightforward to see that the resulting rational functions can be written as ratios of polynomials with integer coefficients, whose degrees, as well as the accompanying integer prefactors, grow with the order of iteration.

Similar calculations can be performed for the transformation defined by the following formula, for which we also have the covariance

It is a simple calculation to check that any iterate of these transformations is actually a solution of (2.19) and corresponds to R_λ for an infinite number of values of λ. Furthermore, one verifies, as it should (see (2.33)), that the three rational transformations commute. It is also a straightforward calculation to see that the rational function built from any composition of the three is actually a solution of (2.19). We thus have a *triple infinity* of values of λ for which R_λ reduces to rational functions. We are in fact describing (some subset of) the isogenies of the elliptic curve (2.2), and identifying these isogenies with a discrete subset of the renormalization group. Conversely, a functional equation like (2.19) can be seen as a way to extend the n-fold composition of a rational function to *any complex number* n.

##### 2.3. Revisiting the One-Parameter Family of Solutions of the “Rota-Baxter-Like” Functional Equation

This extension can be revisited as follows. Keeping in mind the well-known example of the parametrization of the standard map, which conjugates it to a simple scaling, let us seek a (transcendental) parametrization P(u) such that
where the scaling transformation is u → λ·u, and P^{−1}(x) denotes the inverse of P(u) (for the composition). One can easily find such a (transcendental) parametrization order by order
and similarly for its inverse (for the composition) transformation
This approach is reminiscent of the conjugation introduced in Siegel's theorem [15–17]. It is a straightforward matter to see (order by order) that one actually has
The structure of the (one-parameter) renormalization group and the extension of the n-fold composition of a rational function to *any complex number* n become a straight consequence of this relation. Along this line one can define some “infinitesimal composition” (λ close to 1):
where one can find, order by order, the “infinitesimal composition” function F(x):
It is straightforward to see, from (2.33), that the function F(x) satisfies functional equations involving the rational functions of the one-parameter family. *F(x)* cannot be a rational or algebraic function: let us consider the fixed points of one of these rational transformations. Generically, the derivative of the transformation is not equal to 0 or 1 at any of these fixed points; therefore one must have F = 0 or F = ∞ on the infinite set of these fixed points, so F(x) cannot be a rational or algebraic function: it is a transcendental function, and similarly for the parametrization function P(u). In fact, let us introduce the function
One actually finds that this function satisfies the very simple (hypergeometric function) relation:
The function is actually the hypergeometric function solution of the homogeneous operator
or of the inhomogeneous ODE
One deduces its expression as a hypergeometric function:
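The conjugation mechanism above can be made completely explicit on the classical trigonometric analogue (a hypothetical stand-in, not this section's actual parametrization): with P(u) = sin²(u), the scaling u → λ·u generates a commuting one-parameter family whose λ = 2 member is the rational map x → 4x(1 − x), and whose infinitesimal generator is transcendental:

```python
import math

def P(u):                      # hypothetical conjugating parametrization
    return math.sin(u) ** 2

def Pinv(x):                   # its inverse for composition (0 <= x <= 1)
    return math.asin(math.sqrt(x))

def R_lam(lam, x):
    """One-parameter family R_lambda = P( lambda * P^(-1)(x) )."""
    return P(lam * Pinv(x))

x = 0.05

# lambda = 2 reproduces a rational (here, polynomial) map: x -> 4x(1-x).
assert abs(R_lam(2.0, x) - 4.0 * x * (1.0 - x)) < 1e-12

# The family commutes and composes multiplicatively: R_lam o R_mu = R_{lam*mu}.
lam, mu = 1.3, 1.7
assert abs(R_lam(lam, R_lam(mu, x)) - R_lam(lam * mu, x)) < 1e-12
assert abs(R_lam(lam, R_lam(mu, x)) - R_lam(mu, R_lam(lam, x))) < 1e-12

# "Infinitesimal composition": F(x) = d R_lambda / d lambda at lambda = 1 is
# the transcendental function 2*sqrt(x*(1-x))*arcsin(sqrt(x)).
h = 1e-6
F_num = (R_lam(1.0 + h, x) - R_lam(1.0 - h, x)) / (2.0 * h)
F_exact = 2.0 * math.sqrt(x * (1.0 - x)) * math.asin(math.sqrt(x))
assert abs(F_num - F_exact) < 1e-8
```

The transcendence of the generator (here through arcsin) is exactly the phenomenon argued for F(x) above.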

Finally we get the linear differential operator annihilating
which is, in fact, nothing but the *adjoint* of the linear differential operator (see (2.6)). One easily checks^{19} that this second-order differential equation transforms, under the change of variable x → R(x), into a second-order differential equation of the same form, where the new unitary (monic) operator is a conjugate of the previous one:
with the “dot” denoting the composition of operators. Actually, the factors of the adjoint transform under the change of variable as follows^{20}:
which is precisely the transformation we need to match with (2.58) and to see that the ODE is compatible with the change of variable:
This is, in fact, a quite general result that will be seen to be valid in a more general (higher genus) framework (see (2.148), (2.150) in what follows).

Not surprisingly, one can deduce from (2.33) and the previous results, in particular (2.63), the following results for the partial derivative of R_λ(x) with respect to λ. Of course, we have a similar relation for the other transformation. Therefore this partial derivative can be expressed in terms of hypergeometric functions for a *double infinity* of values of λ.

One can, of course, check, order by order, that (2.58) is actually verified for any function in the one-parameter family, which corresponds to an infinitesimal version of (2.33).

From (2.56) one simply deduces
that we can check, order by order from (2.53), the series expansion of , and from (2.57) the series expansion of , but also
that we can check, order by order, from (2.54) and from (2.57). We now deduce that the log-derivative of the “well-suited change of variable” is nothing but the (multiplicative) inverse of a hypergeometric function:
The function is a solution of the *nonlinear* differential equation
where the subscripts denote successive derivatives. At first sight one would expect a *nonholonomic* function; however, remarkably, it is a *holonomic* function, solution of an order-five operator which factorizes as follows:
yielding the exact expression of in terms of hypergeometric functions:
that is, the fourth power of (2.3), with the differential operator (2.74) being the symmetric fourth power of the second-order operator (2.6). From (2.3) we immediately get the covariance:
and, more generally, for the whole one-parameter family. Since both quantities are expressed in terms of the same hypergeometric function, the relation (2.71) must be an identity on that hypergeometric function. This is actually the case: this hypergeometric function verifies the inhomogeneous equation
where

Recalling the previous expressions, one has the following functional relation:

Noting that (2.3) can be expressed in terms of an incomplete elliptic integral of the first kind,
one finds that (2.79) can be rewritten as
from which we deduce that the function is nothing but a *Jacobi elliptic function*:^{21}
In Appendix B we display a set of “Painlevé-like” ODEs^{22} verified by this function. From the simple nonlinear ODE satisfied by the Jacobi elliptic sine, namely (sn')² = (1 − sn²)·(1 − k²·sn²), and the exact expression in terms of the Jacobi elliptic sine, one can deduce other *nonlinear* ODEs verified by the *nonholonomic* function:
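As a numerical sketch of the elliptic-function identification, assuming the lemniscatic normalization (s')² = 1 − s⁴ (a Jacobi elliptic sine with k² = −1; the exact normalization of (2.82) is an assumption of this illustration), one can integrate the equivalent second-order nonlinear ODE s'' = −2s³ and check that the resulting function inverts the integral of the holomorphic differential dt/√(1 − t⁴):

```python
# Lemniscatic sketch: integrate s'' = -2*s**3 (equivalent to (s')**2 = 1 - s**4
# with s(0) = 0, s'(0) = 1) and check that s(u) inverts u = int_0^s dt/sqrt(1-t^4).

def field(y):                        # y = [s, s']
    return [y[1], -2.0 * y[0] ** 3]

def rk4(y, h, nsteps):
    """Classical fourth-order Runge-Kutta integration of the vector field."""
    for _ in range(nsteps):
        k1 = field(y)
        k2 = field([y[i] + 0.5 * h * k1[i] for i in (0, 1)])
        k3 = field([y[i] + 0.5 * h * k2[i] for i in (0, 1)])
        k4 = field([y[i] + h * k3[i] for i in (0, 1)])
        y = [y[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
             for i in (0, 1)]
    return y

u = 0.8
s, sp = rk4([0.0, 1.0], u / 4000, 4000)

# first integral of the motion: (s')**2 + s**4 = 1
assert abs(sp ** 2 + s ** 4 - 1.0) < 1e-10

# s(u) inverts the integral of the holomorphic differential dt/sqrt(1 - t^4)
m = 100000
h2 = s / m
integral = sum(h2 / (1.0 - ((i + 0.5) * h2) ** 4) ** 0.5 for i in range(m))
assert abs(integral - u) < 1e-6
```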

##### 2.4. Singularities of the Jacobi Elliptic Function

Most of the results of this section and, to some extent, of the next one are straight consequences of the exact closed expression of the function in terms of an elliptic function. In keeping with the pedagogical approach of this paper, we will rather proceed heuristically, without taking the exact result (2.82) into account, in order to display simple methods and ideas that can be used beyond exact results on a specific example.

From a diff-Padé analysis of the series expansion, we obtained the sixty singularities closest to the origin. In particular we found that