Journal of Operators, Volume 2015 (2015), Article ID 824549, 8 pages. http://dx.doi.org/10.1155/2015/824549
Research Article

## On Ordinary, Linear q-Difference Equations, with Applications to q-Sato Theory

Thomas Ernst

Department of Mathematics, Uppsala University, P.O. Box 480, 751 06 Uppsala, Sweden

Received 24 September 2014; Revised 23 December 2014; Accepted 3 February 2015

Copyright © 2015 Thomas Ernst. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The purpose of this paper is to develop the theory of ordinary, linear q-difference equations, in particular the homogeneous case; we show that there are many similarities to differential equations. In the second part we study applications to a q-analogue of Sato theory. The q-Schur polynomials act as basis functions, similar to q-Appell polynomials. The Ward q-addition plays a crucial role as the operation for the function argument in the matrix q-exponential and for the q-Schur polynomials.

#### 1. Introduction

We begin this paper with an introduction to q-difference equations. Since there is a well-known parallel approach to this theme, we quote some of the historical facts about it. Then we show an example of solutions to a q-difference equation with constant coefficients; the multiple root case can be solved in a similar way. When a solution to a homogeneous equation of order n is known, the equation can be transformed into another equation of order n−1; this is called reduction of order. The q-analogue of Euler's differential equation is of particular importance in q-calculus because of its operational form. In a previous article [1] we introduced the concept of q-analogues of matrix formulas. In this paper we continue on this theme; the main content of this paper is q-Sato theory, which is only one way to treat the theory of q-deformed solitons. Articles on the q-KdV equation and q-Schur polynomials, for example, [2], have been published previously; in this paper we define a quite different q-Schur polynomial, which is connected to the Ward q-addition.

We now start with the definitions; many of these can be found in the book [3].

Definition 1. Assume that , . The power function is defined by . Let be an arbitrarily small number. We will use the following branch of the logarithm: . This defines a simply connected space in the complex plane.
The variables denote certain parameters. The variables , and will denote natural numbers, except for certain cases where it will be clear from the context that i denotes the imaginary unit.
The q-analogues of a real number and the factorial function are defined by The q-analogues of the derivative and the integral are given by The inverse q-derivative is accordingly defined by Let the Gauss q-binomial coefficient be defined by
If , or and , the q-exponential function is defined by
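The building blocks of Definition 1 can be sketched numerically. The following Python snippet is an illustration assuming the standard q-calculus conventions {n}_q = 1 + q + ... + q^(n−1) and (D_q f)(x) = (f(x) − f(qx))/((1−q)x), as in [3]; the function names are ours:

```python
def q_number(n, q):
    """{n}_q = 1 + q + ... + q^(n-1), the q-analogue of the integer n."""
    return sum(q**k for k in range(n))

def q_factorial(n, q):
    """{n}_q! = {1}_q {2}_q ... {n}_q, the q-analogue of n!."""
    result = 1
    for k in range(1, n + 1):
        result *= q_number(k, q)
    return result

def q_binomial(n, k, q):
    """Gauss q-binomial coefficient {n}_q! / ({k}_q! {n-k}_q!)."""
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

def q_derivative(f, x, q):
    """(D_q f)(x) = (f(x) - f(qx)) / ((1 - q) x), the q-derivative."""
    return (f(x) - f(q * x)) / ((1 - q) * x)
```

As q → 1 these reduce to the ordinary integer, factorial, binomial coefficient, and derivative.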

Definition 2. Let denote the invertible operator defined by

Definition 3. Let and be any elements with commutative multiplication. Then the NWA q-addition is given by
There is a Ward number , where the number of 1's on the RHS is . For instance,
The following theorem, reminiscent of [4, page 258], shows how Ward numbers usually appear in applications.

Theorem 4. Assume that . Then where each partition of is multiplied with its number of permutations.
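Assuming the standard form of the NWA q-addition, which acts on powers by (a ⊕_q b)^n = Σ_k C(n,k)_q a^k b^(n−k), the powers of a Ward number can be computed by iterating this q-binomial convolution. A hedged Python sketch (helper names are ours):

```python
def q_number(n, q):
    """{n}_q = 1 + q + ... + q^(n-1)."""
    return sum(q**k for k in range(n))

def q_factorial(n, q):
    result = 1
    for k in range(1, n + 1):
        result *= q_number(k, q)
    return result

def q_binomial(n, k, q):
    """Gauss q-binomial coefficient."""
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

def nwa_moments(A, B, q):
    """Powers of a (+)_q b from the power sequences A[m] = a^m, B[m] = b^m,
    using (a (+)_q b)^m = sum_k C(m,k)_q A[k] B[m-k]."""
    M = len(A)
    return [sum(q_binomial(m, k, q) * A[k] * B[m - k] for k in range(m + 1))
            for m in range(M)]

def ward_powers(n, M, q):
    """Powers m < M of the Ward number 1 (+)_q 1 (+)_q ... (+)_q 1 (n ones)."""
    ones = [1] * M
    acc = ones
    for _ in range(n - 1):
        acc = nwa_moments(acc, ones, q)
    return acc
```

At q = 1 the Ward number behaves exactly like the ordinary integer n, so `ward_powers(3, 4, 1)` returns the powers 1, 3, 9, 27 of 3.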

A table of some is given in [3, page 109].

Definition 5. The notation denotes a multiple summation with the indices running over all nonnegative integer values.
Given an integer , the formula determines a set .
Then if is the formal power series , its nth NWA-power is given by

Difference equations are mathematical models describing real-life situations in many applied sciences. For an excellent introduction to this subject, see Nørlund 1924 [5].

Theorem 6. Let be a function of the continuous variable . The homogeneous linear q-difference equation of order is of the form or even Instead of studying (15), we can study an equation where the are known functions of .

Proof. Begin with formula (16), and use the formula [3, page 211 (6.101)] This gives The last expression is equivalent to the LHS of (15).

The first steps from (16) to an investigation of the linear q-difference equation (15) were taken in two dissertations, by Smith 1911 [6] and by Nørlund's student Ryde 1921 [7], who generalized the method of Frobenius for solving linear differential equations.

Equation (16) was first studied by Carmichael 1912 [8]. He distinguished between the two cases |q| < 1 and |q| > 1 and, according to Trjitzinsky [9], treated one of these cases satisfactorily.

In 1915 Mason [10] proved two theorems about q-difference equations with entire function coefficients. He also introduced the notion of the characteristic equation for a q-difference equation.

Equation (16) has also been studied by Adams [11], who generalized the results of Carmichael and Mason. He assumed the coefficient functions to be analytic or to have poles of finite order at the origin. Adams also studied partial q-difference equations.

In 1933 Trjitzinsky [9] solved an inhomogeneous first-order linear q-difference equation and studied the solutions of linear q-difference equations.

There is an alternative approach, called time scales; this is just another dialect of q-calculus, with completely different and more general definitions. This generality leads to many general theorems, but the q-analogues are far from easy to find. For instance, time scales use a different q-Laplace transform than the one the author is going to use later.

#### 2. The Ordinary, Linear Case

Some of the results in this section have previously appeared in a preprint by Bangerezako [12]. We refer to him in each case, together with the page number.

A q-difference equation of order , containing powers of the operator (2), is said to be linear if it is linear in the dependent variable and its q-differences. The most general linear nonhomogeneous q-difference equation of order is of the form where is a linear sum of q-difference operators.

We assume that, since the equation is of order , its general solution will depend on distinct arbitrary constants, and we proceed to consider the mode of this dependence.

Suppose that two distinct particular solutions of (19) are known, say and . Then that is, Thus if represents the difference between any two solutions of (19), will satisfy the homogeneous equation which contains no term free from or a q-difference operator of . The general solution of (19) will be the sum of two components:
(1) the general solution of the homogeneous equation, involving arbitrary constants and known as the complementary function;
(2) a particular solution, involving no arbitrary constants.

Theorem 7 (see [12, page 38]). The homogeneous linear th-order q-difference equation has the general solution where are solutions of the characteristic equation We have assumed that (25) has no multiple roots.

Proof. Similar to the ordinary case, use the chain rule for .

Example 8. Compare with [12, page 17]. Consider the equation The corresponding differential equation has solutions and , and we find that (26) has the solutions These solutions can be rephrased in the form
It is obvious that we can continue this process to find q-analogues of any homogeneous, linear differential equation with constant coefficients which has an exact solution in terms of sums of exponential functions.
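The role played by exponentials in the classical case is taken over by q-exponentials: assuming the standard definitions E_q(x) = Σ_n x^n/{n}_q! and (D_q f)(x) = (f(x) − f(qx))/((1−q)x), one has D_q E_q(λx) = λ E_q(λx), so each simple root λ of the characteristic equation contributes a solution E_q(λx). A Python sketch checking this eigenvalue property numerically (truncated series; names are ours):

```python
def q_number(n, q):
    return sum(q**k for k in range(n))

def q_exp(x, q, terms=60):
    """Truncated q-exponential E_q(x) = sum_n x^n / {n}_q! (standard form)."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / q_number(n, q)
        total += term
    return total

def q_derivative(f, x, q):
    return (f(x) - f(q * x)) / ((1 - q) * x)

# Eigenvalue property: D_q E_q(lam * x) = lam * E_q(lam * x),
# the q-analogue of (e^(lam x))' = lam e^(lam x).
lam, q, x = 2.0, 0.5, 0.3
lhs = q_derivative(lambda t: q_exp(lam * t, q), x, q)
rhs = lam * q_exp(lam * x, q)
```

With |λx| small the truncation error is negligible, and `lhs` and `rhs` agree to machine precision.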

##### 2.1. The Multiple Root Case

Following [12, page 38], we illustrate the general technique with an example.

Example 9. Try to find a q-difference equation satisfied by the homogeneous solution
The differential equation (with a multiple-root characteristic equation) has solution , and (29) is a q-analogue of this. Let , and let . Consider the space of functions and let denote the invertible operator defined by We find that and .
We try with the equation which indeed solves the problem.

In [12, page 31], for nonhomogeneous q-difference equations, the solution will be , where denotes a particular solution.

Example 10. Find a particular solution to We try with . This gives , . The particular solution is the same as that in the ordinary case.
In general, we can find particular solutions very similar to those in the ordinary case by replacing integers by q-integers and solving the resulting system of equations.
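As a concrete, hypothetical illustration (not taken from the paper) of replacing integers by q-integers, consider the equation D_q² y + y = x². Classically y'' + y = x² has the particular solution y = x² − 2; since D_q² x² = {2}_q, the q-analogue of the particular solution is y = x² − {2}_q. A numerical check in Python:

```python
def q_number(n, q):
    return sum(q**k for k in range(n))

def q_derivative(f, x, q):
    return (f(x) - f(q * x)) / ((1 - q) * x)

# Hypothetical example equation: D_q^2 y + y = x^2.
# The classical particular solution x^2 - 2 becomes x^2 - {2}_q.
q = 0.5
y = lambda t: t**2 - q_number(2, q)

def residual(x):
    """D_q^2 y + y - x^2; vanishes identically for the ansatz above."""
    Dy = lambda t: q_derivative(y, t, q)
    return q_derivative(Dy, x, q) + y(x) - x**2
```

The residual is zero up to floating-point rounding for every x, confirming the undetermined-coefficients computation.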

##### 2.2. Reduction of Order

When any solution of a homogeneous equation of order is known, the equation can be transformed into another (also linear and reduced) of order . If the known solution is , the transformation is where is a new dependent variable. For simplicity, consider an equation of the third order (the general proof is similar) where the are functions of or constants. By substituting and rearranging we have Since is a solution of (36), the first term disappears, leaving a homogeneous linear equation of the second order in .

##### 2.3. A -Analogue of the Euler Equation

The q-difference operator x D_q is a q-analogue of x d/dx. It maps the polynomial x^n to {n}_q x^n; it preserves the degree of a polynomial and is very important in q-calculus.
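A minimal numerical check of this degree-preserving property, assuming the operator has the standard form x·D_q with the usual q-derivative (names are ours):

```python
def q_number(n, q):
    return sum(q**k for k in range(n))

def theta_q(f, x, q):
    """The operator x * D_q, a q-analogue of the Euler operator x d/dx."""
    return x * (f(x) - f(q * x)) / ((1 - q) * x)

# theta_q maps x^n to {n}_q x^n, so polynomial degree is preserved.
q, x = 0.5, 1.3
val = theta_q(lambda t: t**4, x, q)
expected = q_number(4, q) * x**4
```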

This implies that the equation is a q-analogue of the Euler equation. Our investigations show that the regularity theorems of Adams [11], Carmichael [8], and Mason [10] are also valid for the regularity of solutions to the generalized Euler equation (15).

The following two formulas from [3, page 179] are of particular interest in this context: where and are q-Stirling numbers, inverse to each other.

#### 3. First Matrix Calculations

We now come to the main content of this paper, which is a continuation of [1]. We start with a brief recapitulation. The definition of letters in an alphabet and the corresponding linear functional can be found in [1]. In our case, the alphabet is the reals.

Definition 11. Matrix elements will always be denoted . Here denotes the row and denotes the column. The matrix elements range from to . This holds both for real numbers (linear functional) and for the letters in the matrix. Juxtaposition of matrices (as in (53)) will always be interpreted as matrix multiplication. If and are commuting matrices of the same dimension (belonging to the alphabet), one defines as a matrix with matrix elements (i.e., letters) . If and are commuting matrices of the same dimension, one defines as a matrix with matrix elements .

Definition 12. Let be an matrix, , and . Then

##### 3.1. -Sato Theory

In Sato theory, infinite-dimensional matrices and pseudodifferential operators are used to solve differential equations, with applications to soliton theory and the KdV equation. The following polynomial is used in the computations.

Definition 13. Given an integer , the formula determines a set .
Then the elementary Schur polynomial is defined by the following equation:
These polynomials satisfy the equation We now begin with the q-deformations. The following definition is slightly different from [13, page 213], where it was assumed that (formal power series).
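The elementary Schur polynomials of Definition 13 are, classically, the coefficients p_n in exp(Σ_{k≥1} x_k z^k) = Σ_n p_n(x) z^n, and they satisfy ∂p_n/∂x_k = p_{n−k}. A Python sketch (assuming this classical generating function, as in [13]; the helper name is ours) computes p_0, …, p_N by multiplying truncated series in z:

```python
import math

def schur_p(xs, N):
    """Coefficients p_0..p_N of exp(sum_k xs[k-1] * z^k) = sum_n p_n z^n,
    i.e. the classical elementary Schur polynomials evaluated at xs."""
    coeffs = [0.0] * (N + 1)
    coeffs[0] = 1.0
    for k, xk in enumerate(xs, start=1):
        # Multiply the series by exp(xk * z^k) = sum_m xk^m z^(k*m) / m!.
        new = [0.0] * (N + 1)
        for n, c in enumerate(coeffs):
            m = 0
            while n + k * m <= N:
                new[n + k * m] += c * xk**m / math.factorial(m)
                m += 1
        coeffs = new
    return coeffs

p = schur_p([1.0, 2.0, 3.0], 3)
# p_0 = 1, p_1 = x_1, p_2 = x_1^2/2 + x_2, p_3 = x_1^3/6 + x_1 x_2 + x_3
```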

Definition 14 (see [14, page 60]). Define the following pseudo-q-differential operator where is defined by iterating (4).

Theorem 15. The homogeneous, linear q-difference equation has linearly independent solutions , which are all analytic; that is, The constants are uniquely determined by the initial values of the function . The solutions form an -dimensional vector space.

Proof. According to the fundamental theorem of algebra, the corresponding characteristic equation has complex roots. This gives solutions as in (27). When there are multiple roots, we multiply by a suitable polynomial, as in formula (33).

The rank of the Wronskian matrix is and we have The shift operator (not to be confused with the Polya-Vein matrix from [1]) is defined by This implies Introduce the following notation : We will now try to determine from the solutions . By (47)

Theorem 16. A formula for the pseudo-q-differential operator as a quotient of determinants is The entries of the matrices are functions, except for the last column of the numerator, which consists of pseudo-q-differential operators.

Proof. By Cramer’s rule we have By combining (46) and the above two equations we obtain a formula for . An expansion of the numerator of (55) along the last column completes the proof.

#### 4. Time Evolution

We now assume that also depend on an infinite number of time variables . This implies that the solutions of (47), , also depend on : and given by (53) can be written as . We assume that evolves in time as where We find that the q-Schur polynomial is defined by the following equation: where is defined by (44). Or equivalently The first are

Remark 17. These q-Schur polynomials are completely different from those in [2, 15] and have richer q-differential properties, due to the NWA q-addition.

Theorem 18. These polynomials satisfy the equations

Proof. Operate with on (61), and write the right-hand side as a product of q-exponentials. After performing the q-differentiation to the right, multiply both sides by .

We can express by means of the q-Schur polynomials as follows:

We have the following theorem for the entries of .

Theorem 19. Consider This means that the function is the solution of the partial q-difference equation with initial value

Proof. We have . Then The two expressions are equal.

The operators and in (46) now also depend on and By formula (66) we find By applying the operator to (70) and employing (66), we obtain which is a q-difference equation of order with the same linearly independent solutions as (70). The q-difference operators in (72) can be factorized as where is a certain q-difference operator. After applying from the right, we obtain By similar reasoning as in the case , we have where denotes the q-difference part of the operator. This implies that the time evolution of is governed by which we will call the q-Sato equation.

#### 5. Conclusion

We have found a q-analogue of a simplified and more mathematical form of Sato theory. We hope that this paper will find many applications to q-difference equations and in soliton theory. A further paper on q-Laplace transforms is in preparation.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### References

1. T. Ernst, “An umbral approach to find q-analogues of matrix formulas,” Linear Algebra and Its Applications, vol. 439, no. 4, pp. 1167–1182, 2013.
2. L. Haine and P. Iliev, “The bispectral property of a q-deformation of the Schur polynomials and the q-KdV hierarchy,” Journal of Physics A: Mathematical and General, vol. 30, no. 20, pp. 7217–7227, 1997.
3. T. Ernst, A Comprehensive Treatment of q-Calculus, Birkhäuser, 2012.
4. M. Ward, “A calculus of sequences,” American Journal of Mathematics, vol. 58, no. 2, pp. 255–266, 1936.
5. N. E. Nørlund, Vorlesungen über Differenzenrechnung, Springer, Berlin, Germany, 1924.
6. E. R. Smith, Zur Theorie der Heineschen Reihe und ihrer Verallgemeinerung [Dissertation], Universität München, 1911.
7. F. Ryde, A Contribution to the Theory of Linear Homogeneous Geometric Difference Equations (q-Difference Equations) [Dissertation], Lund, 1921.
8. R. D. Carmichael, “The general theory of linear q-difference equations,” American Journal of Mathematics, vol. 34, no. 2, pp. 147–168, 1912.
9. W. J. Trjitzinsky, “Analytic theory of linear q-difference equations,” Acta Mathematica, vol. 61, no. 1, pp. 1–38, 1933.
10. T. E. Mason, “On properties of the solutions of linear q-difference equations with entire function coefficients,” American Journal of Mathematics, vol. 37, no. 4, pp. 439–444, 1915.
11. C. R. Adams, “On the linear ordinary q-difference equation,” Annals of Mathematics, Second Series, vol. 30, no. 1–4, pp. 195–205, 1928.
12. G. Bangerezako, q-Difference Equations, preprint.
13. Y. Ohta, J. Satsuma, D. Takahashi, and T. Tokihiro, “An elementary introduction to Sato theory,” Progress of Theoretical Physics Supplement, no. 94, pp. 210–241, 1988.
14. F. Druitt, Hirota's Direct Method and Sato's Formalism in Soliton Theory [Honours thesis], The University of Melbourne, Melbourne, Australia, 2005.
15. R. Carroll, “Hirota formulas and q-hierarchies,” Applicable Analysis, vol. 82, no. 8, pp. 759–786, 2003.