#### Abstract

A new and important property of a wide class of PDEs was discovered by K. A. Volosov. We make an arbitrary replacement of variables. In the case of two independent variables, this always makes it possible to express any PDE of second or higher order as a system of linear algebraic equations in the derivatives of the old variables with respect to the new ones. This system has a unique solution. In the case of three or more independent variables, the same is possible for second-order PDEs under certain supplementary assumptions. In the present paper, we propose a new method for constructing closed formulas for exact solutions of PDEs that rests on this important new property.

#### 1. Introduction

A new method for constructing exact solutions of partial differential equations (PDEs) is proposed in this article. The classical authors in mathematics used changes of variables to classify linear PDEs. However, they did not notice an important property of a broad class of PDEs, which was discovered in Volosov's articles. (The cited literature is available for review at http://eqworld.ipmnet.ru/.) This property makes it possible to express a PDE as a system of linear algebraic equations in the derivatives of the old variables with respect to the new ones. This system has a unique solution. A new identity was obtained, which follows from the solvability conditions of the system. The method of the above calculations and its consequences are described in this article. Let us consider the following equation:

Let us describe the proposed method based on (1.1). The proposed algorithm works provided that all functions involved are continuously differentiable.

Let's make an arbitrary replacement of variables:

The inverse replacement of variables defines the function in (1.1) in terms of the function :

We note that is nonzero and finite, where

An inverse transformation exists, at least locally:

The derivatives of the old independent variables with respect to the new variables are determined as follows:
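Because the displayed formulas were lost from this copy of the text, the standard two-variable relations can only be sketched symbolically. The names `X`, `T`, `xi`, `tau` below are assumed notation, not the paper's own; the sketch illustrates that the derivatives of one pair of variables with respect to the other are obtained by inverting the 2x2 Jacobian matrix of the replacement.

```python
import sympy as sp

# Hypothetical replacement of variables x = X(xi, tau), t = T(xi, tau);
# the paper's own formulas (1.2)-(1.6) were lost in extraction.
xi, tau = sp.symbols('xi tau')
X = sp.Function('X')(xi, tau)
T = sp.Function('T')(xi, tau)

# Jacobian of the replacement; the method requires it to be nonzero and finite.
J = sp.diff(X, xi) * sp.diff(T, tau) - sp.diff(X, tau) * sp.diff(T, xi)

# The derivatives of the new variables with respect to the old ones follow
# from inverting the Jacobian matrix of the replacement.
Jmat = sp.Matrix([[sp.diff(X, xi), sp.diff(X, tau)],
                  [sp.diff(T, xi), sp.diff(T, tau)]])
Jinv = Jmat.inv()

xi_x, xi_t = Jinv[0, 0], Jinv[0, 1]    # = T_tau/J, -X_tau/J
tau_x, tau_t = Jinv[1, 0], Jinv[1, 1]  # = -T_xi/J, X_xi/J
```

The familiar cofactor formulas (e.g. xi_x = T_tau/J) drop out of the matrix inverse automatically; this is the mechanism behind the relations referred to in the text.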

Let us introduce the following relation:

Using (1.6) and (1.7), we obtain the formulas:

Equation (1.1) takes the form

Let us rewrite (1.7) in the form

Since is a continuously differentiable function, its mixed derivatives in the variables must necessarily coincide.

Taking (1.6) and (1.7) into account, we can write this equality in the form

System (1.8)–(1.11) will be analyzed in two stages. At the first stage, we consider system (1.8)–(1.11) as a nonlinear algebraic system with respect to the derivatives

Theorem 1.1. The implicitly linear algebraic system (1.8)–(1.11), regarded with respect to the derivatives , has a unique solution: where , where , and moreover

Proof. Equation (1.11) is linear. Let us divide the first equation of (1.8) by , divide the second equation by , subtract the second equation from the first, and obtain a linear equation. Analogously, a linear equation arises from the first equation of (1.8) and equation (1.9).
We can express any three of the derivatives in terms of one of them. For example, we can express them in terms of .
We can substitute this into (1.9) and obtain a linear algebraic equation for . This is valid for any PDE of second order. This is the substance of the newly discovered property of PDEs.
Thus we obtain .
The matrix has the form
where and . The vectors have the form , , ; the symbol denotes conjugation. The eigenvalues of the matrix have the form . The author proposes an alternative classification of PDE solutions based on these eigenvalues.
The eigenpairs can be found easily. We will not discuss their interesting properties here.
At the second stage, we consider the new first-order system (1.12) with the functions .
It is well known that the solvability of a system of this type is verified by computing the second mixed derivatives of the functions and with respect to the arguments and :
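This cross-differentiation test can be illustrated on a toy first-order system of the same shape. The right-hand sides `P` and `Q` below are illustrative placeholders, not the functions of system (1.12), whose displays were lost from this copy:

```python
import sympy as sp

# Toy system of the shape x_xi = P, x_tau = Q; P and Q are placeholders,
# not the paper's right-hand sides (those formulas were lost in extraction).
xi, tau = sp.symbols('xi tau')
P = 2*xi*tau   # candidate for x_xi
Q = xi**2      # candidate for x_tau

# Solvability requires equality of the mixed second derivatives:
# d(x_xi)/d(tau) == d(x_tau)/d(xi), i.e. P_tau == Q_xi.
compatibility = sp.simplify(sp.diff(P, tau) - sp.diff(Q, xi))

# Here the condition holds, so x can be recovered by integration:
x = sp.integrate(P, xi)  # = xi**2 * tau, and indeed x_tau == Q
```

When the compatibility expression is not identically zero, the toy system has no solution; this is exactly the role that condition (1.23) plays below.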

Theorem 1.2. (1) One has the new identity , where the functions have the form (1.13)–(1.17).
(2) The two solvability conditions (1.21) of system (1.12) share a common multiplier (a monomial factor) involving the arbitrary functions , where are the right-hand sides of (1.15), (1.16).

Corollary 1.3. If the free functions satisfy condition (1.23), then systems (1.12) and (1.8)–(1.11) are solvable.

This property (Theorems 1.1 and 1.2) of second-order partial differential equations was not known before.

The specific ways of satisfying condition (1.23) are discussed below.

Example 1.4. Let us consider the Zel'dovich equation, which is well known in combustion theory (Kolmogorov, Petrovsky, Piskunov ): Let us consider (1.1) with , and

Suppose that is a function of two variables and that , are functions of one variable. We seek the functions in (1.7) in the form , where .

In the articles [3, 4] the case was considered.

Theorem 1.5. Let , . Then condition (1.23) takes the form

We can try to satisfy (1.25) by equating the coefficients of the powers of to zero. We obtain a system of four equations for the two functions :

It turns out that, in a number of interesting cases, all four equations can be solved. Moreover, the function , as well as , remains arbitrary.

Equation (1.25) can be solved by setting ,

System (1.12) has the form

where . The Jacobian has the form .

The exact solutions of the PDE are already constructed, since relations (1.21), (1.23) are satisfied. However, more detailed formulas are usually required. Let us examine this situation in more detail.

Let's choose the function so that the last two equations (1.32) of the system take the elementary form

Then, we have the following system of two equations for ,:

Integrating this system, we obtain:

We still need to analyze the first pair of equations (1.31) for the function .

It turns out that if we make the replacement of variables , then system (1.30)–(1.32) can be integrated. After that, we can return to the variables , which have the form

Formulas (1.35), (1.36) give an exact solution of equation (1.24) in parametric form.

The authors believe that solution (1.36) cannot be constructed by the classical technique of group analysis. However, if we consider , we have the possibility of returning to the original variables . This simplified solution can be constructed by other methods, for example by the Satsuma-Hirota method.

We can come back to the variables . Let us introduce the notation and .

Let's consider (1.24) and the change of variables:

Let , and let ; then the relation for the function takes the form

(see (1.35), ). Also relation (1.36) takes the form

We have a system of two equations for ,

This is system (1.30)-(1.31) for .

Theorem 1.6. Suppose that (1.39), (1.40) hold and that . In this case, (1.24) can be solved. The exact solution of equation (1.24) has the form

Proof. Let us express from (1.40) and substitute into (1.39): We then substitute into (1.40), raise it to the power , and pass to exponential functions. Here a shift constant from (1.40) has been omitted; it can always be restored.

Consider the case .

Theorem 1.7. Suppose . The exact solution of equation (1.24) has the form (1.35), (1.36).

This is a new real family of solutions to (1.24).
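Since the displayed equations were lost from this copy, the Zel'dovich equation is assumed here in its standard form u_t = u_xx + u^2(1 - u). For comparison with the parametric family above, the classical logistic-front traveling wave of this equation (a well-known solution, not the one constructed in this section) can be verified symbolically:

```python
import sympy as sp

# Assumed standard Zel'dovich form (the paper's display (1.24) was lost):
#   u_t = u_xx + u**2 * (1 - u)
x, t = sp.symbols('x t')

# Classical logistic front: width sqrt(2), speed c = 1/sqrt(2).
c = 1/sp.sqrt(2)
u = 1/(1 + sp.exp((x - c*t)/sp.sqrt(2)))

# The residual vanishes identically, confirming the traveling wave.
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - u**2*(1 - u))
```

The same symbolic check can be applied to any candidate closed-form solution produced by the method of this section.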

Example 1.8. Consider the FitzHugh-Nagumo-Semenov semilinear parabolic partial differential equation, which is well known in biophysics [6, 9]: Here we show one more way of integrating system (1.30)–(1.32). Suppose that (!), and that , are functions of one variable. We seek the functions in (1.7) in the form , where .

In this case, the equation analogous to (1.25) can be solved by setting . To avoid any doubts, let us write (1.30)–(1.32) in explicit form:

Suppose and calculate the integrals (!).

The denominator in the system of ODEs (1.31)–(1.32) can be represented as a product of six factors:

Then the exact solution of (1.43) can be written in parametric form as

Theorem 1.9. Suppose that (1.46) holds. The exact solution of equation (1.43) has the form .

After that, we come back to the variable from (1.46), which takes the form

Here is an arbitrary parameter.

Let us substitute this expression into the first equality of (1.46) to obtain the solution in parametric form. This is a new real family of solutions to (1.43).
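The FitzHugh-Nagumo-Semenov equation is assumed here in its standard semilinear form u_t = u_xx + u(1 - u)(u - a) with 0 < a < 1, since the paper's display (1.43) was lost from this copy. For comparison with the parametric family above, the classical kink solution of this standard form can be verified symbolically:

```python
import sympy as sp

# Assumed standard FitzHugh-Nagumo-Semenov form (the paper's display (1.43)
# was lost): u_t = u_xx + u*(1 - u)*(u - a), with parameter a.
x, t, a = sp.symbols('x t a')

# Classical kink: width sqrt(2), speed c = (1 - 2a)/sqrt(2).
c = (1 - 2*a)/sp.sqrt(2)
u = 1/(1 + sp.exp((x - c*t)/sp.sqrt(2)))

# The residual vanishes identically for every value of the parameter a.
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - u*(1 - u)*(u - a))
```

Note that the front stalls (c = 0) exactly at a = 1/2, the balanced case of the cubic nonlinearity.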

Example 1.10. In the three-dimensional case, consider the semilinear parabolic equation and the change of variables: Suppose that the Jacobian . Suppose also that there exists (at least locally) the inverse transformation and that the derivatives are related as ; see . We set . The functions are unknown.

Let's supplement the relations written above by the equalities of mixed derivatives

in the variables .

The nonlinear algebraic system of seven equations in the variables , which follows from system (1.48)–(1.51) and is similar to system (1.7)–(1.9), is underdetermined with respect to the nine derivative variables. Hence there is much arbitrariness in the choice of the functions. In this case we used "method C" from . Solutions of (1.51) have the form

By integrating all the resulting relations (1.51) and the analogous equation (1.9), we obtain the following theorem.

Theorem 1.11. Suppose that is a twice continuously differentiable function of four variables and that are twice continuously differentiable functions of two variables, where .
Then the exact solution of (1.48) can be written in parametric form as

The function is determined from the differential equation

The twice differentiable functions , as well as the function , remain arbitrary.

If , in (1.48), then are functions of one variable. The function is determined from the first-order ordinary differential equation (known as the Abel equation)

Example 1.12. Let us consider the modified Fisher-Kolmogorov-Petrovsky-Piskunov equation : where . In , the parameters were ,

We can choose for (1.1); then , and relations (1.23), (1.25) hold.

System (1.12) has the form: The functions have the form .

The exact solutions of the PDE are already constructed, since relations (1.23), (1.25) are satisfied. However, more detailed formulas are usually required.

Let us assume . Systems (1.57), (1.58) take the form: where

The denominator has three quadratic factors. We can integrate the second and fourth equations of (1.59).

Theorem 1.13. The exact solution of (1.56) can be written in parametric form via systems (1.57) and (1.58), where , , and if , then one has system (1.59).
The exact solution of (1.56) in parametric form is:
where , ,

Proof. The exact solution of system (1.59) has the form: Hence, from (1.63) we have
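The modified Fisher-Kolmogorov-Petrovsky-Piskunov equation (1.56) and its parameters were lost from this copy, so the classical form u_t = u_xx + u(1 - u) is assumed below for illustration. Its known closed-form front (due to Ablowitz and Zeppetella), which is of the same traveling-wave type as the parametric solutions of this section, can be verified symbolically:

```python
import sympy as sp

# Assumed classical Fisher-KPP form (the modified equation (1.56) and its
# parameters were lost from this copy): u_t = u_xx + u*(1 - u)
x, t = sp.symbols('x t')

# Ablowitz-Zeppetella closed-form front: width sqrt(6), speed c = 5/sqrt(6).
c = 5/sp.sqrt(6)
u = 1/(1 + sp.exp((x - c*t)/sp.sqrt(6)))**2

# The residual vanishes identically, confirming the closed-form solution.
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - u*(1 - u))
```

Such symbolic residual checks are a cheap way to validate any candidate solution produced by the parametric construction before attempting to invert the change of variables.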

#### 2. Conclusion

The following new facts complementing PDE theory were discovered in the studies of K. A. Volosov.

(1) The system of four first-order equations (1.8)–(1.11), which replaces the second-order PDE (1.1), is a system of linear algebraic equations with respect to the derivatives and has a unique solution.

(2) There is a new identity which allows one to extract the monomial factor (1.23) from the solvability conditions; it is sometimes called the closure condition.

(3) The monomial factor (1.23) is the closure condition; the solvability condition is a single relation between the three arbitrary functions . This property was not noticed earlier. It is useful to consider the respective closure condition together with each differential equation.

(4) Using this approach it is possible to construct new families of exact solutions in parametric form, examples of which were presented in this article. These solutions cannot be constructed using the classical methods of group analysis.

(5) It is useful to study the properties of the eigenpairs of the matrix . An alternative classification of PDE solutions is possible through the eigenvalues of this matrix.

Below is an extract from the comments on this article by M. V. Karasev, Doctor of Physics and Mathematics, Professor, Head of the Chair of Applied Mathematics of the Moscow State Institute of Electronics and Mathematics, Laureate of the State Award of the Russian Federation.

The article contains several interesting solutions and whole classes of solutions for nonlinear second-order partial differential equations (PDEs) important in applications. Sometimes in the theory of PDEs an approach is used in which a certain transformation (e.g., a replacement of variables) brings the equation to another equation whose solution is already known. Thus, the known solution generates a nontrivial solution of the initial equation. In K. A. Volosov's approach it is suggested not to fix the type of the replacement of variables a priori, leaving it arbitrary at the first stage. The system derived after the replacement of variables contains both the required vector-function (the sought-after solution and its derivatives) and the unknown coordinate transformation (the Jacobi matrix of the replacement of variables). The number of equations in the system is always less than the number of unknowns, which opens possibilities for constructing solutions by introducing various interconnections between the components of the sought-after vector-function. The choice of connections is limited by the requirement that the equations for the coordinate-replacement functions be consistent. This consistency condition is mandatory for the remaining independent components of the sought-after vector-function.

For example, in the classic case of a second-order PDE with two independent variables, the sought-after vector-function has three components; the coordinate replacement contains two more functions; and all of them together are subject to four first-order differential equations. It was found that all the components of the Jacobi matrix of the coordinate replacement can be explicitly expressed through the components of the sought-after vector-function. At this stage it is not required to introduce any additional connections. Then the resulting formula for the Jacobi matrix is considered as a system of first-order equations with respect to the two coordinate-replacement functions, and the consistency condition of this system is written out. It reduces to only one equation, but contains three components of the sought-after vector-function. As a result, great freedom appears in meeting the consistency condition. At this stage it is useful to introduce connections, which in many cases allows one to reduce the consistency condition to a standard differential equation or even to make it explicitly solvable. After that it is necessary to return to the construction of the coordinate replacement, that is, to the integration of the first-order system, but this time with an already known right-hand side. To the extent that this integration can be executed explicitly in quadratures, it is possible to construct an explicit solution of the initial nonlinear partial differential equation.

This algorithm is not tied to any group or symmetry attributes. The proposed transformation of nonlinear multivariable differential equations has a common nature with the mechanism of integration. The coefficients and the type of nonlinearity in the equation being solved are not specified but remain general functions.

This work opens a very interesting line of research in many areas of mathematics and its applications dealing with nonlinear equations with higher-order partial derivatives.

#### Acknowledgments

The author is grateful to V. P. Maslov, M. V. Karasev, S. Yu. Dobrokhotov, V. G. Danilov, and A. S. Bratus for their attention and advice.