#### Abstract

It has not previously been proved that general stochastic dynamical systems possess a special mathematical structure. In this paper, we introduce such a structure. Guided by scientific understanding, we assert that the deterministic part of a general stochastic dynamical system can be decomposed into three significant parts: the gradient of the potential function, the friction matrix, and the Lorentz matrix. Our previous work proved this structure in the low-dimensional case. In this paper, we prove it in the high-dimensional case. Hence, this structure of general stochastic dynamical systems is fundamental.

#### 1. Introduction

Stochastic differential equations are widely used to describe random phenomena in complex systems in physics, biology, and chemistry. For such a stochastic dynamical system, researchers usually build an appropriate mathematical model based on basic scientific laws and then analyze or simulate it to gain insight into its complex phenomena. However, such models are proposed to solve specific scientific problems [1–3]. Until now, the general theory of stochastic differential equations has been limited.

Gaining a deeper understanding of the dynamic behaviors of general stochastic dynamical systems requires the exploration of their intrinsic mechanisms. In 2005, Ao [4] proposed what is now called Ao decomposition, which asserts that the deterministic part of a general stochastic dynamical system can be decomposed into three significant parts: the friction force, the gradient of the potential function, and the Lorentz force. This has inspired much subsequent work [5, 6]. We now discuss the scientific significance of these three terms.

##### 1.1. Potential Function

From the biological point of view, the potential function can be explained by evolutionary theory. As we know, the fundamental nature of biology is determined by evolution. To explain adaptation and speciation, Darwin [7] formulated the theory of evolution based on natural selection. Accordingly, Fisher [8] proposed the fundamental theorem of natural selection, which states that the rate of increase of mean fitness by natural selection is equal to the genetic variance in fitness. In 1932, Wright [9] proposed the fitness landscape concept, by which evolutionary adaptation may be seen as a hill-climbing process on the mean fitness landscape until a local mean fitness peak is reached. In 1940, Waddington [10] proposed the developmental landscape, which is equivalent to the fitness landscape. Wright’s fitness landscape and Fisher’s fundamental theorem of natural selection have been widely used to interpret adaptation as mean fitness maximization (see Figure 1).


This phenomenon is often illustrated on mathematical landscapes as balls rolling downhill. A ball under gravity tends toward a minimum of the gravitational potential energy, viewed as a function of its spatial position. The force on the ball is given by the local slope of this potential.
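As a minimal numerical sketch of this picture (the double-well potential and all numbers below are illustrative assumptions, not taken from the paper), an overdamped ball following the negative local slope settles into a nearby minimum:

```python
import numpy as np

# Illustrative double-well potential U(x) = (x^2 - 1)^2; the force is the
# negative local slope, F(x) = -dU/dx = -4x(x^2 - 1).
def U(x):
    return (x**2 - 1.0)**2

def force(x):
    return -4.0 * x * (x**2 - 1.0)

# Overdamped "ball rolling downhill": x' = F(x), integrated by Euler steps.
x = 0.5                      # start on the slope between the two wells
for _ in range(2000):
    x += 0.01 * force(x)

print(round(x, 6))           # settles at the nearby minimum x = 1
```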

The potential landscape is also called a potential function or energy function, and it has been applied in fields such as physics, biology, and chemistry [11, 12]. It can compare the relative stability of different attractors [13], account for the noise-induced transition rates between neighboring steady states [14], and provide an intuitive picture that reveals the essential mechanism underlying a complex system [15]. In physics, the potential function is closely related to the non-equilibrium thermodynamic framework [16]; in chemistry, it provides useful explanations for protein folding [17, 18]; in biology, it has been used to explore basic problems in evolution such as the robustness, adaptability, and efficiency of real biological networks [19]. To date, however, the general existence of the potential function has remained unsolved; researchers such as Prigogine et al. [20–22] have insisted that the potential function does not exist in non-equilibrium systems because it had not been found.

##### 1.2. The Friction Matrix

The friction matrix (the frictional force) represents dissipation. In this case, the energy of the dynamical system decreases, so the potential function decreases and the corresponding fitness increases. A system that has only friction is a gradient system (see the green thick arrow in Figure 1).

##### 1.3. The Lorenz Matrix

Interestingly, Wright’s fitness landscape theory [9] cannot explain the Red Queen hypothesis proposed by Van Valen [23], which holds that the biotic interactions between species provide a driving force resulting in endless evolution for some species even if the physical environment is unchanged. This is because Wright’s theory neglects the Lorentz matrix (Lorentz force). When the Lorentz force is considered, the population flow on a landscape is not directly down the gradient of the potential function; it also swirls (see the red thin arrow in Figure 1).

The above scientific understanding indicates that these three components exist in general stochastic dynamical systems, but the decomposition has lacked rigorous mathematical proof. This paper is the first to prove that the deterministic part of a general high-dimensional stochastic dynamical system can be decomposed into three components: the frictional force, the gradient of the potential function, and the Lorentz force. Our previous work proved this structure in the low-dimensional case. On this basis, we prove it in the high-dimensional case, where the dimension is an even number. Apart from its theoretical significance, our result has important guiding significance for applications in mathematics and in subjects such as biology and physics. The potential function provides an intuitive and global landscape. Real dynamical systems are complex and usually have more than one steady state, so the potential function has a wide range of applications in real dynamical systems. For example, Hu and Xu [24] studied multi-stable chaotic attractors arising in generalized synchronization for a driving and response system of Rössler type. Angeli and Sontag [25] studied the emergence of multi-stability and hysteresis in monotone input/output systems that arise, under positive feedback, from monotone systems with well-defined steady-state responses. Liu and You [26] studied multi-stability and the existence of almost periodic solutions for a class of recurrent neural networks with bounded activation functions; all the criteria they proposed can easily be extended to many concrete neural network models, such as Hopfield or cellular neural networks. The potential function thus provides a general and unified perspective for investigating different types of dynamical systems.

The rest of this paper is organized as follows. Section 2 introduces Ao decomposition for general stochastic differential equations and proposes our problem: proving the equivalence of the Langevin equation and the equation after Ao decomposition. In Section 3, we reduce this problem to proving the existence of solutions for first-order partial differential equations, and we accomplish this proof in Section 4.

#### 2. A-Type Decomposition for General Stochastic Differential Equations

The Langevin equation in physics, which has the form of a general stochastic differential equation, is usually a more accurate description of physical processes than a purely deterministic one [27–30]. Using the physicists’ notation for the noise, we can write this equation in the form
$$\dot{q} = f(q) + \zeta(q, t). \tag{1}$$

We discuss this equation in $n$-dimensional real Euclidean space. The state variable $q$ is a function of time $t$, and the component functions of $q$ are independent. We assume that $f(q)$ is an infinitely differentiable smooth function. The noise $\zeta$ is a function of $t$ and the state variable $q$, and is almost nowhere differentiable. We consider the case that $\zeta$ is $n$-dimensional white Gaussian noise with mean
$$\langle \zeta(q, t) \rangle = 0 \tag{2}$$
and covariance
$$\langle \zeta(q, t)\, \zeta^{\tau}(q, t') \rangle = 2 D(q)\, \delta(t - t'). \tag{3}$$

The superscript $\tau$ denotes the transpose of a matrix (vector), $\delta(t - t')$ is the Dirac delta function, $\langle \cdot \rangle$ indicates the average over the noise distribution, and the diffusion matrix $D(q)$ is symmetric and positive semi-definite.

Noticing a new formulation of equation (1) [6, 15, 31], we propose Ao decomposition, under which equation (1) can be formally decomposed into
$$[S(q) + A(q)]\, \dot{q} = -\nabla \phi(q) + \xi(q, t), \tag{4}$$
where $S(q)$ is a symmetric positive semi-definite matrix (which we call the “friction matrix”), $A(q)$ is an antisymmetric matrix (the “Lorentz matrix”), $\phi(q)$ is a real and single-valued function of $q$, and $\xi(q, t)$ is $n$-dimensional white Gaussian noise with mean
$$\langle \xi(q, t) \rangle = 0 \tag{5}$$
and covariance
$$\langle \xi(q, t)\, \xi^{\tau}(q, t') \rangle = 2 S(q)\, \delta(t - t'). \tag{6}$$
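To make the decomposition concrete, the following sketch (the matrices $S$ and $A$ are illustrative assumptions, not from the paper) checks numerically that transforming the decomposed equation back to Langevin form produces a diffusion matrix $D = [S+A]^{-1} S\, [S+A]^{-\tau}$ that is symmetric and positive semi-definite, as the covariance in equation (3) requires:

```python
import numpy as np

# Illustrative 2-D example (numbers are assumptions, not from the paper):
# friction matrix S (symmetric positive definite) and Lorentz matrix A
# (antisymmetric) as in the decomposed equation [S + A] q' = -grad(phi) + xi.
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A = np.array([[0.0,  1.5],
              [-1.5, 0.0]])

M = S + A
Minv = np.linalg.inv(M)

# Transforming back to Langevin form gives noise zeta = Minv @ xi, whose
# covariance is 2 * Minv S Minv^T * delta(t - t'), i.e. the diffusion matrix
# D = Minv S Minv^T.
D = Minv @ S @ Minv.T

print(np.allclose(D, D.T))                      # D is symmetric
print(np.all(np.linalg.eigvalsh(D) >= 0))       # and positive semi-definite
```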

It should be noted that equations (3) and (6) are manifestations of the fluctuation-dissipation theorem, where $D(q)$ and $S(q)$ reflect dissipation and the covariances of $\zeta$ and $\xi$ reflect fluctuation [32].

Our main problem is to prove the equivalence of equations (1) and (4). We can also prove that $\phi(q)$ in equation (4) is a potential function.

#### 3. Reduction of Problem into Partial Differential Equations (PDEs)

To show that equation (1) is equivalent to equation (4), we first show that equation (4) implies equation (1). To this end, we assume that the function matrix $S(q) + A(q)$ is invertible and the components of the state variable $q$ are independent. If they are not independent, the dimension can be reduced until they are independent. Therefore, the equations of this system are linearly independent. Equation (4) can be straightforwardly transformed to
$$\dot{q} = -[S(q) + A(q)]^{-1} \nabla \phi(q) + \zeta(q, t),$$
where $\zeta$ is noise that takes the form $\zeta(q, t) = [S(q) + A(q)]^{-1} \xi(q, t)$. To match equation (1), we can then set $f(q) = -[S(q) + A(q)]^{-1} \nabla \phi(q)$. Notice that with this explicit representation of $\zeta$ in terms of $S$, $A$, and $\xi$, as well as equations (5) and (6), we can calculate
$$\langle \zeta(q, t) \rangle = [S(q) + A(q)]^{-1} \langle \xi(q, t) \rangle = 0$$
and
$$\langle \zeta(q, t)\, \zeta^{\tau}(q, t') \rangle = 2\, [S(q) + A(q)]^{-1} S(q)\, [S(q) + A(q)]^{-\tau}\, \delta(t - t').$$

Comparing the above two calculations with equations (2) and (3), we see that we have
$$D(q) = [S(q) + A(q)]^{-1} S(q)\, [S(q) + A(q)]^{-\tau},$$
which gives an explicit representation of the diffusion matrix $D(q)$.

Next, we consider the problem of whether equation (1) implies equation (4). In fact, transforming from equation (1) to equation (4) requires much more effort. In this case, we need to obtain $S(q)$, $A(q)$, and $\phi(q)$ from the general dynamic equation (1). We propose a heuristic inference. While not a rigorous mathematical proof, it leads to a reformulation of the problem into PDEs. The main idea of this heuristic inference is that equations (1) and (4) describe the same dynamical behaviors. Hence we may replace $\dot{q}$ in equation (4) by the right side of equation (1) to obtain
$$[S(q) + A(q)]\, [f(q) + \zeta(q, t)] = -\nabla \phi(q) + \xi(q, t). \tag{12}$$

Regarding $t$ as a parameter in $\zeta$, the above equation can be written as
$$[S(q) + A(q)]\, f(q) + \nabla \phi(q) = \xi(q, t) - [S(q) + A(q)]\, \zeta(q, t), \tag{13}$$
which has a deterministic part that is differentiable up to an arbitrary order and a random part that is nondifferentiable everywhere. From the point of view of physics, the two kinds of noises $\zeta$ and $\xi$ have the same source. Inspired by this, we may assume that we can establish a classification,
$$[S(q) + A(q)]\, \zeta(q, t) = \xi(q, t) \tag{14}$$
and
$$[S(q) + A(q)]\, f(q) = -\nabla \phi(q). \tag{15}$$

This subjective decomposition is the key to understanding Ao decomposition, which results in the consistency of stable points between a stochastic dynamical system and the corresponding deterministic dynamical system. Therefore, the generalized Lyapunov function of the stochastic dynamical system is equivalent to the Lyapunov function of the corresponding deterministic dynamical system. It must be noted that the A-type integral derived from Ao decomposition is a new integral, different from the Itô and Stratonovich integrals. In one dimension, the A-type integral reduces to the $\alpha$-type [33] (Itô corresponds to $\alpha = 0$, and Stratonovich to $\alpha = 1/2$). In the high-dimensional case, the A-type integral is not usually of $\alpha$-type [34].
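The difference between the Itô and Stratonovich conventions can be seen in a short simulation (a sketch; the path length and random seed are arbitrary choices): discretizing the integral of $W\,dW$ with the integrand evaluated at the left endpoint ($\alpha = 0$, Itô) versus the midpoint ($\alpha = 1/2$, Stratonovich) produces results that differ by approximately $t/2$:

```python
import numpy as np

# The alpha-type discretization evaluates the integrand at x + alpha*dx:
# alpha = 0 gives Ito, alpha = 1/2 gives Stratonovich. A minimal numerical
# illustration with the integral of W dW over one Brownian path: the two
# conventions differ by sum((dW)^2)/2, which converges to t/2.
rng = np.random.default_rng(0)
n, t = 200_000, 1.0
dW = rng.normal(0.0, np.sqrt(t / n), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

ito = np.sum(W[:-1] * dW)                      # alpha = 0
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)    # alpha = 1/2

print(abs((strat - ito) - t / 2) < 0.05)       # difference is close to t/2
```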

Combining equations (3) and (6) with (13), we obtain
$$2\, [S(q) + A(q)]\, D(q)\, [S(q) + A(q)]^{\tau}\, \delta(t - t') = 2 S(q)\, \delta(t - t'),$$
which implies
$$[S(q) + A(q)]\, D(q)\, [S(q) + A(q)]^{\tau} = S(q). \tag{16}$$

From the physical point of view, equation (16) is a generalized Einstein relation in more than one dimension. From equation (16), we have
$$D(q) = [S(q) + A(q)]^{-1} S(q)\, [S(q) + A(q)]^{-\tau},$$
and the symmetric part of $[S(q) + A(q)]^{-1}$ is exactly $D(q)$. This is the diffusion matrix defined in equation (3). Hence we can rewrite the identity
$$[S(q) + A(q)]^{-1}\, [S(q) + A(q)] = I$$
as
$$[D(q) + Q(q)]\, [S(q) + A(q)] = I, \tag{19}$$
where $Q(q)$ is an anti-symmetric unknown matrix function and $I$ is the identity matrix. Substituting equation (19) in equation (15), we obtain
$$f(q) = -[D(q) + Q(q)]\, \nabla \phi(q). \tag{20}$$
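The identity behind this step, that the symmetric part of $[S+A]^{-1}$ equals $[S+A]^{-1} S\, [S+A]^{-\tau}$, can be verified numerically (the matrices below are illustrative assumptions, not from the paper):

```python
import numpy as np

# Check (with assumed illustrative matrices) that the symmetric part of
# [S + A]^{-1} is D = [S+A]^{-1} S [S+A]^{-T}, so that [S+A]^{-1} = D + Q
# with Q antisymmetric, as in equation (19).
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive definite (assumption)
A = np.array([[0.0,  1.5],
              [-1.5, 0.0]])         # antisymmetric (assumption)

Minv = np.linalg.inv(S + A)
D = Minv @ S @ Minv.T               # generalized Einstein relation
Q = 0.5 * (Minv - Minv.T)           # antisymmetric part of [S+A]^{-1}

print(np.allclose(0.5 * (Minv + Minv.T), D))   # symmetric part equals D
print(np.allclose(Minv, D + Q))                # [S+A]^{-1} = D + Q
```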

From equation (15), it is easy to see that if $f(q) = 0$, then $\nabla \phi(q) = 0$. Moreover, by equation (15), along the deterministic flow $\dot{q} = f(q)$ we have
$$\frac{d \phi(q)}{d t} = [\nabla \phi(q)]^{\tau}\, \dot{q} = -f^{\tau}(q)\, [S(q) + A(q)]^{\tau}\, f(q) = -f^{\tau}(q)\, S(q)\, f(q) \leq 0.$$

Thus $\phi(q)$ satisfies $\frac{d \phi}{d t} \leq 0$ for all $t$, and it is proven that $\phi(q)$ is a potential function.

Assuming equation (1) holds true, equation (3) is given, and thus $D(q)$ is known. We see that to obtain equation (4), we just have to show that there exist an anti-symmetric matrix $Q(q)$ and a potential function $\phi(q)$ that satisfy equation (20). Assuming basic integrability conditions on $f$, $D$, and $Q$, by the classical Helmholtz decomposition it suffices to show that the curl part of the vector field $[D(q) + Q(q)]^{-1} f(q)$ vanishes, i.e.,
$$\nabla \times \left\{ [D(q) + Q(q)]^{-1} f(q) \right\} = 0, \tag{22}$$
where equation (22) is a family of first-order quasilinear partial differential equations for the coefficients of $Q(q)$ in equation (20).
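As a sanity check of this condition in the simplest setting (a sketch with constant matrices and an assumed potential, which is only a special case), a field of the form $f = -(D+Q)\nabla\phi$ yields a curl-free $(D+Q)^{-1} f$:

```python
import numpy as np

# Sketch (constant D, Q and an assumed potential phi): if f = -(D+Q) grad(phi),
# then (D+Q)^{-1} f = -grad(phi) is curl-free, which is condition (22).
D = np.array([[1.0, 0.2], [0.2, 0.5]])
Q = np.array([[0.0, 0.7], [-0.7, 0.0]])
G = D + Q

def grad_phi(x, y):                       # phi = x^2 y + y^2 (assumption)
    return np.array([2*x*y, x*x + 2*y])

def v(x, y):                              # v = (D+Q)^{-1} f = -grad(phi)
    f = -G @ grad_phi(x, y)
    return np.linalg.solve(G, f)

# 2-D curl of v at a sample point via central differences.
h, (x0, y0) = 1e-5, (0.3, -0.4)
curl = ((v(x0 + h, y0)[1] - v(x0 - h, y0)[1]) / (2*h)
        - (v(x0, y0 + h)[0] - v(x0, y0 - h)[0]) / (2*h))
print(abs(curl) < 1e-6)
```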

We notice that according to the above heuristic inference, equation (22) is a *sufficient* condition for the passage from equation (1) to equation (4). In fact, if equation (22) holds true, then by the Helmholtz decomposition there exists a function $\phi(q)$ such that equation (20) holds true, with the anti-symmetric matrix $Q(q)$ from (22). Moreover, with $D(q)$ from equation (3) and $Q(q)$ from equation (22), we can construct the matrix $[D(q) + Q(q)]^{-1} = S(q) + A(q)$, where $S(q)$ is symmetric and $A(q)$ is anti-symmetric, and $S$ and $A$ satisfy equations (15) and (16). Thus we can construct the noise $\xi(q, t)$ from equation (13), which, together with equation (12), implies that we can construct equation (4) from equation (1).

Our problem has now been reduced to proving the existence of a solution to equation (22), which is a system of first-order PDEs. The rest of the paper is dedicated to the investigation of this first-order PDE system in higher (even) dimension.

#### 4. Existence of Solutions to First-Order Quasilinear Partial Differential Equations in Higher Dimensions

Obviously, . Assume that

From equation (3), we can assume that

Let vector .

By equation (22), we can assume ,

Therefore, according to the matrix-valued cross-product rule, equation (22) can be transformed to

According to the chain rule, we obtain a first-order quasilinear system of partial differential equations, where superscripts denote the corresponding partial derivatives. Our goal is to prove the existence of solutions for PDEs (27).

We first consider the matrix form of PDEs (27),

The independent variables are and , where . If the coefficient matrix has no real eigenvalues at any point of a region, then PDEs (28) are elliptic in this region; note that the equations do not explicitly contain time. Because the coefficient matrix is real, this may occur only when the dimension is an even number. When the multiplicities of its eigenvalues are constant at each point of the whole region, the order of every sub-block of the Jordan normal form is constant in the whole region, so that there exists a nonsingular matrix satisfying
where

Then we assume the following:
(i) and belong to the function space ;
(ii) the order of every Jordan sub-block of the matrix is constant in the whole region ;
(iii) the eigenvalue .

Let . Equation (28) can be transformed to

Then we study the solution of system

This system can be decomposed into sub-systems of the form
where , and are real vector functions whose dimensions equal the order of the matrix . Let , where is the imaginary unit. Then equation (33) can be written in the form

Using operator notation,

When , the sub-system can be decomposed into equations of the form
where the value of is selected from . This case has been solved [35].

When , the sub-system can be transformed to
where

Therefore, we consider the system
where has the form of equation (38). We assume that the pair of eigenvalues is -fold and that there is only one corresponding linearly independent eigenvector, so the complex vector in PDEs (39) is -dimensional. Obviously, without loss of generality, we can assume that in PDEs (39) satisfies , and that PDEs (39) are uniformly elliptic,

We divide both sides of PDEs (39) by to obtain
where is a unit matrix of order . Because the linear transformation maps the half-plane to the disk , we obtain the function
which satisfies , where is some positive constant.
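The specific linear transformation is not preserved in the text; a standard choice with this half-plane-to-disk property is the Cayley transform, sketched below (an assumption for illustration):

```python
import numpy as np

# The Cayley transform q = (lambda - i) / (lambda + i) maps the upper
# half-plane Im(lambda) > 0 onto the unit disk |q| < 1. (This specific map
# is an assumption for illustration; the paper's map is not preserved.)
rng = np.random.default_rng(1)
lam = rng.normal(size=500) + 1j * rng.uniform(0.01, 10.0, size=500)

q = (lam - 1j) / (lam + 1j)
print(np.all(np.abs(q) < 1.0))      # half-plane points land inside the disk
```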

*Definition 1.* The matrix is called quasi-diagonal if
A quasi-diagonal matrix is a lower triangular matrix; we set
(i) to represent an element of the main diagonal;
(ii), to represent elements of the th diagonal under the main diagonal.
Because the coefficient matrix of and in PDEs (41) is quasi-diagonal, PDEs (41) can be written as
where is quasi-diagonal and is the element on the main diagonal of . The first equation of PDEs (41) is
which is a Beltrami equation. Because , , we can extend so that it belongs to and vanishes outside a large enough circle. With , we obtain the solution of the Beltrami equation (45) [36].
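For orientation, a Beltrami equation has the form $w_{\bar z} = \mu\, w_z$; in the degenerate case $\mu = 0$ it reduces to the Cauchy–Riemann equation, which any analytic function satisfies. A quick finite-difference check (illustrative only, not from the paper):

```python
import numpy as np

# A minimal check of the degenerate Beltrami equation w_zbar = mu * w_z with
# mu = 0 (the Cauchy-Riemann case): the analytic function w(z) = z^2 satisfies
# it, using d/dzbar = (d/dx + i d/dy)/2 computed by central differences.
def w(z):
    return z * z

h, z0 = 1e-6, 0.4 + 0.3j
wx = (w(z0 + h) - w(z0 - h)) / (2 * h)           # d/dx
wy = (w(z0 + 1j * h) - w(z0 - 1j * h)) / (2 * h) # d/dy
w_zbar = 0.5 * (wx + 1j * wy)

print(abs(w_zbar) < 1e-8)
```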

Under the coordinate transformation , PDEs (44) change to the standard form
If we let represent the coefficient matrix of , and still use as the independent variable, we have
A. Douglis derived the quasi-diagonal form of by introducing the algebra of hypercomplex numbers [36, 37].

*Definition 2* (see [36]). is called a hypercomplex number, where is defined by equation (38), is a complex number, is the complex part of , and is the nilpotent part of . Note that , where is the th component of . A hypercomplex function is a map from the plane into this algebra, and it has the form
where each is complex-valued.
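The arithmetic of such hypercomplex numbers can be modeled by complex polynomials in the nilpotent generator truncated at the nilpotency index (a toy sketch for illustration; the index 3 below is an arbitrary assumption):

```python
# A toy model (an assumption, for illustration) of Douglis hypercomplex
# numbers a = sum_k a_k e^k with e nilpotent, e^3 = 0: coefficients are
# complex and multiplication is polynomial multiplication truncated at R.
R = 3  # nilpotency index: e^3 = 0

def hmul(a, b):
    c = [0j] * R
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < R:       # terms e^(i+j) with i+j >= R vanish
                c[i + j] += ai * bj
    return c

e = [0j, 1 + 0j, 0j]            # the generator e itself
e2 = hmul(e, e)                 # e^2
e3 = hmul(e2, e)                # e^3 = 0: nilpotency

print(e2)                       # [0j, 0j, (1+0j)]
print(e3 == [0j, 0j, 0j])       # True
```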

Using Definition 2, we can write PDEs (47) as
Let
Note the differential operator
where is the th column vector of . Using the nilpotency of , we have
Therefore, PDEs (49) can be written as
We now define a generating solution.

*Definition 3.* The hypercomplex function space with bounded continuous derivatives up to order on a set is denoted by , and the hypercomplex function space whose th-order derivatives are Hölder continuous with index in is denoted by . The module of a hypercomplex function is denoted by .
A hypercomplex function is called a generating solution of the operator if:
(1) has the form ;
(2) ;
(3) .
We now prove the existence of a generating solution to PDEs (32). We assume that
and that can be extended to so that they all belong to and are equal to zero outside a large enough circle. Let
where is an integral operator,
By the properties of the operator and the assumption on , we obtain
By equation (52), we obtain
Therefore, is a generating solution.

There exists a positive constant such that
where is another notation for . The inverse exists because the complex part of is not zero.

Next, we consider two boundary value problems corresponding to the original nonhomogeneous PDEs (28): the nonlinear Riemann boundary value problem and the nonlinear Riemann–Hilbert boundary value problem. PDEs (26) can be written in the following form in the sense of the Douglis algebra:
where the differential operator
where is a known nilpotent hypercomplex function, and
where is a known complex-valued function of all its variables.

Assume that is a hypercomplex function of the independent variable and the hypercomplex variable . Define the first-order Gateaux differential of with respect to ,
We can similarly define the second-order Gateaux differential , and so on. We use to denote a simple smooth closed contour in the complex plane , whose positive direction is counterclockwise. It divides into a bounded interior region and an unbounded exterior region . Assume that
is a known hypercomplex function of the variable , with hypercomplex elements
Assume that is a hypercomplex function that satisfies the condition on , whose complex part is nowhere zero on . We also introduce the integer notation
Now we can introduce the corresponding nonlinear Riemann boundary value problem.

*Definition 4* (nonlinear Riemann boundary value problem). Assume that is a bounded and simply connected region in the plane , whose boundary is a smooth closed curve, and the positive direction of causes to be located on the left. Denote the complement of by , where the origin of coordinates is located in . In the whole plane , we seek the normal block solution to PDEs (61) that satisfies the nonlinear boundary value condition on ,
and has definite order at infinity, where is an integer.

Then we consider the linear Riemann boundary value problem,
Assume that , are known hypercomplex functions. Note
and a known hypercomplex function on . Then we denote the generating solution of the differential operator by . Before stating our main theorem and its proof, we introduce four lemmas, whose proofs can be found in the Appendix.

Lemma 1. *The Cauchy-type integral
is a block hyperanalytic function that is equal to 0 at infinity,
and the estimate
holds, where is a positive constant related to , and . In addition, it satisfies the boundary value condition on ,*

Lemma 2. *Assume hypercomplex functions . The integral operator is defined by
where is a hypercomplex function in the whole plane . The hypercomplex functional satisfies the following:*
(i) *;*
(ii) *;*
(iii) *for any real number , when ,*
(iv) *the hypercomplex functional satisfies the following system in the Sobolev sense:*

In (i)–(iii), is a positive constant depending only on , , and . The positive number in (ii) and (iii) is

According to Lemma 2, the operator is zero at infinity and continuous in the whole plane . Then we can establish the expression and estimate for the solution of boundary value problem (69).

Lemma 3. *Boundary value problem (69) has a unique solution,
where
and is the integral operator defined in Lemma 2.*

Next, we introduce two estimates of solution .

Lemma 4. *For the solution of boundary value problem (69), the following estimates hold true:
where is a positive constant depending only on , and .*

As above, if the hypercomplex function , define

For the solution of (69), from estimates (81) we can deduce that

Now, we return to seeking solutions of the nonlinear Riemann boundary value problems (61) and (68). Let denote the determined hyperanalytic function defined in the whole plane that satisfies the boundary value condition on and has order at infinity. Because its complex part is nowhere zero, its inverse exists.

Making the substitution

The new hypercomplex function satisfies

By the properties of ,
have the same behavior as , and . If , then when , . If , there must exist a hyperpolynomial whose order is within ,
where all are hypercomplex constants, such that under the transformation

The new hypercomplex function satisfies a similar system and boundary value condition, and when , . This only requires adding a degeneracy condition on the hypercomplex function and its first- and second-order Gateaux differentials at infinity; we omit the specific condition.

Finally, we only need to consider the solution of a nonlinear boundary value problem,

We assume the following:
(i) The hypercomplex function , . For every fixed , the first- and second-order Gateaux differentials of with respect to exist and are continuous. For all hypercomplex elements , , their coefficients with respect to belong to .
(ii) Note that . Assume that for any hypercomplex , the hypercomplex function , as a function of , belongs to . There exists a positive constant such that for any hypercomplex function we have

Theorem 1. *Under the above assumptions, if the positive constant in inequality (91) satisfies
where is the positive constant appearing on the right side of estimates (81) or (83), then the solution of the nonlinear boundary value problem (89), which can be written as , must exist, and it can be constructed by successive approximation and a continuity method.*

*Proof.* We introduce the parameter and consider the boundary value problem with parameter ,
When , this is boundary value problem (89). When , boundary value problem (93) has the unique solution . Assume that there exists a solution of (93) for the value . We want to prove that there exists a certain positive constant , independent of , such that for all , (93) has a solution . Therefore, starting from and proceeding in finitely many steps, we can deduce that boundary value problem (93) has a solution when , i.e., we prove the existence of a solution to PDEs (93). Note the hypercomplex function .

We demonstrate that when condition (92) holds, is bounded. Indeed, because satisfies
where , we use estimate (83) to obtain
By condition (92), we can obtain
Now, we regard as a zeroth-order approximation, and successively determine the sequence of hypercomplex functions according to the form
where , . Since (97) is a linear boundary value problem in , if , then by Lemmas 3 and 4, (97) has a solution belonging to . Using estimate (83), it is easy to prove that if is bounded, then is also bounded. Therefore, the hypercomplex function sequence is uniformly bounded with respect to the module .

Now we prove the convergence of the sequence . We consider the difference
which satisfies, when ,