Abstract

Initially, on the real line, there is an infinite number of particles on the positive half-line, each having one of negative velocities . Similarly, there is an infinite number of antiparticles on the negative half-line, each having one of positive velocities . Each particle moves with the constant speed initially prescribed to it. When a particle and an antiparticle collide, they both disappear. This is the only interaction in the system. We find explicitly the large-time asymptotics of —the coordinate of the last collision before between a particle and an antiparticle.

1. Introduction

We consider a one-dimensional dynamical model of the boundary between two phases (particles and antiparticles, bears and bulls), where the boundary moves due to the reaction (annihilation, transaction) of pairs of particles of different phases.

Assume that at time an infinite number of -particles and -particles are situated on and , respectively, and have one-point correlation functions: Moreover, for any , that is, the two phases move towards each other. Particles of the same phase do not see each other and move freely with the velocities prescribed initially. The only interaction in the system is the following: when two particles of different phases find themselves at the same point, they immediately disappear (annihilate). It follows that the phases stay separated, and one may call any point in between them the phase boundary (e.g., it could be the point of the last collision). Thus, the boundary trajectory is a random piecewise constant function of time.
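To make the dynamics concrete, here is a minimal event-driven simulation of the annihilation process under assumed parameters (exponential spacings, two speeds for the right phase and one for the left phase; none of these values come from the paper). Since particles move ballistically until annihilation, the next collision is simply the earliest meeting time among all surviving pairs of opposite phases:

```python
import random
random.seed(1)

# Assumed parameters (none come from the paper): N particles per phase,
# exponential spacings with mean 0.5; the right phase moves left with speed
# 1.0 or 2.0 (chosen at random per particle), the left phase moves right
# with speed 1.5.
N = 200
plus, x = [], 0.0        # (initial position, speed): right half-line, moving left
for _ in range(N):
    x += random.expovariate(2.0)
    plus.append((x, random.choice([1.0, 2.0])))
minus, x = [], 0.0       # left half-line, moving right
for _ in range(N):
    x -= random.expovariate(2.0)
    minus.append((x, 1.5))

# Particles move ballistically until they annihilate, so the pair (i, j)
# would meet at the absolute time (x_i - x_j) / (s_i + s_j); the next
# collision is the earliest such meeting among the surviving pairs.
alive_p, alive_m = set(range(N)), set(range(N))
collisions = []          # (time, coordinate) of each annihilation
while alive_p and alive_m:
    t, i, j = min(
        ((plus[i][0] - minus[j][0]) / (plus[i][1] + minus[j][1]), i, j)
        for i in alive_p for j in alive_m
    )
    alive_p.remove(i)
    alive_m.remove(j)
    collisions.append((t, plus[i][0] - plus[i][1] * t))

print("collisions:", len(collisions))
print("last collision at t=%.2f, x=%.2f" % collisions[-1])
```

The sequence of collision coordinates is exactly the boundary trajectory described above: a random piecewise constant function, recorded at the collision times.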

The main result of the paper is an explicit formula for the asymptotic velocity of the boundary as a function of the parameters—densities and initial velocities. This velocity turns out to be continuous, but on some hypersurface certain first derivatives in the parameters do not exist. This kind of phase transition has a very clear interpretation: the particles with smaller activities (velocities) cease to participate in the boundary movement—they always stay behind the boundary, that is, they do not influence the market price . In this paper, we consider only the case of constant densities , that is, a period of very small volatility in the market. This simplification allows us to obtain explicit formulae. In [1], the case was considered, however, with nonconstant densities and random dynamics.

The main technical tool of the proof may seem surprising (and may be of interest in its own right): we reduce this infinite-particle problem to the study of a special random walk of one particle in the orthant with . The asymptotic behavior of this random walk is studied using the correspondence between random walks in and dynamical systems introduced in [2].

The paper is organized as follows. In Section 2, we give the exact formulation of the model and of the main result. In Section 3, we introduce the correspondence between the infinite-particle process, random walks, and dynamical systems. In Sections 4 and 5, we give the proofs.

2. Model and the Main Result

2.1. Initial Conditions

At time , there is a random configuration of particles on the real axis, consisting of -particles and -particles. -particles and -particles also differ by type: denote by the set of types of -particles and by the set of types of -particles. Let be the initial configuration of particles of type , and let be the initial configuration of particles of type , where the second index is the type of the particle in the configuration. Thus, all -particles are situated on and all -particles on . The distances between neighboring particles of the same type are denoted by , where we put . The random configurations corresponding to particles of different types are assumed to be independent. The random distances between neighboring particles of the same type are also assumed to be independent and, moreover, identically distributed; that is, the random variables are independent, and their distribution depends only on the upper and second lower indices. Our technical assumption is that all these distributions are absolutely continuous and have finite means. Denote , .

2.2. Dynamics

We assume that all -particles of type move to the left with the same constant speed , where . The -particles of type move to the right with the same constant speed , where . If at some time a -particle and a -particle are at the same point (we call this a collision or an annihilation event), then both disappear. Collision between particles of different phases is the only interaction; otherwise, the particles do not see each other. Thus, for example, at time , the th particle of type will be at the point if it does not collide with some -particle before time . The absolute continuity of the distributions of the random variables guarantees that events in which more than two particles collide have zero probability.

We denote this infinite-particle process by .

We define the boundary between the plus and minus phases to be the coordinate of the last collision that occurred at some time . For , we put . Thus, the trajectories of the random process are piecewise constant functions; we will assume them to be continuous from the left.

2.3. Main Result

For any pair of subsets , define the numbers: The following condition is assumed: If the limit exists a.e., we call it the asymptotic speed of the boundary. Our main result is an explicit formula for .

Theorem 2.1. The asymptotic velocity of the boundary exists and is equal to where

Note that the definition of and is unambiguous because and .

Now, we explain this result in more detail. As , there can be three possible orderings of the numbers : (1) : in this case ; (2) if , then and ; moreover, ; (3) if , then and ; moreover,

Item is evident. Items and will be explained in Appendix B.

2.4. Another Scaling

Normally, the minimal difference between consecutive prices (a tick) is very small. Moreover, one customer can hold many units of the commodity. That is why it is natural to consider the scaled densities for some fixed constants . Then, the phase boundary trajectory depends on . The results look even more natural in this scaling: namely, it follows from the main theorem that, for any , there exists the following limit in probability: that is, the limiting boundary trajectory.

This scaling suggests a curious interpretation of the model—the simplest model of a one-instrument (e.g., a stock) market. A particle initially at is a seller who wants to sell his stock for the price , which is higher than the existing price . There are groups of sellers characterized by their activity in moving towards a more realistic price. Similarly, the -particles are buyers who would like to buy a stock for a price lower than . When a seller and a buyer meet, the transaction occurs, and both leave the market. The main feature is that the traders do not change their behavior (the speeds are constant), that is, in some sense, the case of zero volatility.

There are market models of a similar type (but very different from ours; see [3–5]). In the physics literature, there are also other one-dimensional models of boundary movement; see [6, 7].

2.5. Example of Phase Transition

The case , that is, when the activities of the -particles are all the same (and similarly for the -particles), is very simple. There is no phase transition in this case, and the boundary velocity depends analytically on the activities and densities. This is very easy to prove because the th collision time is given by the simple formula: and the th collision point is given by
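In this single-activity case, the collision times and points can be computed directly, since the th collision always pairs the th particles of the two phases. The following sketch (with made-up speeds and densities) checks the resulting boundary velocity against the law of large numbers:

```python
import random
random.seed(2)

# Illustration with assumed parameters (not from the paper): all right-phase
# particles share the leftward speed a and all left-phase particles the
# rightward speed b, so particles never overtake within a phase and the k-th
# collision pairs the k-th particles of the two phases.
a, b = 1.0, 2.0
rho_p, rho_m = 2.0, 3.0          # densities (1 / mean spacing)
N = 1000
xp, xm, x = [], [], 0.0
for _ in range(N):
    x += random.expovariate(rho_p)
    xp.append(x)
x = 0.0
for _ in range(N):
    x -= random.expovariate(rho_m)
    xm.append(x)

# k-th collision time and coordinate (the simple formulas referred to above)
t = [(xp[k] - xm[k]) / (a + b) for k in range(N)]
y = [xp[k] - a * t[k] for k in range(N)]

# By the law of large numbers, y_k / t_k converges to the boundary velocity
# (b*rho_m - a*rho_p)/(rho_p + rho_m), which is the balance-equation answer.
v_pred = (b * rho_m - a * rho_p) / (rho_p + rho_m)
print("empirical velocity  %.3f" % (y[-1] / t[-1]))
print("predicted velocity  %.3f" % v_pred)
```

With mean spacings and , the prediction here is ; the empirical ratio approaches it at rate of order as grows.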

A more complicated situation was considered in [1]. There, the movement of the -particles has random jumps in both directions with a constant drift (and similarly for the -particles). In [1], the order of particles of the same type can change with time. There are no such simple formulae as (2.16) and (2.17) in that case. The result is, however, the same as in (2.15).

The phase transition already appears in the case when , and, moreover, the -particles stand still, that is, . Denote , . Consider the function: It is the asymptotic speed of the boundary in the system with no -particles of type 2 at all.

Then, the asymptotic velocity is given by the function: if , and if . We see that at the point the function is not differentiable in .
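The non-differentiability can be seen numerically from the balance considerations of Section 2.6. In the hypothetical example below (one group of right-phase particles of density r and speed 1 against one moving and one standing group on the left, all values invented for illustration), the standing particles reach the boundary only when its velocity v is negative, so they enter the balance equation only for v < 0, which produces a kink in v as a function of r:

```python
# Hypothetical illustration of the kink (all parameter values invented):
# one right-phase type of density r moving left with speed 1, against a
# left-phase type of density 1 moving right with speed 1 and a standing
# left-phase type of density 1.  The standing particles contribute flux
# max(-v, 0) to the balance, i.e., only when the boundary velocity v < 0.
def velocity(r):
    f = lambda v: r * (1 + v) - max(1 - v, 0.0) - max(-v, 0.0)
    lo, hi = -1.0, 1.0        # f(lo) < 0 < f(hi) and f is nondecreasing
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for r in (0.5, 0.9, 1.0, 1.1, 2.0):
    print("r = %.1f   v = %+.4f" % (r, velocity(r)))

# one-sided slopes at the kink r = 1
eps = 1e-6
print("left slope  %.3f" % ((velocity(1.0) - velocity(1.0 - eps)) / eps))
print("right slope %.3f" % ((velocity(1.0 + eps) - velocity(1.0)) / eps))
```

The printed one-sided slopes differ (−1/2 versus −1/3 in this example): the velocity is continuous but not differentiable at the parameter value where the slow group drops out, which is exactly the kind of phase transition described above.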

2.6. Balance Equations—Physical Evidence

Assume that the speed of the boundary is constant. Then, a -particle meets the boundary if and only if . Hence, the mean number of -particles of type meeting the boundary on the time interval is . The total number of -particles meeting the boundary during time is Similarly, the number of -particles meeting the boundary is

These numbers should be equal (the balance equations); after dividing by , this gives the following equation for : Note that both sides are continuous in . Moreover, the left (right) side is decreasing (increasing). This defines uniquely. One can obtain the main result from this equation.
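Since one side of the balance equation is monotone decreasing in v and the other increasing, the root can be found by bisection. A small numerical sketch with invented multi-type parameters:

```python
# Hypothetical multi-type parameters (for illustration only): right-phase
# types move left with speeds a[k] and densities p[k]; left-phase types move
# right with speeds b[j] and densities q[j].  A right-phase particle of type
# k reaches a boundary moving at velocity v iff a[k] + v > 0, contributing
# flux p[k]*(a[k] + v); a left-phase particle of type j contributes
# q[j]*(b[j] - v) while b[j] - v > 0.
a, p = [1.0, 3.0], [2.0, 1.0]
b, q = [0.5, 2.0], [1.0, 1.5]

def flux_plus(v):
    return sum(pk * (ak + v) for ak, pk in zip(a, p) if ak + v > 0)

def flux_minus(v):
    return sum(qj * (bj - v) for bj, qj in zip(b, q) if bj - v > 0)

# flux_plus is nondecreasing in v and flux_minus nonincreasing, so their
# difference has a unique root, found by bisection on a safe bracket.
lo, hi = -max(a), max(b)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if flux_plus(mid) - flux_minus(mid) < 0:
        lo = mid
    else:
        hi = mid
v = 0.5 * (lo + hi)
print("asymptotic boundary velocity v = %.6f" % v)
```

For these particular parameters, all types stay active at the root, so the equation is linear there and can be checked by hand (v = −3/11).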

One might think that in this way one could obtain a rigorous proof. However, it is not so easy. We develop here a different technique, which gives much more information about the process than the simple balance equations.

3. Random Walk and Dynamical System in

3.1. Associated Random Walk

One can view the phase boundary as a special kind of server where the customers (particles) arrive in pairs and are immediately served. However, the situation is more involved than in standard queueing theory, because the server moves, and the correlation between its movement and the arrivals is rather complicated. That is why this analogy does not help much. Instead, we describe the crucial correspondence between random walks in and the infinite-particle problem defined above, which allows us to obtain the solution.

Denote by () the coordinate of the rightmost (leftmost) -particle of type (-particle of type ) still existing at time , that is, not annihilated at some time . Define the distances . The trajectories of the random processes are assumed to be left continuous. Consider the random process , where .

Denote by the state space of . Note that the distances , for any , satisfy the following conservation laws: where and . That is why the state space can be described as the set of nonnegative solutions of the system of linear equations: where . It follows that the dimension of equals . However, it is convenient to speak about a random walk in , keeping in mind that only a subset of dimension is visited by the random walk.

Now, we describe the trajectories in more detail. The coordinates decrease linearly with the speeds , respectively, until one of the coordinates becomes zero. Suppose that at some time . This means that a -particle of type collided with a -particle of type ; let them have numbers and , respectively. Then, the components of become , and the other components do not change at all, that is, they have no jumps.

Note that the increments of the coordinates at the jump time do not depend on the history of the process before time , since the random variables () are independent and identically distributed for each fixed type. It follows that is a Markov process. However, this continuous-time Markov process has singular transition probabilities (due to the partly deterministic movement). This fact, however, does not prevent us from using the techniques of [2], where random walks in were considered.

3.2. Ergodic Case

We call the process ergodic if there exists a neighborhood of zero such that the mean value of the first hitting time of from the point is finite for any . In the ergodic case, the correspondence between the boundary movement and random walks is completely described by the following theorem.

Theorem 3.1. The following two conditions are equivalent: (1) the process is ergodic; (2) .

All other cases of boundary movement correspond to nonergodic random walks. Moreover, we will see that in all other cases the process is transient. Condition (2.6), which excludes a set of parameters of zero measure, in fact excludes the null recurrent cases.

To understand the corresponding random walk dynamics, we introduce a new family of processes.

3.3. Faces

Let . The face of associated with is defined as If , then . For brevity, instead of , we will sometimes write . However, one should note that an inclusion like is always understood for subsets of , not for the faces themselves.

Define the following set of “appropriate” faces .

Lemma 3.2. It holds that

The proof will be given in Appendix A. This lemma explains why, in the study of the process , one can consider only “appropriate” faces.

3.4. Induced Process

One can define a family of infinite-particle processes , where . The process is the process with and . All other parameters (i.e., the densities and velocities) are the same as for . Note that these processes are, in general, defined on different probability spaces. Obviously, .

Similarly to , the processes have associated random walks in with . The usefulness of these processes is that they describe all possible types of asymptotic behavior of the main process .

Consider a face , that is, a face whose complement is , where and . The process will be called the induced process associated with . The coordinates are defined in the same way as , where . The state space of this process is , where . A face is called ergodic if the induced process is ergodic.

3.5. Induced Vectors

Introduce the plane:

Lemma 3.3. Let be ergodic with , and let be the process with the initial point . Then, there exists a vector such that, for any with , one has, as ,

This vector will be called the induced vector for the ergodic face . We will see other properties of the induced vector below.

3.6. Nonergodic Faces

Let be a face which is not ergodic (a nonergodic face). An ergodic face will be called outgoing for if for . Let be the set of outgoing faces for the nonergodic face .

Lemma 3.4. The set contains the minimal element in the sense that, for any , one has .

This lemma will be proved in Section 5.2.

3.7. Dynamical System

We now define a piecewise constant vector field in , consisting of induced vectors, as follows: if belongs to an ergodic face , and if belongs to a nonergodic face , where is the minimal element of . Let be the dynamical system corresponding to this vector field.

It follows that the trajectories of the dynamical system are piecewise linear. Moreover, if a trajectory hits a nonergodic face, it leaves it immediately. It moves with constant speed along an ergodic face until it reaches the boundary of that face.

We call an ergodic face final if either or all coordinates of the induced vector are positive. The central statement is that, if , the dynamical system hits the final face, stays on it forever, and goes along it to infinity.

The following theorem, together with Theorem 3.1, is parallel to Theorem 2.1. That is, in all three cases of Theorem 2.1, Theorems 3.1 and 3.5 describe the properties of the corresponding random walks in the orthant.

Theorem 3.5. If is ergodic, then the origin is a fixed point of the dynamical system . Moreover, all trajectories of the dynamical system hit .
Assume . Then, the process is transient and there exists a unique ergodic final face , such that for . This face is where is defined by (2.9). Moreover, all trajectories of the dynamical system hit and stay there forever.
Assume . Then, the process is transient and there exists a unique ergodic final face , such that for . This face is where is defined by (2.8). Moreover, all trajectories of the dynamical system hit and stay there forever.
For any initial point , the trajectory has a finite number of transitions from one face to another until it reaches or one of the final faces.

This theorem will be proved in Section 5.3.

3.8. Simple Examples of Random Walks and Dynamical Systems

If , the process is a random process on . It is deterministic on —it moves with constant velocity towards the origin. When it reaches at time , it jumps backwards: where has the same distribution as . The dynamical system coincides with inside and has the origin as its fixed point.

If and, moreover, , then the state space of the process is . Inside the quarter-plane, the process is deterministic and moves with velocity . From any point of the boundary , it jumps to the random point , and from any point of the boundary , it jumps to the point , where and have the same distributions as and , respectively. The classification results for random walks in can easily be transferred to this case; the dynamical system is deterministic and has negative velocity components inside . When it hits one of the axes, it moves along it. The velocity along the first axis is always negative; along the second axis, however, it can be either negative or positive. This is the phase transition we described above. Correspondingly, the origin is a fixed point in the first case, while in the second case the vector field has a positive value along the second axis.
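A sketch of such a quarter-plane walk under assumptions of our own (a plausible coordinate choice, not necessarily the paper's exact construction): take one right-phase type against two left-phase types with equal within-type speeds, and track the two gaps between the leftmost surviving right-phase particle and the rightmost surviving left-phase particle of each type. Each gap shrinks at the sum of the two approach speeds and receives a random jump when it hits zero:

```python
import random
random.seed(3)

# Assumed parameters (for illustration only): one right-phase type with
# leftward speed a and density rho_p; two left-phase types with rightward
# speeds b[0], b[1] and densities rho_m[0], rho_m[1].
a, rho_p = 1.0, 1.0
b, rho_m = [0.5, 3.0], [1.0, 1.0]

# z[j] = gap between the leftmost surviving right-phase particle and the
# rightmost surviving left-phase particle of type j; it shrinks at rate
# a + b[j] and jumps when it hits zero (a collision with type j).
z = [1.0, 1.0]
pos, t = 0.0, 0.0      # current leftmost right-phase position, current time
for _ in range(20000):
    dt, j = min((z[k] / (a + b[k]), k) for k in range(2))  # first gap to close
    t += dt
    pos -= a * dt
    for k in range(2):
        z[k] -= (a + b[k]) * dt
    z[j] = 0.0         # the walk hits the boundary (avoid float residue)
    # both colliding particles are replaced by their neighbors: every gap
    # grows by a fresh right-phase spacing, gap j also by a type-j spacing
    xi = random.expovariate(rho_p)
    for k in range(2):
        z[k] += xi
    z[j] += random.expovariate(rho_m[j])
    pos += xi

print("empirical boundary velocity %.3f" % (pos / t))
print("gap to the slow left type %.1f" % z[0])
```

With these made-up parameters, the balance equation gives boundary velocity 1, faster than the slow left type (speed 0.5): that type never catches the boundary, its gap coordinate drifts to infinity, and the walk is transient along that axis, which is the positive-vector-field case just described.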

4. Collisions

4.1. Basic Process

We now come back to our infinite-particle process . A collision of particles of the types will be called a collision of type . Denote by the number of collisions of type on the time interval .

Lemma 4.1. If the process is ergodic, then the following positive limits exist a.s. and satisfy the following system of linear equations:

Proof. Recall that the collisions can be represented as follows. If , then for any , where for and for . Note that the proof of (4.2) is similar to the proof of the corresponding assertion in [8]. For large , we have Note that this is an exact equality if, instead of and , we take the random distances between particles. By the law of large numbers and by (4.2), the system (4.3) follows.

Below we will need the following new notation: In the new variables, (4.3) can be rewritten as follows: where Obviously, the following balance equation holds: Rewrite the system (4.3) in a more convenient form, using the variables , . Then, It follows that, for all , Introduce the variable . We get the following system of equations with respect to the variables : It is easy to see that this system has a unique solution: where is defined by (2.5). If is ergodic, then by Lemma 4.1 we have for any .

Lemma 4.2. Let the process be ergodic. Then,(1),(2)the speed of the boundary .

Proof. If is ergodic, then, by Lemma 4.1, and for all , . So, by (4.12), we have
Let be the number of particles of type which had collisions during time . Then, is the initial coordinate of the particle of type that was the last to be annihilated among the particles of this type. Let be the annihilation time of this particle. Then, Rewrite this expression as follows: It follows that By Lemma 4.1 and the strong law of large numbers, as . At the same time, the ergodicity of the process gives that , as . Thus, for any , a.e. Similarly, one can prove that, for all , It follows from (4.11) and (4.12) that the boundary velocity is defined by (2.5). The lemma is proved.

4.2. Induced Process

Consider the faces such that , where and . Let be the number of collisions of type on the time interval in the process .

The following lemma is quite similar to Lemma 4.1.

Lemma 4.3. If the process is ergodic, then the following a.e. limits exist and are positive for all pairs , They satisfy the following system of linear equations:

Introduce the following notation: For , , we have , and , .

Due to (4.24), for , we have It follows that for all . Put . In this way, we have obtained the following system of linear equations (similar to the system (4.11)) with respect to the variables : As before, this system has a unique solution:

For any process , or for the corresponding induced process (see Section 3), we also define the boundary as the coordinate of the last collision before . Let us assume that . The trajectories of the random process are also piecewise constant; we will assume them to be left continuous. The following lemma is completely analogous to Lemma 4.2.

Lemma 4.4. Let , where and , and let be an ergodic face. Then, (1) and ; (2) the boundary velocity for the process (or for the corresponding ) equals (with the a.e. limit)

Note that for .

Lemma 4.5. For any ergodic face (), the vector with the coordinates equal to is the induced vector in the sense of Lemma 3.3.

This is quite similar to Lemma 2.2, page 143 of [8], and Lemma , page 87 of [9].

It follows from (4.30) and (4.28), that the coordinates of the induced vector are given by

Note that by condition (2.6) for all induced vectors if .

The intuitive interpretation of this formula is the following. For example, the inequality means that -particles of type overtake the boundary, which moves with velocity . In the contrary case, , the -particles of type fall behind the boundary.

5. Proofs

5.1. Proof of Theorem 3.1

The implication has been proved in Lemma 4.2. Now, we prove that implies . We will use the method of Lyapunov functions to prove ergodicity. Define the Lyapunov function: where the vector with coordinates will be defined below. One has to verify the following condition: there exists such that, for any ergodic face , , where is the induced vector corresponding to the face ; see [9].

The system (4.3) can be written in matrix form: where is the matrix with elements indexed by , and the vector is It is easy to see that the coordinates of the vector are equal to

If the assumption of the theorem holds, then the system (4.11) has a positive solution, that is, . One can choose positive so that the following condition holds: where and . For example, one can put where Let the vector have coordinates . Then, satisfies the system (5.3), that is, .

For an ergodic face , define the vector with coordinates , where for are defined in (4.23), and we put for . It follows from (4.26) and (4.30) that the induced vector can be written as , with the matrix and the vector defined in (5.4) and (5.5). By (5.10), we have Since the vector belongs to the face and , we obtain

Note that the matrix in (5.3) is a nonnegative operator. In fact, for any vector , where For definiteness, let . By formula (5.13), as , for , if . As the number of faces is finite, one can always choose so that

The theorem is proved.

5.2. Proof of Lemma 3.4

For any nonergodic face with , where and , define This definition is correct because we always have

Introduce the face such that . If , then and . By Theorem 3.1, the induced process is ergodic, and the face is ergodic.

So, there are two possible cases: (i) if , then and ; (ii) if , then and . By construction, we have .

We show that is the minimal ergodic outgoing face for . Consider the first case, namely . The second one is quite similar. Since , we can apply Theorem 3.1, so the induced process is ergodic. This gives the ergodicity of the face .

By formula (4.32), for all , and by formula (5.18), It follows from Lemma B.1 that Thus, we get for all . This means that the face is outgoing for .

To finish the proof of Lemma 3.4, it is sufficient to show that the constructed face is the minimal outgoing face for . We give a proof by contradiction. Suppose there exists an ergodic outgoing face ( for ) such that and . Put By (4.31)–(4.33), the coordinates of the induced vector are given for as follows: As the face is outgoing, we must have for all . Thus, only two situations are possible: or . In the first case, we have and so . But then , and this contradicts the assumption .

So . Let us show that .

Suppose and there is such that . Then, by Lemma B.1, , and, hence, the face cannot be outgoing for . If , there exists some point , where , and by (5.18), It follows from Theorem 3.1 that the induced process is nonergodic and, hence, the face is also nonergodic. This contradicts the assumed ergodicity of the face . So . The lemma is proved.

5.3. Proof of Theorem 3.5

The first goal of this subsection is to study the trajectories of the dynamical system . After that, using the obtained knowledge about the behavior of , we will prove Theorem 3.5. Let be the trajectory of the dynamical system starting at the point .

According to the definition of , any trajectory , , visits some sequence of faces. In general, this sequence depends on the initial point and contains both ergodic and nonergodic faces. It is very complicated to give a precise list of all faces visited by a concrete trajectory started from a given point . Our idea is to find a common finite subsequence of ergodic faces in the order they are visited by any trajectory. We find this subsequence together with the time moments , , where is the first time the trajectory enters the closure of . Moreover, it will follow from our proof that the intervals are finite, that the dimensions of the ergodic faces in this sequence decrease, and that any trajectory, after hitting the closure of some face in this sequence, never leaves this closure.

Proposition 5.1. There exist a monotone sequence of faces: and a sequence of time moments: depending on and having the following property: where denotes the closure of in . Moreover, the sequence depends only on the parameters of the model (i.e., on the velocities and densities), but the sequence of time moments depends also on the initial point of the trajectory . Thus, any trajectory hits the final set in finite time.

The proof of Proposition 5.1 will be given at the end of this subsection.

First, we present an algorithm for constructing the sequence . By Lemma 3.2, we can consider only faces such that . The algorithm consists of several steps and constructs a sequence , , , In fact, it constructs a sequence . We prefer here to use the notation and to call a group consisting of the particle types listed in .

The notation has the same meaning as earlier:

Algorithm 5.2. Put and find .
If , compare and .   (i) If , then .   (ii) If , then .
If , compare and .   (i) If , then .    (ii) If , then . We have already constructed group:
Find . If and hold, then apply the following steps (-a) and (-b). If and , compare and . (i) If , then . (ii) If , then . If and , we compare and . (i) If , then . (ii) If , then the algorithm is finished, and the group is declared to be the final group of the algorithm. If and , we compare and . (i) If , then . (ii) If , then the algorithm is finished, and the group is declared to be the final group of the algorithm. If and , we compare and . (i) If , then . (ii) If , then the algorithm is finished, and the group is declared to be the final group of the algorithm.

If the algorithm did not stop at steps (-c), (-d), or (-e), then step should be performed, and so forth. It is clear that the algorithm stops after a finite number of steps, and as a result, we get a final group , which will have one of the following types:

where , .

We need not only the final group, corresponding to the face along which the trajectory escapes to infinity, but also the whole chain: As follows from the algorithm, this chain is uniquely determined by the parameters of the model.

Let us remark that in the algorithm we have excluded the cases where some of the are zero. We will show below (see Remark 5.5) how to modify the algorithm to take these cases into account as well.

The next lemma is needed for the proof of Theorem 3.5. It is convenient, however, to give its proof here, as it is essentially based on the details of the algorithm defined above.

Lemma 5.3. If , then simultaneously and hold.
If , where, then and .
If , where , then and .

Proof of Lemma 5.3. In fact, if , where , then the algorithm stops at some step (-d), and thus the condition holds. As , we get the proof of part of the lemma. Part is quite similar.
To prove assertion of the lemma, consider the face previous to the final one: Two cases are possible:
Consider the case and the final fragment of the trajectory in the algorithm: Two cases of the first transition in this chain are possible: (1) and ; (2) and . In both cases, one can claim that To prove this, consider both cases separately.
Case 1. As , we have . Thus, , since is a convex linear combination (CLC) of the numbers and . (A CLC of the numbers is for some numbers such that .)
Case 2. Here, we assume . From this, as above, we get that .
Thus, the inequality (5.40) is proved. As is a CLC of and the negative numbers , , , then Then, we have .
The latter transition in the chain occurs because . Then, , as is a CLC of and .
This gives the proof.

Let and be such that The numbers and are nondecreasing functions of . Moreover, increases by 1 if increases by 1. What can the difference between and be? There are two cases:

Case . Consider .

Case . Consider .

Recall that the face is defined by the set of pairs of indices . Namely, to each pair there correspond positive coordinates in the definition (3.4) of the face, and vice versa. For brevity, we say that the face consists of the pairs .

Proposition 5.4. Let the chain (5.36) be given and let Case occur. For any ergodic face not containing the pairs: the following holds: for any pair belonging to , the corresponding component of the vector field is negative: If Case occurs, then for any ergodic face not containing the pairs (5.43), the following components of the vector field are negative: under the condition, of course, that .

Proof of Proposition 5.4. Recall the notation . As mentioned above, the connection between and can be of two kinds— or —which we write schematically as Consider only Case , as Case is symmetric. It is necessary to prove that, for any ergodic face which does not contain for any pair , where , the inequality holds. Thus, we mean the faces with For such faces, .
Consider now the case when the set is not empty. As corresponds to an ergodic group of particles, by Lemma 5.7 . As , then The case when the set is empty corresponds to Case and includes two possible subcases: Consider first (5.54). If the set is not empty, then subcase (5.54) contradicts the ergodicity assumption for (5.52); thus, it is impossible. If the set is empty, then , and the assumption (5.54) means that . As , we easily conclude that in this case: Consider now (5.53). If the set is not empty, then, due to the ergodicity of the group (5.52), we have the strict inequality . If the set is empty, then , and consequently . Finally, we conclude that in subcase (5.53) we always have From (5.53), we have , and it follows that .
This ends the proof.

Proof of Proposition 5.1. Assume that the above algorithm produces the chain of groups (5.36). Let , , , be the faces in corresponding to the chain , , , via the rule (5.31). Denote by , , , the closures of these faces in . That is, in the notation (5.42), It is clear that , and, moreover, . More exactly, or in Case or , correspondingly.
Let be the coordinate description of the trajectory . To prove that , one should check that for all . The trajectory goes along ergodic faces.
The maximal ergodic face is . The vector field on this face is such that . Note that for any other ergodic face containing the pair , the component will also be negative, as by (4.31)–(4.33) it can take only one of the three following negative values: Thus, for any initial point , there is such that , and, moreover, for all .
Thus, . If Case occurs, then we have to show the existence of such that for all . If , then just put . If, however, , that is, , then belongs to some ergodic face . By Proposition 5.4, , and thus there is such that (i.e., ). After that, the dynamical system will never quit . Indeed, assume the contrary. Note that can belong either to or to its boundary (recall that and ). For the trajectory to quit , it must use some outgoing ergodic face . There are two possibilities. The first possibility is . But in this case (see (5.59)) , and we get a contradiction with the hypothesis that is an ergodic outgoing face. The second possibility is and . But, according to Proposition 5.4, for any such face , and thus the dynamical system cannot quit along such a face . This gives a contradiction.
If Case occurs then, quite similarly, one shows the existence of such that for all .
(r) We can proceed further by induction, using Proposition 5.4 successively, to show at step that there exists such that, for any : (i) for all , if Case holds; (ii) for all , if Case holds. Let us now show that, in either case, for all . For concreteness, consider only Case , that is, when Assume that the trajectory of the dynamical system , being at time in , leaves it at some future moment. The set is a finite union of faces of various dimensions. One should then understand which outgoing ergodic faces can be used. Again, there are two possibilities.
Case 1. Consider , that is, . Then there exists such that (otherwise , which gives a contradiction). By Proposition 5.4, we have . This contradicts the fact that the face is outgoing.
Case 2. Consider . Consider . Assume for definiteness that on step of the algorithm we have . Then there exists such that , that is, . Applying Proposition 5.4 to , we get and arrive at a contradiction because is outgoing.
Thus, there exists a time moment such that for the trajectory hits the final ergodic face , which is the complement to the final group (5.35).

An important remark is that the sequence of times depends on the initial point. In particular, for some initial points, consecutive moments and can coincide.

Remark 5.5. Consider the following modification of the algorithm: in Cases (2a) and (-a), change the conditions and to and , correspondingly, leaving all the rest untouched. It is easy to see that all results of this section hold after such a modification as well. In particular, our study covers the situation when ( coincides with the asymptotic boundary velocity of our system; see Section 5.4).
From the above, it follows that any trajectory reaches the final face in finite time. To proceed with the proof of Theorem 3.5, we will prove the following lemma.

Lemma 5.6. For any initial point , the path has a finite number of transitions from one face to another, until it reaches one of the final faces. In other words, the sequence of faces passed by the path is finite, and the last element of this sequence is the final face.

Proof of Lemma 5.6. Consider an arbitrary trajectory . Let be the sequence of all faces visited by this trajectory. Denote by the sequence of the corresponding groups, where . We want to show that the sequence is finite.
Two cases are possible for the transition or, equivalently, for the transition . If the face is ergodic, then the group is obtained by adding some new particle type to the group . During this transition, the dimension of decreases. If the face is nonergodic, then is the minimal outgoing face containing (see Lemma 3.4). In the transition from , some types are deleted, and the dimension of increases. Thus, each transition involves one of two operations: adding a new type or deleting some types. The same type can be added and deleted several times. If we show that additions and deletions are possible only a finite number of times, this will give the finiteness of the sequence .
Note the following fact. Take, for example, some -type . It can be deleted from the group on some step if and only if on the previous step we added to the group some -type with a smaller number (i.e., with greater velocity). That is why type 1, plus or minus, can be added only once and cannot be deleted. -type 2 can be deleted only after adding -type 1; similarly for -type 2. That is why type 2, plus or minus, can be added to the group no more than twice and deleted no more than once. One can prove by induction that any type can be deleted and added no more than a finite number of times.
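The counting argument above can be made quantitative. Assuming only the stated rule (a type can be deleted just after an addition of a type with a smaller number, and must be absent before it is added again), one obtains the recursive bounds sketched below; they reproduce the bounds for types 1 and 2 given in the text. The function name and the closed-form values are ours, not from the paper.

```python
def add_delete_bounds(n):
    """Upper bounds on how many times each type 1..n can be added (A)
    and deleted (D) from the group, assuming a type can be deleted only
    immediately after an addition of a type with a smaller number."""
    A = [0] * (n + 1)  # A[k]: maximal number of additions of type k
    D = [0] * (n + 1)  # D[k]: maximal number of deletions of type k
    for k in range(1, n + 1):
        D[k] = sum(A[1:k])  # each deletion of type k consumes an addition of a smaller type
        A[k] = D[k] + 1     # the initial addition, plus one re-addition per deletion
    return A[1:], D[1:]

# Type 1: added once, never deleted; type 2: added at most twice,
# deleted at most once -- matching the argument in the text.
A, D = add_delete_bounds(4)
print(A, D)  # [1, 2, 4, 8] [0, 1, 3, 7]
```

In particular, all the bounds are finite, which is all the proof of Lemma 5.6 requires.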

Proof of Theorem 3.5. Let the chain (5.36) be the result of the algorithm. Three cases are possible, defined by simple inequalities between , , and .
: this corresponds to part (2.1) of Lemma 5.3, that is, . Thus (Proposition 5.1), all trajectories of the dynamical system reach in finite time and after a finite number of changes. Note that from this, using well-known methods (see [2, 9]), one can obtain an alternative proof of the ergodicity of , in addition to that of Theorem 3.1. The first assertion of Theorem 3.5 is proved.
: this case corresponds to part 2 of Lemma 5.3, and thus , where . From the rules of the algorithm, it follows immediately that , but . Thus (see Theorem 3.1), the process is ergodic, and the face is also ergodic. Now find the vector . Note that . To find the components of , we use formulas (4.31)–(4.33): . By Proposition 5.1, any trajectory, in finite time and after a finite number of changes, reaches and moves along it with constant speed , having strictly positive components (5.66). By the standard methods of [2, 9], we conclude that is transient. The second assertion of Theorem 3.5 is proved.
: this case corresponds to part of Lemma 5.3, and the proof is completely similar to the previous case. That proves assertion of Theorem 3.5.
The fourth assertion of Theorem 3.5 is a corollary of Proposition 5.1 and Lemma 5.6.
Theorem 3.5 is proved.

5.4. Proof of Theorem 2.1

If the associated random walk is ergodic, then, by Lemma 4.2, the speed of the boundary equals , which is defined by (2.5).

Let the process be nonergodic. Then, there are two possible cases: or . From the previous Section 5.3, it follows that any trajectory reaches the final face in finite time, and during this time only a finite number of face changes occurs.

The following assertion is an obvious analog of Proposition 1.4.3 of [2].

Lemma 5.7. For any and any initial point , a.e. as .

Let . We have proved that any trajectory of the dynamical system reaches the final face , where the coordinates of the induced vector are positive. By Lemma 5.7, the coordinates of the process , where , , grow linearly (a.e.) as . In other words, -types with numbers fall behind the boundary and do not contribute to its velocity. This means that the boundary velocity is determined only by the particles of types and is given by formula (2.5). The case is quite similar.
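The effect of the boundary moving with a deterministic asymptotic velocity can be illustrated in the simplest toy case of the model from the Introduction: one velocity per phase and equally spaced initial positions. This is not the paper's general setting, and all names below are ours. With the particles at coordinates k/rho_minus moving left at speed u and the antiparticles at -k/rho_plus moving right at speed w, each phase moves rigidly, so the pairs annihilate in index order, and an elementary computation gives the asymptotic slope (w*rho_plus - u*rho_minus)/(rho_plus + rho_minus) for the collision coordinates.

```python
def last_collision_points(n, rho_plus, rho_minus, w, u):
    """Toy single-speed case: the k-th particle starts at k/rho_minus with
    velocity -u (u > 0); the k-th antiparticle starts at -k/rho_plus with
    velocity +w (w > 0).  Each phase moves rigidly, so the k-th pair is
    always the closest one and the pairs annihilate in index order.
    Returns the (time, coordinate) of the first n collisions."""
    points = []
    for k in range(1, n + 1):
        a = k / rho_minus        # particle position at t = 0
        b = -k / rho_plus        # antiparticle position at t = 0
        t = (a - b) / (u + w)    # meeting time of the k-th pair
        points.append((t, a - u * t))
    return points

pts = last_collision_points(1000, rho_plus=1.0, rho_minus=2.0, w=1.0, u=1.0)
t, x = pts[-1]
print(round(x / t, 6))  # (1*1 - 1*2)/(1 + 2) = -1/3 -> -0.333333
```

Here the denser left-moving phase wins and the boundary drifts to the left; with several velocities per phase, the slower types can fall behind the boundary, which is exactly the phase-transition phenomenon described above.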

Appendices

A. Proof of Lemma 3.2

Let the face be such that is not a direct product. Put . Choose an “appropriate” face so that . To prove the lemma, it is sufficient to show that . As , we always have . Let us prove that . Let and . Then there exist and such that , and the following equation holds: . Take an arbitrary element of the set . As its coordinates satisfy , we have for all . Thus, , and the lemma is proved.

B. Technical Lemma

For brevity, denote

Lemma B.1. One has (i) , ; (ii) ; (iii) , . Similarly, (i) ; (ii) ; (iii) .

Proof. We prove the first three items; the others are quite similar. Using (2.5), one can check that for some such that . It follows that . Thus, . If , then using , we get .
The lemma is proved.

Let and be defined by (2.9) and (2.8). It follows from the lemma that . So the minimum of is reached at the point , and the maximum of is reached at the point .

Funding

A. D. Manita was supported by the Russian Foundation for Basic Research (Grants 12-01-00897 and 11-01-90421).