ISRN Mathematical Physics
Volume 2012 (2012), Article ID 327298, 32 pages
http://dx.doi.org/10.5402/2012/327298
Research Article

Explicit Asymptotic Velocity of the Boundary between Particles and Antiparticles

Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, GSP-1, Moscow 119991, Russia

Received 14 April 2012; Accepted 21 June 2012

Academic Editors: V. Putkaradze, P. Roy, and M. Znojil

Copyright © 2012 V. A. Malyshev et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

On the real line, initially there is an infinite number of particles on the positive half line, each having one of finitely many negative velocities. Similarly, there is an infinite number of antiparticles on the negative half line, each having one of finitely many positive velocities. Each particle moves with the constant speed initially prescribed to it. When a particle and an antiparticle collide, they both disappear. It is the only interaction in the system. We find explicitly the large time asymptotics of $\beta(t)$, the coordinate of the last collision before time $t$ between a particle and an antiparticle.

1. Introduction

We consider a one-dimensional dynamical model of the boundary between two phases (particles and antiparticles, bears and bulls), where the boundary moves due to the reaction (annihilation, transaction) of pairs of particles of different phases.

Assume that at time $t=0$ an infinite number of $(+)$-particles and $(-)$-particles are situated on $\mathbb{R}_{+}$ and $\mathbb{R}_{-}$, respectively, with given one-point correlation functions (densities). Moreover, all velocities of the $(+)$-particles are negative and all velocities of the $(-)$-particles are positive; that is, the two phases move towards each other. Particles of the same phase do not see each other and move freely with the velocities prescribed initially. The only interaction in the system is the following: when two particles of different phases find themselves at the same point, they immediately disappear (annihilate). It follows that the phases stay separated, and one might call any point in-between them the phase boundary (e.g., it could be the point of the last collision). Thus, the boundary trajectory is a random piecewise constant function of time.

The main result of the paper is an explicit formula for the asymptotic velocity of the boundary as a function of the parameters, that is, of the densities and the initial velocities. This velocity is continuous in the parameters, but on some hypersurface certain first derivatives in the parameters do not exist. This kind of phase transition has a very clear interpretation: the particles with smaller activities (velocities) cease to participate in the boundary movement, they always stay behind the boundary and thus do not influence the market price. In this paper, we consider only the case of constant densities, that is, a period of very small volatility in the market. This simplification allows us to get explicit formulae. In [1], a related case was considered, however with nonconstant densities and random dynamics.

The main technical tool of the proof may seem surprising (and may be of interest in its own right): we reduce this infinite particle problem to the study of a special random walk of one particle in the orthant $\mathbb{R}_{+}^{N}$. The asymptotic behavior of this random walk is studied using the correspondence between random walks in $\mathbb{R}_{+}^{N}$ and dynamical systems introduced in [2].

The organization of the paper is as follows. In Section 2, we give the exact formulation of the model and of the main result. In Section 3, we introduce the correspondence between the infinite particle process, random walks, and dynamical systems. In Sections 4 and 5, we give the proofs.

2. Model and the Main Result

2.1. Initial Conditions

At time $t=0$, on the real axis there is a random configuration of particles consisting of $(+)$-particles and $(-)$-particles. Besides the phase, the particles also differ by type: there is a finite set of types of $(+)$-particles and a finite set of types of $(-)$-particles. For each $(+)$-type, the initial configuration is a sequence of points on the positive half-axis, and for each $(-)$-type it is a sequence of points on the negative half-axis, the second index of a point being the type of the particle in the configuration. Thus, all $(+)$-particles are situated on $\mathbb{R}_{+}$ and all $(-)$-particles on $\mathbb{R}_{-}$. The distances between neighboring particles of the same type are random. The random configurations corresponding to particles of different types are assumed to be independent. The random distances between neighboring particles of the same type are also assumed to be independent and, moreover, identically distributed; that is, their common distribution depends only on the phase and the type. Our technical assumption is that all these distributions are absolutely continuous and have finite means. The density of each type is the reciprocal of the corresponding mean distance.

2.2. Dynamics

We assume that all $(+)$-particles of a given type move in the left direction with the same constant (type-dependent) speed, and all $(-)$-particles of a given type move in the right direction with the same constant speed. If at some time a $(+)$-particle and a $(-)$-particle are at the same point (we call this a collision or annihilation event), then both disappear. Collision between particles of different phases is the only interaction; otherwise, the particles do not see each other. Thus, for example, at time $t$ a particle occupies its initial position shifted by its velocity times $t$, provided it has not collided with a particle of the opposite phase before time $t$. The absolute continuity of the distributions of the inter-particle distances guarantees that the events in which more than two particles collide simultaneously have zero probability.

This defines the infinite particle process studied below; we refer to it as the basic process.

We define the boundary $\beta(t)$ between the plus and minus phases to be the coordinate of the last collision that occurred at some time $s \le t$. Before the first collision, we put $\beta(t)=0$. Thus, the trajectories of the random process $\beta(t)$ are piecewise constant functions; we will assume them continuous from the left.
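Since the dynamics is completely determined by the initial positions and the constant velocities, it can be reproduced by a short event-driven simulation. The following sketch is not part of the paper: the function names and parameter values are illustrative, and exponential spacings are used only as an example of an absolutely continuous distribution with finite mean. At each step the code finds the earliest meeting time among all surviving (particle, antiparticle) pairs, annihilates that pair, and records the collision point, that is, the successive values of the boundary $\beta(t)$.

```python
import random

def simulate(plus, minus):
    """Event-driven simulation of the annihilation dynamics.

    plus  : list of (x, v) with x > 0 and v < 0  ((+)-particles, moving left)
    minus : list of (x, v) with x < 0 and v > 0  ((-)-particles, moving right)
    Returns the list of collisions as (time, coordinate) pairs, i.e. the
    successive values of the boundary beta(t).
    """
    plus, minus = list(plus), list(minus)
    collisions, t = [], 0.0
    while plus and minus:
        # The next annihilation is the earliest pairwise meeting among all
        # surviving (+,-) pairs: motion is free until annihilation, so the
        # minimizing pair is still alive at that moment.
        best = None
        for i, (xp, vp) in enumerate(plus):
            for j, (xm, vm) in enumerate(minus):
                dt = (xp - xm) / (vm - vp)   # closing speed vm - vp > 0
                if best is None or dt < best[0]:
                    best = (dt, i, j)
        dt, i, j = best
        t += dt
        collisions.append((t, plus[i][0] + plus[i][1] * dt))
        # both colliding particles disappear; the others keep moving freely
        plus = [(x + v * dt, v) for k, (x, v) in enumerate(plus) if k != i]
        minus = [(x + v * dt, v) for k, (x, v) in enumerate(minus) if k != j]
    return collisions

def renewal(n, rho, sign):
    """n points of a renewal configuration with density rho, placed on the
    positive (sign = +1) or negative (sign = -1) half-axis."""
    xs, x = [], 0.0
    for _ in range(n):
        x += random.expovariate(rho)
        xs.append(sign * x)
    return xs

if __name__ == "__main__":
    random.seed(0)
    # illustrative parameters: two (+)-types and one (-)-type
    plus = [(x, -1.0) for x in renewal(60, 1.0, +1)] + \
           [(x, -3.0) for x in renewal(60, 2.0, +1)]
    minus = [(x, 2.0) for x in renewal(120, 1.0, -1)]
    history = simulate(plus, minus)
    t_mid, b_mid = history[len(history) // 2]   # avoid edge effects
    print("empirical boundary velocity ~", b_mid / t_mid)
```

For large samples the printed ratio should approach the asymptotic velocity given by Theorem 2.1 below.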

2.3. Main Result

For any pair of subsets of the $(+)$-types and the $(-)$-types, define the numbers entering the main formula below. The following nondegeneracy condition on the parameters is assumed (condition (2.6) below, which excludes a set of parameters of zero measure). If the limit $\lim_{t\to\infty}\beta(t)/t$ exists a.e., we call it the asymptotic speed of the boundary. Our main result is an explicit formula for this speed.

Theorem 2.1. The asymptotic velocity of the boundary exists and is equal to the explicit expression (2.5).

Note that the definition of the quantities entering this expression is not ambiguous under the assumed condition.

Now, we will explain this result in more detail. There are three possible orderings of the numbers defined above, and the explicit form of the velocity differs in each of the three cases.

The first case is evident; the other two will be explained in Appendix B.

2.4. Another Scaling

Normally, the minimal difference between consecutive prices (a tick) is very small. Moreover, one customer can have many units of the commodity. That is why it is natural to consider densities scaled by a large parameter, with some fixed constants. Then the phase boundary trajectory depends on this parameter, and the results look even more natural: it follows from the main theorem that, for any positive time, the suitably rescaled boundary trajectory converges in probability to a deterministic limiting boundary trajectory.

This scaling suggests a curious interpretation of the model as the simplest model of a one-instrument (e.g., one stock) market. A $(+)$-particle initially at a point $x>0$ is a seller who wants to sell his stock for the price $x$, which is higher than the current price. There are several groups of sellers, characterized by their activity, that is, by the speed with which they move towards a more realistic price. Similarly, the $(-)$-particles are buyers who would like to buy the stock for a price lower than the current one. When a seller and a buyer meet each other, the transaction occurs and both leave the market. The main feature is that the traders do not change their behavior (the speeds are constant); in some sense this is the case of zero volatility.

There are market models of a similar type (but very different from ours; see [3–5]). In the physics literature, there are also other one-dimensional models of boundary movement; see [6, 7].

2.5. Example of Phase Transition

The case when the activities of all $(+)$-particles are the same (and similarly for all $(-)$-particles) is very simple. There is no phase transition in this case: the boundary velocity depends analytically on the activities and densities. This is very easy to prove, because the particles of each phase then annihilate in their initial order, so the $n$th collision time is given by the simple formula (2.16), and the $n$th collision point by (2.17).
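The explicit formulas (2.16) and (2.17) are not reproduced in this version of the text. The sketch below shows a plausible form of them under the assumptions just made (one activity per phase, so the particles of each phase annihilate in their initial order); the function name and the parametrization are mine.

```python
def collisions_single_activity(xp, xm, v_plus, v_minus):
    """Single activity per phase: the n-th (+)-particle annihilates with the
    n-th (-)-particle.  xp: increasing positions > 0, xm: decreasing positions < 0,
    v_plus, v_minus: speeds (> 0) towards the origin.
    Returns (collision times, collision coordinates)."""
    times, points = [], []
    for a, b in zip(xp, xm):
        t_n = (a - b) / (v_plus + v_minus)   # meeting time of the n-th pair
        times.append(t_n)
        points.append(a - v_plus * t_n)      # meeting point of the n-th pair
    return times, points
```

The times are increasing, since the gaps between the $n$th particles of the two phases increase with $n$, so the $n$th collision is indeed the $n$th event in time.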

A more complicated situation was considered in [1]. There, the $(+)$-particles perform random jumps in both directions with a constant drift (and similarly for the $(-)$-particles), so in [1] the order of particles of the same type can change with time. There are no such simple formulae as (2.16) and (2.17) in that case. The result is, however, the same as in (2.15).

The phase transition appears already in the case when one phase has two types of particles while the particles of the other phase stand still. Consider the auxiliary function defined as the asymptotic speed of the boundary in the system where the particles of the second type are absent altogether.

Then the asymptotic velocity equals this auxiliary function in one region of the parameter space and is given by a different explicit expression in the complementary region. At the point separating the two regions, the velocity is continuous but not differentiable in the parameters.

2.6. Balance Equations—Physical Evidence

Assume that the speed of the boundary is constant. Then a $(+)$-particle will meet the boundary if and only if the particle and the boundary approach each other. In this case, the mean number of $(+)$-particles of a given type meeting the boundary on a time interval equals the product of the length of the interval, the density of the type, and the relative speed of approach. Summing over the types gives the total number of $(+)$-particles meeting the boundary during time $t$. Similarly, one obtains the number of $(-)$-particles meeting the boundary.

These two numbers should be equal (the balance equation), and after dividing by $t$, this gives an equation with respect to the boundary speed. Note that both sides of this equation are continuous in the boundary speed. Moreover, one side is decreasing and the other is increasing, so the equation determines the speed uniquely. One can obtain the main result from this equation.
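Because one side of the balance equation is monotone increasing and the other monotone decreasing in the candidate speed, the equation can be solved numerically by bisection. The sketch below uses my own parametrization (each type is a pair (rho, v) with v > 0 the speed towards the origin), and the flux expressions in the comments are my reading of the omitted formulas of this subsection.

```python
def balance_speed(plus, minus, tol=1e-12):
    """Heuristic boundary speed W from the balance equation of Section 2.6.

    plus, minus : lists of (rho, v) with rho, v > 0.
    A (+)-particle of type j (velocity -v_j) meets a boundary moving with
    velocity W iff W + v_j > 0, contributing a flux rho_j * (W + v_j);
    a (-)-particle of type k (velocity +v_k) meets it iff v_k - W > 0,
    contributing rho_k * (v_k - W).  The difference of the total fluxes is
    monotone in W, so bisection finds the unique root.
    """
    def excess(W):
        f_plus = sum(r * max(W + v, 0.0) for r, v in plus)
        f_minus = sum(r * max(v - W, 0.0) for r, v in minus)
        return f_plus - f_minus
    lo = -max(v for _, v in plus)    # here excess(lo) <= 0
    hi = max(v for _, v in minus)    # here excess(hi) >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: two (+)-types and one (-)-type.
print(balance_speed([(1.0, 1.0), (2.0, 3.0)], [(1.0, 2.0)]))
```

With the same illustrative parameters as in the simulation sketch of Section 2.2, the value returned here should roughly match the empirical velocity printed there.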

One could think that in this way one can obtain a rigorous proof. However, it is not so easy. We develop here a different technique, which gives much more information about the process than the simple balance equations.

3. Random Walk and Dynamical System in $\mathbb{R}_{+}^{N}$

3.1. Associated Random Walk

One can consider the phase boundary as a special kind of server where the customers (particles) arrive in pairs and are immediately served. However, the situation is more involved than in standard queueing theory, because the server moves, and the correlation between its movement and the arrivals is rather complicated. That is why this analogy does not help much. Instead, we describe the crucial correspondence between random walks in $\mathbb{R}_{+}^{N}$ and the infinite particle problem defined above, which allows us to obtain the solution.

For each type, denote the coordinate of the extreme (that is, closest to the opposite phase) particle of this type still existing at time $t$, that is, not annihilated at some earlier time. Define the distances between such extreme particles of opposite phases. The trajectories of these random processes are assumed to be left continuous. Consider the vector-valued random process whose components are these distances.

Denote the state space of this process. Note that the distances satisfy certain linear conservation laws; that is why the state space can be given as the set of nonnegative solutions of a system of linear equations, and its dimension is smaller than $N$. However, it is convenient to speak about a random walk in $\mathbb{R}_{+}^{N}$, taking into account that only a subset of smaller dimension is visited by the random walk.

Now we describe the trajectories in more detail. The coordinates decrease linearly, with the corresponding speeds, until one of them becomes zero. Suppose that some coordinate vanishes at a time $t$. This means that a $(+)$-particle of some type collided with a $(-)$-particle of some type. Then the components of the process involving these two types jump upwards, by the (random) spacings to the next surviving particles of these types, while the components involving neither of the two types do not change at all, that is, they have no jumps.

Note that the increments of the coordinates at a jump time do not depend on the history of the process before that time, since the inter-particle spacings are independent and identically distributed for each fixed type. It follows that the process is a Markov process. However, this continuous time Markov process has singular transition probabilities (due to the partly deterministic movement). This fact, however, does not prevent us from using the techniques from [2], where random walks in the orthant were considered.
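The precise definition of the coordinates is lost in this version of the text. Purely as an illustration, the sketch below assumes that the coordinates are the pairwise distances between the extreme surviving particles of each $(+)$-type and each $(-)$-type, and shows the corresponding jump rule: only the coordinates involving the two colliding types are incremented, by fresh independent spacings, which is what makes the process Markov. Exponential spacings are again only an example of an admissible distribution.

```python
import random

def jump(z, j0, k0, mean_gap_plus, mean_gap_minus, rng=random):
    """One jump of the associated walk (my reading of the construction).

    z[j][k] >= 0 is the distance between the extreme surviving (+)-particle
    of type j and the extreme surviving (-)-particle of type k; a collision
    of types (j0, k0) means z[j0][k0] == 0.  The annihilated particles are
    replaced by the next ones of the same types, at fresh i.i.d. distances,
    so only the coordinates involving j0 or k0 jump.
    """
    xi_plus = rng.expovariate(1.0 / mean_gap_plus[j0])    # new gap on the (+) side
    xi_minus = rng.expovariate(1.0 / mean_gap_minus[k0])  # new gap on the (-) side
    for k in range(len(z[j0])):
        z[j0][k] += xi_plus        # all coordinates involving type j0
    for j in range(len(z)):
        z[j][k0] += xi_minus       # all coordinates involving type k0
    return z
```

Between jumps, each such coordinate decreases linearly at a rate equal to the sum of the two corresponding speeds.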

3.2. Ergodic Case

We call the process ergodic if there exists a neighborhood of zero such that the mean value of the first hitting time of this neighborhood is finite for any starting point of the state space. In the ergodic case, the correspondence between the boundary movement and random walks is completely described by the following theorem.

Theorem 3.1. The following two conditions are equivalent: (1) the process is ergodic; (2) the parameters fall into the first of the three cases described in Section 2.3.

All other cases of the boundary movement correspond to nonergodic random walks. Moreover, we will see that in all other cases the process is transient. Condition (2.6), which excludes a set of parameters of zero measure, in fact excludes the null recurrent cases.

To understand the corresponding random walk dynamics, we introduce a new family of processes.

3.3. Faces

To each subset of the set of coordinate indices there corresponds a face of $\mathbb{R}_{+}^{N}$: the set of points whose coordinates from this subset are positive, while the remaining coordinates vanish. For brevity, we will sometimes denote a face by the corresponding subset. However, one should note that inclusions between faces are always understood as inclusions of the corresponding subsets, not of the faces themselves.

We single out a set of “appropriate” faces, defined by an explicit condition on the corresponding subsets of indices.

Lemma 3.2. With probability one, the random walk visits only the appropriate faces.

The proof will be given in Appendix A. This lemma explains why in the study of the process one can consider only “appropriate” faces.

3.4. Induced Process

One can define a family of infinite particle processes indexed by subsets of the particle types: the process corresponding to a given subset contains only the particles of the types from this subset, while all other parameters (i.e., the densities and velocities) are the same as for the basic process. Note that these processes are in general defined on different probability spaces. Obviously, the process corresponding to the full set of types is the basic process itself.

Similarly to the basic process, each of these processes has an associated random walk in the orthant of the corresponding dimension. The usefulness of these processes is that they describe all possible types of asymptotic behavior of the main process.

Consider a face whose complementary set of indices corresponds to a nonempty set of $(+)$-types and a nonempty set of $(-)$-types. The process built from the particles of these remaining types will be called the induced process associated with this face. Its coordinates are defined in the same way as for the basic process, and its state space is the orthant of the corresponding dimension. A face is called ergodic if the associated induced process is ergodic.

3.5. Induced Vectors

Introduce an auxiliary plane in $\mathbb{R}^{N}$, given by an explicit linear equation.

Lemma 3.3. Let a face be ergodic, and consider the associated random walk started from a point of this face. Then there exists a vector such that, for such initial points, the position of the walk divided by $t$ converges to this vector as $t \to \infty$.

This vector will be called the induced vector for the ergodic face. We will see other properties of the induced vector below.

3.6. Nonergodic Faces

Let a face be nonergodic (a nonergodic face). An ergodic face will be called outgoing for it if it contains the nonergodic face and the components of its induced vector corresponding to the added indices are positive. Denote the set of outgoing faces of the given nonergodic face.

Lemma 3.4. The set of outgoing faces contains a minimal element, in the sense that it is contained in any other outgoing face.

This lemma will be proved in Section 5.2.

3.7. Dynamical System

We now define a piecewise constant vector field in $\mathbb{R}_{+}^{N}$, consisting of induced vectors, as follows: at a point of an ergodic face, the field equals the induced vector of this face; at a point of a nonergodic face, it equals the induced vector of the minimal element of the set of its outgoing faces. Consider the dynamical system corresponding to this vector field.

It follows that the trajectories of the dynamical system are piecewise linear. Moreover, if the trajectory hits a nonergodic face, it leaves it immediately. It goes with constant speed along an ergodic face until it reaches its boundary.

We call an ergodic face final if either it is the zero face (the origin) or all coordinates of its induced vector are positive. The central statement is that the dynamical system hits a final face, stays on it forever, and, if this face is not the origin, goes along it to infinity.

The following theorem, together with Theorem 3.1, is parallel to Theorem 2.1: in each of the three cases of Theorem 2.1, Theorems 3.1 and 3.5 describe the properties of the corresponding random walks in the orthant.

Theorem 3.5. If the process is ergodic, then the origin is the fixed point of the dynamical system, and all trajectories of the dynamical system hit the origin.
In the first of the two remaining cases, the process is transient and there exists a unique ergodic final face; this face is determined by the index set defined in (2.9). Moreover, all trajectories of the dynamical system hit this face and stay on it forever.
In the second of the two remaining cases, the process is transient and there exists a unique ergodic final face; this face is determined by the index set defined in (2.8). Moreover, all trajectories of the dynamical system hit this face and stay on it forever.
For any initial point, the trajectory makes a finite number of transitions from one face to another until it reaches the origin or one of the final faces.

This theorem will be proved in Section 5.3.

3.8. Simple Examples of Random Walks and Dynamical Systems

If each phase contains particles of a single type, the associated process is a random process on the half-line $\mathbb{R}_{+}$. It is deterministic in the interior: it moves with constant velocity towards the origin. When it reaches zero at some time, it jumps backwards by a random distance distributed as the sum of two independent inter-particle spacings, one for each phase. The dynamical system coincides with this deterministic motion inside $\mathbb{R}_{+}$ and has the origin as its fixed point.
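For this simplest case the associated process is easy to simulate directly. The sketch below (illustrative, with exponential spacings) reproduces the behaviour just described: a deterministic decrease at speed equal to the sum of the two activities, and a backward jump by the sum of two fresh spacings each time zero is reached.

```python
import random

def walk_one_type(v_plus, v_minus, rho_plus, rho_minus, n_events=1000, seed=1):
    """One type per phase: z(t) = distance between the two extreme surviving
    particles.  Returns the successive collision (zero-hitting) times."""
    rng = random.Random(seed)
    speed = v_plus + v_minus
    t, times = 0.0, []
    z = rng.expovariate(rho_plus) + rng.expovariate(rho_minus)   # initial gap
    for _ in range(n_events):
        t += z / speed                                           # linear decrease to 0
        times.append(t)
        z = rng.expovariate(rho_plus) + rng.expovariate(rho_minus)  # backward jump
    return times

# Long-run behaviour: a mean gap of (1/rho_plus + 1/rho_minus) is consumed at
# speed v_plus + v_minus, so times[n]/n tends to
# (1/rho_plus + 1/rho_minus) / (v_plus + v_minus).
```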

If one phase has a single type and the other has two types, the state space of the process is the quarter plane $\mathbb{R}_{+}^{2}$. Inside the quarter plane, the process is deterministic and moves with a constant velocity having negative components. From any point of either boundary axis, it jumps into the interior by a random vector whose components have the distributions of the corresponding inter-particle spacings. The classification results for random walks in the quarter plane can be easily transferred to this case. The dynamical system is deterministic and has negative components of the velocity inside the quarter plane; when it hits one of the axes, it moves along it. The velocity along the first axis is always negative, but along the second axis it can be either negative or positive. This is the phase transition we described above. Correspondingly, in the first case the origin is the fixed point, and in the second case the vector field has a positive component along the second axis.

4. Collisions

4.1. Basic Process

Now we come back to our infinite particle process. A collision between particles of a given pair of types (one from each phase) will be called a collision of that type. Denote the number of collisions of a given type on the time interval $[0,t]$.

Lemma 4.1. If the process is ergodic, then the corresponding collision rates, that is, the a.s. limits of the numbers of collisions of each type divided by $t$, exist, are positive, and satisfy the system of linear equations (4.3).

Proof. Recall how the collisions can be enumerated. The proof of (4.2) is similar to the proof of the corresponding assertion in [8]. For large $t$, one obtains an approximate linear relation between the numbers of collisions of the different types; it becomes an exact equality if, instead of the mean distances, one takes the random distances between particles. By the law of large numbers and by (4.2), the system (4.3) follows.

We will need below some new notation. In the new variables, (4.3) can be rewritten in a more compact form, and an obvious balance equation holds. Rewriting the system (4.3) in these variables and introducing one more auxiliary variable, we obtain a system of linear equations whose unique solution is easily found and is expressed through the quantity defined by (2.5). If the process is ergodic, then by Lemma 4.1 all components of this solution are positive.

Lemma 4.2. Let the process be ergodic. Then (1) the solution described above gives the collision rates, and (2) the speed of the boundary equals the quantity defined by (2.5).

Proof. If the process is ergodic, then by Lemma 4.1 the collision rates exist and are positive for all types; so, by (4.12), the first assertion follows.
Consider the number of particles of a given type which have collided during time $t$. The initial coordinate of the particle of this type which was the last to be annihilated can be expressed through this number and the inter-particle spacings; consider also the annihilation time of this particle. Rewriting this expression and using Lemma 4.1 and the strong law of large numbers, we obtain convergence of the corresponding ratios as $t \to \infty$. At the same time, ergodicity of the process gives that the distance from this particle to the boundary, divided by $t$, tends to zero. Thus, for each type, the rescaled position converges a.e.; the same holds for the other phase. It follows from (4.11) and (4.12) that the boundary velocity is given by (2.5). The lemma is proved.

4.2. Induced Process

Consider the faces whose complements contain a nonempty set of $(+)$-types and a nonempty set of $(-)$-types, and the corresponding induced processes. Denote the number of collisions of a given type on the time interval $[0,t]$ in such an induced process.

The following lemma is quite similar to Lemma 4.1.

Lemma 4.3. If the induced process is ergodic, then the corresponding a.e. limits (the collision rates) exist and are positive for all pairs of types of the induced process; they satisfy a system of linear equations analogous to (4.3).

Introduce notation for the induced process analogous to that of Section 4.1.

Due to (4.24), the analogous relations hold for the induced process. In this way, we obtain a system of linear equations (similar to the system (4.11)); as previously, this system has a unique solution.

For any such process, or for the corresponding induced process (see Section 3), we also define the boundary as the coordinate of the last collision before time $t$, with the same convention before the first collision. The trajectories of this random process are also piecewise constant; we will assume them continuous from the left. The following lemma is completely analogous to Lemma 4.2.

Lemma 4.4. Let the face under consideration be ergodic. Then (1) the corresponding collision rates are positive, and (2) the boundary velocity for this process (or for the corresponding induced process) equals, as an a.e. limit, the expression obtained from the solution above.

Note that for .

Lemma 4.5. For any ergodic face, the vector whose coordinates are given by the expressions above is the induced vector in the sense of Lemma 3.3.

This is quite similar to Lemma 2.2 on page 143 of [8] and to a lemma on page 87 of [9].

It follows from (4.30) and (4.28) that the coordinates of the induced vector are given by explicit formulae.

Note that, by condition (2.6), the coordinates of the induced vectors are nonzero.

The intuitive interpretation of this formula is the following. For example, one of the inequalities means that the particles of the corresponding type overtake the boundary moving with the given velocity; in the contrary case, the particles of this type fall behind the boundary.

5. Proofs

5.1. Proof of Theorem 3.1

One implication has been proved in Lemma 4.2. Now we prove the converse implication. We will use the method of Lyapunov functions to prove ergodicity. Define a linear Lyapunov function whose coefficient vector will be chosen below. One has to verify the following condition: there exists a positive constant such that, for any ergodic face, the scalar product of the coefficient vector with the induced vector of this face does not exceed minus this constant; see [9].

The system (4.3) can be written in matrix form, with a matrix whose elements are indexed by the pairs of types and with an explicitly computable right-hand side vector.

If the assumption of the theorem holds, then the system (4.11) has a positive solution. One can then choose positive coefficients so that the required strict inequalities hold; for example, one can take them proportional to this positive solution. With this choice, the coefficient vector satisfies the system (5.3).

For an ergodic face, define the vector whose coordinates are given by (4.23) for the indices of the face and are set to zero for the remaining indices. It follows from (4.26) and (4.30) that the induced vector can be written in terms of the matrix and the vector defined in (5.4) and (5.5). By (5.10), the required scalar product can then be estimated, using the fact that the constructed vector belongs to the face.

Note that the matrix in (5.3) is a nonnegative operator; this can be checked directly on any vector. By formula (5.13), the relevant scalar products are strictly negative, and, as the number of faces is finite, one can choose the constants so that the bound is uniform over all ergodic faces.

The theorem is proved.

5.2. Proof of Lemma 3.4

For any nonergodic face whose complement contains types of both phases, define the auxiliary quantity introduced below; this definition is correct because the corresponding set of types is always nonempty.

Introduce the face determined by this quantity. By Theorem 3.1, the associated induced process is ergodic, and hence this face is ergodic.

So, there can be two possible cases, according to which phase realizes the extremum in the definition; in each case the face is constructed accordingly. By construction, it contains the given nonergodic face.

We show that the constructed face is the minimal ergodic outgoing face for the given nonergodic face. Consider the first case; the second one is quite similar. By Theorem 3.1, the induced process is ergodic, and this gives ergodicity of the constructed face.

By formula (4.32) and by formula (5.18), it follows from Lemma B.1 that the components of the induced vector corresponding to the added types are positive. This means that the constructed face is outgoing for the given nonergodic face.

To finish the proof of Lemma 3.4, it is sufficient to show that the constructed face is the minimal outgoing face. We give the proof by contradiction. Suppose there exists an ergodic outgoing face which does not contain the constructed one. By (4.31)–(4.33), the coordinates of its induced vector are given explicitly. As this face is outgoing, the corresponding coordinates must be positive. Thus, only two situations are possible, and in the first of them we obtain an inclusion that contradicts the assumption.

So only the second situation is possible; it remains to show that, in this situation, the constructed face is contained in the given outgoing face.

Suppose this is not the case. Then either, by Lemma B.1, some coordinate of the induced vector is nonpositive and hence the face cannot be outgoing, or, by (5.18) and Theorem 3.1, the corresponding induced process is nonergodic and hence the face is also nonergodic. This contradicts the assumption of ergodicity of this face. So the inclusion holds, and the lemma is proved.

5.3. Proof of Theorem 3.5

The first goal of this subsection is to study the trajectories of the dynamical system. After that, using the obtained knowledge about its behavior, we will prove Theorem 3.5. Let us consider the trajectory of the dynamical system starting at a given point.

According to the definition of the vector field, any trajectory visits some sequence of faces. In general, this sequence depends on the initial point and contains both ergodic and nonergodic faces. It is very complicated to give a precise list of all faces visited by a concrete trajectory started from a given point. Our idea is to find a common finite subsequence of ergodic faces, in the order they are visited by any trajectory. We find this subsequence together with the time moments at which the trajectory first enters the closure of each face of the subsequence. Moreover, it will follow from our proof that the intervals between these time moments are finite, that the dimensions of the ergodic faces in this sequence decrease, and that any trajectory, after hitting the closure of some face in this sequence, never leaves this closure.

Proposition 5.1. There exists a monotone sequence of faces and a sequence of time moments with the following property: from each of these time moments on, the trajectory stays in the closure of the corresponding face (the closure being taken in $\mathbb{R}_{+}^{N}$). Moreover, the sequence of faces depends only on the parameters of the model (i.e., on the velocities and densities), but the sequence of time moments depends also on the initial point of the trajectory. Thus, any trajectory hits the final set in finite time.

The proof of Proposition 5.1 will be given at the end of this subsection.

First, we present an algorithm for constructing the sequence of faces. By Lemma 3.2, we can consider only the appropriate faces. The algorithm consists of several steps and constructs a sequence of faces or, in fact, the corresponding sequence of subsets of types. We prefer here to speak of groups, a group being the set of particle types listed in the corresponding subset.

The notation below has the same meaning as earlier.

Algorithm 5.2. The algorithm starts from an initial group and proceeds by steps. At each step, it compares two quantities built from the velocities and densities of the current group and of the next, not yet included, particle types. Depending on the result of the comparison, it adjoins to the current group the next type of one phase or of the other; when a certain comparison fails, the algorithm is finished and the current group is declared to be the final group of the algorithm.