Abstract

Let be a positive integer, a positive constant and be a sequence of independent identically distributed pseudorandom variables. We assume that the ’s take their values in the discrete set and that their common pseudodistribution is characterized by the (positive or negative) real numbers for any . Let us finally introduce the associated pseudorandom walk defined on by and for . In this paper, we exhibit some properties of . In particular, we explicitly determine the pseudodistribution of the first overshooting time of a given threshold for as well as that of the first exit time from a bounded interval. Next, with an appropriate normalization, we pass from the pseudorandom walk to the pseudo-Brownian motion driven by the high-order heat-type equation . We retrieve the corresponding pseudodistribution of the first overshooting time of a threshold for the pseudo-Brownian motion (Lachal, 2007). In the same way, we get the pseudodistribution of the first exit time from a bounded interval for the pseudo-Brownian motion which is a new result for this pseudoprocess.

1. Introduction

Throughout the paper, we denote by the set of integers, by that of nonnegative integers, and by that of positive integers: , , . More generally, for any set of numbers , we set .

Let be a positive integer, a positive constant, and set . Let be a sequence of independent identically distributed pseudorandom variables taking their values in the set of integers . By pseudorandom variable, we mean a measurable function defined on a space endowed with a signed measure with total mass equal to one. We assume that the common pseudodistribution of the ’s is characterized by the (positive or negative) real pseudo-probabilities for any . The parameters sum to one: .

Now, let us introduce the associated pseudorandom walk defined on by and for . The infinitesimal generator associated with is defined, for any function defined on , as Here we consider the pseudorandom walk which admits the discrete -iterated Laplacian as infinitesimal generator. More precisely, by introducing the so-called discrete Laplacian defined, for any function defined on , by the discrete -iterated Laplacian is the operator given by We then choose the ’s such that which yields, by identification, for any ,

When , is the nearest neighbours pseudorandom walk with a possible stay at its current location; it is characterized by the numbers and . Moreover, if , then ; in this case, we are dealing with an ordinary symmetric random walk (with positive probabilities). If , this is the classical symmetric random walk: and .
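To make the identification concrete, here is a minimal numerical sketch (an illustration only, not part of the paper's derivation). It computes the p_k's by expanding the iterated discrete Laplacian as a convolution kernel; the sign convention (-1)^(N-1) c Δ^N for the infinitesimal generator is an assumption of the sketch, chosen so that the case N = 1 reduces to the ordinary symmetric random walk described above.

    import numpy as np

    # One-step kernel of the walk, assuming the generator is (-1)**(N-1) * c * Delta^N,
    # where Delta f(x) = f(x+1) - 2*f(x) + f(x-1) is the discrete Laplacian.
    def pseudo_probabilities(N, c):
        kernel = np.array([1.0])                      # Dirac mass at 0
        for _ in range(N):                            # iterate the discrete Laplacian N times
            kernel = np.convolve(kernel, [1.0, -2.0, 1.0])
        kernel *= (-1.0) ** (N - 1) * c               # assumed sign convention
        kernel[N] += 1.0                              # add the identity: entry N corresponds to j = 0
        return kernel                                 # entry N + j holds p_j, for j = -N, ..., N

    p = pseudo_probabilities(N=3, c=0.01)
    print(p, p.sum())                                 # the p_j's sum to one; some of them are negative
    print(pseudo_probabilities(N=1, c=0.5))           # [0.5 0.  0.5]: the classical symmetric random walk

For N greater than 1 some of the p_j's are negative, which is exactly what makes the walk a signed (pseudo) random walk rather than an ordinary one.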

Actually, with the additional assumption that for any (i.e., the ’s are symmetric, or the pseudorandom walk has no drift), the ’s are the unique numbers such that

where is an analytical extension of and stands for the th derivative of .

Our motivation for studying the pseudorandom walk associated with the parameters defined by (4) is that it is the discrete counterpart of the pseudo-Brownian motion as the classical random walk is for Brownian motion. Let us recall that pseudo-Brownian motion is the pseudo-Markov process with independent and stationary increments, associated with the signed heat-type kernel which is the elementary solution of the high-order heat-type equation . The kernel is characterized by its Fourier transform: The corresponding infinitesimal generator is given, for any -function , by

The reader can find extensive literature on pseudo-Brownian motion. For instance, let us quote the works of Beghin et al. [1–20] and the references therein.

We observe that (5) and (7) are closely related to the continuous -iterated Laplacian . For , the operator is the two-Laplacian related to the famous biharmonic functions: in the discrete case, and in the continuous case,

In the discrete case, it has been considered by Sato [21] and Vanderbei [22].

The link between the pseudorandom walk and pseudo-Brownian motion is the following one: when normalizing the pseudorandom walk on a grid with small spatial step and temporal step (i.e., we construct the pseudoprocess where denotes the usual floor function), the limiting pseudoprocess as is exactly the pseudo-Brownian motion.

Now, we consider the first overshooting time of a fixed single threshold or ( being integers) for :

as well as the first exit time from a bounded interval :

with the usual convention that . Hence, when , and , the overshoot at time which is can take the values , that is, . Similarly, when , , and when , . We put , and .

In the same way, we introduce the first overshooting times of the thresholds and ( being now real numbers) for :

as well as the first exit time from a bounded interval : with the similar convention that , and we set, when the corresponding time is finite,

In this paper we provide a representation for the generating function of the joint distributions of the couples , , and . In particular, we derive simple expressions for the marginal distributions of , , and . We also obtain explicit expressions for the famous “ruin pseudoprobabilities” and . The main tool employed in this paper is generating functions.

Taking the limit as goes to zero, we retrieve the joint distributions of the couples and obtained in [10, 11]. Therein, we used Spitzer’s identity for deriving these distributions. Moreover, we obtain the joint distribution of the couple which is a new and important result for the study of pseudo-Brownian motion. In particular, we deduce the “ruin pseudo-probabilities” and ; the results have been announced without any proof in a survey on pseudo-Brownian motion [13], after a conference held in Madrid (IWAP 2010).

In [11, 17, 18], the authors observed a curious fact concerning the pseudodistributions of and : they are linear combinations of the Dirac distribution and its successive derivatives (in the sense of Schwartz distributions). For instance,

The quantity is to be understood as the functional acting on test functions according to . The appearance of the ’s in (15), which is quite surprising for probabilists, can be better understood thanks to the discrete approach. Indeed, the ’s come from the location at the overshooting time of for the normalized pseudorandom walk: the location takes place in the “cluster” of points .
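Concretely, a linear combination of the Dirac distribution and its derivatives acts on a test function φ through the classical pairing ⟨δ_a^(k), φ⟩ = (-1)^k φ^(k)(a). The small symbolic sketch below only illustrates this action; the coefficients used are placeholders, not the actual coefficients appearing in (15).

    import sympy as sp

    x, a = sp.symbols('x a')

    # Action of sum_k coeffs[k] * delta_a^(k) on a test function phi(x),
    # using the pairing <delta_a^(k), phi> = (-1)**k * phi^(k)(a).
    def apply_dirac_combination(coeffs, phi, at):
        return sum(c * (-1) ** k * sp.diff(phi, x, k).subs(x, at)
                   for k, c in enumerate(coeffs))

    phi = sp.exp(-x ** 2)                             # a smooth test function
    print(apply_dirac_combination([1, sp.Rational(1, 2), sp.Rational(1, 6)], phi, a))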

In order to facilitate the reading of the paper, we have divided it into three parts:
Part I—some properties of the pseudorandom walk;
Part II—first overshooting time of a single threshold;
Part III—first exit time from a bounded interval.

The reader will find a list of notations in Table 2, which is placed at the end of the paper.

2. Part I—Some Properties of the Pseudorandom Walk

2.1. Pseudodistribution of and

We consider the pseudorandom walk related to a family of real parameters satisfying for any and . Let us recall that the infinitesimal generator associated with is defined by

In this section, we look for the values of , , for which the infinitesimal generator is of the form (5). Next, we provide several properties for the corresponding pseudorandom walk.

Suppose that can be extended into an analytical function . In this case, we can expand Therefore, Since , we see that the expression (5) of holds if and only if the ’s satisfy the equations

Proposition 1. The numbers , , satisfying (19), are given by In particular, .

Proof. First, we recall that the solution of a Vandermonde system of the form , , is given by with and, for any , In the notation of and that of forthcoming determinants, we adopt the convention that when the index of certain entries in the determinant lies out of the range of , the corresponding column is discarded. That is, for and , the respective determinants read It is well known that, for any , In the particular case where for , we have, for any , that Therefore, the solution simply reads
Now, we see that system (19) is a Vandermonde system with the choices , , and for , . With these settings at hand, we explicitly have
and the result of Proposition 1 ensues.
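The proof thus reduces to the explicit solution of a Vandermonde system through Cramer's formulae. The short sketch below is only a generic numerical illustration of this mechanism (the nodes and the right-hand side are arbitrary, not those of system (19)): it compares the determinant-ratio solution with a direct solve.

    import numpy as np

    nodes = np.array([1.0, 2.0, 3.0, 4.0])            # arbitrary distinct nodes
    b = np.array([1.0, -2.0, 0.5, 3.0])               # arbitrary right-hand side
    V = np.vander(nodes, increasing=True).T           # V[i, j] = nodes[j] ** i (Vandermonde matrix)

    x_direct = np.linalg.solve(V, b)

    # Cramer's formulae: replace the j-th column of V by b and take determinant ratios
    x_cramer = np.array([
        np.linalg.det(np.column_stack([V[:, :j], b, V[:, j + 1:]])) / np.linalg.det(V)
        for j in range(len(nodes))
    ])
    print(np.allclose(x_direct, x_cramer))            # True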

Finally, the value of is obtained as follows: by using the fact that , We find it interesting to compute the cumulative sums of the ’s: for , The last displayed sum is classical and easy to compute by appealing to Pascal’s formula which leads to a telescopic sum: Thus, for , Observe that this sum is nothing but . Next, we compute the total sum of the ’s: by using the fact that , As previously mentioned, there is an interpretation to this sum: this is the total variation of the pseudodistribution of . We can also explicitly determine the generating function of : for any , We sum up below the results we have obtained concerning the pseudodistribution of .

Proposition 2. The pseudodistribution of is determined, for , by or, equivalently, by The total variation of the pseudodistribution of is given by The generating function of is given, for any , by In particular, the Fourier transform of admits the following expression: for any , by

Remark 3. For , we have ; that is, the pseudorandom walk does not stay at its current location. If , it can be easily seen, by using the identity , that . On the other hand, for any , it is clear that . In Table 1 and Figures 1 and 2, we provide some numerical values and (rescaled) profiles of the pseudodistribution of for and and several values of .

In the sequel, we will use the total variation of as an upper bound which we call : Set for any . We notice that and, more precisely, Let us denote this bound by : In view of (40) and (42), since , we see that .

Proposition 4. The pseudodistribution of is given, for any , by Actually, the foregoing sum is taken over the such that . We also have that

Proof. By the independence of the ’s which have the same pseudoprobability distribution, we plainly have that Hence, by inverse Fourier transform, we extract that By writing , we get for the integral lying in (47) that By plugging (48) into (47), we derive (43). Next, we write, for , that If , then the term in sum (49) corresponding to vanishes and The second sum in the foregoing equality is easy to compute: If , then the term in sum (49) corresponding to is and By using the convention that if , we see that the second sum above also coincides with (51). Formula (44) ensues in both cases.
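Proposition 4 can also be checked numerically: the signed distribution of S_n is the n-fold convolution of the one-step kernel, and this must agree with the inverse Fourier integral used in the proof. The sketch below repeats the pseudo_probabilities helper from the introductory sketch, so the same sign-convention assumption applies.

    import numpy as np

    def pseudo_probabilities(N, c):                   # same assumption-laden helper as in Section 1
        kernel = np.array([1.0])
        for _ in range(N):
            kernel = np.convolve(kernel, [1.0, -2.0, 1.0])
        kernel *= (-1.0) ** (N - 1) * c
        kernel[N] += 1.0
        return kernel

    # Signed distribution of S_n = U_1 + ... + U_n by repeated convolution
    def distribution_of_Sn(N, c, n):
        p = pseudo_probabilities(N, c)
        dist = np.array([1.0])                        # S_0 = 0
        for _ in range(n):
            dist = np.convolve(dist, p)
        return dist                                   # entry n*N + k holds P{S_n = k}

    # P{S_n = k} via the inverse Fourier integral (1/2pi) * int over (-pi, pi)
    def distribution_by_fourier(N, c, n, k):
        p = pseudo_probabilities(N, c)
        theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
        phi = sum(p[N + j] * np.exp(1j * j * theta) for j in range(-N, N + 1))
        return (phi ** n * np.exp(-1j * k * theta)).mean().real

    N, c, n, k = 2, 0.05, 4, 3
    print(distribution_of_Sn(N, c, n)[n * N + k], distribution_by_fourier(N, c, n, k))   # the two values agree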

Proposition 5. The upper bound below holds true: for any positive integer and any integer , Assume that . The asymptotics below holds true: for any ,

Proof. Let us introduce the usual norms of any suitable function :
and recall the elementary inequalities .
It is clear from (46) that, for any integer , This proves (53). Next, by (46), since , we have, for any , that The assumption entails that for any . We see that on , and on for any . Hence, Now, choose for a positive . We have that
which clearly entails, for large enough , . Thus, if , which proves (54).
If , . In this case, the same holds true upon splitting the integral into .

Remark 6. A better estimate for can be obtained in the same way: Nevertheless, we will not use it. We also have the following inequality for the total variation of :

Proposition 7. For any bounded function defined on ,

Proof. Recall that we set for any . We extend these settings by putting for . We have that The foregoing sum can be easily evaluated as follows: which proves (62).

2.2. Generating Function of

Let us introduce the generating functions, defined for complex numbers , , by We first study the problem of convergence of the foregoing series. We start from If and , then If we choose , such that , and (which is equivalent to , or ), then the double sum defining the function is absolutely summable. If , then

If we choose such that , then the same conclusion holds.

Now, we have that and, thanks to (38), we can state the following result.

Proposition 8. The double generating function of the , , , is given, for any complex numbers , such that , by In particular, for any and such that ,

On the other hand, By substituting in the foregoing equality, we get the Fourier series of the function : from which we extract the sequence of the coefficients . Indeed, since , we have that and where is the circle of radius 1 centered at the origin and counterclockwise oriented. Then, referring to (70), we obtain, for any satisfying , that where is the polynomial given by

We are looking for the roots of which lie inside the circle . For this, we introduce the th roots of : for ; .

From now on, in order to simplify the expression of the roots of , we make the assumption that is a real number lying in (and then ). The roots of are those of the equations , , where They can be written as with

We notice that . Because of the last coefficient in the polynomial , it is clear that the roots and are inverse: .

Let us check that for any . Straightforward computations yield that where Since , checking that is equivalent to checking that ; that is, . If , we have , and then

If (which happens only when is odd and ), , and then .

The above discussion ensures that the roots we are looking for (i.e., those lying inside ) are , ; we discard the ’s.
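Numerically, the N roots of interest can be produced directly from the denominator of the generating function: the sketch below assumes that the relevant polynomial is u^N (1 - z Σ_j p_j u^j), viewed as a polynomial of degree 2N in u; this identification, as well as the pseudo_probabilities helper (same assumptions as in the earlier sketches), should be regarded as illustrative only.

    import numpy as np

    def pseudo_probabilities(N, c):                   # same assumption-laden helper as before
        kernel = np.array([1.0])
        for _ in range(N):
            kernel = np.convolve(kernel, [1.0, -2.0, 1.0])
        kernel *= (-1.0) ** (N - 1) * c
        kernel[N] += 1.0
        return kernel

    # Roots of u**N - z * sum_j p_j * u**(N + j) lying inside the unit circle
    def roots_inside_unit_circle(N, c, z):
        p = pseudo_probabilities(N, c)
        coeffs = -z * p.astype(complex)               # ascending powers u**0, ..., u**(2N)
        coeffs[N] += 1.0                              # the extra u**N term
        roots = np.roots(coeffs[::-1])                # np.roots expects descending powers
        return np.sort_complex(roots[np.abs(roots) < 1.0])

    print(roots_inside_unit_circle(N=2, c=0.05, z=0.9))
    # two roots inside the unit circle for these parameters, inverse to the two lying outside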

Remark 9. We notice that and then where we set . The , , are the th roots of with positive real part: and . As a result, we derive the asymptotics, which will be used later on,

Example 10. For , the roots are explicitly given by with If , it can be simplified into For , the roots are explicitly given by with For , the roots are explicitly given by with
Now, can be evaluated by the residue theorem. Suppose first that (then ) so that is not a pole in the integral defining : The foregoing representation of is valid a priori for any . Actually, in view of the expressions of and , we can see that (93) defines an analytical function in the interval . Since is a power series, by analytical continuation, equality (93) holds true for any . Moreover, by symmetry, we have that for . We display this result in the theorem below.

Theorem 11. For any , the generating function of the , , is given, for any , by

Remark 12. Another proof of Theorem 11 consists in expanding the rational fraction into partial fractions. We find it interesting to outline the main steps of this method. We can write that with We next expand the partial fractions and into power series as follows. We have checked that for any . Now, if for any , from which (94) can be easily extracted.

2.3. Limiting Pseudoprocess

In this section, by pseudoprocess it is meant a continuous-time process driven by a signed measure. Actually, this object is not properly defined on all continuous times but only on dyadic times , . A proper definition consists in seeing it as the limit of a step process associated with the observations of the pseudoprocess on the dyadic times. We refer the reader to [10, 18] for precise details which are cumbersome to reproduce here.

Below, we give an ad hoc definition for the convergence of a family of pseudoprocesses towards a pseudoprocess .

Definition 13. Let be a family of pseudoprocesses and a pseudoprocess. We say that if and only if

This is the weak convergence of the finite-dimensional projections of the family of pseudoprocesses.

In this part, we choose for the family the continuous-time pseudoprocesses defined, for any , by where stands for the usual floor function. The quantity takes its values on the discrete set . Roughly speaking, we normalize the pseudorandom walk on the time space grid . Let be the pseudo-Brownian motion. It is characterized by the following property: for any , any such that and any , We refer to [10, 18] for a proper definition of pseudo-Brownian motion, and to references therein for interesting properties of this pseudoprocess.

Theorem 14. Suppose that . The following convergence holds:

Proof. (i) We begin by computing the Laplace-Fourier transform of . By definition of , we have that and then By (71), we have that
Actually, equality (104) is valid for such that ; that is, . Since is assumed not to be greater than , by (42), we have that and (104) is valid for any .
Now, by using the elementary asymptotics and , we obtain that As a result, for any , from which, together with (101), we deduce that Notice that the Laplace-Fourier transform of takes the simple form
(ii) In the same way, we compute the Laplace-Fourier transform of which will be used later on. We have . Then
As for (107), we immediately extract the following limit:
(iii) We now compute the joint Fourier transform of for two times , such that . Using the elementary fact that , we observe that Then, we get, for , that By (107) and (110), we obtain the following limit: which yields that
(iv) Finally, we can easily extend the foregoing limiting result by induction as follows: for , and for any times such that ,
The proof of Theorem 14 is complete.
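The convergence stated in Theorem 14 can be watched numerically. The sketch below assumes the one-step Fourier transform 1 - c (2 sin(θ/2))^{2N} (which follows from the generator identification used in the earlier sketches) and the normalization with spatial step ε and temporal step ε^{2N}; both identifications, as well as the limiting transform exp(-c t θ^{2N}), are assumptions of the sketch rather than quotations of the paper's formulas.

    import numpy as np

    # Assumed Fourier transform of one step U: 1 - c * (2 * sin(theta / 2))**(2N)
    def one_step_fourier(N, c, theta):
        return 1.0 - c * (2.0 * np.sin(theta / 2.0)) ** (2 * N)

    # Fourier transform of the normalized value eps * S_{floor(t / eps**(2N))} at frequency theta
    def rescaled_fourier(N, c, t, theta, eps):
        n = int(np.floor(t / eps ** (2 * N)))
        return one_step_fourier(N, c, eps * theta) ** n

    N, c, t, theta = 2, 0.05, 1.0, 1.3
    for eps in (0.5, 0.2, 0.1, 0.05):
        print(eps, rescaled_fourier(N, c, t, theta, eps))
    print('limit:', np.exp(-c * t * theta ** (2 * N)))    # assumed Fourier transform of X(t)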

We find it interesting to compute in a similar way the -potential of the pseudoprocess . By definition of , we have, for any such that , . Thus,

Interchanging the two sums in the above computations is justified by the fact that the series is absolutely convergent because of the condition . Indeed, by (53), for any , .

Put . This yields that Suppose, for example, that . Then, where stands for the usual ceiling function. By using (85), we deduce that which implies that Therefore, The case can be treated similarly. We have obtained the following result.

Proposition 15. The -potential of the pseudoprocess is given by

3. Part II—First Overshooting Time of a Single Threshold

3.1. On the Pseudodistribution of

Let . In this section, we explicitly compute the generating function of . Set, for , We are able to provide an explicit expression of . Before tackling this problem, we need an a priori estimate for . By (62), we immediately derive that . Hence, the power series defining absolutely converges for .

3.1.1. Joint Pseudodistribution of

Theorem 16. The pseudodistribution of is characterized by the identity, valid for any and any , where and for , ,

Proof. Pick an integer . If , then an overshoot of the threshold occurs before time : . This remark and the independence of the increments of the pseudorandom walk entail that Since the series defining and absolutely converge, respectively, for and , and since , we can apply the generating function to the convolution equality (126). We get, for , that Using expression (94) of , namely, for , where , we obtain that Recalling that and setting , system (128) reads , . When limiting the range of to the set , this becomes a homogeneous Vandermonde system whose solution is trivial: , . Thus, we get the following Vandermonde system: System (129) can be explicitly solved. In order to simplify the settings, we will omit the variable in the sequel of the proof. It is convenient to rewrite (129) as Cramer’s formulae yield where and, for any , This last determinant can be expanded as with, for , In fact, the quantity is the coefficient of in the polynomial which is nothing but , the value of which is Using the elementary expansion , we obtain by identification that Plugging this expression into (131), we then derive for representation (124) which is valid at least for . Finally, we observe that (124) defines an analytical function in and that is a power series. Thus, by analytical continuation, (124) holds true for any .

Example 17. For , the settings of Theorem 16 read . Then, formula (124) reads where is given in Example 10. Of course, in this case, the condition is redundant since we are dealing with an ordinary random walk with jumps of at most one unit. When , this is the classical symmetric random walk and (124) recovers the best-known formula in random walk theory: For , the settings of Theorem 16 read where and are given in Example 10 and (124) reads

Remark 18. We have a similar expression related to below. The system analogous to (129) reads where . The solution is given by where and, for , ,
The double generating function of defined by admits an interesting representation by means of Lagrange interpolating polynomials that we display in the theorem below.

Theorem 19. The double generating function of is given, for any and , by where are the Lagrange interpolating polynomials with respect to the variable such that .

Proof. By (131) and by omitting the variable as previously mentioned, we have that It is clear that the quantity , which is explicitly given by defines a polynomial of the variable of degree which vanishes at and equals at . Hence, by putting back the variable , it coincides with the Lagrange polynomial and formula (147) immediately ensues.
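The Lagrange interpolating polynomials entering Theorem 19 can be generated symbolically. The sketch below builds, for arbitrary distinct nodes (not the roots of the paper), the polynomials L_j characterized by L_j(x_i) = 1 if i = j and 0 otherwise.

    import sympy as sp

    x = sp.symbols('x')

    # Lagrange basis polynomials: L_j(nodes[i]) = 1 if i == j, else 0
    def lagrange_basis(nodes):
        basis = []
        for j, xj in enumerate(nodes):
            L = sp.prod([(x - xi) / (xj - xi) for i, xi in enumerate(nodes) if i != j])
            basis.append(sp.expand(L))
        return basis

    nodes = [sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(1, 5)]   # arbitrary distinct nodes
    for L in lagrange_basis(nodes):
        print(L, [L.subs(x, xi) for xi in nodes])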

Example 20. For , (147) reads This is in good agreement with the formulae of Example 17. We retrieve a result of [21].

3.1.2. Pseudodistribution of

In order to derive the pseudodistribution of which is characterized by the numbers , , we solve the system obtained by taking the limit in (129) as .

Lemma 21. The following system holds:

Proof. By (85), we have the expansion , where for any . Putting this into (129), we get that that is, Set Then, equality (154) reads
This is a Vandermonde system, the solution of which is given by , , where Since, by (85), for any , we have that
and second,
which implies, for , that
Therefore, . On the other hand, for , referring to the definition of , we can see that the quantity can be expressed as a linear combination of the ’s plus a constant. Hence, the limit exists and, by appealing to a Tauberian theorem, it coincides with . This finishes the proof of (152).

Theorem 22. The pseudodistribution of is characterized by the following pseudo-probabilities: for any , Moreover, .

Proof. We explicitly solve system (152) rewritten as The matrix of the system is which admits as an inverse, with the convention that if . The solution of the system is given, for , by This proves (161). Now, by summing the , , given by (161), we obtain that Writing , we see that
Hence . The proof of Theorem 22 is finished.
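The matrix inversion used in this proof is of binomial (Pascal) type. The specific matrix appearing in the proof is not reproduced here; as a generic, purely illustrative check of this kind of inversion, the upper-triangular Pascal matrix with entries C(j, i) has the signed Pascal matrix with entries (-1)^(i+j) C(j, i) as its inverse.

    import numpy as np
    from math import comb

    def pascal_upper(n):
        return np.array([[comb(j, i) for j in range(n)] for i in range(n)], dtype=float)

    def pascal_upper_signed(n):
        return np.array([[(-1) ** (i + j) * comb(j, i) for j in range(n)] for i in range(n)], dtype=float)

    n = 6
    print(np.allclose(pascal_upper(n) @ pascal_upper_signed(n), np.eye(n)))   # True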

In the sequel, when considering , we will omit the condition .

Example 23. Let us have a look at the particular values 1, 2, 3, 4 of .
(i) Case . Evidently, in this case and then
This is the case of the ordinary random walk!
(ii) Case . In this case the pseudorandom variables , , have two-valued upward jumps. Then the overshooting place must be either or : . We have that
Of course, we immediately see that .
(iii) Case . In this case and
We can easily check that .
(iv) Case . In this case and We can easily check that .

3.1.3. Pseudomoments of

In the sequel, we use the notation for any and any and . Of course, and if . We also use the conventions for any negative integer and if .

In this section, we compute several functionals related to the pseudomoments of . More precisely, we provide formulae for (Theorem 25), (Corollary 26), , and (Theorem 27).

Putting the elementary identity into the equality we get the following integral representation of .

Theorem 24. For any function defined on ,

Theorem 25. For any integers and , the factorial pseudo-moment of of order is given by If , we simply have that

Proof. By (171), we have that Next, by observing that , we obtain that Applying Leibniz rule to (175), we see that Therefore,
Finally, plugging (177) into (175) and (174) yields (172).
Assume now that and . If , we can write in (172) that Then, Putting (179) into (172) yields (173). If (which requires that ), in (172), we write instead that Then,
Putting (181) into (172) yields (173) in this case too.
Assume finally that and . We write in (172) that Then
Putting (183) into (172) yields (173).

By choosing in Theorem 25, we derive that We immediately obtain the following particular result which will be used in Theorem 28.

Corollary 26. The factorial pseudomoments of are given by

The above identity can be rewritten, if , as

Moreover, since , it is clear that which immediately entails that for any ; then for as stated in Corollary 26.

By choosing in Theorem 25, we plainly extract that for . Moreover, as previously mentioned, for any ; then for . Actually, we can compute the factorial pseudomoments of , , for . The formula of Theorem 25 seems to be intractable, so we provide another way of evaluating them.

Theorem 27. The factorial pseudomoments of are given by Moreover, for , the pseudo-moment of of order vanishes:

Proof. We focus on the case where . We have that
The intermediate sum lying in the last displayed equality can be evaluated as follows: by observing that and appealing to Leibniz rule, we obtain that Consequently, This is the result announced in Theorem 27 when .
Next, concerning the pseudomoments of , we appeal to an elementary argument of linear algebra: the family (recall that ) is a basis of the space of polynomials of degree not greater than . So, can be written as a linear combination of . Then can be written as a linear combination of the factorial pseudomoments of of order between and . The latter vanish for . As a result, .
The same argument ensures the equalities , which is equal to , and which vanishes. Each of them yields the value of .
The proof of Theorem 27 is completed.

3.2. Link with the High-Order Finite-Difference Operator

Set for any and for any . Set also . The quantity is the iterated forward finite-difference operator given by Conversely, can be expressed by means of , , according to We have the following expression for any functional of the pseudorandom variable .

Theorem 28. One has, for any function defined on , that

Proof. By (193), we see that which immediately yields (194) thanks to (186).
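The iterated forward finite-difference operator and the converse expansion invoked here are the classical pair Δ^k f(a) = Σ_{i=0}^{k} (-1)^{k-i} C(k, i) f(a+i) and f(a+j) = Σ_{k=0}^{j} C(j, k) Δ^k f(a) (Newton's forward-difference formula); whether these are exactly the formulas abbreviated above is an assumption of the following sketch.

    from math import comb

    # Iterated forward difference: Delta^k f(a) = sum_i (-1)**(k - i) * C(k, i) * f(a + i)
    def forward_difference(f, a, k):
        return sum((-1) ** (k - i) * comb(k, i) * f(a + i) for i in range(k + 1))

    # Newton's forward-difference formula: f(a + j) = sum_k C(j, k) * Delta^k f(a)
    def newton_reconstruction(f, a, j):
        return sum(comb(j, k) * forward_difference(f, a, k) for k in range(j + 1))

    f = lambda x: x ** 3 - 2 * x + 1                  # any function on the integers
    a, j = 5, 4
    print(f(a + j), newton_reconstruction(f, a, j))   # the two values coincide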

Corollary 29. The generating function of is given by

Proof. Let us apply Theorem 28 to the function for which we plainly have . This immediately yields (196).

Remark 30. A direct computation with (161) yields the alternative representation:
Of special interest is the case when the starting point of the pseudorandom walk is any point . By translating into and the function into the shifted function in formula (194), we get that Thus, we obtain the following result.

Theorem 31. One has, for any function defined on , that with and, for , The , , are Newton interpolating polynomials. They are of degree not greater than and characterized, for any , by

Proof. Coming back to the proof of Theorem 28 and appealing to Theorem 22, we write that where, for any , The expression defines a polynomial of the variable of degree , so is a polynomial of degree not greater than . It is obvious that for . On the other hand, . By putting this into (202), we get that Next, we obtain, for any , that
The proof of Theorem 31 is finished.

We complete this paragraph by stating a strong pseudo-Markov property related to time .

Theorem 32. One has, for any function defined on and any , that In (206), the operator acts on the variable .

Proof. We denote by the pseudoprobability associated with the pseudoexpectation . Actually, it represents the pseudoprobability related to the pseudorandom walk started at point at time . We have, by independence of the ’s, that Hence, by setting , we have obtained that which proves (206) thanks to (199).

Example 33. Below, we display the form of (206) for the particular values 1, 2, 3 of .
(i) For , (206) reads which is of course trivial! This is the strong Markov property for the ordinary random walk.
(ii) For , (206) reads
(iii) For , (206) reads

3.3. Joint Pseudodistribution of

Below, we give an ad hoc definition for the convergence of a family of exit times.

Definition 34. Let be a family of pseudoprocesses which converges towards a pseudoprocess when in the sense of Definition 13. Let be a subset of and set , and , .
We say that if and only if We say that if and only if
As in Section 2.3, we choose for the family the pseudoprocesses defined, for any , by and for the pseudoprocess the pseudo-Brownian motion. For , we choose the interval so that , and , . Set where is the usual ceiling function. We have and . Recall the setting , .

Theorem 35. Assume that . The following convergence holds: where, for any and any ,

Proof. We already pointed out that the assumption entails that . Therefore, (147) holds for , that is, for . So, by (147), we have, for , that Recall that we previously set . Thanks to asymptotics (119), we get that Thus, Finally, we can easily conclude with the help of the elementary limits that

Theorem 36. The following convergence holds: where, for any , This is the Fourier transform of the pseudorandom variable . Moreover,

Proof. By (194), we have that We can easily conclude by using the elementary asymptotics that

Corollary 37. The pseudodistribution of is given by

This formula should be understood as follows: for any -times differentiable function , by omitting the condition , We retrieve a result of [11] and, in the case , a pioneering result of [18].
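The way the derivatives of the Dirac distribution show up in the limit can also be glimpsed numerically: the iterated forward differences of a smooth test function, rescaled by the k-th power of the step, converge to its successive derivatives as the step tends to zero. The step, the test function, and the order used below are arbitrary illustrative choices.

    import numpy as np
    from math import comb

    # Iterated forward difference with step h: sum_i (-1)**(k - i) * C(k, i) * f(a + i * h)
    def forward_difference_step(f, a, k, h):
        return sum((-1) ** (k - i) * comb(k, i) * f(a + i * h) for i in range(k + 1))

    f, a, k = np.sin, 0.7, 3
    for h in (0.1, 0.01, 0.001):
        print(h, forward_difference_step(f, a, k, h) / h ** k)
    print('third derivative of sin at a:', -np.cos(a))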

4. Part III—First Exit Time from a Bounded Interval

4.1. On the Pseudodistribution of

Let , be two integers such that and let . In this section, we explicitly compute the generating function of . Set, for ,

We are able to provide an explicit expression of . As in Section 3.1, due to (62), we have the following a priori estimate: . As a byproduct, the power series defining absolutely converges for .

4.1.1. Joint Pseudodistribution of

Theorem 38. The pseudodistribution of is characterized by the identity, valid for any , where and, if , is the determinant and, if , is the determinant

Proof. Pick an integer such that or . If , then an exit of the interval occurs before time : . This remark and the independence of the increments of the pseudorandom walk entail that Thanks to the absolute convergence of the series defining and for and , respectively, we can apply the generating function to equality (235). We get, for , that Using expression (94) of , namely, , we get, for (recall that ), that and, for , that When limiting the range of to the set in (237) and to the set in (238), we see that (237) and (238) are homogeneous Vandermonde systems whose solutions are trivial; that is, the terms within parentheses in (237) and (238) vanish. Thus, we get the two systems below: It will be convenient to relabel the ’s and ’s, , as and ; note that for any and . By using the relabeling , , we obtain the two equivalent following systems of equations and unknowns, for the first one, for the second one: Systems (240) and (241) are “lacunary” Vandermonde systems (some powers of are missing). For instance, let us rewrite system (240) as
Cramer’s formulae immediately yield (231) at least for . By analyticity of the ’s on , it is easily seen that (231) holds true for . Systems (240) and (241) will be used in Lemma 42.

A method for computing the determinants exhibited in Theorem 38 and solving system (242) is proposed in Appendix A.1. In particular, we can deduce from Proposition A.3 an alternative representation of which can be seen as the analogue of (124). Set and, for , , for any integer such that or and, for , Set also Then, applying Proposition A.3 with the choices and leads, for any , to

The double generating function defined by admits an interesting representation by means of interpolating polynomials that we display in the following theorem.

Theorem 39. The double generating function of is given, for and , by where, for The functions are interpolating polynomials satisfying and can be expressed as where are some polynomials of degree .

Proof. By (231), we have that In order to simplify the text, we omit the variable . We expand the determinant , with respect to its th column:
where , , is the determinant
which plainly coincides with
Therefore, we obtain that
Next, we can see that the foregoing sum within parentheses is the expansion of the determinant below with respect to its th row (by putting back the variable ):
As a result, by setting
we obtain that
Similarly, we could check that
where is the determinant
By adding (258) and (259) and setting
we obtain that
We observe that the polynomials with respect to the variable are of degree and satisfy the equalities for all and . Hence they can be expressed by means of the Lagrange fundamental polynomials as displayed in Theorem 39.

Example 40. For , (248) reads where Since and , and also , from which we extract the well-known formulae related to the case of an ordinary random walk: In particular, if (case of the classical random walk), by Example 10, we have in the above formulae that For , (248) reads where All the polynomials , , have the form .

Remark 41. By expanding the determinant with respect to its th row, we obtain an expansion for the polynomial as a linear combination of , that is, an expansion of the form Hence, from which we extract, for any , that Actually, the foregoing sum comes from the quotient given by (231) by expanding the determinant with respect to the th column or th column according to whether the number satisfies or .

4.1.2. Pseudodistribution of

In order to derive the pseudodistribution of which is characterized by the numbers , , we solve the systems obtained by taking the limit in (262) as .

Lemma 42. The following identities hold: for ,

Proof. By (85), we have the expansion , where for any . Actually such asymptotics holds true for any because of the equality for . We put this into systems (240) and (241). For doing this, it is convenient to rewrite the latter as We obtain that System (276) writes Set Then, equality (278) reads This is a Vandermonde system which can be solved as in the proof of Lemma 21 upon changing into . We can check that , which entails that Similarly, using (277), we can prove that
Actually, we will only use (281) and (282) restricted to which immediately yields system (273) and (274).

Now, we state one of the most important results of this work. We solve the famous problem of the “gambler’s ruin” in the context of the pseudorandom walk.

Theorem 43. The pseudodistribution of is given, for , by where Moreover, and where

Proof. We have to solve system (273) and (274). For (273), for instance, the principal matrix and the right-hand side matrix are and the matrix form of the solution is given by
The computation of this product being quite tedious, we postpone it to Appendix A.3. The result is given by Theorem A.8:
The entries of this matrix provide the pseudo-probabilities , , which are exhibited in Theorem 43. The analogous formula for holds true in the same way.
Next, by observing that and that , we have that Noticing that and , we get that The computations can be pursued by performing the change of variables in the above integral: Putting (293) into (291) yields the expression of displayed in Theorem 43. The similar expression for holds true. Finally, The foregoing integral is quite elementary: which entails that .
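The “ruin pseudoprobabilities” can be cross-checked numerically for small parameters by solving the linear system h(x) = Σ_j p_j h(x+j) at the interior points, with h equal to 1 on the upper overshoot cluster and 0 on the lower one. The interior {a+1, ..., b-1}, the clusters {a-N+1, ..., a} and {b, ..., b+N-1}, and the pseudo_probabilities helper all repeat the assumptions of the earlier sketches; for N = 1 one recovers the classical gambler's-ruin values (x - a)/(b - a).

    import numpy as np

    def pseudo_probabilities(N, c):                   # same assumption-laden helper as before
        kernel = np.array([1.0])
        for _ in range(N):
            kernel = np.convolve(kernel, [1.0, -2.0, 1.0])
        kernel *= (-1.0) ** (N - 1) * c
        kernel[N] += 1.0
        return kernel

    # Pseudoprobability, as a function of the starting point, of exiting through the upper cluster
    def upper_exit_pseudoprobabilities(N, c, a, b):
        p = pseudo_probabilities(N, c)
        interior = list(range(a + 1, b))
        idx = {x: i for i, x in enumerate(interior)}
        A = np.eye(len(interior))
        rhs = np.zeros(len(interior))
        for i, x in enumerate(interior):
            for j in range(-N, N + 1):
                y = x + j
                if y in idx:
                    A[i, idx[y]] -= p[N + j]          # h(x) minus the contribution of interior neighbours
                elif y >= b:
                    rhs[i] += p[N + j]                # exit through the upper cluster, where h = 1
        return dict(zip(interior, np.linalg.solve(A, rhs)))

    print(upper_exit_pseudoprobabilities(N=1, c=0.3, a=0, b=5))
    # the values are (x - a)/(b - a) = 0.2, 0.4, 0.6, 0.8: the classical gambler's ruin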

In the sequel, when considering , we will omit the condition .

Example 44. Let us have a look at the particular values of .
(i) Case . In this case and We retrieve one of the most well-known and important results for the ordinary random walk: this is the famous problem of the gambler’s ruin!
(ii) Case  . In this case, the pseudorandom variables , , have two-valued upward jumps and two-valued downward jumps. Hence, the exit place must be either , , or : . We have that We can easily check that as well as .
(iii) Case . In this case and

4.1.3. Pseudomoments of

Let us recall the notation we previously introduced in Section 3.1.3: for any and any and , as well as the conventions for any negative integer and if .

In this section, we compute several functionals related to the pseudomoments of . Namely, we provide formulae for (Theorem 46), (Corollary 47), and (Theorem 48). This order of presentation may seem surprising; actually, we have been able to carry out the calculations by following this particular order.

Putting the identities and into the equality we immediately get the following integral representations for , and the analogous ones hold true for .

Theorem 45. For any function defined on ,

In view of (300) and (302) and in order to compute the pseudomoments of , it is convenient to introduce the function defined by for any integers and such that . In particular, . We immediately see that, by choosing in Theorem 45, quantities (300) and (302) are opposite. As a byproduct, . More generally, we have the results below.

Theorem 46. For any positive integer , In particular, for any ,

Proof. By (300), we get that By noticing that for and for , we obtain that Hence, if ,
and we arrive at (304). Moreover, if and , we have . Then, it is clear that and (304) still holds in this case.
On the other hand, by Theorem 43, we get that For , we can write that Then, which proves (305). For , we write instead that Therefore, If , the above derivative vanishes since is a root of multiplicity of the polynomial . Finally, for , we appeal to Leibniz rule for evaluating the derivative of interest: This proves (305) in this case.

Corollary 47. For , the pseudo-moment of of order vanishes: Moreover,

Proof. As in the proof of Theorem 27, we appeal to the following argument: the polynomial is a linear combination of . Then can be written as a linear combination of which vanish when . Thus, . The same argument entails that
which proves (317).

In the following theorem, we provide an integral representation for certain factorial pseudomoments of and which will be used in the next section.

Theorem 48. For any integer , The above identities can be rewritten as

Proof. By (301), we have that The sum lying in the above integral can be easily calculated: Hence, Performing the change of variables in the foregoing integral immediately yields (319). Formula (320) can be deduced from (303) exactly in the same way.

4.2. Link with High-Order Finite-Difference Equations

Set and for any and and for any . Set also . The quantities and are the iterated forward and backward finite-difference operators given by Conversely, and can be expressed by means of , , according to We have the following expression for any functional of the pseudorandom variable .

Theorem 49. One has, for any function defined on , that with

Proof. By (327), we see that which immediately yields (328) thanks to (321) and (322).

Corollary 50. The generating function of is given by

Proof. Let us apply Theorem 49 to the function for which we plainly have and . This immediately yields (331).

Of special interest is the case when the starting point of the pseudorandom walk is some point . By translating , into , and the function into the shifted function , we have that More precisely, we have the following result.

Theorem 51. One has, for any function defined on , that where and , , are polynomials of degree not greater than characterized, for any , by

Proof. By setting and , (330) immediately yields (333). By observing that , it is enough to work with, for example, . Coming back to the proof of Theorem 49 and appealing to Theorem 43, we write that where, for any ,
The expression defines a polynomial of the variable of degree , so is a polynomial of degree not greater than .
It is obvious that for . Then, which implies that for any . Now, let us evaluate for . We plainly have that for and that By putting this into (335), we get that Next, we obtain, for any , that The proof of Theorem 51 is finished.

Example 52. In the case where , (333) reads as with
Below, we state a strong pseudo-Markov property related to time .

Theorem 53. One has, for any function defined on and any , that In (342), the operators and act on the variables and .

Proof. Formula (342) can be proved exactly in the same way as (206): by setting , we have that . This proves (342) thanks to (333).

Example 54. Below, we display the form of (342) for the particular values of .
(i) For , (342) reads as which is of course well known! This is the strong Markov property for the ordinary random walk.
(ii) For , (342) reads as
Now, we consider the discrete Laplacian . It is explicitly defined by . Let us introduce the iterated Laplacian . We compute for any function and any : By using the elementary identity , we get that As a result, we obtain the expression of announced in the introduction, namely,

Example 55. Fix a nonnegative integer and put for any . It is plain that, if , and if , . Therefore, if , and if , . By using a linear algebra argument, we deduce that for any polynomial of degree not greater than . As a byproduct,

Now, the main link between time and finite-difference equations is the following one.

Theorem 56. Let be a function defined on . The function defined on by is the solution to the discrete Lauricella problem

Proof. By (333) we write that With this representation at hand, identities (334) and (348) immediately yield equations (349).
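As an illustration of this discrete Lauricella-type boundary value problem, one can solve Δ^N u = 0 at the interior points with u prescribed on the two clusters of N points flanking the interval; the boundary clusters {a-N+1, ..., a} and {b, ..., b+N-1} used below follow the overshoot description of the previous sections and, like the rest of the sketch, are an assumption made here for illustration. For N = 1 this is the ordinary discrete Dirichlet problem, whose solution with data 0 and 1 is the linear function (x - a)/(b - a).

    import numpy as np

    # Solve Delta^N u = 0 on {a+1, ..., b-1}, with u = F on {a-N+1, ..., a} and {b, ..., b+N-1}
    def solve_discrete_lauricella(N, a, b, F):
        kernel = np.array([1.0])                      # coefficients of Delta^N on {-N, ..., N}
        for _ in range(N):
            kernel = np.convolve(kernel, [1.0, -2.0, 1.0])
        interior = list(range(a + 1, b))
        idx = {x: i for i, x in enumerate(interior)}
        A = np.zeros((len(interior), len(interior)))
        rhs = np.zeros(len(interior))
        for i, x in enumerate(interior):
            for j in range(-N, N + 1):
                y = x + j
                if y in idx:
                    A[i, idx[y]] += kernel[N + j]
                else:
                    rhs[i] -= kernel[N + j] * F(y)    # y lies in one of the two clusters
        return dict(zip(interior, np.linalg.solve(A, rhs)))

    print(solve_discrete_lauricella(1, 0, 5, lambda y: 1.0 if y >= 5 else 0.0))
    # the linear function x / 5 on {1, 2, 3, 4}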

4.3. Joint Pseudodistribution of

As in Section 3.3, we choose for the family the pseudoprocesses defined, for any , by

and for the pseudoprocess the pseudo-Brownian motion. In Definition 34, we choose for the interval ; then , and , . Set and , where and , respectively stand for the usual floor and ceiling functions. We have and .

Theorem 57. The following convergence holds: where, for any and any , In the foregoing formula, , , and are the respective determinants
In the last two determinants, we have put .

In Theorem 57, we obtain the joint pseudodistribution of characterized by its Laplace-Fourier transform. This is a new result for pseudo-Brownian motion that we will develop in a forthcoming paper [14].

Proof. By Definition 34 and Theorem 39, we have that
Recall that and that the quantities and are expressed by means of the determinant
By replacing the columns labeled as , , by the linear combinations if , and if , the foregoing determinant remains invariant and can be rewritten as
Then, by replacing the ’s by , by and by using the asymptotics and coming from (119), we get that
Similarly, by using the elementary asymptotics and , we obtain that
where denotes the determinant
By putting (358) and (359) into (355), we derive that
It is plain that which finishes the proof of Theorem 57.

Theorem 58. The following convergence holds: where, for any , with Moreover,

Proof. By Definition 34 and by (331), Concerning, for example, the quantity , we have that By performing the change of variables in the above integral and by expanding as we get that By putting the asymptotics into (369), we obtain that Next, using the asymptotics expression (367) admits the following asymptotics: Then, we see that the second limit lying in (366) equals In the same way, it may be seen that the first term of the sum lying in (366) tends to
As a result, we derive (363).
Finally, let us have a look at the pseudoprobability . We have that By using the elementary identity which comes from the equality together with the expansion, for example, for , , we get that As a byproduct, Similarly, and we deduce that The proof of Theorem 58 is finished.

Corollary 59. The pseudodensity of is given by In particular,

This result has been announced in [13] without any proof. We will develop it in a forthcoming paper.

Appendix

A.

A.1. Lacunary Vandermonde Systems

Let us introduce the “lacunary” Vandermonde determinant (of type ): We put and, for , We say “lacunary” because it comes from a genuine Vandermonde determinant where the powers from to are missing. More precisely, the determinant , is extracted from the classical Vandermonde determinant of type by removing the last rows and the st, ,th columns. We decompose into blocks as follows:

By moving the last columns before the previous ones, this determinant can be rewritten asBy appealing to an expansion by blocks of a determinant, it may be seen that , is the cofactor of the “south-east” block of the above determinant. Since the product of, for example, the diagonal terms of this last block is , the determinant is also the coefficient of in . Now, let us expand : with We have that The symbol in the above sum denotes the set of the permutations of the numbers , is the signature of the permutation and is the signature of the permutation mapping into . The product is given by For obtaining the coefficient of , we only keep in the foregoing sum the indices such that , , and , , the indices being all distinct. This gives that

Finally, we can observe that the foregoing sum is nothing but the expansion of the determinant As a result, we obtain the proposition below.

Proposition A.1. The determinant admits the following expression:

Let now be the determinant deduced from by replacing one column by a general column , that is, the determinant given, if , by

and, if , by

We have that where is the determinant given, if , by

and, if , by

In fact, is the coefficient of in . Let us introduce , , for any integer such that or and, for ,

We need to isolate in and . First, we write that

Second, by isolating in according to , we get that the determinant can be rewritten as

By introducing vectors with coordinates (written as a column), this determinant can be rewritten as

By appealing to multilinearity, it is easy to see that Now, let us multiply the sum lying in (A.18) by (A.21):

Recalling the convention that if or , the coefficient of in (A.22) can be written as

In this form, we see that the coefficient of in (A.22) is nothing but the product of by the expansion of the determinant

Proposition A.2. The determinant admits the following expression:

As a consequence of Propositions A.1 and A.2, we get the result below. Set

Proposition A.3. Let be positive integers, let be distinct complex numbers, and let be complex numbers. Set . The solution of the system , , or, more explicitly, is given by

Proof. Cramer’s formulae yield that By using the factorizations provided by Propositions A.1 and A.2, namely, we immediately get (A.28).

A.2. A Combinatoric Identity

Lemma A.4. The following identity holds for any positive integers , , : It can be rewritten as

Proof. Suppose first that . Noticing that we immediately get that We expand the last displayed derivative by using Leibniz rule: Since , we have that if , and, if , which coincides with the announced result. Second, suppose that . Noticing that we get that which coincides with the announced result.

A.3. Some Matrices

Let such that and set with the convention that if . These matrices have been used for solving systems (273) and (274) with the choices and . The aim of this section is to compute the product of the inverse of by , namely, . For this, we use Gaussian elimination. The result is displayed in Theorem A.8. The calculations are quite technical, so we perform them progressively, the intermediate steps being stated in several lemmas (Lemmas A.4, A.5, A.6, and A.7).

Lemma A.5. The matrix can be decomposed into , where the matrices and are given by The regular matrices and are, respectively, upper and lower triangular.

Proof. We begin by detailing the algorithm providing the matrix . Call the columns of , that is, for , Apply to them, except for , the transformation defined, for , by The are the columns of a new matrix . Actually, this transformation corresponds to a matrix multiplication acting on : with
Simple computations show that
Next, apply the second transformation to the columns of except for and , defined, for , by
The are the columns of a new matrix , where Straightforward algebra yields that This method can be recursively extended: apply the th transformation () defined, for , by In particular,
The are the columns of the th matrix with , where is the matrix
and is the matrix
The matrices and can be simply written as
Clever algebra yields that
We will not prove (A.53); we will only check it below in the case .
We progressively arrive at the last transformation which corresponds to :
The are the columns of the last matrix given by , where
Formula (A.53) gives the following expression for that will be checked below:
Hence, by putting and , we see that is a lower triangular matrix and is an upper triangular matrix and we have obtained that .
Finally, we directly check the decomposition . The generic term of is Observing that this term can be rewritten as
The sum can be explicitly evaluated thanks to Lemma A.4. Its value is . Therefore, we can easily get the generic term of , and the proof of Lemma A.5 is finished.

Lemma A.6. The inverse of the matrix is given by

Proof. We simplify the entries of the product The generic term of this matrix is The last sum can be computed as follows: clearly, it vanishes when and it equals when . If , by using Lemma A.4, As a consequence, the entries of the product (A.61) are which proves that the second factor of (A.61) coincides with .

Lemma A.7. The matrix is given by

Proof. The generic term of is By performing the change of index and by using Lemma A.4, the sum in (A.65) is equal to By putting this into (A.65), we see that the generic term of reads which ends the proof of Lemma A.7.

Theorem A.8. The matrix is given by

Proof. Referring to Lemmas A.5 and A.7, we have that The generic term of is The foregoing sum can be easily evaluated as follows:
Putting this into (A.70) yields the matrix displayed in Theorem A.8.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.