Journal of Probability and Statistics
Volume 2011 (2011), Article ID 350382, 21 pages
http://dx.doi.org/10.1155/2011/350382
Research Article

On the One Dimensional Poisson Random Geometric Graph

L. Decreusefond and E. Ferraz

Institut Télécom, Télécom ParisTech, CNRS LTCI, 75634 Paris, France

Received 24 March 2011; Revised 20 July 2011; Accepted 22 July 2011

Academic Editor: Kelvin K. W. Yau

Copyright © 2011 L. Decreusefond and E. Ferraz. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Given a Poisson process on a bounded interval, its random geometric graph is the graph whose vertices are the points of the Poisson process, and edges exist between two points if and only if their distance is less than a fixed given threshold. We compute explicitly the distribution of the number of connected components of this graph. The proof relies on inverting some Laplace transforms.

1. Motivation

As technology advances [1–3], one can expect a wide expansion of so-called sensor networks. Such networks represent the next evolutionary step in building, utilities, industry, home, agriculture, defense, and many other contexts [4].

These networks are built upon a multitude of small and cheap sensors, which are devices with limited transmission capabilities. Each sensor monitors a region around itself by measuring some environmental quantities (e.g., temperature, humidity), detecting intrusion, and so forth, and broadcasts its collected information to other sensors or to a central node. The question of whether information can be shared across the whole network is then of crucial importance. Mathematically speaking, sensors can be abstracted as points in R², R³, or a manifold. The region a sensor monitors is represented by a disk centered at the location of the sensor. In what follows, it is assumed that the broadcast radius, that is, the distance at which a sensor can communicate with another sensor, is equal to the monitoring radius. Two questions are then of interest: can any two sensors communicate using others as hopping relays, and is the whole region covered by the sensors? The recent works of Ghrist and his collaborators [5, 6] show how, in any dimension, algebraic topology can be used to answer these questions. Their method consists in building the so-called simplicial complex associated with the configuration of points and the radius of communication. Simple algebraic computations then yield the Betti numbers: the first one, usually denoted by β_0, is the number of connected components; the second one, β_1, is the number of coverage holes. Thus, we have a satisfactory deployment whenever β_0 = 1 and β_1 = 0. Trying to pursue their work in random settings, we quickly realized that the dimension of the ambient space plays a key role. We therefore began with the analysis of dimension 1, which appeared to be the simplest situation. In this case, there is no need for algebraic topology, so we will not go further into this line of thought even if it was our first motivation.

In dimension 1, the only question of interest is that of connectivity, but it can take different forms. Imagine we are given [0,1] as a domain in which n points {x_1, …, x_n} are drawn. For a radius r, one can wonder whether [0,1] ⊂ ∪_{i=1,…,n} [x_i − r, x_i + r], or one can investigate whether [x_i − r, x_i + r] ∩ [x_{i+1} − r, x_{i+1} + r] ≠ ∅ for all i = 1, …, n−1. The second situation is less restrictive since we do not impose that the boundary of the interval be covered. Depending on the application we have in mind, both questions are sensible. A slightly different but somehow close problem is that of the circle: consider now that the points are dispatched along a circle C_1 of unit perimeter and ask again whether C_1 ⊂ ∪_{i=1,…,n} B(x_i, r), where B(x, r) is the 2-dimensional ball of center x and radius r. Several years ago, this problem was thoroughly analyzed ([7] and references therein) for a fixed number of i.i.d. arcs over the circle. A closed form formula can be given for the probability of coverage as a function of the number of arcs and of the common law of their lengths. Some variations of this problem have been investigated since then; see, for instance, [8]. More recently, in [9], algorithms are devised to determine whether a domain can be protected from intrusion by a “belt” of sensors (namely, a ring or the border of a rectangle). There is no performance analysis in this work, which is focused on algorithmic solutions for this special problem of coverage. Still motivated by applications to sensor networks, [10] considers the situation where sensors are placed in a plane, have a fixed radius of observation, and analyzes the connectivity of the trace of the covered region over a line. Some recent results of Kahle [11, 12] are only loosely linked to ours: the motivation is the same, studying the Betti numbers of some random simplicial complexes, but the results are only asymptotic and valid in dimension greater than 2.

Our main result is the distribution of the number of connected components when sensors are distributed according to a Poisson process on a bounded interval. We could not use the method of [7] since the number of gaps does not determine the connectivity of the domain. For instance, one may have a single gap at the “beginning”, which means that all the points are pairwise within the threshold distance and, thus, that the network is connected; or one may have a single gap in the “middle”, which means that there is a true hole of connectivity.

Actually, our method is very much related to queueing theory. Indeed, clusters, that is, sequences of neighboring points, are the strict analogs of busy periods (see Section 2). As will appear below, our analysis turns out to be that of an M/D/1/1 queue with preemption: when a customer arrives during a service, it preempts the server, and, since there is no buffer, the customer who was in service is removed from the queueing system. This analogy led us to use standard tools of queueing theory: Laplace transforms and renewal processes (see, for instance, [13, 14]). This works perfectly, and, with a bit of calculus, we can compute all the characteristics we are interested in. It is worthwhile to note that a queueing model (namely the M/G/∞ queue) also appears in [10].

The paper is organized as follows: Section 2 presents the model and defines the relevant quantities to be calculated. The calculations and analytical results are presented in Section 3. For our situation, we find results analogous to those of [7]. In Section 4, two other scenarios are considered: the number of incomplete clusters and the number of clusters on a circle. In Section 5, numerical examples are presented and analyzed.

2. Problem Formulation

Let L > 0. We assume that we are given a Poisson process, denoted by N, of intensity λ on [0, L]. Let (X_i, i ≥ 1) be the atoms of N. We thus know that the random variables ΔX_i = X_{i+1} − X_i are i.i.d. and exponentially distributed with parameter λ. We fix ε > 0. Two points, located respectively at x and y, are said to be directly connected whenever |x − y| ≤ ε. For i < j, two points of N, say X_i and X_j, are indirectly connected if X_l and X_{l+1} are directly connected for every l = i, …, j − 1. A set of points pairwise directly or indirectly connected is called a cluster; a complete cluster is a cluster which begins and ends within [0, L]. The connectivity of the whole network is measured by the number of clusters.

The number of points in the interval [0, x] is denoted by N_x = Σ_{n≥1} 1{X_n ≤ x}. The random variable A_i given by

A_i = X_1 if i = 1,  A_i = inf{X_j : X_j > A_{i−1}, X_j − X_{j−1} > ε} if i > 1, (2.1)

represents the beginning of the ith cluster, denoted by C_i. In the same way, the end of this same cluster, E_i, is defined by

E_i = inf{X_j + ε : X_j ≥ A_i, X_{j+1} − X_j > ε}. (2.2)

So the ith cluster, C_i, has a number of points given by N_{E_i} − N_{A_i}. We define the length B_i of C_i as E_i − A_i. The intercluster distance D_i is the distance between the end of C_i and the beginning of C_{i+1}, that is, D_i = A_{i+1} − E_i, and ΔA_i = A_{i+1} − A_i = B_i + D_i is the distance between the first points of the two consecutive clusters C_i and C_{i+1}.
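
These definitions translate directly into a short simulation. The sketch below is our own illustration (the helper names are hypothetical, not from the paper): it draws the atoms on [0, L] via i.i.d. exponential gaps and counts the complete clusters, that is, the maximal runs of points with consecutive gaps at most ε whose ε-extension ends inside [0, L].

```python
import random

def poisson_points(lam, L, rng=random):
    """Sample the atoms of a Poisson process of intensity lam on [0, L]
    by accumulating i.i.d. exponential inter-point gaps."""
    pts, x = [], rng.expovariate(lam)
    while x <= L:
        pts.append(x)
        x += rng.expovariate(lam)
    return pts

def count_complete_clusters(points, L, eps):
    """Count maximal runs of (sorted) points whose consecutive gaps are
    <= eps and whose end E_i = (last point of the run) + eps lies in [0, L]."""
    count, i, n = 0, 0, len(points)
    while i < n:
        j = i
        while j + 1 < n and points[j + 1] - points[j] <= eps:
            j += 1                      # extend the current cluster
        if points[j] + eps <= L:        # complete cluster: ends inside [0, L]
            count += 1
        i = j + 1
    return count
```

For instance, the points {0.1, 0.2, 0.9} with L = 1 and ε = 0.15 form two clusters, of which only the first is complete.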

Remark 2.1. With this set of assumptions and definitions, we can see our problem as an M/D/1/1 preemptive queue; see Figure 1. In this nonconservative system, the service time is deterministic and equal to ε. When a customer arrives during a service, the served customer is removed from the system and replaced by the arriving customer. Within this framework, a cluster corresponds to what is called a busy period, the intercluster distance is an idle time, and B_i + D_i is the length of the ith cycle.

Figure 1: Queueing representation of the model. A down arrow denotes that user 𝑖 starts his service. An up arrow indicates that user 𝑖 leaves the system without having finished the service. A double up arrow illustrates that the service of user 𝑖 finishes. Beginning and end of the 𝑖th busy period, respectively, 𝐴𝑖 and 𝐸𝑖, are also shown.

The number of complete clusters in [0, L] corresponds to the number of connected components β_0(L) of the network (since, in dimension 1, it coincides with the Euler characteristic of the union of intervals; see [5]). The distance between the beginning of the first cluster and the beginning of the (i+1)th one is defined as U_i = Σ_{k=1}^{i} ΔA_k. We also define ΔX_0 = D_0 = X_1. Figure 2 illustrates these definitions.

Figure 2: Definitions of the relevant quantities of the network: distances between points, distances between clusters, cluster lengths, intercluster distances, beginnings and ends of clusters.

For the sake of completeness, we recall the essentials of Markov process theory needed in what follows; for further details we refer, for instance, to [13, 14]. For a process X, (F^X_t, t ≥ 0) denotes the filtration generated by the sample paths of X:

F^X_t = σ{X(s), s ≤ t}. (2.3)

Definition 2.2. A process (X(t), t ≥ 0) with values in a denumerable space E is said to be Markov whenever

E[F(X(t+s)) | F^X_t] = E[F(X(t+s)) | X(t)], (2.4)

for any bounded function F from E to R and any t ≥ 0, s ≥ 0.
Equivalently, a process X is Markov if and only if, given the present (i.e., given X(t)), the past (i.e., the sample path of X before time t) and the future (i.e., the sample path of X after time t) are independent.

Definition 2.3. A random variable τ with values in R⁺ ∪ {+∞} is an X-stopping time whenever, for any t ≥ 0, the event {τ ≤ t} belongs to F^X_t.

The point is that (2.4) still holds when t is replaced by a stopping time τ: given X(τ), the past and the future of X are independent. X is then said to be strong Markov. This property always holds for Markov processes with values in a denumerable space but is not necessarily true for Markov processes with values in an arbitrary space.

From now on, the Markov process under consideration is 𝑁, the Poisson process of intensity 𝜆 over [0,𝐿].

Lemma 2.4. For any i ≥ 1, A_i and E_i are stopping times.

Proof. Let us consider the filtration F^N_t = σ{N_a, a ≤ t}. For i = 1, we have

{A_1 ≤ t} = {X_1 ≤ t} = {N_t ≥ 1} ∈ F^N_t. (2.5)

Thus, A_1 is a stopping time. For A_2, we have

{A_2 > t} = {N_t ≤ 1} ∪ ⋃_{n≥2} ({N_t = n} ∩ {ΔX_j ≤ ε, j = 1, …, n−1}) ∈ F^N_t, (2.6)

so A_2 is also a stopping time. We proceed along the same lines for the other A_i, and likewise for the E_i, to prove that they are stopping times.

Since N is a (strong) Markov process, the next corollary is immediate.

Corollary 2.5. The set {B_i, D_i, i ≥ 1} is a set of independent random variables. Moreover, each D_i is distributed as an exponential random variable with mean 1/λ, and the random variables {B_i, i ≥ 1} are i.i.d.

3. Calculations

Theorem 3.1. The Laplace transform of the distribution of B_i is given by

E[e^{−sB_i}] = (λ + s) e^{−(λ+s)ε} / (s + λ e^{−(λ+s)ε}). (3.1)

Proof. Since ΔX_j is an exponentially distributed random variable,

E[e^{−sΔX_j} 1{ΔX_j ≤ ε}] = ∫_0^ε e^{−st} λ e^{−λt} dt = (λ/(s+λ)) (1 − e^{−(s+λ)ε}). (3.2)

Hence, the Laplace transform of the distribution of B_1 is given by

E[e^{−sB_1}] = Σ_{n=1}^∞ E[e^{−sB_1}, N_{E_1} = n]
 = Σ_{n=1}^∞ E[ e^{−s(Σ_{j=1}^{n−1} ΔX_j + ε)} 1{ΔX_n > ε} Π_{j=1}^{n−1} 1{ΔX_j ≤ ε} ]
 = Σ_{n=1}^∞ ( E[e^{−sΔX_1} 1{ΔX_1 ≤ ε}] )^{n−1} Pr(ΔX_n > ε) e^{−sε}
 = Σ_{n=0}^∞ ( (λ/(s+λ)) (1 − e^{−(s+λ)ε}) )^n e^{−λε} e^{−sε}
 = (λ + s) e^{−λε} e^{−sε} / (s + λ e^{−(λ+s)ε}). (3.3)
Using Corollary 2.5, we have E[e^{−sB_i}] = E[e^{−sB_1}], which concludes the proof.
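
Theorem 3.1 is easy to check numerically. The sketch below is our own illustration, not part of the paper: it samples B_1 from its renewal description (accumulate exponential gaps while they stay below ε, then add the final ε) and compares the empirical Laplace transform at one point s with (3.1). Exact numbers depend on the random seed, so only approximate agreement should be expected.

```python
import math
import random

def sample_cluster_length(lam, eps, rng=random):
    """Sample B_1: the sum of the intra-cluster gaps plus the final eps."""
    b = eps
    while True:
        gap = rng.expovariate(lam)
        if gap > eps:          # a gap larger than eps ends the cluster
            return b
        b += gap

def laplace_B(s, lam, eps):
    """Right-hand side of (3.1)."""
    num = (lam + s) * math.exp(-(lam + s) * eps)
    return num / (s + lam * math.exp(-(lam + s) * eps))

random.seed(42)
lam, eps, s = 1.0, 1.0, 0.5
emp = sum(math.exp(-s * sample_cluster_length(lam, eps))
          for _ in range(20000)) / 20000
# emp should be close to laplace_B(s, lam, eps), which is about 0.463 here
```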

From this result, we can immediately calculate the Laplace transform of the distribution of ΔA_i. Since ΔA_i = B_i + D_i, we have E[e^{−sΔA_i}] = E[e^{−s(B_i + D_i)}], and, using Corollary 2.5,

E[e^{−sΔA_i}] = E[e^{−sB_i}] E[e^{−sD_i}] = λ e^{−(λ+s)ε} / (s + λ e^{−(λ+s)ε}). (3.4)

Corollary 3.2. The Laplace transform of the distribution of U_n, for n ≥ 0, is given by

E[e^{−sU_n}] = λ^n e^{−n(λ+s)ε} / (s + λ e^{−(λ+s)ε})^n. (3.5)

Proof. We use Corollary 2.5 and Theorem 3.1 to calculate the Laplace transform of the distribution of U_n, since U_n = Σ_{i=1}^n (B_i + D_i):

E[e^{−sU_n}] = Π_{i=1}^n E[e^{−sB_i}] E[e^{−sD_i}] = ( (λ+s) e^{−(λ+s)ε} / (s + λ e^{−(λ+s)ε}) )^n ( λ/(λ+s) )^n, (3.6)

hence the result.

Let us define the function p_n as

p_n : x ∈ R⁺ ⟼ p_n(x) = Pr(β_0(x) = n), (3.7)

that is, p_n(x) is the probability of having n complete clusters in the interval [0, x]. Since, for all x ∈ R⁺, 0 ≤ p_n(x) ≤ 1, the Laplace transform of p_n with respect to x,

p̂_n(s) = ∫_0^∞ e^{−sx} p_n(x) dx, (3.8)

is well defined.

Theorem 3.3. For any n ≥ 0, the Laplace transform of p_n is given by

p̂_n(s) = λ^n e^{(λ+s)ε} / (s e^{(λ+s)ε} + λ)^{n+1}. (3.9)

Proof. We note that (see Figure 3)

β_0(x) ≥ n ⟺ ΔX_0 + U_{n−1} + B_n ≤ x if n ≥ 1,  ΔX_0 < ∞ if n = 0. (3.10)

Hence,

Pr(β_0(x) = 0) = 1 − Pr(ΔX_0 + B_1 ≤ x), (3.11)
Pr(β_0(x) = n) = Pr(ΔX_0 + U_{n−1} + B_n ≤ x) − Pr(ΔX_0 + U_n + B_{n+1} ≤ x). (3.12)

Let

Y_n = ΔX_0 + U_{n−1} + B_n if n ≥ 1,  Y_0 = 0. (3.13)

Then we have

L{Pr(Y_n ≤ ·)}(s) = ∫_0^∞ Pr(Y_n ≤ x) e^{−sx} dx = ∫_0^∞ ∫_0^x dP_{Y_n}(y) e^{−sx} dx = (1/s) E[e^{−sY_n}]
 = (1/s) E[e^{−sΔX_0}] E[e^{−sU_{n−1}}] E[e^{−sB_n}]
 = (1/s) λ^n / (e^{λε} s e^{sε} + λ)^n, (3.14)

for n ≥ 1, where we used Corollary 2.5 in the third equality. For n = 0, the Laplace transform is trivial and given by L{Pr(Y_0 ≤ ·)}(s) = 1/s. Substituting (3.14) in the Laplace transform of both sides of (3.12) yields

p̂_n(s) = L{Pr(Y_n ≤ ·)}(s) − L{Pr(Y_{n+1} ≤ ·)}(s) = e^{(λ+s)ε} λ^n / (s e^{(λ+s)ε} + λ)^{n+1},  n ≥ 0. (3.15)

The proof is thus complete.

Figure 3: Illustration of the condition equivalent to β_0(x) ≥ n.

Lemma 3.4. Let m be a positive integer. For any x > 0, as ε → 0, E[β_0(x)^m] → E[N_x^m].

Proof. Since there is almost surely a finite number of points in [0, x], for almost all sample paths there exists η > 0 such that ΔX_j ≥ η for any j = 1, …, N_x. Hence, for ε < η, β_0(x) = N_x. This implies that β_0(x) tends almost surely to N_x as ε goes to 0. Moreover, it is immediate from the very definition of β_0(x) that β_0(x) ≤ N_x. Since, for any m, E[N_x^m] is finite, the proof follows by dominated convergence.

Let Li_t(z), z, t ∈ R, |z| < 1, be the polylogarithm function with parameter t, defined by

Li_t(z) = Σ_{k=1}^∞ z^k / k^t. (3.16)

For m a positive integer, consider the function

M^m_{β_0} : x ⟼ E[β_0(x)^m] = Σ_{i=0}^∞ i^m p_i(x). (3.17)

Its Laplace transform is given by

M̂^m_{β_0}(s) = ∫_0^∞ E[β_0(x)^m] e^{−sx} dx. (3.18)

Corollary 3.5. Let α be defined as

α = (s/λ) e^{(λ+s)ε}. (3.19)

The Laplace transform of the mth moment of β_0 is

M̂^m_{β_0}(s) = (α / (s(α+1))) Li_{−m}(1/(α+1)), (3.20)

which converges provided that α > 0.

Proof. Taking the Laplace transform of both sides of (3.17), we get

M̂^m_{β_0}(s) = Σ_{i=1}^∞ i^m p̂_i(s) = (α/s) Σ_{i=1}^∞ i^m (1/(α+1))^{i+1} = (α/(s(α+1))) Li_{−m}(1/(α+1)), (3.21)

concluding the proof.

We denote by S(m, k) the Stirling number of the second kind [15]; that is, S(m, k) is the number of ways to partition a set of m objects into k nonempty groups. These numbers are intimately related to the polylogarithm by the following identity (see [16]), valid for any positive integer m:

Li_{−m}(z) = Σ_{k=0}^m (−1)^{m+k} k! S(m+1, k+1) (1−z)^{−(k+1)}. (3.22)

Corollary 3.6. The mth moment of the number of complete clusters on the interval [0, L] is given by

M^m_{β_0}(L) = Σ_{k=1}^m S(m, k) [ (L − kε) λ e^{−ελ} ]^k 1{L/ε > k}. (3.23)

Proof. Using (3.22) in the result of Corollary 3.5, we get

M̂^m_{β_0}(s) = (α/(s(α+1))) Σ_{k=0}^m (−1)^{m+k} k! S(m+1, k+1) ((α+1)/α)^{k+1} = (1/s) Σ_{k=0}^m c_{k,m} (1/α)^k, (3.24)

where the coefficients c_{k,m} are integers given by

c_{k,m} = Σ_{j=k}^m (−1)^{m+j} (j choose k) j! S(m+1, j+1). (3.25)

Using the following identity for the Stirling numbers [17],

Σ_{j=0}^m (−1)^j j! S(m+1, j+1) = 0, (3.26)

we find that c_{0,m} = 0 for m a positive integer. So we can write the Laplace transform of the moments as

M̂^m_{β_0}(s) = Σ_{k=1}^m c_{k,m} (λ e^{−ελ})^k e^{−kεs} / s^{k+1}, (3.27)

and apply the inverse Laplace transform to both sides of (3.27) to obtain

M^m_{β_0}(L) = Σ_{k=1}^m (c_{k,m}/k!) (L − kε)^k (λ e^{−ελ})^k 1{L > kε}. (3.28)

According to Lemma 3.4, when ε → 0, we obtain

M^m_{β_0}(L) → E[N_L^m] = Σ_{k=1}^m (c_{k,m}/k!) (Lλ)^k. (3.29)

Since, for a Poisson random variable of mean Lλ, E[N_L^m] = Σ_{k=1}^m S(m, k) (Lλ)^k, we have, for any λ > 0,

Σ_{k=1}^m (c_{k,m}/k!) (Lλ)^k = Σ_{k=1}^m S(m, k) (Lλ)^k, (3.30)

which shows that

c_{k,m} = S(m, k) k!. (3.31)

Thus we have proved (3.23) for any positive integer m.
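
The closed form (3.23) is straightforward to evaluate. The sketch below is our own illustration (not part of the paper), using the standard triangular recurrence for the Stirling numbers of the second kind; for m = 1 it reduces to E[β_0(L)] = (L − ε)λe^{−ελ} when L > ε.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(m, k):
    """Stirling number of the second kind: S(m, k) = k S(m-1, k) + S(m-1, k-1)."""
    if m == k == 0:
        return 1
    if m == 0 or k == 0:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

def moment_beta0(m, L, lam, eps):
    """mth moment of the number of complete clusters, formula (3.23)."""
    a = lam * math.exp(-eps * lam)
    return sum(stirling2(m, k) * ((L - k * eps) * a) ** k
               for k in range(1, m + 1) if L > k * eps)

mean = moment_beta0(1, 4.0, 1.0, 1.0)            # 3 e^{-1}
var = moment_beta0(2, 4.0, 1.0, 1.0) - mean ** 2  # 3 e^{-1} - 5 e^{-2}
```

For L = 4, ε = 1, λ = 1 this gives a mean of 3e^{−1} ≈ 1.104 and a variance of 3e^{−1} − 5e^{−2} ≈ 0.427.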

Theorem 3.7. For any n, L, λ, and ε, we have

Pr(β_0(L) = n) = (1/n!) Σ_{i=0}^{⌊L/ε⌋−n} ((−1)^i / i!) [ (L − (n+i)ε) λ e^{−λε} ]^{n+i}. (3.32)

Proof. Since β_0(L) ≤ N_L and since E[e^{sN_L}] is finite for any s ∈ R, we have, for any s ≥ 0,

E[e^{−sβ_0(L)}] = Σ_{k=0}^∞ ((−1)^k s^k / k!) E[β_0(L)^k]. (3.33)

Rearranging the terms of the right-hand side and substituting E[β_0(L)^k] by the result of (3.23), we obtain

E[e^{−sβ_0(L)}] = Σ_{k=0}^∞ [ (L − kε) λ e^{−λε} ]^k 1{L > kε} Σ_{j=k}^∞ ((−s)^j / j!) S(j, k). (3.34)

Furthermore, it is known (see [17]) that

Σ_{j=k}^∞ (x^j / j!) S(j, k) = (1/k!) (e^x − 1)^k. (3.35)

Hence,

E[e^{−sβ_0(L)}] = Σ_{k=0}^∞ [ (L − kε) λ e^{−λε} ]^k 1{L > kε} (e^{−s} − 1)^k / k!. (3.36)

Expanding (e^{−s} − 1)^k by the binomial theorem and inverting this Laplace transform term by term, the distribution of β_0(L) appears as the discrete measure

Σ_{k=0}^∞ Σ_{i=0}^k ((−1)^{k−i} / (i!(k−i)!)) [ (L − kε) λ e^{−λε} ]^k 1{L > kε} δ_i, (3.37)

where δ_a is the Dirac measure at point a. After some simple algebra, we find the expression of the probability that the interval contains n complete clusters:

Pr(β_0(L) = n) = p_n(L) = (1/n!) Σ_{i=0}^{⌊L/ε⌋−n} ((−1)^i / i!) [ (L − (n+i)ε) λ e^{−λε} ]^{n+i}, (3.38)

concluding the proof.
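
Formula (3.32) can be evaluated directly; as a sanity check, the probabilities must sum to one over n = 0, …, ⌊L/ε⌋. The sketch below is our own illustration, not part of the paper:

```python
import math

def p_beta0(n, L, lam, eps):
    """Pr(beta_0(L) = n) from the closed form (3.32)."""
    a = lam * math.exp(-lam * eps)
    total = 0.0
    for i in range(int(math.floor(L / eps)) - n + 1):
        total += (-1) ** i / math.factorial(i) * ((L - (n + i) * eps) * a) ** (n + i)
    return total / math.factorial(n)

# With L = 4 and eps = 1 the distribution is supported on {0, 1, 2, 3}.
dist = [p_beta0(n, 4.0, 1.3, 1.0) for n in range(5)]
```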

Lemma 3.8. For x ≥ 0, p_n(x) has the three following properties: (i) p_n(x) is differentiable; (ii) lim_{x→∞} p_n(x) = 0; (iii) lim_{x→∞} dp_n(x)/dx = 0.

Proof. Let j be a nonnegative integer. The function is obviously differentiable when x/ε is not an integer. Moreover, when x/ε crosses the integer value n + j, the only new term appearing in (3.38) is

((−1)^j / (n! j!)) [ (x − (n+j)ε) λ e^{−λε} ]^{n+j}. (3.39)

Since this term, as a function of x, vanishes together with its derivative at x = (n+j)ε, the function is also differentiable when x/ε is an integer, which proves (i). Items (ii) and (iii) are direct consequences of the final value theorem applied to the Laplace transforms of p_n and of its derivative.

The expression of p_n gives us a Laplace transform pair between the x and s domains: writing a = λ e^{−λε},

1{x ≥ 0} (1/n!) Σ_{i=0}^{⌊x/ε⌋−n} ((−1)^i / i!) [ (x − (n+i)ε) a ]^{n+i} ⟷ a^n e^{−nεs} / (s + a e^{−εs})^{n+1}. (3.40)

We can use this relation to find the distributions of B_i and U_n.

Theorem 3.9. The probability density functions of B_i and U_n, denoted respectively by f_{B_i}(x) and f_{U_n}(x), are given by

f_{B_i}(x) = [ λ e^{−ελ} p_0(x − ε) + e^{−ελ} (d/dx) p_0(x − ε) ] 1{x > ε}, (3.41)
f_{U_n}(x) = λ e^{−ελ} p_{n−1}(x − ε) 1{x > ε}, (3.42)

where the expressions of p_0(x − ε) and (d/dx) p_0(x − ε) are straightforwardly obtained from (3.32).

Proof. According to Theorem 3.1,

E[e^{−sB_i}] = (λ + s) e^{−(λ+s)ε} / (s + λ e^{−(λ+s)ε})
 = λ e^{−ελ} e^{−εs} (1 / (s + λ e^{−ελ} e^{−εs})) + e^{−ελ} e^{−εs} (s / (s + λ e^{−ελ} e^{−εs}))
 = λ e^{−ελ} e^{−εs} p̂_0(s) + e^{−ελ} e^{−εs} s p̂_0(s). (3.43)

Here, using the inverse Laplace transform established in (3.40) and remembering that p_0(x) = 0 for x < 0, we get an analytical expression for f_{B_i}(x), proving (3.41).
Proceeding in a similar fashion, we can find the distribution of U_n by inverting its Laplace transform given by Corollary 3.2:

E[e^{−sU_n}] = ( λ e^{−ελ} e^{−εs} / (s + λ e^{−ελ} e^{−εs}) )^n = λ e^{−ελ} e^{−εs} p̂_{n−1}(s). (3.44)

We thus have (3.42).

We can also obtain the probability that the segment [0, L] is completely covered by the sensors. To do this, recall that the first point (if there is one) covers the interval [X_1 − ε, X_1 + ε].

Theorem 3.10. Let R_{m,n}(x) be defined as

R_{m,n}(x) = Σ_{i=m}^{⌊x/ε⌋−1} e^{−λε(i+n)} Σ_{j=0}^{i+n} [ λ((1−i)ε − x) ]^j / j!. (3.45)

Then,

Pr([0, L] is covered) = R_{0,1}(L) − e^{−λε} R_{0,1}(L − ε) − e^{−λε} R_{1,0}(L) + e^{−2λε} R_{1,0}(L − ε). (3.46)

Proof. The condition of total coverage is the same as

∀x ∈ [0, L], x ∈ ∪_i [X_i − ε, X_i + ε], (3.47)

which means that

{[0, L] is covered} = {B_1 ≥ L − X_1} ∩ {X_1 ≤ ε}. (3.48)

Hence,

Pr([0, L] is covered) = ∫_0^ε Pr(B_1 ≥ L − X_1 | X_1 = x) dP_{X_1}(x), (3.49)

and, since B_1 and X_1 are independent,

Pr([0, L] is covered) = ∫_0^ε ∫_{L−x}^∞ f_{B_1}(u) λ e^{−λx} du dx. (3.50)

The result then follows from Lemma 3.8 and some tedious but straightforward algebra.
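
The coverage event in (3.48), namely X_1 ≤ ε together with the first cluster extending past L, can also be estimated by Monte Carlo. The sketch below is our own illustration (reusing the exponential-gap sampling of the process); it only checks the qualitative behavior that the coverage probability grows with λ, since the exact value depends on the random seed.

```python
import random

def is_covered(points, L, eps):
    """Event (3.48): the first point lies within eps of 0 and the first
    cluster (consecutive gaps <= eps) has its eps-extension reaching L."""
    if not points or points[0] > eps:
        return False
    last = points[0]
    for p in points[1:]:
        if p - last > eps:
            break
        last = p
    return last + eps >= L

def coverage_probability(lam, L, eps, trials, rng):
    hits = 0
    for _ in range(trials):
        pts, x = [], rng.expovariate(lam)
        while x <= L:
            pts.append(x)
            x += rng.expovariate(lam)
        hits += is_covered(pts, L, eps)
    return hits / trials

rng = random.Random(7)
p_low = coverage_probability(0.5, 4.0, 1.0, 4000, rng)
p_high = coverage_probability(5.0, 4.0, 1.0, 4000, rng)  # much larger than p_low
```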

4. Other Scenarios

The method can be used to calculate p_n for other definitions of the number of clusters. We consider two other definitions: the number of incomplete clusters and the number of clusters on a circle.

4.1. Number of Incomplete Clusters

The major difference with Section 3 is that a cluster is now taken into account as soon as one of its points is inside the interval [0, L]. So, for instance, in Figure 3, we actually count n + 1 incomplete clusters. We define β̃_0(L) as the number of incomplete clusters on the interval [0, L].

Theorem 4.1. Let G(k) be defined as

G(k) = (−1)^k ( e^{−kλε} Σ_{j=0}^k [ λ(kε − L) ]^j / j! − e^{−λL} ) 1{L > kε}, (4.1)

for k ∈ Z⁺, and G(−1) = e^{−λL}. Then,

Pr(β̃_0(L) = n) = Σ_{i=n}^{⌊L/ε⌋+1} (−1)^{i+n} (i choose n) (G(i−1) + G(i)),  for n ≥ 0. (4.2)

Proof. The condition β̃_0(L) ≥ n is now given by

β̃_0(L) ≥ n ⟺ ΔX_0 + U_{n−1} ≤ L if n ≥ 1,  ΔX_0 < ∞ if n = 0. (4.3)

We define Ỹ_n as

Ỹ_n = ΔX_0 + U_{n−1} if n ≥ 1,  Ỹ_0 = 0. (4.4)

Repeating the same calculations as in Section 3, we find the Laplace transform of Pr(β̃_0(·) = n):

L{Pr(β̃_0(·) = n)}(s) = (λ/(λ+s)) (e^{(λ+s)ε}/λ) ( λ e^{−(λ+s)ε} / (s + λ e^{−(λ+s)ε}) )^n if n ≥ 1,  1/(λ+s) if n = 0. (4.5)

With this expression, following the lines of Corollary 3.6, we obtain

L{E[β̃_0(·)^m]}(s) = Σ_{k=1}^{m+1} S(m+1, k) (k−1)! (λ/(λ+s)) (λ e^{−λε} e^{−sε})^{k−1} / s^k. (4.6)

Then, we write the partial fraction decomposition

λ / ((λ+s) s^k) = (−1)^k (1/λ^{k−1}) (1/(λ+s)) + Σ_{i=1}^k (1/s^i) (−1/λ)^{k−i}, (4.7)

to find an expression with a well-known inverse Laplace transform, and, after inverting it, we obtain

E[β̃_0(L)^m] = Σ_{k=0}^m S(m+1, k+1) k! G(k). (4.8)

Expanding the Laplace transform of the distribution of β̃_0(L) in a Taylor series and rearranging terms, we get

E[e^{−sβ̃_0(L)}] = 1 + G(0) Σ_{j=1}^∞ (−s)^j / j! + Σ_{k=1}^∞ k! G(k) Σ_{j=k}^∞ ((−s)^j / j!) S(j+1, k+1). (4.9)

Now, we use another recurrence obeyed by the Stirling numbers [17],

S(j+1, k+1) = S(j, k) + (k+1) S(j, k+1), (4.10)

to get

Σ_{j=k}^∞ (x^j / j!) S(j+1, k+1) = Σ_{j=k}^∞ (x^j / j!) S(j, k) + (k+1) Σ_{j=k+1}^∞ (x^j / j!) S(j, k+1) = (1/k!) (e^x − 1)^k + (1/k!) (e^x − 1)^{k+1}. (4.11)

Hence,

E[e^{−sβ̃_0(L)}] = 1 + Σ_{k=1}^∞ (G(k−1) + G(k)) (e^{−s} − 1)^k. (4.12)

Inverting this expression for any nonnegative integer n, we obtain the desired distribution.
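
The first moment offers a quick consistency check: counting cluster beginnings directly (a point starts a cluster if it is the first point or if its gap to the previous point exceeds ε) gives, for L > ε, E[β̃_0(L)] = 1 − e^{−λε} + λ(L − ε)e^{−λε}. The sketch below is our own illustration comparing this value with a Monte Carlo count.

```python
import math
import random

def count_clusters(points, eps):
    """Number of clusters (maximal runs with gaps <= eps), complete or not."""
    if not points:
        return 0
    return 1 + sum(1 for a, b in zip(points, points[1:]) if b - a > eps)

def mean_incomplete(lam, L, eps):
    """Mean number of clusters meeting [0, L], for L > eps (our own
    computation, obtained by counting cluster beginnings)."""
    return 1 - math.exp(-lam * eps) + lam * (L - eps) * math.exp(-lam * eps)

rng = random.Random(3)
lam, L, eps, trials = 1.0, 4.0, 1.0, 20000
acc = 0
for _ in range(trials):
    pts, x = [], rng.expovariate(lam)
    while x <= L:
        pts.append(x)
        x += rng.expovariate(lam)
    acc += count_clusters(pts, eps)
est = acc / trials   # close to mean_incomplete(1.0, 4.0, 1.0), about 1.736
```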

4.2. Number of Clusters in a Circle

We now investigate the case where the points of the process are deployed over a circle, and we want to count the number of complete clusters, which corresponds to computing the Euler characteristic of the total coverage; we therefore call this quantity χ. Without loss of generality, we can choose an arbitrary point to be the origin.

Theorem 4.2. The distribution of the Euler characteristic χ(L), when the points are deployed over a circle of circumference L, is given, for n ≥ 0, by

Pr(χ(L) = n) = e^{−λL} 1{n=0} + (1 − e^{−λL}) (λ e^{−ελ} / n!) Σ_{i=0}^{⌊L/ε⌋−n} ((−1)^i / i!) [ (L − (n+i)ε) λ e^{−ελ} ]^{n+i−1} ( L − (n+i)ε + (n+i)/λ ). (4.13)

Proof. If there are no points on the circle, χ(L) = 0. Otherwise, if there is at least one point, we choose the origin at this point, and we have the equivalence of events

{χ(L) ≥ n} ⟺ {U_{n−1} + B_n ≤ L, N_L > 0} if n ≥ 1,  {ΔX_0 < ∞} if n = 0. (4.14)

Figure 4 presents an example of this equivalence.
We can define Ỹ_n as

Ỹ_n = U_{n−1} + B_n if n ≥ 1,  Ỹ_0 = 0, (4.15)

to find the Laplace transform of Pr(χ(·) = n):

L{Pr(χ(·) = n)}(s) = (1 − e^{−λL}) ((λ+s)/λ) (e^{(λ+s)ε}/λ) ( λ e^{−(λ+s)ε} / (s + λ e^{−(λ+s)ε}) )^{n+1}. (4.16)

The number of clusters is almost surely equal to the number of points when ε → 0, so, proceeding as for Corollary 3.6,

E[χ(L)^m] = (1 − e^{−λL}) λ e^{−ελ} Σ_{k=1}^m S(m, k) [ (L − kε) λ e^{−ελ} ]^{k−1} ( L − kε + k/λ ) 1{L > kε}. (4.17)

Expanding the Laplace transform in a Taylor series and rearranging terms, as we did previously, yields

E[e^{−sχ(L)}] = (1 − e^{−λL}) λ e^{−ελ} Σ_{k=0}^∞ [ (L − kε) λ e^{−ελ} ]^{k−1} ( L − kε + k/λ ) 1{L > kε} Σ_{j=k}^∞ ((−s)^j / j!) S(j, k). (4.18)

Since

Σ_{j=k}^∞ ((−s)^j / j!) S(j, k) = (e^{−s} − 1)^k / k!, (4.19)

we can directly invert this Laplace transform, add the case where there are no points for χ(L) = 0, and the theorem is proved.

Figure 4: Illustration of the condition equivalent to χ(L) ≥ n. Since the coverage of the last point on [0, L] overlaps the cluster with a point at zero, they are actually contained in the same cluster.
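
On the circle, the cluster structure is determined by the cyclic gaps between consecutive points: each gap larger than ε ends one cluster, and when no gap exceeds ε the single cluster wraps around the whole circle, whose Euler characteristic is zero. A minimal counting sketch, our own illustration rather than the paper's computation:

```python
def circle_clusters(points, L, eps):
    """Clusters of points on a circle of circumference L, where cyclically
    consecutive points with gap <= eps share a cluster.  Each cyclic gap
    > eps ends one cluster, so the count equals the number of such gaps;
    with no points, or with every gap <= eps (one cluster wrapping the
    whole circle), the Euler characteristic is 0."""
    pts = sorted(points)
    if not pts:
        return 0
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(L - pts[-1] + pts[0])          # wrap-around gap
    return sum(1 for g in gaps if g > eps)
```

For instance, on a circle of circumference 3 with ε = 0.6, the points {0.0, 0.5, 2.0} form two clusters, while {0.0, 1.0, 2.0} with ε = 1.2 wrap into a single all-covering cluster.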

5. Examples

We consider some examples to illustrate the results of the paper. Here, the behavior of the mean and the variance of β_0(L), as well as of Pr(β_0(L) = n), is presented.

From (3.23), we have that E[β_0(L)] is given by

E[β_0(L)] = (L − ε) λ e^{−ελ} 1{L > ε}. (5.1)

This expression agrees with the intuition that there are three typical regions for a fixed ε. When λ is much smaller than 1/ε, the number of clusters is approximately the number of sensors, since connections between sensors are unlikely; this can be seen from the fact that E[β_0(L)] ≈ Lλ when λ → 0. As we increase λ, the mean number of direct connections overcomes the mean number of sensors, and, at some value of λ, we expect E[β_0(L)] to decrease, as adding a point is then likely to connect disconnected clusters. We remark that the maximum occurs exactly at λ = 1/ε, that is, when the mean distance between two sensors equals the threshold distance for them to be connected. At this maximum, E[β_0(L)] takes the value (L/ε − 1)e^{−1}. Finally, when λ is very large, all sensors tend to be connected, and there is only one cluster, which even extends beyond L, so there are no complete clusters in the interval [0, L]. This is immediate when letting λ → ∞ in the last equation. Figure 5 shows this behavior when L = 4 and ε = 1.

Figure 5: Variation of the mean number of clusters as a function of λ when L = 4 and ε = 1.
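
The location of the maximum can be confirmed numerically by scanning (5.1) over a grid of λ values (a small illustration of ours):

```python
import math

def mean_beta0(lam, L, eps):
    """Formula (5.1): mean number of complete clusters."""
    return (L - eps) * lam * math.exp(-eps * lam) if L > eps else 0.0

L, eps = 4.0, 1.0
grid = [i / 1000 for i in range(1, 5001)]          # lambda in (0, 5]
best = max(grid, key=lambda lam: mean_beta0(lam, L, eps))
# best lands on 1/eps = 1, where the mean equals (L/eps - 1)/e
```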

The variance can also be obtained from (3.23):

Var[β_0(L)] = (L − ε) λ e^{−ελ} 1{L > ε} + (L − 2ε)² λ² e^{−2ελ} 1{L > 2ε} − (L − ε)² λ² e^{−2ελ} 1{L > ε}, (5.2)

and, under the condition that L > 2ε,

Var[β_0(L)] = (L − ε) λ e^{−ελ} + ε(3ε − 2L) λ² e^{−2ελ}. (5.3)

Figure 6 shows a plot of Var[β_0(L)] as a function of λ for L = 4 and ε = 1. We can expect that, when λ is small compared to 1/ε, the plot should be approximately linear, since there would not be many connections in the network, and the variance of the number of clusters should be close to the variance of the number of sensors, given by λL. Since β_0(L) tends almost surely to 0 when λ goes to infinity, Var[β_0(L)] should also tend to 0 in this case. Those two properties are observed in the plot. Besides, we find the critical points of this function, and again λ = 1/ε is one of them; at this value, Var[β_0(L)] = (L/ε − 1)e^{−1} + (3 − 2L/ε)e^{−2}. The other two are the ones satisfying the transcendental equation

λ e^{−λε} = (L − ε) / (2ε(2L − 3ε)). (5.4)

Using the second derivative, we realize that 1/ε actually gives a minimum. Besides, if L ≤ 2ε, there is just one critical point, a maximum, at λ = 1/ε.

Figure 6: Behavior of the variance of the number of clusters as a function of λ when L = 4 and ε = 1.

The last example of the section uses the result obtained in Theorem 3.7. We consider again L = 4 and ε = 1 to obtain the following distribution:

Pr(β_0(L) = 0) = 1 − 3λe^{−λ} + 2λ²e^{−2λ} − (1/6)λ³e^{−3λ},
Pr(β_0(L) = 1) = 3λe^{−λ} − 4λ²e^{−2λ} + (1/2)λ³e^{−3λ},
Pr(β_0(L) = 2) = 2λ²e^{−2λ} − (1/2)λ³e^{−3λ},
Pr(β_0(L) = 3) = (1/6)λ³e^{−3λ},
Pr(β_0(L) > 3) = 0. (5.5)

These expressions are simple, and they have at most four terms, since L = 4ε. We plot these functions in Figure 7. The critical points of those plots at λ = 1/ε are explained by the fact that, as a function of λ, for every n, Pr(β_0(L) = n) can be represented as a sum

Σ_{i=0}^{j} q_{i,j} (λ e^{−λε})^i, (5.6)

where the coefficients q_{i,j} do not depend on λ. However, (λ e^{−λε})^i has a critical point at λ = 1/ε for all i > 0, so this is also a critical point of Pr(β_0(L) = n). If λ is small, we should expect Pr(β_0(L) = 0) to be close to one, since N is then likely to have no points. For this reason, in this region, Pr(β_0(L) = n) for n > 0 is small. When λ is large, we expect very large clusters, likely to be larger than L, so it is unlikely to have a complete cluster in the interval, and, again, Pr(β_0(L) = 0) approaches unity, while Pr(β_0(L) = n) for n > 0 becomes small again.

Figure 7: Probabilities of connectivity, Pr(β_0(L) = n), for n = 0, 1, 2, 3, as a function of λ when L = 4 and ε = 1.

Acknowledgment

The authors would like to thank the anonymous referee whose constructive remarks helped us to improve the presentation of this paper.

References

  1. J. M. Kahn, R. H. Katz, and K. S. J. Pister, “Mobile networking for smart dust,” in Proceedings of the International Conference on Mobile Computing and Networking (MOBICOM '99), pp. 271–278, Seattle, Wash, USA, August 1999.
  2. F. L. Lewis, Wireless Sensor Networks, chapter 2, John Wiley, New York, NY, USA, 2004.
  3. G. J. Pottie and W. J. Kaiser, “Wireless integrated network sensors,” Communications of the ACM, vol. 43, no. 5, pp. 51–58, 2000.
  4. C. Y. Chong and S. P. Kumar, “Sensor networks: evolution, opportunities, and challenges,” Proceedings of the IEEE, vol. 91, no. 8, pp. 1247–1256, 2003.
  5. R. Ghrist and A. Muhammad, “Coverage and hole-detection in sensor networks via homology,” in Proceedings of the 4th International Symposium on Information Processing in Sensor Networks (IPSN '05), pp. 254–260, April 2005.
  6. V. de Silva and R. Ghrist, “Coordinate-free coverage in sensor networks with controlled boundaries via homology,” International Journal of Robotics Research, vol. 25, no. 12, pp. 1205–1222, 2006.
  7. A. F. Siegel and L. Holst, “Covering the circle with random arcs of random sizes,” Journal of Applied Probability, vol. 19, no. 2, pp. 373–381, 1982.
  8. L. Holst, “On multiple covering of a circle with random arcs,” Journal of Applied Probability, vol. 17, no. 1, pp. 284–290, 1980.
  9. S. Kumar, T. H. Lai, and A. Arora, “Barrier coverage with wireless sensors,” in Proceedings of the 11th Annual International Conference on Mobile Computing and Networking (MOBICOM '05), pp. 284–298, Cologne, Germany, September 2005.
  10. P. Manohar, S. S. Ram, and D. Manjunath, “Path coverage by a sensor field: the nonhomogeneous case,” ACM Transactions on Sensor Networks, vol. 5, no. 2, pp. 17:1–17:26, 2009.
  11. M. Kahle, “Random geometric complexes,” Discrete & Computational Geometry, vol. 45, no. 3, pp. 553–573, 2011.
  12. M. Kahle and E. Meckes, “Limit theorems for Betti numbers of random simplicial complexes,” preprint, http://arxiv.org/abs/1009.4130.
  13. S. Asmussen, Applied Probability and Queues, vol. 51 of Stochastic Modelling and Applied Probability, Springer, New York, NY, USA, 2nd edition, 2003.
  14. R. B. Cooper, Introduction to Queueing Theory, North-Holland, New York, NY, USA, 2nd edition, 1981.
  15. R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics: A Foundation for Computer Science, chapter 6.1, Addison-Wesley, Reading, Mass, USA, 2nd edition, 1994.
  16. D. C. Wood, Tech. Rep. 15-92, Computing Laboratory, University of Kent, Canterbury, UK, 1992.
  17. S. Roman, The Umbral Calculus, Pure and Applied Mathematics, Academic Press, New York, NY, USA, 1984.