Abstract

Given a Poisson process on a bounded interval, its random geometric graph is the graph whose vertices are the points of the Poisson process, and edges exist between two points if and only if their distance is less than a fixed given threshold. We compute explicitly the distribution of the number of connected components of this graph. The proof relies on inverting some Laplace transforms.

1. Motivation

As technology advances [1–3], one can expect a wide expansion of so-called sensor networks. Such networks represent the next evolutionary step in building, utilities, industry, home, agriculture, defense, and many other contexts [4].

These networks are built upon a multitude of small and cheap sensors, which are devices with limited transmission capabilities. Each sensor monitors a region around itself by measuring some environmental quantities (e.g., temperature, humidity), detecting intrusion, and so forth, and broadcasts its collected information to other sensors or to a central node. The question of whether information can be shared across the whole network is then of crucial importance. Mathematically speaking, sensors can be abstracted as points in 𝐑2, 𝐑3, or a manifold. The region a sensor monitors is represented by a disk centered at the location of the sensor. In what follows, it is assumed that the broadcast radius, that is, the distance at which a sensor can communicate with another sensor, is equal to the monitoring radius. Two questions are then of interest: can any two sensors communicate using the others as hopping relays, and is the whole region covered by the sensors? The recent works of Ghrist and his collaborators [5, 6] show how, in any dimension, algebraic topology can be used to answer these questions. Their method consists of building the so-called simplicial complex associated with the configuration of points and the radius of communication. Simple algebraic computations then yield the Betti numbers: the zeroth Betti number, usually denoted by β0, is the number of connected components; the next one, β1, is the number of coverage holes. Thus, we have a satisfactory deployment whenever β0=1 and β1=0. Trying to pursue their work in random settings, we quickly realized that the dimension of the ambient space plays a key role. We therefore began with the analysis of dimension 1, which appeared to be the simplest situation. In this case, there is no need for algebraic topology, so we will not go further in the description of this line of thought even though it was our first motivation.

In dimension 1, the only question of interest is that of connectivity, but it can take different forms. Imagine we are given [0,1] as a domain in which n points {x_1,…,x_n} are drawn. For a radius r, one can wonder whether $[0,1] \subset \bigcup_{i=1,\dots,n} [x_i - r, x_i + r]$, or one can investigate whether $[x_i - r, x_i + r] \cap [x_{i+1} - r, x_{i+1} + r] \neq \emptyset$ for all $i = 1,\dots,n-1$. The second situation is less restrictive since we do not impose that the boundary of the interval be covered. Depending on the application we have in mind, both questions are sensible. A slightly different but closely related problem is that of the circle: consider now that the points are distributed along a circle $C_1$ of unit perimeter and ask again whether $C_1 \subset \bigcup_{i=1,\dots,n} B(x_i, r)$, where $B(x,r)$ is the 2-dimensional ball of center $x$ and radius $r$. This problem was thoroughly analyzed several years ago ([7] and references therein) for a fixed number of i.i.d. arcs over the circle: a closed form formula can be given for the probability of coverage as a function of the number of arcs and of the common law of the arc lengths. Some variations of this problem have been investigated since then; see, for instance, [8]. More recently, in [9], algorithms are devised to determine whether a domain can be protected from intrusion by a “belt” of sensors (namely, a ring or the border of a rectangle). There is no performance analysis in this work, which is focused on algorithmic solutions for this special problem of coverage. Still motivated by applications to sensor networks, [10] considers the situation where sensors are actually placed in a plane, have a fixed radius of observation, and analyzes the connectivity of the trace of the covered region over a line.
Some recent results of Kahle [11, 12] are only loosely linked to ours: the motivation is the same, studying the Betti numbers of some random simplicial complexes, but the results are only asymptotic and valid in dimension greater than 2.

Our main result is the distribution of the number of connected components for a Poisson distribution of sensors in a bounded interval. We could not use the method of [7] since the number of gaps does not determine the connectivity of the domain: one may have a single gap at the “beginning,” which means that all the points are pairwise within the threshold distance and, thus, that the network is connected, or one may have a single gap in the “middle,” which means that there is a true hole of connectivity.

Actually, our method is very much related to queueing theory. Indeed, clusters, that is, sequences of neighboring points, are the strict analogue of busy periods; see Section 2. As will appear below, our analysis turns out to be that of an M/D/1/1 queue with preemption: when a customer arrives during a service, it preempts the server, and, since there is no buffer, the customer who was in service is removed from the queueing system. This analogy led us to use standard tools of queueing theory: Laplace transforms and renewal processes; see, for instance, [13, 14]. This works perfectly, and, with a bit of calculus, we can compute all the characteristics we are interested in. It is worthwhile to note that a queueing model (namely, the M/G/∞) also appears in [10].

The paper is organized as follows: Section 2 presents the model and defines the relevant quantities to be computed. The calculations and analytical results are presented in Section 3; for our situation, we find results analogous to those of [7]. In Section 4, two other scenarios are considered: the number of incomplete clusters and the number of clusters on a circle. In Section 5, numerical examples are presented and analyzed.

2. Problem Formulation

Let $L > 0$. We assume that we are given a Poisson process, denoted by $N$, of intensity $\lambda$ on $[0,L]$. Let $(X_i,\ i \ge 1)$ be the atoms of $N$, labeled in increasing order. The random variables $\Delta X_i = X_{i+1} - X_i$ are then i.i.d. and exponentially distributed with parameter $\lambda$. We fix $\epsilon > 0$. Two points, located respectively at $x$ and $y$, are said to be directly connected whenever $|x - y| \le \epsilon$. For $i < j$, two points of $N$, say $X_i$ and $X_j$, are indirectly connected if $X_l$ and $X_{l+1}$ are directly connected for every $l = i, \dots, j-1$. A set of points pairwise directly or indirectly connected is called a cluster; a complete cluster is a cluster which begins and ends within $[0,L]$. The connectivity of the whole network is measured by the number of clusters.

The number of points in the interval $[0,x]$ is denoted by $N_x = \sum_{n \ge 1} \mathbf{1}_{\{X_n \le x\}}$. The random variable $A_i$ given by
\[
A_i = \begin{cases} X_1 & \text{if } i = 1,\\ \inf\{X_j : X_j > A_{i-1},\ X_j - X_{j-1} > \epsilon\} & \text{if } i > 1, \end{cases} \tag{2.1}
\]
represents the beginning of the $i$th cluster, denoted by $C_i$. In the same way, the end of this same cluster, $E_i$, is defined by
\[
E_i = \inf\{X_j + \epsilon : X_j \ge A_i,\ X_{j+1} - X_j > \epsilon\}. \tag{2.2}
\]
So the $i$th cluster $C_i$ has a number of points given by $N_{E_i} - N_{A_i}$. We define the length $B_i$ of $C_i$ as $E_i - A_i$. The intercluster size $D_i$ is the distance between the end of $C_i$ and the beginning of $C_{i+1}$, which means that $D_i = A_{i+1} - E_i$, and $\Delta A_i = A_{i+1} - A_i = B_i + D_i$ is the distance between the first points of the two consecutive clusters $C_i$ and $C_{i+1}$.
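These definitions are straightforward to turn into a computation. The following sketch is an illustration of ours in plain Python, not part of the paper; the keys A, E, and B mirror the notation of the text. It splits a sorted point set into clusters by cutting at every gap larger than ε and flags the complete ones:

```python
def cluster_decomposition(points, eps, L):
    """Split sorted points of [0, L] into clusters: maximal runs of
    consecutive points whose gaps are <= eps. For each cluster, return
    its beginning A_i, its end E_i = last point + eps, and its length
    B_i = E_i - A_i; a cluster is complete when it ends inside [0, L]."""
    result = []
    i = 0
    while i < len(points):
        j = i
        # extend the cluster while consecutive points are directly connected
        while j + 1 < len(points) and points[j + 1] - points[j] <= eps:
            j += 1
        A, E = points[i], points[j] + eps
        result.append({"A": A, "E": E, "B": E - A, "complete": E <= L})
        i = j + 1
    return result

# Example: gaps 0.05, 0.35, 0.05, 0.30 with eps = 0.1 give three clusters
cl = cluster_decomposition([0.10, 0.15, 0.50, 0.55, 0.85], eps=0.1, L=1.0)
```

Note that a single isolated point already forms a cluster of length exactly ε, in agreement with (2.2).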

Remark 2.1. With this set of assumptions and definitions, we can see our problem as an $M/D/1/1$ preemptive queue; see Figure 1. In this nonconservative system, the service time is deterministic and given by $\epsilon$. When a customer arrives during a service, the served customer is removed from the system and replaced by the arriving customer. Within this framework, a cluster corresponds to what is called a busy period, the intercluster size is an idle time, and $B_i + D_i$ is the length of the $i$th cycle.

The number of complete clusters in $[0,L]$ corresponds to the number of connected components $\beta_0(L)$ of the network (since, in dimension 1, it coincides with the Euler characteristic of the union of intervals; see [5]). The distance between the beginning of the first cluster and the beginning of the $(i+1)$th one is defined as $U_i = \sum_{k=1}^{i} \Delta A_k$. We also set $\Delta X_0 = D_0 = X_1$. Figure 2 illustrates these definitions.

For the sake of completeness, we recall the essentials of Markov process theory needed in the sequel; for further details we refer, for instance, to [13, 14]. In what follows, for a process $X$, $(\mathcal{F}^X_t,\ t \ge 0)$ is the filtration generated by the sample-paths of $X$:
\[
\mathcal{F}^X_t = \sigma\{X(s),\ s \le t\}. \tag{2.3}
\]

Definition 2.2. A process $(X(t),\ t \ge 0)$ with values in a denumerable space $E$ is said to be Markov whenever
\[
\mathbb{E}\bigl[F(X(t+s)) \mid \mathcal{F}^X_t\bigr] = \mathbb{E}\bigl[F(X(t+s)) \mid X(t)\bigr], \tag{2.4}
\]
for any bounded function $F$ from $E$ to $\mathbf{R}$ and any $t \ge 0$, $s \ge 0$.
Equivalently, a process 𝑋 is Markov if and only if, given the present (i.e., given 𝑋(𝑑)), the past (i.e., the sample-path of 𝑋 before time 𝑑) and the future (i.e., the sample-path of 𝑋 after time 𝑑) of the process are independent.

Definition 2.3. A random variable 𝜏 with values in 𝐑+βˆͺ{+∞} is an ℱ𝑋-stopping time whenever, for any 𝑑β‰₯0, the event {πœβ‰€π‘‘} belongs to ℱ𝑋𝑑.

The point is that (2.4) still holds when $t$ is replaced by a stopping time $\tau$: given $X(\tau)$, the past and the future of $X$ are independent; $X$ is then said to be strong Markov. This property always holds for Markov processes with values in a denumerable space, but it is not necessarily true for Markov processes with values in an arbitrary space.

From now on, the Markov process under consideration is 𝑁, the Poisson process of intensity πœ† over [0,𝐿].

Lemma 2.4. For any 𝑖β‰₯1, 𝐴𝑖 and 𝐸𝑖 are stopping times.

Proof. Let us consider the filtration $\mathcal{F}^N_t = \sigma\{N_a,\ a \le t\}$. For $i = 1$, we have
\[
\{A_1 \le t\} \Longleftrightarrow \{X_1 \le t\} \Longleftrightarrow \{N_t \ge 1\} \in \mathcal{F}^N_t. \tag{2.5}
\]
Thus, $A_1$ is a stopping time. For $A_2$, the second cluster has not begun by time $t$ if and only if all the gaps between consecutive points observed up to $t$ are at most $\epsilon$:
\[
\{A_2 > t\} \Longleftrightarrow \bigcup_{n \ge 0} \Bigl( \{N_t = n\} \cap \bigcap_{j=1}^{n-1} \{\Delta X_j \le \epsilon\} \Bigr) \in \mathcal{F}^N_t, \tag{2.6}
\]
so $A_2$ is also a stopping time. We proceed along the same lines for the other $A_i$, and likewise for the $E_i$, to prove that they are stopping times.

Since $N$ is a (strong) Markov process, the next corollary is immediate.

Corollary 2.5. The family $\{B_i, D_i,\ i \ge 1\}$ is a set of independent random variables. Moreover, each $D_i$ is exponentially distributed with mean $1/\lambda$, and the random variables $\{B_i,\ i \ge 1\}$ are i.i.d.

3. Calculations

Theorem 3.1. The Laplace transform of the distribution of $B_i$ is given by
\[
\mathbb{E}\bigl[e^{-sB_i}\bigr] = \frac{\lambda+s}{\lambda + s\,e^{(\lambda+s)\epsilon}}. \tag{3.1}
\]

Proof. Since $\Delta X_j$ is an exponentially distributed random variable,
\[
\mathbb{E}\Bigl[e^{-s\Delta X_j}\mathbf{1}_{\{\Delta X_j \le \epsilon\}}\Bigr] = \int_0^{\epsilon} e^{-st}\,\lambda e^{-\lambda t}\,dt = \frac{\lambda}{s+\lambda}\Bigl(1 - e^{-(s+\lambda)\epsilon}\Bigr). \tag{3.2}
\]
Hence, decomposing according to the number of points of the first cluster, the Laplace transform of the distribution of $B_1$ is given by
\[
\begin{aligned}
\mathbb{E}\bigl[e^{-sB_1}\bigr] &= \sum_{n=1}^{\infty} \mathbb{E}\bigl[e^{-sB_1},\ N_{E_1} = n\bigr]
= \sum_{n=1}^{\infty} \mathbb{E}\Bigl[ e^{-s(\sum_{j=1}^{n-1}\Delta X_j + \epsilon)}\,\mathbf{1}_{\{\Delta X_n > \epsilon\}} \prod_{j=1}^{n-1} \mathbf{1}_{\{\Delta X_j \le \epsilon\}} \Bigr]\\
&= \sum_{n=1}^{\infty} \Bigl( \mathbb{E}\bigl[e^{-s\Delta X_1}\mathbf{1}_{\{\Delta X_1 \le \epsilon\}}\bigr] \Bigr)^{n-1} \Pr\bigl(\Delta X_n > \epsilon\bigr)\, e^{-s\epsilon}\\
&= \sum_{n=0}^{\infty} \Bigl( \frac{\lambda}{s+\lambda}\bigl(1 - e^{-(s+\lambda)\epsilon}\bigr) \Bigr)^{n} e^{-\lambda\epsilon}\, e^{-s\epsilon}
= \frac{\lambda+s}{\lambda + s\,e^{(\lambda+s)\epsilon}}. \tag{3.3}
\end{aligned}
\]
Using Corollary 2.5, we have 𝔼[π‘’βˆ’π‘ π΅1]=𝔼[π‘’βˆ’π‘ π΅π‘–], which concludes the proof.

From this result, we can immediately compute the Laplace transform of the distribution of $\Delta A_i$. Since $\Delta A_i = B_i + D_i$ and using Corollary 2.5,
\[
\mathbb{E}\bigl[e^{-s\Delta A_i}\bigr] = \mathbb{E}\bigl[e^{-sB_i}\bigr]\,\mathbb{E}\bigl[e^{-sD_i}\bigr] = \frac{\lambda}{\lambda + s\,e^{(\lambda+s)\epsilon}}. \tag{3.4}
\]
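Theorem 3.1 is easy to check by simulation, since the busy-period construction of Remark 2.1 generates $B_1$ directly from exponential gaps. The sketch below is an illustrative check of ours (the helper names `sample_B` and `laplace_B` are not from the paper):

```python
import math
import random

def sample_B(lam, eps, rng):
    """One cluster length B: accumulate exponential gaps while they stay
    <= eps, then the first gap > eps ends the cluster; the deterministic
    'service time' eps is always part of B (Remark 2.1)."""
    b = eps
    while True:
        g = rng.expovariate(lam)
        if g > eps:
            return b
        b += g

def laplace_B(s, lam, eps):
    """Right-hand side of Theorem 3.1: E[exp(-s B)]."""
    return (lam + s) / (lam + s * math.exp((lam + s) * eps))

rng = random.Random(0)
lam, eps, s = 2.0, 0.5, 1.0
n = 200_000
mc = sum(math.exp(-s * sample_B(lam, eps, rng)) for _ in range(n)) / n
```

With $\lambda = 2$, $\epsilon = 0.5$, and $s = 1$, the empirical average `mc` agrees with `laplace_B(s, lam, eps)` to within Monte Carlo error.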

Corollary 3.2. The Laplace transform of the distribution of $U_n$, for $n \ge 0$, is given by
\[
\mathbb{E}\bigl[e^{-sU_n}\bigr] = \frac{\lambda^n}{\bigl(\lambda + s\,e^{(\lambda+s)\epsilon}\bigr)^{n}}. \tag{3.5}
\]

Proof. We use Corollary 2.5 and Theorem 3.1 to compute the Laplace transform of the distribution of $U_n$, since $U_n = \sum_{i=1}^{n}(B_i + D_i)$:
\[
\mathbb{E}\bigl[e^{-sU_n}\bigr] = \prod_{i=1}^{n} \mathbb{E}\bigl[e^{-sB_i}\bigr]\,\mathbb{E}\bigl[e^{-sD_i}\bigr] = \Bigl(\frac{\lambda+s}{\lambda + s\,e^{(\lambda+s)\epsilon}}\Bigr)^{n}\Bigl(\frac{\lambda}{\lambda+s}\Bigr)^{n}, \tag{3.6}
\]
hence the result.

Let us define the function $p_n$ as
\[
p_n : x \in \mathbf{R}_+ \longmapsto p_n(x) = \Pr\bigl(\beta_0(x) = n\bigr), \tag{3.7}
\]
that is, $p_n(x)$ is the probability of having $n$ clusters in the interval $[0,x]$. Since $0 \le p_n(x) \le 1$ for all $x \in \mathbf{R}_+$, the Laplace transform of $p_n$ with respect to $x$,
\[
\mathcal{L}\{p_n\}(s) = \int_0^{\infty} e^{-sx}\, p_n(x)\, dx, \tag{3.8}
\]
is well defined.

Theorem 3.3. For any $n \ge 0$, the Laplace transform of $p_n$ is given by
\[
\mathcal{L}\{p_n\}(s) = \frac{\lambda^n\, e^{(\lambda+s)\epsilon}}{\bigl(\lambda + s\,e^{(\lambda+s)\epsilon}\bigr)^{n+1}}. \tag{3.9}
\]

Proof. We note that (see Figure 3)
\[
\{\beta_0(x) \ge n\} \Longleftrightarrow \begin{cases} \{\Delta X_0 + U_{n-1} + B_n \le x\} & \text{if } n \ge 1,\\ \{\Delta X_0 < \infty\} & \text{if } n = 0. \end{cases} \tag{3.10}
\]
Hence,
\[
\Pr\bigl(\beta_0(x) = 0\bigr) = 1 - \Pr\bigl(\Delta X_0 + B_1 \le x\bigr), \tag{3.11}
\]
\[
\Pr\bigl(\beta_0(x) = n\bigr) = \Pr\bigl(\Delta X_0 + U_{n-1} + B_n \le x\bigr) - \Pr\bigl(\Delta X_0 + U_n + B_{n+1} \le x\bigr). \tag{3.12}
\]
Let
\[
Y_n = \begin{cases} \Delta X_0 + U_{n-1} + B_n & \text{if } n \ge 1,\\ 0 & \text{if } n = 0; \end{cases} \tag{3.13}
\]
then we have, for $n \ge 1$,
\[
\begin{aligned}
\mathcal{L}\bigl\{\Pr(Y_n \le \cdot)\bigr\}(s) &= \int_0^{\infty} \Pr(Y_n \le x)\, e^{-sx}\, dx = \int_0^{\infty} \int_0^{x} dP_{Y_n}(y)\, e^{-sx}\, dx = \frac{1}{s}\,\mathbb{E}\bigl[e^{-sY_n}\bigr]\\
&= \frac{1}{s}\,\mathbb{E}\bigl[e^{-s\Delta X_0}\bigr]\,\mathbb{E}\bigl[e^{-sU_{n-1}}\bigr]\,\mathbb{E}\bigl[e^{-sB_n}\bigr] = \frac{\lambda^n}{s\,\bigl(\lambda + s\,e^{(\lambda+s)\epsilon}\bigr)^{n}},
\end{aligned} \tag{3.14}
\]
where we used Corollary 2.5 in the last line. For $n = 0$, the Laplace transform is trivially $\mathcal{L}\{\Pr(Y_0 \le \cdot)\}(s) = 1/s$. Substituting (3.14) into the Laplace transform of both sides of (3.12) yields
\[
\mathcal{L}\{p_n\}(s) = \mathcal{L}\bigl\{\Pr(Y_n \le \cdot)\bigr\}(s) - \mathcal{L}\bigl\{\Pr(Y_{n+1} \le \cdot)\bigr\}(s) = \frac{\lambda^n\, e^{(\lambda+s)\epsilon}}{\bigl(\lambda + s\,e^{(\lambda+s)\epsilon}\bigr)^{n+1}}, \qquad n \ge 0. \tag{3.15}
\]
The proof is thus complete.

Lemma 3.4. Let π‘š be an positive integer. For any π‘₯>0, when πœ–β†’0, 𝔼[π›½π‘š0]→𝔼[π‘π‘šπΏ].

Proof. Since there is almost surely a finite number of points in $[0,x]$, for almost all sample-paths there exists $\eta > 0$ such that $\Delta X_j \ge \eta$ for any $j = 1, \dots, N_x$. Hence, for $\epsilon < \eta$, $\beta_0(x) = N_x$. This implies that $\beta_0(x)$ tends almost surely to $N_x$ as $\epsilon$ goes to 0. Moreover, it is immediate from the very definition of $\beta_0(x)$ that $\beta_0(x) \le N_x$. Since, for any $m$, $\mathbb{E}[N_x^m]$ is finite, the proof follows by dominated convergence.

Let Li𝑑(𝑧), 𝑧,π‘‘βˆˆπ‘, 𝑧<1, be the polylogarithm function with parameter 𝑑, defined by Li𝑑(𝑧)=βˆžξ“π‘˜=1π‘§π‘˜π‘˜π‘‘β‹…(3.16) For π‘š a positive integer, consider the function of π‘₯π‘€π‘šπ›½0ξ€Ίπ›½βˆΆπ‘₯βŸΌπ”Όπ‘š0(ξ€»=π‘₯)βˆžξ“π‘–=0π‘–π‘šπ‘π‘–(π‘₯).(3.17) Its Laplace transform is given by β„’ξ‚†π‘€π‘šπ›½0ξ‚‡ξ€œ(𝑠)=∞0𝔼𝛽0(π‘₯)π‘šξ€»π‘’βˆ’π‘ πΏπ‘‘π‘₯.(3.18)

Corollary 3.5. Let $\alpha$ be defined by
\[
\alpha = \frac{e^{\epsilon\lambda}}{\lambda}\, s\, e^{\epsilon s}. \tag{3.19}
\]
The Laplace transform of the $m$th moment of $\beta_0(L)$ is
\[
\mathcal{L}\bigl\{M^m_{\beta_0}\bigr\}(s) = \frac{\alpha}{s(\alpha+1)}\, \mathrm{Li}_{-m}\Bigl(\frac{1}{\alpha+1}\Bigr), \tag{3.20}
\]
which converges provided that $\alpha > 0$.

Proof. By Theorem 3.3 and (3.19), $\mathcal{L}\{p_i\}(s) = (\alpha/s)\,(\alpha+1)^{-(i+1)}$. Applying the Laplace transform to both sides of (3.17), we get
\[
\mathcal{L}\bigl\{M^m_{\beta_0}\bigr\}(s) = \sum_{i=1}^{\infty} i^m\, \mathcal{L}\{p_i\}(s) = \frac{\alpha}{s(\alpha+1)} \sum_{i=1}^{\infty} i^m \Bigl(\frac{1}{\alpha+1}\Bigr)^{i} = \frac{\alpha}{s(\alpha+1)}\, \mathrm{Li}_{-m}\Bigl(\frac{1}{\alpha+1}\Bigr), \tag{3.21}
\]
concluding the proof.

We define {π‘šπ‘˜} as the Stirling number of second kind [15]; that is, {π‘šπ‘˜} is the number of ways to partition a set of π‘š objects into π‘˜ groups. They are intimately related to polylogarithm by the following identity (see [16]) valid for any positive integer π‘š, Liβˆ’π‘š(𝑧)=π‘šξ“π‘˜=0(βˆ’1)π‘š+π‘˜ξ€½π‘˜!π‘š+1π‘˜+1ξ€Ύ(1βˆ’π‘§)π‘˜+1β‹…(3.22)

Corollary 3.6. The π‘šth moment of the number of clusters on the interval [0,𝐿] is given by π‘€π‘šπ›½0(𝐿)=π‘šξ“π‘˜=1⎧βŽͺ⎨βŽͺβŽ©π‘šπ‘˜βŽ«βŽͺ⎬βŽͺβŽ­ξ‚€πΏπœ–ξ‚βˆ’π‘˜π‘˜ξ€·πœ†πœ–π‘’βˆ’πœ–πœ†ξ€Έπ‘˜πŸ{𝐿/πœ–>π‘˜}.(3.23)

Proof. Using (3.22) with $z = 1/(\alpha+1)$, so that $1 - z = \alpha/(\alpha+1)$, the result of Corollary 3.5 becomes
\[
\mathcal{L}\bigl\{M^m_{\beta_0}\bigr\}(s) = \frac{1}{s} \sum_{k=0}^{m} (-1)^{m+k}\, k!\, {m+1 \brace k+1}\, \frac{(\alpha+1)^{k}}{\alpha^{k}} = \frac{1}{s} \sum_{k=0}^{m} c_{k,m}\, \frac{1}{\alpha^{k}}, \tag{3.24}
\]
where, expanding $(\alpha+1)^k$ and collecting the powers of $1/\alpha$, the coefficients $c_{k,m}$ are the integers
\[
c_{k,m} = \sum_{j=k}^{m} (-1)^{m+j}\, \binom{j}{k}\, j!\, {m+1 \brace j+1}. \tag{3.25}
\]
Using the following identity of the Stirling numbers [17],
\[
\sum_{j=0}^{m} (-1)^{j}\, j!\, {m+1 \brace j+1} = 0, \tag{3.26}
\]
we find that $c_{0,m} = 0$ for $m$ a positive integer. So, since $1/\alpha = \lambda e^{-\epsilon\lambda}\, e^{-\epsilon s}/s$, we can write the Laplace transform of the moments as
\[
\mathcal{L}\bigl\{M^m_{\beta_0}\bigr\}(s) = \sum_{k=1}^{m} c_{k,m}\, \frac{\bigl(\lambda e^{-\epsilon\lambda}\bigr)^{k}}{s^{k+1}\, e^{k\epsilon s}}, \tag{3.27}
\]
and apply the inverse Laplace transform to both sides of (3.27) to obtain
\[
M^m_{\beta_0}(L) = \sum_{k=1}^{m} c_{k,m}\, \bigl(\lambda e^{-\epsilon\lambda}\bigr)^{k}\, \mathcal{L}^{-1}\Bigl\{\frac{e^{-k\epsilon s}}{s^{k+1}}\Bigr\}(L) = \sum_{k=1}^{m} \frac{c_{k,m}}{k!}\, (L - k\epsilon)^{k}\, \bigl(\lambda e^{-\epsilon\lambda}\bigr)^{k}\, \mathbf{1}_{\{L > k\epsilon\}}. \tag{3.28}
\]
According to Lemma 3.4, when $\epsilon \to 0$, we must recover the moments of $N_L$:
\[
M^m_{\beta_0}(L) \longrightarrow \mathbb{E}\bigl[N_L^m\bigr] = \sum_{k=1}^{m} \frac{c_{k,m}}{k!}\, (L\lambda)^{k}. \tag{3.29}
\]
On the other hand, the moments of the Poisson random variable $N_L$ of mean $L\lambda$ are
\[
\mathbb{E}\bigl[N_L^m\bigr] = \sum_{k=1}^{m} {m \brace k}\, (L\lambda)^{k}, \tag{3.30}
\]
and, since this holds for any $\lambda > 0$, identifying the coefficients shows that
\[
c_{k,m} = {m \brace k}\, k!. \tag{3.31}
\]
Thus, we have proved (3.23) for any positive integer $m$.
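Corollary 3.6 is easy to evaluate numerically once the Stirling numbers are tabulated via their standard recurrence. The sketch below is an illustration of ours (the helper names are not from the paper); it checks that the cases m = 1 and m = 2 reproduce the mean and variance formulas given later in Section 5:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(m, k):
    """Stirling number of the second kind, via the recurrence
    {m, k} = k * {m-1, k} + {m-1, k-1} with {0, 0} = 1."""
    if m == 0 and k == 0:
        return 1
    if m == 0 or k == 0:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

def moment(m, L, lam, eps):
    """m-th moment of beta_0(L), equation (3.23) of Corollary 3.6."""
    return sum(stirling2(m, k) * ((L - k * eps) * lam * math.exp(-eps * lam)) ** k
               for k in range(1, m + 1) if L > k * eps)

L, lam, eps = 4.0, 1.5, 1.0
mean = moment(1, L, lam, eps)           # should equal (L - eps)*lam*exp(-eps*lam)
var = moment(2, L, lam, eps) - mean ** 2
```

The deterministic identities `mean == (L - eps)*lam*exp(-eps*lam)` and `var == (L - eps)*lam*exp(-eps*lam) + eps*(3*eps - 2*L)*lam**2*exp(-2*eps*lam)` (for L > 2ε) hold to machine precision.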

Theorem 3.7. For any $n$, $L$, $\lambda$, and $\epsilon$, we have
\[
\Pr\bigl(\beta_0(L) = n\bigr) = \frac{1}{n!} \sum_{i=0}^{\lfloor L/\epsilon \rfloor - n} \frac{(-1)^{i}}{i!}\, \bigl[(L - (n+i)\epsilon)\,\lambda e^{-\lambda\epsilon}\bigr]^{n+i}. \tag{3.32}
\]

Proof. Since $\beta_0(L) \le N_L$ and since $\mathbb{E}[e^{sN_L}]$ is finite for any $s \in \mathbf{R}$, we have, for any $s \ge 0$,
\[
\mathbb{E}\bigl[e^{-s\beta_0(L)}\bigr] = \sum_{k=0}^{\infty} \frac{(-1)^{k} s^{k}}{k!}\, \mathbb{E}\bigl[\beta_0^{k}(L)\bigr]. \tag{3.33}
\]
Rearranging the terms of the right-hand side and substituting $M^k_{\beta_0}(L)$ by the result of (3.23), we obtain
\[
\mathbb{E}\bigl[e^{-s\beta_0(L)}\bigr] = \sum_{k=0}^{\infty} \Bigl( (L - k\epsilon)^{k}\, \bigl(\lambda e^{-\lambda\epsilon}\bigr)^{k}\, \mathbf{1}_{\{L > k\epsilon\}} \sum_{j=k}^{\infty} \frac{(-s)^{j}}{j!}\, {j \brace k} \Bigr). \tag{3.34}
\]
Furthermore, it is known (see [17]) that
\[
\sum_{j=k}^{\infty} \frac{x^{j}}{j!}\, {j \brace k} = \frac{1}{k!}\,(e^{x}-1)^{k}. \tag{3.35}
\]
Hence,
\[
\mathbb{E}\bigl[e^{-s\beta_0(L)}\bigr] = \sum_{k=0}^{\infty} (L - k\epsilon)^{k}\, \bigl(\lambda e^{-\lambda\epsilon}\bigr)^{k}\, \mathbf{1}_{\{L > k\epsilon\}}\, \frac{(e^{-s}-1)^{k}}{k!}. \tag{3.36}
\]
Expanding $(e^{-s}-1)^{k} = \sum_{n=0}^{k} \binom{k}{n} (-1)^{k-n} e^{-ns}$ and collecting, for each $n$, the coefficient of $e^{-ns}$, we get
\[
\Pr\bigl(\beta_0(L) = n\bigr) = \sum_{k=n}^{\infty} (-1)^{k-n}\, \binom{k}{n}\, \frac{1}{k!}\, \bigl[(L - k\epsilon)\,\lambda e^{-\lambda\epsilon}\bigr]^{k}\, \mathbf{1}_{\{L > k\epsilon\}}. \tag{3.37}
\]
After the change of index $k = n + i$ and some simple algebra, we find the expression of the probability that the interval contains $n$ complete clusters:
\[
\Pr\bigl(\beta_0(L) = n\bigr) = p_n(L) = \frac{1}{n!} \sum_{i=0}^{\lfloor L/\epsilon \rfloor - n} \frac{(-1)^{i}}{i!}\, \bigl[(L - (n+i)\epsilon)\,\lambda e^{-\lambda\epsilon}\bigr]^{n+i}, \tag{3.38}
\]
concluding the proof.
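Theorem 3.7 lends itself to a Monte Carlo check. The sketch below is an illustrative validation of ours (assuming the cluster conventions of Section 2): it samples Poisson configurations on [0, L], counts complete clusters, and compares the empirical frequencies with the closed form:

```python
import math
import random

def p_n(n, L, lam, eps):
    """Pr(beta_0(L) = n), Theorem 3.7, equation (3.32)."""
    total = 0.0
    for i in range(int(L / eps) - n + 1):
        total += ((-1) ** i / math.factorial(i)) * \
                 ((L - (n + i) * eps) * lam * math.exp(-lam * eps)) ** (n + i)
    return total / math.factorial(n)

def complete_clusters(lam, L, eps, rng):
    """Sample a Poisson process on [0, L]; count clusters whose
    end (last point + eps) stays within [0, L]."""
    x, pts = 0.0, []
    while True:
        x += rng.expovariate(lam)
        if x > L:
            break
        pts.append(x)
    count, i = 0, 0
    while i < len(pts):
        j = i
        while j + 1 < len(pts) and pts[j + 1] - pts[j] <= eps:
            j += 1
        if pts[j] + eps <= L:
            count += 1
        i = j + 1
    return count

rng = random.Random(1)
L, lam, eps, N = 4.0, 1.5, 1.0, 50_000
freq = [0] * 8
for _ in range(N):
    freq[min(complete_clusters(lam, L, eps, rng), 7)] += 1
```

The probabilities $p_n$ sum to one, and the simulated frequencies match the theorem to within sampling error.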

Lemma 3.8. For π‘₯β‰₯0, 𝑝𝑛(π‘₯) has the three following properties.(i)𝑝𝑛(π‘₯) is differentiable.(ii)limπ‘₯β†’βˆžπ‘π‘›(π‘₯)=0.(iii)limπ‘₯β†’βˆžπ‘‘π‘π‘›(π‘₯)/𝑑π‘₯=0.

Proof. Let $j$ be a nonnegative integer and set $a = e^{\lambda\epsilon}/\lambda$. The function is obviously differentiable when $x/\epsilon$ is not an integer. Besides, at $x = (n+j)\epsilon$ the only new term entering the sum (3.38) is the one of index $i = j$, so that
\[
\lim_{x \to (n+j)\epsilon^{+}} p_n(x) - \lim_{x \to (n+j)\epsilon^{-}} p_n(x) = \lim_{x \to (n+j)\epsilon^{+}} \frac{(-1)^{j}}{n!\, j!} \Bigl( \frac{x - (n+j)\epsilon}{a} \Bigr)^{n+j}. \tag{3.39}
\]
Since this right-hand term, as a function of $x$, is zero at the limit, as is its derivative, for all $j$, the function is also differentiable when $x/\epsilon$ is an integer, which proves (i). Items (ii) and (iii) are direct consequences of the final value theorem applied to the Laplace transforms of $p_n$ and of its derivative.

The expression of $p_n$ gives us a Laplace pair between the $x$ and $s$ domains (with $a = e^{\lambda\epsilon}/\lambda$ as above):
\[
\mathbf{1}_{\{x \ge 0\}}\, \frac{1}{n!} \sum_{i=0}^{\lfloor x/\epsilon \rfloor - n} \frac{(-1)^{i}}{i!} \Bigl( \frac{x - (n+i)\epsilon}{a} \Bigr)^{n+i} \ \overset{\mathcal{L}}{\longleftrightarrow}\ \frac{a\, e^{\epsilon s}}{\bigl(a\, s\, e^{\epsilon s} + 1\bigr)^{n+1}}. \tag{3.40}
\]
We can use this relation to find the distributions of $B_i$ and $U_n$.

Theorem 3.9. The probability density functions of $B_i$ and $U_n$, denoted respectively by $f_{B_i}(x)$ and $f_{U_n}(x)$, are given by
\[
f_{B_i}(x) = \Bigl[ \lambda e^{-\epsilon\lambda}\, p_0(x-\epsilon) + e^{-\epsilon\lambda}\, \frac{d}{dx}\, p_0(x-\epsilon) \Bigr] \mathbf{1}_{\{x > \epsilon\}}, \tag{3.41}
\]
\[
f_{U_n}(x) = \lambda e^{-\epsilon\lambda}\, p_{n-1}(x-\epsilon)\, \mathbf{1}_{\{x > \epsilon\}}, \tag{3.42}
\]
where the expressions of $p_0(x-\epsilon)$ and $(d/dx)\,p_0(x-\epsilon)$ are straightforwardly obtained from (3.32).

Proof. According to Theorem 3.1 and Theorem 3.3,
\[
\mathbb{E}\bigl[e^{-sB_i}\bigr] = \frac{\lambda+s}{\lambda + s\,e^{(\lambda+s)\epsilon}} = \lambda e^{-\epsilon\lambda}\, e^{-\epsilon s}\, \mathcal{L}\{p_0\}(s) + e^{-\epsilon\lambda}\, e^{-\epsilon s}\, s\, \mathcal{L}\{p_0\}(s). \tag{3.43}
\]
Here, using the inverse Laplace transform established in (3.40), and remembering that $p_0(0^-) = 0$, we get an analytical expression for $f_{B_i}(x)$, proving (3.41).
Proceeding in a similar fashion, we can find the distribution of $U_n$ by inverting its Laplace transform given by Corollary 3.2:
\[
\mathbb{E}\bigl[e^{-sU_n}\bigr] = \frac{\lambda^{n}}{\bigl(\lambda + s\,e^{(\lambda+s)\epsilon}\bigr)^{n}} = \lambda e^{-\epsilon\lambda}\, e^{-\epsilon s}\, \mathcal{L}\{p_{n-1}\}(s). \tag{3.44}
\]
We thus have (3.42).

We can also obtain the probability that the segment $[0,L]$ is completely covered by the sensors. To do this, recall that the first point (if there is one) covers the interval $[X_1 - \epsilon, X_1 + \epsilon]$.

Theorem 3.10. Let π‘…π‘š,𝑛(π‘₯) be defined as follows: π‘…π‘š,𝑛(π‘₯)=⌊π‘₯/πœ–βŒ‹βˆ’1𝑖=π‘šξƒ¬ξ€·π‘’βˆ’πœ†πœ–ξ€Έπ‘–+𝑛𝑖+𝑛𝑗=0[])(πœ†(1βˆ’π‘–)πœ–βˆ’π‘₯𝑗⋅𝑗!(3.45) Then, []π‘ƒπ‘Ÿ(0,𝐿iscovered)=𝑅0,1(𝐿)βˆ’π‘’βˆ’πœ†πœ–π‘…0,1(πΏβˆ’πœ–)βˆ’π‘’βˆ’πœ†πœ–π‘…1,0(𝐿)+π‘’βˆ’2πœ†πœ–π‘…1,0(πΏβˆ’πœ–).(3.46)

Proof. The condition of total coverage is
\[
\forall x \in [0,L],\ \exists X_i \in [0,L] \text{ such that } x \in [X_i - \epsilon,\, X_i + \epsilon], \tag{3.47}
\]
which means that
\[
\bigl\{[0,L] \text{ is covered}\bigr\} \Longleftrightarrow \bigl\{B_1 \ge L - X_1\bigr\} \cap \bigl\{X_1 \le \epsilon\bigr\}. \tag{3.48}
\]
Hence,
\[
\Pr\bigl([0,L] \text{ is covered}\bigr) = \int_0^{\epsilon} \Pr\bigl(B_1 \ge L - X_1 \mid X_1 = x\bigr)\, dP_{X_1}(x), \tag{3.49}
\]
and, since $B_1$ and $X_1$ are independent,
\[
\Pr\bigl([0,L] \text{ is covered}\bigr) = \int_0^{\epsilon} \int_{L-x}^{\infty} f_{B_1}(u)\, \lambda e^{-\lambda x}\, du\, dx. \tag{3.50}
\]
The result then follows from Theorem 3.9, Lemma 3.8, and some tedious but straightforward algebra.

4. Other Scenarios

The method can be used to calculate 𝑝𝑛 for other definitions of the number of clusters. We consider two other definitions: the number of incomplete clusters and the number of clusters in a circle.

4.1. Number of Incomplete Clusters

The major difference with Section 3 is that a cluster is now taken into account as soon as one of its points lies inside the interval $[0,L]$. So, for instance, in Figure 3, we actually count $n+1$ incomplete clusters. We define $\beta'_0(L)$ as the number of incomplete clusters on the interval $[0,L]$.

Theorem 4.1. Let 𝐺(π‘˜) be defined as 𝐺(π‘˜)=(βˆ’1)π‘˜ξƒ©π‘’π‘˜βˆ’π‘˜πœ†πœ–ξ“π‘—=0[]πœ†(π‘˜πœ–βˆ’πΏ)𝑗𝑗!βˆ’π‘’βˆ’πœ†πΏξƒͺ𝟏{𝑇>π‘˜πœ–},(4.1) for π‘˜βˆˆβ„•+ and 𝐺(βˆ’1)=π‘’βˆ’πœ†πΏ. Then, ξ€·π›½π‘ƒπ‘Ÿξ…ž0ξ€Έ=(𝐿)=π‘›βŒŠπΏ/πœ–βŒ‹+1𝑖=𝑛(βˆ’1)𝑖+π‘›βŽ›βŽœβŽœβŽπ‘–π‘›βŽžβŽŸβŽŸβŽ (𝐺(π‘–βˆ’1)+𝐺(𝑖)),for𝑛β‰₯0.(4.2)

Proof. The condition of π›½ξ…ž0(𝐿)β‰₯𝑛 is now given by ξ€½π›½ξ…ž0ξ€ΎβŸΊξ‚»ξ€½β‰₯𝑛Δ𝑋0+π‘ˆπ‘›βˆ’1≀𝐿if𝑛β‰₯1,Δ𝑋0ξ€Ύ<∞if𝑛=0.(4.3) We define π‘Œπ‘› as π‘Œπ‘›=Δ𝑋0+π‘ˆπ‘›βˆ’1if𝑛β‰₯1,0if𝑛=0.(4.4) Repeating the same calculations, we find the Laplace transform of Pr(π›½ξ…ž0(β‹…)=𝑛): ℒ𝛽Prξ…ž0⎧βŽͺ⎨βŽͺβŽ©πœ†(β‹…)=𝑛(𝑠)=𝑒𝑠+πœ†πœ–πœ†πœ†π‘’πœ–π‘ π‘’ξ€·ξ€·πœ–πœ†ξ€Έ/πœ†π‘ π‘’πœ–π‘ ξ€Έ+1𝑛1if𝑛β‰₯1,πœ†+𝑠if𝑛=0.(4.5) With this expression, following the lines of Lemma 3.4, we obtain β„’ξ€½π”Όξ€Ίπ›½ξ…ž0(β‹…)π‘šξ€»ξ€Ύ(𝑠)=π‘š+1ξ“π‘˜=1⎧βŽͺ⎨βŽͺβŽ©π‘˜βŽ«βŽͺ⎬βŽͺ⎭1π‘š+1(π‘˜βˆ’1)!π‘ π‘˜πœ†ξ‚΅πœ†+π‘ πœ†π‘’βˆ’πœ†πœ–π‘’π‘ πœ–ξ‚Άπ‘˜βˆ’1.(4.6) Then, we write πœ†1πœ†+π‘ π‘ π‘˜=(βˆ’1)π‘˜πœ†π‘˜βˆ’11+πœ†+π‘ π‘˜ξ“π‘–=11π‘ π‘–ξ‚€βˆ’1πœ†ξ‚π‘˜βˆ’π‘–,(4.7) to find an expression with a well-known Laplace transform inverse, and, after inverting it, we obtain 𝔼𝛽0ξ…žπ‘šξ€»=π‘šξ“π‘˜=0⎧βŽͺ⎨βŽͺ⎩⎫βŽͺ⎬βŽͺβŽ­π‘š+1π‘˜+1π‘˜!𝐺(π‘˜).(4.8) Expanding the Laplace transform of the distribution of π›½ξ…ž0(𝐿) in a Taylor series and rearranging terms, we get π”Όξ‚ƒπ‘’βˆ’π‘ π›½β€²0(𝐿)ξ‚„=1+𝐺(0)βˆžξ“π‘—=1(βˆ’π‘ )π‘—βŽ§βŽͺ⎨βŽͺβŽ©π‘—1⎫βŽͺ⎬βŽͺ⎭+βŽ›βŽœβŽœβŽπ‘—!βˆžξ“π‘˜=1𝐺(π‘˜)βˆžξ“π‘—=π‘˜(βˆ’π‘ )π‘—βŽ§βŽͺ⎨βŽͺ⎩⎫βŽͺ⎬βŽͺβŽ­βŽžβŽŸβŽŸβŽ π‘—!𝑗+1π‘˜+1.(4.9) Now, we use another recurrence that the Stirling numbers obey [17], ⎧βŽͺ⎨βŽͺ⎩⎫βŽͺ⎬βŽͺ⎭=⎧βŽͺ⎨βŽͺβŽ©π‘—π‘˜βŽ«βŽͺ⎬βŽͺ⎭⎧βŽͺ⎨βŽͺβŽ©π‘—βŽ«βŽͺ⎬βŽͺβŽ­π‘—+1π‘˜+1+(π‘˜+1)π‘˜+1,(4.10) to get βˆžξ“π‘—=π‘˜π‘₯π‘—βŽ§βŽͺ⎨βŽͺ⎩⎫βŽͺ⎬βŽͺ⎭=𝑗!𝑗+1π‘˜+1βˆžξ“π‘—=π‘˜π‘₯π‘—βŽ›βŽœβŽœβŽβŽ§βŽͺ⎨βŽͺβŽ©π‘—π‘˜βŽ«βŽͺ⎬βŽͺ⎭⎧βŽͺ⎨βŽͺβŽ©π‘—βŽ«βŽͺ⎬βŽͺ⎭⎞⎟⎟⎠=1𝑗!+(π‘˜+1)π‘˜+1π‘˜!(𝑒π‘₯βˆ’1)π‘˜+1π‘˜!(𝑒π‘₯βˆ’1)π‘˜+1.(4.11) Hence, π”Όξ‚ƒπ‘’βˆ’π‘ π›½β€²0(𝐿)ξ‚„=1+βˆžξ“π‘˜=1(𝐺(π‘˜βˆ’1)+𝐺(π‘˜))(π‘’βˆ’π‘ βˆ’1)π‘˜.(4.12) Inverting this expression for any nonnegative integer 𝑛, we have the searched distribution.

4.2. Number of Clusters in a Circle

We now investigate the case where the points of the process are deployed over a circle, and we want to count the number of complete clusters, which corresponds to computing the Euler characteristic of the covered region; we therefore call this quantity $\chi$. Without loss of generality, we can choose an arbitrary point to be the origin.

Theorem 4.2. The distribution of the Euler characteristic $\chi(L)$, when the points are deployed over a circle of perimeter $L$, is given, for $n \ge 0$, by
\[
\Pr\bigl(\chi(L) = n\bigr) = e^{-\lambda L}\, \mathbf{1}_{\{n=0\}} + \bigl(1 - e^{-\lambda L}\bigr)\, \frac{\lambda e^{-\epsilon\lambda}}{n!} \sum_{i=0}^{\lfloor L/\epsilon \rfloor - n} \frac{(-1)^{i}}{i!}\, \bigl[(L - (n+i)\epsilon)\,\lambda e^{-\epsilon\lambda}\bigr]^{n+i-1} \Bigl( L + (n+i)\Bigl(\frac{1}{\lambda} - \epsilon\Bigr) \Bigr). \tag{4.13}
\]

Proof. If there are no points on the circle, $\chi(L) = 0$. Otherwise, if there is at least one point, we choose the origin at this point, and we have the equivalence of events
\[
\{\chi(L) \ge n\} \Longleftrightarrow \begin{cases} \{U_{n-1} + B_n \le L\} \cap \{N_L > 0\} & \text{if } n \ge 1,\\ \{\Delta X_0 < \infty\} & \text{if } n = 0. \end{cases} \tag{4.14}
\]
Figure 4 presents an example of this equivalence.
We can define $Y_n$ as
\[
Y_n = \begin{cases} U_{n-1} + B_n & \text{if } n \ge 1,\\ 0 & \text{if } n = 0, \end{cases} \tag{4.15}
\]
to find the Laplace transform of $\Pr(\chi(\cdot) = n)$:
\[
\mathcal{L}\bigl\{\Pr(\chi(\cdot) = n)\bigr\}(s) = \bigl(1 - e^{-\lambda L}\bigr)\, \frac{\lambda+s}{\lambda}\, \frac{(e^{\epsilon\lambda}/\lambda)\, e^{\epsilon s}}{\bigl((e^{\epsilon\lambda}/\lambda)\, s\, e^{\epsilon s} + 1\bigr)^{n+1}}. \tag{4.16}
\]
The number of clusters is almost surely equal to the number of points when $\epsilon \to 0$, so, proceeding as in Corollary 3.6,
\[
\mathbb{E}\bigl[\chi(L)^{m}\bigr] = \bigl(1 - e^{-\lambda L}\bigr)\, \lambda e^{-\epsilon\lambda} \sum_{k=1}^{m} {m \brace k}\, \bigl[(L - k\epsilon)\,\lambda e^{-\epsilon\lambda}\bigr]^{k-1} \Bigl( L + k\Bigl(\frac{1}{\lambda} - \epsilon\Bigr) \Bigr)\, \mathbf{1}_{\{L > k\epsilon\}}. \tag{4.17}
\]
Expanding the Laplace transform in a Taylor series and rearranging terms, as we did previously, yields
\[
\mathbb{E}\bigl[e^{-s\chi(L)}\bigr] = \bigl(1 - e^{-\lambda L}\bigr)\, \lambda e^{-\epsilon\lambda} \sum_{k=0}^{\infty} \bigl[(L - k\epsilon)\,\lambda e^{-\epsilon\lambda}\bigr]^{k-1} \Bigl( L + k\Bigl(\frac{1}{\lambda} - \epsilon\Bigr) \Bigr)\, \mathbf{1}_{\{L > k\epsilon\}} \sum_{j=k}^{\infty} \frac{(-s)^{j}}{j!}\, {j \brace k}. \tag{4.18}
\]
Since
\[
\sum_{j=k}^{\infty} \frac{(-s)^{j}}{j!}\, {j \brace k} = \frac{(e^{-s}-1)^{k}}{k!}, \tag{4.19}
\]
we can directly invert this Laplace transform, add the case where there are no points to the event $\{\chi(L) = 0\}$, and the theorem is proved.

5. Examples

We consider some examples to illustrate the results of the paper. Here, the behavior of the mean and of the variance of $\beta_0(L)$, as well as of $\Pr(\beta_0(L) = n)$, is presented.

From (3.23), we have that $\mathbb{E}[\beta_0(L)]$ is given by
\[
\mathbb{E}\bigl[\beta_0(L)\bigr] = (L - \epsilon)\,\lambda e^{-\epsilon\lambda}\, \mathbf{1}_{\{L > \epsilon\}}. \tag{5.1}
\]
This expression agrees with the intuition in that there are three typical regions for a fixed $\epsilon$. When $\lambda$ is much smaller than $1/\epsilon$, the number of clusters is approximately the number of sensors, since connections between sensors are unlikely; indeed, $\mathbb{E}[\beta_0(L)] \sim L\lambda$ when $\lambda \to 0$. As we increase $\lambda$, the mean number of direct connections overcomes the mean number of sensors, and, at some value of $\lambda$, we expect $\mathbb{E}[\beta_0(L)]$ to decrease, since adding a point is then likely to connect disconnected clusters. We remark that the maximum occurs exactly at $\lambda = 1/\epsilon$, that is, when the mean distance between two sensors equals the threshold distance for them to be connected. At this maximum, $\mathbb{E}[\beta_0(L)]$ takes the value $(L/\epsilon - 1)e^{-1}$. Finally, when $\lambda$ is very large, all sensors tend to be connected, and there is only one cluster, which typically extends beyond $L$, so there is no complete cluster within the interval $[0,L]$; this is evident upon letting $\lambda \to \infty$ in the last equation. Figure 5 shows this behavior when $L = 4$ and $\epsilon = 1$.
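The location and value of this maximum can be confirmed directly from (5.1); the snippet below is a minimal illustration of ours:

```python
import math

def mean_beta0(lam, L, eps):
    """E[beta_0(L)] = (L - eps) * lam * exp(-eps * lam), for L > eps (equation (5.1))."""
    return (L - eps) * lam * math.exp(-eps * lam)

L, eps = 4.0, 1.0
peak = mean_beta0(1.0 / eps, L, eps)  # value at lambda = 1/eps, i.e. (L/eps - 1)/e
```

Evaluating the mean slightly to either side of $\lambda = 1/\epsilon$ confirms that it is indeed the maximum.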

The variance can also be obtained from (3.23):
\[
\operatorname{Var}\bigl(\beta_0(L)\bigr) = (L - \epsilon)\,\lambda e^{-\epsilon\lambda}\, \mathbf{1}_{\{L > \epsilon\}} + (L - 2\epsilon)^{2}\,\lambda^{2} e^{-2\epsilon\lambda}\, \mathbf{1}_{\{L > 2\epsilon\}} - (L - \epsilon)^{2}\,\lambda^{2} e^{-2\epsilon\lambda}\, \mathbf{1}_{\{L > \epsilon\}}, \tag{5.2}
\]
and, under the condition that $L > 2\epsilon$,
\[
\operatorname{Var}\bigl(\beta_0(L)\bigr) = (L - \epsilon)\,\lambda e^{-\epsilon\lambda} + \epsilon\,(3\epsilon - 2L)\,\lambda^{2} e^{-2\epsilon\lambda}. \tag{5.3}
\]

Figure 6 shows a plot of $\operatorname{Var}(\beta_0(L))$ as a function of $\lambda$ for $L = 4$ and $\epsilon = 1$. We can expect that, when $\lambda$ is small compared to $1/\epsilon$, the plot should be approximately linear, since there would not be too many connections in the network, and the variance of the number of clusters should be close to the variance of the number of sensors, namely $\lambda L$. Since $\beta_0(L)$ tends almost surely to 0 when $\lambda$ goes to infinity, $\operatorname{Var}(\beta_0(L))$ should also tend to 0 in this case. Both properties are observed in the plot. Besides, we can find the critical points of this function: again, $\lambda = 1/\epsilon$ is one of them, and at this value $\operatorname{Var}(\beta_0(L)) = (L/\epsilon - 1)e^{-1} + (3 - 2L/\epsilon)e^{-2}$. The other two are the solutions of the transcendental equation
\[
\lambda e^{-\lambda\epsilon} = \frac{L - \epsilon}{2\epsilon\,(2L - 3\epsilon)}. \tag{5.4}
\]
Using the second derivative, we see that $1/\epsilon$ is actually a local minimum of the variance. Besides, if $L \le 2\epsilon$, there is just one critical point, a maximum, at $\lambda = 1/\epsilon$.

The last example of the section uses the result obtained in Theorem 3.7. We consider again $L = 4$ and $\epsilon = 1$ to obtain the following distribution:
\[
\begin{aligned}
\Pr\bigl(\beta_0(L) = 0\bigr) &= 1 - 3\lambda e^{-\lambda} + 2\lambda^{2} e^{-2\lambda} - \tfrac{1}{6}\lambda^{3} e^{-3\lambda},\\
\Pr\bigl(\beta_0(L) = 1\bigr) &= 3\lambda e^{-\lambda} - 4\lambda^{2} e^{-2\lambda} + \tfrac{1}{2}\lambda^{3} e^{-3\lambda},\\
\Pr\bigl(\beta_0(L) = 2\bigr) &= 2\lambda^{2} e^{-2\lambda} - \tfrac{1}{2}\lambda^{3} e^{-3\lambda},\\
\Pr\bigl(\beta_0(L) = 3\bigr) &= \tfrac{1}{6}\lambda^{3} e^{-3\lambda},\\
\Pr\bigl(\beta_0(L) > 3\bigr) &= 0.
\end{aligned} \tag{5.5}
\]
These expressions are simple, having at most four terms each, since $L = 4\epsilon$. We plot these functions in Figure 7. The critical points of those plots at $\lambda = 1/\epsilon$ are explained by the fact that, as a function of $\lambda$, $\Pr(\beta_0(L) = n)$ can, for every $n$, be represented as a sum
\[
\sum_{i=0}^{j} q_{i,j}\, \bigl(\lambda e^{-\lambda\epsilon}\bigr)^{i}, \tag{5.6}
\]
where the coefficients $q_{i,j}$ do not depend on $\lambda$. Since $(\lambda e^{-\lambda\epsilon})^{i}$ has a critical point at $\lambda = 1/\epsilon$ for all $i > 0$, this is also a critical point of $\Pr(\beta_0(L) = n)$. If $\lambda$ is small, we should expect $\Pr(\beta_0(L) = 0)$ to be close to one, since $N$ is then likely to have no points; for this reason, in this region, $\Pr(\beta_0(L) = n)$ is small for $n > 0$. When $\lambda$ is large, we expect very large clusters, likely to be longer than $L$, so it is unlikely to have a complete cluster in the interval, and, again, $\Pr(\beta_0(L) = 0)$ approaches unity, while $\Pr(\beta_0(L) = n)$ for $n > 0$ again becomes small.

Acknowledgment

The authors would like to thank the anonymous referee whose constructive remarks helped us to improve the presentation of this paper.