Abstract

The point process formed by a sequence of univariate random variables derived from correlated bivariate random variables, as modeled by Arnold and Strauss, has been examined. Statistical properties of the time intervals between the points, as well as the probability distributions of the number of points registered in a finite interval, have been analyzed, specifically as functions of the coefficient of correlation. The results have been applied to binary detection and to the transmission of information. Both the probability of error and the cut-off rate have been bounded. Several simulations have been generated to illustrate the theoretical results.

1. Introduction

It is known that the detection of an optical field at a low level of power yields a sequence of events, that is, a set of distinct time instants {𝜗𝑗}, 0 ≤ 𝑗 < ∞, such that 𝜗𝑗+1 ≥ 𝜗𝑗 for all 𝑗. These {𝜗𝑗}, which are the time instants of interaction between the photons and the detector device, for example a photomultiplier, constitute a random point process (RPP) for which a positive instantaneous density 𝜆(𝜗𝑗) can be defined. We consider here only simple, homogeneous, and stationary processes.

Conversely, inferring the existence of a physical optical field from knowledge of all the properties of such an RPP is not an easy problem. For example, an RPP with nonclassical properties [1] does not necessarily correspond to a nonclassical physical optical field, although its feasibility can be demonstrated [2]. This problem, mainly due to the quantum nature of the interaction between radiation and matter, is not examined here.

Nevertheless, if we use a parameterized RPP for which special values of the parameter (denoted here 𝑐) correspond to a physical field, for example a coherent or a thermal state, then it is reasonable to admit that intermediate values of 𝑐 correspond to realistic optical fields. Despite these limitations, the properties of the RPPs studied here are important in statistical optics.

As already pointed out, there are two types of processing that can be utilized to characterize such an RPP: the time interval distributions (TIDs) and the probability of number distributions (PNDs) [3]. If we choose to characterize the RPP by the TID, we can define the time interval between points 𝜃𝑗 = 𝜗𝑗+1 − 𝜗𝑗, called the residual time (or sometimes the lifetime). Its probability distribution function (PDF) 𝑤(𝜃) will be called here a triggered PDF. When the origin 𝑡0 is arbitrary, that is to say not a point of the RPP, the PDF 𝑣(𝜃) of the corresponding interval will be called a relaxed PDF.

If we now choose to characterize the RPP by the PND, we can, as above, define the relaxed PND 𝑃(𝑛,𝑡) of the random variable (RV) 𝑁(𝑡0,𝑡0+𝑡), which is the number of instants occurring within the time interval [𝑡0,𝑡0+𝑡], with 𝑡0 and 𝑡0+𝑡 ≠ 𝜗𝑗 for all 𝑗. As previously, we may also define the triggered PND 𝑄(𝑛,𝑡) of the RV 𝑁(𝜗𝑗,𝜗𝑗+𝑡), 𝑡 ≠ 𝜗𝑗, the counting process being started by a point of the RPP.

The purpose of this paper is firstly to calculate the statistical properties of the RPPs, both in terms of the TIDs and the PNDs. Secondly, we apply the results to the calculation of the performances, namely, the probability of error in binary detection and the cut-off rate in the binary transmission of information, and their variations with respect to the coefficient of correlation 𝑐. One-dimensional processes, such as the Poisson and geometric PNDs, are finally utilized to establish bounds on the performances.

In Section 2, the notations are defined and the basic equations are briefly recalled. Section 3 establishes the main results for the PDFs, both those dealing with TIDs and those dealing with PNDs, for the two extreme regimes of 𝑐: low values and high values. Finally, Section 4 presents theoretical results and curves that explain and illustrate the results of the numerical simulations.

2. Basic Equations

We need to define the Laplace transform of 𝑓(𝑡),
\[
\rho(s) = \mathcal{L}[f(t)] \triangleq \int_0^{\infty} e^{-st}\,f(t)\,\mathrm{d}t, \tag{1}
\]
and the functions
\[
\rho_1(s) = \frac{\rho(s)}{s\,(1-\rho(s))}, \qquad
\rho_2(s) = \frac{\rho(s)\,(1+\rho(s))}{s\,(1-\rho(s))}. \tag{2}
\]
The PDF of the number of events registered between 0 and 𝑡, denoted the PND, reads
\[
P(n;t) = \mathrm{Prob}\{t\in T=[t_n, t_{n+1}]\} = \mathsf{E}\bigl[\mathcal{I}(t_n<t<t_{n+1})\bigr], \tag{3}
\]
where ℐ is the indicator function, ℐ(𝑡 ∈ 𝑇) = 1 (0) according to whether the event occurs or not within 𝑡 ∈ 𝑇. Therefore
\[
P(n;t) = \mathcal{L}^{-1}[P(n;s)] = \mathcal{L}^{-1}\Bigl[\frac{1-\rho(s)}{s}\,\rho(s)^{n}\Bigr], \tag{4}
\]
where the symbol ℒ⁻¹ denotes the inverse Laplace transform. From (4), it is easily seen that (see, e.g., [4, 5])
\[
\mathsf{E}[N](t) = \mathcal{L}^{-1}[\rho_1(s)], \qquad
\mathsf{E}[N^{2}](t) = \mathcal{L}^{-1}[\rho_2(s)]. \tag{5}
\]
The symbol 𝖤 denotes the mathematical expectation. In the following, 𝑃(𝑛;𝑡) will be called the relaxed PDF. Another interesting PDF that can be derived from (4),
\[
Q(n;t) = \frac{n+1}{\mathsf{E}[N]}\,P(n+1;t), \tag{6}
\]
will be called the triggered PDF. In fact, it is known [3, 6] that these PDFs can be expressed using the moments of the time-integrated density
\[
\mathcal{J}(t) = \int_{t_0}^{t_0+t} \lambda(\vartheta)\,\mathrm{d}\vartheta, \tag{7}
\]
with 𝒥(𝑡) = 𝜆𝑡 for the Poisson RPP. Therefore
\[
P(n;t) = \mathsf{E}\Bigl[\frac{1}{n!}\,\mathcal{J}(t_0+t)^{n}\,e^{-\mathcal{J}(t_0+t)}\Bigr], \qquad
Q(n;t) = \frac{1}{\mathsf{E}[N]}\,\mathsf{E}\Bigl[\frac{1}{n!}\,\mathcal{J}(t_0+t)^{n+1}\,e^{-\mathcal{J}(t_0+t)}\Bigr]. \tag{8}
\]
For all the RPPs that we deal with here, all the PDFs depend only on 𝑡 (not on 𝑡0) owing to the stationarity property. When 𝑡 is a parameter, 𝑃(𝑛;𝑡) and 𝑄(𝑛;𝑡) are simply denoted 𝑃(𝑛) and 𝑄(𝑛).
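As an elementary consistency check of (6) and (8) (a minimal sketch of our own, not part of the original development; parameter values are arbitrary), the following Python lines verify that for a Poisson RPP, for which 𝒥(𝑡) = 𝜆𝑡 is deterministic, the triggered and relaxed PNDs coincide:

import numpy as np
from scipy.stats import poisson

lam, t = 2.0, 1.5                                 # density and counting time (arbitrary)
n = np.arange(10)

P = poisson.pmf(n, lam * t)                       # relaxed PND, Eq. (8) with J(t) = lam*t
EN = lam * t                                      # E[N] for the Poisson RPP
Q = (n + 1) / EN * poisson.pmf(n + 1, lam * t)    # triggered PND, Eq. (6)
print(np.allclose(P, Q))                          # True: both PNDs coincide for Poisson

For a non-Poisson RPP, 𝒥 fluctuates and the two PNDs differ, which is precisely what the following sections quantify.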

In terms of TIDs, we simply recall the basic formulas for 𝑤(𝑡) and 𝑣(𝑡) (see [6]):
\[
w(t) = \frac{1}{\mathsf{E}[\mathcal{J}]}\,\mathsf{E}\bigl[\mathcal{J}(t_0)\,\mathcal{J}(t_0+t)\,e^{-\mathcal{J}(t_0+t)}\bigr], \qquad
v(t) = \mathsf{E}\bigl[\mathcal{J}(t_0+t)\,e^{-\mathcal{J}(t_0+t)}\bigr]. \tag{9}
\]
Notice that the ratio of the values at 𝑡 = 0, 𝑔(0) ≜ 𝑤(𝑡=0)/𝑣(𝑡=0), is related to 𝜎²_𝒥, the variance of the time-integrated density 𝒥, by 𝑔(0) − 1 = 𝜎²_𝒥/𝖤[𝒥]².

3. Model for Correlated Variables

Among the several models proposed to deal with correlated variables, we consider the Arnold-Strauss model [7, 8]
\[
f(t,u) = K\,e^{-(at+bu+ctu)}, \tag{10}
\]
where 𝑎, 𝑏, 𝑐 > 0. The time intervals 𝑡, 𝑢 are of course positive RVs in [0, ∞). The constant of normalization is
\[
K = \frac{c\,e^{-ab/c}}{\mathrm{Ei}(1, ab/c)} \;\longrightarrow\;
\begin{cases}
K_\ell = ab+c-\dfrac{c^{2}}{ab}, & c\ll 1,\\[2mm]
K_h = \dfrac{c}{\log c-\gamma-\log(ab)}, & c\gg 1,
\end{cases} \tag{11}
\]
where Ei(𝑚, 𝑥) are the exponential integral functions [9] for 𝑚 = 1, 2, … and 𝑥 ∈ ℝ₊. They satisfy the approximate expressions given in (A.1a)-(A.2c) in Appendix A.1. The constant 𝛾 = 0.5772157… is the Euler constant.
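The exact constant and its two limiting forms in (11) are easy to compare numerically (a small sketch with parameter values of our own choosing; scipy.special.exp1 implements Ei(1, ·)):

import numpy as np
from scipy.special import exp1                    # exp1(x) = Ei(1, x)

a, b = 1.0, 1.0
for c in (0.05, 30.0):
    K = c * np.exp(-a * b / c) / exp1(a * b / c)              # exact constant, Eq. (11)
    if c < 1:
        K_lim = a * b + c - c**2 / (a * b)                    # small-c form
    else:
        K_lim = c / (np.log(c / (a * b)) - np.euler_gamma)    # large-c form
    print(c, round(K, 4), round(K_lim, 4))                    # the two values are close in each regime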

The marginal distribution of 𝑡 is deduced from (10) and is given by
\[
f(t) = \int_0^{\infty} f(t,u)\,\mathrm{d}u = \frac{K\,e^{-at}}{b+ct}. \tag{12}
\]
In the following, we report some calculations that can be obtained in closed form, depending on the value of the parameter 𝑐. Let us first consider the case 𝑐 ≪ 1.

3.1. 𝑐≪1

In what follows, most of the calculations are carried out up to O(c³).

3.1.1. TID

It can be shown that (see, e.g., [6])
\[
w(t) = \frac{K\,e^{-at}}{b+ct}, \qquad
v(t) \triangleq \lambda \int_t^{\infty} w(\theta)\,\mathrm{d}\theta. \tag{13}
\]
Given 𝖤[𝑡], the first moment of 𝑡, the average value of the density of the process, 𝜆 = 1/𝖤[𝑡], is given by
\[
\lambda \simeq a + \frac{c}{b} - \frac{2c^{2}}{ab^{2}}. \tag{14}
\]
Equation (13) yields 𝑣(0) = 𝜆 and d𝑣/d𝑡 = −𝜆𝑤(𝑡), which proves that 𝑣(𝑡) is a monotonic, continuously decreasing function of 𝑡. More precisely, we can show that
\[
w(t) \simeq \frac{a^{3}\,(b^{2}-bct+c^{2}t^{2})}{a^{2}b^{2}-abc+2c^{2}}\,e^{-at}, \tag{15}
\]
\[
v(t) \simeq \frac{a\,\bigl(a^{2}b^{2}-abc+2c^{2}-ac(ab-2c)\,t+a^{2}c^{2}t^{2}\bigr)}{a^{2}b^{2}-2abc+6c^{2}}\,e^{-at}. \tag{16}
\]
From (15), we deduce
\[
\sigma_t^{2} \simeq \frac{1}{a^{2}}\Bigl(1-\frac{2c}{ab}+\frac{9c^{2}}{a^{2}b^{2}}\Bigr). \tag{17}
\]
On the other hand, we can see that
\[
h_t(0, c\ll 1) = w(0)-v(0) = \frac{K_\ell}{b}-\lambda \simeq \frac{c^{2}}{ab^{2}}, \tag{18}
\]
whose positivity is a characteristic of classical processes [10].
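The expansions (14) and (17) can be checked against numerical moments of the exact marginal (12); a short sketch (our own verification, with an arbitrary small value of 𝑐):

import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

a, b, c = 1.0, 1.0, 0.05
K = c * np.exp(-a * b / c) / exp1(a * b / c)                  # Eq. (11)
f = lambda t: K * np.exp(-a * t) / (b + c * t)                # Eq. (12), equal to w(t) of Eq. (13)

m1, _ = quad(lambda t: t * f(t), 0, np.inf)
m2, _ = quad(lambda t: t * t * f(t), 0, np.inf)
print(1 / m1, a + c / b - 2 * c**2 / (a * b**2))              # lambda versus Eq. (14)
print(m2 - m1**2, (1 - 2*c/(a*b) + 9*(c/(a*b))**2) / a**2)    # sigma_t^2 versus Eq. (17)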

3.1.2. PND

For a simple approximation, at first order in 𝑐, we may use
\[
f(t) \propto \frac{1}{ab}\,a\,e^{-at} - \frac{c}{a^{2}b^{2}}\,a^{2}t\,e^{-at},
\]
\[
P(n,t) \propto \frac{(at)^{n}}{n!}\,e^{-at}
- \frac{c}{a^{2}b^{2}}\Bigl(\frac{(at)^{2n+1}}{(2n)!}-\frac{2\,(at)^{2n+1}}{(2n+1)!}+\frac{(at)^{2n-1}}{(2n-1)!}\Bigr)e^{-at}, \tag{19}
\]
where 𝑃(0,𝑡) = (1 + 𝑎𝑡/2)e^{−𝑎𝑡} (see [1], pages 789-792).

When only a few values of 𝑃(𝑛,𝑡), or only the moments, are needed, it is however better to use (3) and (5). Based on their expansion as series in 𝑠, we obtain, at second order in 𝑐,
\[
\mathsf{E}[N] \simeq \frac{a^{2}}{a-\omega}\,t + \frac{a^{2}}{2}\,\frac{\omega^{2}}{(a-\omega)^{2}}\,t^{2}, \qquad
\mathsf{E}[N^{2}] \simeq \frac{a^{2}}{a-\omega}\,t + \frac{2a^{2}+\omega^{2}}{2}\,\frac{\omega^{2}}{(a-\omega)^{2}}\,t^{2}, \qquad
\sigma_n^{2} \simeq \frac{1}{a+\omega}+\Bigl(2+\frac{1}{a}\Bigr)\omega-\frac{a}{2} \quad (t=1), \tag{20}
\]
where we denote 𝜔 = 𝑐/𝑏.

Exact expressions for the PDFs of the number seem difficult to obtain. However, as just seen, approximate closed expressions are simple. Thus, using (4), it is easy to calculate approximate expressions of the PDFs of the number for 𝑛 = 0, 1, 2 and 𝑡 = 1, as given in (A.3) in Appendix A.2. Therefore, we can prove that
\[
h_n(0, c\ll 1) = P(0)-P(1) \simeq e^{-a}\Bigl(1-a-(2-a)\,\frac{c}{b}\Bigr)
\;\xrightarrow{\,a=1,\;b=1\,}\; -\frac{1}{e}\,c-\frac{c^{3}}{e^{2}}, \tag{21}
\]
which is negative for all 𝑐 ≪ 1. We will illustrate this property numerically by simulations. When we choose the triggered processing, we obtain
\[
k_n(0, c\ll 1) = Q(0)-Q(1) \simeq -\frac{1}{e}\,c-\frac{c^{4}}{e^{2}}, \tag{22}
\]
which is here again negative for all 𝑐.

The case 𝑐≫1 (but finite) is perhaps more interesting although closed forms of the moments are difficult to obtain.

3.2. 𝑐≫1

The calculations are now carried out up to O(1/c³).

3.2.1. TID

We can show that the normalized 𝑤(𝑡) is given by
\[
w(t) \sim \frac{c\,e^{-ab/c}}{\mathrm{Ei}(1, ab/c)}\,\frac{e^{-at}}{ct+b}
\;\xrightarrow{\,t\neq 0\,}\;
\frac{e^{-at}}{t\,(\log c-\chi)}\Bigl(1-\frac{b}{c}\Bigl(a+\frac{1}{t}\Bigr)\Bigr), \tag{23}
\]
and the unnormalized PDF, evaluated up to O(1/c²) and O(t),
\[
v(t) \sim \lambda\,\frac{\mathrm{Ei}(1, a(t+b/c))}{\mathrm{Ei}(1, ab/c)}
\;\xrightarrow{\,0<t\ll 1\,}\;
\lambda\,\frac{\mathrm{Ei}(1,at)}{\log(c/ab)-\gamma}
-\lambda\,\frac{b}{c}\,\frac{e^{-at}}{t\,\bigl(\log(c/ab)-\gamma\bigr)}
-\lambda\,\frac{ab}{c}\,\frac{\mathrm{Ei}(1,at)}{\bigl(\log(c/ab)-\gamma\bigr)^{2}}
\;\sim\; -\lambda\,\frac{\gamma+\log(at)}{\log(c/ab)-\gamma}, \tag{24}
\]
leading to
\[
h_t(0, c\gg 1) = w(0)-v(0) = \frac{K_h}{b}-\lambda \simeq \frac{c}{b\,(\log c-\chi)}-a\log c, \tag{25}
\]
where we denote 𝜒 = 𝛾 + log(𝑎𝑏) and use 𝜆 ≃ 𝑎(log 𝑐 − 𝜒). It is seen that ℎ_𝑡(0, 𝑐 ≫ 1) given by (25) is now positive and increases more slowly than ℎ_𝑡(0, 𝑐 ≪ 1) ∼ 𝑐² given by (18). From (23), we can also deduce that
\[
\mathsf{E}[t] \sim \frac{1}{a}\,\frac{1}{\log c-\chi}, \qquad
\sigma_t^{2} \sim \frac{1}{a^{2}}\,\frac{1}{\log c-\chi} = \frac{\mathsf{E}[t]}{a}. \tag{26}
\]
Both moments tend to 0 when 𝑐 → ∞.
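A numerical check of (26) in the same spirit as before (our own sketch; the agreement is only to leading order in 1/log 𝑐):

import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

a, b, c = 1.0, 1.0, 1.0e4
K = c * np.exp(-a * b / c) / exp1(a * b / c)
w = lambda t: K * np.exp(-a * t) / (b + c * t)

m1, _ = quad(lambda t: t * w(t), 0, np.inf, limit=200)
m2, _ = quad(lambda t: t * t * w(t), 0, np.inf, limit=200)
chi = np.euler_gamma + np.log(a * b)
print(m1, 1 / (a * (np.log(c) - chi)))                        # E[t] versus Eq. (26)
print(m2 - m1**2, m1 / a)                                     # sigma_t^2 versus Eq. (26)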

3.2.2. PND

The approximate expressions given in (A.4) in Appendix A.2 lead to
\[
h_n(0, c\gg 1) = P(0)-P(1) \simeq \frac{1+\gamma-\log c}{c}. \tag{27}
\]
We can calculate the first two moments of the number for the specific case 𝑎 = 1 and 𝑏 = 1:
\[
\mathsf{E}[N] \propto \log c, \tag{28a}
\]
\[
\sigma_n^{2} \propto \mathsf{E}[N]^{2}. \tag{28b}
\]
As in the previous case, it can be shown that 𝑘_𝑛(𝑐) ≤ 0 for all 𝑐.

4. Simulation and Results

We have used the algorithm recently described in [11] for several values of 𝑐.
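The algorithm of [11] is not reproduced here. As a rough substitute for readers who wish to experiment, one can exploit the fact that the conditionals of (10) are exponential, 𝑡 | 𝑢 ~ Exp(𝑎 + 𝑐𝑢) and 𝑢 | 𝑡 ~ Exp(𝑏 + 𝑐𝑡), and draw correlated pairs with a short Gibbs sampler; the sketch below (our own illustration, with arbitrary burn-in and sample sizes) samples the bivariate law only and is not the sequence-generation algorithm used for the figures:

import numpy as np

def arnold_strauss_gibbs(a, b, c, n_samples, burn_in=500, seed=0):
    # Draw (t, u) pairs from f(t, u) = K exp(-(a t + b u + c t u)) by Gibbs sampling.
    rng = np.random.default_rng(seed)
    t, u = 1.0 / a, 1.0 / b                       # arbitrary starting point
    out = np.empty((n_samples, 2))
    for i in range(-burn_in, n_samples):
        t = rng.exponential(1.0 / (a + c * u))    # t | u is exponential with rate a + c u
        u = rng.exponential(1.0 / (b + c * t))    # u | t is exponential with rate b + c t
        if i >= 0:
            out[i] = (t, u)
    return out

pairs = arnold_strauss_gibbs(1.0, 1.0, 30.0, 100_000)
print(np.corrcoef(pairs.T)[0, 1])                 # sample correlation of (t, u); negative for c > 0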

In Figure 1, the simulated data for the TIDs are plotted. The triggered PDF 𝑤(𝑡) follows (23) very well. The relaxed PDF 𝑣(𝑡) has been well fitted by the expression
\[
v(t) = \nu_0\,\lambda\,\frac{\mathrm{Ei}(1,t)}{\mathrm{Ei}(1,1+1/c)}
\;\xrightarrow{\,0<t\ll 1\,}\;
\nu_1\,\lambda\,\frac{\log t}{\log(1/c)}, \tag{29}
\]
for 𝑎 = 1, 𝑏 = 1, 𝑐 = 30, and where 𝜈₀ = 0.07 and 𝜈₁ = 0.2.

Incidentally, it is interesting to remark that the PDF of 𝑆_ℓ = Σ_{j=0}^{ℓ} 𝜃𝑗, ℓ ≫ 1, the sum of several correlated, identically distributed, positive random time intervals, deviates from the Gaussian profile. We may conclude that the application of the central limit theorem requires the addition of a very large number of such correlated random variables.

On the other hand, for high values of 𝑐, the theoretical approximations of the PNDs are given by
\[
P(n) = \frac{1}{\mathcal{N}}\Bigl(b_1\,e^{-a_1 n}+b_2\,e^{-a_2}\,\frac{a_2^{n}}{n!}\Bigr), \tag{30}
\]
where
\[
\mathcal{N} = b_2+\frac{b_1}{1-e^{-a_1}}, \tag{31a}
\]
\[
\mathsf{E}[N] = \frac{1}{\mathcal{N}}\Bigl(a_2 b_2+\frac{b_1\,e^{a_1}}{(e^{a_1}-1)^{2}}\Bigr), \tag{31b}
\]
leading to
\[
Q(n) = \frac{n+1}{\mathsf{E}[N]}\,P(n+1). \tag{32}
\]
The values of the parameters that correctly fit the simulated results of the PDFs, as seen in Figure 2, are 𝑏₁ = 0.2, 𝑎₁ = 1, 𝑏₂ = 0.62, 𝑎₂ = 5.2, and 𝖤[𝑁] = 3.64.
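The fitted PNDs (30)-(32) are straightforward to evaluate; the short sketch below (our own check, truncating the sums at 𝑛 = 30) recovers 𝖤[𝑁] = 3.64 from the quoted parameters:

import numpy as np
from scipy.stats import poisson

b1, a1, b2, a2 = 0.2, 1.0, 0.62, 5.2              # values quoted above for Figure 2
n = np.arange(30)

Norm = b2 + b1 / (1 - np.exp(-a1))                # Eq. (31a)
P = (b1 * np.exp(-a1 * n) + b2 * poisson.pmf(n, a2)) / Norm   # Eq. (30)
EN = P @ n                                        # Eq. (31b), about 3.64
Q = (n[:-1] + 1) / EN * P[1:]                     # triggered PND, Eq. (32)
print(round(float(EN), 2), round(float(P.sum()), 4), round(float(Q.sum()), 4))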

The variations of the moments with respect to 𝑐 are derived from the expressions recalled in Section 2. We obtain
\[
\mathsf{E}[N] \simeq 1+\gamma+\frac{1}{2}\log c, \tag{33a}
\]
\[
\sigma_n^{2} \simeq 1+\gamma+\frac{1}{4}\,(1+\log c)^{2}. \tag{33b}
\]
Regarding the reduced moments ℎ_𝑛 and 𝑘_𝑛, an excellent fit of the simulated results of Figure 3 is obtained with 𝛼₁ = 4, 𝛽₁ = 0.17 and 𝛼₂ = 3, 𝛽₂ = 0.24:
\[
h_n \simeq \frac{1-\gamma}{2}+\frac{1}{2}\,\frac{\gamma-\log c}{\alpha_1}\,c^{-\beta_1}, \tag{34a}
\]
\[
k_n \simeq \frac{1-\gamma}{2}+\frac{1}{2}\,\frac{\gamma-\log c}{\alpha_2}\,c^{-\beta_2}. \tag{34b}
\]
The corresponding theoretical behaviors, ℎ_𝑛 ≃ 𝑐²/(2 log 𝑐) and 𝑘_𝑛 ≃ −𝑐⁴/(24 (log 𝑐)³), are only in qualitative agreement.

As an application of these results to communications, we consider a communication system using a direct threshold detector. The decision device operates according to
\[
n \leq n_s(c) \;\longrightarrow\; \mathrm{H}_0 \;\longrightarrow\; P_{c=0}(n)=e^{-a}\,\frac{a^{n}}{n!}, \tag{35a}
\]
\[
n > n_s(c) \;\longrightarrow\; \mathrm{H}_1 \;\longrightarrow\; P_{c\neq 0}(n), \tag{35b}
\]
based on the binary hypotheses H₀ (no correlation) and H₁ (correlation with parameter 𝑐). The threshold is obtained from the likelihood ratio Λ(𝑛),
\[
\Lambda(n)=\frac{P_{c\neq 0}(n)}{P_{c=0}(n)}>1 \;\longrightarrow\; n_s(c). \tag{36}
\]
We can also utilize the TIDs as a processing tool [12]. The threshold would then be based on the likelihood ratio Θ(𝑡) = 𝑤_𝑐(𝑡)/(𝜆e^{−𝜆𝑡}) > 1 → 𝑡_𝑠(𝑐), and the decision would operate as follows:
\[
t \leq t_s(c) \;\longrightarrow\; \mathrm{H}_1 \;\longrightarrow\; w_c(t)=\frac{K\,e^{-at}}{b+ct},
\]
\[
t \geq t_s(c) \;\longrightarrow\; \mathrm{H}_0 \;\longrightarrow\; a\,e^{-at}, \tag{37}
\]
where 𝑡_𝑠(𝑐) = (1/𝑐)(𝐾/𝑎 − 𝑏), 𝐾 being given by (11).

Here, we focus on the method based on PNDs because it is generally more efficient. To simplify the calculations, the decision is not randomized [13].

Now, the probability of error in detection when processing with the relaxed PND is given by
\[
P_{\mathrm{err}} = \frac{1}{2}\Bigl(1-\sum_{n=0}^{n_s}P_{c=0}(n)+\sum_{n=0}^{n_s}P_{c\neq 0}(n)\Bigr). \tag{38}
\]
Similarly, for processing with the triggered PND we have
\[
Q_{\mathrm{err}} = \frac{1}{2}\Bigl(1-\sum_{n=0}^{n_s}P_{c=0}(n)+\sum_{n=0}^{n_s}Q_{c\neq 0}(n)\Bigr). \tag{39}
\]
It is first seen that 𝑄err ≤ 𝑃err, an inequality which is demonstrated for a special case in Appendix B. Furthermore, both probabilities of error in detection decrease with 𝖤[𝑁], and hence with 𝑐, because 𝖤[𝑁] ∝ log 𝑐, as calculated in (28a) and (33a) and shown in Figure 3. This holds within the range 0 ≤ 𝑐 ≲ 100. For instance, with the help of (30)-(31b) and (38)-(39), calculated for 𝑛_𝑠 = 1 and 𝑐 = 100, we obtain 𝑃err ≃ 0.28 and 𝑄err ≃ 0.16, values which are in excellent agreement with the simulated results of Figure 4.
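These numbers are easy to reproduce from (30)-(32) and (35a), (38), (39); a brief sketch (our own computation, truncating the sums at 𝑛 = 50):

import numpy as np
from scipy.stats import poisson

a, ns = 1.0, 1                                    # Poisson parameter and threshold used above
b1, a1, b2, a2 = 0.2, 1.0, 0.62, 5.2              # Eq. (30) parameters fitted at c = 100
n = np.arange(50)

P0 = poisson.pmf(n, a)                            # P_{c=0}(n), Eq. (35a)
Norm = b2 + b1 / (1 - np.exp(-a1))
P1 = (b1 * np.exp(-a1 * n) + b2 * poisson.pmf(n, a2)) / Norm   # P_{c!=0}(n), Eq. (30)
EN = P1 @ n
Q1 = (n[:-1] + 1) / EN * P1[1:]                   # Q_{c!=0}(n), Eq. (32)

P_err = 0.5 * (1 - P0[:ns+1].sum() + P1[:ns+1].sum())   # Eq. (38)
Q_err = 0.5 * (1 - P0[:ns+1].sum() + Q1[:ns+1].sum())   # Eq. (39)
print(round(float(P_err), 3), round(float(Q_err), 3))   # about 0.29 and 0.17, in agreement with the values above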

On the other hand, bounds on 𝑄err and 𝑃err can easily be calculated. In fact, denote
\[
\nu = \frac{\mathsf{E}[N]}{1+\mathsf{E}[N]}, \tag{40}
\]
where 𝖤[𝑁] = 𝑎 + 𝜉. Now, taking (35a) into account, we have, up to O(ξ²),
\[
P_1(n) = (1-\nu)\,\nu^{n}, \qquad
P^{(1)}_{\mathrm{err}} \simeq 1-\frac{1}{e}-\frac{\nu^{2}}{2} \simeq 1-\frac{1}{e}-\frac{1+\xi}{8}, \tag{41}
\]
for 0 ≤ 𝜈 < 1. Therefore 𝑃^{(1)}_err is the approximation of the exact 𝑃err for 𝑛_𝑠 = 1, plotted as the curve labeled "1" in Figure 4, where 𝑎 = 1. Similarly, taking (35a) into account, we have
\[
P_1(n) = e^{-\mathsf{E}[N]}\,\frac{\mathsf{E}[N]^{n}}{n!}, \qquad
P^{(2)}_{\mathrm{err}} \simeq \frac{1}{2}-\frac{1}{e}+\frac{1+\mathsf{E}[N]}{2}\,e^{-\mathsf{E}[N]} \simeq \frac{1}{2}-\frac{\xi}{2e}. \tag{42}
\]
Again, 𝑃^{(2)}_err is the approximation of the exact 𝑃err for 𝑛_𝑠 = 1, plotted as the curve labeled "2" in Figure 4 for 𝑎 = 1. Finally, we have
\[
P^{(2)}_{\mathrm{err}} \leq Q_{\mathrm{err}} \leq P_{\mathrm{err}} \leq P^{(1)}_{\mathrm{err}}. \tag{43}
\]

Let us conclude this analysis with a brief comment on information. We concentrate on the cut-off rate, which is known to be a useful criterion for evaluating the performance of a channel. Thus, for a binary noiseless channel, when the transmission of the messages "0" and "1" is done via the probabilities 𝑃_{𝑐=0} and 𝑃_{𝑐≠0}, respectively (the a priori probabilities are taken equal to 1/2), the cut-off rate is expressed as
\[
R = \log\frac{2}{1+\sum_{n=0}^{\infty}\sqrt{P_{c=0}(n)\,P_{c\neq 0}(n)}}, \tag{44}
\]
which is generally interpreted as a lower bound to the channel capacity [14]. Here 𝑃_{𝑐=0}(𝑛) is the Poisson PND of parameter 𝑎 = 1, and 𝑃_{𝑐≠0}(𝑛) will be, as above, either the relaxed or the triggered PND, yielding the cut-off rates 𝑅_𝑝 and 𝑅_𝑞, respectively. Because exact closed expressions seem difficult to attain, bounds are very useful and can easily be established. The first bound is obtained using geometric PNDs:
\[
P_0(n) = (1-\nu_0)\,\nu_0^{n}, \tag{45a}
\]
\[
R_1 = \log\frac{2}{1+\sqrt{(1-\nu_0)(1-\nu)}\,/\bigl(1-\sqrt{\nu_0\nu}\bigr)}
\;\simeq\; \frac{\xi^{2}}{32}\Bigl(1-\frac{3\xi}{4}+\frac{\xi^{2}}{2}\Bigr), \qquad 0\leq\xi\lesssim 2, \tag{45b}
\]
where 𝜈 is given by (40) and 𝜈₀ = 𝑎/(1+𝑎). This is a lower bound to 𝑅_𝑝.

The second bound is obtained using the Poisson PNDs:
\[
R_2 = \log\frac{2}{1+e^{-\frac{1}{2}\bigl(\sqrt{\mathsf{E}[N]}-\sqrt{a}\bigr)^{2}}}
\;\simeq\; \frac{\xi^{2}}{16}\Bigl(1-\frac{\xi}{2}+\frac{9\xi^{2}}{32}\Bigr), \qquad 0\leq\xi\lesssim 2, \tag{46}
\]
which is an upper bound to 𝑅_𝑝.

In Figure 5, the results of the simulation, 𝑅_𝑝 and 𝑅_𝑞, and the bounds (curves labeled "1" and "2") given by (45b) and (46) are plotted versus 𝖤[𝑁]. Here again, it is seen that processing with the triggered PND performs much better than processing with the relaxed PND,
\[
R_1 \leq R_p \leq R_2 \leq R_q, \tag{47}
\]
in the range 0 ≤ 𝑐 ≤ 100. However, because the cut-off rate is a monotonically increasing function of 𝖤[𝑁], the inequalities (47) may be extrapolated to all 𝑐.
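At the same operating point, the ordering (47) can be verified directly (a sketch of our own; the base of the logarithm is a convention and only rescales all rates):

import numpy as np
from scipy.stats import poisson

def cutoff(P, Q):
    # Cut-off rate of Eq. (44), natural logarithm.
    return np.log(2.0 / (1.0 + np.sum(np.sqrt(P * Q))))

a, b1, a1, b2, a2 = 1.0, 0.2, 1.0, 0.62, 5.2      # Eq. (30) parameters fitted at c = 100
n = np.arange(60)

P0 = poisson.pmf(n, a)                                        # P_{c=0}
Norm = b2 + b1 / (1 - np.exp(-a1))
P1 = (b1 * np.exp(-a1 * n) + b2 * poisson.pmf(n, a2)) / Norm  # relaxed PND, Eq. (30)
EN = P1 @ n
Q1 = np.append((n[:-1] + 1) / EN * P1[1:], 0.0)               # triggered PND, Eq. (32)

Rp, Rq = cutoff(P0, P1), cutoff(P0, Q1)
nu, nu0 = EN / (1 + EN), a / (1 + a)                          # Eq. (40)
R1 = np.log(2 / (1 + np.sqrt((1 - nu0) * (1 - nu)) / (1 - np.sqrt(nu0 * nu))))   # geometric bound, Eq. (45b)
R2 = np.log(2 / (1 + np.exp(-0.5 * (np.sqrt(EN) - np.sqrt(a))**2)))              # Poisson bound, Eq. (46)
print(R1 <= Rp <= R2 <= Rq)                                   # True: the ordering of Eq. (47)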

In conclusion, the binary performances, as summarized by the inequalities (43) and (47), show that processing with the triggered PND is the preferable mode of operation. Both performances, in detection and in information transmission, improve as the coefficient of correlation increases.

Appendices

A. Approximate Expressions

A.1. The Exponential Integral Functions

The exponential integral functions can be expressed, up to O(x³), as
\[
\mathrm{Ei}(1,x) \simeq -\gamma-\log x+x-\frac{x^{2}}{4}, \tag{A.1a}
\]
\[
\mathrm{Ei}(2,x) \simeq 1+(\gamma+\log x-1)\,x-\frac{x^{2}}{2}, \tag{A.1b}
\]
\[
\mathrm{Ei}(3,x) \simeq \frac{1}{2}-x-\frac{1}{2}\Bigl(\gamma+\log x-\frac{3}{2}\Bigr)x^{2}, \tag{A.1c}
\]
where 𝛾 is the Euler constant. For 𝑥 → ∞,
\[
\mathrm{Ei}(1,x) \simeq \frac{e^{-x}}{x}\Bigl(1-\frac{1}{x}\Bigr), \tag{A.2a}
\]
\[
\mathrm{Ei}(2,x) \simeq \frac{e^{-x}}{x}\Bigl(1-\frac{2}{x}\Bigr), \tag{A.2b}
\]
\[
\mathrm{Ei}(3,x) \simeq \frac{e^{-x}}{x}\Bigl(1-\frac{3}{x}\Bigr). \tag{A.2c}
\]
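These expansions can be checked against scipy.special.expn, which computes Ei(m, x) = E_m(x) (a quick sketch with arbitrary test points):

import numpy as np
from scipy.special import expn

x = 0.05                                           # small-x expansions (A.1a)-(A.1c)
approx = [-np.euler_gamma - np.log(x) + x - x**2 / 4,
          1 + (np.euler_gamma + np.log(x) - 1) * x - x**2 / 2,
          0.5 - x - 0.5 * (np.euler_gamma + np.log(x) - 1.5) * x**2]
print([expn(m, x) for m in (1, 2, 3)], approx)

x = 20.0                                           # large-x expansions (A.2a)-(A.2c)
print([expn(m, x) for m in (1, 2, 3)],
      [np.exp(-x) / x * (1 - m / x) for m in (1, 2, 3)])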

A.2. Probability of Number Distribution

Depending on the value of 𝑐, we have the following approximations for the first values of 𝑃(𝑛). For 𝑐 ≪ 1, with 𝜔 = 𝑐/𝑏,
\[
P(0) \simeq e^{-a}\Bigl(1-\omega-\frac{\omega^{2}}{a}\Bigr), \qquad
P(1) \simeq e^{-a}\Bigl(a+(1-a)\,\omega+\Bigl(a+\frac{a}{6}-\frac{3}{2}\Bigr)\omega^{2}\Bigr), \qquad
P(2) \simeq e^{-a}\Bigl(\frac{a^{2}}{2}+a\Bigl(1-\frac{a}{2}\Bigr)\omega+\Bigl(\frac{a^{2}}{8}+\frac{3}{2}a-2\Bigr)\omega^{2}\Bigr). \tag{A.3}
\]
Similarly, it can be shown that, for 𝑐 ≫ 1,
\[
P(0) \simeq \frac{1+\gamma-\log c}{c}, \qquad
P(1) \simeq \frac{\log c-\gamma}{c}, \qquad
P(2) \simeq \frac{\gamma^{2}-2\,(1+\gamma)\log c}{c^{2}}. \tag{A.4}
\]

B. Inequality between Probabilities of Error in Binary Detection

To demonstrate that the triggered processing yields better performance than the relaxed processing, Δ_𝑒 = 𝑄err − 𝑃err ≤ 0, we begin with (8),
\[
P(n;t) = \mathsf{E}\Bigl[\frac{1}{n!}\,\mathcal{J}(t)^{n}\,e^{-\mathcal{J}(t)}\Bigr], \qquad
Q(n;t) = \frac{1}{\mathsf{E}[N]}\,\mathsf{E}\Bigl[\frac{1}{n!}\,\mathcal{J}(t)^{n+1}\,e^{-\mathcal{J}(t)}\Bigr], \tag{B.1}
\]
and show that, for
\[
P_{\mathrm{err}} = \frac{1}{2}\Bigl(1-\sum_{n=0}^{n_s}P_{c=0}(n)+\sum_{n=0}^{n_s}P_{c\neq 0}(n)\Bigr), \qquad
Q_{\mathrm{err}} = \frac{1}{2}\Bigl(1-\sum_{n=0}^{n_s}P_{c=0}(n)+\sum_{n=0}^{n_s}Q_{c\neq 0}(n)\Bigr), \tag{B.2}
\]
we have
\[
\Delta_e = Q_{\mathrm{err}}-P_{\mathrm{err}}
= \frac{1}{2}\sum_{n=0}^{n_s}\mathsf{E}\Bigl[\frac{e^{-\mathcal{J}}}{n!}\Bigl(\frac{\mathcal{J}^{n+1}}{\mathsf{E}[\mathcal{J}]}-\mathcal{J}^{n}\Bigr)\Bigr]
= \frac{1}{2}\,\mathsf{E}\Bigl[\sum_{n=0}^{n_s}\frac{e^{-\mathcal{J}}}{n!}\Bigl(\frac{\mathcal{J}^{n+1}}{\mathsf{E}[\mathcal{J}]}-\mathcal{J}^{n}\Bigr)\Bigr]
\leq \frac{1}{2}\,\mathsf{E}\Bigl[\sum_{n=0}^{n_s}\frac{1}{n!}\Bigl(\frac{\mathcal{J}^{n+1}}{\mathsf{E}[\mathcal{J}]}-\mathcal{J}^{n}\Bigr)\Bigr]
= \frac{1}{2}\sum_{n=0}^{n_s}\frac{1}{n!}\Bigl(\frac{\mathsf{E}[\mathcal{J}^{n+1}]}{\mathsf{E}[\mathcal{J}]}-\mathsf{E}[\mathcal{J}^{n}]\Bigr), \tag{B.3}
\]
where we used e^{−𝒥} ≤ 1 and 𝒥 ≥ 0.

For 𝑛_𝑠 = 1, noticing that 𝖤[𝒥²] ≥ 𝖤[𝒥]², we have Δ_𝑒 ≤ 0.

For higher values of 𝑛_𝑠, this method does not seem useful because it requires proving that 𝖤[𝒥^{𝑛+1}]/𝖤[𝒥] ≥ 𝖤[𝒥^{𝑛}] for all 𝑛, which is not so easy, although the inequality holds for several types of density distributions of interest in statistical optics.
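The inequality is nonetheless easy to probe numerically for a given law of 𝒥; the sketch below (our own illustration, using a gamma-distributed 𝒥 of the kind arising for thermal light) checks the first few orders by Monte Carlo:

import numpy as np

rng = np.random.default_rng(1)
J = rng.gamma(shape=2.0, scale=1.5, size=1_000_000)    # an arbitrary gamma-distributed J

for n in range(1, 6):
    lhs = np.mean(J**(n + 1)) / np.mean(J)              # E[J^{n+1}] / E[J]
    rhs = np.mean(J**n)                                  # E[J^n]
    print(n, lhs >= rhs)                                 # True for every order tested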

Disclosure

Laboratoire des Signaux et Systèmes is a joint laboratory (UMR 8506) of the CNRS and the École Supérieure d'Électricité, and is associated with the Université Paris-Orsay, France.