Mathematical Problems in Engineering / 2019 / Article
Research Article | Open Access
Volume 2019 | Article ID 1731802 | 11 pages

Size of the Largest Component in a Critical Graph

Academic Editor: Julien Bruchon
Received: 11 May 2019
Revised: 04 Jul 2019
Accepted: 01 Aug 2019
Published: 10 Sep 2019


In this paper, using the branching process and the martingale method, we prove that the size of the largest component in the critical random intersection graph is asymptotically of order and that the width of the scaling window is .

1. Introduction and Main Result

Random intersection graph , introduced by Singer-Cohen [1] and Karoński et al. [2], is defined as follows. Let V be a set of n vertices and M a set of m elements. Each vertex is assigned a random subset of , where each element of M is included in randomly and independently with probability . There is an edge between vertices u and if and only if . Denote the largest component in by and its size by .

In , Behrisch [3] proved that when and , there is a phase transition of in the vicinity of the point (here, we point out that when is not an integer, means with being the largest integer less than ). When , with high probability (for a given graph property , we say that graph possesses with high probability if the probability that possesses tends to 1 as ; w.h.p. for brevity), jumps from logarithmic order to linear order in . Lagerås and Lindholm [4] also showed that in , with and , exhibits a jump from logarithmic order to linear order in n around the point . Recently, Wang et al. [5] studied the component evolution in the critical and showed that when , w.h.p. the order of the largest component in some critical random intersection graphs is and the width of the scaling window around the critical probability is , while when , w.h.p. the order of the largest component and the width of the scaling window are and , respectively. It is natural to ask for the order of and the width of the scaling window in the critical , a question posed in [5]. The aim of the present paper is to study the critical . Our main result, which is not surprising but still interesting, can be stated as follows.
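For concreteness, the model just described can be simulated. The following Python sketch (our own illustrative code; all names are hypothetical, not from the paper) samples a random intersection graph G(n, m, p) by assigning each of the m elements to each of the n vertices independently with probability p, and returns the size of the largest component, merging vertices that share an element with a union-find structure so that the intersection graph itself is never built explicitly.

```python
import random
from collections import defaultdict

def largest_component_rig(n, m, p, seed=0):
    """Sample a random intersection graph G(n, m, p): each of n vertices
    independently includes each of m elements with probability p; two
    vertices are adjacent iff their element sets intersect.  Returns the
    size of the largest component."""
    rng = random.Random(seed)
    # element -> list of vertices that picked it
    owners = defaultdict(list)
    for v in range(n):
        for u in range(m):
            if rng.random() < p:
                owners[u].append(v)
    # union-find over vertices; all owners of one element form a clique,
    # so merging them pairwise with the first owner suffices
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for vs in owners.values():
        for v in vs[1:]:
            ra, rb = find(vs[0]), find(v)
            if ra != rb:
                parent[rb] = ra
    sizes = defaultdict(int)
    for v in range(n):
        sizes[find(v)] += 1
    return max(sizes.values())
```

Running this for a grid of p around the critical point gives a quick empirical picture of the phase transition discussed above.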

Theorem 1. Let be a positive function on n such that and as . In with , the following statements hold.
(1) Weakly subcritical case: if , then there are two positive constants and such that w.h.p.
(2) Critical case: if for some constant , then there are a positive function , which tends to infinity as , and a constant such that w.h.p.
(3) Weakly supercritical case: if , then there are two positive constants and such that w.h.p.

The following notation will be used. Write , , and for the probability, expected value, and variance of a random event and a random variable, respectively. For any two positive functions and of a natural-valued parameter , denote if there is a positive constant C such that when n is large enough, if and , and if . Furthermore, all logarithms in this article have the natural base, and all inequalities and asymptotic statements are to be understood for n sufficiently large. For clarity of presentation, floor and ceiling signs are omitted whenever they are not essential.

2. Auxiliary Lemmas

To obtain the bound, we need some properties of random variables with a compound binomial distribution. Let X be a compound binomial random variable with generating function:

Then, we have the following lemma.

Lemma 1. For any natural numbers , let be random variables with generating function , . Suppose , then

Here, means is stochastically smaller than . It holds if and only if there exists a coupling of and such that .

Proof. Note that can be expressed as , where s has a binomial distribution and N has a binomial distribution independent of s, . The claim then follows by a standard coupling argument.
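The coupling behind Lemma 1 can be made concrete in code. The sketch below (our own, with hypothetical names) samples a compound binomial variable as a Bin(m, p)-indexed sum of i.i.d. Bin(k, q) terms, and exhibits a monotone coupling: reusing the same uniform random numbers for two inner parameters k1 ≤ k2 forces the two draws to be ordered pointwise, which is exactly the coupling characterization of stochastic domination stated above.

```python
import random

def compound_binomial(m, p, k, q, rng):
    """Sample X = sum of N i.i.d. Bin(k, q) variables, where N ~ Bin(m, p)
    is independent of the summands (a compound binomial variable)."""
    n_terms = sum(rng.random() < p for _ in range(m))
    return sum(sum(rng.random() < q for _ in range(k)) for _ in range(n_terms))

def coupled_pair(m, p, k1, k2, q, seed):
    """Monotone coupling: both samples reuse the same uniforms, so the
    draw with the larger inner parameter k2 >= k1 sums a superset of the
    other draw's indicator terms and is therefore pointwise larger."""
    rng = random.Random(seed)
    us_outer = [rng.random() for _ in range(m)]
    us_inner = [[rng.random() for _ in range(max(k1, k2))] for _ in range(m)]
    def draw(k):
        total = 0
        for i in range(m):
            if us_outer[i] < p:
                total += sum(u < q for u in us_inner[i][:k])
        return total
    return draw(k1), draw(k2)
```

Since `draw(k1)` sums a prefix of the nonnegative terms summed by `draw(k2)`, the first coordinate never exceeds the second, realizing the coupling in the definition of stochastic ordering.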

Besides the above lemma, we still need the following existing theorems.

Theorem 2 ([6], Theorem 1). Let , , , and be an increasing graph property. If , then

where denotes that G has property , and is the Erdős–Rényi random graph obtained by retaining each edge of the complete graph on n vertices independently with probability .

By Theorem 2, to give a lower bound for , the following theorem on the size of the largest component in the critical Erdős–Rényi random graph is useful.

Theorem 3 ([7], Theorem 5.23; [8, 9]). Let denote the size of the largest component in . For any which tends to infinity as , the following statements hold.
(a) In , if , then w.h.p.
(b) In , if , then w.h.p.
(c) In , if , then w.h.p.
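The scalings in Theorem 3 can be probed empirically. A minimal sketch (our own code, for illustration only): sample G(n, p) with union-find and record the largest component; at the critical point p = 1/n, averages over many repetitions grow roughly like n^(2/3), in line with the critical case of the theorem.

```python
import random

def largest_component_gnp(n, p, rng):
    """Size of the largest component of an Erdos-Renyi graph G(n, p):
    each of the n*(n-1)/2 possible edges is retained independently with
    probability p, and components are merged with union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri
    counts = {}
    for v in range(n):
        r = find(v)
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values())
```

For example, averaging `largest_component_gnp(n, 1.0 / n, rng)` over a few hundred trials for n = 1000, 4000, 16000 shows the ratio to n^(2/3) stabilizing, while p = (1 ± ε)/n falls into the sub- and supercritical regimes (a) and (c).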

3. Proof of Theorem 1

Note that, for a given function on , the property that the size of the largest component is at least is an increasing property in with respect to . When , and , as ,

When and for some constant ,

Therefore, keeping Theorems 2 and 3 in mind, we obtain the lower bounds directly for the weakly subcritical, critical, and weakly supercritical cases. So we only need to prove the upper bounds in the rest of this section. We mainly combine the martingale argument [8] with the branching process method [10] to obtain the upper bound. To determine component sizes, we will use the following component exploration process on the random bipartite graph rather than on . (Recall that is usually constructed through the bipartite graph , a graph with bipartition , vertex set V, and element set U; each vertex in V and each element in U are joined by an edge of independently with probability , and an edge between and is present in if both and are adjacent to some common element of .)

Exploration process . Let be the component containing vertex in . In this procedure, we explore the vertices of V on . During the exploration, vertices will be active, inactive, or neutral, and the elements of U remain neutral all the time. At the beginning, all the vertices are neutral; we choose a vertex uniformly at random and make it active. At each time , we choose a vertex uniformly from the active vertices and check the vertices whose distance from is two in ; that is, we check all pairs , where u runs over all the neutral elements in . If is an edge in , then check , where runs over all the neutral vertices. If there is an element u such that and are present in , then make active; otherwise, keep it neutral. After checking all the neutral vertices in , let be inactive. When there is no active vertex, the component , which is the set of inactive vertices, is explored. Then, we choose a vertex uniformly at random from the remaining neutral ones and proceed as before.

Let be the number of vertices which become active due to the exploration of active vertex , which is the number of vertices at distance 2 from in , and denote the total number of active vertices at step , where . It is easy to see that for any ,

Define T to be the least t for which , i.e.,

Then, at the time , the set of explored vertices is precisely , which means .
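The exploration process above can be written out directly (a sketch with hypothetical names; for simplicity it pre-samples the whole bipartite graph instead of revealing edges lazily). Since exactly one active vertex is made inactive per step, the loop runs exactly T steps, and the inactive set at time T is the component of the starting vertex, so the two returned quantities coincide.

```python
import random

def explore_component(n, m, p, start, rng):
    """Run the exploration process on the bipartite model B(n, m, p):
    starting from `start`, repeatedly deactivate an active vertex and
    activate every neutral vertex at distance 2 from it (i.e., sharing
    an element with it).  Returns (component size, stopping time T)."""
    # element u -> set of vertices adjacent to u in the bipartite graph
    hits = [{v for v in range(n) if rng.random() < p} for _ in range(m)]
    active, inactive = {start}, set()
    t = 0
    while active:
        v = active.pop()              # the active vertex explored at step t
        t += 1
        for members in hits:          # elements stay available throughout
            if v in members:
                for w in members:     # neutral vertices sharing an element with v
                    if w not in inactive and w not in active and w != v:
                        active.add(w)
        inactive.add(v)
    return len(inactive), t
```

Averaging the stopping time over many starting vertices and seeds gives an empirical handle on the tail bounds for T derived in the following subsections.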

3.1. Weakly Subcritical Case

We only need to give an upper bound on T as . For this, we first define a random walk and a stopping time which is stochastically larger than , then bound the stopping time to determine the desired result. To this end, we need the following lemma.

Lemma 2 ([11], Lemma 8). Let η be an integer random variable with such that, for any integer , we have . Let be a sequence of i.i.d. random variables distributed as η and

where is an integer constant. Define t to be its hitting time of , i.e.,

If satisfies

then, for any integer ,

where and the constants in depend only on n and but not on .

Proof of the upper bound. To make use of Lemma 2, let be a sequence of i.i.d. random variables distributed as . Set

Take . Note and set for . For simplicity of notation, write in the rest of the paper. With this notation, when and , where and as ,

Define

Next, we will prove that there is satisfying

In fact, let and . Note that for any natural number i and , . Then, when n is large enough, we can obtain

Here, the last inequality holds since, noting ,

Also, we can get that

Third, it is easy to check that is continuous in θ when . Hence, the intermediate value theorem implies that the claim holds. So far, by (16) and (17) and the inequalities , for any ; when n is large enough, we can deduce that

Now we are in the position of exploring the components of . Suppose that k vertices have been explored at step i of the exploration process and that we explore through a vertex, say . Since the number of neighbors of the initially picked vertex has distribution and vertices can only be newly identified once, the number of newly explored vertices through has the distribution . By Lemma 1, , where is distributed as . That is to say, is dominated from above by for all , where T is defined in the exploration process. Let denote the total number of inactive (explored) vertices explored from time 0 to t by the exploration process on starting from a vertex chosen uniformly from the vertex set . Define . Notice that

Hence, there is a coupling of the processes and such that

which means that τ is stochastically larger than . So by Lemma 2, we get

For a positive constant , set . Then, by the inequality , we obtain that

Denote . Then,
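The random walk and hitting time in Lemma 2 are easy to simulate. In the sketch below (our own; the two-point step distribution is chosen purely for illustration and is not the compound binomial law of the paper), the walk S_t = h + sum_{i<=t} (eta_i - 1) is run until it first hits 0, and an empirical tail probability for the hitting time is estimated by Monte Carlo.

```python
import random

def hitting_time(etas, h):
    """First time t at which S_t = h + sum_{i<=t} (eta_i - 1) reaches 0.
    Since eta is a nonnegative integer, the walk decreases by at most 1
    per step, so 0 is hit exactly.  Returns None if 0 is never reached
    within the supplied increments."""
    s = h
    for t, eta in enumerate(etas, start=1):
        s += eta - 1
        if s <= 0:
            return t
    return None

def simulate_tail(mean, h, horizon, trials, seed):
    """Empirical P(hitting time > horizon) for a toy step law:
    eta = 2 with probability mean/2, else 0 (so E[eta] = mean, and
    the walk drifts downward when mean < 1, as in the subcritical case)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        etas = (2 if rng.random() < mean / 2 else 0 for _ in range(horizon))
        if hitting_time(etas, h) is None:
            exceed += 1
    return exceed / trials
```

With `mean` slightly below 1 and `h` small, the tail decays in the manner the lemma quantifies; with `mean` above 1 a positive fraction of walks escape the horizon, mirroring the supercritical regime.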

3.2. Inside the Critical Window

As the property that the size of the largest component is at least (for a given function on n) is an increasing property in with respect to , to give an upper bound we may assume that . Recall that is a sequence of i.i.d. random variables distributed as , and

For any real number , define

Let . Then, it is easy to check that is a martingale with . By the optional stopping theorem, we have . Let . Then, by the equality for any positive , we can obtain

Therefore,

which means . So, for any constant , we have

As in the proof of (27), we can determine that

Recall . Hence,

3.3. Weakly Supercritical Case

In this subsection, we appeal to the branching process method [10] to get the upper bounds in the weakly supercritical regime of . This method is also used by Kang et al. [12] to study other random graphs.

Theorem 4 ([13], Theorem 3.7). Consider the Galton–Watson branching process with supercritical offspring distribution . Let X be a random variable distributed as the number of offspring. Define

Assume ; let be the unique positive solution to , which means the supercritical branching process dies out with probability , and let

Then and

That is, is an offspring distribution of a subcritical branching process, which is often called the dual process of . Furthermore, if denotes the branching tree for and is the corresponding object for , then, for any finite tree ,
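Theorem 4 can be checked numerically for a concrete offspring law. In the sketch below (our own; Poisson(lam) offspring is used purely as an illustration, not the compound binomial distribution of the paper), the extinction probability rho is obtained by iterating the generating-function fixed point rho = exp(lam * (rho - 1)), and one can verify that the dual offspring mean lam * rho lies below 1, i.e., the dual process is subcritical.

```python
import math

def extinction_prob_poisson(lam, iters=500):
    """Extinction probability of a Galton-Watson process with Poisson(lam)
    offspring: iterate the generating-function fixed point
    q <- exp(lam * (q - 1)).  Starting from q = 0, the iterates increase
    monotonically to the smallest root in [0, 1], which is the extinction
    probability (it equals 1 iff lam <= 1)."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

# For supercritical lam > 1, the dual process of Theorem 4 has Poisson(lam * rho)
# offspring; its mean lam * rho is strictly below 1, so the dual is subcritical.
```

For instance, lam = 2 gives rho close to 0.2032, and the dual offspring mean 2 * rho is about 0.406, comfortably subcritical; for lam = 0.5 the iteration converges to 1, as it must for a subcritical process.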

Now we can obtain the following Lemma 3 about the supercritical branching process with offspring distribution and its dual process.

Lemma 3. Let denote the branching process with offspring distribution , let be its total size, and let be its survival probability, where , , and . Denote its dual process by and the total size of by . Then, the following holds.
(1) The survival probability is
(2) There is a positive constant such that
(3) For any integer ,

Proof. (1) It is well known that, for the Galton–Watson branching process with offspring distribution , the process survives with probability , where is the unique solution in to and is the corresponding probability generating function (see [14], Theorem 1.6.1). Therefore, we only need to check that, for the branching process with distribution ,

where

In fact, is continuous for , and by the inequality that for small enough ,

we can obtain that

Hence, the intermediate value theorem implies that the claim holds.

(2) To estimate , we note the following equalities and inequalities. First, (4) implies

Secondly, by Theorem 4,

In addition, we have

Now we are in the position of estimating . Let be the number of children of each vertex in the dual process . Then,

Hence, there is a positive constant such that the expectation of the total number of vertices in the dual branching process is