Abstract

A Markov chain approach to the study of randomly grown graphs is proposed and applied to some popular models that have found use in biology and elsewhere. For most randomly grown graphs used in biology, it is not known whether the graph or properties of the graph converge (in some sense) as the number of vertices becomes large. In particular, we study the behavior of the degree sequence, that is, the number of vertices with degree $k$, in large graphs, and apply our results to the partial duplication model. We further illustrate the results by application to real data.

1. Introduction

Over the past decade, networks have played a prominent role in many different disciplines including theoretical physics, technology, biology, and sociology [19]. Particularly in biology, networks have become fundamental for the description of complex data structures. The appeal of networks may, at least partly, be due to the fact that in addition to resting on a rigorous mathematical foundation [10–14], they also provide a convenient graphical representation of the data that allows for visual interpretation. Examples of complex data structures that can be described by networks include food webs in ecology, sexual partner networks in sociology, and protein interaction networks in biology.

The canonical model in random graph theory has been the Erdős–Rényi random graph, where each of a fixed number of vertices has a Poisson-distributed number of links to other vertices. A Poisson-distributed number of links has turned out to be unrealistic for many empirically observed networks, and other models have been suggested to accommodate the discrepancies between theory and observation. Barabási and Albert [2] proposed a simple stochastic model, the preferential attachment (PA) model, whereby the network is gradually built up by adding one vertex at a time until the network reaches the desired size. This model is able to account for the scale-free degree distribution that is observed in some empirical networks, but not for many of the other features and motifs that are found in real networks (e.g., [15–18]). Therefore, for mathematical and statistical analysis of network data, many other stochastic models have been proposed, in particular models that fall in the class of randomly grown graphs (RGGs; see the next section for a definition), which share the PA model's property of gradual growth. Overviews of different models and their properties can be found in [13, 16, 19, 20].

While the PA model has been under close mathematical scrutiny (e.g., [20]), other RGGs have been treated less extensively (e.g., [19, 21]) and mostly in the context of considering a continuous time approximation to the original discrete time Markov process (e.g., [13, 22, 23]). In this paper, we specifically address the question of the behavior of the vertex degrees as the number of vertices grows large. For a class of RGGs (including the PA model), the existence of a limiting degree distribution has been proven and its analytical form has been derived [21]. However, for most RGGs applied in biology, it is not known whether a limiting distribution exists, let alone its form.

Biologically, it is of great interest to know whether the network stabilizes as it grows, or whether the degree distribution is a function of the size of the network, even for large network sizes. This relates to the question of whether the network, in an evolutionary perspective, reaches an equilibrium, such that adding new vertices does not change the overall connectivity of the network. For example, in relation to protein interaction networks where vertices represent proteins and edges represent physical interactions between proteins, both scenarios seem a priori possible. Proteins may be able to engage in an unlimited number of interactions, or the number of interactions may be restricted by a number of factors such as space, time, and protein production rates. With the increasing statistical interest in analyzing complex biological networks with respect to evolutionary and functional properties [1, 5, 9, 13, 14, 24], it is also becoming of interest to understand the mathematical properties of the models.

We study a large class of RGGs that allows the construction of a simple, but time-inhomogeneous, Markov chain. For a given RGG, the corresponding Markov chain can be used to address questions about the RGG, for example, questions about the degree distribution. In particular, we focus on a special RGG, the partial duplication model, which has recently been used in the study of biological protein interaction networks [16, 18, 25, 26] and has formed the basis for new and more biologically realistic models (e.g., [16, 27]). The partial duplication model has two parameters ($p$ and $q$), and we give conditions under which the chain is ergodic or transient. Further, based on the time-inhomogeneous Markov chain, we define a time-homogeneous Markov chain and a continuous time, time-homogeneous Markov process, and demonstrate that these, in general, are easier to study and apply than the original chain. Proofs rely on general theory of discrete Markov processes, which makes it easy to prove similar results for other RGGs.

Finally, we apply our results to a collection of real protein interaction data.

2. RGGs

An RGG is a Markov chain $\{G_t\}_{t \ge t_0}$ on undirected graphs such that $G_t$ has exactly $t$ vertices, and the set of vertices of $G_t$ is a subset of the set of vertices of $G_{t+1}$ for all $t \ge t_0$. Note that we do not require the set of edges of $G_t$ to be a subset of the set of edges of $G_{t+1}$.

Denote by $n_k(t)$ the expected number of vertices of degree $k$ at time $t$. Since, by assumption, the graph has exactly $t$ vertices, the expected relative frequency of vertices of degree $k$ at time $t$ is $p_k(t) = n_k(t)/t$. For many RGGs, one can derive a recursive formula for $n_k(t)$, often referred to as the master equation [13]. Here, we consider a very general class of master equations given by
\[ n_k(t+1) = n_k(t) + \frac{1}{t}\sum_{j=0}^{\infty} m_{kj}\,n_j(t), \tag{2.1} \]
where $M = (m_{kj})$, $k, j \ge 0$, is an infinite real matrix with $m_{kj} \ge 0$ for $k \ne j$, and such that all columns sum to the same number, 1. Furthermore, assume, for suitable real numbers $a_j$ and $b_{kj}$, that
\[ m_{j+1,j} = a_j \ge 0, \qquad m_{kj} = b_{kj} \ge 0 \quad\text{for } k < j, \tag{2.2} \]
with $m_{kj} = 0$ for $k > j+1$. The latter condition guarantees that the vertex degree can increase by at most one. By construction, $n_k(t) = 0$ for $k \ge t$, and hence $M$ is effectively a finite matrix. We assume that the entries (2.2) in this submatrix are positive.
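The requirement that all columns of $M$ sum to the same number 1 simply encodes the fact that exactly one vertex is added per time step; summing (2.1) over $k$ gives the following one-line verification (added here for completeness):
\[ \sum_k n_k(t+1) = \sum_k n_k(t) + \frac{1}{t}\sum_j\Big(\sum_k m_{kj}\Big)n_j(t) = t + \frac{1}{t}\cdot 1\cdot t = t+1. \]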

One particular example of a model fulfilling the conditions above is the partial duplication model (details are found in Section 4). Its master equation is given by
\[ n_k(t+1) = \Big(1 - \frac{q+kp}{t}\Big)n_k(t) + \frac{1-q}{t}\sum_{j\ge k}\binom{j}{k}p^k(1-p)^{j-k}\,n_j(t) + \frac{q+(k-1)p}{t}\,n_{k-1}(t) + \frac{q}{t}\sum_{j\ge k-1}\binom{j}{k-1}p^{k-1}(1-p)^{j-k+1}\,n_j(t), \tag{2.3} \]
where $p$ and $q$ are the two parameters of the model. For several other models, the master equation takes a similar form. Among these models are the duplication-divergence model [16], an approximation to the duplication-mutation model [22, 23], and the models discussed in [21] after a suitable modification (see Section 5.2). Generally, (2.1) is fulfilled whenever the expected degree change of a vertex depends on its own degree only, and not on the degrees of the other vertices.

It follows immediately from (2.1) that
\[ p(t+1)^T = p(t)^T\,\frac{1}{t+1}\big(tI + M^T\big), \tag{2.4} \]
where $M^T$ is the transpose of $M$, and by assumption all rows of $M^T$ sum to 1. It follows that
\[ P_t = \frac{1}{t+1}\big(tI + M^T\big) \tag{2.5} \]
is a stochastic matrix; that is, $p(t+1)^T = p(t)^T P_t$, and the matrices $P_t$ describe a Markov chain with time-dependent transition probabilities.
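As a concrete illustration, the recursion $p(t+1)^T = p(t)^T P_t$ is straightforward to iterate numerically once $M$ is truncated to finitely many states. The following is a minimal Python sketch, not part of the original analysis: the toy matrix and the truncation are our own choices, and truncation introduces an error that the sketch ignores.

    import numpy as np

    def evolve(M, p0, t0, steps):
        # Iterate p(t+1)^T = p(t)^T P_t with P_t = (t*I + M^T) / (t+1).
        # M is a finite truncation of the matrix in (2.1); its columns
        # should sum to 1, so that each P_t is stochastic.
        p = np.asarray(p0, dtype=float)
        K = len(p)
        I = np.eye(K)
        for t in range(t0, t0 + steps):
            P_t = (t * I + M.T) / (t + 1.0)
            p = p @ P_t
        return p

    # Toy example: a two-state matrix whose columns sum to 1.
    M = np.array([[0.2, 0.7],
                  [0.8, 0.3]])
    print(evolve(M, [1.0, 0.0], t0=2, steps=10_000))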

Proposition 2.1. Assume that $\sum_j |m_{kj}| < \infty$ for all $k$. If $p_k(t) \to p_k$ pointwise for all $k$, then $p = (p_0, p_1, \ldots)$ satisfies
\[ p_k = \sum_{j=0}^{\infty} m_{kj}\,p_j \quad\text{for all } k, \qquad \sum_{k=0}^{\infty} p_k \le 1. \tag{2.6} \]

Proof. The second part of the proposition is a simple application of Fatou's lemma. By using (2.4), the definition of $p_k(t)$, and dominated convergence (note that $p_j(t) \le 1$), it follows that
\[ n_k(t+1) - n_k(t) = \sum_j m_{kj}\,p_j(t) \longrightarrow c = \sum_j m_{kj}\,p_j \tag{2.7} \]
for some real number $c$, and it remains to prove that $c = p_k$. Note that
\[ n_k(t+1) = n_k(t_0) + \sum_{s=t_0}^{t}\big(n_k(s+1) - n_k(s)\big), \tag{2.8} \]
and by using Cesàro's lemma, we get
\[ p_k = \lim_{t\to\infty}\frac{n_k(t+1)}{t+1} = \lim_{t\to\infty}\frac{1}{t+1}\sum_{s=t_0}^{t}\big(n_k(s+1) - n_k(s)\big) = c. \tag{2.9} \]

Consider the jump chain corresponding to the Markov chain given by the matrices $P_t$, that is, the Markov chain with transition probabilities proportional to those of $P_t$ for $k \ne j$, unless $k = j$, in which case the probability is put to 0. The jump chain has time-independent transition probabilities given by
\[ \hat p_{jk} = \frac{m_{kj}}{1 - m_{jj}}, \quad k \ne j, \tag{2.10} \]
and $\hat p_{jj} = 0$ for all $j$. If $m_{jj} = 1$, then we put $\hat p_{jj} = 1$. Occasionally, we consider a slightly modified jump chain (still with time-independent transition probabilities) which is allowed to stay in the same state with positive probability.

If a stationary distribution $\pi = (\pi_k)$ for the jump chain exists, it fulfills
\[ \pi_k = \sum_{j \ne k} \pi_j\,\frac{m_{kj}}{1 - m_{jj}}. \tag{2.11} \]
Assume that $\sum_k \pi_k/(1 - m_{kk}) < \infty$ and put
\[ u_k = \frac{\pi_k}{1 - m_{kk}}. \tag{2.12} \]
Then we obtain that
\[ (1 - m_{kk})\,u_k = \sum_{j \ne k} m_{kj}\,u_j, \quad\text{that is,}\quad u_k = \sum_j m_{kj}\,u_j, \tag{2.13} \]
and hence $(u_k)$ is a solution to the equation in Proposition 2.1. Furthermore, we may normalize $(u_k)$ to get a distribution, and hence (2.11) and (2.12) may be used to transfer a stationary distribution for the jump chain to the limit of the time-inhomogeneous Markov chain and vice versa.

In our main example, the partial duplication model (see Section 4 for details), we have
\[ 1 - m_{00} = 2q \qquad\text{and}\qquad 1 - m_{kk} \ge 1 \;\text{ for } k \ge 1, \tag{2.14} \]
and hence the assumption is fulfilled if $q > 0$.

3. A Continuous Time Approximation

In this section, we show that the time-inhomogeneous Markov chain converges to a continuous time, time-homogeneous Markov process after a suitable time transformation.

Denote by $T_n$ the time of the $n$th jump in the time-inhomogeneous chain after a given time $t_0$, and let $Y_n$ be the state to which it jumps. Set $T_0 = t_0$ and $Y_0 = X_{t_0}$, the state of the chain at time $t_0$ (throughout, $X_t$ denotes the state of the chain at time $t$). To simplify notation further, introduce $\lambda_j = 1 - m_{jj}$.

Note that at time $t$, the probability of staying in state $j$ is $(t + m_{jj})/(t+1)$. In particular, if we let $t_0 \le t$, then
\[ \Pr\big(T_1 > t \mid Y_0 = j\big) = \prod_{s=t_0}^{t-1}\frac{s + m_{jj}}{s+1} \approx \Big(\frac{t_0}{t}\Big)^{\lambda_j} \tag{3.1} \]
for large $t_0$ and $t$. Now consider the transformation $t \mapsto \log t$. It follows that
\[ \Pr\big(\log T_1 - \log t_0 > s \mid Y_0 = j\big) \longrightarrow e^{-\lambda_j s} \tag{3.2} \]
as $t_0 \to \infty$. That is, in the limit, the transformed waiting time is exponentially distributed with parameter $\lambda_j$.
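The approximation behind (3.1) and (3.2) can be traced as follows (a standard expansion, using $\lambda_j = 1 - m_{jj}$ and $\sum_{s=t_0}^{t-1} 1/(s+1) \approx \log(t/t_0)$):
\[ \prod_{s=t_0}^{t-1}\frac{s+m_{jj}}{s+1} = \exp\Bigg(\sum_{s=t_0}^{t-1}\log\Big(1 - \frac{\lambda_j}{s+1}\Big)\Bigg) \approx \exp\Bigg(-\lambda_j\sum_{s=t_0}^{t-1}\frac{1}{s+1}\Bigg) \approx \Big(\frac{t_0}{t}\Big)^{\lambda_j}, \]
so with $s = \log(t/t_0)$ the survival probability is approximately $e^{-\lambda_j s}$.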

Proposition 3.1. Let $Z_s^N$, $s \ge 0$, take the value of the time-inhomogeneous Markov chain at time $\lfloor Ne^s \rfloor$, where $N \ge t_0$ and $\lfloor x \rfloor$ denotes the integer part of $x$. At time 0, $Z_0^N = X_N$. For a fixed initial state, the process $Z^N$ converges to a continuous time, time-homogeneous Markov process as $N \to \infty$.

Proof. Clearly, the process $Z_s^N$, $s \ge 0$, is Markovian by definition. Let $S_n$ be the time of the $n$th jump, that is, $S_n = \log(T_n/N)$ and $Y_n = Z_{S_n}^N$ in the notation above. It follows from (3.2) that
\[ \Pr\big(S_{n+1} - S_n > s \mid Y_n = j\big) \longrightarrow e^{-\lambda_j s} \tag{3.3} \]
for $N \to \infty$. (The superscript $N$ in $Z^N$ is used to underline the implicit dependency on $N$.) Recall the transition probabilities (2.10) in the original jump chain. It follows immediately that
\[ \Pr\big(Y_{n+1} = k \mid Y_n = j\big) = \hat p_{jk} = \frac{m_{kj}}{\lambda_j}, \quad k \ne j, \tag{3.4} \]
where $\lambda_j = 1 - m_{jj}$. Combined with (3.3) this shows that, in the limit as $N \to \infty$, the rate of jumping from $j$ to $k$ is $\lambda_j\,\hat p_{jk} = m_{kj}$. More precisely, it demonstrates that $Z_s^N$, $s \ge 0$, converges to a continuous time, time-homogeneous Markov process with transition rate matrix $Q = (q_{jk})$ given by $q_{jk} = m_{kj}$ for $k \ne j$, and $q_{jj} = -\sum_{k \ne j} m_{kj}$. This sum is indeed finite because by assumption $m_{kj} = 0$ for $k > j+1$ (see Section 2).

Note that a stationary distribution $\nu$ for the continuous-time Markov chain fulfills the equation in Proposition 2.1 with $p$ replaced by $\nu$.

4. The Partial Duplication Model

Consider the model $\{G_t\}_{t \ge t_0}$, where $G_{t_0}$ is a simple graph with $t_0$ vertices, and where $G_{t+1}$ is obtained from $G_t$ in the following way: introduce a new vertex $v$ and choose an existing vertex $u$ uniformly. With probability $q$, connect $u$ and $v$. Independently of each other, connect each neighbor of $u$ to $v$ with probability $p$.
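For concreteness, one step of the model is easy to implement directly. The following Python sketch (ours, not the authors' program) grows a network to a given size; the adjacency-set representation and the parameter values are illustrative choices.

    import random

    def partial_duplication(initial, T, p, q, rng=random):
        # Grow a graph under the partial duplication model until it has
        # T vertices. 'initial' maps each vertex (labelled 0, 1, ...) to
        # its set of neighbours.
        g = {v: set(nbrs) for v, nbrs in initial.items()}
        while len(g) < T:
            u = rng.choice(list(g))      # choose an existing vertex uniformly
            v = len(g)                   # label of the new vertex
            g[v] = set()
            for w in g[u]:               # copy each edge (u, w) with probability p
                if rng.random() < p:
                    g[v].add(w)
            for w in g[v]:
                g[w].add(v)
            if rng.random() < q:         # connect u and v with probability q
                g[v].add(u)
                g[u].add(v)
        return g

    # Start, as in Section 4.2, from two vertices connected by an edge.
    g = partial_duplication({0: {1}, 1: {0}}, T=1271, p=0.3, q=0.1)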

In this section, we follow the path outlined in the previous sections. That is, we first find the jump chain corresponding to the partial duplication model. As already stated in Section 2, the master equation is given by
\[ n_k(t+1) = \Big(1 - \frac{q+kp}{t}\Big)n_k(t) + \frac{1-q}{t}\sum_{j\ge k}\binom{j}{k}p^k(1-p)^{j-k}\,n_j(t) + \frac{q+(k-1)p}{t}\,n_{k-1}(t) + \frac{q}{t}\sum_{j\ge k-1}\binom{j}{k-1}p^{k-1}(1-p)^{j-k+1}\,n_j(t). \tag{4.1} \]

It can be seen in the following way: the first term corresponds to the case where a vertex of degree $k$ keeps its degree, and this is the case unless one of two things happens: (i) the vertex is copied and receives a link to the new vertex, or (ii) it receives a link because one of its neighbors is copied. The probabilities of these two events are $q/t$ and $kp/t$, respectively. Similarly, the third term corresponds to the case where a vertex of degree $k-1$ gets a new link in one of the above-mentioned ways. The two remaining terms correspond to the cases where the new vertex has degree $k$. The new vertex has degree $k$ when a vertex of degree $j \ge k$ is copied and the new vertex receives exactly $k$ links to the neighbors of the copied vertex and no link to the copied vertex, or if a vertex of degree $j \ge k-1$ is copied and the new vertex receives a link to the copied vertex and exactly $k-1$ links to the neighbors of the copied vertex.

The cases $q = 0$ and $q = 1$ have been studied in [19, 26], respectively. Note, however, that the master equation given in [26] is incorrect. For general $q$, the model has been discussed in [18]. It follows immediately that
\[ n_k(t+1) = \Big(1 - \frac{q+kp}{t}\Big)n_k(t) + \frac{1-q}{t}\sum_{j\ge k} b_{kj}\,n_j(t) + \frac{q+(k-1)p}{t}\,n_{k-1}(t) + \frac{q}{t}\sum_{j\ge k-1} b_{k-1,j}\,n_j(t), \tag{4.2} \]
where we, in order to simplify notation, define
\[ b_{kj} = \binom{j}{k}p^k(1-p)^{j-k}. \tag{4.3} \]
From (4.2), we may read off the description of the matrix $M$. Its entries satisfy that
\[ m_{kj} = (1-q)\,b_{kj} + q\,b_{k-1,j} \;\text{ for } k < j, \qquad m_{j+1,j} = q + jp + q\,b_{jj}, \qquad m_{jj} = (1-q)\,b_{jj} + q\,b_{j-1,j} - (q+jp), \tag{4.4} \]
and $m_{kj} = 0$ otherwise. An easy calculation shows that
\[ 1 - m_{jj} = 1 + q + jp - (1-q)\,b_{jj} - q\,b_{j-1,j}, \tag{4.5} \]
from which it follows that the probability of jumping from state $j$ is
\[ 1 - \frac{(1-q)\,b_{jj} + q\,b_{j-1,j}}{1 + q + jp}. \tag{4.6} \]
Motivated by this formula, we allow the jump chain to stay in state $j$ with probability $\big((1-q)\,b_{jj} + q\,b_{j-1,j}\big)/(1+q+jp)$, and it follows that the transition probabilities in the modified jump chain satisfy that
\[ \tilde p_{jk} = \frac{(1-q)\,b_{kj} + q\,b_{k-1,j}}{1+q+jp} \;\text{ for } k \le j, \qquad \tilde p_{j,j+1} = \frac{q + jp + q\,b_{jj}}{1+q+jp}, \tag{4.7} \]
and $\tilde p_{jk} = 0$ otherwise.
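The transition probabilities (4.7) are simple to evaluate numerically. Below is a small Python sketch (ours, based on the formulas as reconstructed above), which also checks that each row sums to one.

    from math import comb

    def b(k, j, p):
        # Binomial probability b_{kj} from (4.3); zero outside 0 <= k <= j.
        if k < 0 or k > j:
            return 0.0
        return comb(j, k) * p**k * (1 - p)**(j - k)

    def jump_row(j, p, q):
        # Transition probabilities (4.7) of the modified jump chain from state j.
        z = 1.0 + q + j * p
        row = {k: ((1 - q) * b(k, j, p) + q * b(k - 1, j, p)) / z
               for k in range(j + 1)}
        row[j + 1] = (q + j * p + q * b(j, j, p)) / z
        return row

    # Sanity check: each row is a probability distribution.
    assert abs(sum(jump_row(7, 0.3, 0.2).values()) - 1.0) < 1e-12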

In particular, the chain is irreducible if and only if $0 < q < 1$. If $q = 0$, the state 0 is absorbing, and if $q = 1$, the state 0 is not reachable from any other state. If state 0 is ignored, the resulting chain is irreducible for $q = 1$.

4.1. Classification of States

We first recall a theorem from [28]. The theorem is reformulated in [29], and we will use that formulation. If $q = 1$, then we ignore the state 0, and since in this case all transition probabilities $\tilde p_{j0}$ are zero, the conditions stated in the theorems below stay the same.

Theorem 4.1. Let $\{X_n\}$ be a Markov chain on the nonnegative integers with transition probabilities $p_{jk}$. If there exist a sequence of nonnegative real numbers $\{y_k\}$ with $y_k \to \infty$ and an integer $N$ such that
\[ \sum_k p_{jk}\,y_k \le y_j \quad\text{for all } j \ge N, \tag{4.8} \]
then the chain is ultimately recurrent.

Applied to the partial duplication model, the theorem states that if there is a sequence of nonnegative real numbers $\{y_k\}$ with $y_k \to \infty$ such that
\[ \sum_k \tilde p_{jk}\,y_k \le y_j \quad\text{for all } j \ge N, \tag{4.9} \]
then, if $q = 0$, the probability of ultimate absorption in 0 is 1. If $q > 0$, the conclusion of the theorem is that all states are persistent.

The solution of $\log x = -x$, where $\log$ denotes the natural logarithm, is known as the omega constant, and we denote it by $\Omega$. We have $\Omega \approx 0.5671$.
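Numerically, $\Omega$ is the fixed point of $x \mapsto e^{-x}$ and can be found by direct iteration (or, for example, via the Lambert W function); a two-line check in Python:

    from math import exp

    x = 0.5
    for _ in range(100):        # the iteration x <- exp(-x) contracts near Omega
        x = exp(-x)
    print(x)                    # 0.567143...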

Proposition 4.2. Let $p < \Omega$ in the partial duplication model. If $q = 0$, the probability of ultimate absorption in 0 is 1, and if $q > 0$, the Markov chain is persistent.

In [26] it is claimed that for $q = 1$ there exists a limiting distribution different from the one we find.

Proof. Let $y_k = \log(k+1)$. Then $\{y_k\}$ is a nonnegative sequence of real numbers with $y_k \to \infty$, and hence it suffices to show that, for the choices of $p$ and $q$ in the proposition, the sequence satisfies (4.9). Since $\log$ is a concave function, Jensen's inequality implies that $E[\log X] \le \log E[X]$ for a positive random variable $X$. In particular, using this for binomially distributed random variables $X_j$ with parameters $j$ and $p$, we get
\[ \sum_k \tilde p_{jk}\,y_k \le \frac{(1-q)\log(jp+1) + q\log(jp+2) + (q+jp)\log(j+2)}{1+q+jp}, \tag{4.10} \]
and hence we need only prove that the right-hand side of this inequality is less than or equal to $\log(j+1)$ for $j \ge N$. This may, for $j \ge 1$, be rewritten as
\[ (1-q)\log\frac{jp+1}{j+1} + q\log\frac{jp+2}{j+1} + jp\log\frac{j+2}{j+1} + q\log\frac{j+2}{j+1} \le 0, \tag{4.11} \]
and here the first two terms together converge to $\log p$, while the two remaining terms converge to $p$ and $0$, respectively. Here we have used that
\[ \lim_{j\to\infty}\, j\log\frac{j+2}{j+1} = 1. \tag{4.12} \]
Note that since $p < \Omega$ by assumption, we have $\log p + p < 0$, and hence the inequality in (4.11) holds for all $j \ge N$, for a suitable $N$.
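The inequality used in the proof can be checked numerically. The Python sketch below evaluates the Jensen bound on the drift of $\log(X_n + 1)$, following the transition probabilities (4.7) as reconstructed above; the parameter values are illustrative.

    from math import log

    def jensen_drift(j, p, q):
        # Right-hand side of (4.10) minus log(j+1); negative values are
        # consistent with recurrence, cf. (4.11).
        z = 1 + q + j * p
        rhs = ((1 - q) * log(j * p + 1) + q * log(j * p + 2)
               + (q + j * p) * log(j + 2)) / z
        return rhs - log(j + 1)

    print(jensen_drift(200, 0.50, 0.1))   # < 0: p below the omega constant
    print(jensen_drift(200, 0.65, 0.1))   # > 0: p above the omega constant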

Since zero is the only absorbing state, it follows that for $q = 0$ and $p < \Omega$, a limiting distribution takes the form $(p_0, p_1, \ldots) = (1, 0, 0, \ldots)$, with all mass in state 0. To infer the behavior of the Markov chain for other values of $p$ and $q$, we first recall a result proved in [30].

Theorem 4.3. Let $\{X_n\}$ be an irreducible, aperiodic Markov chain. If there exist a sequence of positive real numbers $\{y_k\}$ and an integer $N$ with
\[ \sum_k p_{jk}\,y_k \le y_j \;\text{ for } j \ge N, \qquad y_j < \inf_{k < N} y_k \;\text{ for some } j \ge N, \tag{4.13} \]
then the chain is transient.

Let $\varphi$ denote the golden ratio conjugate. That is, $\varphi$ is the unique positive real number satisfying that $\varphi^2 + \varphi = 1$. We have $\varphi \approx 0.6180$.

Proposition 4.4. Let $p > \varphi$ in the partial duplication model. Then the Markov chain is transient for all $q$.

Proof. Put $y_k = 1/(k+1)$ for all $k$. Then $y_k > 0$ for all $k$, and $y_k \to 0$, so the second inequality in (4.13) is automatically satisfied. Thus, in order to apply Theorem 4.3, we only need to verify that $\{y_k\}$ is a solution to the inequalities in (4.9).
It follows from a straightforward calculation that
\[ \sum_k \tilde p_{jk}\,y_k \le \frac{1}{1+q+jp}\Big(\frac{1}{(j+1)p} + \frac{q+jp}{j+2}\Big), \tag{4.14} \]
such that $\{y_k\}$ is a solution if the right-hand side of this inequality is less than or equal to $1/(j+1)$ for $j \ge N$. This is equivalent to
\[ \frac{1}{p} - \frac{q+jp}{j+2} \le 1, \tag{4.15} \]
and the left-hand side converges to $1/p - p$ as $j \to \infty$. Since $p > \varphi$, it follows that $1/p - p < 1$, and hence the inequality in (4.15) holds for all $j \ge N$, for a suitable $N$.
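The "straightforward calculation" behind (4.14) rests on a closed form for the relevant binomial expectation, which we record here for completeness: for $X_j$ binomially distributed with parameters $j$ and $p$,
\[ E\Big[\frac{1}{X_j+1}\Big] = \sum_{k=0}^{j}\frac{1}{k+1}\binom{j}{k}p^k(1-p)^{j-k} = \frac{1}{(j+1)p}\sum_{k=0}^{j}\binom{j+1}{k+1}p^{k+1}(1-p)^{j-k} = \frac{1-(1-p)^{j+1}}{(j+1)p} \le \frac{1}{(j+1)p}. \]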

Let $0 < q < 1$ such that the chain is irreducible. One may ask for which $p$ the chain is ergodic. By Proposition 4.4, a necessary condition is $p \le \varphi$. However, as we will see, this may not be sufficient. To see this, we first recall another theorem from [28].

Theorem 4.5. Let $\{X_n\}$ be an irreducible, aperiodic Markov chain. If there exist an $\epsilon > 0$, an integer $N$, and a nonnegative sequence of real numbers $\{y_k\}$ such that
\[ \sum_k p_{jk}\,y_k \le y_j - \epsilon \;\text{ for } j \ge N, \qquad \sum_k p_{jk}\,y_k < \infty \;\text{ for } j < N, \tag{4.16} \]
then the chain is ergodic.

In the partial duplication model, the second condition in the theorem is always fulfilled since $\tilde p_{jk} = 0$ for $k > j+1$. Let $X_n$ denote the state of the chain at time $n$. If there exist $\epsilon > 0$ and $N$ such that
\[ E\big[X_{n+1} - X_n \,\big|\, X_n = j\big] \le -\epsilon \quad\text{for } j \ge N, \tag{4.17} \]
then this $\epsilon$, together with $y_k = k$, will work in Theorem 4.5. This is pointed out in [28].

Proposition 4.6. Let $0 < q < 1$. Then the Markov chain is ergodic for all $p < (1-q)/2$.

Proof. We find
\[ E\big[X_{n+1} - X_n \,\big|\, X_n = j\big] = \frac{(q+jp)(j+2)}{1+q+jp} - j = \frac{j(q+2p-1) + 2q}{1+q+jp}. \tag{4.18} \]
Note that $p < (1-q)/2$ implies $q + 2p - 1 < 0$, and hence $E[X_{n+1} - X_n \mid X_n = j] \le -\epsilon$ for all sufficiently small $\epsilon > 0$ and all large $j$. That is, for a large $N$, (4.17) is fulfilled.

In general, it is not an easy task to actually find the stationary distribution of the jump chain or the time-inhomogeneous Markov chain. For $q = 0$, an attempt to solve (2.4) has been made in [19]. The authors assume that $p_k(t)$ converges and show that, under this assumption, the limit (for $k \ge 1$) has a power-law tail. However, this does not establish the existence of a stationary distribution. Further, the power law they provide is in fact not a distribution. In one special case, the stationary distribution can be given in closed form.

It is natural to ask what happens for the values of $p$ and $q$ not covered in the propositions above. In general, this is difficult. However, if $\Omega$ is not the maximal upper bound in Proposition 4.2, the culprit must be the particular choice of $\{y_k\}$. Indeed, the damage done by the use of Jensen's inequality is not severe. This may be seen in the following way: denote by $\mu_i$ the $i$th central moment of a binomially distributed random variable $X_j$ with parameters $j$ and $p$. From [31], we get $\mu_i = O(j^{\lfloor i/2 \rfloor})$, and by expanding $\log$ as a Taylor series around the mean $jp$, it follows that $E[\log(X_j + 1)] = \log(jp+1) + O(1/j)$.

4.2. Application to Protein Interaction Networks

We used the computer program developed for [18] to estimate the parameters under the partial duplication model for different protein interaction networks. The Plasmodium falciparum (P. falciparum) dataset is obtained from [32], and the remaining datasets are downloaded from the Database of Interacting Proteins (http://dip.doe-mbi.ucla.edu). Curiously, we note that according to Proposition 4.6, all estimated pairs $(\hat p, \hat q)$ correspond to ergodic Markov chains, indicating that the networks stabilize as the number of vertices becomes large.

For one of the networks, P. falciparum, we conducted some further experiments where 50 networks were simulated with the same number of vertices as in P. falciparum (1271) and the degree distribution was computed. All simulations were started from an initial network of two vertices connected by an edge. Furthermore, 1271 runs of the corresponding Markov chain were performed, and the degree distribution was calculated and compared to the degree distribution obtained from the simulated networks. Here, the initial state of the Markov chain is 1. The length of the runs was varied, as shown in Figure 1.

The simulations indicate that the Markov chain approach may be used to approximate the degree distribution. This is particularly useful for simulating large networks in terms of memory usage: storing the connections between vertices requires memory proportional to the number of vertices times the average number of connections, whereas simulating the corresponding Markov chain requires memory proportional only to the current value of the chain.
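A sketch of such a chain-based simulation in Python, using the transition probabilities (4.7) as reconstructed above; the run length and its mapping to network size are left to the user, as in the experiments described above, and the parameter values are illustrative.

    import random
    from math import comb
    from collections import Counter

    def step(j, p, q, rng):
        # Sample the next state of the modified jump chain (4.7) from state j.
        z, r, acc = 1 + q + j * p, rng.random(), 0.0
        for k in range(j + 1):
            bk = comb(j, k) * p**k * (1 - p)**(j - k)
            bk1 = comb(j, k - 1) * p**(k - 1) * (1 - p)**(j - k + 1) if k > 0 else 0.0
            acc += ((1 - q) * bk + q * bk1) / z
            if r < acc:
                return k
        return j + 1                     # remaining mass belongs to state j+1

    def degree_sample(runs, length, p, q, seed=0):
        # Empirical distribution of the chain state after 'length' jumps,
        # starting every run in state 1; memory use is O(1) per run.
        rng = random.Random(seed)
        counts = Counter()
        for _ in range(runs):
            j = 1
            for _ in range(length):
                j = step(j, p, q, rng)
            counts[j] += 1
        return counts

    dist = degree_sample(runs=1271, length=200, p=0.3, q=0.1)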

The empirical degree distribution for P. falciparum shows that the partial duplication model does not provide a perfect fit. For example, no zero-degree vertices are included in the dataset (the experimenters' choice), and this would need to be incorporated into the model.

5. Other Models

We have applied the Markov chain approach to other models, and in this section we briefly present some of the results.

5.1. The Duplication-Divergence Model

The duplication-divergence model is an extension of the partial duplication model, and it has been used for the analysis of protein interaction networks as well [15, 16, 27, 33]. However, the model is slightly more complicated than the partial duplication model, and it has three parameters $p$, $q$, and $r$. A step in this model is as follows: pick a vertex $u$ in the graph uniformly, and add a new vertex $v$. Connect $u$ and $v$ with probability $q$, and create an edge between $v$ and $w$ whenever there is an edge between the vertices $u$ and $w$. Now modify the edge pairs $\{(u,w), (v,w)\}$ independently of each other in the following way: with probability $p$, keep both edges; otherwise, with probability $r$, keep $(u,w)$ and delete $(v,w)$, and with probability $1-r$, keep $(v,w)$ and delete $(u,w)$.

One can derive a master equation and go through the construction of the modified jump chain. In this case, the transition probabilities satisfy that
\[ \tilde p_{jk} = \frac{(1-q)\big(b_{kj}(\pi_u) + b_{kj}(\pi_v)\big) + q\big(b_{k-1,j}(\pi_u) + b_{k-1,j}(\pi_v)\big)}{2 + jp} \;\text{ for } k \le j, \qquad \tilde p_{j,j+1} = \frac{jp + q\big(b_{jj}(\pi_u) + b_{jj}(\pi_v)\big)}{2 + jp}, \tag{5.1} \]
and $\tilde p_{jk} = 0$ otherwise. Here $\pi_u = p + (1-p)r$ and $\pi_v = p + (1-p)(1-r)$, and $b_{kj}(\pi)$ is the binomial probability from (4.3) with $p$ replaced by $\pi$.

In order to apply Theorem 4.1, we put $y_k = \log(k+1)$. It follows from simple calculations, again using Jensen's inequality, that $\{y_k\}$ is a solution to (4.9) if $p$ and $r$ satisfy that
\[ \log\big(\pi_u\,\pi_v\big) + p \le 0. \tag{5.2} \]
Note that in the special cases $r = 0$ and $r = 1$, the left-hand side of the inequality reduces to $\log p + p$, the same expression as seen earlier. Actually, for $r = 1$ the model is the partial duplication model. It follows that if $r = 0$ or $r = 1$, a solution of (5.2) must satisfy that $p < \Omega$. For $0 < r < 1$, an exact upper bound on $p$ is harder to derive. For these values of $r$, the solution is less than $\Omega$ and attains a minimum for $r = 1/2$.

5.2. Another Class of Models

We believe that the Markov chain approach presented in this paper may be used to infer the behavior of other classes of models. In [21], simple models with master equations of the form
\[ n_k(t+1) = n_k(t) + \frac{1}{t}\big(a_{k-1}\,n_{k-1}(t) - a_k\,n_k(t)\big) + c_k, \tag{5.3} \]
where $a_k$ and $c_k$ are nonnegative numbers with $\sum_k c_k = 1$, are studied. The resulting master equation may be written in matrix form as
\[ \begin{pmatrix} t+1 \\ n(t+1) \end{pmatrix} = \frac{1}{t}\begin{pmatrix} t & \mathbf{1}^T \\ c & tI + A \end{pmatrix}\begin{pmatrix} t \\ n(t) \end{pmatrix}, \tag{5.4} \]
where $\mathbf{1} = (1, 1, \ldots)^T$, and $c$ and $n(t)$ are the column vectors consisting of all the numbers $c_k$ and $n_k(t)$, respectively. The matrix $A$ is given by
\[ A = \begin{pmatrix} -a_0 & 0 & 0 & \cdots \\ a_0 & -a_1 & 0 & \cdots \\ 0 & a_1 & -a_2 & \cdots \\ \vdots & & \ddots & \ddots \end{pmatrix}. \tag{5.5} \]

Note that the columns of the partitioned matrix in (5.4) sum to $t+1$. That is, when divided by $t+1$, the transpose of this matrix represents a Markov chain with time-dependent transition probabilities. We identify the countable set of states with $\{\ast, 0, 1, 2, \ldots\}$, where the artificial state $\ast$ accounts for the first row and the first column in the partitioned matrix.

We may compute the corresponding jump process, and again it turns out that its transition probabilities are time-independent. We may get rid of the state $\ast$ by simply forgetting the time we spend there. That is, for $j, k \in \{0, 1, 2, \ldots\}$, we replace $\hat p_{jk}$ by the sum $\hat p_{jk} + \hat p_{j\ast}\,\hat p_{\ast k}$, and this leads to a Markov chain with transition probabilities given by
\[ \hat p_{jk} = \frac{a_j\,\delta_{k,j+1} + c_k}{1 + a_j}, \tag{5.6} \]
where $\delta$ denotes the Kronecker delta.
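Since (5.6) has an explicit renewal-like structure, its stationary distribution is easy to approximate numerically. A Python sketch (ours): the truncation level K and the example kernels are illustrative choices, and truncation introduces a small error.

    import numpy as np

    def stationary(a, c, K=200, iters=5000):
        # Power iteration for the stationary distribution of the jump
        # chain (5.6), truncated to the states 0, ..., K-1.
        P = np.zeros((K, K))
        for j in range(K):
            for k in range(K):
                P[j, k] = c(k) / (1 + a(j))
            if j + 1 < K:
                P[j, j + 1] += a(j) / (1 + a(j))
        pi = np.full(K, 1.0 / K)
        for _ in range(iters):
            pi = pi @ P
        return pi / pi.sum()      # renormalise to offset truncation loss

    # Example: linear kernel a_k = k and a new vertex of degree 1 (c_1 = 1).
    pi = stationary(lambda k: float(k), lambda k: 1.0 if k == 1 else 0.0)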

These jump chains are in fact all ergodic, and the stationary distribution of the time-inhomogeneous Markov chains has been derived in [21].

5.3. Other Extensions

Still other models do not fall under the conditions and assumptions introduced in this paper. For example, the master equation of the most general form of the duplication-mutation model [22, 23] depends on terms of order $1/t^2$, and the columns of $M$ do not sum to the same number, both because of these terms and because the requirement $m_{kj} = 0$ for $k > j+1$ is not fulfilled.

Some of these problems may be circumvented at the cost of a more technical and elaborate exposition, but often the results need to be stated as limiting results. For example, if the columns of $M$ do not sum to the same number, the jump chain in (2.10) should be considered as emerging in the limit $t \to \infty$.

Furthermore, one may choose to ignore terms of order $1/t^2$ in the master equation. As $t \to \infty$, the influence from such higher-order terms often becomes insignificant, justifying the approximation. This is, for example, the case for the duplication-mutation model.

Acknowledgments

M. Knudsen is supported by the Centre for Theory in the Natural Sciences, University of Aarhus. C. Wiuf is supported by the Danish Cancer Society and the Danish Research Councils. The authors would like to thank an anonymous reviewer for valuable suggestions that improved the clarity of the paper.