Abstract

We consider stochastic population processes (Markov jump processes) that develop as a consequence of the occurrence of random events at random time intervals. The population is divided into subpopulations or compartments. The events occur at rates that depend linearly on the number of individuals in the different compartments. The dynamics is presented in terms of the Kolmogorov Forward Equation in the space of events and projected onto the space of populations when needed. The general properties of the problem are discussed. Solutions are obtained using a revised version of the Method of Characteristics. After a few examples of exact solutions, we systematically develop short-time approximations to the problem. While the lowest-order approximation matches the Poisson and multinomial heuristics previously proposed, the higher-order approximations are completely new. Further, we analyse a model for insect development as a sequence of developmental stages regulated by rates that are linear in the corresponding subpopulations. Transition to the next stage competes with death at all times. The process ends at a predetermined stage, for example, pupation or adult emergence. In its simplest version, all the stages are distributed with the same characteristic time.

1. Introduction

Stochastic population dynamics deals with the (stochastic) time evolution of groups of similar individuals (called populations or subpopulations) immersed in a habitat. The goal is to describe the evolution of population numbers and the number of events associated with changes in the populations.

Most of the attention in the past has been paid to the populations, considering events only for their extrinsic value as jumps in the populations. However, this perspective must be revised. In any study of vector-transmitted diseases, such as in [1, 2], the most relevant statistics correspond to the interactions between hosts and vectors. Such interactions are associated with events in the life cycle of the vector. In the case of arthropods transmitting diseases to mammals, blood feeding to complete oogenesis is the opportunity to transmit pathogens. In other problems, the development of insects is experimentally described by the sequence of transformations in their life cycle (such as moulting) [35]: each transformation must be considered an event. To our knowledge, explicit use of the event structure to study population changes in Markov jump processes was first introduced in [6], along with a short-time approximation based on self-consistent Poisson processes.

The present research originates in our efforts to model vector-transmitted diseases, such as Dengue, starting from a description of the life cycle of Aedes aegypti, the vector [7–9]. In modeling insect development with an aquatic phase (as is the case for Aedes aegypti and Drosophila melanogaster), a crucial point is to represent accurately the statistics of the duration of the aquatic phase (also known as the time-of-emergence statistics), as it determines the response of the population to climatic events such as rains [10]. A deterministic model for development was proposed in [11] as a sequential process with associated linear rates. The final sections of this paper describe a stochastic development model.

While [6] deals with the stochastic approximation of general problems (achieving a method with error ), this paper specifically addresses the formulation and solution of population dynamics problems in event space with linear rates, that is, rates that depend linearly on the populations (in the chemical literature these are known as first-order reactions [12]). We provide both exact solutions and general approximation techniques with error . Results from [6] for the linear case are a modified version of one of the approximation schemes in this paper (the Poisson approximation, Section 5). The linear problem is mathematically simpler than the general case, and a few general results can be established before resorting to approximations.

Linear processes are an important part of most biological dynamical descriptions. Using an analogy from classical population dynamics, linear (Malthusian) population growth [13] describes the evolution of a population at low densities (i.e., where the local environment is essentially unaffected by the population growth and the individuals do not interfere with each other). More generally, processes that consider individuals on a somewhat isolated basis can be satisfactorily described with linear rates. However, whether to use linear descriptions or not is ultimately determined by the nature of the process (or subprocess). For example, finite environments will sooner or later force individuals in growing populations to interact, and nonlinear rates become necessary, as in Verhulst’s description of population growth [14].

Perhaps the oldest application of linear processes to population dynamics concerns birth and death processes. Historically, according to Kendall [15], the first to propose birth and death equations in population dynamics was Furry [16], for a system of physical origin. Currently, this approach is used in several sciences: linear stochastic processes have been proposed for the development of tumours [17, 18], and drug delivery and cell replication are also described in these terms [19]. Concerning tumours, the model proposed in [17, 18] is based on linear individual processes, a detailed treatment of which would require a paper of its own. The accumulation of individual processes is finally approximated with a “tunnelling process” corresponding to a maturation cascade that can be easily described with the tools of this paper (see (71)). Concerning drug delivery, we will return to [19] in detail in Section 7.

Substantial use of population dynamics driven by jump processes has also been made in chemistry, particularly after the rediscovery by Gillespie [20] of Kendall’s simulation method [21, 22]. Poisson [23, 24], binomial [25, 26], and multinomial [27] heuristics for the short-time integrals have been proposed. There is, however, an important difference between chemistry and biology concerning the issues of this paper. On the one hand, linear rates are seldom the case in chemical reactions, except perhaps for irreversible decompositions or isomeric recombination. Attention to linear rates in chemistry is relatively recent [12] when compared to over 200 years of Malthusian population growth. On the other hand, detailed event statistics does not appear to be fundamental either: highly diluted chemical reactions in a test tube may easily generate events of each type. Consequently, attention to event-based descriptions appears to be absent from the chemistry literature. This might, however, change in the future, since the technical possibilities offered by, for example, “in-cell chemistry” and “nanochemistry” involve dealing with a lower number of events, where tiny differences in event count could be relevant for the overall outcome.

In the mathematical literature, attention to jump processes starts with Kolmogorov’s foundational work [28] and its further elaboration by Feller [29]. A substantial effort to relate the stochastic description to deterministic equations was made by Kurtz [30–34]. However, attention has always focused on the dynamics in population space rather than event space.

This work focuses on the event statistics associated with linear processes. Even for cases where the detailed event statistics is not of central interest, such an approach offers an alternative to the more traditional formulation in population space.

Further, we discuss in detail the stochastic version of a linear developmental model. Specifically, we deal with a subprocess in the life cycle of insects that admits both a linear description and an exact solution, namely, the development of immature stages, which is a highly individual subprocess. From egg form to adult form, the development of insects goes through several transformations at different levels. For example, at the most visible level, insects undergoing complete metamorphosis evolve in the sequence egg, larva, pupa, and adult. Further, different substages can be recognised by inspection in the development of larvae (instars) and even more stages when observed by other methods [35]. The immature individual may die along the process or otherwise complete it and exit as adult.

In Section 2 we formulate the dynamical population problem in event space. In Section 2.1 we present the Kolmogorov Forward Equations (for Markov jump processes) for the most general linear systems of population dynamics, in the form appropriate to probabilities and generating functions. The general solution of these equations will be written using the Method of Characteristics, reformulated from the standard geometrical presentation [36] into a dynamical presentation. We will analyse the general structure of the equations and in particular their first integrals of motion and the projection onto population space (Sections 2 and 3). Section 4 contains two classical examples. In Section 5, Poisson and multinomial approximations (both with error ) are derived and a higher-order approximation with error (with as an arbitrary positive integer) is elaborated. The larval development model is developed in Sections 6 and 7, treating the two general cases. We also revisit in Section 7 the question of drug delivery introduced in [19], in terms of the present approach. Concluding remarks are left for the final section.

2. Mathematical Formulation

The main tools of stochastic population dynamics are a list of (nonnegative, integer) populations , a list of events , an (integer) incidence matrix describing how event modifies population , and a list of transition rates describing the probability rate per unit time of occurrence of each event.

An important feature of the present approach is that the dynamical description will be handled in event space. In this sense, the “state” of the system is given by the array indicating how many events of each class have occurred from time up to time . Hence, event space is, in this setup, a nonnegative integer lattice of dimension . The connection with the actual population values at time is ( denotes the initial condition)
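The bookkeeping expressed by this relation is straightforward to implement. The sketch below is purely illustrative (the function and variable names are not from the paper): it recovers the population values from the accumulated event counts via the incidence matrix.

```python
# Illustrative sketch (names are not from the paper): recover the
# population values from the accumulated event counts,
#   n_i(t) = n_i(0) + sum_alpha delta[i][alpha] * E[alpha](t),
# where delta is the population-by-event incidence matrix.

def populations_from_events(n0, delta, events):
    """Project an event-space state E onto population space."""
    return [n0_i + sum(d * e for d, e in zip(row, events))
            for n0_i, row in zip(n0, delta)]

# Example: one population with a birth event (+1) and a death event (-1);
# 3 births and 5 deaths after starting from 10 individuals leave 8.
print(populations_from_events([10], [[+1, -1]], [3, 5]))  # [8]
```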

Definition 1 (linear independence). One says that the populations are linearly dependent if there is a vector such that for all one has . Otherwise, one says that the populations are linearly independent, abbreviated LI.
In similar form, we say that the populations have linearly dependent increments if there is so that for all it holds that .
The populations are said to have linearly independent increments (abbreviated LII) if they do not have linearly dependent increments.

Throughout this work we will assume the following.

Assumption 2. The populations have linearly independent increments; that is, all involved populations are necessary to provide a complete description of the problem; none of them can be expressed as a linear combination of the remaining ones for all times.

Lemma 3. (a) If the populations have linearly independent increments, then the populations are linearly independent.
(b) The matrix has no nonzero vectors such that for all if and only if the populations have linearly independent increments.

Proof. (a) The assertion is equivalent to the following: if the populations are linearly dependent, then they have linearly dependent increments, which is immediate after the definition of linearly dependent increments.
(b) Assume that for all there exists such that ; then and the populations have linearly dependent increments.
Conversely, assume that the populations have linearly dependent increments and select time intervals such that and for ; then, we have that . Statement (b) is the negation of these facts.

Strictly speaking, the last step in the above proof takes for granted that there exist time intervals where only one event occurs. This is assured by the next assumption (third item). Also, the initial conditions and the problem description should be such that all defined events are relevant for the dynamics (e.g., the event “contagion” is irrelevant for an epidemic problem with zero infectives as initial condition).

2.1. Dynamical Equation—Basic Assumptions

Population numbers evolve in “jumps” given by the occurrence of each event (birth, death, etc.). In the sequel, we will use Latin indices , , , and for populations and Greek indices , , and for events.

The ultimate goal of stochastic population dynamics is to calculate the probabilities of having events in each class up to time , given an initial condition .

The stochastic dynamics we are interested in correspond to a Density Dependent Markov Jump Process [37, 38]. For a sufficiently small time interval , we consider the following.

Assumption 4. For each event: (i) event occurrences in disjoint time intervals are independent; (ii) the Chapman-Kolmogorov equation [28, 29] holds; (iii) ; (iv) ; (v) , . Here is the elapsed time since the time and are the events of type accumulated up to .

It goes without saying that probabilities are assumed to be differentiable functions of time. denotes the probability rate for event , which is assumed in the following lemmas to be a smooth function of the populations, not necessarily linear (yet). Two well-known general results describe the dynamical evolution under these assumptions.

Lemma 5 (see Theorem   in [29]). Under Assumptions 2 and 4, the waiting time to the next event is exponentially distributed.

Theorem 6 (see [28] and Theorem   in [29]). Under Assumptions 2 and 4, the dynamics of the process in the space of events obeys the Kolmogorov Forward Equation, namely;

Proofs of these results are given in Appendix A.
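Lemma 5 also underlies the standard stochastic simulation (Gillespie-type) algorithm mentioned in the Introduction: the waiting time to the next event is exponentially distributed with the total rate as parameter, and the event type is then chosen proportionally to its rate. A minimal sketch, with illustrative names and rates not taken from the paper:

```python
import math
import random

def gillespie_step(n, rates, delta, rng=random):
    """Advance one event: draw the exponential waiting time, pick the
    event proportionally to its rate, and update the populations."""
    w = [r(n) for r in rates]
    total = sum(w)
    if total == 0.0:
        return math.inf, None, list(n)   # no event can occur
    tau = rng.expovariate(total)         # exponential waiting time (Lemma 5)
    u, acc = rng.random() * total, 0.0
    for alpha, wa in enumerate(w):
        acc += wa
        if u < acc:
            break
    new_n = [ni + delta[i][alpha] for i, ni in enumerate(n)]
    return tau, alpha, new_n

# One step of a pure death process with per-capita rate 0.5:
random.seed(0)
tau, alpha, n = gillespie_step([5], [lambda n: 0.5 * n[0]], [[-1]])
# alpha == 0 (the single death event) and n == [4]
```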

2.2. Linear Rates

We will assume that the transition rates depend, in a linear fashion, on the populations only. To distinguish them from general rates (denoted above by the letter ), we will use the letter to denote linear transition rates. Hence, . Since rates are nonnegative, the proportionality coefficient between event and subpopulation satisfies . These proportionality coefficients convey the environmental information and as such they may be time-dependent, since the environment varies with time. Whenever possible, we do not write this time dependency explicitly, to lighten the notation.

Lemma 7. For the case of linear rates, population space is positively invariant under the time evolution given by (1) and the previous assumptions if and only if (no sum).

Proof. The probability of occurrence of event in a small time interval is . After an occurrence, population is modified to . Population space is positively invariant under time evolution if and only if the probability of occurrence of event is zero for those population values such that (we are simply saying that events pushing the populations to negative values do not exist). Since this should hold for arbitrary , we have that .
Assume that . Then, for . Letting we realise that regardless of the values of , and hence it holds that for and therefore (no sum). This proves the second statement. If we have that also for . Hence as well; and the event can be ignored since it does not influence the dynamics.

3. Kolmogorov Forward Equation

Let us recast the dynamical equation given by Theorem 6 in terms of the generating function [39] in event space, defined as . For the case of linear rates, we define . As a consequence, . Using Theorem 6 to rewrite and further using (a matter of replacing in the definition of ), we obtain . Since the generating function reflects a probability distribution, we have the probability condition . Further, the solutions of (10) can be uniquely translated into those given by Theorem 6 for corresponding initial conditions: corresponding to , where the integers denote how many events of each class had already occurred at . The natural initial condition becomes , corresponding to .

3.1. The Method of Characteristics Revisited

Lemma 8. Equation (10) with and with the initial condition admits the solution where is the flow of the following ODE with initial condition , :

Proof. It is convenient to rename as , introduce , and recall that depends only on the extended variable array . However, when necessary, we will write explicitly as . We have that . Further, consider the monodromy matrix [40] , which transforms the initial velocity into the final velocity; hence . Thus, we get . From the definition of the monodromy matrix above it follows that , and by (11) and the chain rule,

Lemma 9. Equation (10) with and with the initial condition admits the solution where is a solution of the homogeneous problem with and

Proof. The proof is nothing more than an application of the method of the variation of the constants. The details are as follows.
Introducing the proposed solution into (10) and using Lemma 8 to eliminate we obtain that satisfies (10) with initial condition . Computing directly from (20) we have that after retracing our steps from the previous Lemma.

The result presented in Lemma 9 was constructed using the method of variation of constants, which provides an alternative (constructive) proof.

3.2. Basic Properties of the Solution

Definition 10. A nonzero vector such that , for , is called a structural zero of . Other zero eigenvalues of are called nonstructural.

Lemma 11. (a) There are structural zeroes if and only if . (b) The only structurally stable zero eigenvalues of the matrix are the structural zeroes.

Proof. (a) The “if” part is immediate since . The “only if” part follows from the assumption of independent population increments (and therefore independent populations). For (b) put . If for nonzero and , then . This relation is not structurally stable since by adding arbitrarily small to all diagonal elements of we can assure for this modified and for any .

Lemma 12 (first integrals of motion for the characteristic equations). Let be a structural zero of ; then is a constant of motion for the characteristic equations (12).

Proof. Rewrite the first equation of (12) as (indices renamed) . Then, . Since , the expression in the lemma is the exponential of the integral obtained from

Given relationship (1) between initial conditions, events, and populations, a corresponding generating function in population space can be computed from the probability generating function in event space, namely, , letting , be shorthand for , .

Lemma 13 (projection lemma). Equation realises the transformation from event space to population space. Moreover,

Proof. Taking logarithms on and , it is clear that whenever there exist structural zeroes the transformation is noninvertible. In this sense we call it a projection. Concerning the statement, we have . The statement is proved using (1) and rearranging the sums as , where the prime denotes a sum over such that .

Corollary 14. The integrals of motion of Lemma 12 project under the projection Lemma to .

Proof. The above substitution operated on gives

3.3. Reduced Equation

Assumption 15. There exist integers , such that (cf. (9))

This assumption amounts to considering that there exists a way to mimic the initial condition on the rates with the same matrix involved in the time evolution.

Theorem 16. Under Assumptions 2, 4, and 15, equation (10) with natural initial condition has the solution where in terms of the solutions of (12) of Lemma 8. Hence, satisfies (after eliminating the auxiliary time )

Proof. The differences between and are (a) factoring out the initial condition and (b) rearranging the time arguments since the argument in is natural to the ad hoc autonomous ODE, (12), while the argument in is natural to the (nonautonomous) PDE, (10).
The proof is just a matter of rewriting the previous equations under the present assumptions. Because of the natural initial condition, the desired solution to (10) is given by (20) in Lemma 9, which can be rewritten using Assumption 15 as (we use as shorthand for ) . Recalling that we defined , the parenthesis can be recognised as the formal solution of (34) integrated from to .

Corollary 17. For a structural zero of Lemma 12, the generating function is unmodified upon the substitution ().

Remark 18. Having structural zeroes, it is a matter of choice to address the problem (a) using the variables and imposing the additional constraints given by the structural zeroes or (b) shifting to population coordinates, where these constraints are automatically incorporated. Because of Assumption 2 and Lemma 12, the transformation mapping one description onto the other corresponds to However, option (b) proves to be more efficient when it comes to dealing with approximate solutions (see Section 5).

4. Examples of Exact Solutions

4.1. Example I: Pure Death Process

In the case , , we have , , and ( is the initial population value). Equation (34) reads , to be integrated in , with . Hence, for constant death rate we obtain and
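As a quick numerical check of this example: under a constant per-capita death rate, each of the initial individuals survives to time t independently with probability exp(-μt), so the population at time t is binomially distributed (cf. Remark 19). The sketch below, with illustrative parameter values, samples the survival indicators directly rather than simulating death times.

```python
import math
import random

# Numerical check of the pure death example (illustrative parameters):
# with constant per-capita death rate mu, each of the n0 initial
# individuals survives to time t independently with probability
# p = exp(-mu * t), so the population at t is Binomial(n0, p).

def sample_pure_death(n0, mu, t, rng):
    """Count survivors at time t by sampling each individual."""
    p = math.exp(-mu * t)
    return sum(rng.random() < p for _ in range(n0))

rng = random.Random(1)
n0, mu, t = 1000, 0.3, 2.0
mean = sum(sample_pure_death(n0, mu, t, rng) for _ in range(200)) / 200
# mean should be close to n0 * exp(-mu * t) ≈ 548.8
```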

4.2. Example II: Linear Birth and Death Process

We have , , and , while denotes the birth rate and denotes the death rate. is the initial population. There exists one structural zero, with vector , while the corresponding constant of motion results in the relations and . The differential equation (34) for (to be integrated in ) becomes with solution (for which will be the case after projection and assuming time-independent birth rates) where . Finally, which after projecting in population space using Lemma 13 and Corollary 14 gives (cf. [41])

Remark 19. A pure death linear process corresponds to a binomial distribution, a pure birth process corresponds to a negative binomial, and a birth-death process corresponds to a combination of both as independent processes.

5. Consistent Approximations

When (34) of Theorem 16 cannot be solved exactly, we have to rely on approximate solutions. It is desirable to work with consistent approximations, namely, those where basic properties of the problem are fulfilled exactly rather than “up to an error of size ”. For example, since our solutions should be probabilities, we want the coefficients of to be always nonnegative and to sum to one. Ideally, we want approximations to be constrained to satisfy Lemma 12 and to preserve positive invariance in population space (no jumps into negative populations). Whenever consistency is not automatic, it is mandatory to analyse the approximate solution before jumping to conclusions.

5.1. Poisson Approximation

Let us replace in the RHS of (34) (this amounts to proposing that is constant and equal to its initial condition). The solution of the modified equation will be a good approximation to the exact solution for sufficiently short times, such that is not significantly different from one.

For constant rates the RHS of (34) does not depend on time. The approximate solution reads and

In other words, the generating function is approximated as a product of individual Poisson generating functions.

This approximation has an error of . Simple as it looks, this approach gives a good probability generating function that respects structural zeroes, but it does not guarantee positive invariance: a Poisson jump could throw the system into negative populations. However, the probability of such a jump is also . Fortunately, there is a natural way to impose positive invariance and also a natural way to improve the error up to in the broader framework of general transition rates [6]. In this way, the approximation can be safely used, with error control, to address problems that resist exact computation one way or the other.
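A minimal sketch of one Poisson step under the scheme above, assuming time-independent rates (all names are illustrative): each event count over a short step is drawn as a Poisson deviate with mean given by the frozen rate times the step size. As discussed, nothing prevents a drawn jump from pushing a population negative; the probability of such a jump is of the same order as the method error.

```python
import math
import random

# Poisson short-time step (illustrative names): over a small step dt,
# each event count is drawn as Poisson(w_alpha(n) * dt) with the rates
# frozen at the current state.

def poisson_deviate(lam, rng):
    """Knuth's multiplication method; adequate for small lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_step(n, rates, delta, dt, rng=random):
    counts = [poisson_deviate(r(n) * dt, rng) for r in rates]
    new_n = [ni + sum(delta[i][a] * c for a, c in enumerate(counts))
             for i, ni in enumerate(n)]
    return counts, new_n
```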

5.2. Consistent Multinomial Approximations

A systematic approach for obtaining a chain of approximations of increasing accuracy to (34) can be devised via Picard iterations. Firstly, it is convenient to recast the problem in population coordinates (cf. Remark 18), so that the structural zeroes are automatically taken into account, regardless of other properties of the adopted approximation. From we obtain , and (34) can be recast as . Because of the linear independence of population increments (and hence of populations), Lemma 3(b), the counterpart of (34) in population coordinates reads

Remark 20. In the case of linearly dependent population increments, (34) still holds, while it is possible to elaborate a more general form in terms of the instead of (43). This is, however, outside the scope of this paper.

Since , we obtain , which is, so far, an exact result. Let us produce some approximate values for the ’s, and correspondingly for .

To avoid further notational complications, we will assume in the sequel that is time independent. Hence, will depend only on the time difference . Equation (43) can be recast as now to be integrated along the interval . We will even set in what follows.

Remark 21. For the case of time-independent proportionality factors , a corresponding modification can be introduced in (34), namely, overall change of sign, integration in , and setting to reduce the notational burden.

Equations (44) satisfy the hypotheses of the Picard-Lindelöf theorem, since the RHS is a Lipschitz function. Then the Picard iteration scheme converges to the (unique) solution. Taking advantage of the initial condition, the th order truncated version of the Picard iterations reads where denotes the Maclaurin polynomial of order in .

Lemma 22. For , being sufficiently small, and for each order of approximation , in (44) is a polynomial generating function for one individual of the population; that is, it has the following properties:(a) is a polynomial of degree in , where the coefficient of is , with ;(b);(c);(d)the coefficients of regarded as a polynomial in are nonnegative functions of time.

Proof. (a) The statement holds for and . Assuming that it holds up to order , we have that is also a polynomial of degree in with time-dependent coefficients of type , since each power of carries along a power in (as well as lower -powers). A Picard iteration step gives . The first sum contributes one time integration and one -power, which exactly preserves the polynomial structure and the order of the coefficients. The result is a polynomial of degree in , where the coefficients are polynomials of highest degree at most in , each one of lowest order (as in the statement). The second integral does not alter these properties, since it again generates a polynomial of degree in , with time-dependent coefficients of order or higher, up to (they are just the coefficients of integrated once in time).
(b) Letting , Picard iteration gives trivially to all orders, again by induction.
(c) The statement holds for . Assuming it holds up to , we compute . Both expressions in the first integral have the same Maclaurin polynomial up to order . Hence, the difference in the first integral contains only terms of order . This is also the case for the second integral. After integration, we obtain .
(d) The statement holds for and and in general for the zero-order coefficient, because it is unity for , as a consequence of the initial condition. For the higher-order coefficients, we use induction again. If all -coefficients in are nonnegative, then the same holds automatically for up to order , since the contribution adding to each inherited coefficient from the previous step does not alter its sign, for sufficiently small. It remains to show that the coefficients corresponding to are nonnegative. These contributions arise from the th order of after time integration. If all are positive, then the expression above is a product of -polynomials with nonnegative coefficients and the statement follows. Suppose that some . Then, by Lemma 7, for . Hence, . However, for ; that is, they are nonnegative, and hence the expression has only nonnegative coefficients.

5.2.1. Basic Properties of the First- and Second-Order Approximations

Let us analyse the generating function . The simplest nontrivial applications of the approximation scheme are the first- and second-order approximations. Expanding explicitly up to second-order, we get

We begin by noticing that the first-order approximation is obtained by neglecting all terms in in the above equation. Whenever is linear in , which is always the case for the first-order approximation, the resulting generating function corresponds to the product of (independent) multinomial distributions for the events affecting each subpopulation.
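The first-order approximation admits a direct per-individual sampling interpretation: over a short step dt, each individual of a given subpopulation independently undergoes one of the events affecting it, with probability proportional to the corresponding rate coefficient times dt, or does nothing. The following sketch uses illustrative names only (lam[a][i] stands for the proportionality coefficient of event a with respect to subpopulation i):

```python
import random

# First-order multinomial step (illustrative names): over a short step
# dt, each individual of subpopulation i independently undergoes event
# a with probability lam[a][i] * dt, or nothing with the complementary
# probability.

def multinomial_step(n, lam, dt, rng=random):
    """Draw per-individual event choices; dt must be small enough that
    the per-individual probabilities sum to at most one."""
    counts = [0] * len(lam)
    for i, ni in enumerate(n):
        probs = [lam[a][i] * dt for a in range(len(lam))]
        assert sum(probs) <= 1.0, "time step too large"
        for _ in range(ni):
            u, acc = rng.random(), 0.0
            for a, pa in enumerate(probs):
                acc += pa
                if u < acc:
                    counts[a] += 1
                    break
    return counts
```

By construction each individual produces at most one event per step, so this sampling cannot jump into negative populations, in contrast with the Poisson approximation.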

Let us now consider computed with the second-order approximate ’s to perform one simulation step of size .

Counting powers of on each , we have (a) the probability of no events occurring in the time-step ; (b) the probability of only one () event occurring in the time-step ; and (c) the probability of another event () occurring after the first one:

5.2.2. Simulation Algorithm

A simulation step can be operated in the following way. Let be the probability of one () event occurring alone or followed by a second event. It follows that . The second factor above is the conditional probability of the second event being , given that there was a first () event.

The algorithm proceeds in two steps. First, a multinomial deviate is computed with probabilities (the probability of no event occurring is still ). This gives the number of occurrences of each event. In the second step, for each we compute the deviates for the occurrence of a second event or no further events, with the multinomial given by the array of probabilities defined above (the probability of no second event occurring is here ).

6. Implementation: Developmental Cascades without Mortality

In this and the following sections we apply the tools developed up to now to the description of a subprocess in insect development, namely, the development of immature larvae, which is, in principle, a highly individual subprocess. We assume that the evolution of an insect from egg to pupa occurs via a sequence of maturation stages. Biologists recognise some substages by inspection in the development of larvae (instars) and even more stages when observed by other methods [35]. The immature individual may die along the process or otherwise complete it and exit as a pupa or adult. The actual number of apparent maturation stages will depend on environmental and experimental conditions and will ultimately be specified when analysing experimental data in Section 7.1.

Empirical evidence based on existing and new experimental results as well as biological insight produced by the present description will be the subject of an independent work [42]. We discuss however a couple of examples from the literature in Section 7.1.

The events for this process are maturation events, promoting individuals from one subpopulation to the next, and death events for all subpopulations. In other words, each subpopulation represents a given (intermediate) level of development, and maturation events correspond to progress in the development. Development at level promotes one individual to level and the event rate is linear in the -subpopulation. Hence, in , the matrix is square and diagonal, for (we have exactly subpopulations, counted from to ). All of the environmental influence is, at this point, incorporated through .

Moreover, level is the matured pupa and we may set since at pupation stage the present mechanism ends and new processes take place. Subpopulation is the exit point of the dynamics and we assume it to be determined by the stochastic evolution of the immature stages .

In the same spirit, the action of each event on the populations modifies just the current population and the next one in the chain. Hence, . Following Section 3, and . Finally, letting , we can write . In this framework the index pair is highly interconnected and we can achieve a complete description using just one index. We use throughout this section.

The procedure of Section 3 gives , while the dynamical system for reads, recalling the recasting in Remark 21, and . The overall initial condition is . Let and . Note that . Rewriting the equations, with initial condition for all . With this notation the generating function reads , and we hence formulate an ODE for the ’s: By defining the auxiliary quantity , the solution for the -differential equations can be written in recursive form: In Appendix B we give the details of the explicit solution of this problem in terms of Laplace transforms. Finally, we obtain

The solution looks more compact when all rates are assumed to take a common value (see (64)).
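When all characteristic times coincide, each individual's stage at time t is governed by the number of maturation events it has undergone, which is Poisson distributed with mean βt, truncated at the absorbing stage. A sketch checking this against direct simulation (parameter values are illustrative):

```python
import math
import random

def stage_probs_equal_rates(beta, t, K):
    """Occupancy probabilities for one individual started at stage 0 of a
    cascade with common maturation rate beta: the number of maturation
    events by time t is Poisson(beta*t), truncated at the absorbing stage K."""
    p = [math.exp(-beta * t) * (beta * t) ** k / math.factorial(k)
         for k in range(K)]
    p.append(1.0 - sum(p))  # absorbed at stage K: the Poisson tail
    return p

def sample_stage(beta, t, K, rng):
    """Monte Carlo: stage reached by a single individual at time t."""
    stage, clock = 0, 0.0
    while stage < K:
        clock += rng.expovariate(beta)   # exponential stage duration
        if clock > t:
            break
        stage += 1
    return stage

rng = random.Random(7)
beta, t, K, n = 1.0, 2.0, 5, 100_000
counts = [0] * (K + 1)
for _ in range(n):
    counts[sample_stage(beta, t, K, rng)] += 1
empirical = [c / n for c in counts]
theory = stage_probs_equal_rates(beta, t, K)
print(max(abs(a - b) for a, b in zip(empirical, theory)))
```

The maximum deviation between the simulated and the closed-form occupancies shrinks with the sample size, as expected for independent, identically evolving individuals.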

7. Developmental Cascades with Mortality

We add to the previous description subpopulation-specific death events, one per immature subpopulation. We do not consider death of the pupa as an event, for the same reasons as before: the pupa subpopulation leaves the present framework of study. We actually have an event pair (maturation and death) for each subpopulation, and hence the analysis can still be done with just one index, since each event is population specific. The calculations, however, get more involved. It is practical to organise the events in maturation-death pairs and to order the characteristic variables correspondingly; for notational convenience we will assume that the boundary variables exist when necessary. Recalling again Remark 21, the dynamical system for the characteristics follows. The overall initial condition is still the same as before.

Since there are more events than subpopulations, according to Definition 1 and Corollary 14 the matrix has structural zeroes corresponding to constants of motion in the associated dynamical system. The zero eigenvectors are nonzero precisely in the positions associated with each maturation-death pair (for the last pair, recall that there is no subsequent position). Definitions similar to those used in the previous section will be useful as well, changing the rates and the initial conditions correspondingly. The structural zeroes and constants of motion allow solving all equations related to the death variables, and only a chain of equations for the maturation variables is left, very much in the spirit of the previous section. We now compute the generating function.

Defining an auxiliary variable as before, the dynamical equations for the remaining characteristics can be written in recursive form (70). In Appendix B we give the details of the explicit solution of this problem in terms of Laplace transforms, finally obtaining (71). Specialising to the case where all maturation and death coefficients are equal, the solution simplifies considerably.
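A quantity of immediate interest in this setting is the probability that an individual survives all the way to pupation. Since at each stage maturation and death are competing exponential clocks, the stage-by-stage success probabilities multiply. A minimal sketch with illustrative rates:

```python
import random

def pupation_probability(lam, mu):
    """Probability that one individual completes the full cascade: at each
    stage, maturation (rate lam[k]) competes with death (rate mu[k]), and
    maturation wins the exponential race with probability lam[k]/(lam[k]+mu[k])."""
    p = 1.0
    for l, m in zip(lam, mu):
        p *= l / (l + m)
    return p

def simulate_individual(lam, mu, rng):
    """Simulate the competing exponential clocks stage by stage."""
    for l, m in zip(lam, mu):
        if rng.expovariate(l) >= rng.expovariate(m):
            return False          # death fires first at this stage
    return True                   # reached pupation

rng = random.Random(3)
lam = [1.0, 0.8, 1.2]   # illustrative maturation rates
mu = [0.1, 0.2, 0.1]    # illustrative death rates
n = 100_000
empirical = sum(simulate_individual(lam, mu, rng) for _ in range(n)) / n
print(empirical, pupation_probability(lam, mu))
```

This closed-form survival probability is what the full generating-function solution reduces to when one asks only whether an individual exits through pupation or through death.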

7.1. Examples from the Literature

In [19], Section 3, the drug delivery process by oral ingestion of a medicine consisting of microspheres containing the active principle is discussed. The process is modeled considering that microspheres enter the stomach and either dissolve (passing subsequently to the circulatory system) or proceed to the next stage of the GI tract, namely, the duodenum. Both processes are assumed to obey linear transition rates. In the same way, the undissolved duodenal microspheres pass to the next intestinal stage, the jejunum, and later to the ileum. The remaining undissolved microspheres entering the colon are eliminated without further dissolution. This is a neat example of a cascade, where dissolution corresponds to mortality in (71), while maturation corresponds to progress through the different stages. Remaining particles exit the system upon arrival to the fifth and last stage, the colon. Using the data listed on page 239 of [19] for the dissolution and passage coefficients, the results of that paper can be directly read from (71). The probabilities of staying at, or dissolving in, a given stage as functions of time appear as coefficients of the generating function. As an example, we compute the probabilities of permanence at a given stage as a function of time, using the passage and dissolution coefficients from page 239 in [19]: defining the per-stage exit rates and the corresponding auxiliary coefficients as in (71), the permanence probabilities follow directly.
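The mean-field (single-particle probability) version of this cascade is easy to integrate numerically and makes the bookkeeping explicit. The sketch below uses illustrative rates rather than the fitted values of [19], and checks that probability mass is conserved between the stages, the dissolved fraction, and the fraction exiting the colon:

```python
def integrate_chain(lam, mu, t_end, dt=1e-3):
    """Forward-Euler integration of the per-particle probability equations
    of a cascade with per-stage passage rates lam[i] and dissolution rates
    mu[i]:
        dp_i/dt = lam[i-1] * p_{i-1} - (lam[i] + mu[i]) * p_i
    Returns stage probabilities plus the accumulated dissolved and exited mass."""
    n = len(lam)
    p = [1.0] + [0.0] * (n - 1)   # all particles start in the first stage
    dissolved, exited = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        inflow = [0.0] + [lam[i] * p[i] for i in range(n - 1)]
        dissolved += dt * sum(mu[i] * p[i] for i in range(n))
        exited += dt * lam[n - 1] * p[n - 1]   # leaving the last stage
        p = [p[i] + dt * (inflow[i] - (lam[i] + mu[i]) * p[i])
             for i in range(n)]
    return p, dissolved, exited

# Illustrative rates (not the fitted values of [19]): four GI stages.
lam = [1.5, 1.0, 0.9, 0.8]   # passage: stomach -> duodenum -> jejunum -> ileum -> out
mu = [0.6, 0.5, 0.4, 0.3]    # dissolution rate per stage
p, dissolved, exited = integrate_chain(lam, mu, t_end=10.0)
print(sum(p) + dissolved + exited)  # probability is conserved (≈ 1)
```

Because each particle evolves independently, these single-particle probabilities multiply up to the multinomial population statistics given by (71).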

8. Concluding Remarks

The description of stochastic population dynamics at the event level has several advantages with respect to the description at the population level. While the projection onto population space is straightforward, the description of the number of events carries biological information that can be lost in the projection. From a practical, mathematical point of view, the description in terms of events is also more regular and makes room for a general discussion, since each event count can only increase, by exactly one at each occurrence.

When processes acting on individuals of given populations are considered, the event rates become linear; in addition, if the result of the event is to decrease one (sub)population, then the rate can only be proportional to the decreased population.

The probability generating function for the number of events, conditioned on an initial state described by population values, has been obtained as a function of the solutions of a set of ODEs known as the equations of the characteristics, presented here in a form oriented towards numerical calculation rather than the usual presentation oriented towards geometrical methods.

We have introduced short-time approximations that, to the lowest order, coincide with previously considered heuristic proposals. Our presentation is systematic and can be carried out up to any desired order. In particular, we have shown how the second-order approximation can be implemented.

The general solution represents independent individuals identically distributed within their (sub)population compartment.

Additionally, we believe that obtaining general solutions of the linear problem is a sound starting point when the more complicated approximations for nonlinear rates are considered.

Further, we have solved a stochastic individual developmental model, that is, a model that at the population level is described by rates that are linear in the (sub)populations. The model describes not just the populations but also the events that change them.

Appendices

A. Proofs of Traditional Results

The original versions of these proofs (in slightly different flavours) can be found on pages 428-429 (equations (48) and (50)) of [28] and as equations (29) and (30) on page 498 of [29].

A.1. Proof of Lemma 5

Proof. We assume the population and the number of events that occurred up to time t to be known. Let Δt be sufficiently small. It is clear that, if no events occur during the time interval [t, t + Δt], then the populations and event counts remain unchanged.
For this interval the Chapman-Kolmogorov equation implies a relation for the probability that no event occurs. Note that, since the population is constant along the time interval under consideration, the rates are well defined. Hence a linear differential equation follows and, given the natural initial condition, its solution is exponential in the elapsed time, with the total event rate as decay constant.
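The lemma can be checked numerically: when each event type runs an independent exponential clock, the waiting time to the first event (the minimum of the clocks) is again exponential, with the summed rate. A minimal sketch with illustrative rates:

```python
import random

def waiting_time_via_minimum(rates, rng):
    """Waiting time to the first event when each event type runs its own
    independent exponential clock: the realised time is the minimum."""
    return min(rng.expovariate(r) for r in rates)

rng = random.Random(42)
rates = [0.7, 1.3]            # two competing event types (illustrative)
total_rate = sum(rates)       # the lemma predicts Exp(total_rate)

n = 200_000
mean_wait = sum(waiting_time_via_minimum(rates, rng) for _ in range(n)) / n
print(mean_wait, 1.0 / total_rate)  # empirical vs. theoretical mean
```

The empirical mean of the minimum matches 1/(sum of rates), consistent with the exponential no-event probability derived above.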

A.2. Proof of Theorem 6

Proof. Again, the Chapman-Kolmogorov equation applies. Repeating the previous procedure and taking the limit Δt → 0, we finally obtain the Kolmogorov Forward Equation.

B. Solution via Laplace Transform

After multiplying (62) by the Heaviside function, we realise that the recursion involves a convolution. Hence, it is practical to use Laplace transforms. Indeed, after transforming, the equation can be rewritten as an algebraic recursion, since the convolution product becomes an ordinary product. After some manipulation, the recursion is solved by induction, and the inverse Laplace transform leads us back to (63). Note as well that when all rates are taken to be equal, the corresponding expression still gives the Laplace transform of the desired solution, while the inverse transform involves integrals related to the Gamma function (see (64)).
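The key point, that the recursion couples successive stages through a convolution which Laplace transforms turn into an ordinary product, can be illustrated on the simplest case of two exponential densities (rates are illustrative):

```python
import math

def convolve_exponentials(a, b, t, n=20_000):
    """Numerically convolve two exponential densities a*exp(-a*s) and
    b*exp(-b*s) on [0, t] using the trapezoidal rule."""
    ds = t / n
    total = 0.0
    for i in range(n + 1):
        s = i * ds
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal endpoint weights
        total += w * a * math.exp(-a * s) * b * math.exp(-b * (t - s))
    return total * ds

# The Laplace transform of each density is r/(s + r); their product
# a*b/((s+a)(s+b)) inverts to a*b/(b-a) * (exp(-a*t) - exp(-b*t)).
a, b, t = 1.0, 2.0, 1.5
closed_form = a * b / (b - a) * (math.exp(-a * t) - math.exp(-b * t))
print(convolve_exponentials(a, b, t), closed_form)
```

The same mechanism, iterated stage by stage, is what produces the products of simple poles inverted in this appendix.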

The case with mortality is similar, although slightly more complicated. After Laplace transforming as above, equation (70) can be solved recursively, and the inverse Laplace transform again leads to (71).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank Jörg Schmeling and Victor Ufnarovski for a valuable suggestion. Hernán G. Solari acknowledges support from Universidad de Buenos Aires under Grant 20020100100734. Mario A. Natiello acknowledges grants from Vetenskapsrådet and Kungliga Fysiografiska Sällskapet.