Abstract

We consider the dynamics of a stochastic cobweb model with linear demand and a backward-bending supply curve. In our model, both forward-looking and backward-looking expectations are present: we assume that the representative agent chooses the backward predictor with probability $q$ and the forward predictor with probability $1-q$, so that the expected price at time $t$ is a random variable and, consequently, the dynamics describing the price evolution in time is governed by a stochastic dynamical system. The dynamical system becomes a Markov process when the memory rate vanishes. In particular, we study the resulting Markov chain in the cases of discrete and continuous time. Using a mixture of analytical tools and numerical methods, we show that, when prices take discrete values, the corresponding Markov chain is asymptotically stable. In the case with continuous prices and a not necessarily zero memory rate, numerical evidence of bounded price oscillations is provided. The role of the memory rate is studied through numerical experiments; this study confirms the stabilizing effect of the presence of resistant memory.

1. Introduction

The cobweb model is a dynamical system that describes price fluctuations as a result of the interaction between the demand function, which depends on the current price, and the supply function, which depends on the expected price.

A classic definition of the cobweb model is the one given by Ezekiel [1], who proposed a linear model with deterministic static expectations. The least convincing elements of this initial formulation are the linearity of the functions describing the market and the simplicity of the expectations. For these reasons, several attempts have been made over time to improve the original model. In a number of papers, nonlinearity has been introduced in the cobweb model (see Holmes and Manning [2]), while other authors have considered different kinds of price expectations (see, among others, Nerlove [3], Chiarella [4], Hommes [5], Gallas and Nusse [6]). In Balasko and Royer [7], Bischi and Naimzada [8], and Mammana and Michetti [9, 10] an infinite memory learning mechanism has been introduced in the nonlinear cobweb model. A more sophisticated cobweb model is the one proposed by Brock and Hommes [11], where heterogeneity is introduced via the assumption that agents have different beliefs, that is, rational and naive expectations. The authors assume that different types of agents have different beliefs about future variables and provide an important contribution to the literature on heterogeneity.

Research into the cobweb model has a long history, but all the previous papers have studied deterministic cobweb models. The dynamics of the cobweb model with a stochastic mechanism has not yet been studied. In this paper, we consider a stochastic nonlinear cobweb model that generalizes the model of Jensen and Urban [12], assuming that the representative entrepreneur chooses between two different predictors in order to formulate price expectations:

(1) backward predictor: the expectation of the future price is the weighted mean of past observations, with decreasing weights given by a (normalized) geometric progression of parameter $\rho$ ($0 \le \rho < 1$), called the memory rate (see Balasko and Royer [7]);
(2) forward predictor: the formation mechanism of this expectation takes into account the market equilibrium price and assumes that, in the long run, the current price will converge to it.

Our study tries to answer the criticism of economists regarding the total lack of rationality in the expectations introduced in dynamical price-quantity models. In fact, we assume that agents are aware of the market equilibrium price, and therefore we associate forward-looking expectations with backward-looking ones.

At each time, the representative entrepreneur chooses the backward predictor with probability $q$ and the forward predictor with probability $1-q$. This corresponds to introducing heterogeneity in beliefs: in fact, we are assuming that, on average, a fraction $q$ of the agents uses the first predictor, while a fraction $1-q$ of the agents chooses the second one.

In recent years, several models in which markets are populated by heterogeneous agents have been proposed as an alternative to the traditional approach in economics and finance, based on a representative (and rational) agent. Kirman [13] argues that heterogeneity plays an important role in economic models and summarizes some of the reasons why the assumption of heterogeneous agents should be considered. Nevertheless, it is obvious that heterogeneity implies a shift from simple, analytically tractable models with a representative, rational agent to a more complicated framework, so that a computational approach becomes necessary.

The present work represents a contribution to this line of research: as in Brock and Hommes [11], we assume different groups of agents, even if no switching between groups is possible. Moreover, our case can be related to the deterministic limit case studied in Brock and Hommes [11], in which the intensity of choice of agents goes to zero and agents are equally shared between the two groups. The new element with respect to such a limit case is that we admit random changes of the fractions of agents around the mean.

Moreover, even though our assumption amounts to considering (on average) fixed proportions of agents over time, the fraction of agents employing trade rules based on past prices increases as $q$ increases. In fact, the parameter $q$ can be understood as a sort of external signal of market price fluctuations. More specifically, increasing values of $q$ correspond to greater irregularity of the market. In our framework, this means that, for high values of $q$, a greater fraction of agents expects that the price will follow the trend implied by previous prices instead of moving toward its fundamental value, and these agents will prefer to use trading rules based on past observed events.

In the model proposed here, the time evolution of the expected price is described by a stochastic dynamical system. (Recent works in this direction are those by Evans and Honkapohja [14] and Branch and McGough [15].) More precisely, since for simplicity we start by considering a discrete time dynamical system, the expected price is a discrete time stochastic process; in particular, the expected price $p_t^e$ is a random variable at any time $t$.

We note that the successful development of ad hoc stochastic cobweb models describing the time evolution of commodity prices would make it possible to use these models to describe fluctuations in the prices of derivatives having the commodities as underlying assets. The stochastic cobweb model presented here can be considered a first step in the study of this more general class of models.

The paper is organized as follows. In Section 2, we formulate the model in its general form. In Section 3, we consider the case where the memory rate is equal to zero, that is, the case with naive versus forward-looking expectations, so that the agent remembers only the last observed price. In this case we proceed as follows. First, we approximate the initial model with a new one having discrete states; consequently, we obtain a finite-state stochastic process without memory, that is, a Markov chain. We determine the probability distribution of the random variable solving the Markov chain and, using a mixture of analytical tools and numerical methods, we show that its asymptotic behavior depends on the parameter of the logistic equation describing the price evolution, which we call $\lambda$. Second, we extend the analysis to the corresponding continuous time Markov process and obtain the Chapman-Kolmogorov forward equations. In Section 4, we propose an empirical study of the initial model (i.e., the model with continuous states); that is, we compute the appropriate statistics of a sample of trajectories of the model generated numerically. In particular, we obtain the probability density function of the random variable describing the expected price as a function of time, and we study these densities in the limit when time goes to infinity. Numerical evidence of bounded price oscillations is shown, and the role of the memory rate, that is, the role of backward-looking expectations, is considered. The results obtained for the stochastic cobweb model confirm that the system becomes less and less complex as the memory rate increases; this behaviour is similar to the one observed in the deterministic cobweb model (see Mammana and Michetti [10]). Note that the model considered in Section 4 is not a Markov chain.

2. The Model

We consider a cobweb-type model with linear demand and a backward-bending supply curve (i.e., a concave parabola). (This formulation for the supply function was proposed in Jensen and Urban [12].) A supply curve of this type is economically justified, for instance, by the presence of external economies, that is, by the advantages that businesses gain not from their individual expansion but from the expansion of the industry as a whole (see Sraffa [16]).

According to the previous considerations, demand and supply are given by
$$D(p_t) = a - b\,p_t, \qquad S(p_t^e) = p_t^e\,(c - d\,p_t^e), \tag{2.1}$$
where $D(p_t)$, $S(p_t^e)$ are, respectively, the amount of goods demanded by consumers and the amount of goods supplied by the entrepreneurs at time $t$, and $a$, $b$, $c$, $d$ are real constants such that $a \ge 0$, $b > 0$, and $c, d > 0$. In our model $p_t$, $p_t^e$ are, respectively, the price of the goods at time $t$ and the expectation, formed at time $t-1$, of the price at time $t$.

The market clearing equation $D(p_t) = S(p_t^e)$ gives a quadratic and convex relationship between $p_t$ and $p_t^e$, that is,
$$p_t = \frac{a - c\,p_t^e + d\,(p_t^e)^2}{b}, \tag{2.2}$$
which can be rearranged to obtain the well-known logistic map (the logistic map can be obtained from (2.2) by a linear transformation in the price variable):
$$p_t = f(p_t^e) = \lambda\, p_t^e\,(1 - p_t^e). \tag{2.3}$$
This last formulation is the same as the one reached by Jensen and Urban [12]. As a matter of fact, we will consider $\lambda \in (1, 4]$ and $p_t^e \in [0,1]$ for all $t$, where interesting dynamics occurs.
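As a minimal numerical sketch of (2.3) and of its positive fixed point, under the notation reconstructed above (the value of $\lambda$ is illustrative):

```python
def f(p_e: float, lam: float) -> float:
    """Price map (2.3): p_t = lam * p_e * (1 - p_e)."""
    return lam * p_e * (1.0 - p_e)

def equilibrium(lam: float) -> float:
    """Positive fixed point p* = (lam - 1) / lam of the map f."""
    return (lam - 1.0) / lam

lam = 2.5                                     # illustrative value in (1, 4]
p_star = equilibrium(lam)
assert abs(f(p_star, lam) - p_star) < 1e-12   # p* is indeed a fixed point
```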

The map $f$ is a function of expectations, so we now consider two different formulations for the expectation formation mechanism. Despite the fact that, in an uncertain context, economic agents do not have perfect forecasting ability and therefore need to apply some sort of extrapolative method to formulate hypotheses on the future level of prices, we assume that they are aware of the market equilibrium level.

2.1. Backward-Looking Expectation

We consider the backward-looking component to extract the expected future market price from the prices observed in the past through an infinite memory process (this iterative scheme is known as Mann iteration; see Mann [17]). The use of this type of learning mechanism is rather “natural” if the agent uses all the information available, that is, all the historical price values. In our model the expected price $p_{t+1}^e$ is the weighted mean of the previous values taken by the real variable $p$, measured with decreasing weights given by the terms of a (normalized) geometric progression of parameter $\rho$ ($0 \le \rho < 1$), called the memory rate:
$$p_{t+1}^e = \frac{1-\rho}{1-\rho^{\,t+1}} \sum_{k=0}^{t} \rho^{k}\, p_{t-k}. \tag{2.4}$$
Note that when $\rho = 0$, naive or static expectations are obtained, while as $\rho \to 1$, (2.4) becomes the arithmetic mean of past prices. This kind of expectations has been used by several authors, such as Balasko and Royer [7], Bischi and Naimzada [8], Invernizzi and Medio [18], and Aicardi and Invernizzi [19]. It has also been studied in Michetti [20] and in Mammana and Michetti [10], where the model studied here has been considered in the deterministic context. (Equation (2.3) with backward expectation as in (2.4) has an equivalent autonomous limit form describing the expected price dynamics, that is,
$$p_{t+1}^e = \rho\, p_t^e + (1-\rho)\, f(p_t^e),$$
which can be used to study the asymptotic behaviour of the sequence $\{p_t^e\}$ and consequently of the sequence $\{p_t\}$ through $p_t = f(p_t^e)$. See Aicardi and Invernizzi [19] and Bischi et al. [21].)
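A minimal sketch of the predictor (2.4), assuming the geometric-weight convention above (the most recent price receives the weight $\rho^0$):

```python
import numpy as np

def backward_predictor(prices: np.ndarray, rho: float) -> float:
    """Backward predictor (2.4): weighted mean of the observed prices
    p_0, ..., p_t with normalized geometric weights rho**k, where
    k = 0 corresponds to the most recent observation."""
    k = np.arange(len(prices))
    weights = rho ** k                 # rho**0 = 1 weights the latest price
    weights = weights / weights.sum()  # normalization of the progression
    return float(weights @ prices[::-1])

hist = np.array([0.30, 0.50, 0.40, 0.60])
assert backward_predictor(hist, rho=0.0) == hist[-1]          # naive case
assert np.isclose(backward_predictor(hist, rho=1 - 1e-12), hist.mean())
```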

2.2. Forward-Looking Expectation

We consider a forward-looking expectation creation mechanism. In order to introduce a more sophisticated predictor, we assume that the representative supplier of the goods knows the market equilibrium price, namely $p^*$, but at the same time knows that the process leading to the equilibrium is not instantaneous. In other words, we do not assume that prices come back to their equilibrium value instantaneously, as was assumed in Mammana and Michetti [10].

Our assumption is consistent with the conclusion reached in many dynamical cobweb models (see, e.g., Hommes [5] and Gallas and Nusse [6]), where it is proved that prices converge to the steady state, that is, to the equilibrium price, only in the long run. According to such considerations, we use the following equation to describe the forward-looking expectation:
$$p_{t+1}^e = \alpha\, p_t + (1-\alpha)\, p^*, \qquad \alpha \in (0,1), \tag{2.5}$$
where $p^* = (\lambda - 1)/\lambda$ is the long-run equilibrium price, that is, the positive fixed point of the map $f$ in (2.3). We note that $p^*$ is independent of the expectation mechanism introduced. According to (2.5), the agent expects a weighted mean of the last observed price and the equilibrium price. The economic intuition behind the choice made in (2.5) is the following: if $p_t > p^*$ ($p_t < p^*$), the agent expects that the price decreases (increases) toward its equilibrium value; in other words, the expected price moves in the direction of the equilibrium value.

2.3. Choosing between Expectations

We assume that the representative agent chooses between the two predictors (the backward and the forward predictor) as follows:
$$p_{t+1}^e = \begin{cases} \text{backward predictor (2.4)} & \text{with probability } q,\\ \text{forward predictor (2.5)} & \text{with probability } 1-q, \end{cases} \tag{2.6}$$
that is, $p_{t+1}^e$ is a random variable, and consequently the dynamical system (2.3) describing the price evolution in time is a stochastic dynamical system; in fact, the sequence of prices $\{p_t\}$ is a sequence of random variables, that is, a discrete time stochastic process obtained through the repeated application of $f$ to $p_t^e$ as shown in (2.3). Via (2.6), heterogeneity in beliefs is introduced; more specifically, (2.6) translates the assumption that, on average, a fraction $q$ of the agents is “backward-looking” while a fraction $1-q$ of the agents is “forward-looking.”
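A sketch of one step of the resulting stochastic dynamics follows; the forward weight `alpha` and all parameter values are illustrative assumptions, since the paper's actual values are not recoverable from the text:

```python
import numpy as np

def next_price(history, lam, q, rho, alpha, rng):
    """One step of the stochastic system (2.3) under the random choice
    (2.6): the backward predictor (2.4) is drawn with probability q,
    the forward predictor (2.5) with probability 1 - q."""
    history = np.asarray(history)
    p_star = (lam - 1.0) / lam               # equilibrium price
    if rng.random() < q:                     # backward-looking branch
        w = rho ** np.arange(len(history))   # geometric weights, latest first
        p_e = float(w @ history[::-1]) / w.sum()
    else:                                    # forward-looking branch
        p_e = alpha * history[-1] + (1.0 - alpha) * p_star
    return lam * p_e * (1.0 - p_e)           # price map (2.3)

rng = np.random.default_rng(seed=0)
prices = [0.3]
for _ in range(10):
    prices.append(next_price(prices, lam=2.5, q=0.5, rho=0.3,
                             alpha=0.5, rng=rng))
```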

In this paper, we study the stochastic process $\{p_t\}$ both from the theoretical and the numerical point of view (in this last case we use elementary numerical methods and statistical simulations). In particular, we want to describe the probability distribution of the discrete time stochastic process $\{p_t\}$ as a function of the parameters defining the model.

3. Naive versus Forward-Looking Expectations

3.1. The Discrete Time Stochastic Process

We consider the discrete time stochastic process $\{p_t\}$ with memory rate $\rho$ equal to zero; that is, static and forward-looking expectations are assumed. Lack of memory is the crucial assumption needed to obtain a price dynamics described by a Markov process.

Let $p$ be a possible value of the price; then our stochastic process evolves according to the conditional probability of going from state $p$ at time $t$ to state $p'$ at time $t+1$. Let this Markovian conditional probability be denoted by $\Pr(p_{t+1} = p' \mid p_t = p)$.

According to the Markov property, sometimes called the memoryless property, the state of the system at the future time $t+1$ depends only on the system state at time $t$ and does not depend on the states at earlier times. The Markov property can be stated as follows:
$$\Pr(p_{t+1} = x_{t+1} \mid p_t = x_t, p_{t-1} = x_{t-1}, \ldots, p_0 = x_0) = \Pr(p_{t+1} = x_{t+1} \mid p_t = x_t)$$
for all possible choices of the states $x_0, \ldots, x_{t+1}$ and of the time $t$. In order to obtain a Markov process given by a random variable which takes only discrete values at time $t$, we first discretize model (2.3), assuming
$$p_t = \widehat{f(p_t^e)}, \tag{3.1}$$
where $\widehat{\,\cdot\,}$ denotes rounding to the grid $\{0, 0.1, 0.2, \ldots, 1\}$, so that the price assumes the values $0, 0.1, 0.2, \ldots, 1$. More precisely, the rounded value $\hat p$ of a price $p \in [0,1]$ results from a discretization of the interval $[0,1]$ such that

(i) $\hat p = j/10$ if $p \ge j/10 - 0.05$ and if $p < j/10 + 0.05$, for $j = 1, \ldots, 9$, while
(ii) $\hat p = 0$ if $0 \le p < 0.05$, and $\hat p = 1$ if $p \ge 0.95$.

Finally, we associate each value of the price with a state,
$$p = j/10 \;\longleftrightarrow\; s_j, \qquad j = 0, 1, \ldots, 10, \tag{3.3}$$
so that we obtain the state set $S = \{s_0, s_1, \ldots, s_{10}\}$.
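A short sketch of the discretization (3.1)-(3.3), under the rounding convention assumed above:

```python
def discretize(p: float) -> float:
    """Round a price p in [0, 1] to the grid {0, 0.1, ..., 1},
    following the convention assumed in (3.1)."""
    j = min(int(10.0 * p + 0.5), 10)   # nearest tenth: cell [j/10-.05, j/10+.05)
    return j / 10.0

def state_index(p: float) -> int:
    """Index j of the state s_j associated with a price via (3.3)."""
    return min(int(10.0 * p + 0.5), 10)

assert discretize(0.94) == 0.9 and discretize(0.95) == 1.0
assert state_index(0.26) == 3
```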

Note that the corresponding process $\{X_n\}$ may be treated as a discrete-time Markov chain, whose state space is $S$ (we use a state space made of eleven states, numbered from $s_0$ to $s_{10}$, for simplicity). The transition matrix is given by $P = (p_{ij})$, $i, j = 0, 1, \ldots, 10$, where $p_{ij}$ is the transition probability of going from state $s_i$ to state $s_j$. We have $p_{ij} \ge 0$ and $\sum_{j} p_{ij} = 1$.

It is easy to see that the transition probabilities of model (2.3), (3.1), (3.3) depend only on the value of the current state $s_i$ and the value of the following state $s_j$, regardless of the time when the transition occurs; consequently, the Markov chain considered is homogeneous. The homogeneity property implies that the $n$-step state transition probability
$$p_{ij}^{(n)} = \Pr(X_{m+n} = s_j \mid X_m = s_i)$$
is also independent of $m$, and it can be defined recursively as
$$p_{ij}^{(n)} = \sum_{k} p_{ik}^{(n-1)}\, p_{kj},$$
so that the $n$-step transition matrix is given by
$$P^{(n)} = P^{n}. \tag{3.7}$$
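Under the assumptions made so far, the transition matrix can be assembled directly: each row receives the two probabilities $q$ and $1-q$, which merge into a single entry when both predictors lead to the same grid state (all parameter values below are illustrative):

```python
import numpy as np

def transition_matrix(lam: float, q: float, alpha: float) -> np.ndarray:
    """One-step transition matrix P of the discretized chain (2.3), (3.1),
    (3.3): from state s_i (price i/10) the naive expectation is drawn with
    probability q and the forward one with probability 1 - q; the resulting
    price is then rounded back to the grid."""
    p_star = (lam - 1.0) / lam
    P = np.zeros((11, 11))
    for i in range(11):
        p = i / 10.0
        for prob, p_e in ((q, p), (1.0 - q, alpha * p + (1.0 - alpha) * p_star)):
            p_next = lam * p_e * (1.0 - p_e)        # map (2.3)
            j = min(int(10.0 * p_next + 0.5), 10)   # rounding as in (3.1)
            P[i, j] += prob
    return P

P = transition_matrix(lam=1.2, q=0.5, alpha=0.5)
assert np.allclose(P.sum(axis=1), 1.0)              # stochastic matrix
Pn = np.linalg.matrix_power(P, 50)                  # P^(n) = P**n, cf. (3.7)
pi_n = np.eye(11)[3] @ Pn                           # pi(n) = pi(0) P^n
```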

We are interested in the probability that $X_n$ is in state $s_j$ at time $n$ when, at time $0$, the chain is in state $s_i$ with probability one.

Let the probability vector be denoted by $\pi(n) = (\pi_0(n), \pi_1(n), \ldots, \pi_{10}(n))$, where $\pi_j(n)$ is the probability that $X_n$ is in state $s_j$ after $n$ steps, $j = 0, 1, \ldots, 10$. The probability distribution can be obtained through
$$\pi(n) = \pi(0)\, P^{n}.$$
Note that $\sum_{j} \pi_j(n) = 1$ and that, when the chain starts from state $s_i$ with probability one, $\pi_j(0) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.

Recall that a subset $C \subseteq S$ is closed if $p_{ij} = 0$ for all $s_i \in C$ and all $s_j \notin C$, while $C$ is irreducible if, for all pairs $s_i, s_j \in C$, there exists a positive integer $n$ such that $p_{ij}^{(n)} > 0$. The subset $C$ is reducible if it is not irreducible. Finally, we recall the notions of recurrent and transient states. A state $s_i$ is called recurrent (transient) if the probability of ever returning to $s_i$, starting from $s_i$, equals one (is smaller than one). (Regarding these definitions see, e.g., Feller [22].)

We now assume $q \in (0, 1)$ and prove a general result about the Markov chain for some values of $\lambda$.

Proposition 3.1. For all $\lambda \in (1, 3.8)$, the transition matrix $P$ is reducible.

Proof. Consider the state $s_{10}$, associated with the price value $1$. If no state $s_i$ exists such that $p_{i,10} > 0$, then $P$ is reducible. The state $s_{10}$ can be reached only if $f(p^e) \ge 0.95$ for some admissible expected price $p^e \in [0,1]$; since $\max_{p^e \in [0,1]} f(p^e) = \lambda/4$, this last inequality has no solution if $\lambda < 3.8$, hence $s_{10}$ cannot be reached by another state. The proposition is proved.

We now fix the values of the parameters in (3.1), (3.3) in order to determine the transition matrix and to obtain the invariant distribution. In fact, some numerical insight will be helpful to draw general conclusions on the qualitative properties of the process considered.

In a first experiment, we assume a small value of $\lambda$ and an intermediate value of $q$; in this case we obtain a transition matrix whose nonzero entries can be computed directly from (2.3), (3.1), (3.3): each row of $P$ contains at most two nonzero entries, with values $q$, $1-q$, or their sum.

As previously proved, $P$ is reducible; moreover, by simply rearranging the order of the states in $S$, it is possible to see that the singleton made of the state associated with the discretized equilibrium price is the unique closed set of the chain, and hence that this state is an absorbing state. This result holds for other values of $\lambda$, as stated in the following proposition.

Proposition 3.2. Assume $\rho = 0$. For all $\lambda \in [\lambda_1, \lambda_2]$, where $\lambda_1$ and $\lambda_2$ are suitable bounds determined by the discretization (3.1)-(3.3), the state $s_{j^*}$ associated with the discretized equilibrium price $\widehat{p^*}$ is an absorbing state.

Proof. Assume $\lambda \in [\lambda_1, \lambda_2]$ and $\rho = 0$.
Let $s_{j^*}$ be the state associated with the grid value $\widehat{p^*} = j^*/10$ nearest to the equilibrium price $p^*$; hence the chain is in state $s_{j^*}$ exactly when the current price equals $j^*/10$.
First consider that, with probability $q$, the naive predictor is used, so that also $p_{t+1}^e = j^*/10$; hence $p_{t+1} = \widehat{f(j^*/10)}$. Notice that $f$ is increasing in $[0, 1/2]$, so $f(j^*/10)$ can be bounded by monotonicity. Trivial computations show that, for $\lambda \in [\lambda_1, \lambda_2]$, $f(j^*/10)$ belongs to the discretization cell of $j^*/10$. As a consequence $\widehat{f(j^*/10)} = j^*/10$, and consequently $p_{t+1} = j^*/10$, implying that $s_{j^*}$ is mapped into itself.
Second, with probability $1-q$, we have the forward predictor; hence $p_{t+1}^e = \alpha\, j^*/10 + (1-\alpha)\, p^*$, a point lying between $j^*/10$ and $p^*$; again $f$ is increasing in $[0, 1/2]$ and, trivially, the same bounds apply. Using the same arguments employed before, we conclude that the state $s_{j^*}$ is mapped into the state $s_{j^*}$.
Hence $s_{j^*}$ is an absorbing state.

As proved in Proposition 3.2, we can conclude that our chain admits an absorbing state if $\lambda \in [\lambda_1, \lambda_2]$, confirming the result obtained with our simulations.

Let us recall some mathematical results concerning the stability of Markov chains (for further details see, among others, Feller [22] and Meyn and Tweedie [23]). A stationary distribution is a probability distribution $\pi$ verifying the equation $\pi = \pi P$. If there exists one, and only one, probability distribution $\pi^*$ such that $\pi(n) \to \pi^*$ as $n \to \infty$ for every initial probability distribution $\pi(0)$, we say that the Markov chain is asymptotically stable.

In our case, for $\lambda \in [\lambda_1, \lambda_2]$, the Markov chain is asymptotically stable. In fact, the limit distribution $\pi^* = \lim_{n \to \infty} \pi(n)$ is the distribution concentrated on the absorbing state, and this result holds for every initial probability distribution $\pi(0)$.

The asymptotic behaviour of the probability distribution changes if we consider a larger value of $\lambda$ (notice that for such a value Proposition 3.2 does not hold). In this case we obtain a transition matrix whose nonzero entries are again computed directly from (2.3), (3.1), (3.3).

Rearranging the order of the states in $S$, it is easy to deduce that $P$ is reducible (as proved in Proposition 3.1): a set $C$ made of two states is closed and irreducible; consequently, the two states in $C$ are recurrent while all the other states are transient. Moreover, considering the irreducible matrix $P_C$ obtained by restricting $P$ to $C$, we obtain the invariant distribution (the nonzero elements of the invariant distribution are obtained by looking at the left eigenvectors of the matrix $P_C$ associated with the eigenvalue $1$). The following simulations support our analysis. We choose the same parameter values and calculate the probability distribution of the state variable after $n$ steps for several initial conditions.
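A sketch of the left-eigenvector computation on an irreducible stochastic matrix; the $2 \times 2$ entries are illustrative stand-ins, not the paper's values:

```python
import numpy as np

def invariant_distribution(P_C: np.ndarray) -> np.ndarray:
    """Invariant distribution of an irreducible stochastic matrix P_C:
    the left eigenvector associated with the eigenvalue 1, normalized
    so that its entries sum to one."""
    vals, vecs = np.linalg.eig(P_C.T)        # left eigenvectors of P_C
    k = int(np.argmin(np.abs(vals - 1.0)))   # eigenvalue closest to 1
    pi = np.abs(np.real(vecs[:, k]))         # the Perron vector has one sign
    return pi / pi.sum()

# Illustrative two-state closed set:
P_C = np.array([[0.2, 0.8],
                [0.7, 0.3]])
print(invariant_distribution(P_C))           # approx [0.467, 0.533]
```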

First of all, we consider the case of small $\lambda$. In Figure 1 we start from an initial condition concentrated in a single state with probability one. After the first step two states can be reached, with different probabilities. After the third step the absorbing state is reached with probability equal to one, as predicted by the previous considerations. This situation does not change if the number of steps taken is increased, since the absorbing state is the unique asymptotic state. It should be kept in mind that, for a given initial distribution, we define an asymptotic state as a state $s_j$ such that $\lim_{n \to \infty} \pi_j(n) > 0$.

In Figure 2 we calculate the probability distribution for the larger value of $\lambda$ and for increasing values of $n$. Our simulation shows that two equilibrium prices are approached in the long run, with different probabilities.

Moreover, considering simulations for different values of $\lambda$, it seems that as $\lambda$ increases the process becomes more complicated; this was already known in the case of the deterministic logistic map (see, e.g., Devaney [24]). In fact, several asymptotic states can be reached as $\lambda$ increases. More precisely, starting from the same initial condition, we have found different numbers of asymptotic states corresponding to different values of $\lambda$.

The diagram of Figure 3 represents a sort of final-states diagram: for $\lambda$ chosen in the interval $(1, 4]$, the probability distribution after $n$ time steps, with $n$ big enough and an arbitrarily chosen initial state (a single state taken with probability one), has been calculated and depicted versus the corresponding value of $\lambda$. Moreover, after a large number of simulations, we have observed that the final-states diagram does not change when different initial conditions are considered, so that Figure 3 seems to hold for every initial distribution $\pi(0)$. The empirical result described here is in agreement with the result proved in Proposition 3.2; furthermore, we can conclude that the absorbing state existing for $\lambda \in [\lambda_1, \lambda_2]$ is also an asymptotic state.
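A diagram in the spirit of Figure 3 can be sketched by sweeping $\lambda$ and recording the support of $\pi(n)$ for large $n$; this reuses the hypothetical `transition_matrix` helper above, and the threshold and parameter values are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

n, q, alpha = 2000, 0.5, 0.5
for lam in np.linspace(1.01, 4.0, 300):
    P = transition_matrix(lam, q, alpha)
    pi = np.eye(11)[3] @ np.linalg.matrix_power(P, n)
    for j in np.nonzero(pi > 1e-6)[0]:       # states with nonnegligible mass
        plt.plot(lam, j / 10.0, "k.", markersize=2)
plt.xlabel("lambda"); plt.ylabel("long-run price states")
plt.show()
```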

The situation is quite different for greater values of $\lambda$. For example, our calculations show that there exists a value $\lambda^*$ such that the process converges to a unique asymptotic state if $\lambda < \lambda^*$, while five asymptotic states are possible as soon as the value $\lambda^*$ is crossed. According to these considerations, $\lambda^*$ can be understood as a sort of bifurcation value, or critical value, since the number of long-run states changes going across it. In Figure 4, we simulate the two cases $\lambda < \lambda^*$ and $\lambda > \lambda^*$ using a large number of time steps $n$, in order to show how the asymptotic probability distribution changes.

This study allows us to conclude that, in the case with naive versus forward-looking expectations, there exists a unique asymptotic distribution, whose behaviour becomes more complicated as $\lambda$ increases, in accordance with the fact that nonlinearity implies nontrivial dynamics.

Obviously, the quantitative results are not independent of the number of states considered; in any case, it is possible to verify that the qualitative results (i.e., the increase in complexity as $\lambda$ increases) still hold. As a matter of fact, note that a similar scenario occurs when we consider the model in a deterministic context with naive expectations. In fact, in this case, the price sequence converges to a fixed point or to a $k$-period cycle depending on the value of $\lambda$, and as $\lambda$ increases the process produces orbits tending towards high-period cycles (see Devaney [24]).

3.2. The Continuous-Time Stochastic Process

In this section we move on to the continuous-time Markov process: we calculate the probability distribution by solving the appropriate system of ordinary differential equations, and we compare it with the probability distribution obtained by statistical simulation of the appropriate continuous-time limit of the stochastic process defined by (2.3), (3.1), (3.3). (For the mathematical concepts related to continuous Markov processes see Ethier and Kurtz [25].)

In particular, we start by looking at the transition matrix over the time interval $[0, t]$. Using (3.7) and taking $n$ time steps of length $\delta = t/n$, we obtain that the following relation holds: $P(t) = P^{n}$.

Assuming $t = n\delta$, we have $\pi(t) = \pi(0)\,P^{n}$; when $\delta$ goes to zero and $n$ goes to infinity in order to guarantee $n\delta = t$ with $t$ fixed, it is easy to deduce the following system of ordinary differential equations, called the Chapman-Kolmogorov forward equations:
$$\frac{d\pi(t)}{dt} = \pi(t)\, Q, \tag{3.14}$$
where the initial condition of (3.14) is $\pi(0) = \pi_0$, $\pi_0$ being a given probability distribution, and $Q$ is called the infinitesimal generator matrix, whose entries $q_{ij}$ are the transition rates.

It is well known that the solution of (3.14) is given by
$$\pi(t) = \pi_0\, e^{Qt}, \tag{3.15}$$
where $e^{Qt} = \sum_{k=0}^{\infty} (Qt)^{k}/k!$. In order to define the transition matrix over the time interval $[0, t]$, $t > 0$, in terms of the infinitesimal generator matrix $Q$, we consider that the probability vector can be expressed as follows:
$$\pi(t + \delta) = \pi(t)\,\big(I + Q\,\delta + o(\delta)\big),$$
where $o(\delta)$ is the Landau symbol.

Since the interval $[0, t]$ is divided into $n$ time steps of length $\delta = t/n$, we have
$$\pi(t) = \pi(0)\,\big(I + Q\,\delta + o(\delta)\big)^{n},$$
so that we have
$$\pi(t) = \lim_{n \to \infty} \pi(0)\left(I + \frac{Qt}{n}\right)^{n} = \pi(0)\, e^{Qt}.$$
Then, the transition matrix over $[0, t]$ is given by $P(t) = e^{Qt}$.

From (3.15) we have that, calculating the exponential of the generator matrix $Q$ times $t$ and acting with this “exponential matrix” on $\pi(0)$, we obtain the probability vector $\pi(t)$ for an arbitrary value of $t$. Moreover, we observe that the matrix defined as $Q = P - I$, where $P$ is the one-step transition matrix defined in the previous section and $I$ is the identity matrix, is an infinitesimal generator matrix. In fact, it is easy to verify that the following properties hold (see Inamura [26]):

(1) $q_{ii} \le 0$ for all $i$;
(2) $q_{ij} \ge 0$ for all $i \ne j$;
(3) $\sum_{j} q_{ij} = 0$ for all $i$, with $q_{ij}$ the entries of $Q$.
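A sketch of the continuous-time solution (3.15), together with a numerical check of properties (1)-(3); the $2 \times 2$ matrix is an illustrative stand-in for the $11 \times 11$ matrix of Section 3.1:

```python
import numpy as np
from scipy.linalg import expm

def pi_t(pi0: np.ndarray, P: np.ndarray, t: float) -> np.ndarray:
    """Solution (3.15) of the forward equations (3.14):
    pi(t) = pi(0) e^{Qt}, with infinitesimal generator Q = P - I."""
    Q = P - np.eye(P.shape[0])
    return pi0 @ expm(Q * t)

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])                    # illustrative stochastic matrix
Q = P - np.eye(2)
assert np.all(np.diag(Q) <= 0)                # property (1)
assert np.all(Q - np.diag(np.diag(Q)) >= 0)   # property (2): off-diagonals
assert np.allclose(Q.sum(axis=1), 0.0)        # property (3): zero row sums
print(pi_t(np.array([1.0, 0.0]), P, t=5.0))
```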

We present some simulations to compare the numerical solution obtained by computing (3.15) with the statistical distribution obtained from simulations of the appropriate continuous-time limit of the stochastic process defined by (2.3), (3.1), (3.3).

In Figure 5, we consider the case of small $\lambda$ and we observe that, as $\delta \to 0$ and $n \to \infty$ in such a way that the product $n\delta$ is equal to a constant value $t$, the statistical distribution approaches the solution obtained by solving numerically (3.14) with the appropriate initial condition, that is, by computing (3.15).

Figure 6 confirms that the same conclusion holds if we consider the larger value of $\lambda$.

4. The Role of the Memory Rate: Numerical Simulations

We now come back to the initial model with discrete time and continuous states, where the memory rate $\rho$ is not necessarily zero, and we perform some numerical simulations for several choices of the parameters. In this way it is possible to compare different markets made up of agents applying naive and infinite-memory expectations, each of them versus forward-looking ones.

Our simulations have been performed as follows:

(i) We depict a trajectory starting at time zero from an initial condition $p_0$, concentrated with probability one in one state, from time $t = 0$ to time $t = T$. We extract a $T$-dimensional random vector whose $t$th component is denoted by $y_t$, made of random numbers independent and uniformly distributed in the interval $[0,1]$. At step $t$ of the trajectory simulation we compare $y_t$ to the given value of $q$, we choose the expectation-formation mechanism accordingly (backward if $y_t < q$, forward otherwise), and we apply the map (2.3). Repeating this procedure at each time step, we depict the trajectory of the stochastic dynamical system in the plane $(t, p_t)$. Obviously, different extractions of the random vector correspond, in general, to different trajectories of the stochastic dynamical system.
(ii) We repeat the previous procedure several times, that is, we extract a large number of vectors of random numbers, so that we obtain a sufficiently large number of trajectories of the stochastic dynamical system. In this way, for any given time $t$, we have constructed a sample of the possible outcomes for $p_t$. It is then straightforward to draw an approximate probability distribution of the random variable $p_t$ (a code sketch of the whole procedure is given below).

This procedure is in agreement with the consideration of a market made of a large number of agents and with the heterogeneity of beliefs, that is, with the assumption that, on average, agents distribute between the two predictors in the fractions $q$ and $1-q$, respectively.
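A minimal Monte Carlo sketch of procedure (i)-(ii); the weighted mean (2.4) is updated recursively to avoid storing the whole price history, and `alpha` and the parameter values are again illustrative assumptions:

```python
import numpy as np

def simulate(T, n_traj, lam, q, rho, alpha, p0, seed=0):
    """Monte Carlo version of procedure (i)-(ii): n_traj independent
    trajectories of length T; column t of the returned array is a sample
    of the random variable p_t. The weighted mean (2.4) is updated
    recursively: s_t = p_t + rho*s_{t-1}, norm_t = 1 + rho*norm_{t-1}."""
    rng = np.random.default_rng(seed)
    p_star = (lam - 1.0) / lam
    out = np.empty((n_traj, T + 1))
    for m in range(n_traj):
        p, s, norm = p0, p0, 1.0
        out[m, 0] = p0
        for t in range(1, T + 1):
            if rng.random() < q:                  # backward branch, prob q
                p_e = s / norm
            else:                                 # forward branch, prob 1-q
                p_e = alpha * p + (1.0 - alpha) * p_star
            p = lam * p_e * (1.0 - p_e)           # map (2.3)
            s, norm = p + rho * s, 1.0 + rho * norm
            out[m, t] = p
    return out

sample = simulate(T=200, n_traj=5000, lam=2.5, q=0.5, rho=0.3,
                  alpha=0.5, p0=0.3)
# approximate distribution of p_t at the final time:
hist, edges = np.histogram(sample[:, -1], bins=50, range=(0.0, 1.0),
                           density=True)
```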

We first assume $\rho = 0$, as done in the previous section. Our goal is to find numerically the distribution of the random variable $p_t$ at time $t$ and to consider the behaviour of this distribution as $t \to \infty$.

We represent the evolution in time of $p_t$ for a small value of $\lambda$ and an intermediate value of $q$. In Figure 7, panel (a), three different trajectories are depicted; in panel (b) the probability distributions are presented for several values of the time variable $t$. Note that, when $t$ is big enough, the equilibrium price is approached by the stochastic cobweb model.

Completely different behaviours are observed if we change the value of the parameter $\lambda$. For example, let us choose a larger $\lambda$ while the other parameters are the same as those used in the previous simulations. In Figure 8, we show, for these new parameter values, the results of the same simulation shown in Figure 7.

When $\lambda$ is large, the trajectories do not approach the equilibrium price but continue to move within a bounded interval, even in the long run.

This behaviour is closely related to that exhibited by the deterministic cobweb model with naive expectations and to the dynamics of the logistic map: as $\lambda$ increases, the deterministic cobweb model produces more and more complex dynamics. Similarly, in the stochastic context, prices no longer converge to the equilibrium value but fluctuate infinitely many times. Consider now $\rho > 0$. We want to understand how the memory rate affects the asymptotic probability distribution of our stochastic model. Assume the same $\lambda$ as in the previous simulation and let $\rho$ increase from zero toward one. An interesting observation is that, as the memory rate increases, the unique equilibrium price is reached after a decreasing number of time steps, thus confirming the stabilizing effects of the presence of resistant memory. The same effect has been observed in the deterministic context. This consideration is confirmed by the simulation shown in Figure 9, representing the trajectories depicted for several increasing values of $\rho$.
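This effect can be sketched numerically with the hypothetical `simulate` helper above; the parameters and the tolerance `eps` are assumptions:

```python
import numpy as np

# For each rho we record the first time step at which a single trajectory
# enters an eps-neighbourhood of the equilibrium price p* (0 means "never").
lam, q, alpha, p0, eps = 2.5, 0.5, 0.5, 0.3, 1e-3
p_star = (lam - 1.0) / lam
for rho in (0.0, 0.3, 0.6, 0.9):
    traj = simulate(T=500, n_traj=1, lam=lam, q=q, rho=rho,
                    alpha=alpha, p0=p0, seed=42)[0]
    hit = int(np.argmax(np.abs(traj - p_star) < eps))
    print(f"rho = {rho:.1f}: first step within eps of p*: {hit}")
```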

Similar results have been observed for several other parameter values. All the experiments confirm the well-known result obtained for the deterministic cobweb model with infinite memory expectations, that is, the fact that the presence of resistant memory contributes to stabilizing the price dynamics.

5. Conclusions

We studied a nonlinear stochastic cobweb model with a linear demand function and a backward-bending (parabolic) supply function, together with two price predictors, called backward-looking (based on a weighted mean of past prices) and forward-looking (based on a convex combination of the current and the equilibrium price), respectively. The representative agent chooses between these expectations; this fact may be interpreted in terms of a population of economic agents such that, on average, a fraction chooses one kind of expectation while the remaining fraction chooses the other. Since a random choice between the two price expectations is allowed (a possible motivation is the assumption of heterogeneity in beliefs among agents), we have introduced a new element, namely a stochastic term, into the well-known cobweb model. In fact, even if research into the cobweb model has a long history, the existing literature is limited to the deterministic context. As far as we know, the dynamics of a cobweb model with expectations decided on the basis of a stochastic mechanism has not been studied in the literature, so our paper may be seen as a first step in this direction, and our results may trigger further studies in this field.

In order to describe the features of the model, we have concentrated on the case in which the backward predictor is simply a static expectation, so that the stochastic dynamical system is a Markov process, considered both in discrete and continuous time.

By using an appropriate transformation, it has been possible to study a new discrete time model with discrete states. In this way we have been able to apply some well-known results regarding finite-state Markov chains and to prove some general analytical results about the reducibility of the chain and the existence of an absorbing state for some values of the parameters. We have also presented numerical simulations confirming our analytical results. From an empirical point of view, we have observed that the absorbing state is also an asymptotic state if the parameter $\lambda$ is small enough. On the other hand, for increasing values of $\lambda$, the chain is still reducible, although its closed set may be composed of more than one state, proving that cyclical behaviour is admitted in the stochastic cobweb model, as in the deterministic one. Other evidence is related to the sensitivity of the invariant distribution to small changes in the parameter $\lambda$. The study done for the discrete time stochastic model with finite states enables us to conclude that, as $\lambda$ increases, the process becomes more complicated, confirming the well-known results on the deterministic logistic map.

In a following step, we moved to the continuous-time Markov process in order to use analytical tools for differential equations that make it possible to obtain the exact probability distribution at any time $t$. By solving the appropriate system, the probability distribution has been obtained, and some numerical simulations have been presented by computing the continuous-time limit of the discrete-time Markov chain. We have shown that, as the time discretization step goes to zero, the result of the statistical simulation approaches the probability distribution obtained analytically.

Finally, we came back to the case of backward expectations with memory. Since the model becomes quite difficult to treat analytically, we presented some numerical simulations that enabled us to consider the role of the memory rate in the stochastic cobweb model. We have found that the presence of resistant memory affects the asymptotic probability distribution: it contributes to reducing fluctuations and to stabilizing the price dynamics, thus confirming the standard result in the economic literature.

Acknowledgment

The authors wish to thank all the anonymous referees for their helpful comments and suggestions.