Abstract

What role have theoretical methods initially developed in mathematics and physics played in the progress of financial economics? What is the relationship between financial economics and econophysics? What is the relevance of the “classical ergodicity hypothesis” to modern portfolio theory? This paper addresses these questions by reviewing the etymology and history of the classical ergodicity hypothesis in 19th century statistical mechanics. An explanation of classical ergodicity is provided that establishes a connection to the fundamental empirical problem of using nonexperimental data to verify theoretical propositions in modern portfolio theory. The role of the ergodicity assumption in the ex post/ex ante quandary confronting modern portfolio theory is also examined.

“As the physicist builds models of the movement of matter in a frictionless environment, the economist builds models where there are no institutional frictions to the movement of stock prices.”
E. Elton and M. Gruber, Modern Portfolio Theory and Investment Analysis (1984, p. 273)

1. Introduction

At least since Markowitz [1] initiated modern portfolio theory (MPT), it has often been maintained that the tradeoff between systematic risk and expected return is the most important theoretical element of financial economics, for example, Campbell [2]. Extending Mirowski [3], the static equilibrium methods used to develop propositions in MPT such as the capital asset pricing model can be traced to mathematical concepts developed from the deterministic “rational mechanics” approach to 19th century physics. In the years since Markowitz [1], financial economics has also adopted alternative mathematical methods from more recent contributions to physics, especially the diffusion processes employed by Black and Scholes [4] to determine option prices. The emergence of econophysics during the last decade of the twentieth century, for example, Roehner [5] and Jovanovic and Schinckus [6], has provided a variety of theoretical and empirical methods adapted from physics, ranging from statistical mechanics to chaos theory, to analyze financial phenomena. Yet, despite considerable overlap in method, contributions to econophysics have gained limited attention in financial economics. In contrast, econophysicists generally consider financial economics to be primarily concerned with a core theory that is inconsistent with the empirical orientation of physical theory.

Physical theory has evolved considerably from the constrained optimization, static equilibrium approach of rational mechanics which underpins MPT. In detailing historical developments in physics since the 19th century, it is conventional to jump from the determinism of rational mechanics to quantum mechanics to recent developments in chaos theory, overlooking the initial steps toward modeling the stochastic behavior of physical phenomena taken by Ludwig Boltzmann (1844–1906), James Maxwell (1831–1879), and Josiah Gibbs (1839–1903). As such, there is a point of demarcation between the intellectual prehistories of MPT and econophysics that can, arguably, be traced to the debate over energistics around the end of the 19th century. While the evolution of physics after energistics involved the introduction and subsequent stochastic generalization of ergodic concepts, financial economics, fueled by the emergence of MPT following Markowitz, incorporated ergodicity into empirical methods aimed at generalizing and testing the capital asset pricing model and other elements of MPT.1 Significantly, stochastic generalization of the static equilibrium approach of MPT required the adoption of a restricted class of ergodic processes, that is, "time reversible" probabilistic models, especially the unimodal likelihood functions associated with certain stationary distributions.

In contrast, from the early ergodic models of Boltzmann to the fractals and chaos theory of Mandelbrot, physics has employed a wider variety of ergodic and nonergodic stochastic models aimed at capturing the key empirical characteristics of the physical problem at hand. Such models typically have a mathematical structure that differs from the constrained optimization techniques underpinning MPT, restricting the straightforward application of many physical models. Yet, the demarcation between the use of ergodic notions in physics and financial economics was blurred substantively by the introduction of diffusion process techniques to solve contingent claims valuation problems. Following contributions by Sprenkle [7] and Samuelson [8], Black and Scholes [4] and Merton [9] provided an empirically viable method of using diffusion processes and Ito's lemma to derive a partial differential equation that can be solved for an option price. Use of Ito's lemma to solve stochastic optimization problems is now commonplace in financial economics, for example, Brennan and Schwartz [10], including the continuous time generalizations of MPT, for example, Epstein and Ji [11]. In spite of the considerable progression of certain mathematical techniques employed in physics into financial economics, overcoming the difficulty of adapting the wide range of models developed for physical situations to the empirical properties of financial data is still a central problem confronting econophysics.

Schinckus [12, p. 3816] accurately recognizes that the positivist philosophical foundation of econophysics depends fundamentally on empirical observation: "The empiricist dimension is probably the first positivist feature of econophysics." Following McCauley [13] and others, this concern with empiricism often focuses on the identification of macrolevel statistical regularities that are characterized by the scaling laws identified by Mandelbrot [14] and Mandelbrot and Hudson [15] for financial data. Unfortunately, this empirically driven ideal is often confounded by the "nonrepeatable" experiment that characterizes most observed economic and financial data. There is a quandary posed by having only a single observed ex post time path with which to estimate the distributional parameters for the ensemble of ex ante time paths needed to make decisions involving future values of financial variables. In contrast to the natural sciences, such as physics, in the human sciences there is no assurance that ex post statistical regularity translates into ex ante forecasting accuracy. Resolution of this quandary highlights the usefulness of employing a "phenomenological" approach to modeling the stochastic properties of financial variables relevant to MPT.
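To make the ex post/ex ante quandary concrete, the following minimal simulation sketch (the AR(1) data-generating process and all parameter values are illustrative assumptions, not taken from the cited literature) contrasts the ensemble average at a fixed date with the time average along a single realized path; for an ergodic process the two agree, which is precisely what cannot be taken for granted with nonexperimental financial data:

```python
import numpy as np

rng = np.random.default_rng(42)
phi, T, N = 0.9, 10_000, 10_000   # illustrative AR(1) persistence, horizon, paths

# Ensemble average: N independent ex ante paths observed at one date.
x = np.zeros(N)
for _ in range(T):
    x = phi * x + rng.standard_normal(N)
ensemble_mean = x.mean()

# Time average: a single ex post path observed over T dates.
y, total = 0.0, 0.0
for _ in range(T):
    y = phi * y + rng.standard_normal()
    total += y
time_mean = total / T

# For this ergodic process the two averages agree (both near zero);
# for a nonergodic process only the ensemble average is relevant ex ante.
print(f"ensemble mean: {ensemble_mean:.3f}, time mean: {time_mean:.3f}")
```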

To this end, this paper provides an etymology and history of the “classical ergodicity hypothesis” in 19th century statistical mechanics. Subsequent use of ergodicity in financial economics, in general, and MPT, in particular, is then examined. A modern interpretation of classical ergodicity is provided that uses Sturm-Liouville theory, a mathematical method central to classical statistical mechanics, to decompose the transition probability density of a one-dimensional diffusion process subject to regular upper and lower reflecting barriers. This “classical” decomposition divides the transition density of an ergodic process into a possibly multimodal limiting stationary density which is independent of time and initial condition and a power series of time and boundary dependent transient terms. In contrast, empirical theory aimed at estimating relationships from MPT typically ignores the implications of the initial and boundary conditions that generate transient terms and focuses on properties of a particular class of unimodal limiting stationary densities with finite parameters. To illustrate the implications of the expanded class of ergodic processes available to econophysics, properties of the bimodal quartic exponential stationary density are considered and used to assess the ability of the classical ergodicity hypothesis to explain certain “stylized facts” associated with the ex post/ex ante quandary confronting MPT.

2. A Brief History of Classical Ergodicity

The Encyclopedia of Mathematics [16] defines ergodic theory as the "metric theory of dynamical systems, the branch of the theory of dynamical systems that studies systems with an invariant measure and related problems." This modern definition implicitly identifies the birth of ergodic theory with proofs of the mean ergodic theorem by von Neumann [17] and the pointwise ergodic theorem by Birkhoff [18]. These early proofs have had significant impact in a wide range of modern subjects. For example, the notions of invariant measure and metric transitivity used in the proofs are fundamental to the measure theoretic foundation of modern probability theory [19, 20]. In the years immediately following Kolmogorov's seminal contribution to probability theory [21], it was recognized that the ergodic theorems generalize the strong law of large numbers. Similarly, the equality of ensemble and time averages—the essence of the mean ergodic theorem—is necessary to the concept of a strictly stationary stochastic process. Ergodic theory is the basis for the modern study of random dynamical systems, for example, Arnold [22]. In mathematics, ergodic theory connects measure theory with the theory of transformation groups. This connection is important in motivating the generalization of harmonic analysis from the real line to locally compact groups.

From the perspective of modern mathematics, statistical physics, or systems theory, Birkhoff [18] and von Neumann [17] are excellent starting points for a modern history of ergodic theory. Building on the modern ergodic theorems, subsequent developments in these and related fields have been dramatic. These contributions mark the solution to a problem in statistical mechanics and thermodynamics that was recognized sixty years earlier when Ludwig Boltzmann introduced the classical ergodic hypothesis to permit the theoretical phase space average to be interchanged with the measurable time average. For the purpose of contrasting methods from physics and econophysics with those used in MPT, the less formally rigorous classical ergodic hypothesis of Boltzmann is a more auspicious beginning. Problems of interest in mathematics are generated by a range of subjects, such as physics, chemistry, engineering, and biology. The formulation and solution of physical problems in, say, statistical mechanics or particle physics will have mathematical features which are inapplicable or unnecessary for MPT. For example, in statistical mechanics, points in the phase space are often multidimensional functions representing the mechanical state of the system; hence the desirability of a group-theoretic interpretation of the ergodic hypothesis. From the perspective of MPT, such complications are largely irrelevant. The history of classical ergodic theory captures the etymology and basic physical interpretation, providing a more revealing prehistory of the mathematics relevant to MPT. This prehistory begins with the formulation of the theoretical problems that von Neumann and Birkhoff were later able to solve.

Mirowski [3] [23, esp. ch. 5] establishes the importance of 19th century physics in the development of the neoclassical economic system advanced by W. Stanley Jevons (1835–1882) and Leon Walras (1834–1910) during the marginalist revolution of the 1870s. Being derived using principles from neoclassical economic theory, MPT also inherited essential features of mid-19th century physics: deterministic rational mechanics; conservation of energy; and the nonatomistic continuum view of matter that inspired the energetics movement later in the 19th century.2 More precisely, from neoclassical economics MPT inherited a variety of static equilibrium techniques and tools such as mean-variance utility functions and constrained optimization. As such, failings of neoclassical economics identified by econophysicists also apply to central propositions of MPT. Included in the failings is an overemphasis on theoretical results at the expense of identifying models that have greater empirical validity, for example, Roehner [5] and Schinckus [12].

It was during the transition from rational to statistical mechanics during the last third of the 19th century that Boltzmann made contributions leading to the transformation of theoretical physics from the microscopic mechanistic models of Rudolf Clausius (1822–1888) and James Maxwell to the macroscopic probabilistic theories of Josiah Gibbs and Albert Einstein (1879–1955).3 Coming largely after the start of the marginalist revolution in economics, this fundamental transformation in theoretical physics had little impact on the progression of financial economics until the appearance of diffusion equations in contributions on continuous time finance that started in the 1960s and culminated in Black and Scholes [4]. The deterministic mechanics of the energistic approach was well suited to the axiomatic formalization of neoclassical economic theory, which culminated in the von Neumann and Morgenstern expected utility approach to modeling uncertainty; the Bourbaki inspired Arrow-Debreu general equilibrium theory, for example, Weintraub [24]; and, ultimately, MPT. In turn, empirical estimation and the subsequent extension of static equilibrium MPT results to continuous time were facilitated by the adoption of a narrow class of ergodic processes.

Having descended from the deterministic rational mechanics of mid-19th century physics, the defining works of MPT do not capture the probabilistic approach to modeling systems initially introduced by Boltzmann and further clarified by Gibbs.4 Mathematical problems raised by Boltzmann were subsequently solved using tools introduced in a string of later contributions by the likes of the Ehrenfests and Cantor in set theory, Gibbs and Einstein in physics, Lebesgue in measure theory, Kolmogorov in probability theory, and Wiener and Lévy in stochastic processes. Boltzmann was primarily concerned with problems in the kinetic theory of gases, formulating dynamic properties of the stationary Maxwell distribution, the velocity distribution of gas molecules in thermal equilibrium. Starting in 1871, Boltzmann took this analysis one step further to determine the evolution equation for the distribution function. The mathematical implications of this classical analysis still resonate in many subjects of the modern era. The etymology for "ergodic" begins with an 1884 paper by Boltzmann, though the initial insight to use probabilities to describe a gas system can be found as early as 1857 in a paper by Clausius and in the famous 1860 and 1867 papers by Maxwell.5

The Maxwell distribution is defined over the velocity of gas molecules and provides the probability for the relative number of molecules with velocities in a certain range. Using a mechanical model that involved molecular collision, Maxwell [25] was able to demonstrate that, in thermal equilibrium, this distribution of molecular velocities was a "stationary" distribution that would not change shape due to ongoing molecular collision. Boltzmann aimed to determine whether the Maxwell distribution would emerge in the limit, whatever the initial state of the gas. In order to study the dynamics of the equilibrium distribution over time, Boltzmann introduced the probability distribution of the relative time a gas molecule has a velocity in a certain range while still retaining the notion of probability for velocities of a relative number of gas molecules. Under the classical ergodic hypothesis, the average behavior of the macroscopic gas system, which can objectively be measured over time, can be interchanged with the average value calculated from the ensemble of unobservable and highly complex microscopic molecular motions at a given point in time. In the words of Wiener [26, p. 1], "Both in the older Maxwell theory and in the later theory of Gibbs, it is necessary to make some sort of logical transition between the average behavior of all dynamical systems of a given family or ensemble, and the historical average of a single system."

3. Use of the Ergodic Hypothesis in Financial Economics

At least since Samuelson [27], it has been recognized that empirical theory and estimation in economics, in general, and financial economics, in particular, relies heavily on the use of specific unimodal stationary distributions associated with a particular class of ergodic processes. As reflected in the evolution of the concept in economics, the specification and implications of ergodicity have only developed gradually. The early presentation of ergodicity by Samuelson [27] involves the addition of a discrete Markov error term into the deterministic cobweb model to demonstrate that estimated forecasts of future values, such as prices, “should be less variable than the actual data.” Considerable opaqueness about the definition of ergodicity is reflected in the statement that a “‘stable’ stochastic process… eventually forgets its past and therefore in the far future can be expected to approach an ergodic probability distribution” [27, p. 2]. The connection between ergodic processes and nonlinear dynamics that characterizes present efforts in economics goes unrecognized, for example, [27, p. 1, 5]. While some explicit applications of ergodic processes to theoretical modeling in economics have emerged since Samuelson [27], for example, Horst and Wenzelburger [28] and Dixit and Pindyck [29], financial econometrics has produced the bulk of the contributions.

Initial empirical estimation for the deterministic models of neoclassical economics proceeded by adding a stationary, usually Gaussian, error term to produce a discrete time general linear model (GLM), leading to estimation using ordinary least squares or maximum likelihood techniques. In the history of MPT, such early estimations were associated with tests of the capital asset pricing model such as the "market model," for example, Elton and Gruber [30]. Iterations and extensions of the GLM to deal with complications arising in empirical estimates dominated early work in econometrics, for example, Dhrymes [31] and Theil [32], leading to the application of generalized least squares estimation techniques that encompassed autocorrelated and heteroskedastic error terms. Employing vector space methods with stationary Gaussian-based error term distributions ensured that these early stochastic models implicitly assumed ergodicity. The subsequent generalization of this discrete time estimation approach to the class of ARCH and GARCH error term models initiated by Engle was of such significance that a Nobel Prize in economics was awarded for this contribution, for example, Engle and Granger [33]. By modeling the evolution of the volatility, this approach permitted a limited degree of nonlinearity to be modeled, providing a substantively better fit of MPT models to observed financial time series, for example, Beaulieu et al. [34].
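The volatility clustering captured by ARCH/GARCH error terms can be illustrated with a minimal GARCH(1,1) simulation (the parameter values below are illustrative assumptions, not estimates from any of the cited studies):

```python
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.08, 0.90   # illustrative GARCH(1,1) parameters
T = 50_000
r = np.zeros(T)                              # "returns"
h = np.full(T, omega / (1 - alpha - beta))   # conditional variance

for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

def acf1(z):
    """Lag-one sample autocorrelation."""
    z = z - z.mean()
    return float((z[1:] * z[:-1]).sum() / (z * z).sum())

# Returns are serially uncorrelated, but squared returns are not:
# a limited, ergodic form of nonlinearity in the error process.
print(f"ACF(1) of r:   {acf1(r):.3f}")       # ~ 0.0
print(f"ACF(1) of r^2: {acf1(r ** 2):.3f}")  # clearly positive
```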

The emergence of ARCH, GARCH, and related empirical models was part of a general trend toward the use of inductive methods in economics, often employing discrete, linear time series methods to model transformed economic variables, for example, Hendry [35]. At least since Dickey and Fuller [36], it has been recognized that estimates of univariate time series models for many financial time series reveal evidence of "non-stationarity." A number of approaches have emerged to deal with this apparent empirical quandary.6 In particular, transformation techniques for time series models have received considerable attention. Extension of the Box-Jenkins methodology led to the concept of economic time series being I(0)—stationary in the level—or I(1)—nonstationary in the level but stationary after first differencing. Two I(1) economic variables could be cointegrated if a linear combination of the two series produced an I(0) process, for example, Hendry [35]. Extending early work on distributed lags, long memory processes have also been employed, where the time series is only subject to fractional differencing. Significantly, recent contributions on Markov switching processes and exponential smooth transition autoregressive processes have demonstrated the "possibility that nonlinear ergodic processes can be misinterpreted as unit root nonstationary processes" [37, p. 620]. Bonomo et al. [38] illustrate the recent application of Markov switching processes in estimating the asset pricing models of MPT.
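A minimal sketch of the I(0)/I(1) distinction and of cointegration follows (the lag-one autocorrelation is used as a crude stand-in for a formal Dickey-Fuller test, and the data-generating process is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000
w = rng.standard_normal(T).cumsum()    # common I(1) stochastic trend
x = w + rng.standard_normal(T)         # I(1) series
y = 0.5 * w + rng.standard_normal(T)   # I(1), cointegrated with x

def acf1(z):
    z = z - z.mean()
    return float((z[1:] * z[:-1]).sum() / (z * z).sum())

print(f"x in levels        : {acf1(x):.3f}")            # ~ 1: unit root, I(1)
print(f"x first differences: {acf1(np.diff(x)):.3f}")   # ~ 0: stationary, I(0)
print(f"spread y - 0.5*x   : {acf1(y - 0.5 * x):.3f}")  # well below 1: cointegrated
```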

The conventional view of ergodicity in economics, in general, and financial economics, in particular, is reflected by Hendry [35, p. 100]: "Whether economic reality is an ergodic process after suitable transformation is a deep issue" which is difficult to analyze rigorously. As a consequence, in the limited number of instances where ergodicity is examined in economics, a variety of different interpretations appear. In contrast, the ergodic hypothesis in classical statistical mechanics is associated with the physically transparent kinetic gas model rather than the often technical and targeted concepts of ergodicity encountered in modern economics, in general, and MPT, in particular. For Boltzmann, the classical ergodic hypothesis permitted the unobserved complex microscopic interactions of individual gas molecules to obey the second law of thermodynamics, a concept that has limited application in economics.7 Despite differences in physical interpretation, there is a similarity to the problem of modeling "macroscopic" financial variables, such as common stock prices, foreign exchange rates, "asset" prices, or interest rates. When it is not possible to derive a theory for describing and predicting empirical observations from known first principles about the (microscopic) rational behavior of individuals and firms, by construction, this involves a phenomenological approach to modeling.8

Even though the formal solutions proposed were inadequate by the standards of modern mathematics, the thermodynamic model introduced by Boltzmann to explain the dynamic properties of the Maxwell distribution is a pedagogically useful starting point for developing the implications of ergodicity for MPT. To be sure, von Neumann [17] and Birkhoff [18] correctly specify ergodicity using Lebesgue integration, an essential analytical tool unavailable to Boltzmann, but the analysis is too complex to be of much value to any but the most mathematically specialized economists. The physical intuition of the kinetic gas model is lost in the generality of the results. Using Boltzmann as a starting point, the large number of complex mechanical molecular collisions could correspond to the large number of microscopic, atomistic liquidity providers and traders interacting to determine the macroscopic financial market price. In this context, it is variables such as the asset price or the interest rate or the exchange rate, or some combination of these, that is being measured over time, and ergodicity would be associated with the properties of the transition density generating the macroscopic variables. Ergodicity can fail for a number of reasons and there is value in determining the source of the failure. In this vein, there are two fundamental difficulties associated with the classical ergodicity hypothesis in Boltzmann's statistical mechanics, reversibility and recurrence, that are largely unrecognized in financial economics.9

Halmos [39, p. 1017] is a helpful starting point to sort out the differing notions of ergodicity that are of relevance to the issues at hand: “The ergodic theorem is a statement about a space, a function and a transformation.” In mathematical terms, ergodicity or “metric transitivity” is a property of “indecomposable,” measure preserving transformations. Because the transformation acts on points in the space, there is a fundamental connection to the method of measuring relationships such as distance or volume in the space. In von Neumann [17] and Birkhoff [18], this is accomplished using the notion of Lebesgue measure: the admissible functions are either integrable (Birkhoff) or square integrable (von Neumann). In contrast to, say, statistical mechanics where spaces and functions account for the complex physical interaction of large numbers of particles, economic theories such as MPT usually specify the space in a mathematically convenient fashion. For example, in the case where there is a single random variable, then the space is “superfluous” [20, p. 182] as the random variable is completely described by the distribution. Multiple random variables can be handled by assuming the random variables are discrete with finite state spaces. In effect, conditions for an “invariant measure” are assumed in MPT in order to focus attention on “finding and studying the invariant measures” [22, p. 22], where in the terminology of financial econometrics, the invariant measure usually corresponds to the stationary distribution or likelihood function.

The mean ergodic theorem of von Neumann [17] provides an essential connection to the ergodicity hypothesis in financial econometrics. It is well known that, in the Hilbert and Banach spaces common to econometric work, the mean ergodic theorem corresponds to the strong law of large numbers. In statistical applications where strictly stationary distributions are assumed, the relevant ergodic transformation, $L^*$, is the unit shift operator: $L^*\Psi[x(t)] = \Psi[x(t+1)]$; $(L^*)^k\Psi[x(t)] = \Psi[x(t+k)]$; and $(L^*)^{-k}\Psi[x(t)] = \Psi[x(t-k)]$, with $k$ being an integer and $\Psi[x]$ the strictly stationary distribution for $x$ that in the strictly stationary case is replicated at each $t$.10 Significantly, this reversible transformation is independent of initial time and state. Because this transformation can be achieved by imposing strict stationarity on $\Psi[x]$, $L^*$ will only work for certain ergodic processes. In effect, the ergodic requirement that the transformation is measure preserving is weaker than the strict stationarity of the stochastic process sufficient to achieve $L^*$. The practical implications of the reversible ergodic transformation are described by Davidson [40, p. 331]: "In an economic world governed entirely by [time reversible] ergodic processes… economic relationships among variables are timeless, or ahistoric in the sense that the future is merely a statistical reflection of the past [sic]."11
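The time reversibility implicit in the unit shift transformation can be checked directly on simulated data: for a reversible stationary process the joint distribution of $(x(t), x(t+1))$ is exchangeable, so moments such as $E[x(t)^2 x(t+1)]$ and $E[x(t)\, x(t+1)^2]$ coincide. The sketch below (an illustrative assumption, not a model from the paper) contrasts a time reversible Gaussian AR(1) with an irreversible AR(1) driven by skewed innovations:

```python
import numpy as np

rng = np.random.default_rng(7)
T, phi = 500_000, 0.8

def ar1(innov):
    x = np.zeros(len(innov))
    for t in range(1, len(innov)):
        x[t] = phi * x[t - 1] + innov[t]
    return x[1000:]   # drop the transient from the initial condition

gauss = ar1(rng.standard_normal(T))
skew = ar1(rng.exponential(1.0, T) - 1.0)   # mean-zero but skewed innovations

def irreversibility(x):
    # E[x_t^2 x_{t+1}] - E[x_t x_{t+1}^2]: zero for a time reversible process.
    return float(np.mean(x[:-1] ** 2 * x[1:]) - np.mean(x[:-1] * x[1:] ** 2))

print(f"Gaussian AR(1): {irreversibility(gauss):.4f}")  # ~ 0: reversible
print(f"skewed AR(1):   {irreversibility(skew):.4f}")   # != 0: irreversible
```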

Employing conventional econometrics in empirical studies, MPT requires that the real world distribution for $x(t)$, for example, the asset return, is sufficiently similar to those for both $x(t+k)$ and $x(t-k)$; that is, the ergodic transformation $L^*$ is reversible. The reversibility assumption is systemic in MPT, appearing in the use of long estimation periods to determine important variables such as the "equity risk premium." There is a persistent belief that increasing the length or sampling frequency of a financial time series will improve the precision of a statistical estimate, for example, Dimson et al. [41]. Similarly, focus on the tradeoff between "risk and return" requires the use of unimodal stationary densities for transformed financial variables such as the rate of return. The impact of initial and boundary conditions on financial decision making is generally ignored. The inconsistency of reversible processes with key empirical facts, such as the asymmetric tendency for downdrafts in prices to be more severe than upswings, is ignored in favor of adhering to "reversible" theoretical models that can be derived from first principles associated with constrained optimization techniques, for example, Constantinides [42].
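The belief that longer samples always sharpen estimates presumes ergodicity. In the following minimal sketch (illustrative processes, not actual market data), the dispersion across replications of the time-average estimate shrinks with sample length for an ergodic return sequence but not for a nonergodic random walk in levels:

```python
import numpy as np

rng = np.random.default_rng(3)
reps = 2_000

for T in (250, 1_000, 4_000):
    stat_means, rw_means = [], []
    for _ in range(reps):
        e = rng.standard_normal(T)
        stat_means.append(e.mean())          # ergodic i.i.d. "returns"
        rw_means.append(e.cumsum().mean())   # nonergodic random-walk "prices"
    print(f"T={T:5d}  sd of mean (ergodic): {np.std(stat_means):.3f}  "
          f"(random walk): {np.std(rw_means):.2f}")
# The first column shrinks like 1/sqrt(T); the second grows with T.
```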

4. A Phenomenological Interpretation of Classical Ergodicity

In physics, phenomenology lies at the intersection of theory and experiment. Theoretical relationships between empirical observations are modeled without deriving the theory directly from first principles, for example, Newton’s laws of motion. Predictions based on these theoretical relationships are obtained and compared to further experimental data designed to test the predictions. In this fashion, new theories that can be derived from first principles are motivated. Confronted with nonexperimental data for important financial variables, such as common stock prices, interest rates, and the like, financial economics has developed some theoretical models that aim to fit the “stylized facts” of those variables. In contrast, the MPT is initially derived directly from the “first principles” of constrained expected utility maximizing behavior by individuals and firms. Given the difficulties in economics of testing model predictions with “new” experimental data, physics and econophysics have the potential to provide a rich variety of mathematical techniques that can be adapted to determining mathematical relationships among financial variables that explain the “stylized facts” of observed nonexperimental data.12

The evolution of financial economics from the deterministic models of neoclassical economics to more modern stochastic models has been incremental and disjointed. The preference for linear models of static equilibrium relationships has restricted the application of theoretical frameworks that capture more complex nonlinear dynamics, for example, chaos theory and truncated Lévy processes. Yet, important financial variables have relatively innocuous sample paths compared to some types of variables encountered in physics. There is an impressive range of mathematical and statistical models that, seemingly, could be applied to almost any physical or financial situation. If the process can be verbalized, then a model can be specified. This raises the following questions: are there transformations, ergodic or otherwise, that capture the basic "stylized facts" of observed financial data? Is the random instability in the observed sample paths identified in, say, stock price time series consistent with the ex ante stochastic bifurcation of an ergodic process, for example, Chiarella et al. [43]? In the bifurcation case, the associated ex ante stationary densities are bimodal and irreversible, a situation where the mean calculated from past values of a single, nonexperimental ex post realization of the process is not necessarily informative about the mean for future values.

Boltzmann was concerned with demonstrating that the Maxwell distribution emerged in the limit as $t \to \infty$ for systems with large numbers of particles. The limiting process for $t$ requires that the system run long enough that the initial conditions do not impact the stationary distribution. At the time, two fundamental criticisms were aimed at this general approach: reversibility and recurrence. In the context of financial time series, reversibility relates to the use of past values of the process to forecast future values. Recurrence relates to the properties of the long run average, which involves the ability of, and the length of time required for, an ergodic process to return to its stationary state. For Boltzmann, both of these criticisms have roots in the difficulty of reconciling the second law of thermodynamics with the ergodicity hypothesis. Using Sturm-Liouville methods, it can be shown that classical ergodicity requires the transition density of the process to be decomposable into the sum of a stationary density and a mean zero transient term that captures the impact of the initial condition of the system on the individual sample paths; irreversibility relates to properties of the stationary density and nonrecurrence to the behavior of the transient term.

Because the particle movements in a kinetic gas model are contained within an enclosed system, for example, a vertical glass tube, classical Sturm-Liouville (S-L) methods can be applied to obtain solutions for the transition densities. These classical results for the distributional implications of imposing regular reflecting boundaries on diffusion processes are representative of the modern phenomenological approach to random systems theory which “studies qualitative changes of the densites [sic] of invariant measures of the Markov semigroup generated by random dynamical systems induced by stochastic differential equations” [44, p. 27].13 Because the initial condition of the system is explicitly recognized, ergodicity in these models takes a different form than that associated with the unit shift transformation of unimodal stationary densities typically adopted in financial economics, in general, and MPT, in particular. The ergodic transition densities are derived as solutions to the forward differential equation associated with one-dimensional diffusions. The transition densities contain a transient term that is dependent on the initial condition of the system and boundaries imposed on the state space. Path dependence, that is, irreversibility, can be introduced by employing multimodal stationary densities.

The distributional implications of boundary restrictions, derived by modeling the random variable as a diffusion process subject to reflecting barriers, have been studied for many years, for example, Feller [45]. The diffusion process framework is useful because it imposes a functional structure that is sufficient for known partial differential equation (PDE) solution procedures to be used to derive the relevant transition probability densities. Wong [46] demonstrated that with appropriate specification of parameters in the PDE, the transition densities for popular stationary distributions such as the exponential, uniform, and normal distributions can be derived using classical S-L methods. The S-L framework provides sufficient generality to resolve certain empirical difficulties arising from key stylized facts observed in the nonexperimental time series from financial economics. More generally, the framework suggests a method of allowing MPT to encompass the nonlinear dynamics of diffusion processes. In other words, within the more formal mathematical framework of classical statistical mechanics, it is possible to reformulate the classical ergodicity hypothesis to permit a useful stochastic generalization of theories in financial economics such as MPT.

The use of the diffusion model to represent the nonlinear dynamics of stochastic processes is found in a wide range of subjects. Physical restrictions, such as the rate of observed genetic mutation in biology or the character of heat diffusion in engineering or physics, often determine the specific formalization of the diffusion model. Because physical interactions can be complex, mathematical results for diffusion models are pitched at a level of generality sufficient to cover such cases.14 Such generality is usually not required in financial economics. In this vein, it is possible to exploit mathematical properties of bounded state spaces and one-dimensional diffusions to overcome certain analytical problems that can confront continuous time Markov solutions. The key construct in the S-L method is the ergodic transition probability density function $U = U[x, t \mid x_0]$, which is associated with the random (financial) variable $x$ at time $t$ that follows a regular, time homogeneous diffusion process started at the initial value $x_0$. While it is possible to allow the state space to be an infinite open interval $I_o = (a, b)$ with $-\infty \le a < b \le \infty$, a finite closed interval $I_c = [a, b]$ with $-\infty < a < b < \infty$ or the specific interval $I_s = [0, b]$ are applicable to financial variables.15 Assuming that $U$ is twice continuously differentiable in $x$ and once in $t$ and vanishes outside the relevant interval, then $U$ obeys the forward equation (e.g., [47, p. 102–4]):

$$\frac{\partial^{2}}{\partial x^{2}}\bigl\{B[x]\,U\bigr\}-\frac{\partial}{\partial x}\bigl\{A[x]\,U\bigr\}=\frac{\partial U}{\partial t}\tag{1}$$

where $B[x]$ $\bigl(=\tfrac{1}{2}\sigma^{2}[x]>0\bigr)$ is one half the infinitesimal variance and $A[x]$ the infinitesimal drift of the process. $B[x]$ is assumed to be twice and $A[x]$ once continuously differentiable in $x$. Being time homogeneous, this formulation permits state, but not time, variation in the drift and variance parameters.

If the diffusion process is subject to upper and lower reflecting boundaries that are regular and fixed ($-\infty < a < b < \infty$), the classical "Sturm-Liouville problem" involves solving (1) subject to the separated boundary conditions:16

$$\frac{\partial}{\partial x}\bigl\{B[x]\,U[x,t]\bigr\}\Big|_{x=a}-A[a]\,U[a,t]=0\tag{2}$$

$$\frac{\partial}{\partial x}\bigl\{B[x]\,U[x,t]\bigr\}\Big|_{x=b}-A[b]\,U[b,t]=0\tag{3}$$

And the initial condition is as follows:

$$U[x,0]=f[x_{0}]\tag{4}$$

where $f[x_0]$ is the continuous density function associated with $x_0$, where $x_0 \in [a, b]$. When the initial starting value, $x_0$, is known with certainty, the initial condition becomes the Dirac delta function, $U[x,0] = \delta[x - x_0]$, and the resulting solution for $U$ is referred to as the "principal solution." Within the framework of the S-L method, a stochastic process has the property of classical ergodicity when the transition density satisfies the following:17

$$\lim_{t\to\infty}U[x,t\mid x_{0}]=\int_{a}^{b}f[x_{0}]\,U[x,t\mid x_{0}]\,dx_{0}=\Psi[x]\tag{5}$$

Important special cases occur for the principal solution ($f[x_0] = \delta[x - x_0]$) and when $f[x_0]$ is from a specific class such as the Pearson distributions. To be ergodic, the time invariant stationary density $\Psi[x]$ is not permitted to "decompose" the sample space with a finite number of indecomposable subdensities, each of which is time invariant. Such irreversible processes are not ergodic, even though each of the subdensities could be restricted to obey the ergodic theorem. To achieve ergodicity, a multimodal stationary density can be used instead of decomposing the sample space using subdensities with different means. In turn, multimodal irreversible ergodic processes have the property that the mean calculated from past values of the process is not necessarily informative enough about the modes of the ex ante densities to provide accurate predictions.
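The content of (1)-(5) can be checked numerically. The sketch below integrates the forward equation by explicit finite differences on a bounded state space with zero-flux (reflecting) boundaries, using an illustrative mean-reverting drift $A[x] = -x$ and constant $B[x] = 1/2$ (these coefficient choices are assumptions for the example, not part of the S-L theory itself); starting from a Dirac-type initial condition, the computed $U[x, t \mid x_0]$ converges to the stationary density $\Psi[x] \propto e^{-x^2}$:

```python
import numpy as np

# Solve U_t = (B U)_xx - (A U)_x on [a, b] with reflecting (zero flux)
# boundaries, for illustrative coefficients A[x] = -x and B[x] = 0.5.
a, b, nx = -3.0, 3.0, 301
x = np.linspace(a, b, nx)
dx = x[1] - x[0]
A = -x
B = np.full(nx, 0.5)
dt = 0.2 * dx * dx / B.max()     # conservative explicit time step

U = np.zeros(nx)                 # Dirac-type initial condition at x0 = 2
U[np.argmin(np.abs(x - 2.0))] = 1.0 / dx

def flux(U):
    # Probability flux J = d(BU)/dx - A U at the nx - 1 cell interfaces.
    BU = B * U
    return (BU[1:] - BU[:-1]) / dx - 0.5 * (A[1:] * U[1:] + A[:-1] * U[:-1])

for _ in range(20_000):
    J = flux(U)
    dU = np.zeros(nx)
    dU[1:-1] = (J[1:] - J[:-1]) / dx
    dU[0] = J[0] / dx            # zero flux through the lower boundary
    dU[-1] = -J[-1] / dx         # zero flux through the upper boundary
    U = U + dt * dU

Psi = np.exp(-x * x)             # stationary density: (1/B) e^{int A/B} ~ e^{-x^2}
Psi /= Psi.sum() * dx
print(f"mass = {U.sum() * dx:.4f}, max |U - Psi| = {np.abs(U - Psi).max():.4f}")
```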

In order to more accurately capture the ex ante properties of financial time series, there are some potentially restrictive features in the classical S-L framework that can be identified. For example, time homogeneity of the process eliminates the need to explicitly consider the location of the initial time $t_0$.18 Time homogeneity is a property of $U$ and, as such, is consistent with "ahistorical" MPT. In the subclass of transition densities $\{U\}$ denoted as $\{U^*\}$, a time homogeneous and reversible stationary distribution governs the dynamics of $x(t)$. Significantly, while $\{U^*\}$ is time homogeneous, there are some $U \in \{U\}$ consistent with irreversible processes. A relevant issue for econophysicists is to determine which concept—time homogeneity or reversibility—is inconsistent with economic processes that capture collapsing pricing conventions in asset markets; liquidity traps in money markets; and collapse induced structural shifts in stock markets. In the S-L framework, the initial state of the system, $x_0$, is known and the ergodic transition density provides information about how a given point $x_0$ shifts $t$ units along a trajectory.19 In contrast, applications in financial econometrics employ the strictly stationary $\Psi[x]$, where the location of $t_0$ is irrelevant, while $U$ incorporates $x_0$ as an initial condition associated with the solution of a partial differential equation.

5. Density Decomposition Results20

In general, solving the forward equation (1) for $U$ subject to (2), (3), and some admissible form of (4) is difficult, for example, Feller [45] and Risken [48]. In such circumstances, it is expedient to restrict the problem specification to permit closed form solutions for the transition density to be obtained. Wong [46] provides an illustration of this approach. The PDE (1) is reduced to an ODE by only considering the strictly stationary distributions arising from the Pearson system. Restrictions on the associated $\Psi[x]$ are constructed by imposing the fundamental ODE condition for the unimodal Pearson system of distributions:

$$\frac{d\Psi[x]}{dx}=\frac{e_{1}x+e_{0}}{d_{2}x^{2}+d_{1}x+d_{0}}\,\Psi[x]\tag{6}$$

The transition probability density $U$ for the ergodic process can then be reconstructed by working back from a specific closed form for the stationary distribution using known results for the solution of specific forms of the forward equation. In this procedure, the $e_1$, $e_0$, $d_2$, $d_1$, and $d_0$ in the Pearson ODE (6) are used to specify the relevant parameters in (1). The $U$ for important stationary distributions that fall within the Pearson system, such as the normal, beta, central $t$, and exponential, can be derived by this method.

The solution procedure employed by Wong [46] depends crucially on restricting the PDE problem sufficiently to apply classical S-L techniques. Using S-L methods, various studies have generalized the set of solutions for $U$ to cases where the stationary distribution is not a member of the Pearson system or $U$ is otherwise unknown, for example, Linetsky [49]. In order to employ the separation of variables technique used in solving S-L problems, (1) has to be transformed into the canonical form of the forward equation. To do this, the following function associated with the invariant measure is introduced:

$$r[x]=B[x]\exp\left[-\int_{a}^{x}\frac{A[s]}{B[s]}\,ds\right]\tag{7}$$

Using this function, the forward equation can be rewritten in the form

$$\frac{1}{r[x]}\frac{\partial}{\partial x}\left[P[x]\frac{\partial U}{\partial x}\right]+Q[x]\,U=\frac{\partial U}{\partial t}\tag{8}$$

where

$$P[x]=B[x]\,r[x],\qquad Q[x]=\frac{d^{2}B}{dx^{2}}-\frac{dA}{dx}\tag{9}$$

Equation (8) is the canonical form of (1). The S-L problem now involves solving (8) subject to appropriate initial and boundary conditions.
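As an illustrative check of (7)-(9) (this special case is an assumption chosen for transparency, not an example from Wong [46]), take $A[x] = -x$ and $B[x] = 1$ on a finite interval $[a, b]$. Then

$$r[x]=\exp\left[\int_{a}^{x}s\,ds\right]=e^{(x^{2}-a^{2})/2},\qquad P[x]=B[x]\,r[x]=e^{(x^{2}-a^{2})/2},\qquad Q[x]=\frac{d^{2}B}{dx^{2}}-\frac{dA}{dx}=1$$

so the invariant density is $\Psi[x] \propto r[x]^{-1} \propto e^{-x^{2}/2}$, a standard normal truncated to $[a, b]$, consistent with the decomposition derived in Section 5.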

Because the methods for solving the S-L problem are ODE-based, some method of eliminating the time derivative in (1) is required. Exploiting the assumption of time homogeneity, the eigenfunction expansion approach applies separation of variables, permitting (8) to be specified as

$$U[x,t]=e^{-\lambda t}\,\varphi[x]\tag{10}$$

where $\varphi[x]$ is only required to satisfy the easier-to-solve ODE:

$$\frac{1}{r[x]}\frac{d}{dx}\left[P[x]\frac{d\varphi}{dx}\right]+\bigl(Q[x]+\lambda\bigr)\varphi[x]=0\tag{11}$$

Transforming the boundary conditions involves substitution of (10) into (2) and (3) and solving to get

$$\frac{d}{dx}\bigl\{B[x]\varphi[x]\bigr\}\Big|_{x=a}-A[a]\varphi[a]=0,\qquad\frac{d}{dx}\bigl\{B[x]\varphi[x]\bigr\}\Big|_{x=b}-A[b]\varphi[b]=0\tag{12}$$

Significant analytical advantages are obtained by making the S-L problem "regular," which involves assuming that $[a, b]$ is a closed interval with $r[x]$, $P[x]$, and $Q[x]$ being real valued, $P[x]$ having a continuous derivative on $[a, b]$, and $r[x] > 0$, $P[x] > 0$ at every point in $[a, b]$. "Singular" S-L problems arise where these conditions are violated due to, say, an infinite state space or a vanishing coefficient in the interval $[a, b]$. The separated boundary conditions (2) and (3) ensure the problem is self-adjoint [50, p. 91].

The classical S-L problem of solving (8) subject to the initial and boundary conditions admits a solution only for certain critical values of $\lambda$, the eigenvalues. Further, since (1) is linear in $U$, the general solution is given by a linear combination of the separated solutions (10) in the form of an eigenfunction expansion. Details of these results can be found in Hille [51, ch. 8], Birkhoff and Rota [52, ch. 10], and Karlin and Taylor [53]. When the S-L problem is self-adjoint and regular, the solutions for the transition probability density can be summarized in the following proposition (see the appendix for proof).

Proposition 1 (ergodic transition density decomposition). The regular, self-adjoint Sturm-Liouville problem has an infinite sequence of real eigenvalues, with

$$0=\lambda_{0}<\lambda_{1}<\lambda_{2}<\cdots<\lambda_{n}<\cdots,\qquad\lim_{n\to\infty}\lambda_{n}=\infty\tag{13}$$

To each eigenvalue $\lambda_n$ there corresponds a unique eigenfunction $\varphi_n[x]$. Normalization of the eigenfunctions produces

$$\psi_{n}[x]=\varphi_{n}[x]\left[\int_{a}^{b}\varphi_{n}[x]^{2}\,r[x]\,dx\right]^{-1/2}\tag{14}$$

The eigenfunctions form a complete orthonormal system in $L^{2}[a,b]$ with weight function $r[x]$. The unique solution in $L^{2}[a,b]$ to (1), subject to the boundary conditions (2)-(3) and initial condition (4), is, in general form,

$$U[x,t]=\sum_{n=0}^{\infty}c_{n}\,\psi_{n}[x]\,e^{-\lambda_{n}t},\qquad\text{where } c_{n}=\int_{a}^{b}U[x,0]\,\psi_{n}[x]\,r[x]\,dx\tag{15}$$

Given this, the transition probability density function for $x$ at time $t$ can be reexpressed as the sum of a stationary limiting equilibrium distribution associated with the $\lambda_0 = 0$ eigenvalue, which is independent of time and the initial condition, and a power series of transient terms, associated with the remaining eigenvalues, that are boundary and initial condition dependent:

$$U[x,t\mid x_{0}]=\Psi[x]+T[x,t\mid x_{0}]\tag{16}$$

where

$$\Psi[x]=\frac{r[x]^{-1}}{\int_{a}^{b}r[x]^{-1}\,dx}\tag{17}$$

Using the specifications of $\lambda_n$, $c_n$, and $\psi_n$, the properties of $T[x,t \mid x_0]$ are defined as

$$T[x,t\mid x_{0}]=\sum_{n=1}^{\infty}c_{n}\,e^{-\lambda_{n}t}\,\psi_{n}[x]\tag{18}$$

with

$$\int_{a}^{b}T[x,t\mid x_{0}]\,dx=0,\qquad\lim_{t\to\infty}T[x,t\mid x_{0}]=0\tag{19}$$

This proposition provides the general solution to the regular, self-adjoint S-L problem of deriving $U$ when the process is subject to regular reflecting barriers. Taking the limit as $t \to \infty$ in (15), it follows from (16)-(19) that the transition density of the stochastic process satisfies the classical ergodic property. Considerable effort has been given to determining the convergence behavior of different processes. The distributional impact of the initial conditions and boundary restrictions enters through $T[x,t \mid x_0]$. From the restrictions on $T[x,t \mid x_0]$ in (19), the total mass of the transient term is zero, so the mean ergodic theorem still applies. The transient only acts to redistribute the mass of the stationary distribution, thereby causing a change in shape which can impact the ex ante calculation of the expected value. The specific degree and type of alteration depends on the relevant assumptions made about the parameters and the initial functional forms. Significantly, stochastic generalization of static and deterministic MPT almost always ignores the impact of transients by only employing the limiting stationary distribution component.
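A fully explicit instance of Proposition 1 (a standard textbook case, included here as an illustration rather than taken from the cited references) is driftless Brownian motion ($A[x] = 0$, $B[x] = \sigma^2/2$) reflected at $a = 0$ and $b = 1$. Here $r[x]$ is constant, the eigenvalues are $\lambda_n = \sigma^2 n^2 \pi^2 / 2$ with cosine eigenfunctions, and the principal solution decomposes exactly as in (16)-(19):

$$U[x,t\mid x_{0}]=\underbrace{1}_{\Psi[x]}+\underbrace{2\sum_{n=1}^{\infty}e^{-\sigma^{2}n^{2}\pi^{2}t/2}\cos(n\pi x)\cos(n\pi x_{0})}_{T[x,t\mid x_{0}]}$$

Each transient term integrates to zero over $[0,1]$ and decays exponentially, so $U \to \Psi = 1$ (the uniform density) regardless of $x_0$, while for small $t$ the transients concentrate the density near the initial condition.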

The theoretical advantage obtained by imposing regular reflecting barriers on the diffusion state space for the forward equation is that an ergodic decomposition of the transition density is assured. The relevance of bounding the state space and imposing regular reflecting boundaries can be illustrated by considering the well known solution (e.g., [54, p. 209]) for $U$ involving a constant coefficient standard normal variate ($A[x] = 0$, $B[x] = \tfrac12$) over the unbounded state space $I_o = (-\infty, \infty)$. In this case the forward equation (1) reduces to $\tfrac{1}{2}\,\partial^2 U/\partial x^2 = \partial U/\partial t$. By evaluating these derivatives, it can be verified that the principal solution for $U$ is

$$U[x,t\mid x_{0}]=\frac{1}{\sqrt{2\pi t}}\exp\left[-\frac{(x-x_{0})^{2}}{2t}\right]$$

and as $t \to \infty$ or $x \to \pm\infty$ then $U \to 0$, and the stochastic process is nonergodic because it does not possess a nontrivial stationary distribution. The mean ergodic theorem fails: if the process runs long enough, then $U$ will evolve to where there is no discernible probability associated with starting from $x_0$ and reaching the neighborhood of a given point $x$. The absence of a stationary distribution raises a number of questions, for example, whether the process has unit roots. Imposing regular reflecting boundaries is a certain method of obtaining a stationary distribution and a discrete spectrum [55, p. 13]. Alternative methods, such as specifying the process to admit natural boundaries where the parameters of the diffusion are zero within the state space, can give rise to a continuous spectrum and raise significant analytical complexities. At least since Feller [45], the search for useful solutions, including those for singular diffusion problems, has produced a number of specific cases of interest. However, without the analytical certainty of the classical S-L framework, analysis proceeds on a case by case basis.

One possible method of obtaining a stationary distribution without imposing both upper and lower boundaries is to impose only a lower (upper) reflecting barrier and construct the stochastic process such that positive (negative) infinity is nonattracting, for example, Linetsky [49] and Aït-Sahalia [56]. This can be achieved by using a mean-reverting drift term. In contrast, Cox and Miller [54, p. 223–5] use the Brownian motion, constant coefficient forward equation with $\mu < 0$ ($A[x] = \mu$, $B[x] = \tfrac12\sigma^2$), subject to the lower reflecting barrier at $a = 0$ given in (2), to solve for both the $U$ and the stationary density. The principal solution is solved using the "method of images" to obtain

$$U[x,t\mid x_{0}]=\frac{1}{\sigma\sqrt{2\pi t}}\left\{\exp\left[-\frac{(x-x_{0}-\mu t)^{2}}{2\sigma^{2}t}\right]+\exp\left[\frac{2\mu x}{\sigma^{2}}-\frac{(x+x_{0}+\mu t)^{2}}{2\sigma^{2}t}\right]\right\}-\frac{2\mu}{\sigma^{2}}\exp\left[\frac{2\mu x}{\sigma^{2}}\right]N\left[-\frac{x+x_{0}+\mu t}{\sigma\sqrt{t}}\right]$$

where $N[\cdot]$ is the cumulative standard normal distribution function. Observing that, because $\mu < 0$, the first two terms vanish as $t \to \infty$, the stationary density for $a = 0$ has the exponential form

$$\Psi[x]=\frac{2|\mu|}{\sigma^{2}}\exp\left[-\frac{2|\mu|x}{\sigma^{2}}\right],\qquad x\ge 0$$

Though the upper boundary $b$ does not enter the solution, combined with the location of the lower boundary at $a = 0$ it does implicitly impose the restriction $0 \le x < \infty$. From the Proposition, $r[x]$ can be determined as $r[x] = \tfrac12\sigma^2 \exp[2|\mu| x / \sigma^2]$, confirming that $\Psi[x] \propto r[x]^{-1}$.
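The reflected process and its exponential stationary density can be verified by Monte Carlo (a minimal Euler scheme sketch with illustrative parameter values; reflection is imposed by taking absolute values, a standard device for a barrier at zero, and the discretization introduces a small bias):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma = -0.5, 1.0            # illustrative: negative drift, unit volatility
dt, n_steps, n_paths = 0.02, 5_000, 50_000

x = np.full(n_paths, 1.0)        # all paths start at x0 = 1
for _ in range(n_steps):
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    x = np.abs(x)                # reflecting barrier at a = 0

lam = 2.0 * abs(mu) / sigma**2   # stationary density: lam * exp(-lam * x)
print(f"sample mean {x.mean():.3f} vs theory {1 / lam:.3f}")
print(f"sample P(x < 1) {(x < 1).mean():.3f} vs theory {1 - np.exp(-lam):.3f}")
```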

Following Linetsky [49], Veerstraeten [57], and others, the analytical procedure used to determine $U$ involves specifying the parameters of the forward equation and the boundary conditions and then solving for $\Psi[x]$ and $T[x,t \mid x_0]$. Wong [46] uses a different approach, initially selecting a stationary distribution and then solving for $U$ using the restrictions of the Pearson system to specify the forward equation. In this approach, the functional form of the desired stationary distribution determines the appropriate boundary conditions. While application of this approach has been limited to the restricted class of distributions associated with the Pearson system, it is expedient when a known stationary distribution, such as the standard normal distribution, is of interest. More precisely, let

$$\Psi[x]=\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{x^{2}}{2}\right]$$

In this case, the boundaries of the state space are nonattracting and not regular. Solving the Pearson equation (6) gives $d\Psi[x]/dx = -x\,\Psi[x]$ and a forward equation of the OU form:

$$\frac{\partial^{2}U}{\partial x^{2}}+\frac{\partial}{\partial x}\bigl(x\,U\bigr)=\frac{\partial U}{\partial t}$$

Following Wong [46, p. 268], Mehler's formula can be used to express the solution for $U$ as

$$U[x,t\mid x_{0}]=\frac{1}{\sqrt{2\pi\left(1-e^{-2t}\right)}}\exp\left[-\frac{\left(x-x_{0}e^{-t}\right)^{2}}{2\left(1-e^{-2t}\right)}\right]$$

Given this, as $t \to 0$ then $U \to \delta[x - x_0]$, the Dirac delta function, and as $t \to \infty$ then $U$ achieves the stationary standard normal distribution.
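The Mehler-formula solution can likewise be verified against a direct simulation of the underlying OU dynamics $dX = -X\,dt + \sqrt{2}\,dW$, which corresponds to the forward equation above (the step size and horizon are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)
x0, t, n, dt = 2.0, 0.75, 200_000, 0.001

x = np.full(n, x0)
for _ in range(int(t / dt)):
    x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(n)

# Mehler formula: X(t) | x0 ~ N(x0 * e^{-t}, 1 - e^{-2t})
print(f"mean {x.mean():.3f} vs theory {x0 * np.exp(-t):.3f}")
print(f"var  {x.var():.3f} vs theory {1 - np.exp(-2 * t):.3f}")
```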

6. The Quartic Exponential Distribution

The roots of bifurcation theory can be found in the early solutions to certain deterministic ordinary differential equations. Consider the deterministic dynamics described by the pitchfork bifurcation ODE:

$$\frac{dx}{dt}=-x^{3}+\rho_{1}x+\rho_{0}$$

where $\rho_0$ and $\rho_1$ are the "normal" and "splitting" control variables, respectively (e.g., [58, 59]). While $\rho_0$ has significant information in a stochastic context, this is not usually the case in the deterministic problem, so $\rho_0 = 0$ is assumed. Given this, for $\rho_1 \le 0$, there is one real equilibrium ($dx/dt = 0$) solution to this ODE at $x = 0$ where "all initial conditions converge to the same final point exponentially fast with time" [60, p. 260]. For $\rho_1 > 0$, the solution bifurcates into three equilibrium solutions, $x = 0$ and $x = \pm\sqrt{\rho_1}$, one unstable ($x = 0$) and two stable. In this case, the state space is split into two physically distinct regions (at $x = 0$) with the degree of splitting controlled by the size of $\rho_1$. Even for initial conditions that are "close," the equilibrium achieved will depend on the sign of the initial condition. Stochastic bifurcation theory extends this model to incorporate Markovian randomness. In this theory, "invariant measures are the random analogues of deterministic fixed points" [22, p. 469]. Significantly, ergodicity now requires that the component densities that bifurcate out of the stationary density at the bifurcation point be invariant measures, for example, Crauel et al. [44, sec. 3]. As such, the ergodic bifurcating process is irreversible in the sense that past sample paths (prior to the bifurcation) cannot reliably be used to generate statistics for future values of the variable (after the bifurcation).

It is well known that the introduction of randomness to the pitchfork ODE changes the properties of the equilibrium solution, for example, [22, sec. 9.2]. It is no longer necessary that the state space for the principal solution be determined by the location of the initial condition relative to the bifurcation point. The possibility for randomness to cause some paths to cross over the bifurcation point depends on the volatility of the process, $\sigma$, which measures the nonlinear signal to white noise ratio. Of the different approaches to introducing randomness (e.g., multiplicative noise), the simplest approach to converting from a deterministic to a stochastic context is to add a Wiener process to the ODE. Augmenting the diffusion equation to allow for $\sigma$ to control the relative impact of nonlinear drift versus random noise produces the "pitchfork bifurcation with additive noise" [22, p. 475], which in symmetric form is

$$dX(t)=\bigl(\rho_{1}X(t)-X(t)^{3}\bigr)\,dt+\sigma\,dW(t)$$

Applications in financial economics, for example, Aït-Sahalia [56], refer to this diffusion process as the double well process. While consistent with the common use of diffusion equations in financial economics, the dynamics of the pitchfork process captured by the transient term $T[x,t \mid x_0]$ have been "forgotten" [22, p. 473].
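A minimal Euler-Maruyama sketch of the double well process (parameter values are illustrative assumptions) shows the behavior that matters for the ex post/ex ante distinction: a single path lingers for long epochs near one mode and occasionally crosses to the other, so the sample mean of one long realized path need not be close to either ex ante mode:

```python
import numpy as np

rng = np.random.default_rng(13)
rho1, sigma = 1.0, 0.5           # illustrative: bifurcated regime (rho1 > 0)
dt, T = 0.01, 500_000

x = np.empty(T)
x[0] = np.sqrt(rho1)             # start in the positive well
for t in range(1, T):
    drift = rho1 * x[t - 1] - x[t - 1] ** 3
    x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print(f"fraction of time in positive well: {(x > 0).mean():.3f}")
print(f"ex post sample mean: {x.mean():.3f}; ex ante modes: +/-{np.sqrt(rho1):.1f}")
```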

Models in MPT are married to the transition probability densities associated with unimodal stationary distributions, especially the class of Gaussian-related distributions. Yet, it is well known that more flexibility in the shape of the stationary distribution can be achieved using a higher order exponential density, for example, Fisher [61], Cobb et al. [62], and Crauel and Flandoli [60]. Increasing the degree of the polynomial in the exponential comes at the expense of introducing additional parameters, resulting in a substantial increase in analytical complexity that typically defies a closed form solution for the transition densities. However, following Elliott [63], it has been recognized that the solution of the associated regular S-L problem will still have a discrete spectrum, even if the specific form of the eigenfunctions and eigenvalues in $T[x,t \mid x_0]$ is not precisely determined [64, sec. 6.7]. Inferences about transient stochastic behavior can be obtained by examining the solution of the deterministic nonlinear dynamics. In this process, attention initially focuses on the properties of the higher order exponential distributions.

To this end, assume that the stationary distribution is a fourth degree or "general quartic" exponential:

$$\Psi[x]=K\exp\bigl[-\Phi[x]\bigr],\qquad\Phi[x]=\beta_{4}x^{4}+\beta_{3}x^{3}+\beta_{2}x^{2}+\beta_{1}x$$

where $K$ is a constant determined such that the density integrates to one; and $\beta_4 > 0$.21 Following Fisher [61], the class of distributions associated with the general quartic exponential admits both unimodal and bimodal densities and nests the standard normal as a limiting case where $\beta_4 = \beta_3 = \beta_1 = 0$ and $\beta_2 = \tfrac12$, with $K = 1/\sqrt{2\pi}$. The stationary distribution of the bifurcating double well process is a special case of the symmetric quartic exponential distribution:

$$\Psi[y]=K_{S}\exp\bigl[-\beta_{2}(y-\mu)^{2}-\beta_{4}(y-\mu)^{4}\bigr],\qquad\beta_{4}>0$$

where $\mu$ is the population mean and the symmetry restriction requires the coefficients of the odd powers of $(y - \mu)$ to vanish. Such multimodal stationary densities have received scant attention in financial economics, in general, and in MPT, in particular. To see why the symmetry condition is needed, consider the change of origin $x = y - \beta_3/(4\beta_4)$, which removes the cubic term from the general quartic exponential [65, p. 480]:

$$\Phi[y]=\beta_{4}y^{4}+\hat{\beta}_{2}y^{2}+\hat{\beta}_{1}y+\text{constant}$$

The substitution of $y$ for $x$ indicates the change of origin, which produces the following relations between coefficients for the general and specific cases:

$$\hat{\beta}_{2}=\beta_{2}-\frac{3\beta_{3}^{2}}{8\beta_{4}},\qquad\hat{\beta}_{1}=\beta_{1}-\frac{\beta_{2}\beta_{3}}{2\beta_{4}}+\frac{\beta_{3}^{3}}{8\beta_{4}^{2}}$$

The symmetry restriction can only be satisfied if $\hat{\beta}_1 = 0$; for the double well process, which is symmetric about $\mu = 0$, this requires $\beta_1 = \beta_3 = 0$. Given the symmetry restriction, the double well process further requires $\beta_2 < 0$. Solving for the modes of $\Psi[y]$ gives

$$y^{*}=\mu\pm\sqrt{\frac{-\hat{\beta}_{2}}{2\beta_{4}}}$$

which reduces to $x^* = \pm\sqrt{\rho_1}$ for the double well process, as in Aït-Sahalia [56, Figure 6B, p. 1385].
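The mode condition can be checked numerically for the double well parameterization (a sketch with illustrative values $\rho_1 = 1$, $\sigma^2 = 2$, so that $\beta_4 = 1/(2\sigma^2) = 0.25$ and $\beta_2 = -\rho_1/\sigma^2 = -0.5$; the small linear term added at the end plays the role of the asymmetry parameter $a_i$ in Figure 1):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 6001)
dx = x[1] - x[0]

def quartic_density(b4, b3, b2, b1):
    """General quartic exponential Psi[x] = K exp(-Phi[x]), normalized numerically."""
    f = np.exp(-(b4 * x**4 + b3 * x**3 + b2 * x**2 + b1 * x))
    return f / (f.sum() * dx)

# Symmetric double well case: modes should be at +/- sqrt(rho1) = +/- 1.
psi = quartic_density(0.25, 0.0, -0.5, 0.0)
is_mode = np.r_[False, (psi[1:-1] > psi[:-2]) & (psi[1:-1] > psi[2:]), False]
print("modes:", np.round(x[is_mode], 2))

# A nonzero linear term (a_i != 0) tilts mass toward one subdensity.
tilted = quartic_density(0.25, 0.0, -0.5, -0.3)
print(f"P(x > 0): symmetric {psi[x > 0].sum() * dx:.3f}, "
      f"tilted {tilted[x > 0].sum() * dx:.3f}")
```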

As illustrated in Figure 1, the selection of values $a_i$ in the stationary density defines a family of general quartic exponential densities, where $a_i$ is the selected value of the normal control variable $\rho_0$ (equivalently, of the linear coefficient $\beta_1 = -2\rho_0/\sigma^2$) for that specific density.22 The coefficient restrictions on the parameters $\beta_2$ and $\beta_4$ dictate that these values cannot be determined arbitrarily. For example, given that $\beta_4$ is set at 0.25, then, for $\rho_1 = 1$, it follows that $\beta_2 = -0.5$. "Slicing across" the surface in Figure 1 at $a_i = 0$ reveals a stationary distribution that is equal to the double well density. Continuing to slice across as $a_i$ increases in size, the bimodal density becomes progressively more asymmetrically concentrated in positive values. Though the location of the modes does not change, the amount of density between the modes and around the negative mode decreases. Similarly, as $a_i$ decreases in size the bimodal density becomes more asymmetrically concentrated in negative values. While the stationary density is bimodal over an intermediate range of $a_i$, for $|a_i|$ large enough the density becomes so asymmetric that only a unimodal density appears. For the general quartic, asymmetry arises as the amount of the density surrounding each mode (the subdensity) changes with $a_i$. In this case, the individual stationary subdensities have a symmetric shape. To introduce asymmetry in the subdensities, the reflecting boundaries at $a$ and $b$ that bound the state space for the regular S-L problem can be used to introduce positive asymmetry in the lower subdensity and negative asymmetry in the upper subdensity.

Following Chiarella et al. [43], the stochastic bifurcation process has a number of features which are consistent with the ex ante behavior of a securities market driven by a combination of chartists and fundamentalists. Placed in the context of the classical S-L framework, because the stationary distributions are bimodal and depend on forward parameters, such as $\rho_1$, $\beta_2$, $\beta_4$, and $a_i$ in Figure 1, that are not known on the decision date, the risk-return tradeoff models employed in MPT are uninformative. What use is the forecast provided by $E[x(T)]$ when it is known that there are other modes for $x(T)$ values that are more likely to occur? A mean estimate that is close to the bifurcation point would be unstable. The associated difficulty of calculating an expected return forecast, or another statistical estimate such as volatility, from past data can be compounded by the presence of transients that originate from boundaries and initial conditions. For example, while the presence of a recent structural break, such as the collapse in global stock prices from late 2008 to early 2009, can be accounted for by appropriate selection of $x_0$, the fundamental dependence of investment decisions on the relationship between $x_0$ and future (not past) performance is not captured by the reversible ergodic processes employed in MPT that ignore transients arising from initial conditions. Mathematical tools such as classical S-L methods are able to demonstrate this fundamental dependence by exploiting properties of ex ante bifurcating ergodic processes to generate theoretical ex post sample paths that provide a better approximation to the sample paths of observed financial data.

7. Conclusion

The classical ergodicity hypothesis provides a point of demarcation in the prehistories of MPT and econophysics. To deal with the problem of making statistical inferences from "nonexperimental" data, theories in MPT typically employ stationary densities that are time reversible, are unimodal, and allow no short or long term impact from initial and boundary conditions. The possibility of bimodal processes or of ex ante impact from initial and boundary conditions is not recognized or, it seems, intended. Significantly, as illustrated in the need to select an $a_i$ in Figure 1 in order to specify the "real world" ex ante stationary density, a semantic connection can be established between the subjective uncertainty about encountering a future bifurcation point and, say, the possible collapse of an asset price bubble impacting future market valuations. Examining the quartic exponential stationary distribution associated with a bifurcating ergodic process, it is apparent that this distribution nests the Gaussian distribution as a special case. In this sense, results from classical statistical mechanics can be employed to produce a stochastic generalization of the unimodal, time reversible processes employed in modern portfolio theory.

Appendix

Preliminaries on Solving the Forward Equation

Due to their widespread application across a wide range of subjects, textbook presentations of the Sturm-Liouville problem possess subtle differences that require some clarification to be applicable to the formulation used in this paper. In particular, to derive the canonical form (8) of the Fokker-Planck equation (1), write (1) with drift $A[x]$ and diffusion coefficient $B[x]$ as
$$\frac{\partial U}{\partial t} = \frac{\partial^2 \{B[x]U\}}{\partial x^2} - \frac{\partial \{A[x]U\}}{\partial x}$$
and observe that evaluating the derivatives in (1) gives
$$\frac{\partial U}{\partial t} = B\frac{\partial^2 U}{\partial x^2} + (2B' - A)\frac{\partial U}{\partial x} + (B'' - A')U.$$
This can be rewritten as
$$\frac{\partial U}{\partial t} = \frac{1}{r}\left\{\frac{\partial}{\partial x}\left[P[x]\frac{\partial U}{\partial x}\right] - Q[x]U\right\},$$
where
$$P = Br, \qquad \frac{1}{r}\frac{dP}{dx} = 2B' - A, \qquad Q = -r(B'' - A').$$
It follows that
$$\frac{1}{r}\frac{dr}{dx} = \frac{B' - A}{B}.$$
This provides the solution for the key function $r[x]$:
$$r[x] = B[x]\exp\left\{-\int_a^x \frac{A[s]}{B[s]}\,ds\right\}.$$
This function is used to construct the scale and speed densities commonly found in presentations of solutions to the forward equation, for example, Karlin and Taylor [53] and Linetsky [49].
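
As a check on this algebra, the identities above can be verified symbolically. The following sketch, written here for verification with generic $A[x]$ and $B[x]$ (it is not part of the original appendix), confirms that the canonical form reproduces the expanded derivatives of (1):

import sympy as sp

x = sp.symbols('x')
A = sp.Function('A')(x)
B = sp.Function('B')(x)
U = sp.Function('U')(x)

# Key function r[x] = B*exp(-integral of A/B) and canonical coefficients
r = B * sp.exp(-sp.Integral(A / B, x))
P = B * r
Q = -r * (sp.diff(B, x, 2) - sp.diff(A, x))

# Canonical form (1/r)*{(P U')' - Q U} versus the expanded forward equation
canonical = (sp.diff(P * sp.diff(U, x), x) - Q * U) / r
expanded = (B * sp.diff(U, x, 2)
            + (2 * sp.diff(B, x) - A) * sp.diff(U, x)
            + (sp.diff(B, x, 2) - sp.diff(A, x)) * U)

print(sp.simplify(canonical - expanded))  # prints 0, confirming the identity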

Another specification of the forward equation that is of importance is found in Wong [46, eq. 6-7]:
$$\frac{d}{dx}\left[B[x]\rho[x]\frac{d\phi}{dx}\right] + \lambda\rho[x]\phi = 0.$$
This formulation occurs after separating variables, say with $U[x,t] = \varphi[x]e^{-\lambda t}$. Substituting this result into (1) gives
$$\frac{d^2\{B\varphi\}}{dx^2} - \frac{d\{A\varphi\}}{dx} + \lambda\varphi = 0.$$
Using the separation of variables substitution and redefining $\varphi[x] = \rho[x]\phi[x]$ gives
$$\frac{d}{dx}\left[\frac{d\{B\rho\phi\}}{dx} - A\rho\phi\right] + \lambda\rho\phi = 0.$$
Evaluating the derivative inside the bracket and using the condition
$$\frac{d\{B\rho\}}{dx} - A\rho = 0$$
to specify admissible $\rho$ give
$$\frac{d}{dx}\left[B\rho\frac{d\phi}{dx}\right] + \lambda\rho\phi = 0,$$
which is equation (6) in Wong [46]. The condition used to define $\rho$ is then used to identify the specification of $B$ and $A$ from the Pearson system. The associated boundary condition follows from observing that the $\rho$ will be the ergodic density and making appropriate substitutions into the boundary condition:
$$\frac{\partial\{B[x]\rho[x]\phi[x]\}}{\partial x}\,e^{-\lambda t} - A[x]\rho[x]\phi[x]\,e^{-\lambda t} = 0.$$
Evaluating the derivative and taking values at the lower (or upper) boundary give
$$B\rho\frac{d\phi}{dx} + \phi\left[\frac{d\{B\rho\}}{dx} - A\rho\right] = 0.$$
Observing that the expression in the last bracket is the original condition with the ergodic density serving as $U$ gives the boundary condition stated in Wong [46, eq. 7], namely, $B\rho\,(d\phi/dx) = 0$ at $x = a$ and $x = b$.
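
For concreteness, a standard Pearson-system example (not drawn from the paper, and with natural rather than reflecting boundaries) is the Ornstein-Uhlenbeck case $A[x] = -x$ and $B[x] = 1$ on $(-\infty, \infty)$. The admissibility condition gives $\rho'/\rho = -x$, so that
$$\rho[x] = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2},$$
and the equation
$$\frac{d}{dx}\left[\rho\,\frac{d\phi_n}{dx}\right] + \lambda_n\rho\,\phi_n = 0$$
is solved by the Hermite polynomials $\phi_n[x] = He_n[x]$ with eigenvalues $\lambda_n = n$, so the corresponding terms in the expansion of $U$ decay at the rates $e^{-nt}$.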

Proof of Proposition 1. (a) $\varphi_n[x]$ has exactly $n$ zeroes in $(a, b)$: Hille [51, p. 398] and Birkhoff and Rota [52, p. 320, Theorem 5] show that the eigenfunctions of the Sturm-Liouville system (8) with the boundary conditions (4) have exactly $n$ zeroes in the interval $(a, b)$. More precisely, since it is assumed that the eigenvalues are ordered with $\lambda_0 < \lambda_1 < \lambda_2 < \cdots$, the eigenfunction $\varphi_n$ corresponding to the $n$th eigenvalue has exactly $n$ zeroes in $(a, b)$.
(b) For $n \neq 0$, $\int_a^b \varphi_n[x]\,dx = 0$.
Proof. For $\lambda_n \neq 0$ the following applies:
$$\lambda_n \int_a^b \varphi_n\,dx = -\int_a^b \frac{\partial}{\partial x}\left[\frac{\partial\{B\varphi_n\}}{\partial x} - A\varphi_n\right]dx = -\left[\frac{\partial\{B\varphi_n\}}{\partial x} - A\varphi_n\right]_a^b = 0,$$
since each $\varphi_n$ satisfies the boundary conditions.
(c) For some $n$, $\lambda_n = 0$.
Proof. From (13),
$$\int_a^b U[x,t]\,dx = \sum_n c_n e^{-\lambda_n t}\int_a^b \varphi_n[x]\,dx.$$
Since $U[x,t]$ is a probability density, $\int_a^b U\,dx = 1$. But from part (b) the right-hand side will = 0 (which is a contradiction) unless $\lambda_n = 0$ for some $n$.
(d) Consider $\lambda_0 = 0$.
Proof. From part (a), $\varphi_0[x]$ has no zeroes in $(a, b)$. Therefore, either $\int_a^b \varphi_0\,dx > 0$ or $\int_a^b \varphi_0\,dx < 0$.
It follows from part (b) that $\lambda_0 = 0$.
(e) Consider $\lambda_n > 0$ for $n \neq 0$. This follows from part (d) and the strict inequality conditions provided in part (a).
(f) Obtaining the solution for $\Psi[x]$ in the Proposition: from part (d) it follows that
$$\frac{d}{dx}\left[\frac{d\{B\Psi\}}{dx} - A\Psi\right] = 0.$$
Integrating this equation from $a$ to $x$ and using the boundary condition give
$$\frac{d\{B\Psi\}}{dx} - A\Psi = 0.$$
This equation can be solved for $\Psi$ to get
$$\Psi[x] = \frac{K}{B[x]}\exp\left\{\int_a^x \frac{A[s]}{B[s]}\,ds\right\} = \frac{K}{r[x]}.$$
Therefore, $\Psi$ is proportional to the reciprocal of the key function $r[x]$ defined above. Using this definition and observing that the integral of $\Psi$ over the state space is one, it follows that
$$K = \left[\int_a^b \frac{1}{B[x]}\exp\left\{\int_a^x \frac{A[s]}{B[s]}\,ds\right\}dx\right]^{-1}.$$
(g) The Proof of the Proposition now follows from parts (f), (e), and (b).

Disclosure

The authors are Professor of Finance and Associate Professor of Finance at Simon Fraser University.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Helpful comments from Chris Veld, Yulia Veld, Emmanuel Havens, Franck Jovanovic, Marcel Boumans, and Christoph Schinckus are gratefully acknowledged.

Endnotes

  1. For example, Black and Scholes [4] discuss the consistency of the option pricing formula with the capital asset pricing model. More generally, financial economics employs primarily Gaussian-based finite parameter models that agree with the “tradeoff between risk and return.”
  2. In rational mechanics, once the initial positions of the particles of interest, for example, molecules, are known, the mechanical model fully determines the future evolution of the system. This scientific and philosophical approach is often referred to as Laplacian determinism.
  3. Boltzmann and Max Planck were vociferous opponents of energetics. The debate over energetics was part of a larger intellectual debate concerning determinism and reversibility. Jevons [66, p. 738-9] reflects the entrenched determinist position of the marginalists: “We may safely accept as a satisfactory scientific hypothesis the doctrine so grandly put forth by Laplace, who asserted that a perfect knowledge of the universe, as it existed at any given moment, would give a perfect knowledge of what was to happen thenceforth and for ever after. Scientific inference is impossible, unless we may regard the present as the outcome of what is past, and the cause of what is to come. To the view of perfect intelligence nothing is uncertain.” What Boltzmann, Planck, and others had observed in statistical physics was that, even though the behavior of one or two molecules can be completely determined, it is not possible to generalize these mechanics to describe the macroscopic motion of molecules in large, complex systems, for example, Brush [67, esp. ch. II].
  4. As such, Boltzmann was part of the larger “Second Scientific Revolution, associated with the theories of Darwin, Maxwell, Planck, Einstein, Heisenberg and Schrödinger, (which) substituted a world of process and chance whose ultimate philosophical meaning still remains obscure” [67, p. 79]. This revolution superseded the “First Scientific Revolution, dominated by the physical astronomy of Copernicus, Kepler, Galileo, and Newton,… in which all changes are cyclic and all motions are in principle determined by causal laws.” The irreversibility and indeterminism of the Second Scientific Revolution replace the reversibility and determinism of the First.
  5. There are many interesting sources on these points which provide citations for the historical papers that are being discussed. Cercignani [68, p. 146–50] discusses the role of Maxwell and Boltzmann in the development of the ergodic hypothesis. Maxwell [25] is identified as “perhaps the strongest statement in favour of the ergodic hypothesis.” Brush [69] has a detailed account of the development of the ergodic hypothesis. Gallavotti [70] traces the etymology of “ergodic” to the “ergode” in an 1884 paper by Boltzmann. More precisely, an ergode is shorthand for “ergomonode” which is a “monode with given energy” where a “monode” can be either a single stationary distribution taken as an ensemble or a collection of such stationary distributions with some defined parameterization. The specific use is clear from the context. Boltzmann proved that an ergode is an equilibrium ensemble and, as such, provides a mechanical model consistent with the second law of thermodynamics. It is generally recognized that the modern usage of “the ergodic hypothesis” originates with Ehrenfest [71].
  6. Kapetanios and Shin [37, p. 620] capture the essence of this quandary: “Interest in the interface of nonstationarity and nonlinearity has been increasing in the econometric literature. The motivation for this development may be traced to the perceived possibility that nonlinear ergodic processes can be misinterpreted as unit root nonstationary processes. Furthermore, the inability of standard unit root tests to reject the null hypothesis of unit root for a large number of macroeconomic variables, which are supposed to be stationary according to economic theory, is another reason behind the increased interest.”
  7. The second law of thermodynamics is the universal law of increasing entropy, a measure of the randomness of molecular motion and the loss of energy available to do work. First recognized in the early 19th century, the second law maintains that the entropy of an isolated system, not in equilibrium, will necessarily tend to increase over time. Entropy approaches a maximum value at thermal equilibrium. A number of attempts have been made to apply the entropy of information to problems in economics, with mixed success. In addition to the second law, physics now recognizes the zeroth law of thermodynamics that “any system approaches an equilibrium state” [72, p. 54]. The implications of the second law for theories in economics were initially explored by Georgescu-Roegen [73].
  8. In this process, the ergodicity hypothesis is required to permit the one observed sample path to be used to estimate the parameters for the ex ante distribution of the ensemble paths. In turn, these parameters are used to predict future values of the economic variable.
  9. Heterodox critiques are associated with views considered to originate from within economics. Such critiques are seen to be made by “economists,” for example, Post-Keynesian economists, institutional economists, radical political economists, and so on. Because such critiques take motivation from the theories of mainstream economics, these critiques are distinct from econophysics. Following Schinckus [12, p. 3818], “Econophysicists have then allies within economics with whom they should become acquainted.”
  10.  Dhrymes [31, p. 1–29] discusses the algebra of the lag operator.
  11. Critiques of mainstream economics that are rooted in the insights of The General Theory recognize the distinction between fundamental uncertainty and objective probability. As a consequence, the definition of ergodic theory in heterodox criticisms of mainstream economics lacks formal precision, for example, the short term dependence of ergodic processes on initial conditions is not usually recognized. Ergodic theory is implicitly seen as another piece of the mathematical formalism inspired by Hilbert and Bourbaki and captured in the Arrow-Debreu general equilibrium model of mainstream economics.
  12. In this context, though not in all contexts, econophysics provides a “macroscopic” approach. In turn, ergodicity is an assumption that permits the time average from a single observed sample path to (phenomenologically) model the ensemble of sample paths. Given this, econophysics does contain a substantively richer toolkit that encompasses both ergodic and nonergodic processes. Many works in econophysics implicitly assume ergodicity and develop models based on that assumption.
  13. The distinction between invariant and ergodic measures is fundamental. Recognizing that a number of distinct definitions of ergodicity are available, following Medio [74, p. 70], the Birkhoff-Khinchin (BK) ergodic theorem for invariant measures can be used to demonstrate that ergodic measures are a class of invariant measures. More precisely, the BK theorem permits the limit of the time average to depend on initial conditions. In effect, the invariant measure is permitted to decompose into invariant “submeasures.” The physical interpretation of this restriction is that sample paths starting from a particular initial condition may only be able to access a part of the sample space, no matter how long the process is allowed to run. For an ergodic process, sample paths starting from any admissible initial condition will be able to “fill the sample space”; that is, if the process is allowed to run long enough, the time average will not depend on the initial condition. Medio [74, p. 73] provides a useful example of an invariant measure that is not ergodic; a minimal numerical sketch of the same decomposition follows these endnotes.
  14. The phenomenological approach is not without difficulties. For example, the restriction to Markov processes ignores the possibility of invariant measures that are not Markov. In addition, an important analytical construct in bifurcation theory, the Lyapunov exponent, can encounter difficulties with certain invariant Markov measures. Primary concern with the properties of the stationary distribution is not well suited to analysis of the dynamic paths around a bifurcation point. And so it goes.
  15. A diffusion process is “regular” if, starting from any point in the interior of the state space, any other interior point can be reached with positive probability (Karlin and Taylor [53, p. 158]). This condition is distinct from other definitions of regular that will be introduced: “regular boundary conditions” and “regular S-L problem.”
  16. The classification of boundary conditions is typically an important issue in the study of solutions to the forward equation. Important types of boundaries include regular, exit, entrance, and natural. Also the following are important in boundary classification: the properties of attainable and unattainable, whether the boundary is attracting or non-attracting, and whether the boundary is reflecting or absorbing. In the present context, regular, attainable, reflecting boundaries are usually being considered, with a few specific extensions to other types of boundaries. In general, the specification of boundary conditions is essential in determining whether a given PDE is self-adjoint.
  17. Heuristically, if the ergodic process runs long enough, then the stationary distribution can be used to estimate the constant mean value. This definition of ergodic is appropriate for the one-dimensional diffusion cases considered in this paper. Other combinations of transformation, space, and function will produce different requirements. Various theoretical results are available for the case at hand. For example, the existence of an invariant Markov measure and exponential decay of the autocorrelation function are both assured.
  18. For ease of notation it is assumed that the lower boundary is located at zero. In practice, solving (1) combined with (2)–(4) requires the boundaries $a$ and $b$ to be specified. While $a$ and $b$ have ready interpretations in physical applications, for example, as the endpoints of an insulated bar in a heat flow problem, determining these values in economic applications can be more challenging. Some situations, such as the determination of the distribution of an exchange rate subject to control bands (e.g., [75]), are relatively straightforward. Other situations, such as profit distributions with arbitrage boundaries or output distributions subject to production possibility frontiers, may require the basic S-L framework to be adapted to the specifics of the modeling situation.
  19. The mathematics at this point are heuristic. It would be more appropriate to observe that the ergodic density is the special case of the transition density in which the dependence on time and on the initial condition has been eliminated, that is, a strictly stationary distribution. This would require discussion of how to specify the initial and boundary conditions to ensure that this is the solution to the forward equation.
  20. A more detailed mathematical treatment can be found in de Jong [76].
  21. In what follows, except where otherwise stated, it is assumed that $\sigma = 1$. Hence, the condition that $K$ be a constant such that the density integrates to one incorporates this assumption. Allowing $\sigma \neq 1$ will scale either the value of $K$ or the $\beta$’s from that stated.
  22. A number of simplifications were used to produce the 3D image in Figure 1: the density has been centered about zero and the scale has been normalized to one. Changing these values will impact the specific size of the parameter values for a given $\gamma$ but will not change the general appearance of the density plots.
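
A minimal numerical sketch of an invariant but nonergodic measure, constructed here to complement the Medio [74] example discussed in endnote 13 rather than reproduce it: a two-state Markov chain with the identity transition matrix leaves every distribution invariant, yet the time average of an observable depends on the initial state, so a sample path cannot “fill the sample space.”

import numpy as np

P = np.eye(2)                  # identity transition matrix: each state is absorbing
f = np.array([0.0, 1.0])       # observable evaluated along the sample path
rng = np.random.default_rng(0)

for start in (0, 1):
    state, path = start, []
    for _ in range(1000):
        path.append(f[state])
        state = rng.choice(2, p=P[state])   # with P = I, the state never moves
    print(f"start = {start}: time average = {np.mean(path):.1f}")

The ensemble average under the uniform invariant measure is 0.5, but the time averages are 0.0 and 1.0 depending on the initial condition, which is precisely the decomposition into invariant “submeasures” described in endnote 13.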