Research Article | Open Access
The Statistical Origins of Quantum Mechanics
It is shown that Schrödinger's equation may be derived from three postulates. The first is a kind of statistical metamorphosis of classical mechanics, a set of two relations which are obtained from the canonical equations of particle mechanics by replacing all observables by statistical averages. The second is a local conservation law of probability with a probability current which takes the form of a gradient. The third is a principle of maximal disorder as realized by the requirement of minimal Fisher information. The rule for calculating expectation values is obtained from a fourth postulate, the requirement of energy conservation in the mean. The fact that all these basic relations of quantum theory may be derived from premises which are statistical in character is interpreted as a strong argument in favor of the statistical interpretation of quantum mechanics. The structures of quantum theory and classical statistical theories are compared, and some fundamental differences are identified.
The interpretation of quantum theory influences neither its theoretical predictions nor the experimentally observed data. Nevertheless, it is extremely important because it determines the direction of future research. One of the many controversial interpretations of quantum mechanics is the “statistical interpretation” or “ensemble interpretation”. It presents a point of view which is in opposition to most variants of the Copenhagen interpretation but has been advocated by a large number of eminent physicists, including Einstein. It claims that quantum mechanics is incomplete with regard to the description of single events and that all its dynamical predictions are of a purely statistical nature. This means that, in general, a large number of measurements on identically prepared systems have to be performed in order to verify a (dynamical) prediction of quantum theory.
The origin of the time-dependent Schrödinger equation is of course an essential aspect for the interpretation of quantum mechanics. Recently, a number of derivations of Schrödinger's equation have been reported which use as a starting point not a particle Hamiltonian but a statistical ensemble. The basic assumptions underlying these works include special postulates about the structure of momentum fluctuations, the principle of minimum Fisher information [4, 5], a linear time-evolution law for a complex state variable, or the assumption of a classical stochastic force of unspecified form. The work reported in this paper belongs to this class of theories, which do not “quantize” a single particle but a statistical ensemble. An attempt is undertaken to improve this approach by starting from assumptions which may be considered simpler and more fundamental from a physical point of view. It is shown that Schrödinger's equation may be derived from a small number of very general and simple assumptions—which are all essentially of a statistical nature. In a first step, an infinite class of statistical theories is derived, containing a classical statistical theory as well as quantum mechanics. In a second step, quantum mechanics is singled out as the “most reasonable statistical theory” by imposing as an additional requirement the principle of maximal disorder, as realized by the principle of minimal Fisher information.
We begin in Section 2 with a general discussion of the role of probability in physical theories. In Section 3, the central “statistical condition” (first assumption) of this work is formulated. The set of corresponding statistical theories is derived in Section 5. In Sections 4 and 7, structural differences between quantum theory and classical statistical theories are investigated. The quantum mechanical rule for calculating expectation values is derived from the requirement of conservation of energy in the mean in Section 6. In Sections 7–9, the principle of maximal disorder is implemented, and Fisher's information measure is derived in Section 10. Section 11 contains a detailed discussion of all assumptions and results and may be consulted on a first reading to obtain an overview of this work; questions of interpretation of the quantum theoretical formalism are also discussed in this section. In Section 12, open questions for future research are listed.
2. On Probability
With regard to the role of probability, three types of physical theories may be distinguished.

(1) Theories of type 1 are deterministic. Single events are completely described by their known initial values and deterministic laws (differential equations). Classical mechanics is obviously such a theory. We include this type of theory, where probability does not play any role, in our classification scheme because it provides a basis for the following two types of theories.

(2) Theories of type 2 have deterministic laws, but the initial values are unknown. Therefore, no predictions on individual events are possible, despite the fact that deterministic laws describing individual events are valid. In order to verify a prediction of a type 2 theory, a large number of identically prepared experiments must be performed. We have no problem understanding or interpreting such a theory because we know it is just our lack of knowledge which causes the uncertainty. An example is given by classical statistical mechanics. Of course, in order to construct a type 2 theory, one needs a type 1 theory providing the deterministic laws.

(3) It is possible to go one step further in this direction, increasing the relative importance of probability even more. We may not only work with unknown initial values but with unknown laws as well. In type 3 theories, there are no deterministic laws describing individual events; only probabilities can be assigned. There is no need to mention initial values for particle trajectories any more (initial values for probabilistic dynamical variables are still required).
Type 2 theories could also be referred to as classical (statistical) theories. Type 3 theories are most interesting because we recognize here characteristic features of quantum mechanics. In what follows, we will try to make this last statement more definite.
Comparing type 2 and type 3 theories, one finds two remarkable aspects. The first is a subtle kind of “inconsistency” of type 2 theories: if we are unable to know the initial values of our observables (at a particular time), why should we be able to know these values during the following time interval (given that we know them at a fixed time)? In other words, in type 2 theories, the two factors determining the final outcome of a theoretical prediction—namely, initial values and laws—are not placed on the same (realistic) footing. This hybrid situation has been recognized before; the term “cryptodeterministic” has been used by Moyal to characterize classical statistical mechanics (note that the same term is also used in a very different sense to characterize hidden variable theories). Type 3 theories do not show this kind of inconsistency.
The second observation is simply that type 2 and type 3 theories have a number of important properties in common. Both are unable to predict the outcome of single events with certainty; only probabilities are provided in both cases. In both theories, the quantities which may be actually observed—whose time dependence may be formulated in terms of a differential equation—are averaged observables, obtained with the help of a large number of single experiments. These common features lead us to suspect that a general structure might exist which comprises both types of theories.
Such a general structure should consist of a set of (statistical) conditions, which have to be obeyed by any statistical theory. In theories of this kind, observables in the conventional sense do not exist. Their role is taken over by random variables. Likewise, conventional physical laws—differential equations for time-dependent observables—do not exist. They are replaced by differential equations for statistical averages. These averages of the (former) observables become the new observables, with time again playing the role of the independent variable. In order to construct such general conditions, one needs again (as with type 2 theories) a deterministic (type 1) theory as a “parent” theory. Given such a type 1 theory, we realize that a simple recipe to construct a reasonable set of statistical conditions is the following: replace all observables (of the type 1 theory) by averaged values using appropriate probability densities. In this way, the dynamics of the problem is completely transferred from the observables to the probability distributions. This program will be carried through in the next sections, using a model system of classical mechanics as parent theory.
The above construction principle describes an unusual situation, because we are used to considering determinism (concerning single events) as a basic precondition for doing science. Nevertheless, the physical context referred to is quite simple and clear, namely, that nature forbids for some reason a deterministic description of single events but allows it at least “on the average”. It is certainly true that we are not accustomed to such a kind of thinking. But to believe or not to believe in such mechanisms of nature is basically a matter of intellectual habit. Also, the fact that quantum mechanics is incomplete does not necessarily imply that a complete theory exists; the opposite possibility, that no deterministic description of nature will ever be found, should also be taken into account.
3. Statistical Conditions
We study a simple system, a particle in an externally controlled time-independent potential $V(x)$, whose motion is restricted to a single spatial dimension (coordinate $x$). We use the canonical formalism of classical mechanics to describe this system. Thus, the fundamental observables of our theory are $x(t)$ and $p(t)$, and they obey the differential equations
$$\frac{\mathrm{d}x(t)}{\mathrm{d}t}=\frac{p(t)}{m},\qquad \frac{\mathrm{d}p(t)}{\mathrm{d}t}=F(x(t)),\tag{1}$$
where $F(x)=-\mathrm{d}V(x)/\mathrm{d}x$. We now create statistical conditions, associated with the type 1 theory (1), according to the method outlined in the last section. We replace the observables $x(t)$, $p(t)$ and the force field $F(x(t))$ by averages $\overline{x}$, $\overline{p}$, and $\overline{F(x)}$ and obtain
$$\frac{\mathrm{d}}{\mathrm{d}t}\overline{x}=\frac{\overline{p}}{m},\tag{2}$$
$$\frac{\mathrm{d}}{\mathrm{d}t}\overline{p}=\overline{F(x)}.\tag{3}$$
The averages in (2), (3) are mean values of the random variables $x$ or $p$; there is no danger of confusion here, because the symbols $x(t)$ and $p(t)$ will not be used any more. In (1), only terms occur which depend either on the coordinate or the momentum, but not on both. Thus, to form the averages, we need two probability densities $\rho(x,t)$ and $w(p,t)$, depending on the spatial coordinate $x$ and the momentum $p$ separately. Then, the averages occurring in (2), (3) are given by
$$\overline{x}=\int_{-\infty}^{\infty}\mathrm{d}x\,\rho(x,t)\,x,\tag{4}$$
$$\overline{p}=\int_{-\infty}^{\infty}\mathrm{d}p\,w(p,t)\,p,\tag{5}$$
$$\overline{F(x)}=\int_{-\infty}^{\infty}\mathrm{d}x\,\rho(x,t)\,F(x).\tag{6}$$
Note that $F(x)$ has to be replaced by $\overline{F(x)}$ and not by $F(\overline{x})$. The probability densities $\rho$ and $w$ are positive semidefinite and normalized to unity. They are time dependent because they describe the dynamic behavior of this theory.
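As a purely illustrative numerical sketch (not part of the derivation; the harmonic force law, all parameter values, and the sample size are arbitrary choices), one can check that the ensemble averages of a type 2 theory obey the statistical conditions (2), (3):

```python
import numpy as np

# Ensemble of classical particles in a harmonic potential V(x) = k*x**2/2,
# propagated with symplectic Euler; individual trajectories are "unknown",
# only the ensemble averages are tracked.
rng = np.random.default_rng(0)
m, k, dt, steps = 1.0, 1.0, 1e-3, 2000
x = rng.normal(1.0, 0.2, 10000)    # initial positions
p = rng.normal(0.0, 0.2, 10000)    # initial momenta

mean_x, mean_p = [x.mean()], [p.mean()]
for _ in range(steps):
    p = p - dt * k * x             # dp/dt = F(x) = -dV/dx
    x = x + dt * p / m             # dx/dt = p/m
    mean_x.append(x.mean())
    mean_p.append(p.mean())

# central-difference check of d<x>/dt = <p>/m at an interior time step
i = steps // 2
dmean_x = (mean_x[i + 1] - mean_x[i - 1]) / (2 * dt)
print(abs(dmean_x - mean_p[i] / m))  # small (time-discretization error only)
```

Because the force chosen here is linear, $\overline{F(x)}=F(\overline{x})$ happens to hold exactly; for anharmonic forces the averaged law (3) still holds, but $\overline{F(x)}$ must be computed as in (6), not as $F(\overline{x})$.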
Relations (2), (3), with the definitions (4)–(6), are, to the best of our knowledge, new. They will be referred to as “statistical conditions”. There is obviously a formal similarity of (2), (3) with Ehrenfest's relations of quantum mechanics, but the differential equations to be fulfilled by $\rho$ and $w$ are still unknown and may well differ from those of quantum theory. Relations (2)–(6) represent general conditions for theories which are deterministic only with respect to statistical averages of observables and not with respect to single events. They cannot be associated with either the classical or the quantum mechanical domain of physics. Many concrete statistical theories (differential equations for the probability distributions) obeying these conditions may exist (see the next section).
These conditions should be supplemented by a local conservation law of probability. Assuming that the probability current is proportional to the gradient of a function $S(x,t)$ (this is the simplest possible choice and the one realized in Hamilton-Jacobi theory; see also Section 11), this conservation law is for our one-dimensional situation given by the continuity equation
$$\frac{\partial\rho(x,t)}{\partial t}+\frac{\partial}{\partial x}\,\frac{\rho(x,t)}{m}\,\frac{\partial S(x,t)}{\partial x}=0.\tag{7}$$
The derivative of $S(x,t)$ defines a field with the dimension of a momentum,
$$\tilde{p}(x,t)=\frac{\partial S(x,t)}{\partial x}.\tag{8}$$
Equation (8) defines a unique number $\tilde{p}(x,t)$ for each value $x$ of the random variable position. In the next section, we will discuss the following question: are we allowed to identify the possible values of the random variable $p$ occurring in (5) with the values of the momentum field $\tilde{p}(x,t)$?
4. On Random Variables
Introducing standard notions of probability theory, the fundamental sample space of the present theory is given by all possible results of position measurements; that is, it may be identified with the set of real numbers $\mathbb{R}$. This set may also be identified with the possible values of a random variable “position measurement” (whose name should, strictly speaking, differ from $x$, but we will neglect such differences here). The basic probability measure which assigns a probability to each event (subspace of $\mathbb{R}$) is given by $\rho(x,t)\,\mathrm{d}x$. According to standard probability theory, the field $\tilde{p}(x,t)$ defined by (8) is itself a random variable. We may consider it as a function of the random variable $x$ (denoting “position measurement”) or as a random variable defined independently on the fundamental event space $\mathbb{R}$; it makes no difference. Its probability density is uniquely determined by $\rho(x,t)$ and the function $\tilde{p}(x,t)$. In order to avoid confusion of names, it may be useful to denote the derivative of $S(x,t)$ with respect to $x$ by $g(x,t)$ instead of $\tilde{p}(x,t)$. Thus, $\tilde{p}=g(x,t)$, and the notation indicates that a random variable $\tilde{p}$ defined by the function $g$ exists (the time variable will sometimes be omitted for brevity).
In order to study this important point further, we rewrite the standard result for the probability density of $\tilde{p}$ in a form more appropriate for physical considerations (a form apparently not easily found in textbooks on probability). For the simplest possible situation, a denumerable sample space with elements $x_{i}$, a probability measure $P$, and an invertible function $g$, the probability that an event $\tilde{p}=g(x_{i})$ occurs is obviously given by $P(x_{i})$. This result is the starting point to obtain $w(p,t)$, the probability density of a continuous random variable $\tilde{p}$, which is defined by a noninvertible function $g(x)$. It is given by
$$w(p)=\sum_{i}\rho(x_{i})\left|\frac{\mathrm{d}g(x_{i})}{\mathrm{d}x}\right|^{-1},\tag{9}$$
where the $x_{i}$ denote the solutions (the number of solutions depends on $p$) of the equation $p-g(x)=0$. Using a well-known formula for Dirac's delta function $\delta$, applied to the case where the argument of $\delta$ is an arbitrary function, (9) may be rewritten in the form
$$w(p,t)=\int_{-\infty}^{\infty}\mathrm{d}x\,\rho(x,t)\,\delta\!\left(p-\frac{\partial S(x,t)}{\partial x}\right),\tag{10}$$
where we came back to our original notation, writing down the $t$-dependencies of $\rho$ and $S$ and replacing $g$ by $\partial S/\partial x$.
The representation (10) reveals very clearly the hybrid nature of random variables defined as (nontrivial) functions on the event space. They are partly defined by a probabilistic quantity (namely, $\rho$) and partly by a deterministic relation (namely, $g$). The deterministic nature of the latter is expressed by the singular (delta-function) shape of the associated probability. Such densities occur in classical statistics; that is, in type 2 theories, (10) may obviously be obtained by performing an integration over $x$ of the classical phase space probability density $\rho(x,t)\,\delta(p-g(x,t))$. Considered from an operational point of view, the hybrid nature of random variables may be described as follows. Deterministic predictions for random variables $\tilde{p}$ are impossible, as are deterministic predictions for the original variables $x$. But once a number $x_{0}$ has been observed in an experiment, the value of $\tilde{p}$ is with certainty given by the defining function, $\tilde{p}=g(x_{0})$. If no such relation exists, this does not necessarily imply that $x$ and $\tilde{p}$ are completely independent. Many other more complicated (“nonlocal” or “probabilistic”) relations between such variables are conceivable.
We formulated general conditions comprising both type 2 and type 3 theories. Thus, as far as this general framework is concerned, we can certainly not dispense with the standard notion of random variables, which are basic ingredients of type 2 theories; such variables will certainly occur as special (type 2) cases in our formalism. But, of course, we are essentially interested in the characterization of type 3 theories and the form of (10) shows that the standard notion of random variable is not necessarily meaningful in a type 3 theory. Thus, we will allow for the possibility of random variables which are not defined by deterministic relations of the standard type, as functions on the sample space.
This situation leads to a number of questions. We may, for example, ask: can we completely dispense with the standard concept of random variables if we are dealing exclusively with a type 3 theory? The answer is certainly no; it seems impossible to formulate a physical theory without any deterministic relations. In fact, a deterministic relation, corresponding to a standard random variable, has already been anticipated in (6). If in a position measurement of a particle a number $x_{0}$ is observed, then the particle is—at the time of the measurement—with certainty under the influence of a force $F(x_{0})$. Thus, an allowed class of deterministic relations might contain “given” functions, describing externally controlled influences like forces $F(x)$ or potentials $V(x)$.
There may be other standard random variables. To decide on purely logical grounds which relations of a type 3 theory are deterministic and which are not is not an obvious matter. However, one would suspect that the deterministic relations should be of a universal nature; for example, they should hold both in type 2 and type 3 theories. Further, we may expect that all relations which are a logical consequence of the structure of space-time should belong to this class. Such a quantity is the kinetic energy. In fact, for the currently considered nonrelativistic range of physics, the functional form $p^{2}/2m$ of the kinetic energy can be derived from the structure of the Galilei group, both in the mathematical framework of classical mechanics and in that of quantum mechanics. We refer to the kinetic energy as a standard random variable insofar as it is a prescribed function of $p$ (but it is, because it is a function of $p$ rather than $x$, not a standard random variable with respect to the fundamental probability measure $\rho\,\mathrm{d}x$). Combining the standard random variables “kinetic energy” and “potential”, we obtain a standard random variable “energy”, which will be studied in more detail in Section 6.
Thus, in the present framework, particle momentum will, in general, not be considered as a standard random variable. This means that an element of determinism has been eliminated from the theoretical description. It seems that this elimination is one of the basic steps in the transition from type 2 to type 3 theories. The functional form of the probability density $w(p,t)$, and its relation to $\rho(x,t)$, are among the main objectives of the present study. According to the above discussion, a measurement of position no longer determines momentum at the time of the measurement. However, the set of all position measurements (represented formally by the probability density $\rho$) may still determine (in a manner still to be clarified) the set of all momentum measurements (the probability density $w$). Interestingly, De La Torre, using a completely different approach, arrived at a similar conclusion, namely, that the quantum mechanical “variables” position and momentum cannot be random variables in the conventional sense. For simplicity, we will continue to use the term random variable for $p$ and will add the attributes “standard” or “nonstandard” if required.
As a first step in our study of $w$, we will now investigate the integral equation (2) and will derive a relation for $w$ which will be used again in Section 6. In the course of the following calculations, the behavior of $\rho$ and $S$ at infinity will frequently be required. We know that $\rho$ is normalizable and vanishes at infinity. More specifically, we will assume that $\rho$ and $S$ obey the conditions (11), where the quantity multiplying $\rho$ is any one of the factors listed in (12). Roughly speaking, condition (11) means that $\rho$ vanishes faster than $1/x$ and that $S$ is nonsingular at infinity. Whenever in the following an integration by parts is performed, one of the conditions (11) will be used to eliminate the resulting boundary term. For brevity, we will not refer to (11) any more; it will be sufficiently clear in the context of the calculation which one of the factors in (12) is being referred to.
We look for differential equations for our fields $\rho$, $S$ which are compatible with (2)–(7). According to the above discussion, we are not allowed to identify (8) with the random variable $p$. Using (7), we replace the derivative with respect to $t$ in (2) by a derivative with respect to $x$ and perform an integration by parts. Then, (2) takes the form
$$\overline{p}=\int_{-\infty}^{\infty}\mathrm{d}x\,\rho(x,t)\,\frac{\partial S(x,t)}{\partial x}.\tag{13}$$
Equation (13) shows that the averaged value of the random variable $p$ is the expectation value of the field $\tilde{p}$. In the next section, we will insert this expression for $\overline{p}$ in the second statistical condition (3). More specific results for the probability density $w$ will be obtained later (in Section 10). As an intermediate step, we now use (13) and (5) to derive a relation for $w$, introducing thereby an important change of variables.
We replace the variables $\rho$, $S$ by new variables $\psi_{1}$, $\psi_{2}$ defined by
$$\psi_{1}=\sqrt{\rho}\,\cos\frac{S}{\hbar},\qquad\psi_{2}=\sqrt{\rho}\,\sin\frac{S}{\hbar}.\tag{14}$$
We may as well introduce the imaginary unit and define the complex field
$$\psi=\psi_{1}+\mathrm{i}\psi_{2}.\tag{15}$$
Then, the last transformation and its inverse may be written as
$$\psi=\sqrt{\rho}\,e^{\mathrm{i}S/\hbar},\qquad\rho=|\psi|^{2},\qquad S=\frac{\hbar}{2\mathrm{i}}\ln\frac{\psi}{\psi^{*}}.\tag{16}$$
We note that so far, no new condition or constraint has been introduced; choosing one of the sets of real variables $\rho$, $S$ or $\psi_{1}$, $\psi_{2}$, or the set of complex fields $\psi$, $\psi^{*}$, is just a matter of mathematical convenience. Using $\psi$, $\psi^{*}$, the integrand on the left-hand side of (13) takes the form
$$\rho\,\frac{\partial S}{\partial x}=\psi^{*}\,\frac{\hbar}{\mathrm{i}}\,\frac{\partial\psi}{\partial x}-\frac{\hbar}{2\mathrm{i}}\,\frac{\partial\rho}{\partial x}.\tag{17}$$
The derivative of $\rho$ may be omitted under the integral sign, and (13) takes the form
$$\overline{p}=\int_{-\infty}^{\infty}\mathrm{d}x\,\psi^{*}(x,t)\,\frac{\hbar}{\mathrm{i}}\,\frac{\partial}{\partial x}\,\psi(x,t).\tag{18}$$
We introduce the Fourier transform of $\psi$, defined by
$$\phi(p,t)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\mathrm{d}x\,e^{-\mathrm{i}px/\hbar}\,\psi(x,t),\tag{19}$$
$$\psi(x,t)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\mathrm{d}p\,e^{\mathrm{i}px/\hbar}\,\phi(p,t).\tag{20}$$
The constant $\hbar$, introduced in (14), has the dimension of an action, which means that $p$ has the dimension of a momentum. Performing the Fourier transform, one finds that the momentum probability density may be written as
$$w(p,t)=|\phi(p,t)|^{2}+\Delta w(p,t),\tag{21}$$
where the integral over $p$ of $p\,\Delta w(p,t)$ has to vanish. Using Parseval's formula and the fact that both $\rho$ and $|\phi|^{2}$ are normalized to unity, we find that the integral of $\Delta w(p,t)$ has to vanish too.
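For the quantum mechanical case, in which the correction term in the momentum density is absent, the two momentum relations can be checked numerically for a Gaussian wave packet. This is a sketch only: the grid sizes and packet parameters are arbitrary, and units with $\hbar=m=1$ are used:

```python
import numpy as np

hbar = 1.0
N, Lbox = 4096, 80.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]

# normalized Gaussian packet with mean momentum hbar*k0
sigma, k0 = 1.5, 2.0
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

# <p> from the configuration-space form: integral of psi* (hbar/i) dpsi/dx
dpsi = np.gradient(psi, dx)
p_mean_x = np.real(np.sum(np.conj(psi) * (hbar / 1j) * dpsi) * dx)

# <p> from the momentum density |phi(p)|^2 (discrete Fourier transform)
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi * hbar)
p = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi * hbar
w_density = np.abs(phi) ** 2
p_mean_w = np.sum(w_density * p) * (p[1] - p[0])

print(p_mean_x, p_mean_w)
```

Both estimates agree with the packet's mean momentum, up to discretization error of the grid and of the finite-difference derivative.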
Using the continuity equation (7) and the first statistical condition (2), we found two results (namely, (18) and (21)), which reduce, for a vanishing correction term in the momentum density, to characteristic relations of the quantum mechanical formalism. However, the function $S$, as well as the probability density $w$ we are finally interested in, is still unknown, because the validity of the deterministic relation (8) is not guaranteed in the present general formalism allowing for type 3 theories. In the next section, the implications of the second statistical condition will be studied without using (8). We will come back to the problem of the determination of $w$ in Section 7.
5. Statistical Theories
We study now the implications of the second statistical condition (3). Using the variables $\rho$, $S$, it takes the form
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_{-\infty}^{\infty}\mathrm{d}x\,\rho\,\frac{\partial S}{\partial x}=\int_{-\infty}^{\infty}\mathrm{d}x\,\rho\,F,\tag{22}$$
if $\overline{p}$ is replaced by the integral on the left-hand side of (13). Making again use of (7), we replace in (22) the derivative of $\rho$ with respect to $t$ by a derivative with respect to $x$. Then, after an integration by parts, the left-hand side of (22) takes the form (23). Performing two more integrations by parts (a second one in (23), substituting the term with the time derivative of $\rho$, and a third one on the right-hand side of (22)), condition (3) takes the final form
$$\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\partial\rho}{\partial x}\left[\frac{\partial S}{\partial t}+\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2}+V\right]=0.\tag{24}$$
Equation (24) can be considered as an integral equation for the real function $\bar{L}$ defined by
$$\bar{L}=\frac{\partial S}{\partial t}+\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2}+V.\tag{25}$$
Obviously, (24) admits an infinite number of solutions for $\bar{L}$, which are given by
$$\frac{\partial S}{\partial t}+\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2}+V+L=0.\tag{26}$$
The function $L$ in (26) has to make the integral (24) vanish but is otherwise completely arbitrary.
Equation (26), with a fixed choice of $L$, is the second differential equation for our variables $S$ and $\rho$ we were looking for and defines—together with the continuity equation (7)—a statistical theory. The dynamic behavior is completely determined by these differential equations for $S$ and $\rho$. On the other hand, the dynamic equation—in the sense of an equation describing the time dependence of observable quantities—is given by (2) and (3).
From the subset of functions $L$ which do not depend explicitly on $x$ and $t$, we list the following three possibilities for $L$ and the corresponding statistical theories. The simplest solution is
$$L=0.\tag{27}$$
The second (28) depends only on $\rho$. The third depends also on the derivative of $\rho$,
$$L=\kappa\,\frac{1}{\sqrt{\rho}}\,\frac{\partial^{2}\sqrt{\rho}}{\partial x^{2}},\tag{29}$$
with a free proportionality constant $\kappa$.
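The class of theories constructed above may be summarized compactly. The following display is a sketch in standard Madelung-type notation, with the quantum value of the coefficient written in the form anticipated by (31) below:

```latex
\begin{align}
  &\frac{\partial\rho}{\partial t}
    +\frac{\partial}{\partial x}\,\frac{\rho}{m}\,\frac{\partial S}{\partial x}=0
    &&\text{(continuity equation (7))},\\
  &\frac{\partial S}{\partial t}
    +\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2}+V+L=0
    &&\text{(generalized Hamilton--Jacobi equation (26))},\\
  &L=0\ \ \text{(classical, type 2)},\qquad
   L=-\frac{\hbar^{2}}{2m}\,\frac{1}{\sqrt{\rho}}\,
     \frac{\partial^{2}\sqrt{\rho}}{\partial x^{2}}\ \ \text{(quantum, type 3)}.
\end{align}
```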
We discuss first (27). The statistical theory defined by (27) consists of the continuity equation (7) and (see (25)) the Hamilton-Jacobi equation
$$\frac{\partial S}{\partial t}+\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2}+V=0.\tag{30}$$
The fact that one of these equations agrees with the Hamilton-Jacobi equation does not imply that this theory is a type 1 theory (making predictions about individual events). This is not the case; many misleading statements concerning this limit may be found in the literature. It is a statistical theory whose observables are statistical averages. However, (30) becomes a type 1 theory if it is considered separately—and embedded in the theory of canonical transformations. The crucial point is that (30) does not contain $\rho$; otherwise, it could not be considered separately. This separability—or equivalently the absence of $\rho$ in (30)—implies that this theory is a classical (type 2) statistical theory. The function $S$ may be interpreted as describing the individual behavior of particles in the given environment (potential $V$). Loosely speaking, the function $S$ may be identified with the considered particle; recall that $S$ is the function generating the canonical transformation to a trivial Hamiltonian. The identity of the particles described by $S$ is not influenced by statistical correlations because there is no coupling to $\rho$ in (30). The classical theory defined by (7) and (30) may also be formulated in terms of the variables $\psi$ and $\psi^{*}$ (but not as a single equation containing only $\psi$; see the remark at the end of Section 5). In this form, it has been discussed in several works [14–16].
All theories with nontrivial $L$, depending on $\rho$ or its derivatives, should be classified as “nonclassical” (or type 3) according to the above analysis. In nonclassical theories, any treatment of single events (calculation of trajectories) is impossible due to the coupling between $S$ and $\rho$. The problem is that single events are nevertheless real and observable. There must be a kind of dependence (correlation of nonclassical type) between these single events. But this dependence cannot be described by concepts of deterministic theories like “interaction”.
The impossibility to identify objects in type 3 theories—independently from the statistical context—is obviously related to the breakdown of the concept of standard random variables discussed in the last section. There, we anticipated that a standard random variable (which is defined as a unique function of another random variable) contains an element of determinism that should be absent in type 3 theories. In fact, it does not make sense to define a unique relation between measuring data—for example, of spatial position and momentum—if the quantities to be measured cannot themselves be defined independently from statistical aspects.
The theory defined by (28) is a type 3 theory. We will not discuss it in detail because it may be shown (see the next section) to be unphysical. It has been listed here in order to have a concrete example from the large set of insignificant type 3 theories.
The theory defined by (29) is also a type 3 theory. Here, the second statistical condition takes the form
$$\frac{\partial S}{\partial t}+\frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2}+V-\frac{\hbar^{2}}{2m}\,\frac{1}{\sqrt{\rho}}\,\frac{\partial^{2}\sqrt{\rho}}{\partial x^{2}}=0,\tag{31}$$
if the free proportionality constant in (29) is fixed according to the value $-\hbar^{2}/2m$. The two equations (7) and (31) may be rewritten in a more familiar form if the transformation (16) to the variables $\psi$, $\psi^{*}$ is performed. Then, both equations are contained (as real and imaginary parts) in the single equation
$$\mathrm{i}\hbar\,\frac{\partial\psi}{\partial t}=-\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}}+V\psi,\tag{32}$$
which is the one-dimensional version of Schrödinger's equation. Thus, quantum mechanics belongs to the class of theories defined by the above conditions. We see that the statistical conditions (2), (3) comprise both quantum mechanical and classical statistical theories; these relations express a “deep-rooted unity” of the classical and quantum mechanical domain of physics.
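As a consistency sketch of the decomposition of (32) into (7) and (31), one can verify on a grid that the harmonic oscillator ground state (an assumed, standard example; units with $\hbar=m=\omega=1$) satisfies (31) with $S=-E_{0}t$, so that $\partial S/\partial t=-\hbar\omega/2$ and $\partial S/\partial x=0$:

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]

# harmonic-oscillator ground state: sqrt(rho) is a Gaussian
sq = (m * omega / (np.pi * hbar)) ** 0.25 * np.exp(-m * omega * x**2 / (2 * hbar))
V = 0.5 * m * omega**2 * x**2
d2sq = np.gradient(np.gradient(sq, dx), dx)   # second derivative of sqrt(rho)

# residual of (31): dS/dt + (dS/dx)^2/2m + V - (hbar^2/2m)(sqrt(rho))''/sqrt(rho)
residual = -hbar * omega / 2 + 0.0 + V - (hbar**2 / (2 * m)) * d2sq / sq
print(np.max(np.abs(residual[100:-100])))     # small finite-difference error
```

The classical term $V$ and the $\rho$-dependent term of (31) cancel up to the constant ground-state energy, which is balanced by $\partial S/\partial t$; for a stationary state, (7) is trivially satisfied because the current vanishes.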
We found an infinite number of statistical theories which are all compatible with our basic conditions and are all on equal footing so far. However, only one of them, quantum mechanics, is realized by nature. This situation leads us to ask which further conditions are required to single out quantum mechanics from this set. Knowing such condition(s) would allow us to have premises which imply quantum mechanics.
The above analysis shows that Schrödinger's equation (32) can be derived from the condition that the dynamic law for the probabilities takes the form of a single equation for $\psi$ (instead of two equations for $S$ and $\rho$, as is the case for all other theories). Our previous use of the variables $S$ and $\rho$ instead of $\psi$ and $\psi^{*}$ was entirely a matter of mathematical convenience. In contrast, this last condition presents a real constraint for the physics, since a different number field has been chosen. Recently, Schrödinger's equation including the gauge coupling term has been derived from this condition (which had to be supplemented by two further conditions, namely, the existence of a continuity equation and the assumption of a linear time evolution law for $\psi$). Of course, this is a mathematical condition whose physical meaning is not at all clear. This formal criterion will be replaced in Section 9 by a different condition, which leads to the same conclusion but may be formulated in more physical terms.
6. Energy Conservation
In the last section, we derived a second differential equation (26) for our dynamical variables $S$ and $\rho$. This equation has some terms in common with the Hamilton-Jacobi equation of classical mechanics but contains an unknown function $L$ depending on $\rho$ and $S$; in principle, it could also depend explicitly on $x$ and $t$, but this would contradict the homogeneity of space-time. We need further physical condition(s) to determine those functions $L$ which are appropriate for a description of quantum mechanical reality or its classical counterpart.
A rather obvious requirement is conservation of energy. In deterministic theories, conservation laws—and in particular the energy conservation law, which will be considered exclusively here—are a logical consequence of the basic equations; there is no need for separate postulates in this case. In statistical theories, energy conservation with regard to the time dependence of single events is of course meaningless. However, a statistical analog of this conservation law may be formulated as follows: “the statistical average of the random variable energy is time independent”. In the present framework, it is expressed by the relation
$$\frac{\mathrm{d}}{\mathrm{d}t}\left[\int_{-\infty}^{\infty}\mathrm{d}p\,w(p,t)\,\frac{p^{2}}{2m}+\int_{-\infty}^{\infty}\mathrm{d}x\,\rho(x,t)\,V(x)\right]=0.\tag{33}$$
We will use the abbreviation $\overline{E}=\overline{T}+\overline{V}$ for the bracket, where $\overline{T}$ denotes the first and $\overline{V}$ the second term, respectively. Here, in contrast to the deterministic case, the fundamental laws (namely, (2), (3), (7)) do not guarantee the validity of (33). It has to be implemented as a separate statistical condition. In fact, (33) is very simple and convincing; it seems reasonable to keep only those statistical theories which obey the statistical version of the fundamental energy conservation law.
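The statistical energy conservation law (33) can be illustrated numerically for the quantum case: under the evolution (32), implemented below with a standard split-step Fourier scheme (grid, time step, potential, and initial packet are all arbitrary illustrative choices), the mean energy remains constant:

```python
import numpy as np

# split-step check that <E> = <p^2/2m> + <V> stays constant under (32)
hbar, m, dt, steps = 1.0, 1.0, 0.002, 500
N, Lbox = 1024, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)    # momentum grid (fft order)
V = 0.5 * x**2                                    # harmonic potential
psi = (2 / np.pi) ** 0.25 * np.exp(-(x - 1.0) ** 2 + 0.5j * x)  # displaced packet

def energy(psi):
    phi = np.fft.fft(psi)
    T = np.sum(np.abs(phi) ** 2 * p**2 / (2 * m)) / np.sum(np.abs(phi) ** 2)
    Vm = np.sum(np.abs(psi) ** 2 * V) * dx
    return T + Vm

E0 = energy(psi)
for _ in range(steps):                            # Strang splitting V/2, T, V/2
    psi *= np.exp(-0.5j * V * dt / hbar)
    psi = np.fft.ifft(np.exp(-0.5j * p**2 * dt / (m * hbar)) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt / hbar)

print(abs(energy(psi) - E0))                      # bounded splitting error
```

The small residual reflects only the second-order error of the splitting scheme, not a physical drift of $\overline{E}$.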
In writing down (33), a second tacit assumption, besides the postulate of energy conservation, has been made, namely, that a standard random variable “kinetic energy” exists; this assumption has already been formulated and partly justified in the last section. This means, in particular, that the probability density $w(p,t)$, which has been introduced in the statistical conditions (2), (3) to obtain the expectation value of $p$, may also be used to calculate the expectation value of $p^{2}/2m$. This second assumption is—like the requirement of energy conservation—not a consequence of the basic equations (26), (7). The latter may be used to calculate the probability density $\rho$ but say nothing about the calculation of expectation values of $p$-dependent quantities. Thus, (33) is an additional assumption, as may also be seen from the fact that two unknown functions, namely, $L$ and $w$, occur in (33).
Equation (33) defines a relation between $L$ and $w$. More precisely, we consider variables $S$, $\rho$ which are solutions of the two basic equations (26) and (7), where $L$ may be an arbitrary function of $\rho$, $S$. Using these solutions, we ask which (differential) relations between $L$ and $w$ are compatible with the requirement (33). Postulating the validity of (33) implies certain relations (yet to be found in explicit form) between the equations determining the probabilities and the equations defining the expectation values of $p$-dependent quantities (like the kinetic energy).
In a first step, we rewrite the statistical average of in (33) using (21). The result is as may be verified with the help of (20). Using (34), transforming to , and performing an integration by parts, the first term of (33) takes the form If we add the time derivative of to (35), we obtain the time derivative of , as defined by the left hand side of (33). In the integrand of the latter expression, the following term occurs: The two brackets in (36) may be rewritten with the help of (7) and (26). Then, the term (36) takes the much simpler form Using (35) and (37), we find that the statistical condition (33) implies the following integral relation between and
Let us first investigate the classical solution. We may either insert the classical, “hybrid” solution (10) for directly into (33) or insert according to (21) with as given by (10) in (38), to obtain which implies . Thus, the hybrid probability density (10) leads, as expected, to a classical (the equation for does not contain terms dependent on ) statistical theory, given by the Hamilton-Jacobi equation and the continuity equation. These equations constitute the classical limit of quantum mechanics which is a statistical theory (of type 2 according to the above classification) and not a deterministic (type 1) theory like classical mechanics. This difference is very important and should be borne in mind. The various ambiguities  one encounters in the conventional particle picture both in the transitions from classical physics to quantum mechanics and back to classical physics do not exist in the present approach.
If we insert the quantum-mechanical result (29) with properly adjusted constant in (38), we obtain where is an arbitrary time-independent constant. This constant reflects the freedom to fix the zero point of a (kinetic energy) scale. An analogous arbitrary constant occurs for the potential energy. Since kinetic energy always occurs (in all physically meaningful contexts) together with potential energy, the constant may be eliminated with the help of a properly adjusted . Therefore, we see that—as far as the calculation of the expectation value of the kinetic energy is concerned—it is permissible to set . Combined with previous results, we see that may be set equal to 0 as far as the calculation of the expectation values of , for is concerned. These cases include all cases of practical importance. A universal rule for the calculation of averages of arbitrary powers of is not available in the present theory. The same is true for arbitrary powers of and . Fortunately, this is not really a problem, since the above powers cover all cases of physical interest, as far as powers of are concerned (combinations of powers of and do not occur in the present theory and will be dealt with in a future work).
It is informative to compare the present theory with the corresponding situation in the established formulations of quantum mechanics. In the conventional quantization procedure, which is ideologically dominated by the structure of particle mechanics, it is postulated that all classical observables (arbitrary functions of and ) be represented by operators in Hilbert space. The explicit construction of these operators runs into considerable difficulties for all except the simplest combinations of and . But, typically, this does not cause any real problems, since all simple combinations (of physical interest) can be represented in a unique way by corresponding operators. Thus, what is wrong—or rather ill posed—is obviously the postulate itself, which creates an artificial problem. This is one example, among several others, of an artificial problem created by choosing the wrong (deterministic) starting point for quantization.
If we start from the r.h.s. of (38) and postulate , then we obtain agreement with the standard formalism of quantum mechanics, both with regard to the time-evolution equation and the rules for calculating expectation values of -dependent quantities. Thus, is a rather strong condition. Unfortunately, there seems to be no intuitive interpretation at all for this condition. It is even less understandable than our previous formal postulate leading to Schrödinger's equation, the requirement of a complex state variable. Thus, while we have gained in this section important insight into the relation between energy conservation, the time-evolution equation, and the rules for calculating expectation values, still other methods are required if we want to derive quantum mechanics from a set of physically interpretable postulates.
7. Entropy as a Measure of Disorder?
How, then, is the unknown function (and ) to be determined? According to the last section, all the required information on may be obtained from a knowledge of the term in the differential equation (26) for . We will try to solve this problem by means of the following two-step strategy: (i) find an additional physical condition for the fundamental probability density and (ii) determine the shape of (as well as that of and ) from this condition.
At this point, it may be useful to recall the way probability densities are determined in classical statistical physics. After all, the present class of theories is certainly not of a deterministic nature and belongs fundamentally to the same class of statistical (i.e., incomplete with regard to the description of single events) theories as classical statistical physics, no matter how important the remaining differences may be.
The physical condition for which determines the behavior of ensembles in classical statistical physics is the principle of maximal (Boltzmann) entropy. It agrees essentially with the information-theoretic measure of disorder introduced by Shannon and Weaver. Using this principle, both the microcanonical and the canonical distribution of statistical thermodynamics may be derived under appropriate constraints. Let us discuss this classical extremal principle in some detail in order to see whether it can be applied, after appropriate modifications, to the present problem. This question also entails a comparison of different types of statistical theories.
The Boltzmann-Shannon entropy is defined as a functional of an arbitrary probability density . The statistical properties characterizing disorder, which may be used to define this functional, are discussed in many publications [23, 24]. Only one of these conditions will, for later use, be written down here, namely, the so-called “composition law”. Let us assume that may be written in the form , where depends only on points in a subspace of our -dimensional sample space , and let us further assume that is the direct product of and . Thus, this system consists of two independent subsystems. Then, the composition law is given by where operates only on .
For a countable sample space with events labeled by indices from an index set and probabilities , the entropy is given by where is a constant. To obtain meaningful results, the extrema of (42) under appropriate constraints, or subsidiary conditions, must be found. The simplest constraint is the normalization condition . In this case, the extrema of the function with respect to the variables must be calculated. One obtains the reasonable result that the minimal value of is 0 (one of the equal to 1, all others equal to 0) and the maximal value is (all equal, ).
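These two extremal values are easy to reproduce numerically. The short sketch below sets the constant in the entropy formula to 1 (an assumption of this illustration) and evaluates the discrete entropy for the two limiting distributions.

```python
import numpy as np

# Shannon entropy S = -sum_i p_i ln p_i (the overall constant is set
# to 1 here); terms with p_i = 0 contribute nothing to the sum.
def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

n = 5
certain = np.zeros(n); certain[0] = 1.0   # one outcome certain: minimal disorder
uniform = np.full(n, 1.0/n)               # all outcomes equal: maximal disorder

print(shannon_entropy(certain), shannon_entropy(uniform), np.log(n))
```

Any other normalized distribution over the same five events yields an entropy strictly between these two values, 0 and ln 5.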
For most problems of physical interest, the sample space is nondenumerable. A straightforward generalization of (42) is given by where the symbol now denotes a point in the appropriate (generally -dimensional) sample space. There are some problems inherent in this straightforward transition to a continuous set of events which will be mentioned briefly in the next section. Let us put these problems aside for the moment and ask whether (44) makes sense from a physical point of view. For nondenumerable problems, the principle of maximal disorder leads to a variational problem, and the method of Lagrange multipliers may still be used to combine the requirement of maximal entropy with other defining properties (constraints). An important constraint is the property of constant temperature, which leads to the condition that the expectation value of the possible energy values is given by a fixed number , If, in addition, normalizability is implemented as a defining property, then the true distribution should be an extremum of the functional It is easy to see that the well-known canonical distribution of statistical physics is indeed an extremum of . Can we use a properly adapted version of this powerful principle of maximal disorder (entropy) to solve our present problem?
Let us compare the class of theories derived in Section 5 with classical theories like (46). This may be of interest also in view of a possible identification of “typical quantum mechanical properties” of statistical theories. We introduce for clarity some notation, based on properties of the sample space. Classical statistical physics theories like (46) will be referred to as “phase space theories”. The class of statistical theories, derived in Section 5, will be referred to as “configuration space theories”.
The most fundamental difference between phase space theories and configuration space theories concerns the physical meaning of the coordinates. The coordinates of phase space theories are (generally time-dependent) labels for particle properties. In contrast, configuration space theories are field theories; individual particles do not exist and the (in our case one-dimensional) coordinates are points in space.
A second fundamental difference concerns the dimension of the sample space. Elementary events in phase space theories are points in phase space (of dimension 6 for a 1-particle system) including configuration-space and momentum-space (particle) coordinates, while the elementary events of configuration space theories are (space) points in configuration space (which would be of dimension 3 for a 1-particle system in three spatial dimensions). This fundamental difference is a consequence of a (generally nonlocal) dependence between momentum coordinates and space-time points contained in the postulates of the present theory, in particular in the postulated form of the probability current (see (7)). This assumption of a probability current which takes the form of a gradient of a function (multiplied by ) is a key feature distinguishing configuration space theories, as potential quantum-like theories, from the familiar (many-body) phase space theories. The existence of this dependence per se is not an exclusive feature of quantum mechanics; it is a property of all theories belonging to the configuration class, including the theory characterized by , which will be referred to as the “classical limit theory”. What distinguishes the classical limit theory from quantum mechanics is the particular form of this dependence; for the former, it is given by a conventional functional relationship (as discussed in Section 4); for the latter, it is given by a nonlocal relationship whose form is still to be determined.
This dependence is responsible for the fact that no “global” condition (like (45) for the canonical distribution) needs to be introduced for the present theory in order to guarantee conservation of energy in the mean—this conservation law can be guaranteed “locally” for arbitrary theories of the configuration class by adjusting the relation between (the form of the dynamic equation) and (the definition of expectation values). In phase space theories, the form of the dynamical equations is fixed (given by the deterministic equations of classical mechanics). Under constraints like (45), the above principle of maximal disorder creates—basically by selecting appropriate initial conditions—those systems which belong to a particular energy; for nonstationary conditions, the deterministic differential equations of classical mechanics then guarantee that energy conservation holds for all times. In contrast, in configuration space theories, there are no initial conditions (for particles). The conditions which are at our disposal are the mathematical form of the expectation values (the function ) and/or the mathematical form of the differential equation (the function ). Thus, if something like the principle of maximal disorder can be used in the present theory, it will determine the form of the differential equation for rather than the explicit form of .
These considerations raise some doubt as to the usefulness of a measure of disorder like the entropy (44)—which depends essentially on instead of and does not contain derivatives of —for the present problem. We may still look for an information theoretic extremal principle of the general form Here, the functional attains its maximal value for the function which describes—under given constraints —the maximal disorder. But will differ from the entropy functional and appropriate constraints , reflecting the local character of the present problem, have still to be found. Both terms in (47) are at our disposal and will be defined in the next sections.
8. Fisher's Information
A second measure of disorder, besides entropy, exists which is called Fisher information . The importance of this second type of “entropy” for the mathematical form of the laws of physics—in particular for the terms related to the kinetic energy—has been stressed in a number of publications by Frieden [26, 27] and has been studied further by Hall , Reginatto  and others. The Fisher functional is defined by where denotes in the present one-dimensional case and the -component vector if . Since the time variable does not play an important role it will frequently be suppressed in this section.
The Boltzmann-Shannon entropy (44) and the Fisher information (48) have a number of crucial statistical properties in common. We mention here, for future reference, only the most important one, namely, the composition law (41); a more complete list of common properties may be found in the literature. Using the notation introduced in Section 7 (see the text preceding (41)), it is easy to see that (48) fulfills the relation in analogy to (41) for the entropy . The most obvious difference between (44) and (48) is the fact that (48) contains a derivative while (44) does not. As a consequence, extremizing these two functionals yields fundamentally different equations for , namely, a differential equation for the Fisher functional and an algebraic equation for the entropy functional .
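A direct numerical evaluation of the one-dimensional Fisher functional, I[ρ] = ∫ dx (dρ/dx)²/ρ, illustrates both its dependence on derivatives and the composition law. The sketch below uses Gaussian densities, for which the closed form I = 1/σ² is known; the widths are assumptions of the illustration.

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 4801)
dx = x[1] - x[0]

def fisher(rho):
    # I[rho] = integral of (d rho / dx)^2 / rho
    drho = np.gradient(rho, dx)
    return np.sum(drho**2 / rho) * dx

def gauss(x, sigma):
    return np.exp(-x**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))

I1 = fisher(gauss(x, 1.0))  # known closed form: 1/sigma^2 = 1
I2 = fisher(gauss(x, 2.0))  # known closed form: 1/sigma^2 = 0.25
print(I1, I2)
# For a product density rho(x, y) = rho1(x) * rho2(y) describing two
# independent subsystems, the composition law gives I = I1 + I2.
```

Note that I is large for sharply peaked (well-localized, highly ordered) densities and small for broad ones, which is why minimizing it implements maximal disorder.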
The two measures of disorder, and , are related to each other. To find this relation, it is necessary to introduce a generalized version of (44), the so-called “relative entropy”. It is defined by where is a given probability density, sometimes referred to as the “prior” (the constant in (44) has been suppressed here). It provides a reference point for the unknown ; the best choice for is to be determined from the requirement of maximal relative entropy under given constraints, where represents the state of affairs (or of our knowledge of the state of affairs) prior to consideration of the constraints. The quantity agrees with the “Kullback-Leibler distance” between two probability densities and .
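In the sign convention used here, in which maximal relative entropy is sought, the functional is nonpositive and vanishes only when the density coincides with the prior. A minimal discrete sketch (the distributions below are made up for illustration):

```python
import numpy as np

def relative_entropy(p, q):
    # S[p|q] = -sum_i p_i ln(p_i / q_i): minus the Kullback-Leibler
    # distance, so S <= 0, with S = 0 if and only if p == q.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return -np.sum(p * np.log(p / q))

p = np.array([0.5, 0.3, 0.2])      # density under consideration
prior = np.array([0.4, 0.4, 0.2])  # reference density ("prior")

print(relative_entropy(p, p))      # zero: no deviation from the prior
print(relative_entropy(p, prior))  # strictly negative for p != prior
```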
It has been pointed out that “all entropies are relative entropies” . In fact, all physical quantities need reference points in order to become observables. The Boltzmann-Shannon entropy (44) is no exception. In this case, the “probability density” is a number of value 1, and of the same dimension as ; it describes absence of any knowledge or a completely disordered state. We mention also two other more technical points which imply the need for relative entropies. The first is the requirement to perform invariant variable transformations in the sample space , the second is the requirement to perform a smooth transition from discrete to continuous probabilities .
Thus, the concept of relative entropies is satisfying from a theoretical point of view. On the other hand, it seems to be useless from a practical point of view, since it requires—except in the trivial limit —knowledge of a new function which is in general just as unknown as the original unknown function . A way out of this dilemma is to identify with a function , which can be obtained from by replacing the argument by a transformed argument . In this way, we obtain from (50) a quantity , which is a functional of the relevant function alone; in addition it is an ordinary function of the parameters characterizing the transformation. The physical meaning of the relative entropy remains unchanged, the requirement of maximal relative entropy becomes a condition for the variation of in the sample space between the points and .
If further consideration is restricted to translations (it would be interesting to investigate other transformations, in particular if the sample space agrees with the configuration space), then the relative entropy is written as Expanding the integrand on the r.h.s. of (51) up to terms of second order in and using the fact that and have to vanish at infinity, one obtains the relation This, then, is the required relation between the relative entropy and the Fisher information ; it is valid only for sufficiently small . The relative entropy cannot be positive. Considered as a function of , it has a maximum at (taking its maximal value 0) provided . This means that the principle of maximal entropy implies no change at all relative to an arbitrary reference density. This provides no criterion for , since it holds for arbitrary . But if (52) is considered, for fixed , as a functional of , the principle of maximal entropy implies, as a criterion for the spatial variation of , a principle of minimal Fisher information.
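The second-order relation (52) between the relative entropy of a density and its translate and the Fisher information can be checked numerically. The sketch below uses a Gaussian of width σ (an assumption of the illustration), for which I = 1/σ²; for this particular density the relation happens to hold exactly for all shifts, not only small ones.

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 4801)
dx = x[1] - x[0]
sigma = 1.5                  # assumed width for this sketch
I = 1.0 / sigma**2           # Fisher information of the Gaussian

def gauss(x):
    return np.exp(-x**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))

rho = gauss(x)

def relative_entropy_of_shift(s):
    # -integral rho(x) ln[rho(x) / rho(x - s)] dx
    return -np.sum(rho * np.log(rho / gauss(x - s))) * dx

for s in (0.05, 0.1, 0.2):
    print(s, relative_entropy_of_shift(s), -(s**2 / 2) * I)
```

The two printed columns agree: the relative entropy of a small translation is minus half the squared shift times the Fisher information, so maximizing the former for fixed shift is the same as minimizing the latter.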
Thus, from this overview (see Frieden’s book for more details and several other interesting aspects), we would conclude that the principle of minimal Fisher information should not be considered as a completely new and exotic matter. Rather, it should be considered as an extension or generalization of the classical principle of maximal disorder to a situation where a spatially varying probability exists, which contributes to disorder. This requires, in particular, that this probability density be determined from a differential equation and not from an algebraic equation. We conclude that the principle of minimal Fisher information is very well suited for our present purpose. As a next step, we have to set up proper constraints for the extremal principle.
9. Subsidiary Condition
It will be convenient in the course of the following calculations to write the differential equation (26) in the form where is given by (25) and is defined by In (54), it has been assumed that does not depend explicitly on , that the problem is basically of a time-independent nature, and that no higher derivatives with respect to than occur. This last assumption is in agreement with the mathematical form of all fundamental differential equations of physics; we will come back to this point later. Our task is to determine the functional form of , with respect to the variables , using a general statistical extremal principle. As a consequence of the general nature of this problem, we do not expect the solution to depend on the particular form of . For the same reason, does not depend on .
We tentatively formulate a principle of maximal disorder of the form (47) and identify with the Fisher functional (48). Then, the next step is to find a proper constraint . In accord with general statistical principles, the prescribed quantity should have the form of a statistical average. A second condition is that our final choice should be as similar to the classical requirement (45) as possible. Adopting these criteria, one is led more or less automatically to the constraint where Our guideline in setting up this criterion has been the idea of a prescribed value of the average energy; the new term plays the role of an additional contribution to the energy. For and the constraint (55) agrees with (45) provided the “classical” identification of with the gradient of is performed (see (8)). The most striking difference between (45) and (55) is the fact that the quantity in (56), whose expectation value yields the constraint, is not defined independently of the statistics (as in (45)) but depends itself on (and its derivatives up to second order). This aspect of nonclassical theories has already been discussed in Section 5.
Let us try to apply the mathematical apparatus of variational calculus to the constraint problem (47) with the “entropy” functional defined by (48) and a single constraint defined by (55) and (56) (there is no normalization condition here because we do not want to exclude potentially meaningful nonnormalizable states from the consideration). Here, we immediately encounter a first problem, which is due to the fact that our problem consists in the determination of an unknown function of . This function appears in the differential equation and in the subsidiary condition for the variational problem. Thus, our task is to identify from a variational problem the functional form of a constraint defining this variational problem. Variational calculus starts, of course, from constraints whose functional forms are fixed; these fixed functionals are used to derive differential equations for the variable . Thus, whenever the calculus of variations is applied, the function must be considered as unknown but fixed. We will have to find a way to “transform” the condition for the variation of into a corresponding condition for the variation of .
The variational calculation defined above belongs to a class of “isoperimetric” variational problems which can be solved using the standard method of Lagrange multipliers, provided certain mathematical conditions are fulfilled. Analyzing the situation, we encounter here a second problem, which is in fact related to the first. Let us briefly recall the way the variational problem (47) is solved, in particular with regard to the role of the Lagrange multipliers . Given the problem of finding an extremal of under constraints of the form and two prescribed values of at the boundaries, one proceeds as follows. The Euler-Lagrange equation belonging to the functional (47) is solved. The general solution for depends (besides on ) on two integration constants, say and , and on the Lagrange multipliers . To obtain the final extremal , these constants have to be determined from the two boundary values and the constraints (which are differential equations for isoperimetric problems). This is exactly the way the calculation has been performed (even though a simpler form of the constraints has been used) in the classical case. For the present problem, however, this procedure is useless, since we do not want the constraints to determine the shape of individual solutions but rather the functional form of a term in the differential equation, which is then the same for all solutions. For that reason, the “normal” variational problem (47) does not work (we will come back to a mathematical definition of “normal” and “abnormal” shortly). This means that the classical principle of maximal entropy, as discussed in Section 7, cannot be taken over literally to the nonclassical domain.
For the same reason, no subsidiary conditions can be taken into account in the calculations reported by Frieden  and by Reginatto . In these works, a different route is chosen to obtain Schrödinger’s equation; in contrast to the present work (see below), the Fisher functional is added as a new term to a classical Lagrangian and the particular form of this new term is justified by introducing a new “principle of extreme physical information” .
A variational problem is called “normal” if an extremal of the functional (here, we restrict ourselves to the present case of a single constraint) exists which is not at the same time an extremal of the constraint functional . If this is not the case, that is, if the extremal is at the same time an extremal of , then the problem is called “abnormal”. Then, the usual derivation becomes invalid and the condition (47) must be replaced by the condition of extremal alone, which then yields as the only remaining condition to determine the extremal. This type of problem is also sometimes referred to as “rigid”; the original formulation (47) may be extended to include the abnormal case by introducing a second Lagrange multiplier .
We conclude that our present problem should be treated as an abnormal variational problem, since we thereby get rid of our main difficulty, namely, the unwanted dependence of individual solutions on Lagrange multipliers ( actually drops out of (57)). A somewhat dissatisfying (at first sight) feature of this approach is the fact that the Fisher functional itself no longer takes part in the variational procedure; the original idea of implementing maximal statistical disorder seems to have been lost. But it turns out that we will soon recover the Fisher functional in the course of the following calculation. The vanishing of the first variation of , written explicitly as means that (for fixed ) the spatial variation of should extremize (minimize) the average value of the deviation from . This requirement is (as a condition for ) in agreement with the principle of minimal Fisher information as a special realization of the requirement of maximal disorder. Equation (58) actually defines a Lagrangian for and yields as Euler-Lagrange equations a differential equation for . When this equation is derived, the task of variational calculus is finished. On the other hand, we know that also obeys (53). Both differential equations must agree, and this fact yields a condition for our unknown function ; (53) also guarantees that the original constraint (55) is fulfilled. In this way, we are able to “transform” the original variational condition for into a condition for . In the next section, this condition will be used to calculate and to recover the form of the Fisher functional.
It should be mentioned that (58) has been used many times in the last eighty years to derive Schrödinger's equation from the Hamilton-Jacobi equation. The first and most important of these works is Schrödinger's “Erste Mitteilung” . In all of these papers is not treated as an unknown function but as a given function, constructed with the help of the following procedure. First, a transformation from the variable to a complex variable is performed. Secondly, a new variable is introduced by means of the formal replacement . This creates a new term in the Lagrangian, which has exactly the form required to create quantum mechanics. More details on the physical motivations underlying this replacement procedure may be found in a paper by Lee and Zhu . It is interesting to note that the same formal replacement may be used to perform the transition from the London theory of superconductivity to the Ginzburg-Landau theory . There, the necessity to introduce a new variable is obvious, in contrast to the present much more intricate situation.
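The effect of this formal replacement can be verified symbolically. The check below is ours, not part of the cited papers, and assumes sympy is available; ρ and S are arbitrary smooth real functions. Writing ψ = √ρ e^{iS/ħ}, the "kinetic" density ħ²|dψ/dx|² decomposes into the classical term ρ(dS/dx)² plus a new Fisher-type term ħ²(dρ/dx)²/(4ρ), which is exactly the kind of new Lagrangian term described above.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
hbar = sp.Symbol('hbar', positive=True)
rho = sp.Function('rho', positive=True)(x)
S = sp.Function('S', real=True)(x)

psi = sp.sqrt(rho) * sp.exp(sp.I * S / hbar)
dpsi = sp.diff(psi, x)
# complex conjugation: rho, S, hbar are real, so it suffices to flip i
dpsi_conj = dpsi.subs(sp.I, -sp.I)

kinetic_density = sp.expand(hbar**2 * dpsi * dpsi_conj)
classical_term = rho * sp.diff(S, x)**2               # Hamilton-Jacobi part
fisher_term = hbar**2 * sp.diff(rho, x)**2 / (4 * rho)  # new "quantum" part

residual = sp.simplify(kinetic_density - classical_term - fisher_term)
print(residual)  # 0: the decomposition is an identity
```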
10. Derivation of
For a general , the Euler-Lagrange equation belonging to the functional (see (58)) depends on derivatives higher than second order, since the integrand in (58) depends on . This is a problem since, according to the universal rule mentioned above, all differential equations of physics are formulated using derivatives not higher than second order. If we are to conform to this general rule (and we would like to do so), then we should use a Lagrangian containing only first-order derivatives. But this would then again produce a conflict with (53), because the variational procedure increases the order of the highest derivative by one. We postpone the resolution of this conflict and proceed by calculating the Euler-Lagrange equations according to (58), which are given by Using the second basic condition (53), we see that the first line of (59) vanishes, and we obtain, introducing the abbreviation , the following partial differential equation for the determination of the functional form of with respect to the variables : Expressing the derivatives of in terms of the derivatives of and leads to a lengthy relation which will not be written down here. Since does not contain derivatives higher than , the sums of the coefficients of both the third and fourth derivatives of have to vanish. This implies that may be written in the form where and are solutions of Thus, two functions of have to be found, instead of a single function of . Fortunately, the solution we look for presents a term in a differential equation. This allows us to restrict our search to relatively simple solutions of (62). If the differential equation is intended to be comparable in complexity to other fundamental laws of physics, then a polynomial form, preferably with a finite number of terms, will be sufficiently general.
Equation (62) must of course hold for arbitrary . Inserting (63), renaming indices, and comparing coefficients of equal powers of and , one obtains the relations determining and . These relations may be used to calculate those values of which allow for nonvanishing coefficients and to calculate the proportionality constants between these coefficients; for example, (64) may be used to express in terms of provided and . One obtains the result that the general solution of (60) of polynomial form is given by (61), with where and are arbitrary constants and the index set is given by While the derivation of (66), (67) is straightforward but lengthy, the fact that (61), (66), and (67) fulfill (60) may easily be verified.
At this point, we are looking for further constraints in order to reduce the number of unknown constants. The simplest (nontrivial) special case of (61), (66), (67) is , for all . The corresponding solution for is given by . However, a solution given by a nonzero constant may be eliminated by adding a corresponding constant to the potential in . Thus, this solution need not be taken into account, and we may set .
The “next simplest” solution, given by , for all except , takes the form Let us also write down here, for later use, the solution given by , for all except . It takes the form
Comparison of the r.h.s. of (69) with (29) shows that the solution (69) leads to Schrödinger's equation (32). At this point, the question arises why this particular solution has been realized by nature—and not any other from the huge set of possible solutions. The solution (69) consists of two parts. Let us consider the two corresponding terms in , which represent two contributions to the Lagrangian in (56). The second of these terms agrees with the integrand of the Fisher functional (48). The first is proportional to . This first term may be omitted in the Lagrangian (under the integral sign), because it represents a boundary (or surface) term and gives no contribution to the Euler-Lagrange equations (it must not be omitted in the final differential equation (53), where exactly the same term reappears as a consequence of the differentiation of ). Thus, integrating the contribution of the solution (69) to the Lagrangian yields exactly the Fisher functional. No other solution with this property exists. Therefore, the reason why nature has chosen this particular solution is basically the same as in classical statistics, namely, the principle of maximal disorder—but realized in a different (local) context and expressed in terms of a principle of minimal Fisher information.
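The distinction between the two parts of (69) can be made concrete numerically (a sketch; the unit-width Gaussian density is an assumption of the illustration): the boundary-term contribution, being a total derivative, integrates to zero over the sample space, while the Fisher part gives a finite positive value.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / 2) / np.sqrt(2*np.pi)   # assumed unit-width Gaussian

d1 = np.gradient(rho, dx)
d2 = np.gradient(d1, dx)

# boundary (surface) term: the integral of rho'' vanishes for any
# density that is localized (vanishes, with its derivative, at infinity)
surface_term = np.sum(d2) * dx
# Fisher part: the integral of rho'^2 / rho equals 1/sigma^2 = 1 here
fisher_part = np.sum(d1**2 / rho) * dx

print(surface_term, fisher_part)
```

This is the numerical counterpart of the statement in the text: the first term drops out of the Lagrangian (and hence of the Euler-Lagrange equations), so only the Fisher integrand survives the integration.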
We see that the conflict mentioned at the beginning of this section does not exist for the quantum-mechanical solution (69). The reason is again that the term in containing the second derivative is of the form of a total derivative and can, consequently, be neglected as far as its occurrence in the term of the Lagrangian is concerned. Generalizing this fact, we may formulate the following criterion for the absence of any conflict: the terms in containing must not yield contributions to the variation; that is, they must, in the present context, take the form of total derivatives (for more general variational problems, such terms are called “null Lagrangians”).
So far, in order to reduce the number of our integration constants, we used the criterion that the corresponding term in the Lagrangian should agree with the form of Fisher’s functional. This “direct” implementation of the principle of maximal disorder led to quantum mechanics. The absence of the above-mentioned conflict means that the theory may be formulated using a Lagrangian containing no derivatives higher than first order. As is well known, this is a criterion universally realized in nature; a list of fundamental physical laws obeying this criterion may, for example, be found in a paper by Frieden and Soffer . Thus, this criterion, although of a “formal” character, is convincing. Let us apply it as an alternative physical argument to reduce the number of unknown coefficients in the above solution. This criterion implies that the derivatives of with respect to do not play any role; that is, the solutions of (60) must also obey . This implies that only those solutions of (60) are acceptable which obey . Using (61) and (66), it is easy to see that the solution (69) belonging to is the only solution compatible with the requirement (72) (as one would suspect, it is also possible to derive (69) directly from (71)). Thus, the “formal” principle that the Lagrangian contains no terms of order higher than one leads to the same result as the “direct” application of the principle of maximal disorder. The deep connection between statistical criteria and the form of the kinetic-energy terms in the fundamental laws of physics has been mentioned before in the literature . The present derivation sheds new light, from a different perspective, on this connection.
Summarizing, the shape of our unknown function has been found. The result for leads to Schrödinger's equation, as already pointed out in Section 5. This means that quantum mechanics may be selected from an infinite set of possible theories by means of a logical principle of simplicity, the statistical principle of maximal disorder. Considered from this point of view, quantum mechanics is “more reasonable” than its classical limit (which is a statistical theory like quantum mechanics). It also means (see Section 6) that the choice is justified as far as the calculation of expectation values of is concerned.
In closing this section, we note that the particular form of the function has never been used. Thus, while the calculation of reported in this section completes our derivation of quantum mechanics, the result obtained is by no means specific for quantum mechanics. Consider the steps leading from the differential equation (53) and the variational principle (58) to the general solution (61), (66), (67). If we now supplement our previous assumptions with the composition law (41), we are able to single out the Fisher functional among all solutions (compare, e.g., (70) and (69)). Thus, the above calculations may also be considered as a new derivation of the Fisher functional, based on assumptions different from those used previously in the literature.
Both the formal transition from classical physics to quantum mechanics (quantization procedure) and the interpretation of the resulting mathematical formalism are presently dominated by the particle picture.
To begin with the interpretation, Schrödinger's equation is used to describe, for example, the behavior of individual electrons. At the same time, the statistical nature of quantum mechanics is obvious and cannot be denied. To avoid this fundamental conflict, various complicated intellectual constructions, which I do not want to discuss here, have been—and are being—designed. But the experimental data from the microworld (as interpreted in the particle picture) remain mysterious, no matter which one of these constructions is used.
Let us now consider the quantization process. The canonical quantization procedure consists of a set of formal rules, which include, in particular, the replacement of classical momentum and energy observables by new quantities, according to , which then act on states in a Hilbert space, and so forth. By means of this well-known set of rules, one obtains immediately Schrödinger's equation (32) from the classical Hamiltonian of a single particle. While we are accustomed to “well-established” rules like (73), it is completely unclear why they work. It does not help if more sophisticated versions of the canonical quantization procedure are used. If, for example, the structural similarity between quantum mechanical commutators and classical Poisson brackets  is used as a starting point, this does not at all change the mysterious nature of the jump into Hilbert space given by (73); this structural similarity is just a consequence of the fact that both theories share the same space-time (symmetries).
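In their standard textbook form (written here for one spatial dimension), the replacement rules referred to as (73) read

```latex
p \;\longrightarrow\; -\,\mathrm{i}\hbar\,\frac{\partial}{\partial x},
\qquad
E \;\longrightarrow\; \mathrm{i}\hbar\,\frac{\partial}{\partial t},
\qquad
x \;\longrightarrow\; x .
```

Applied to the classical single-particle relation $E = p^{2}/2m + V(x)$ and letting both sides act on a state $\psi(x,t)$, they yield Schrödinger's equation,
$\mathrm{i}\hbar\,\partial_t \psi = \bigl(-\tfrac{\hbar^{2}}{2m}\,\partial_x^{2} + V(x)\bigr)\psi$,
in a single formal step—which is precisely the “jump into Hilbert space” whose justification is at issue here.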
Thus, there seems to be no possibility of understanding either the quantization procedure or the interpretation of the formalism if a single-particle picture is used as a starting point. According to a (prevailing) positivistic attitude, this is no problem, since the above rules “work” (they illustrate perfectly von Neumann's saying “In mathematics you do not understand things. You just get used to them”). On the other hand, an enormous amount of current activity is apparently aimed at an understanding of quantum mechanics.
We believe that the particle picture is inadequate, both as a starting point for the quantization process and with regard to the interpretation of the formalism. In fact, both of these aspects seem to be intimately related to each other; a comprehensible quantization procedure will lead to an adequate interpretation, and a reasonable quantization procedure will only be found if the theory is interpreted in an adequate way. The position adopted here is that the most adequate way is the simplest possible way. We believe that quantum mechanics is a statistical theory whose dynamical predictions only make sense for statistical ensembles and cannot be used to describe the behavior of individual events. This ensemble interpretation of quantum mechanics (for more details, the reader is referred to review articles by Ballentine  and by Home and Whitaker ) is generally accepted as the simplest possible interpretation, free from any contradictions and free from any additional assumptions expanding the range of validity of the original formalism. Fundamental conceptual problems like the “measurement problem” or the impossibility of characterizing the wave function of a single particle by means of experimental data  do not exist in the statistical interpretation. Nevertheless, it is a minority view; the reason may be that it forces us to accept that essential parts of reality are out of our control. This inconvenient conclusion can be avoided by postulating that all fundamental laws of nature must be deterministic (with regard to the description of individual events). From the point of view of this deterministic dogma, any interpretation denying the completeness of quantum mechanics must be a “hidden variable theory”.
If we accept the ensemble interpretation of quantum mechanics, then the proper starting point for quantization must be a statistical theory. The assumption of a Hilbert space for the considered system should be avoided; this would mean postulating many essential quantum mechanical properties without any possibility to analyze their origin. Our aim is the derivation of Schrödinger's equation, from which the Hilbert space structure can afterwards be obtained by means of mathematical analysis, abstraction, and generalization . Preferably, Schrödinger's equation should be derived from assumptions which can be understood in the framework of general classical (statistical as well as deterministic) and logical concepts. This route to quantum mechanics is of course not new. Any listing of works [3–6, 14–17, 26, 27, 35, 42, 43] following related ideas must necessarily be incomplete. In the present paper, an attempt has been undertaken to find a set of assumptions which is on the one hand complete and on the other hand as simple and fundamental as possible. Throughout this work, all calculations have been performed, for simplicity, for a single spatial dimension. In the meantime, after submission of this paper, the present approach has been generalized to three dimensions, gauge fields, and spin . Given that it can be further generalized to a -dimensional configuration space, this would mean that essentially all of nonrelativistic quantum mechanics can be derived in the framework of the present approach (preliminary calculations of the present author indicate that this can indeed be done). This aim has not yet been completely achieved, but we will sometimes tacitly assume in the following discussion that it can be achieved.
Our first, and—in a sense—central assumption was the set of relations (2), (3), which may be characterized as a statistical version of the two fundamental equations of classical mechanics displayed in (1), namely, the definition of particle momentum and Newton's equation. In writing down these relations, the existence of two random variables and , with possible values from , has been postulated. This means that appropriate experimental devices for measuring position and momentum may be set up. The probabilities and are observable quantities, to be determined by means of a large number of individual measurements of and . Given such data for and , the validity of the statistical conditions (2), (3) may be tested. Thus, these relations have a clear operational meaning. On the other hand, they do not provide a statistical law of nature, that is, dynamical equations for the probabilities and . Further constraints are required to define such laws. As shown in Section 5, the statistical framework provided by (2), (3) is very general; it contains quantum mechanics and its classical limit as well as an infinite number of other theories. Thus, it provides a “bird's eye view” on quantum theory.
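Since the explicit form of relations (1)–(3) is not reproduced in this excerpt, their structure can be sketched as follows (a schematic rendering only; overlines denote ensemble averages, and $\rho(x,t)$, $w(p,t)$ denote the position and momentum probability densities):

```latex
% Classical relations (definition of momentum, Newton's equation):
p = m\,\frac{\mathrm{d}x}{\mathrm{d}t},
\qquad
\frac{\mathrm{d}p}{\mathrm{d}t} = -\,\frac{\partial V}{\partial x};
% Statistical counterparts: observables replaced by averages,
\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{x} = \frac{\overline{p}}{m},
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{p}
  = \overline{\,-\frac{\partial V}{\partial x}\,},
% with the averages defined by
\overline{x} = \int \mathrm{d}x\; x\,\rho(x,t),
\qquad
\overline{p} = \int \mathrm{d}p\; p\,w(p,t).
```

Each average is operationally defined by a large number of individual measurements, which is what gives the two conditions their clear empirical meaning without yet fixing any dynamical law for $\rho$ and $w$.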
Our second postulate was the validity of a continuity equation of the form (7). For the type of theory considered here, the validity of a local conservation law of probability is a very weak assumption—more or less a logical necessity. The special form of the probability current postulated in (7) is suggested by Hamilton-Jacobi theory (this is of course only an issue for spatial dimensions higher than one). It means that an ensemble of particles is considered for which a wave front may be defined. A detailed study of such sets of particle trajectories, which are referred to as “coherent systems”, may be found in . In fluid dynamics , corresponding fields are called “potential flow fields”.
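Schematically (the precise form of (7) is not reproduced in this excerpt), a continuity equation with a gradient-type current reads, in one spatial dimension,

```latex
\frac{\partial \rho}{\partial t}
  + \frac{\partial}{\partial x}
    \!\left( \frac{\rho}{m}\,\frac{\partial S}{\partial x} \right) = 0,
\qquad
j \;=\; \frac{\rho}{m}\,\frac{\partial S}{\partial x},
```

where $S(x,t)$ is a single-valued function whose gradient plays the role of a momentum field, as in Hamilton-Jacobi theory. The restriction to gradient (curl-free) currents is exactly what singles out the “coherent” or “potential flow” ensembles mentioned above.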
The first two postulates led to an infinite number of statistical theories (coupled differential equations for and ) characterized by an unknown term . Only the classical (limit) theory, defined by , allows for an identification of objects independent of the statistics; in this case, the differential equation for does not depend on . In all other (nonclassical) theories, there is a -dependent coupling term preventing such an identification.
Our third postulate was the assumption that the remaining unknown function of and , in the coupled differential equations for and , takes a form which is in agreement with the principle of maximal disorder (or minimal knowledge). This logical principle of simplicity is well known, in the form of a postulate of maximal entropy, from statistical thermodynamics. In the present case, it has to be implemented in a different and more complicated way, as a postulate of minimal Fisher information. This is due to the fact that the present equation for the determination of is a differential equation; that is, it may depend not only on but also on derivatives of . The entropy is a functional which depends only on and is unable to adjust properly with regard to this new “degree of freedom”. Our analysis started in Section 7 with a discussion of the conventional principle of maximal entropy and led to the variational principle (58) in Section 9, which formally describes the principle of maximal disorder in the present context. Finally, in Section 10, (58) has been used to determine the unknown term, which leads to Schrödinger's equation.
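The sense in which minimal Fisher information expresses maximal disorder can be illustrated numerically. By the Stam (Cramér-Rao) inequality, among all densities of fixed variance $\sigma^2$ the Gaussian minimizes the Fisher functional, $I[\rho]\,\sigma^2 \ge 1$. The following sketch (not part of the original derivation; the grid parameters and the comparison density are arbitrary choices) checks this for a unit-variance Gaussian against an equal-variance two-Gaussian mixture:

```python
import numpy as np

# Spatial grid; wide and fine enough for accurate integrals.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def integrate(f):
    """Trapezoidal rule on the fixed grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * dx)

def fisher_information(rho):
    """I[rho] = integral of (d rho / dx)^2 / rho."""
    drho = np.gradient(rho, dx)
    # Guard against underflow to exactly zero in the far tails.
    return integrate(drho**2 / np.maximum(rho, 1e-300))

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Unit-variance Gaussian: analytically I = 1/sigma^2 = 1.
rho_g = fisher_information.__self__ if False else gaussian(x, 0.0, 1.0)
I_g = fisher_information(rho_g)

# Bimodal mixture with the same total variance: a^2 + s^2 = 1.
a, s = 0.8, 0.6
rho_m = 0.5 * (gaussian(x, -a, s) + gaussian(x, a, s))
I_m = fisher_information(rho_m)
```

Running this gives `I_g` very close to 1, and a strictly larger value for the mixture, consistent with the Gaussian being the “most disordered” density at fixed variance.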
Schrödinger's equation says nothing about the calculation of expectation values of -dependent quantities. To reproduce this part of the quantum mechanical formalism, we had to implement a further requirement, namely, conservation of energy in the mean. From this fourth assumption, the standard quantum mechanical result could be recovered for terms of the form , where . These terms are not the only ones occurring in realistic situations. If we want to study the behavior of charged particles in a magnetic field, we should be able to calculate expectation values of terms of the form , where is an arbitrary function of . Such terms (and more generally the inclusion of gauge fields) will be dealt with in future work.
Summarizing, the most important relations of quantum mechanics have been derived from assumptions which may be characterized either as purely statistical or as statistical versions (or continuum versions) of relations of particle physics. The continuity equation and the principle of maximal disorder belong to the former class. The statistical conditions, conservation of energy in the mean, and the special form of the probability current belong to the latter class. These statistical assumptions imply quantum mechanics and are much simpler to understand than the jump into Hilbert space given by (73). Of course, all of these assumptions are relations or structural properties belonging to the quantum-mechanical formalism; it would not be possible to derive quantum mechanics from assumptions which are not quantum-mechanical in nature. However, it is not trivial that these relatively simple and comprehensible assumptions are sufficient to derive the basic relations of the whole formalism.
The above derivation of the most basic equations of quantum mechanics from statistical assumptions presents a strong argument in favor of a statistical (ensemble) interpretation. This becomes even more evident if the relation between quantum mechanics and its classical limit is considered in detail. As discussed in Section 4, the transition from the classical to the quantum mechanical theory is characterized by the elimination of a deterministic element, namely, the (deterministic) functional relation between position and momentum variables. Thus, quantum mechanics contains less deterministic elements (it is “more statistical” in nature) than its classical limit, a result in accordance with the general classification scheme set up in Section 2. This loss of determinism is implicitly contained in the above assumptions and presents the essential “nonclassical” element of the present derivation. It is interesting to compare the present derivation with other derivations of Schrödinger's equation making use of different “nonclassical elements” [35, 43]. The loss of determinism mentioned above is also responsible for the crucial role of the concept of Fisher information in the present work. This concept was realized here in a way different from the one followed previously by Frieden , Frieden and Soffer , and Hall and Reginatto . On the other hand, several aspects of the present work may also be seen as a complement to this previous approach. In particular, the “classical limit theory” which is used as a starting point by these authors may be derived from the first two assumptions of the present work.
Many works on the foundations of quantum mechanics are motivated by the wish to identify deterministic, or at least “classical-probabilistic”, elements in its structure. According to Schrödinger's original point of view, his equation was a classical wave equation, and was (in contrast to the present interpretation as a probability density) an observable field measuring something like the density distribution of an “extended particle”. More recently, this interpretation has been reconsidered in an interesting paper by Barut . But the results of modern high-precision measurements  strongly support the probabilistic interpretation and exclude, in our opinion, this original classical view of Schrödinger.
Since it turned out that probability plays an indispensable role in quantum theory, various attempts have been undertaken to interpret it (at least) as a theory with classical randomness. A well-known example is Nelson's stochastic mechanics , where a stochastic background field of unspecified origin, combined with deterministic mechanics, leads to Schrödinger's equation. There is a certain overlap of ideas between such theories and the present one; in both cases a probabilistic equation is derived from a set of well-defined (partly) probabilistic assumptions. However, the present theory contains neither particle trajectories nor stochastic forces. Probability is introduced in a more abstract way by means of a postulated conservation law and the occurrence of expectation values in the basic equations defining the theory.
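Schematically, Nelson's construction combines a deterministic drift with a Wiener process; in standard notation (with diffusion coefficient $\hbar/2m$, so that the increments satisfy the relation below) the forward stochastic differential equation reads

```latex
\mathrm{d}x(t) \;=\; b\bigl(x(t),t\bigr)\,\mathrm{d}t + \mathrm{d}w(t),
\qquad
\mathbb{E}\!\left[\mathrm{d}w(t)^{2}\right] \;=\; \frac{\hbar}{m}\,\mathrm{d}t,
```

where the drift field $b$ is determined self-consistently from Nelson's dynamical assumptions, and Schrödinger's equation follows. The contrast with the present theory is visible already at this level: here there is no trajectory $x(t)$ and no stochastic force; probability enters only through the conservation law and the expectation values in the basic equations.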
Finally, a theory by Khrennikov [50, 51] should be mentioned which continues and extends both the stochastic approach and the original Schrödinger point of view, with the aim of reconciling the latter with the probability concept. To achieve this goal, classical fluctuating fields in an infinite-dimensional phase space are introduced by means of appropriate mathematical axioms. Despite the completely different language, this work is basically written in the same spirit as stochastic theories. As regards a comparison with the present theory, similar remarks as above apply.
Despite the similarities mentioned above, the methodological basis of the present work differs considerably from the stochastic approach. Indeterminism, as regards individual events in the microworld, is considered as an irreducible feature of nature. This position is not incompatible with the fact that macroscopic bodies follow deterministic laws; these bodies are aggregates of a large number of individual particles, and it is reasonable (looking at examples from many-body physics) to expect that all quantum uncertainties will be somehow “washed out” for large ; a similar problem has been analyzed in a remarkable paper by Cini and Serva . Classical physics, as it is generally understood (neglecting back-reaction from self-fields; see  for more details), contains less uncertainty than quantum mechanics. It is therefore impossible to derive quantum mechanics from classical physics, just as it would be impossible to derive classical statistical physics from thermodynamics.
12. Concluding Remarks
The present derivation was based on the assumption that dynamical predictions are only possible for statistical averages and not for single events—leaving completely open the question why predictions on single events are impossible. This deep question remains unanswered; some speculative remarks on a possible source of the indeterminacy have been given elsewhere .
As regards the generalization of the present quantization method to fields, two different routes seem feasible. The first makes use of the well-known fact that the formalism of second quantization may be deduced, in the limit , from the N-particle Schrödinger equation . Thus, proceeding along this route, the decisive step is the derivation of the many-particle Schrödinger equation. This can be done in the present approach; all assumptions can be generalized in a most natural way (just increasing the number of variables) to cover the many-body situation. The second route starts from classical fields and applies the usual quantization rules but now for an infinite number of degrees of freedom. In order to generalize the present approach in an analogous way, the quantity must be interpreted in a different way, as a density of a stream of particles in the framework of an approximate continuum theory. This would be an observable quantity in the sense of Schrödinger and de Broglie. In this respect, the resulting differential equation, which is again (32), should be considered as a classical field equation—despite the presence of a parameter (solutions of this prequantum Schrödinger equation have been studied in the literature ). However, it would be only approximately true—what is actually observed are particles and not fields. This would provide a motivation for a more accurate description (second quantization). This second route to field quantization presents an open problem for future research. It might be interesting in view of some conceptual problems of quantum field theory.
“Interaction between individual objects” and the corresponding notion of force are macroscopic concepts. In the microscopic domain, where according to the present point of view only statistical laws are valid, the concept of force loses its meaning. In fact, in the quantum-mechanical formalism, “interaction” is not described in terms of forces but in terms of potentials (as is well known, this leads to a number of subtle questions concerning the role of the vector potential in quantum mechanics). The relation between these two concepts is still not completely understood; the present statistical approach offers a new point of view to study this problem .
In a previous work  of the present author, Schrödinger's equation has been derived from a different set of assumptions, including the postulate that the dynamic equation of state may be formulated by means of a complex-valued state variable . The physical meaning of this assumption is unclear, even if it sounds plausible from a mathematical point of view. The present paper may be seen as a continuation and completion of this previous work, insofar as this purely mathematical assumption has now been replaced by other requirements which may be interpreted more easily in physical terms. Finally, we mention that the present theory has already been generalized to three spatial dimensions, gauge fields, and spin . Important open questions for future research, extending the range of validity of the present approach, include a higher-dimensional configuration space (several particles), a relativistic formulation, and an extension of the present method to fields.
- L. E. Ballentine, “The statistical interpretation of quantum mechanics,” Reviews of Modern Physics, vol. 42, no. 4, pp. 358–381, 1970.
- J. A. Barrett, The Quantum Mechanics of Minds and Worlds, Oxford University Press, Oxford, UK, 1999.
- M. J. W. Hall and M. Reginatto, “Schrödinger equation from an exact uncertainty principle,” Journal of Physics A, vol. 35, no. 14, pp. 3289–3303, 2002.
- M. Reginatto, “Derivation of the equations of nonrelativistic quantum mechanics using the principle of minimum Fisher information,” Physical Review A, vol. 58, no. 3, pp. 1775–1778, 1998.
- J. Syska, “Fisher information and quantum-classical field theory: classical statistics similarity,” Physica Status Solidi B, vol. 244, no. 7, pp. 2531–2537, 2007.
- U. Klein, “Schrödinger's equation with gauge coupling derived from a continuity equation,” Foundations of Physics, vol. 39, no. 8, pp. 964–995, 2009.
- G. Kaniadakis, “Statistical origin of quantum mechanics,” Physica A, vol. 307, no. 1-2, pp. 172–184, 2002.
- J. E. Moyal, “Quantum mechanics as a statistical theory,” Proceedings of the Cambridge Philosophical Society, vol. 45, p. 99, 1949.
- A. Peres, Quantum Theory: Concepts and Methods (Fundamental Theories of Physics), vol. 57, Kluwer Academic Publishers, Boston, Mass, USA, 1995.
- T. T. Soong, Probability and Statistics for Engineers, John Wiley & Sons, Chichester, UK, 2005.
- T. F. Jordan, “How relativity determines the Hamiltonian description of an object in classical mechanics,” Physics Letters. Section A, vol. 310, no. 2-3, pp. 123–130, 2003.
- T. F. Jordan, “Why is the momentum,” American Journal of Physics, vol. 43, p. 1089, 1975.
- A. C. De La Torre, “On randomness in quantum mechanics,” European Journal of Physics, vol. 29, no. 3, pp. 567–575, 2008.
- N. Rosen, “The relation between classical and quantum mechanics,” American Journal of Physics, vol. 32, pp. 597–600, 1964.
- R. Schiller, “Quasi-classical theory of the nonspinning electron,” Physical Review, vol. 125, no. 3, pp. 1100–1108, 1962.
- H. Nikolić, “Classical mechanics without determinism,” Foundations of Physics Letters, vol. 19, no. 6, pp. 553–566, 2006.
- E. Schrödinger, “Quantisierung als Eigenwertproblem, Erste Mitteilung,” Annalen der Physik, vol. 79, p. 361, 1926.
- D. Sen, S. K. Das, A. N. Basu, and S. Sengupta, “Significance of Ehrenfest theorem in quantum-classical relationship,” Current Science, vol. 80, no. 4, pp. 536–541, 2001.
- A. Khrennikov, “Interference of probabilities and number field structure of quantum models,” Annalen der Physik (Leipzig), vol. 12, no. 10, pp. 575–585, 2003.
- A. Kovner and B. Rosenstein, “On quantisation ambiguity,” Journal of Physics A, vol. 20, no. 10, article no. 015, pp. 2709–2719, 1987.
- J. R. Shewell, “On the formation of quantum-mechanical operators,” American Journal of Physics, vol. 27, pp. 16–21, 1959.
- C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, Urbana, Ill, USA, 1949.
- A. Ben-Naim, Statistical Thermodynamics Based on Information, World Scientific Publishing, Singapore, 2008.
- A. V. Khinchin, Mathematical Foundations of Information Theory, Dover, New York, NY, USA, 1957.
- R. A. Fisher, Statistical Methods and Scientific Inference, Oliver and Boyd, Edinburgh, UK, 1956.
- B. R. Frieden, “Fisher information as the basis for the Schrödinger wave equation,” American Journal of Physics, vol. 57, no. 11, pp. 1004–1008, 1989.
- B. R. Frieden, Science from Fisher Information, a Unification, Cambridge University Press, Cambridge, UK, 2004.
- M. J. W. Hall, “Quantum properties of classical fisher information,” Physical Review A, vol. 62, no. 1, Article ID 012107, 6 pages, 2000.
- P. Ván, “Unique additive information measures-Boltzmann-Gibbs-Shannon, Fisher and beyond,” Physica A, vol. 365, no. 1, pp. 28–33, 2006.
- S. Kullback, Information Theory and Statistics, John Wiley & Sons, New York, NY, USA, 1959.
- A. Caticha, “Relative entropy and inductive inference,” in Proceedings of the 27th AIP International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, G. Erickson and Y. Zhai, Eds., vol. 707, p. 75, 2004.
- E. T. Jaynes, Probability Theory: The Logic of Science, Cambridge University Press, London, UK, 2003.
- B. van Brunt, The Calculus of Variations, Springer, New York, NY, USA, 2004.
- B. R. Frieden and B. H. Soffer, “Lagrangians of physics and the game of Fisher-information transfer,” Physical Review E, vol. 52, no. 3, pp. 2274–2286, 1995.
- Y. C. Lee and W. Zhu, “The principle of minimal quantum fluctuations for the time-dependent Schrödinger equation,” Journal of Physics A, vol. 32, no. 17, pp. 3127–3131, 1999.
- U. Klein, “Lecture Notes on Superconductivity,” unpublished.
- M. Giaquinta and S. Hildebrandt, Calculus of Variations I, Springer, Berlin, Germany, 2004.
- E. C. G. Sudarshan and N. Mukunda, Classical Dynamics: A Modern Perspective, John Wiley & Sons, New York, NY, USA, 1974.
- D. Home and M. A. B. Whitaker, “Ensemble interpretations of quantum mechanics. A modern perspective,” Physics Reports, vol. 210, no. 4, pp. 223–317, 1992.
- O. Alter and Y. Yamamoto, Quantum Measurement of a Single System, John Wiley & Sons, New York, NY, USA, 2001.
- A. Messiah, Quantum Mechanics, vol. 1, North-Holland, Amsterdam, The Netherlands, 1961.
- L. Motz, “Quantization and the classical Hamilton-Jacobi equation,” Physical Review, vol. 126, no. 1, pp. 378–382, 1962.
- M. J. W. Hall and M. Reginatto, “Quantum mechanics from a Heisenberg-type equality,” Fortschritte der Physik, vol. 50, no. 5-7, pp. 646–651, 2002.
- U. Klein, “The statistical origins of gauge coupling and spin,” preprint.
- J. L. Synge, “Classical dynamics,” in Encyclopedia of Physics: Principles of Classical Mechanics and Field Theory, pp. 1–223, Springer, Berlin, Germany, 1960.
- T. E. Faber, Fluid Dynamics for Physicists, Cambridge University Press, Cambridge, UK, 1995.
- A. O. Barut, “Combining relativity and quantum mechanics: Schrödinger's interpretation of ψ,” Foundations of Physics, vol. 18, no. 1, pp. 95–105, 1988.
- A. Tonomura, J. Endo, T. Matsuda, and T. Kawasaki, “Demonstration of single-electron buildup of an interference pattern,” American Journal of Physics, vol. 57, no. 2, pp. 117–120, 1989.
- E. Nelson, Quantum Fluctuations, Princeton University Press, Princeton, NJ, USA, 1985.
- A. Khrennikov, “A pre-quantum classical statistical model with infinite-dimensional phase space,” Journal of Physics A, vol. 38, no. 41, pp. 9051–9073, 2005.
- A. Khrennikov, “Prequantum classical statistical field theory: complex representation, Hamilton-Schrödinger equation, and interpretation of stationary states,” Foundations of Physics Letters, vol. 19, no. 4, pp. 299–319, 2006.
- M. Cini and M. Serva, “Where is an object before you look at it?” Foundations of Physics Letters, vol. 3, no. 2, pp. 129–151, 1990.
- A. L. Fetter and J. D. Walecka, Quantum Theory of Many-Particle Systems, chapter 1, McGraw-Hill, New York, NY, USA, 1968.
- E. Fick, Einführung in die Grundlagen der Quantentheorie, Akademische Verlagsgesellschaft, Leipzig, Germany, 1968.
Copyright © 2010 U. Klein. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.