Abstract

Why did Planck (1900), referring to Boltzmann’s 1877 probabilistic treatment, obtain his quantum distribution function while Boltzmann did not? To answer this question, both treatments are compared on the basis of Boltzmann’s 1868 three-level scheme (configuration—occupation—occupancy). Some calculations by Planck (1900, 1901, and 1913) and Einstein (1907) are also sketched. For obtaining a quantum distribution, it is crucial to stick to a discrete energy spectrum and to make the limit transitions to infinity at the right place. For correct state counting, the concept of interchangeability of particles is superior to that of indistinguishability.

1. Introduction

Very recently, Sharp and Matschinsky have translated and commented on Boltzmann’s famous 1877 paper [1] “On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations regarding the Conditions for Thermal Equilibrium” [2]. In doing so, they have done a great service to the scientific community.

Barely any of Boltzmann’s original scientific work is available in translation. This is remarkable given his central role in the development of both equilibrium and non-equilibrium statistical mechanics, his statistical mechanical explanation of entropy, and our understanding of the Second Law of thermodynamics. What Boltzmann actually wrote on these subjects is rarely quoted directly, his methods are not fully appreciated, and key concepts have been misinterpreted. Yet his work remains relevant today. (Ibid., pp. 1971f.)

The paper “exemplifies several of Boltzmann’s most important contributions to modern physics. These include

(1) The eponymous Boltzmann distribution, relating the energy scaled by the mean kinetic energy (temperature)…
(2) Much of the theoretical apparatus of statistical mechanics is developed with great clarity… His terminology…is incisive, in some ways superior to the two modern terms macro-state and micro-state…
(3) The statistical mechanical formulation of entropy…
(4) …Boltzmann also clearly demonstrates that there are two distinct contributions to entropy, arising from the distribution of heat (kinetic energy) and the distribution in space of atoms or molecules… It is fitting that Boltzmann was the one to discover the third fundamental contribution to entropy, namely radiation, by deriving the Stefan-Boltzmann Law [3]…” (ibid., pp. 1972f.)
(5) Boltzmann’s “permutability measure,” Ω (3/2 of Clausius’ entropy), is constructed as an extensive quantity. “Thus Boltzmann never encounters the apparent Gibbs paradox for the entropy of mixing of identical gases. Furthermore, with Boltzmann’s Permutabilitätmass method for counting states, there is no need for a posteriori division by N! to “correct” the derivation using the “somewhat mystical arguments of Gibbs and Planck [4],” nor a need to appeal to quantum indistinguishability, which has been implausibly described as the appearance of quantum effects at the macroscopic classical level [5]. Subsequently, at least four distinguished practitioners of statistical mechanics have pointed out that correct counting of states à la Boltzmann obviates the need for the spurious indistinguishability/N! term: Ehrenfest and Trkal [6], Van Kampen [7], Jaynes [8], Swendsen [9] (and possibly Pauli [10]). This has had little impact on textbooks of statistical mechanics. An exception is the treatise by Gallavotti [11]” (Ibid., p. 1974).

Indeed, Jaynes [8] argues similarly: “Some important facts about thermodynamics have not been understood by others to this day, nearly as well as Gibbs understood them over 100 years ago [12]… For 80 years it has seemed natural that, to find what Gibbs had to say about this, one should turn to his Statistical Mechanics. For 60 years, textbooks and teachers (including, regrettably, the present writer) have impressed upon students how remarkable it was that Gibbs, already in [13], had been able to hit upon this paradox which foretold—and had its resolution only in—quantum theory with its lore about indistinguishable particles, Bose and Fermi statistics, etc. It was therefore a shock to discover that…Gibbs [in “Heterogeneous Equilibrium”] displays a full understanding of this problem, and disposes of it without a trace of that confusion over the “meaning of entropy” or “operational distinguishability of particles” on which later writers have stumbled. He goes straight to the heart of the matter as a simple technical detail, easily understood as soon as one has grasped the full meanings of the words “state” and “reversible” as they are used in thermodynamics. In short, quantum theory did not resolve any paradox, because there was no paradox. Today, the universally taught conventional wisdom holds that “Classical mechanics failed to yield an entropy function that was extensive, and so statistical mechanics based on classical theory gives qualitatively wrong predictions of vapor pressures and equilibrium constants, which was cleared up only by quantum theory in which the interchange of identical particles is not a real event”. We argue that, on the contrary, phenomenological thermodynamics, classical statistics, and quantum statistics are all in just the same logical position with regard to extensivity of entropy; they are silent on the issue, neither requiring it nor forbidding it.”

Last but not least, Boltzmann’s statistical definition of entropy is the first one that applies to nonequilibrium states, thus “opening the door to the statistical mechanics of non-equilibrium states and irreversible processes” [2, p. 1974].

In this paper, I will concentrate on Boltzmann’s manner of state counting and its consequences for classical and quantum statistics. Boltzmann accounts for the interchangeability of equal particles; nevertheless, he does not obtain Planck’s distribution function, while Planck—starting with similar probabilistic settings [14, 15] or even with the same setting [4]—did.

Moreover, I will sketch some calculations by Einstein [16]. In Einstein’s pioneering paper about the specific heat of solids, Planck’s distribution law emerges from the discreteness of the energy spectrum of Planck’s resonators, when compared with the continuous energy spectrum of classical resonators. From this, Einstein concludes quantization to be a selection problem of states.

2. Boltzmann’s 1868 Probabilistic Scheme

According to Bach [17, Sections 3.2.1, 5.1], Boltzmann [18] invents the scheme of distributing n particles over m cells. Since, in Boltzmann’s 1877 paper, the cells are energy levels, I will continue with different kinds of cells.

This scheme involves 3 levels of description:
(1) configurations,
(2) occupation numbers,
(3) occupancy numbers.

2.1. Level 1: Configurations

A configuration is the most detailed description of the distribution of the particles over the cells. For each particle, i (i = 1, …, n), it provides the number, j_i (1 ≤ j_i ≤ m), of the cell in which it is located.

This can be realized by means of a matrix, D = (d_ij), where d_ij = 1 (0) if particle i is (is not) in cell j. This matrix can be condensed into the vector j = (j_1, …, j_n), where j_i is the number of the cell in which particle i is located (1 ≤ j_i ≤ m). There are altogether m^n different configurations [17, p. 58].

The configuration is a complete description. However, since the particles are identical, this description is redundant when applied to physical systems like monoatomic gases. The interchange of two atoms does not change the physical properties of the gas. This fact is accounted for in levels 2 and 3.

2.2. Level 2: Occupation Numbers

Occupation numbers represent a condensed description of the distribution of the particles over the cells. They remove the permutation redundancy in the configurations just mentioned. Two configuration vectors, j, which contain the same numbers in merely different sequence, such as (1, 1, 2) and (2, 1, 1), are physically equivalent. In other words, relevant is not the complete information of which particle is in a given cell, but only the numbers of particles in the cells. The latter are recorded in the occupation number, k = (k_1, …, k_m), where k_j is the number of particles in cell j (j = 1, …, m). The occupation numbers are invariant under any permutation of the particles [17, p. 59].

We have the obvious constraint
k_1 + k_2 + ⋯ + k_m = n. (4)
Altogether, there are
(n + m − 1)!/[n! (m − 1)!] (5)
different occupation numbers, with
n!/(k_1! k_2! ⋯ k_m!) (6)
configurations for each occupation number, k [17, p. 59].

2.3. Level 3: Occupancy Numbers

A further condensation of the state description consists in the question: how many cells host 0 particles, 1 particle, …, n particles? It is answered by the occupancy number vector, z = (z_0, z_1, …, z_n). There are z_k cells with k particles, where z_k is the number of indices j with k_j = k [17, eq. (3.55)].

Now, we have two constraints:
z_0 + z_1 + ⋯ + z_n = m, (8a)
0·z_0 + 1·z_1 + ⋯ + n·z_n = n. (8b)
We are now prepared to compare Boltzmann’s and Planck’s treatments on an equal footing, in particular, their basic entities, the “complexions.”
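As a concrete illustration of the three levels, the following sketch (my own, not from Bach [17]; the values n = 3 particles, m = 2 cells are purely illustrative) enumerates all configurations, condenses them to occupation and occupancy numbers, and checks the counting formulae (5) and (6) and the constraints (8a) and (8b):

```python
from itertools import product
from math import comb, factorial
from collections import Counter

n, m = 3, 2  # particles, cells (illustrative values)

# Level 1: configurations j = (j_1, ..., j_n), cell index of each particle
configs = list(product(range(m), repeat=n))
assert len(configs) == m**n  # 2^3 = 8 configurations

# Level 2: occupation numbers k = (k_1, ..., k_m)
def occupation(j):
    return tuple(j.count(c) for c in range(m))

occ = Counter(occupation(j) for j in configs)
assert len(occ) == comb(n + m - 1, n)  # formula (5): 4 occupation numbers
for k, count in occ.items():
    multinom = factorial(n)
    for kj in k:
        multinom //= factorial(kj)
    assert count == multinom  # formula (6): configurations per occupation number

# Level 3: occupancy numbers z = (z_0, ..., z_n)
def occupancy(k):
    return tuple(sum(1 for kj in k if kj == r) for r in range(n + 1))

for k in occ:
    z = occupancy(k)
    assert sum(z) == m                                 # constraint (8a)
    assert sum(r * zr for r, zr in enumerate(z)) == n  # constraint (8b)
```

The permutation invariance of level 2 is visible directly: all m^n configurations collapse onto the occupation numbers, with the multinomial coefficient (6) counting the collapse.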

3. Boltzmann’s 1877 Manner of State Counting

3.1. Boltzmann’s Discrete Gas Model

For simplicity, Boltzmann begins (p. 1976) with an ideal gas model, in which each of the n molecules can assume only “a finite number of velocities, such as 0, 1/q, 2/q, 3/q, …, p/q, where p and q are arbitrary finite numbers. Upon colliding, two molecules may exchange velocities, but after the collision both molecules still have one of the above velocities…” (p. 1976). Accordingly, the kinetic energy of each molecule can also assume only a finite number of values. However, when considering solely the distribution of the various values of the kinetic energy over the molecules, it is simpler to assume the arithmetic progression
0, ε, 2ε, 3ε, …, pε.
Upon colliding, two molecules may exchange energy, but after the collision both molecules still have one of the above energies.

The total kinetic energy of the gas is L = λε = const.

3.2. The Kinetic Energy Distribution

There are w_0 molecules having kinetic energy 0, w_1 molecules having kinetic energy 1ε, up to the w_p molecules having kinetic energy pε. The set (w_0, w_1, …, w_p) is the distribution of the kinetic energy “among” the molecules (cf. p. 1977).

(A) One can interpret the set (w_0, w_1, …, w_p) as distribution of the λ energy portions, ε, over the molecules, the probabilistic scheme being: λ particles (energy portions) over n cells (molecules). Then, w_k tells the number of cells hosting k particles. This is the component z_k of the occupancy number, z.

(B) In turn, one can reverse the role of molecules and energy portions and interpret that set as distribution of the molecules over the energy levels, 0, ε, …, pε, the probabilistic scheme being: n particles (molecules) over p + 1 cells (energy levels). Then, w_j is the number of particles in cell j. This is the component k_j of the occupation number, k.

The set (w_0, …, w_p) is subject to the constraints
w_0 + w_1 + ⋯ + w_p = n,
the number of molecules in the gas, and
0·w_0 + 1·w_1 + ⋯ + p·w_p = λ,
the total energy of the gas in units of ε. For case (A), they correspond to the constraints (8a) and (8b). For case (B), the first is identical to (4), while the second results additionally from the meaning of the cells as energy levels.

3.3. Complexions

Boltzmann continues, “As a preliminary, we will use a simpler schematic approach to the problem, instead of the exact case” (p. 1977). The energy levels are distributed in all possible ways among the n molecules, where L = λε = const. Any such distribution, in which each individual molecule has a definite kinetic energy, “we call a complexion…” (p. 1977). In other words, a complexion is the set (j_1, j_2, …, j_n), where j_i is the energy of molecule i in units of ε (0 ≤ j_i ≤ p).

(A) Literally, this means the distribution of the possible amounts of energy, 0, ε, …, pε, over the molecules, the probabilistic scheme being: λ energy portions over n cells (molecules). Then, w_k means the number of cells (molecules) hosting k particles (energy portions). This is the component z_k of the occupancy number, z.

(B) In turn, one can reverse the role of molecules and energy levels, again, and understand the complexion as distribution of the molecules over the energy levels, 0, ε, …, pε, the probabilistic scheme being: n particles (molecules) over p + 1 cells (energy levels). Then, j_i = k means that particle i is in cell k. This is the component j_i of the configuration, j. Boltzmann’s formula for the number of complexions below supports this interpretation.

3.4. The State Distribution

This distribution “specifies the number P of complexions for that distribution; in other words, it determines the likelihood of that state distribution. Dividing the number P by the number of all possible complexions, we get the probability of the state distribution” (p. 1977).

Since a distribution of states does not determine kinetic energies exactly, the goal is to describe the state distribution by writing as many zeros [0] as molecules with zero kinetic energy (w_0), ones [1] for those with kinetic energy ε, etc. All these zeros [0], ones [1], etc. are the elements defining the state distribution. (p. 1977).

As an example, Boltzmann considers the case n = 7, λ = 7, p = 7. “There are then 15 possible state distributions” (p. 1978). They are listed in Table 1.

“The first state distribution, e.g., has 6 molecules with zero kinetic energy, and the seventh has kinetic energy 7ε. So w_0 = 6, w_7 = 1, and w_1 = w_2 = ⋯ = w_6 = 0” (p. 1978). Notice that the numbers in the columns are not the w_j, but the configuration numbers, j_i.

For the reader’s convenience, I add Table 2 with the corresponding values of w_j.

The rows have been regrouped along increasing values of P, in order to demonstrate the fact that sequences of j_i which differ just in the order of the numbers have got the same probability, P. There is a “degeneracy” in that different state distributions yield the same value of P, if the set of numbers w_j is the same; see below.

3.5. Calculation of the Number of Complexions, P

“It is now immediately clear that the number P for each state distribution is exactly the same as the number of permutations of which the elements of the state distribution are capable, and that is why the number P is the desired measure of the permutability of the corresponding distribution of states. Once we have specified every possible complexion, we have also all possible state distributions, the latter differing from the former only by immaterial permutations of molecular labels. All those complexions which contain the same number of zeros, the same number of ones etc., differing from each other merely by different arrangements of elements, will result in the same state distribution; the number of complexions forming the same state distribution, and which we have denoted by P, must be equal to the number of permutations which the elements of the state distribution are capable of.” (pp. 1977f.)

In other words, it is not relevant, which molecule has got which (kinetic) energy, but how many molecules have got a given amount of energy. The molecules having got the same energy are interchangeable, while molecules with different energies are not.7

Thus, the number P is obtained through permutating the molecules as
P = n!/(w_0! w_1! ⋯ w_p!).
The denominator arises, “since w_0 of the elements are mutually identical. Similarly with the w_1, w_2, etc. elements” (p. 1979).

This formula is isomorphic with formula (6) for the number of configurations for a given occupation number, k. This indicates that Boltzmann’s “state distribution,” (w_0, w_1, …, w_p), represents an occupation number, while his complexions are configurations.

The relative probability equals W = P/J, where J is “the sum of the permutations P for all possible state distributions” (p. 1979). In Table 1, J = 1716 (p. 1978), where
J = (λ + n − 1)!/[λ! (n − 1)!]
(p. 1983). This is isomorphic with formula (5), which suggests the scheme (A) of Section 3.2. However, Boltzmann has thereby already made the limit transition p → ∞, since J counts all complexions with j_1 + ⋯ + j_n = λ irrespective of the bound j_i ≤ p. This contradicts the condition of a finite number of energy levels. Therefore, I will not go into more details.
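Boltzmann’s n = λ = p = 7 example can be checked directly. The following sketch (illustrative, my own) enumerates all state distributions (w_0, …, w_7), confirms that there are 15 of them, and verifies that their permutabilities sum to J = 1716:

```python
from math import comb, factorial

n, lam, p = 7, 7, 7  # molecules, total energy in units of eps, highest level

# enumerate all state distributions (w_0, ..., w_p) with
# sum_j w_j = n and sum_j j*w_j = lam
def distributions(level, molecules, energy):
    if level == 0:
        # only level 0 is left; it must absorb all remaining molecules
        return [(molecules,)] if energy == 0 else []
    out = []
    for w in range(min(molecules, energy // level) + 1):
        for rest in distributions(level - 1, molecules - w, energy - level * w):
            out.append(rest + (w,))
    return out

dists = distributions(p, n, lam)
assert len(dists) == 15  # Boltzmann's 15 state distributions

# permutability P = n!/(w_0! w_1! ... w_p!) and its sum J
def perm(w):
    val = factorial(n)
    for wj in w:
        val //= factorial(wj)
    return val

J = sum(perm(w) for w in dists)
assert J == 1716 == comb(n + lam - 1, lam)  # total number of complexions
```

The final assertion also exhibits the isomorphism noted in the text: the total number of complexions equals the binomial coefficient of formula (5) for λ particles over n cells.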

3.6. Entropy [4]

Boltzmann considered the entropy only for the continuum case. Therefore, I refer to Planck’s lectures “Theory of Heat Radiation” in order to show that Boltzmann’s formula yields an extensive entropy.

For large values of n and k_j, Stirling’s formula allows for simplifying (6) as
W = n!/(k_1! ⋯ k_m!) ≈ n^n/(k_1^k_1 ⋯ k_m^k_m)
(Planck numbers the cells from 1 to m and writes N for n). This elegant form yields immediately the entropy, S = k ln W, as
S = k (n ln n − Σ_j k_j ln k_j).
Thus, for classical gases, Planck obtains essentially the same result as Boltzmann, where the cells are now finite domains of the volume into which the gas molecules are enclosed. What, then, is the difference from the quantum case?
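The extensivity of this entropy is easy to check numerically. A minimal sketch (illustrative occupation numbers; my own) compares the exact ln W with the crude Stirling form ln x! ≈ x ln x − x, whose linear terms cancel in ln W, and verifies that doubling the system exactly doubles the entropy:

```python
from math import lgamma, log

def ln_W(ks):  # exact ln of the multinomial n!/(k_1! ... k_m!)
    n = sum(ks)
    return lgamma(n + 1) - sum(lgamma(k + 1) for k in ks)

def ln_W_stirling(ks):  # ln W with ln x! ~ x ln x - x; the -x terms cancel
    n = sum(ks)
    return n * log(n) - sum(k * log(k) for k in ks if k > 0)

ks = [3000, 5000, 2000]  # illustrative occupation numbers
assert abs(ln_W(ks) - ln_W_stirling(ks)) / ln_W(ks) < 0.01

# doubling all occupation numbers exactly doubles the entropy: it is
# extensive with no a posteriori division by N! needed
ratio = ln_W_stirling([2 * k for k in ks]) / ln_W_stirling(ks)
assert abs(ratio - 2) < 1e-12
```

The exact doubling is an algebraic consequence of the constraint Σ_j k_j = n: the ln 2 terms produced by the doubling cancel between numerator and denominator.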

4. Planck’s Manner of State Counting for Resonators/Oscillators

4.1. Planck’s Radiation Formula I

Planck used the rather exotic quantity [19]
R = −(∂²S/∂U²)^(−1).
Here, S and U are the entropy and the internal energy of a resonator, Λ is the intensity of the radiation entropy and K is the intensity of the radiation per polarization direction, ν is the radiation frequency, and c is the speed of light in vacuo; the index “0” indicates equilibrium values. The case
∂²S/∂U² = −α/U
leads to Wien’s radiation formula [19, 20]. Its simplest generalization reads [20]
∂²S/∂U² = −α/[U(β + U)].
Using the relation
K = (ν²/c²) U
and Wien’s displacement law in the appropriate form [20], this leads to “the two-parametric radiation formula” (Ibid.)
E = C λ^(−5)/(e^(c₂/λT) − 1).
It is in agreement with the then available experimental data, in particular as concerns the deviations from Wien’s radiation law. Notice that other radiation formulae of that time did so, too [21–23], [20, refs. 2, 4, and 5].

The “−1” in the denominator makes the difference to the formulae by Maxwell, Boltzmann, and Wien.

4.2. Planck’s Step to Quantum Physics

Immediately after Planck’s talk in October 1900, where he presented his novel formula, Rubens and Kurlbaum went to their closely located laboratory to verify it and told him the following Sunday morning that it indeed fits their data clearly better than the formulae by Wien, Thiesen, and Lord Rayleigh ([24]; cf. also [25]). This brought Planck the most strained weeks of his life, spent in search of a physical justification of this formula. Having found no other way (although being an atomist, he had worked solely on continuum theories), he eventually resorted to Boltzmann’s 1877 probability approach.

Thus, Planck [14] considers a closed system of linear monochromatic resonators weakly interacting with the electromagnetic radiation surrounding them. N resonators have got the frequency ν, N′ resonators have the frequency ν′, and so on, where N, N′, … are large numbers. The question is how, in a stationary state, the total (field) energy, E_t, is distributed among the resonators and the electromagnetic field between them (radiation).

Assume that the set of resonators with frequency ν has got the field energy E, the set of resonators with frequency ν′ the field energy E′, and so on. The field energy of all resonators is
E_0 = E + E′ + E″ + ⋯.
Now, one has to distribute the energy E among the N resonators of frequency ν, and so on. If E is a continuous quantity, there are infinitely many possibilities for that. The following quotation describes Planck’s step to quantum physics.

“We consider, however—and this is the most essential point of the whole calculation—E to consist of a specific number of finite equal parts. For it we use the natural constant h = 6.55 × 10^(−27) erg s. This constant, multiplied with the common oscillation number (frequency), ν, yields the energy element, ε = hν, in erg. Through division of E by ε we obtain the number, P, of energy elements, which are to be distributed among the N resonators.” [14]

4.3. Planck’s 1900 Probabilistic Treatment

Referring to Boltzmann, Planck calls a “complexion” the concrete distribution of the P “energy elements” over the N resonators. For P = 100 “energy elements” on N = 10 resonators, Planck writes down the following example:

Resonator no.:    1   2   3   4   5   6   7   8   9   10
Energy elements:  7  38  11   0   9   2  20   4   4    5

Obviously, this complexion, that is, the set (k_1, …, k_10) = (7, 38, 11, 0, 9, 2, 20, 4, 4, 5), corresponds to the occupation number, k, in the probabilistic scheme: P energy elements (particles) over N resonators (cells). Two complexions are considered to be different, if the numbers in the second row are the same, but in different sequence.

Then, the number of different complexions for this kind of resonators equals (N ≫ 1, P ≫ 1)
R = (N + P − 1)!/[(N − 1)! P!] ≈ (N + P)^(N+P)/(N^N P^P). (20)
This formula is isomorphic with formula (5) for the number of occupation numbers for the same probabilistic scheme, P particles over N cells. Hence, Planck’s complexions (occupation numbers) are not Boltzmann’s complexions (configurations). Accordingly, Planck’s “permutability measure” (20) is the number of occupation numbers, while Boltzmann’s “permutability measure” is the number of configurations for a given occupation number. But this is not the key difference. We will see that there are other possibilities of differentiation between classical and quantum results.
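A quick numerical sketch (my own; using Planck’s example values N = 10, P = 100) of the complexion count and of the Stirling-type approximation (N + P)^(N+P)/(N^N P^P):

```python
from math import comb, lgamma, log

# Planck's 1900 example: P = 100 energy elements on N = 10 resonators
N, P = 10, 100
R = comb(N + P - 1, P)  # (N + P - 1)! / ((N - 1)! P!)
assert R == comb(N + P - 1, N - 1)  # binomial symmetry

def ln_R(N, P):  # exact logarithm of the complexion count
    return lgamma(N + P) - lgamma(N) - lgamma(P + 1)

def ln_R_stirling(N, P):  # approximation (N+P)^(N+P)/(N^N P^P)
    return (N + P) * log(N + P) - N * log(N) - P * log(P)

# crude for Planck's small example, excellent for macroscopic numbers
assert abs(ln_R(N, P) - ln_R_stirling(N, P)) < 0.2 * ln_R(N, P)
big_N, big_P = 10**4, 10**5
assert abs(ln_R(big_N, big_P) - ln_R_stirling(big_N, big_P)) < 1e-3 * ln_R(big_N, big_P)
```

For N = 10, P = 100 the exact count R is of the order 4 × 10^12, so only its logarithm is of practical interest; the Stirling form is what Planck actually used.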

Since R is a relative probability (see Boltzmann above), the relative probability for all resonators of all frequencies equals the product
W = R · R′ · R″ ⋯.

4.4. Entropy and Energy Density

The value of W depends on the set (E, E′, E″, …). The corresponding entropy is, up to a constant, S_t = k ln W. For this, Planck asks for the maximum value, W_max, of W over all sets (E, E′, E″, …) obeying the condition E_0 = E + E′ + E″ + ⋯.

He states that “all quantities E, E′, E″, … can be expressed through the quantity U.” U = E/N equals the (average) energy of a resonator of frequency ν.

The energy density of the radiation outside the resonators is determined by that of the oscillators as
u_ν = (8πν²/c³) U.
Given U, this determines the value of u_ν, too.

The temperature, T, is obtained “by means of a second natural constant, k = 1.346 × 10^(−16) erg/grd, through the equation
1/T = dS/dU.
The product N S is the entropy of the system of resonators; it is the sum of the entropies of all single resonators.”

4.5. Planck’s Radiation Formula II

Then, a “hassle-free” calculation leads to the expression
u_ν = (8πhν³/c³) · 1/(e^(hν/kT) − 1).
It corresponds exactly to the earlier spectral formula of Section 4.1.

The corresponding calculations are not provided in Planck’s talk [14]. I guess that he has proceeded as follows. According to (20),
S_t = k ln R = k[(N + P) ln(N + P) − N ln N − P ln P].
On the other hand, the maximization of ln W = ln R + ln R′ + ⋯ (replacing E with Pε, E′ with P′ε′, and so on) under the condition E_0 = const, that is,
Pε + P′ε′ + ⋯ = E_0 = const,
means
∂/∂P [ln R − βPε] = 0,
β being the Lagrangian multiplier. Here,
ln[(N + P)/P] = βε.
This equation complies with 1/T = dS/dU and (24),
U = ε/(e^(ε/kT) − 1) = hν/(e^(hν/kT) − 1), (24)
if β = 1/kT. By definition, U = Pε/N, and thus P/N = 1/(e^(βε) − 1). In contrast to his 1900 talk, in his 1901 paper, Planck sets immediately
S = k[(1 + U/ε) ln(1 + U/ε) − (U/ε) ln(U/ε)]
for the entropy of a resonator. This agrees with formula (35) below.
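The consistency of the 1901 entropy ansatz with the distribution law (24) can be checked numerically: with k = 1 and illustrative values of ε and T (my own choices), the derivative of S(U) at the Planck value of U indeed equals 1/T. A minimal sketch:

```python
from math import exp, log

eps, T = 1.0, 0.7  # illustrative values; units with k = 1, eps = h*nu

def S(U):  # Planck's 1901 entropy of a single resonator
    x = U / eps
    return (1 + x) * log(1 + x) - x * log(x)

U = eps / (exp(eps / T) - 1)  # Planck's distribution law (24)

# thermodynamic relation dS/dU = 1/T, via a central difference
dS_dU = (S(U + 1e-6) - S(U - 1e-6)) / 2e-6
assert abs(dS_dU - 1 / T) < 1e-4
```

Analytically, dS/dU = ln(1 + ε/U)/ε, and inserting U from (24) gives exactly 1/T, which is what the numerical derivative confirms.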

4.6. Equilibrium Entropy (Planck, 1913 [4])

According to the second law of thermodynamics, the entropy, S, of a system in equilibrium is maximum for a given (total) energy, E. The numbers N_n (or the ratios w_n = N_n/N) of oscillators in the nth phase-space cell are thus obtained by means of the variation of N_n (or w_n) in the entropy
S = −kN Σ_n w_n ln w_n, (31a)
and in the conditions for the energy of the oscillator (see Appendix B and cf. (24) above)
U = Σ_n w_n U_n = hν Σ_n (n − 1/2) w_n, (31b)
and the total number of oscillators (as in (8a) above),
Σ_n w_n = 1. (31c)
This means [4, p. 141]
δ[Σ_n w_n ln w_n + β hν Σ_n (n − 1/2) w_n + γ Σ_n w_n] = 0.
The result is
w_n = (1 − e^(−hν/kT)) e^(−(n−1)hν/kT).
Inserting these into (31a) yields the entropy of the system of oscillators as
S = kN[(U/hν + 1/2) ln(U/hν + 1/2) − (U/hν − 1/2) ln(U/hν − 1/2)]. (35)
The thermodynamic relation 1/T = dS/dE leads to Planck’s distribution law, now including the zero-point energy,
U = hν/2 + hν/(e^(hν/kT) − 1).
Hence, Planck obtains his quantum distribution law also through using Boltzmann’s “classical,” though discrete, “permutability measure” and Boltzmann’s discrete energy spectrum, the latter being actually a quantum spectrum. Boltzmann investigated (31a), (31b), and (31c) with finite sums (finite values of p and λ) and made the limit transitions to infinity only at the end. He did not arrive at Planck’s formulae, but at “…the probability of having a kinetic energy is given by”
e^(−E/⟨E⟩)/⟨E⟩,
⟨E⟩ being the mean energy of a molecule (p. 1986). This became “the eponymous Boltzmann distribution” [2, p. 1972]. Finally, Boltzmann considered his discrete model not to be physically relevant.

4.7. Simplified Probabilistic Treatment (Planck, 1913 [4])

After that, Planck [4, Pt. III, Ch. IV] simplifies Boltzmann’s [1] and his own (Pt. III, Ch. I; see the above) probabilistic treatments as follows.

The number of complexions at thermodynamic equilibrium is very much larger than the number of complexions at nonequilibrium. For this reason, the number of all possible complexions is a good approximation to the number of complexions at thermodynamic equilibrium and thus to the maximum “thermodynamic probability,” W_max. The total number of all possible complexions can be calculated much more easily and directly than the equilibrium value. (See formulae (5) and (20) above.)

A complexion is a definite assignment of every individual oscillator to a quantum cell in phase space defined by ∮ p dq = h (see Appendix B). The energy constraint (31b) can be written as
hν Σ_n (n − 1/2) N_n = N U
or
Σ_n (n − 1) N_n = P,
where P = N(U/hν − 1/2) is the total number of quanta hν above the zero-point energy. The number of complexions equals the number of possibilities to distribute these P quanta over the N oscillators through varying the numbers N_n.

This task represents “combinations with repetitions of N elements taken P at a time,” whose total number is [4, p. 146]
(N + P − 1)!/[(N − 1)! P!].
The entropy thus equals
S = k ln{(N + P − 1)!/[(N − 1)! P!]} ≈ k[(N + P) ln(N + P) − N ln N − P ln P]
by Stirling’s formula. If one replaces P with N(U/hν − 1/2) from the energy constraint, this agrees exactly with formula (35).
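The claimed agreement with formula (35) is a purely algebraic identity, which a short numerical sketch (illustrative values, my own; k = 1, u = U/hν) confirms:

```python
from math import log

# check: k[(N+P)ln(N+P) - N ln N - P ln P] with P = N(U/hnu - 1/2)
# equals N * k[(u + 1/2)ln(u + 1/2) - (u - 1/2)ln(u - 1/2)], u = U/hnu
k = 1.0
N, u = 1000.0, 1.37   # illustrative; u must exceed 1/2 (zero-point energy)
P = N * (u - 0.5)     # total number of quanta above the zero-point energy

S_total = k * ((N + P) * log(N + P) - N * log(N) - P * log(P))
s_per = k * ((u + 0.5) * log(u + 0.5) - (u - 0.5) * log(u - 0.5))
assert abs(S_total - N * s_per) < 1e-8 * abs(S_total)
```

The identity holds because N + P = N(u + 1/2), so all terms containing ln N cancel by virtue of the constraint; only the per-oscillator form of (35) remains.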

Hence, Planck obtains his quantum distribution law also through using his “quantum” “permutability measure” (20) and the same discrete energy spectrum as before. And instead of varying the occupation numbers for maximizing the entropy, S, he replaces P with N(U/hν − 1/2) according to the energy conservation condition (31b).

In all three variants, Planck uses, as Boltzmann at the beginning, a discrete energy spectrum. For this reason, let us finally look at Einstein’s 1907 derivation of Planck’s radiation law. Here, the difference between classical and quantum results is immediately connected with the energy spectrum being continuous or discrete, respectively.

5. Einstein’s 1907 Derivation of Planck’s Radiation Law

5.1. Einstein’s Probability

Einstein [16] considers a system of molecules, the state of which is determined by the (very many) variables p_1, p_2, …, p_n. The equations of motion are (I use the index i instead of Einstein’s ν)
dp_i/dt = φ_i(p_1, …, p_n), i = 1, 2, …, n,
with

Σ_i ∂φ_i/∂p_i = 0.
Further, there is a subsystem of that system characterized by the variables p_1, …, p_m, being a subset of p_1, …, p_n. The energy of the whole system is approximately the sum of one part, E, which depends solely on p_1, …, p_m, and a second part, which is independent of p_1, …, p_m. E, the first part, is much smaller than the whole energy.

The probability for p_1, …, p_m to lie at some time in the infinitesimal domain dp_1 ⋯ dp_m equals (I set β = 1/kT)
dW = C e^(−βE) dp_1 ⋯ dp_m.
Then, Einstein assumes that this formula can be written as
dW = C e^(−βE) ω(E) dE.
Here, the function ω(E) is defined as
ω(E) dE = ∫ dp_1 ⋯ dp_m,
where the integration runs over all values of the p_1, …, p_m that correspond to energy values between E and E + dE.

5.2. Harmonic Oscillators

For a single oscillator with elongation x and velocity v, we have E = (m/2)v² + (f/2)x² and ω(E) = const. (Indeed, the phase-space area between the orbits with energies E and E + dE is proportional to dE.) Its mean energy thus equals
⟨E⟩ = ∫_0^∞ E e^(−βE) dE / ∫_0^∞ e^(−βE) dE = 1/β = kT.
This corresponds to the Rayleigh-Jeans formula for the black-body radiation.

Einstein obtained Planck’s formula in the following way. Instead of a continuous energy with ω(E) = const, he assumes that the energy of the oscillator is restricted to the values 0, ε, 2ε, and so forth. Then,
⟨E⟩ = Σ_n nε e^(−βnε) / Σ_n e^(−βnε) = ε/(e^(βε) − 1).
With ε = hν, this is Planck’s result (24).
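Einstein’s replacement of the integral by a discrete sum can be reproduced directly. A minimal sketch (my own; k = 1 and illustrative ε, T) evaluates the truncated sums, compares them with the closed form ε/(e^(ε/kT) − 1), and shows that the classical limit ε → 0 recovers equipartition:

```python
from math import exp

def mean_energy(eps, T, n_max=2000):  # units with k = 1
    # discrete spectrum 0, eps, 2*eps, ... with Boltzmann weights
    Z = sum(exp(-n * eps / T) for n in range(n_max))
    return sum(n * eps * exp(-n * eps / T) for n in range(n_max)) / Z

eps, T = 1.0, 0.5  # illustrative values
planck = eps / (exp(eps / T) - 1)
assert abs(mean_energy(eps, T) - planck) < 1e-9

# continuous limit eps -> 0 recovers the classical result <E> = kT
assert abs(mean_energy(0.001, T, n_max=20000) - T) < 0.01
```

The geometric sums make the closed form exact; the only approximation in the sketch is the truncation of the series, which is negligible for the chosen cutoffs.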

The simplest imagination about solids is that they are made of atoms or ions, which vibrate around their equilibrium positions like three-dimensional oscillators. If electrically charged, they interact with the electromagnetic radiation like Planck’s resonators. Compatibility with Planck’s radiation law implies their energy spectrum to be that of Planck’s resonators; see the following.

5.3. Specific Heat of Solids

On the other hand, the vibrating ions carry the heat energy of such a model solid. Then, the classical result ⟨E⟩ = kT yields for the specific heat the value
c = 3Nk,
where N is the number of atoms/ions in the solid, in agreement with Dulong-Petit’s rule.

In contrast, the nonclassical result yields the formula
c = 3Nk (ε/kT)² e^(ε/kT)/(e^(ε/kT) − 1)².
The specific heat is no longer temperature-independent but decreases exponentially for temperatures being small against ε/k. This formula is in very good agreement with the then available experimental data for diamond. In passing, I notice that this brought him the attention of Nernst, who became one of the key persons to get Einstein to Berlin.
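A short numerical sketch (my own; per oscillator, with k = 1 and ε = 1 as illustrative units) showing that this expression is the temperature derivative of the mean energy, approaches the Dulong-Petit value at high T, and is exponentially suppressed at low T:

```python
from math import exp

def U(T, eps=1.0):  # mean oscillator energy, units with k = 1
    return eps / (exp(eps / T) - 1)

def c_einstein(T, eps=1.0):  # Einstein's specific heat per oscillator
    x = eps / T
    return x * x * exp(x) / (exp(x) - 1) ** 2

# c = dU/dT, checked by a central numerical derivative
T = 0.8
dU_dT = (U(T + 1e-6) - U(T - 1e-6)) / 2e-6
assert abs(dU_dT - c_einstein(T)) < 1e-6

# high-T limit: Dulong-Petit (c -> 1 per oscillator, i.e. 3Nk for the solid)
assert abs(c_einstein(100.0) - 1.0) < 1e-3
# low-T limit: exponential suppression ~ x^2 e^(-x)
assert c_einstein(0.05) < 1e-6
```

Restoring units, the specific heat of the solid is 3N times this per-oscillator value, which is how the high-temperature limit reproduces Dulong-Petit’s 3Nk.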

5.4. Quantization as Selection Problem

Before concluding this paper, let me point to the following crucial conclusion by Einstein.

…we are now compelled, for vibratory ions of a certain frequency, which can mediate an energy exchange between matter and radiation, to make the assumption, that the mannifold [sic] of states they are able to assume is a smaller one when compared with the bodies of our experience. (p. 184)

This defines quantization to be the problem to “select” [26, 27, Ch. I, § 15] the set of quantum states “out of” the set of classical states.

(i) Planck [14, 19, 20] has selected the discrete set 0, hν, 2hν, … out of the continuum of oscillator energies.
(ii) Bohr [28] has selected a discrete set of energies, too, namely, for the assumed circular motion of the electron in an H-atom.
(iii) The additional Sommerfeld-Wilson quantization condition [29, 30] selects discrete values for the action of the radial motion of that electron taken over the period, in order to quantize elliptical orbits. More generally, this condition is postulated for multiply periodic motions, ∮ p_i dq_i = n_i h, where q_i and p_i are appropriate action-angle variables.
(iv) De Broglie [31] assigns to each particle with momentum p the wavelength λ = h/p and selects the “resonant” values, nλ = L, L being the length of the periodic orbit.

All these examples stem from the “old” quantum theory. Within the “new” quantum theory, beginning with Heisenberg’s [32] “matrix mechanics” and Schrödinger’s [33] “wave mechanics,” there seems to be no place for a quantum selection condition. Schrödinger speaks about the quantum conditions as selection conditions [33, pp. 510f.]. But they are replaced with the “requirement, almost self-evident for a physical quantity, that the [wave] function, ψ, be unique, finite, and continuous” (Ibid., p. 511). Accordingly, he calls the corresponding boundary conditions “natural” and “intrinsic” to the wave equation (Ibid., p. 512). Nevertheless, an eigenvalue problem represents “classical mathematics” for a classical problem, namely, the vibrations of strings and resonators and the like (notably, standing waves). However, it is possible to solve the stationary Schrödinger equation as a selection rather than an eigenvalue problem [34, 35].

Thus, Planck and Einstein have selected the right subsets, while Boltzmann did not.

6. Summary and Conclusions

Boltzmann [1] starts from a correct probabilistic setting of the distribution of energy quanta over molecules, which can lead to the Planck distribution [4, 14, 15, 19, 20]. He does not arrive at the Planck (and, subsequently, Bose-Einstein) distribution, because he finally considered his discrete model not to be physically relevant. Moreover, he obtained expressions which do not correspond to equilibrium distributions (p. 2003). This could be due to the various confusions that are common in pioneering work reaching so far ahead.

Boltzmann [1] did not succeed with a discrete model of matter and energies for the entropy, but he obtained the all-important Boltzmann factor. Without it, there is no partition function, and Einstein could not have presented his 1907 calculations in such a short form.

There are many more aspects of the relation of Boltzmann’s work to quantum statistics ([17]; see also [36–42]), which deserve further exploration.

Appendices

A. Permutation Invariance of Newtonian State Quantities: Avoidance of Gibbs’ Paradox

According to the definitions and axioms in Newton’s Principia, the state of a body (here always considered as being pointlike) is given by its momentum (vector) (cf. [43], § 1). It is conserved as long as no external force is acting upon it. For a system of two bodies, which interact at most with each other, the total momentum is conserved (Newton’s 3rd axiom). The state of the system is thus described by that total momentum. It is not changed, when the two bodies interchange their momenta. If the two bodies are equal in their mechanical properties, the mechanical properties of the system are not changed, if they are interchanged. The generalization to a gas is straightforward.

Another conserved quantity for a free body and for an isolated system of bodies is the (total) kinetic energy (envisaged by Leibniz in form of the “living force”).

Hence, generally speaking, in the sense of Newton, the state of a system is described by a complete set of independent conserved quantities, such as total momentum, angular momentum, and energy. In contrast, the positions of the bodies do not enter the state description.

The conserved quantities are not affected, when equal bodies are interchanged (“equal” means equal mass, etc., cf. [44]). As a consequence, the Newtonian state quantities are permutation invariant (cf. [17], Remark , p. 15). This invariance has implicitly been used by Boltzmann and by Planck.

Thus, “identical” should not refer to the (im)possibility of identification, but to the actual impact of interchange on the behaviour of a system. A striking example is provided by the red balls in a snooker game. They are identifiable through their positions on the table; nevertheless, the interchange of two of them does not affect the outcome of the game.

As Feller put it, “Whether or not actual balls are in practice distinguishable is irrelevant for our theory. Even if they are, we may decide to treat them as indistinguishable” ([45, p. 12]; quoted after [17, p. 139]).

Generally speaking, identifiability or distinguishability is not a property of bodies (particles), but of states [17, p. 8]. For this reason, Bach calls identical all those particles that have one and the same intrinsic, that is, state-independent, properties (Definition 2.1.1, p. 15). This is Helmholtz's notion of "equal particles" (loc. cit.).

Hence, the confusion about these terms stressed by Jaynes [8] arises largely from conflating the notions "equal," "identical," and "interchangeable." (This resembles the history of the notions "force" and "energy".) Their correct use automatically avoids Gibbs' paradox.

B. Average Energy of a Quantum Oscillator [4]

In his lectures on heat radiation, Planck [4, Pt. III, Ch. I] follows Boltzmann [1] in the probabilistic treatment of ideal gases. For oscillators interacting with electromagnetic radiation, however, he proceeds completely differently (Ibid., Pt. III, Ch. III). For the reader’s convenience, the calculation of the mean energy is sketched here.

The (total) energy of a linear harmonic oscillator is written in the form (Planck's notation)
\[ U = \frac{f}{2}x^{2} + \frac{m}{2}\dot{x}^{2}, \]
where \(f\) and \(m\) are the appropriate force constant and mass, respectively. Such an oscillator vibrates as
\[ x(t) = a\cos(2\pi\nu t - \varphi) \]
with the frequency
\[ \nu = \frac{1}{2\pi}\sqrt{\frac{f}{m}}. \]
The generalized momentum equals
\[ p = m\dot{x}. \]
Now, the elementary part of the phase space is not the classical infinitesimal \(dx\,dp\), but the finite area24
\[ \iint dx\,dp = h. \]
According to the foregoing equations, the classical orbit is described by the ellipse
\[ \frac{x^{2}}{a^{2}} + \frac{p^{2}}{(2\pi\nu m a)^{2}} = 1. \]
Its area equals
\[ \pi a\,(2\pi\nu m a) = 2\pi^{2}\nu m a^{2} = \frac{U}{\nu}. \]
Hence, the classically continuous values of the amplitude, \(a\), are quantized by virtue of the condition that the boundary ellipses enclose the areas \(nh\), yielding the discrete values
\[ a_{n}^{2} = \frac{nh}{2\pi^{2}\nu m}, \qquad n = 0, 1, 2, \ldots \]
As a consequence, the energy spectrum of the boundary orbits is limited to the discrete values
\[ U_{n} = nh\nu. \]
The total energy of all oscillators in the \(n\)th phase-space region equals the number of oscillators in this region, \(N_{n}\), times the average energy of an oscillator in this region:
\[ E_{n} = N_{n}\left(n + \tfrac{1}{2}\right)h\nu \]
[4, p. 140]. The r.h.s. contains the, then unknown, zero-point energy \(\tfrac{1}{2}h\nu\). Planck remarks that \(\left(n + \tfrac{1}{2}\right)h\nu\) is the arithmetic mean of \(nh\nu\) and \((n+1)h\nu\).
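The chain of equalities in this derivation can be verified numerically. The sketch below (mine; the mass and frequency are arbitrary illustrative values) checks that the orbit of amplitude \(a_{n}\) encloses the phase-space area \(nh\), that its energy is \(nh\nu\), and that \(\left(n + \tfrac{1}{2}\right)h\nu\) is the arithmetic mean of \(nh\nu\) and \((n+1)h\nu\).

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
m = 1.0e-26          # illustrative oscillator mass (kg)
nu = 1.0e13          # illustrative frequency (Hz)

# Force constant from nu = (1/2 pi) sqrt(f/m).
f = m * (2 * math.pi * nu) ** 2

for n in range(1, 6):
    a_n_sq = n * h / (2 * math.pi ** 2 * nu * m)   # quantized amplitude squared
    a_n = math.sqrt(a_n_sq)
    b_n = 2 * math.pi * nu * m * a_n               # momentum semi-axis of the ellipse
    area = math.pi * a_n * b_n                     # ellipse area pi*a*b
    U_n = 0.5 * f * a_n_sq                         # energy on the boundary orbit

    assert math.isclose(area, n * h)               # area equals n h
    assert math.isclose(U_n, n * h * nu)           # U_n equals n h nu
    # (n + 1/2) h nu is the arithmetic mean of n h nu and (n+1) h nu
    assert math.isclose((n + 0.5) * h * nu, (n * h * nu + (n + 1) * h * nu) / 2)
```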

The total energy of all oscillators with frequency \(\nu\) thus reads
\[ E = \sum_{n=0}^{\infty} N_{n}\left(n + \tfrac{1}{2}\right)h\nu. \]
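As a consistency check (my sketch, not in Planck's text; frequency and temperature are illustrative values): if the occupation numbers \(N_{n}\) are taken proportional to Boltzmann factors \(e^{-nh\nu/kT}\), the mean energy per oscillator computed from the levels \(\left(n + \tfrac{1}{2}\right)h\nu\) equals \(\tfrac{1}{2}h\nu + h\nu/(e^{h\nu/kT} - 1)\), that is, Planck's 1900 result for the mean energy shifted by the zero-point energy.

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
k = 1.380649e-23     # Boltzmann constant (J/K)
nu = 1.0e13          # illustrative frequency (Hz)
T = 300.0            # illustrative temperature (K)

x = h * nu / (k * T)
# Boltzmann weights for the levels n = 0, 1, 2, ... (truncated sum;
# the terms decay fast enough for this x that 2000 levels suffice).
weights = [math.exp(-n * x) for n in range(2000)]
Z = sum(weights)
U_mean = sum(w * (n + 0.5) * h * nu for n, w in enumerate(weights)) / Z

# Zero-point energy plus the Planck term.
closed_form = 0.5 * h * nu + h * nu / (math.exp(x) - 1)
assert math.isclose(U_mean, closed_form, rel_tol=1e-9)
```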

Disclosure

This paper is taken from a special course on statistical mechanics held at the Kazakh National Pedagogical Abai University, Almaty, in 2015.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Endnotes

  1. This paper is absent in Brush’s 1966 collection of original papers on “Kinetic Theory” [46]. There is a Russian translation [47], which deviates from the English one in some details.
  2. Cf. Jaynes [8] about Gibbs as quoted below.
  3. In fact, in Sec. 7, Pauli writes down Boltzmann’s formula for the number of microstates, that is, for “the number of ways in which elements can be ordered into groups of identical elements” [10, p. 21].
  4. This fact has also been stressed by Jaynes [8], including his own (!) writings; see the following.
  5. Surprisingly enough, this misunderstanding is still present on https://de.wikipedia.org/wiki/Gibbssches_Paradoxon (25.10.2015).
  6. is equation () in Boltzmann’s paper.
  7. Appendix A contains additional remarks on this issue.
  8. Held in 1906/7, first published in 1913, I will quote the Dover edition of 1991.
  9. (19-) means the th formula in [14, 20] or formula in [4].
  10. I am not aware of its use in thermodynamics before or after Planck; Planck's [19] paper "Entropie und Temperatur strahlender Wärme" is not contained in ter Haar's [48] collection of reprints, but there is a Russian translation by R. B. Segalya in Planck, Selected Works, [49], pp. 234–248.
  11. Notice the difference between Planck’s 1900 talk, where is introduced as a “new universal constant” at the beginning of the talk, and his [15] paper, where appears merely as a combination of spectroscopic parameters at the end of the paper. Einstein did not use in his [50] and [16] papers.
  12. The symmetry of this formula in and is perhaps the reason that Reiche (Comment 4 to [15], in [51]) sees no essential difference between distributing energy quanta over resonators and, vice versa, resonators over energy levels.
  13. The history of this formula is most interesting on its own, but beyond the scope of this paper.
  14. Planck’s are Boltzmann’s ; Planck’s are Boltzmann’s
  15. I add the index to and ; is Planck's 1900 result for the mean energy of a single oscillator; see (24).
  16. Notice that this definition complies with Boltzmann’s complexion.
  17. belongs to the cell around the origin, , which corresponds to the ground state of an oscillator.
  18. Notice that this formula complies with Planck’s 1900 complexion, but not with Boltzmann’s complexion; see Section 4.3.
  19. The translator, Morton Masius, points to “a complete mathematical discussion” by Lorentz and refers to a meeting of the British Association in September 1913, reviewed in Nature 92, pp. 305ff.
  20. All Hamiltonian systems with twice continuously differentiable Hamiltonian belong to this kind of systems.
  21. is equation in Einstein’s [16] paper.
  22. See the diagram in [16, p. 186] and the explanations on http://www.osti.gov/accomplishments/nuggets/einstein/solidcoldd.html, Fig. 2 (27.07.2015).
  23. Cf. Truesdell’s comment on Newton’s Principia [52, p. 88] quoted by Simonyi [53, p. 296].
  24. In bypassing, this is a nonclassical quantization condition, while boundary conditions in eigenvalue problems are not; cf. [33, p. 511].