Research Article | Open Access

Wayne Cheng-Wei Huang, Herman Batelaan, "Dynamics Underlying the Gaussian Distribution of the Classical Harmonic Oscillator in Zero-Point Radiation", *Journal of Computational Methods in Physics*, vol. 2013, Article ID 308538, 19 pages, 2013. https://doi.org/10.1155/2013/308538

# Dynamics Underlying the Gaussian Distribution of the Classical Harmonic Oscillator in Zero-Point Radiation

**Academic Editor:** Marta B. Rosales

#### Abstract

Stochastic electrodynamics (SED) predicts a Gaussian probability distribution for a classical harmonic oscillator in the vacuum field. This probability distribution is identical to that of the ground-state quantum harmonic oscillator. Thus, the Heisenberg minimum uncertainty relation is recovered in SED. To understand the dynamics that give rise to the uncertainty relation and the Gaussian probability distribution, we perform a numerical simulation and follow the motion of the oscillator. The dynamical information obtained through the simulation provides insight into the connection between the classical double-peak probability distribution and the Gaussian probability distribution. A main objective of SED research is to establish to what extent the results of quantum mechanics can be obtained. The present simulation method can be applied to other physical systems, and it may assist in evaluating the validity range of SED.

#### 1. Introduction

According to quantum electrodynamics, the vacuum is not a tranquil place. A background electromagnetic field, called the electromagnetic vacuum field, is always present, independent of any external electromagnetic source [1]. The first experimental evidence of the vacuum field dates back to 1947, when Lamb and his student Retherford found an unexpected shift in the hydrogen fine-structure spectrum [2, 3]. The physical existence of the vacuum field has inspired an interesting modification of classical mechanics, known as stochastic electrodynamics (SED) [4]. As a variation of classical electrodynamics, SED adds a background electromagnetic vacuum field to classical mechanics. The vacuum field as formulated in SED has no adjustable parameters: each field mode has a random initial phase, and the field strength is set by the Planck constant $\hbar$. With the aid of this background field, SED is able to reproduce a number of results that were originally thought to be pure quantum effects [1, 4–8].

Although classical mechanics and SED both describe particles through trajectories, the two theories predict very different probability distributions for the harmonic oscillator. In a study of the harmonic oscillator, Boyer showed that the moments of an SED harmonic oscillator are identical to those of the ground-state quantum harmonic oscillator [9]. As a consequence, the Heisenberg minimum uncertainty relation is satisfied, and the probability distribution of an SED harmonic oscillator is a Gaussian, identical to that of the ground-state quantum harmonic oscillator. While in classical mechanics an oscillator is most likely to be found at the two turning points of its trajectory, hence the double-peak probability distribution, the SED Gaussian probability distribution has its maximum at the center (see Figure 1). How do the dynamics differ in the two classical theories so that the probability distributions become so different?

Although the analytical solution of an SED harmonic oscillator was given a long time ago, it is not straightforward to see the dynamical properties of a single particle from the complicated solution. Additionally, many results in SED such as probability distribution are obtained from (ensemble) phase averaging. Thus, they cannot be used for the interpretation of a single particle’s dynamical behavior in time (some may consider applying the central limit theorem and treat the positions in the time sequence as independent random variables. However, for an SED system this cannot work, because the correlation between the motion at two points in time persists beyond many cycles of oscillation) unless the system is proved to be ergodic. As an analytical proof of the ergodicity is very difficult, we take a numerical approach to study the particle’s dynamical behavior. Besides what is already known from the analytical solution, our numerical studies construct the probability distribution from a single particle’s trajectory. We investigate the relation between such a probability distribution and the particle’s dynamical behavior. Ultimately, we want to know the underlying mechanism that turns the classic double-peak distribution into the Gaussian distribution.

While most works in the field of SED are analytical, numerical studies are rare [10–12]. The advantage of numerical simulation is that it may be extended to other physical systems with relative ease and is flexible in testing different assumptions and approximations. For example, in most SED analyses the effect of the Lorentz force due to the vacuum field is neglected as a first-order approximation [5]. However, when the field gradient is nonzero, the Lorentz force from the magnetic field can act together with the electric field to give a nonzero drift to a charged particle, known as the ponderomotive force. Therefore, the Lorentz force of the vacuum field may play a significant role in SED. In fact, it has been shown in the literature that the magnetic part of the vacuum field is responsible for the self-ionization of atoms and for a nanosecond-scale energy gain on the part of a free electron in vacuum [13, 14]. Using numerical simulation, one can readily examine SED beyond the first-order approximation (in this work we limit ourselves to physical parameters for which we expect the Lorentz force effects to be minimal, since our results are compared to analytical results in which this approximation is made), and the Lorentz force effects in SED can then be investigated.

The major challenge for the numerical simulation is to properly account for the vacuum field modes. A representative sampling of the modes is thus the key for successful simulations. In this study, one of our goals is to use a simple physical system, namely, the simple harmonic oscillator, to benchmark our numerical method of vacuum mode selection so that it can be used to test the validity range of SED as discussed in the following.

Over the decades, SED has been criticized for several drawbacks [15]. Authors like Cavalleri argued that SED can neither explain electron slit-diffraction nor derive the nonlinear Schrödinger equation; moreover, SED implies broad radiation and absorption spectra for rarefied gases. In the case of the quartic anharmonic oscillator, Pesquera and Claverie showed that SED disagrees with quantum mechanics [16]. Additionally, the results claimed in [17–19] were shown to be wrong due to improper relativistic approximation [20]. While these theoretical analyses are documented in detail, it may be useful to use numerical simulation as an independent check in order to establish the validity range of SED, which is one of the objectives of our work.

Meanwhile, a modified theory called stochastic electrodynamics with spin (SEDS) was recently proposed [15]. As SEDS is an extension of SED with a model of electron spin motion, it is argued that the introduction of electron spin can eliminate several drawbacks of SED. In particular, it is claimed that SEDS allows the derivation of the complete and even the generalized Schrödinger equation [21–24]. It is also claimed that SEDS can explain electron slit-diffraction and the sharp spectral lines of rarefied gases [25]. Given that SEDS modifies SED and may extend its validity range, an important next step is to apply numerical simulation to such a model as a “quasi-experiment” and test some of its claims. Models that include constraints can be incorporated in our numerical model with, for example, Lagrange multipliers.

The organization of this paper is the following. First, in Section 2 Boyer’s results on the SED harmonic oscillator are briefly reviewed. Based on these results, the probability distribution for the oscillator is derived. Second, in Section 3 details for simulating the vacuum field and the SED harmonic oscillator are documented. Third, in Section 4 the trajectory of the SED harmonic oscillator is solved numerically, and the constructed probability distribution is compared to the analytical probability distribution. Two sampling methods are used in constructing the probability distribution from the simulated trajectories. The first method is “sequential sampling,” which is suitable for studying the relation between the dynamics of the SED harmonic oscillator and its probability distribution. The second is “ensemble sampling,” which lends itself well to parallel computing and is convenient for statistical interpretation. Lastly, in Section 6 we discuss some potential applications of the numerical simulation to other quantum phenomena. Numerical studies have an advantage over analytical solutions in that they can be adapted to a range of physical systems, and we hope that the current simulation method may also assist in assessing SED’s validity range.

#### 2. Theory of Stochastic Electrodynamics

##### 2.1. Brief Review of Boyer’s Work

In his 1975 papers [4, 9], Boyer calculated the statistical features of an SED harmonic oscillator and showed that the Heisenberg minimum uncertainty relation is satisfied for such an oscillator. The vacuum field used in Boyer’s work arises from the homogeneous solution of Maxwell’s equations, which is assumed to be zero in classical electrodynamics [4]. In unbounded (free) space, the vacuum field has an integral form (a detailed account of the vacuum field in unbounded space is given in Appendix A.1) [26]:

$$ \mathbf{E}_{\text{vac}}(\mathbf{x},t)=\sum_{\lambda=1}^{2}\int d^{3}k\,\hat{\varepsilon}(\mathbf{k},\lambda)\sqrt{\frac{\hbar\omega}{2\pi^{2}}}\cos\!\left(\mathbf{k}\cdot\mathbf{x}-\omega t+\tilde{\theta}(\mathbf{k},\lambda)\right), \tag{1} $$

where $\omega=c|\mathbf{k}|$ and $\tilde{\theta}(\mathbf{k},\lambda)$ is a random phase uniformly distributed in $[0,2\pi)$. The integral is to be taken over all $\mathbf{k}$-space. The two unit vectors, $\hat{\varepsilon}(\mathbf{k},1)$ and $\hat{\varepsilon}(\mathbf{k},2)$, describe a polarization basis in the plane that is perpendicular to the wave vector $\mathbf{k}$:

$$ \hat{k}\cdot\hat{\varepsilon}(\mathbf{k},1)=\hat{k}\cdot\hat{\varepsilon}(\mathbf{k},2)=0. \tag{2} $$

Furthermore, the polarization basis vectors are chosen to be mutually orthogonal:

$$ \hat{\varepsilon}(\mathbf{k},1)\cdot\hat{\varepsilon}(\mathbf{k},2)=0, \tag{3} $$

$$ \hat{\varepsilon}(\mathbf{k},1)\times\hat{\varepsilon}(\mathbf{k},2)=\hat{k}. \tag{4} $$

To investigate the dynamics of the SED harmonic oscillator, Boyer used the dipole approximation,

$$ \mathbf{E}_{\text{vac}}(\mathbf{x},t)\simeq\mathbf{E}_{\text{vac}}(0,t), \tag{5} $$

to remove the spatial dependence in the vacuum field (1). Therefore, the equation of motion for an SED harmonic oscillator used in Boyer’s analysis is

$$ m\ddot{x}=-m\omega_{0}^{2}x+m\Gamma\dddot{x}+eE_{x}(t), \tag{6} $$

where $\Gamma=2e^{2}/3mc^{3}$ is the radiation damping parameter, $m$ is the mass, $e$ is the charge, and $\omega_{0}$ is the natural frequency. The $x$-component of the vacuum field in (6) is

$$ E_{x}(t)=\sum_{\lambda=1}^{2}\int d^{3}k\,\hat{\varepsilon}_{x}(\mathbf{k},\lambda)\sqrt{\frac{\hbar\omega}{2\pi^{2}}}\cos\!\left(\omega t-\tilde{\theta}(\mathbf{k},\lambda)\right), \tag{7} $$

and the steady-state solution is obtained as

$$ x(t)=\frac{e}{m}\sum_{\lambda=1}^{2}\int d^{3}k\,\hat{\varepsilon}_{x}(\mathbf{k},\lambda)\sqrt{\frac{\hbar\omega}{2\pi^{2}}}\,\frac{\cos\!\left(\omega t-\tilde{\theta}(\mathbf{k},\lambda)+\phi\right)}{\sqrt{(\omega_{0}^{2}-\omega^{2})^{2}+\Gamma^{2}\omega^{6}}}, \tag{8} $$

where $\tan\phi=\Gamma\omega^{3}/(\omega_{0}^{2}-\omega^{2})$. Additionally, using the condition of sharp resonance,

$$ \Gamma\omega_{0}\ll1, \tag{9} $$

Boyer further calculated the standard deviations of position and momentum from (8) by averaging over the random phase [9]:

$$ \sigma_{x}=\sqrt{\langle x^{2}\rangle_{\tilde{\theta}}-\langle x\rangle_{\tilde{\theta}}^{2}}=\sqrt{\frac{\hbar}{2m\omega_{0}}}, \tag{10} $$

$$ \sigma_{p}=\sqrt{\langle p^{2}\rangle_{\tilde{\theta}}-\langle p\rangle_{\tilde{\theta}}^{2}}=\sqrt{\frac{\hbar m\omega_{0}}{2}}, \tag{11} $$

where the phase averaging $\langle\,\cdot\,\rangle_{\tilde{\theta}}$ represents the ensemble average over many realizations; in each realization, the random phase of the vacuum field is different. The above result satisfies the Heisenberg minimum uncertainty relation:

$$ \sigma_{x}\sigma_{p}=\frac{\hbar}{2}. \tag{12} $$

From an energy argument, Boyer showed that this uncertainty relation can also be derived from a delicate balance between the energy gain from the vacuum field and the energy loss through radiation damping [4].
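As a quick numerical sanity check of the minimum uncertainty product $\sigma_{x}\sigma_{p}=\hbar/2$, the two standard deviations can be evaluated for any oscillator parameters; the natural frequency below is an assumed example value, not one used in the paper:

```python
import math

hbar = 1.054571817e-34     # reduced Planck constant, J*s
m_e = 9.1093837015e-31     # electron mass, kg
omega0 = 2 * math.pi * 1e15  # assumed natural frequency, rad/s (example value)

# Boyer's phase-averaged standard deviations for the SED oscillator
sigma_x = math.sqrt(hbar / (2 * m_e * omega0))
sigma_p = math.sqrt(hbar * m_e * omega0 / 2)

# their product is the Heisenberg minimum uncertainty, hbar/2
assert abs(sigma_x * sigma_p / (hbar / 2) - 1) < 1e-12
```

The product is independent of the chosen frequency, as the $\omega_{0}$ factors cancel.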

##### 2.2. Probability Distribution

Given the knowledge of the moments $\langle x^{n}\rangle$, the Fourier coefficients $\tilde{P}(k)$ of the probability distribution $P(x)$ can be determined by Taylor expanding $\langle e^{ikx}\rangle$ in powers of $x$:

$$ \tilde{P}(k)=\langle e^{ikx}\rangle=\sum_{n=0}^{\infty}\frac{(ik)^{n}}{n!}\langle x^{n}\rangle. \tag{13} $$

Using (8) and the random-phase averaging relation from Boyer’s paper [9],

$$ \left\langle\cos\tilde{\theta}(\mathbf{k},\lambda)\cos\tilde{\theta}(\mathbf{k}',\lambda')\right\rangle=\tfrac{1}{2}\,\delta_{\lambda\lambda'}\,\delta^{3}(\mathbf{k}-\mathbf{k}'), \tag{14} $$

the moments can be evaluated:

$$ \langle x^{2n}\rangle=(2n-1)!!\,\sigma_{x}^{2n},\qquad\langle x^{2n+1}\rangle=0, \tag{15} $$

where $n$ is a natural number. Consequently, only even-power terms contribute in (13), and the Fourier coefficients can be determined:

$$ \tilde{P}(k)=\sum_{n=0}^{\infty}\frac{1}{n!}\left(-\frac{k^{2}\sigma_{x}^{2}}{2}\right)^{n}=\exp\!\left(-\frac{k^{2}\sigma_{x}^{2}}{2}\right). \tag{16} $$

Therefore, although not explicitly given, it is implied by Boyer’s work [9] that the probability distribution of the SED harmonic oscillator is

$$ P(x)=\frac{1}{\sqrt{2\pi}\,\sigma_{x}}\exp\!\left(-\frac{x^{2}}{2\sigma_{x}^{2}}\right)=\sqrt{\frac{m\omega_{0}}{\pi\hbar}}\exp\!\left(-\frac{m\omega_{0}x^{2}}{\hbar}\right), \tag{17} $$

which is identical to the probability distribution of the quantum harmonic oscillator in the ground state (this result is consistent with the phase-space probability distribution given in Marshall’s work [27]).
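The Gaussian moment identity used here, $\langle x^{2n}\rangle=(2n-1)!!\,\sigma_{x}^{2n}$ with vanishing odd moments, can be verified by direct Monte Carlo sampling; a quick self-contained check with an arbitrary $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5                                  # arbitrary standard deviation
x = rng.normal(0.0, sigma, size=2_000_000)   # Gaussian samples

def double_factorial_odd(n):
    """(2n-1)!! = 1*3*5*...*(2n-1)."""
    result = 1
    for k in range(1, 2 * n, 2):
        result *= k
    return result

# even moments of a Gaussian: <x^(2n)> = (2n-1)!! * sigma^(2n)
for n in (1, 2, 3):
    sample_moment = np.mean(x ** (2 * n))
    exact = double_factorial_odd(n) * sigma ** (2 * n)
    assert abs(sample_moment / exact - 1) < 0.02

# odd moments vanish
assert abs(np.mean(x ** 3)) < 0.05 * sigma ** 3
```

The agreement improves as the sample size grows, mirroring how the simulated histograms converge to the analytical distribution.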

#### 3. Methods of Numerical Simulation

##### 3.1. Vacuum Field in Bounded Space

While the vacuum field in unbounded space is not subject to any boundary condition, so that every wave vector is allowed [4], the field confined in a space of volume $V$ with a zero-value boundary condition has a discrete spectrum, and a summation over countably infinitely many wave vectors is required [1]. In a simulation, it is convenient to write the vacuum field in the summation form:

$$ \mathbf{E}_{\text{vac}}(\mathbf{x},t)=\sum_{\mathbf{k},\lambda}\hat{\varepsilon}(\mathbf{k},\lambda)\sqrt{\frac{4\pi\hbar\omega}{V}}\cos\!\left(\mathbf{k}\cdot\mathbf{x}-\omega t+\tilde{\theta}(\mathbf{k},\lambda)\right), \tag{18} $$

where $\omega=c|\mathbf{k}|$, $\lambda=1,2$, $\tilde{\theta}(\mathbf{k},\lambda)$ is a random phase uniformly distributed in $[0,2\pi)$, and $V$ is the volume of the bounded space. A derivation of the summation form of the vacuum field in bounded space is given in Appendix A.2.

Since the allowed wave vectors range over the whole $\mathbf{k}$-space, we choose to sample only the wave vectors whose frequencies lie within a finite range $(\omega_{0}-\Delta/2,\ \omega_{0}+\Delta/2)$. Such sampling is valid as long as the chosen frequency range completely covers the characteristic resonance width of the harmonic oscillator:

$$ \Delta\gg\Gamma\omega_{0}^{2}. \tag{19} $$

On the other hand, the distribution of the allowed wave vectors depends on the specific shape of the bounded space. In a cubic space of volume $V=L^{3}$, the allowed wave vectors are uniformly distributed at cubic grid points, $\mathbf{k}=(2\pi/L)(n_{x},n_{y},n_{z})$, and the corresponding vacuum field is

$$ \mathbf{E}_{\text{vac}}(\mathbf{x},t)=\sum_{n_{x},n_{y},n_{z}}\sum_{\lambda=1}^{2}\hat{\varepsilon}(\mathbf{k},\lambda)\sqrt{\frac{4\pi\hbar\omega}{V}}\cos\!\left(\mathbf{k}\cdot\mathbf{x}-\omega t+\tilde{\theta}(\mathbf{k},\lambda)\right). \tag{20} $$

The sampling density is uniform, and the $\mathbf{k}$-space volume element $\Delta V_{k}$ occupied by each mode has a simple relation with the space volume $V$:

$$ \Delta V_{k}=\left(\frac{2\pi}{L}\right)^{3}=\frac{(2\pi)^{3}}{V}. \tag{21} $$

Nevertheless, such uniform cubic sampling is not convenient for describing a frequency spectrum, and it requires a large number of sampled modes to reach numerical convergence. In order to sample only the wave vectors in the resonance region, spherical coordinates are used. For the sampling to be uniform, each sampled wave vector must occupy a finite discrete volume element $\Delta V_{k}$ of the same size. To sample modes in the resonance region with each mode occupying the same volume, we use a set of specifically chosen numbers $(\varsigma_{\iota},\mu_{j},\nu_{l})$ to sample the wave vectors $\mathbf{k}_{\iota jl}$. For $\iota=1,\dots,N_{\omega}$, $j=1,\dots,N_{\theta}$, and $l=1,\dots,N_{\varphi}$,

$$ \varsigma_{\iota}=\varsigma_{L}+\left(\iota-\mathcal{R}_{\iota}\right)\Delta\varsigma,\qquad \mu_{j}=-1+\left(j-\tfrac{1}{2}\right)\Delta\mu,\qquad \nu_{l}=\left(l-\tfrac{1}{2}\right)\Delta\nu, \tag{22} $$

where $\mathcal{R}_{\iota}$ is a random number uniformly distributed in $(0,1)$, and the stepsizes are constant:

$$ \Delta\varsigma=\frac{\varsigma_{R}-\varsigma_{L}}{N_{\omega}},\qquad \Delta\mu=\frac{2}{N_{\theta}},\qquad \Delta\nu=\frac{2\pi}{N_{\varphi}}, \tag{23} $$

with $\varsigma_{L,R}=k_{L,R}^{3}/3$ and $k_{L,R}=(\omega_{0}\mp\Delta/2)/c$. The random number $\mathcal{R}_{\iota}$ is used here to make the sampling more efficient. The set of numbers $(\varsigma_{\iota},\mu_{j},\nu_{l})$ is then used to assign the spherical coordinates of each sampled wave vector $\mathbf{k}_{\iota jl}$:

$$ \mathbf{k}_{\iota jl}=\frac{\omega_{\iota}}{c}\left(\sin\theta_{j}\cos\varphi_{l},\ \sin\theta_{j}\sin\varphi_{l},\ \cos\theta_{j}\right), \tag{25} $$

where

$$ \omega_{\iota}=c\left(3\varsigma_{\iota}\right)^{1/3},\qquad \theta_{j}=\arccos\mu_{j},\qquad \varphi_{l}=\nu_{l}. \tag{26} $$

Therefore, each sampled wave vector lies in the resonance region and occupies a finite discrete volume element of the same size:

$$ \Delta V_{k}=k^{2}\sin\theta\,\Delta k\,\Delta\theta\,\Delta\varphi=\Delta\varsigma\,\Delta\mu\,\Delta\nu, \tag{27} $$

where the differential equalities $k^{2}\,dk=d\varsigma$, $\sin\theta\,d\theta=-d\mu$, and $d\varphi=d\nu$ are used. The differential limit is reached when $N_{\omega}$, $N_{\theta}$, and $N_{\varphi}$ are sufficiently large, and the numerical result converges to the analytical solution in this limit. Under this uniform spherical sampling method (described by (22), (25), and (26)), the expression for the vacuum field, (18), becomes

$$ \mathbf{E}_{\text{vac}}(\mathbf{x},t)=\sum_{\iota=1}^{N_{\omega}}\sum_{j=1}^{N_{\theta}}\sum_{l=1}^{N_{\varphi}}\sum_{\lambda=1}^{2}\hat{\varepsilon}(\mathbf{k}_{\iota jl},\lambda)\sqrt{\frac{4\pi\hbar\omega_{\iota}}{V}}\cos\!\left(\mathbf{k}_{\iota jl}\cdot\mathbf{x}-\omega_{\iota}t+\tilde{\theta}(\mathbf{k}_{\iota jl},\lambda)\right), \tag{28} $$

where $V=(2\pi)^{3}/\Delta V_{k}$.
It is worth noting that when the total number of sampled wave vectors becomes very large, uniform spherical and cubic sampling converge to each other, because both effectively sample all the wave vectors in $\mathbf{k}$-space. In the limit of large sampling number $N$, the two sampling methods are equivalent (the relation (21) implies that the limit of large sampling number, $N\to\infty$, is equivalent to the limit of unbounded space, $V\to\infty$. In this limit, the volume element becomes differential (denoted as $d^{3}k$) and is free from any specific shape associated with the space boundary. Therefore, all sampling methods for the allowed wave vectors become equivalent, and the summation approaches the integral. This is consistent with the fact that no volume factor is involved in the vacuum field integral, as shown in (1)), and (21) can be used for both sampling methods to calculate the volume factor $V$ in (20) and (28):

$$ V=\frac{(2\pi)^{3}}{\Delta V_{k}}=\frac{(2\pi)^{3}N}{V_{\text{shell}}}, \tag{29} $$

where $N$ is the number of sampled wave vectors and

$$ V_{\text{shell}}=\frac{4\pi}{3}\left(k_{R}^{3}-k_{L}^{3}\right),\qquad k_{L,R}=\frac{\omega_{0}\mp\Delta/2}{c}. \tag{30} $$
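The essential idea of uniform spherical sampling, namely that every sampled mode occupies an equal $\mathbf{k}$-space volume inside a spherical shell, can be sketched with a randomized inverse-CDF sampler. This is an illustrative simplification of the deterministic grid described above, and the function name is ours, not the paper's:

```python
import numpy as np

def sample_shell_wavevectors(n, omega_min, omega_max, c=1.0, seed=0):
    """Draw n wave vectors uniformly (by k-space volume) from the shell
    omega_min/c <= |k| <= omega_max/c."""
    rng = np.random.default_rng(seed)
    k_lo, k_hi = omega_min / c, omega_max / c
    # |k| with pdf proportional to k^2: inverse-CDF sampling on k^3
    k = (k_lo**3 + rng.random(n) * (k_hi**3 - k_lo**3)) ** (1.0 / 3.0)
    cos_t = rng.uniform(-1.0, 1.0, n)   # uniform in cos(theta) -> isotropic
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack([k * sin_t * np.cos(phi),
                            k * sin_t * np.sin(phi),
                            k * cos_t])
```

Sampling uniformly in $k^{3}$ and in $\cos\theta$ is exactly what makes the density per unit $\mathbf{k}$-space volume constant, matching the equal-$\Delta V_{k}$ requirement of the grid scheme.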

In the simulation, the summation over modes in (28) can be rewritten as

$$ \sum_{\mathbf{k}}=\sum_{\iota=1}^{N_{\omega}}\sum_{j=1}^{N_{\theta}}\sum_{l=1}^{N_{\varphi}}, \tag{31} $$

where the multiple sums indicate a numerical nested loop and the wave vector $\mathbf{k}_{\iota jl}$ is chosen according to the uniform spherical sampling method. To achieve numerical convergence, $N_{\theta}$ and $N_{\varphi}$ need to be sufficiently large so that the wave vectors at a fixed frequency are sampled isotropically. In addition, a large $N_{\omega}$ is required for a representative sampling in frequency. As a result, the total number of sampled modes needs to be very large when using uniform spherical sampling for numerical simulation. To improve the efficiency of the computer simulation, we sample at one random angle for each frequency. Namely, for $\iota=1,\dots,N_{\omega}$,

$$ \mathbf{k}_{\iota}=\frac{\omega_{\iota}}{c}\left(\sin\theta_{\iota}\cos\varphi_{\iota},\ \sin\theta_{\iota}\sin\varphi_{\iota},\ \cos\theta_{\iota}\right), \tag{32} $$

where

$$ \theta_{\iota}=\arccos\!\left(1-2\mathcal{R}^{(1)}_{\iota}\right), \tag{33} $$

$$ \varphi_{\iota}=2\pi\mathcal{R}^{(2)}_{\iota}. \tag{34} $$

The frequency stepsize is specified in (23); $\mathcal{R}^{(1)}_{\iota}$ is a random number uniformly distributed in $(0,1)$, and $\mathcal{R}^{(2)}_{\iota}$ is another random number uniformly distributed in $(0,1)$. As the number of sampled frequencies $N_{\omega}$ becomes sufficiently large, the random angles approach the angles specified in uniform spherical sampling (22).

In the limit of $\Delta/\omega_{0}\to0$, the above sampling method (described by (32), (33), and (34)) and the uniform spherical sampling method both approach a uniform sampling on a spherical surface of radius $k_{0}=\omega_{0}/c$. In this limit, the addition of the condition in (19) leads to the possible choices for the frequency range $\Delta$:

$$ \Gamma\omega_{0}^{2}\ll\Delta\ll\omega_{0}. \tag{35} $$

Within this range (35), the expression for the vacuum field in (18) becomes

$$ \mathbf{E}_{\text{vac}}(\mathbf{x},t)=\sum_{\iota=1}^{N_{\omega}}\sum_{\lambda=1}^{2}\hat{\varepsilon}(\mathbf{k}_{\iota},\lambda)\sqrt{\frac{4\pi\hbar\omega_{\iota}}{V}}\cos\!\left(\mathbf{k}_{\iota}\cdot\mathbf{x}-\omega_{\iota}t+\tilde{\theta}(\mathbf{k}_{\iota},\lambda)\right), \tag{36} $$

where $\omega_{\iota}=c|\mathbf{k}_{\iota}|$. For large $N_{\omega}$, the volume factor $V$ is calculated using (29).

Finally, for a complete specification of the vacuum field, (36), the polarizations need to be chosen. From (29), we notice that a large $N_{\omega}$ gives a large $V$. Since for large $V$ the vacuum field is not affected by the space boundary, there should be no preferential polarization direction, and the polarizations should be isotropically distributed. The construction of isotropically distributed polarizations is discussed in detail in Appendix B. Here we give the result that satisfies the property of isotropy and the properties of polarization (described by (3) and (4)):

$$ \begin{aligned} \hat{\varepsilon}(\mathbf{k}_{\iota},1)&=\cos\chi_{\iota}\,\hat{\theta}_{\iota}+\sin\chi_{\iota}\,\hat{\varphi}_{\iota},\\ \hat{\varepsilon}(\mathbf{k}_{\iota},2)&=-\sin\chi_{\iota}\,\hat{\theta}_{\iota}+\cos\chi_{\iota}\,\hat{\varphi}_{\iota}, \end{aligned} \tag{37} $$

where $\chi_{\iota}$ is a random number uniformly distributed in $(0,2\pi)$, and $\hat{\theta}_{\iota}=(\cos\theta_{\iota}\cos\varphi_{\iota},\ \cos\theta_{\iota}\sin\varphi_{\iota},\ -\sin\theta_{\iota})$ and $\hat{\varphi}_{\iota}=(-\sin\varphi_{\iota},\ \cos\varphi_{\iota},\ 0)$ are the polar and azimuthal unit vectors perpendicular to $\mathbf{k}_{\iota}$. With the wave vectors (described by (32), (33), and (34)) and the polarizations (described by (37)), the endpoints of the sampled vacuum field vectors are plotted on a unit sphere, as shown in Figure 2, which illustrates the isotropy of the distribution.
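One way to carry out such a construction in code is sketched below. The helper `polarization_basis` is illustrative rather than taken from the paper: it builds an orthonormal pair perpendicular to a given wave vector and rotates the pair about it by an angle `chi`:

```python
import numpy as np

def polarization_basis(k, chi):
    """Return two unit polarization vectors orthogonal to k and to each
    other, with the pair rotated about k by the angle chi (radians)."""
    khat = k / np.linalg.norm(k)
    # pick any auxiliary vector not parallel to khat
    a = np.array([1.0, 0.0, 0.0])
    if abs(khat[0]) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(khat, a)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(khat, e1)          # completes a right-handed triad
    # rotate the pair by chi about khat; chi drawn uniformly in (0, 2*pi)
    # makes the polarization orientation isotropic
    eps1 = np.cos(chi) * e1 + np.sin(chi) * e2
    eps2 = -np.sin(chi) * e1 + np.cos(chi) * e2
    return eps1, eps2
```

The rotation leaves the defining properties intact: both vectors stay perpendicular to the wave vector, mutually orthogonal, and their cross product equals the unit wave vector.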

In summary, the vacuum field modes in (36) can each be sampled by a set of four numbers $(\omega_{\iota},\theta_{\iota},\varphi_{\iota},\chi_{\iota})$, which are specified in (32), (33), (34), and (37). The only assumption used in determining these numbers is (35), which is equivalent to the sharp resonance condition (9) used in Boyer’s analysis.

##### 3.2. Equation of Motion in Numerical Simulation

In the unbounded (free) space, the equation of motion in Boyer’s analysis is

$$ m\ddot{x}=-m\omega_{0}^{2}x+m\Gamma\dddot{x}+eE_{x}(t), \tag{38} $$

where the dipole approximation (5) is used. In the bounded space, the equation of motion remains the same, but the vacuum field takes the summation form (36):

$$ E_{x}(t)=\sum_{\iota=1}^{N_{\omega}}\sum_{\lambda=1}^{2}\hat{\varepsilon}_{x}(\mathbf{k}_{\iota},\lambda)\sqrt{\frac{4\pi\hbar\omega_{\iota}}{V}}\cos\!\left(\omega_{\iota}t-\tilde{\theta}(\mathbf{k}_{\iota},\lambda)\right), \tag{39} $$

where $\omega_{\iota}=c|\mathbf{k}_{\iota}|$. The steady-state solution to (38) in the bounded space can be found following Boyer’s approach:

$$ x(t)=\frac{e}{m}\sum_{\iota=1}^{N_{\omega}}\sum_{\lambda=1}^{2}\hat{\varepsilon}_{x}(\mathbf{k}_{\iota},\lambda)\sqrt{\frac{4\pi\hbar\omega_{\iota}}{V}}\,\frac{\cos\!\left(\omega_{\iota}t-\tilde{\theta}(\mathbf{k}_{\iota},\lambda)+\phi_{\iota}\right)}{\sqrt{(\omega_{0}^{2}-\omega_{\iota}^{2})^{2}+\Gamma^{2}\omega_{\iota}^{6}}}, \tag{40} $$

where $\tan\phi_{\iota}=\Gamma\omega_{\iota}^{3}/(\omega_{0}^{2}-\omega_{\iota}^{2})$. While this analytical solution can be evaluated using our method of vacuum mode selection ((32), (33), (34), and (37)), our goal with the numerical simulation is to first reproduce Boyer’s analytical results (8) and then later extend the method to other physical systems. One major obstacle for the numerical approach is the third-order derivative in the radiation damping term, $m\Gamma\dddot{x}$. To circumvent this problem, we follow the perturbative approach in [28–30]. According to classical electrodynamics, the equation of motion for an electron with radiation damping is

$$ m\ddot{x}=F(x,t)+m\Gamma\dddot{x}, \tag{41} $$

where $F$ is the force and $\Gamma$ is the radiation damping coefficient. Under the assumption that the radiation damping term is a small perturbation, the zero-order equation of motion is

$$ m\ddot{x}\simeq F(x,t). \tag{42} $$

The justification for this assumption is that a point-particle description of the electron is used in classical electrodynamics [28, 30]. Using (42), the radiation damping term may be estimated as

$$ m\Gamma\dddot{x}\simeq\Gamma\frac{dF}{dt}, \tag{43} $$

which can be iterated back into the original equation (41) to give a perturbative expansion:

$$ m\ddot{x}\simeq F(x,t)+\Gamma\frac{dF}{dt}. \tag{44} $$

Thus, in this approximated equation of motion the third derivative of position, $\dddot{x}$, is replaced by the first derivative of the force, $dF/dt$. Applying (44) to (38), with $F=-m\omega_{0}^{2}x+eE_{x}(t)$, the equation of motion becomes

$$ m\ddot{x}=-m\omega_{0}^{2}x-m\Gamma\omega_{0}^{2}\dot{x}+eE_{x}(t)+e\Gamma\dot{E}_{x}(t). \tag{45} $$

The orders of magnitude of the terms on the right-hand side are

$$ \left|m\omega_{0}^{2}x\right|\sim m\omega_{0}^{2}X, \tag{46} $$

$$ \left|eE_{x}\right|\sim e\mathcal{E}, \tag{47} $$

$$ \left|m\Gamma\omega_{0}^{2}\dot{x}\right|\sim m\Gamma\omega_{0}^{3}X, \tag{48} $$

$$ \left|e\Gamma\dot{E}_{x}\right|\sim e\Gamma\omega_{0}\mathcal{E}, \tag{49} $$

where $X$ and $\mathcal{E}$ are the orders of magnitude of the particle motion $x$ and the vacuum field $E_{x}$, and the order of magnitude of the time scale of the particle motion is $1/\omega_{0}$ because of the sharp resonance condition (9). In order to compare the two radiation damping terms ((48) and (49)), we use a random walk model to estimate $X$ and $\mathcal{E}$. For a fixed time $t$, $x(t)$ and $E_{x}(t)$ ((39) and (40)) are sums over many modes with random phases. Written as complex numbers, the mathematical forms of $x(t)$ and $E_{x}(t)$ are analogous to a two-dimensional random walk on the complex plane with random phase variables $\tilde{\theta}(\mathbf{k}_{\iota},\lambda)$. Averaging over the random phases, the orders of magnitude $X$ and $\mathcal{E}$ can be estimated by the root-mean-squared distances of $x$ and $E_{x}$. In a two-dimensional random walk model [31], the root-mean-squared distance is given by

$$ d_{\text{rms}}=\ell\sqrt{N}, \tag{50} $$

where $N$ is the number of steps taken and $\ell$ is a typical stepsize; for $x$, the stepsize is the modal amplitude in (40), and for $E_{x}$, it is the modal amplitude in (39). Hence, the orders of magnitude $X$ and $\mathcal{E}$ may be estimated as

$$ X\sim\ell_{x}\sqrt{N_{\text{res}}},\qquad\mathcal{E}\sim\ell_{E}\sqrt{N_{\omega}}, \tag{51} $$

where $N_{\text{res}}$ is the number of sampled modes within the resonance width (using (29) and (30), the value of $X$ in (51) can be estimated as $\sqrt{\hbar/2m\omega_{0}}$, which is consistent with Boyer’s calculation for the standard deviation of position in (10)). The ratio of the two radiation damping terms is evaluated accordingly:

$$ \frac{\left|e\Gamma\dot{E}_{x}\right|}{\left|m\Gamma\omega_{0}^{2}\dot{x}\right|}\sim\frac{e\mathcal{E}}{m\omega_{0}^{2}X}\sim\Gamma\omega_{0}\ll1. \tag{52} $$

Using the sharp resonance condition (9), we therefore approximate the equation of motion (45) to its leading order:

$$ m\ddot{x}=-m\omega_{0}^{2}x-m\Gamma\omega_{0}^{2}\dot{x}+eE_{x}(t). \tag{53} $$
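The square-root growth of the root-mean-squared distance underlying these estimates is easy to verify numerically; a sketch with an assumed step count and walker ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, step, walkers = 400, 1.0, 20000

# 2-D random walk on the complex plane: each step has a fixed length and
# a uniformly random phase, mimicking one mode's random-phase contribution
phases = rng.uniform(0.0, 2.0 * np.pi, size=(walkers, n_steps))
endpoints = np.sum(step * np.exp(1j * phases), axis=1)

# root-mean-squared distance over the ensemble of walkers
d_rms = np.sqrt(np.mean(np.abs(endpoints) ** 2))
# expected scaling: d_rms = step * sqrt(n_steps)
```

With 400 unit steps the RMS distance comes out close to 20, matching $\ell\sqrt{N}$.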

As an additional note, given the estimates of $X$ and $\mathcal{E}$ in (51), the three force terms in (53) have the following relation:

$$ e\mathcal{E}\sim m\Gamma\omega_{0}^{3}X\ll m\omega_{0}^{2}X. \tag{54} $$

Thus, the linear restoring force is the dominant drive for an SED harmonic oscillator, while the vacuum field and the radiation damping act as perturbations. The balance between the vacuum field and the radiation damping constrains the oscillation amplitude to fluctuate in the vicinity of $X$.

Finally, as we have established an approximated equation of motion (53) for numerical simulation, the total integration time (i.e., how long the simulation is set to run) needs to be specified. Upon inspection, two important time scales are identified from the analytical solution of (53):

$$ x(t)=Ce^{-\Gamma\omega_{0}^{2}t/2}\cos\left(\omega_{0}t+\delta\right)+x_{\text{steady}}(t), \tag{55} $$

where $C$ and $\delta$ are coefficients determined by the initial conditions and

$$ x_{\text{steady}}(t)=\frac{e}{m}\sum_{\iota=1}^{N_{\omega}}\sum_{\lambda=1}^{2}\hat{\varepsilon}_{x}(\mathbf{k}_{\iota},\lambda)\sqrt{\frac{4\pi\hbar\omega_{\iota}}{V}}\,\frac{\cos\!\left(\omega_{\iota}t-\tilde{\theta}(\mathbf{k}_{\iota},\lambda)+\phi_{\iota}\right)}{\sqrt{(\omega_{0}^{2}-\omega_{\iota}^{2})^{2}+\Gamma^{2}\omega_{\iota}^{6}}}. \tag{56} $$

The first term in (55) represents the transient motion, and the second term represents the steady-state motion. The characteristic time of the transient motion is

$$ \tau=\frac{2}{\Gamma\omega_{0}^{2}}. \tag{57} $$

Thus, the simulation should run beyond $\tau$ if one is interested in the steady-state motion. As the steady-state solution is a finite discrete sum of periodic functions, it has a nonphysical repetition time $t_{\text{rep}}$. The total integration time $T$ should satisfy $\tau\ll T\ll t_{\text{rep}}$ to avoid repetitive solutions. A detailed discussion of $t_{\text{rep}}$ can be found in Appendix C. Here we give an estimate,

$$ t_{\text{rep}}\simeq\frac{2\pi}{\delta\omega}, \tag{58} $$

where $\delta\omega$ is the frequency gap between adjacent sampled modes, which can be estimated using (23), (33), and (34).
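The required ordering of the time scales can be illustrated with toy numbers (assumed values, not the paper's parameters):

```python
import math

# assumed toy values
Gamma, omega0 = 1e-3, 1.0           # damping parameter and natural frequency
N_omega, Delta = 500, 0.2 * omega0  # number of sampled frequencies, bandwidth

tau = 2.0 / (Gamma * omega0**2)     # transient (damping) time scale
delta_omega = Delta / N_omega       # typical gap between sampled frequencies
t_rep = 2.0 * math.pi / delta_omega # repetition time of the discrete mode sum

# there must be room to choose an integration time tau << T << t_rep
assert tau < t_rep
```

Here the transient time is 2000 natural time units while the repetition time is about 15700, so an integration window between the two exists; increasing the number of sampled frequencies widens this window further.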

To summarize, (53) is the approximated equation of motion to be used in the numerical simulation. The vacuum field in (53) is given by (39). The specifications of the vacuum field modes, polarizations, and other relevant variables can be found in Section 3.1. To approximate (38) by (53), two conditions are used, namely, the dipole approximation (5) and the sharp resonance condition (9). The simulation parameters should be chosen to satisfy these two conditions, as they are also used in Boyer’s analysis. Lastly, the integration time $T$ for the simulation is chosen within the range $\tau\ll T\ll t_{\text{rep}}$, where $\tau$ and $t_{\text{rep}}$ are given in (57) and (58), respectively.
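A minimal sketch of integrating an equation of motion of the form (53) follows. It uses a hand-rolled classical RK4 integrator rather than the adaptive Cash-Karp scheme used in the paper, with toy parameters, and the vacuum field is switched off so that the integrator can be checked against the expected damping rate:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# toy parameters (not the paper's values)
omega0, Gamma = 1.0, 1e-3

def deriv(t, y):
    x, v = y
    # per unit mass: xddot = -omega0^2 x - Gamma*omega0^2 xdot (+ drive, omitted)
    return np.array([v, -omega0**2 * x - Gamma * omega0**2 * v])

h = 2 * np.pi / omega0 / 40       # 40 steps per natural period
T = 2.0 / (Gamma * omega0**2)     # two energy e-folding times
y = np.array([1.0, 0.0])          # start at rest at x = 1
n = int(round(T / h))
for i in range(n):
    y = rk4_step(deriv, i * h, y, h)

E0 = 0.5 * omega0**2                           # initial energy per unit mass
E = 0.5 * y[1]**2 + 0.5 * omega0**2 * y[0]**2  # final energy per unit mass
# the energy should decay as exp(-Gamma*omega0^2 * t)
```

Adding a sum-of-cosines drive built from the sampled modes turns this into the driven problem; the undriven case is kept here purely so the damping rate is analytically checkable.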

#### 4. Simulation Results

In Section 2, it was shown that the probability distribution of an SED harmonic oscillator is a Gaussian. In Section 3, we developed the methods for a numerical simulation to investigate the dynamics of the SED harmonic oscillator and how they give rise to the Gaussian probability distribution. In this section, the results of the simulation are presented, and the relation between the trajectory and the probability distribution is discussed.

To construct the probability distribution from a particle’s trajectory, two sampling methods are used: sequential sampling and ensemble sampling. In sequential sampling, the position or velocity is recorded in a time sequence from a single particle’s trajectory; in ensemble sampling, the same quantities are recorded only at the end of the simulation, from an ensemble of particle trajectories. The recorded positions or velocities are collected in histograms and then converted to probability distributions for comparison with the analytical result (17). Whereas sequential sampling illustrates the relation between the buildup of the probability distribution and the dynamics of the particle trajectory, ensemble sampling is convenient for statistical interpretation. In addition, ensemble sampling is suitable for parallel computing, which can be used to improve computational efficiency.

##### 4.1. Particle Trajectory and the Probability Distribution

By solving (53) numerically, the steady-state trajectory of the SED harmonic oscillator is obtained and shown in Figure 3. For comparison, the temporal evolution of the vacuum field (39) is also included. The ordinary differential equation (53) is solved using the adaptive 5th-order Cash-Karp Runge-Kutta method [32], and the integration stepsize is set to one-twentieth of the natural period to avoid numerical aliasing. The charge, mass, natural frequency, and vacuum-field frequency range are chosen in terms of the electron charge $e$, the electron mass $m_{e}$, and the resonance width of the harmonic oscillator. The choice of mass is made to bring the modulation time and the natural period of the harmonic oscillator closer to each other. In other words, the equation of motion (53) covers time scales at two extremes, and the choice of mass brings these two scales closer so that the integration time is manageable without losing the physical characteristics of the problem.

Here we would like to highlight some interesting features of the simulated trajectory (Figure 3). First, there appears to be no fixed phase or amplitude relation between the particle trajectory and the instantaneous driving field. Second, the rate of amplitude modulation in the particle trajectory is slower than that in the driving field. To gain insight into these dynamical behaviors, we study the steady-state solution of (55) in the Green function form [33]:

$$ x(t)=\frac{e}{m\omega_{0}}\int_{-\infty}^{t}dt'\,e^{-\Gamma\omega_{0}^{2}(t-t')/2}\sin\!\left(\omega_{0}(t-t')\right)E_{x}(t'), \tag{61} $$

where the sharp resonance condition (9) is used to approximate the damped oscillation frequency by $\omega_{0}$. The solution indicates that the effect of the driving field at any given time $t'$ lasts for a time of order $\tau$ beyond $t'$. In other words, the particle motion at time $t$ is affected by the vacuum field at all previous moments $t'<t$. As the vacuum field fluctuates in time, the fields at two points in time only become uncorrelated when the time separation is much longer than one coherence time (the coherence time of the vacuum field is given in (64)). This property of the vacuum field is imprinted on the particle trajectory, and it explains why the particle trajectory has no fixed phase or amplitude relation with the instantaneous driving field. Another implication of (61) is that it takes a characteristic time $\tau$ for the particle to dissipate the energy gained from the instantaneous driving field. Thus, even if the field has already changed its amplitude, it still takes a while for the particle to follow. This explains why the amplitude modulation in the particle trajectory is slower than that in the driving field (however, in the case of slow field modulation, when the field bandwidth is smaller than the resonance width of the harmonic oscillator, the modulation times of the field and the particle trajectory are the same).

The sequential sampling of a simulated trajectory gives the probability distributions in Figure 4. While Boyer’s result is obtained through ensemble (phase) averaging, the Gaussian probability distribution shown here is constructed from a single trajectory and is identical to the probability distribution of a ground state quantum harmonic oscillator.

To understand how the trajectory gives rise to a Gaussian probability distribution, we investigate the particle dynamics on two time scales. On the short time scale, the particle undergoes harmonic oscillation with an essentially constant amplitude and period $2\pi/\omega_{0}$. Such an oscillation makes a classical double-peak probability distribution. On the long time scale, the oscillation amplitude modulates. As a result, different parts of the trajectory have double-peak probability distributions associated with different oscillation amplitudes, which add up to make the final probability distribution a Gaussian, as shown in Figure 5. To verify this idea, we attempt to reconstruct the Gaussian probability distribution from the double-peak probability distributions of different sections of the trajectory. We approach this problem by numerically sampling the oscillation amplitudes at a fixed time-step. To determine the appropriate sampling time-step, we inspect the steady-state solution (55) in its complex form:

$$ x(t)=\mathrm{Re}\left[\sum_{\iota}\tilde{c}_{\iota}\,e^{-i\omega_{\iota}t}\right]. \tag{62} $$

Since the frequency components $\omega_{\iota}=\omega_{0}+\Delta\omega_{\iota}$ can be sampled symmetrically around $\omega_{0}$, we factorize $x(t)$ into an amplitude term $A(t)$ and an oscillation term $e^{-i\omega_{0}t}$:

$$ x(t)=\mathrm{Re}\left[A(t)\,e^{-i\omega_{0}t}\right],\qquad A(t)=\sum_{\iota}\tilde{c}_{\iota}\,e^{-i\Delta\omega_{\iota}t}. \tag{63} $$

The complex components $\tilde{c}_{\iota}e^{-i\Delta\omega_{\iota}t}$ rotate in the complex plane at different rates $\Delta\omega_{\iota}$. At any given time, the configuration of these components determines the magnitude of $A(t)$, as shown in Figure 6. As time elapses, the configuration evolves, and the amplitude changes with time. When the elapsed time is much shorter than the shortest rotating period, the change in the amplitude is negligible:

$$ \tau_{\text{coh}}=\frac{2\pi}{\max_{\iota}|\Delta\omega_{\iota}|}=\frac{4\pi}{\Delta}, \tag{64} $$

where $\Delta$ is the full width of the sampled frequency range. Here we denote this shortest rotating period, $\tau_{\text{coh}}$, as the coherence time (the coherence time as defined here is equivalent to the temporal width of the first-order correlation function (autocorrelation); as the autocorrelation of the simulated trajectory is the Fourier transform of the spectrum according to the Wiener-Khinchin theorem, it has a temporal width equal to the coherence time calculated here). For the problem at hand, it is clear that the sampling of oscillation amplitudes should use a time-step greater than $\tau_{\text{coh}}$.

A representative sampling of the oscillation amplitudes, with successive sampled amplitudes separated by one coherence time, is shown in Figure 7. In this figure, the histogram of the sampled oscillation amplitudes shows that the occurrence of very large or very small amplitudes is rare; most of the sampled amplitudes have a medium value. This is because the occurrence of extreme values requires complete alignment or misalignment of the complex components of $A(t)$. Most of the time, the complex components are in partial alignment and thus give a medium value of $|A|$. Interestingly, the average sampled amplitude is close to the oscillation amplitude predicted by the random walk model (51). Using the amplitude distribution given in Figure 7, a probability distribution can be constructed by adding up the double-peak probability distributions:

$$ P(x)=\int_{0}^{\infty}dA\,P_{A}(A)\,P(x\,|\,A),\qquad P(x\,|\,A)=\frac{1}{\pi\sqrt{A^{2}-x^{2}}}\quad(|x|<A), \tag{65} $$

where $A$ is the oscillation amplitude, $P_{A}(A)$ is the amplitude distribution, and $P(x\,|\,A)$ is the corresponding double-peak probability distribution. This constructed probability distribution is a Gaussian and is identical to the simulation result shown in Figure 4. The reconstruction of the Gaussian probability distribution indicates that the transition from the double-peak distribution to the Gaussian distribution is due to the amplitude modulation driven by the vacuum field.
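This mechanism can be checked with a toy model, independent of the simulation itself. For a narrowband Gaussian process the envelope is Rayleigh distributed, and mixing the double-peak (arcsine) distributions over Rayleigh-distributed amplitudes recovers a Gaussian; a hedged sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0          # target standard deviation of the position
n = 1_000_000

# envelope of a narrowband random process: Rayleigh-distributed amplitudes
A = rng.rayleigh(scale=sigma, size=n)

# within each coherence interval the motion is harmonic, x = A*cos(phase);
# for a fixed A this produces the double-peak (arcsine) distribution
phase = rng.uniform(0.0, 2.0 * np.pi, size=n)
x = A * np.cos(phase)

# the mixture of arcsine distributions over Rayleigh amplitudes is Gaussian:
# std(x) -> sigma, and the fourth moment -> the Gaussian value 3*sigma^4
```

This is the statistical skeleton of Figure 5: each amplitude contributes a double-peak distribution, and the amplitude modulation fills in the center to produce the Gaussian.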

##### 4.2. Phase Averaging and Ensemble Sampling

In many SED analyses [4–7, 9, 34], random phase averaging is used to obtain the statistical properties of the physical system. A proper comparison between numerical simulation and analysis should thus be based on ensemble sampling. In each realization of ensemble sampling, the particle is prepared with identical initial conditions, but the vacuum field differs in its initial random phases $\tilde{\theta}(\mathbf{k},\lambda)$. The different initial random phases correspond to the different physical realizations in random phase averaging. At the end of the simulation, physical quantities such as position and momentum are recorded from the ensemble of trajectories.

The ensemble sampling of the simulation gives the probability distributions in Figure 8. The position and momentum distributions satisfy the Heisenberg minimum uncertainty relation, as predicted by Boyer’s analysis. In addition, Boyer proposed a mechanism for the minimum uncertainty using an energy-balance argument: he calculated the energy gain from the vacuum field and the energy loss through radiation damping, and he found that their delicate balance results in the minimum uncertainty relation [4]. We confirm this balancing mechanism by turning off the radiation damping in the simulation and observing that the minimum uncertainty relation no longer holds (see Figure 9).

Unlike sequential sampling, ensemble sampling has the advantage that the recorded data are fully uncorrelated. As a result, the integration time does not need to be long compared to the coherence time. However, since only one data point is recorded from each trajectory, a simulation with ensemble sampling takes longer than one with sequential sampling: a typical simulation run with sequential sampling takes 2.3 hours to finish, but with ensemble sampling it takes 61 hours. A remedy to this problem is to parallelize the simulation. (The parallelization of the simulation program was developed and benchmarked with assistance from the University of Nebraska Holland Computing Center. The program is written in Fortran and parallelized using the message passing interface (MPI) [32, 35]. The compiler used in this work is the GNU Compiler Collection (GCC) gcc-4.4.1, and the MPI wrapper is openmpi-1.4.3.) The parallelization scheme for ensemble sampling is straightforward, since each trajectory is independent except for its random initial phases. To reduce interprocessor communication and computation overhead, each processor is assigned an equal amount of work. The parallelized program was benchmarked and shows an inverse relation between computation time and the number of processors (see Figure 10). Defining the computation speedup as the ratio of the single-processor computation time to the multiprocessor computation time, the inverse relation shown in Figure 10 indicates the ideal performance of linear speedup. In addition, the parallelized code is advantageous for testing the numerical convergence of the simulation. In Figure 11, the convergence of the ensemble-averaged energy as a function of the number of sampled frequencies is shown.
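The speedup bookkeeping is simple; a short sketch with hypothetical timings shaped like Figure 10's inverse relation (the numbers below are ours, not benchmark data from the paper):

```python
# Speedup of an embarrassingly parallel ensemble: S(p) = T_1 / T_p.
# Ideal (linear) speedup means T_p = T_1 / p, i.e. S(p) = p.
def speedup(t_single: float, t_multi: float) -> float:
    return t_single / t_multi

# Hypothetical wall-clock times in hours for p processors
timings = {1: 61.0, 2: 30.5, 4: 15.25, 8: 7.625}
for p, tp in timings.items():
    print(f"p={p}: S={speedup(timings[1], tp):.1f}")   # S(p) = p when ideal
```

Because the trajectories share no state beyond their independently drawn initial phases, assigning each processor an equal number of trajectories achieves this ideal scaling up to communication overhead.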
As highlighted in the figure, only a small number of sampled modes is needed for the simulation to agree with the analytical result. That this number is low indicates that our method of vacuum mode selection is efficient.

#### 5. Conclusions

The analytical probability distribution of an SED harmonic oscillator is obtained in Section 2. The details of our numerical methods, including vacuum mode selection, are documented in Section 3. Agreement is found between the simulation and the analytical results, as both sequential sampling and ensemble sampling give the same probability distribution as the analytical result (see Figures 4 and 8). Numerical convergence is reached with a low number of sampled vacuum field modes, which indicates that our method of vacuum mode selection ((32), (33), (34), and (37)) is effective in achieving a representative sampling.

As the probability distribution constructed from a single trajectory is a Gaussian and satisfies the Heisenberg minimum uncertainty relation, we investigated the relation between the Gaussian probability distribution and the particle’s dynamical properties. The slow amplitude modulation of the SED harmonic oscillator is found to be the cause of the transition from the double-peak probability distribution to the Gaussian probability distribution (see Figures 5, 7, and 12).

#### 6. Discussion: Application of the Simulation to Other Physical Systems

In quantum mechanics, the harmonic oscillator has excited, coherent, and squeezed states. A natural extension of our current work is to search for the SED correspondence of such states. Currently, we are investigating how a Gaussian pulse at different harmonics of the oscillator's natural frequency affects the SED harmonic oscillator. Can the SED harmonic oscillator support a discrete excitation spectrum, and if so, how does it compare with the prediction of quantum mechanics? Such a study is interesting in the broader view of Milonni’s comment that SED is unable to account for the discrete energy levels of interacting atoms [1], and also Boyer’s comment that at present the line spectra of atoms are still unexplained in SED [36].

The methods of our numerical simulation may be applicable to other quantum systems related to the harmonic oscillator, such as a charged particle in a uniform magnetic field and the anharmonic oscillator [5, 34]. In the first example, a classical particle in a uniform magnetic field performs cyclotron motion. Such a system can be viewed as a two-dimensional oscillator whose natural frequency is set by the Larmor frequency. A quantum mechanical calculation for the same system, however, reveals Landau quantization: the quantum orbitals of cyclotron motion are discrete and degenerate. Such a system presents a challenge to SED. In the second example, a harmonic potential can be modified to include anharmonic terms of various strengths. Heisenberg considered such a system a critical test in the early development of quantum mechanics [37, 38]. A study of the anharmonic oscillator is thus a natural extension of our current work and may serve as a test for SED.

Lastly, over the last decades there has been sustained interest in explaining the origin of electron spin and the mechanism behind electron double-slit diffraction with SED [15, 39–41]. Several attempts have been made to construct a dynamical model that accounts for electron spin. In 1982, de la Peña calculated the phase-averaged mechanical angular momentum of a three-dimensional harmonic oscillator; the result deviates from the electron spin magnitude by a factor of 2 [39]. One year later, Sachidanandam derived the intrinsic spin one-half of a free electron in a uniform magnetic field [40]. Although Sachidanandam’s calculation is based on the phase-averaged canonical angular momentum, his result is consistent with Boyer’s earlier work, in which Landau diamagnetism is derived via the phase-averaged mechanical angular momentum of an electron in a uniform magnetic field [5]. Although these results are intriguing, the most important aspect of spin, its quantization, has not been shown. If passed through a Stern-Gerlach magnet, will the electrons in the SED description split into two groups of trajectories? (The electron Stern-Gerlach effect is an interesting but controversial topic in its own right. Whereas Bohr and Pauli asserted that an electron beam cannot be separated by spin based on the concept of classical trajectories [42], Batelaan et al. [43, 44] and Dehmelt argue that one can do so with certain Stern-Gerlach-like devices [45, 46].) At this point, the dynamics become delicate and rather complex. A numerical simulation may be helpful in further investigating such a model of spin.

On the other hand, over the years claims have been made that SED can predict electron double-slit diffraction [4, 15, 41]. To explain the experimentally observed electron double-slit diffraction [49–51] (see [47, 48] for a movie of the single-electron buildup of a double-slit diffraction pattern), different mechanisms motivated by SED have been proposed [15, 41], but no concrete calculation has been given except for a detailed account of the slit-diffracted vacuum field [52]. In 1999, Kracklauer suggested that particles steered by the modulating waves of the SED vacuum field should display a diffraction pattern when passing through a slit, since the vacuum field itself is diffracted [41]. In recent years, another diffraction mechanism was proposed by Cavalleri et al. in relation to a postulated electron spin motion [15]. Despite these efforts, Boyer points out in a recent review article that there is still no concrete SED calculation of double-slit diffraction [36]. Boyer suggests that because the correlation function of the vacuum field near the slits is modified by the slit boundary, the motion of the electron near the slits should be influenced as well. Can the scattering of the vacuum field be the physical mechanism behind electron double-slit diffraction (see Figure 13)? As Heisenberg’s uncertainty relation is a central feature of all matter diffraction phenomena, any proposed mechanism for electron double-slit diffraction must be able to account for it. For the harmonic oscillator, SED demonstrates a mechanism that gives rise to the Heisenberg minimum uncertainty. We hope that the current simulation method may help provide a detailed investigation of the proposed SED mechanisms for electron slit diffraction.

#### Appendices

#### A. The Vacuum Field in Unbounded and Bounded Space

In “unbounded” space, the modes are continuous and the field is expressed as an integral. In “bounded” space, the modes are discrete and the field is expressed as a summation. In both cases, the expression for the field amplitude needs to be determined (see Appendices A.1 and A.2). The integral expression facilitates comparison with the analytical calculations in previous papers [4–7, 9], while the summation expression is what we use in our numerical work.

##### A.1. Unbounded Space

The homogeneous solution of Maxwell’s equations in unbounded space is equivalent to the solution of a wave equation, in which the field amplitude for each mode is as yet undetermined and has the unit of electric field (V/m), the propagation direction is given by the unit wave vector, and two vectors form an orthonormal polarization basis in the plane perpendicular to the wave vector.

Without loss of generality, a random phase can be factored out from the field amplitude. The field amplitude can then be determined through the phase-averaged energy density, into which the vacuum permittivity and vacuum permeability enter through the electric and magnetic field energies, and the random phase is the one introduced in (A.2). To evaluate the phase-averaged energy density, we first calculate the squared electric and magnetic fields using (A.2). The random phase average can then be carried out with the standard relation for independent uniform phases [9]; applying it to (A.5), we obtain the phase-averaged squares of the fields from (A.4) and (A.7). This calculation leads to a relation between the field amplitude and the phase-averaged energy density in unbounded space. Now, if we postulate a vacuum energy of ℏω/2 for each mode, then in a bounded cubic space of volume V the vacuum energy density is the sum of ℏω/2 over all modes divided by V. In the limit of unbounded space (V → ∞), the sum over modes becomes an integral and the vacuum energy density takes its continuum form. Comparing this result with (A.9) determines the squared field amplitude. Assuming the field amplitude is a positive real number, the vacuum field amplitude in unbounded space is determined, and the vacuum field in unbounded space is found, which is (1).
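The key step in this derivation is the random phase average, which gives 1/2 when the two modes are identical and 0 when they are distinct. A quick Monte-Carlo check of this relation (ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
theta1 = rng.uniform(0, 2 * np.pi, n)   # random phase of mode 1
theta2 = rng.uniform(0, 2 * np.pi, n)   # independent random phase of mode 2

same = np.mean(np.cos(theta1) * np.cos(theta1))  # identical modes: <cos^2> -> 1/2
diff = np.mean(np.cos(theta1) * np.cos(theta2))  # distinct modes: average -> 0
print(f"same-mode average = {same:.4f}, cross-mode average = {diff:.4f}")
```

This is why only diagonal terms survive the phase average of the squared fields, reducing the energy density to a sum over mode intensities.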

##### A.2. Bounded Space

The solution of the homogeneous Maxwell’s equations in bounded space has the summation form, in which the field amplitude for each discrete mode is as yet undetermined and has the unit of electric field (V/m), the propagation direction is given by the unit wave vector, and two vectors form an orthonormal polarization basis in the plane perpendicular to the wave vector.

Using the fact that the random phases of two distinct modes are independent, so that the phase average over a product of cosines factorizes, we can follow the same argument as in Appendix A.1 and obtain the phase-averaged energy density in bounded space. Again, if we postulate a vacuum energy of ℏω/2 for each mode, then in a bounded space of volume V the vacuum energy density is the sum of ℏω/2 over all modes divided by V. Comparing (A.17) and (A.18) determines the vacuum field amplitude in bounded space, and therefore the expression for the vacuum field in bounded space is obtained.

#### B. Isotropic Polarization Vectors

A wave vector chosen along the z-axis has an orthonormal polarization basis in the xy-plane, where the random polarization angle is uniformly distributed in [0, 2π). To obtain the wave vector in (32), the z-axis wave vector can first be rotated counterclockwise about the y-axis by the polar angle and then counterclockwise about the z-axis by the azimuthal angle; the corresponding rotation matrix is constructed accordingly, and the rotated wave vector follows. In the same manner, we can rotate the two polarization vectors with the same rotation matrix and obtain an isotropically distributed polarization basis as described in (37). (After the rotation, the uniformly distributed circle spans a uniformly distributed spherical surface.)
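Assuming the standard convention (rotate about the y-axis by the polar angle, then about the z-axis by the azimuthal angle), the construction can be sketched and its orthonormality verified numerically; the function name and sample angles here are ours:

```python
import numpy as np

rng = np.random.default_rng(3)

def polarization_basis(theta, phi, chi):
    """Rotate the z-axis wave vector and its in-plane polarization pair by
    R = Rz(phi) @ Ry(theta); chi is the random in-plane polarization angle."""
    Ry = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0,          0.0,         1.0]])
    R = Rz @ Ry
    k_hat = R @ np.array([0.0, 0.0, 1.0])             # rotated wave vector
    eps1 = R @ np.array([ np.cos(chi), np.sin(chi), 0.0])  # rotated polarization 1
    eps2 = R @ np.array([-np.sin(chi), np.cos(chi), 0.0])  # rotated polarization 2
    return k_hat, eps1, eps2

k_hat, e1, e2 = polarization_basis(0.7, 1.9, rng.uniform(0, 2 * np.pi))
# The triple should remain orthonormal under the rotation
print(np.dot(k_hat, e1), np.dot(k_hat, e2), np.dot(e1, e2))
```

Because rotations preserve inner products, the rotated triple stays orthonormal for any choice of angles.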

#### C. Repetitive Time

A field composed of finitely many discrete frequencies repeats itself after the least common multiple (LCM) of the periods of its frequency components. An example of a two-frequency beat wave is given in Figure 14. Given the frequency spectrum, one can draw a relation between the repetition time and the greatest common divisor (GCD) of the frequencies. The derivation of the relation in (C.4) is as follows. First, each frequency is written as a positive integer multiple of the GCD of the frequencies. These integer multipliers can share no common factor greater than one; otherwise the GCD property (C.6) would be contradicted. One can therefore conclude that the repetition time is the reciprocal of the GCD of the frequencies.
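For integer frequencies (in hertz), the relation between the repetition time and the GCD can be sketched directly; the example frequencies are ours, not those of Figure 14:

```python
from functools import reduce
from math import gcd

def repetition_time(freqs_hz):
    """Repetition time of a sum of commensurate cosines:
    T = 1 / gcd(f1, f2, ...), equivalently the LCM of the component periods."""
    return 1.0 / reduce(gcd, freqs_hz)

# Two-frequency beat wave: gcd(2 Hz, 3 Hz) = 1 Hz, so the field repeats every 1 s
print(repetition_time([2, 3]))   # -> 1.0
```

For non-integer but commensurate frequencies, the same formula applies after rescaling to a common rational unit.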