Abstract

Simulations of Shapes and Shifts of Spectral Lines (SSSL) are important as the third powerful research methodology, in addition to theories and experiments. However, there is a growing tendency in physics in general, and in the area of SSSL in particular, to consider the ultimate test of any theory to be the comparison with the results of a code based on fully-numerical simulations starting from scratch rather than from some analytical advance. In this paper, we show by examples that fully-numerical simulations are often not properly verified and validated, fail to capture emergent principles and phenomena, and lack physical insight. Physics is an experimental science. Therefore, the ultimate test of any theory, including theories of SSSL, should be the comparison with experiments conducted in well-controlled conditions (benchmark experiments).

1. Introduction

By a commonly accepted classification, the determination of Shapes and Shifts of Spectral Lines (SSSL) belongs to the highest level of spectroscopy, compared to the determination of unperturbed wavelengths and frequency-integrated intensities of spectral lines. In plasmas with a high degree of ionization, SSSL are controlled primarily by various electric fields: this is the Stark broadening of spectral lines (the term includes both Stark shapes and Stark shifts). In weakly ionized plasmas, a significant contribution to SSSL can come from pressure broadening by neutrals.

The research area covered by this special issue includes both the SSSL dominated by various electric fields (including electron and ion microfields in strongly ionized plasmas) and the SSSL controlled by neutral particles. In physicists' slang, the former is called "plasma broadening" while the latter is called "neutral broadening" (of course, the results on neutral broadening apply also to the spectral line broadening in neutral gases).

The subject of SSSL is a rather old field: it began about 100 years ago for plasma broadening and even earlier (about 150 years ago) for neutral broadening. Despite its age, the research area of SSSL is alive and flourishing and has a bright future.

Indeed, the growth of this field is manifested both "horizontally" (in terms of the number of publications) and "vertically" (in terms of breakthroughs to advanced approaches and better physical insights). Just over the last 5 years, several books (see, e.g., [1-3]) and numerous papers in refereed journals and/or conference proceedings (examples of the latter are [4, 5]) have been published. Examples of vertical advances include (but are not limited to):

(i) unification of the impact and one-perturber theories of line shapes [6],
(ii) QED approach to modeling spectra of isolated atoms and ions, as well as those influenced by a strong laser field [7, 8],
(iii) path integral formalism for the spectral line shapes in plasmas [9],
(iv) temperature dependence of the Stark broadening dominated by strong collisions [10],
(v) various new features in X-ray spectral lines from plasmas, such as charge-exchange-caused dips [11], Langmuir-waves-caused dips [12], and effects of external laser fields [13-16],
(vi) formalism of dressed atomic states for diagnostics (including laser-aided diagnostics) of quasimonochromatic electric fields in plasmas [17, 18],
(vii) formalism of atomic states dressed by the broadband electric microfield in plasmas [2, 19].

Even the above incomplete list demonstrates the vitality of the area of SSSL. This special issue is further proof of this fact. There is no doubt in our mind that the field of SSSL will continue thriving. However, there is a trend which, if continued, could jeopardize this research area. It has to do with the following.

One of the most important questions in physics in general and in SSSL in particular is what should be the ultimate test of various theories. There are two different schools of thought on this issue.

One school of thought considers the comparison with benchmark experiments as the ultimate test of a theory. Benchmark experiments are those conducted in well-controlled conditions; for example, for SSSL in plasmas, benchmark experiments are those where the plasma parameters are determined independently of the SSSL theory being tested.

Another school of thought insists that the ultimate test of a particular theory is the comparison with another theory (!), specifically, with the results of a code based on fully-numerical simulations starting from scratch rather than from some analytical advance.

There is no question about the importance of simulations as the third powerful research methodology, in addition to theories and experiments. Large-scale codes have been created to simulate a wide variety of complicated phenomena.

However, first, not all large-scale codes are properly verified and validated. Second, fully-numerical simulations are generally ill-suited for capturing so-called emergent principles and phenomena, such as conservation laws, the laws of thermodynamics, detailed balance, and the preservation of symmetries. Third, like any fully-numerical method, they lack physical insight. A number of physicists started warning about this several years ago. Let us present the relevant quotations.

In 2005, Post and Votta published a very insightful article [20], the main point of which was that "much of computational science is still troublingly immature" and that new methods of verifying and validating complex codes are necessary and should be mandatory. Further, they wrote:

A computational simulation is only a model of physical reality. Such models may not accurately reflect the phenomena of interest. By verification we mean the determination that the code solves the chosen model correctly. Validation, on the other hand, is the determination that the model itself captures the essential physical phenomena with adequate fidelity. Without adequate verification and validation, computational results are not credible.
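To make the distinction concrete, here is a minimal sketch (our own illustration, not part of [20]) of what verification can look like in practice: one checks that a code reproduces a problem with a known analytic solution at the order of convergence expected from the numerical scheme. Validation, by contrast, can only be addressed by comparison with measurements. The model, the integrator, and the step sizes below are chosen purely for illustration.

import numpy as np

# Verification sketch: the chosen model is dy/dt = -y, y(0) = 1, whose exact
# solution is exp(-t). A correctly implemented midpoint (RK2) integrator must
# show an observed convergence order close to 2 as the step size is refined.

def rk2_solve(t_end, n_steps):
    dt = t_end / n_steps
    y = 1.0
    for _ in range(n_steps):
        k1 = -y                        # slope at the start of the step
        k2 = -(y + 0.5 * dt * k1)      # slope at the midpoint
        y += dt * k2
    return y

t_end = 1.0
exact = np.exp(-t_end)
errors = [abs(rk2_solve(t_end, n) - exact) for n in (100, 200, 400, 800)]

# Each halving of the step size should reduce the error by roughly a factor of 4.
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("observed convergence orders:", ", ".join(f"{p:.2f}" for p in orders))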

They described the underlying problems as follows:

Part of the problem is simply that it's hard to decide whether a code result is right or wrong. Our experience as referees and editors tells us that the peer review process in computational science generally doesn't provide as effective a filter as it does for experiment or theory. Many things that a referee cannot detect could be wrong with a computational-science paper. The code could have hidden defects, it might be applying algorithms improperly, or its spatial or temporal resolution might be inappropriately coarse.

The few existing studies of error levels in scientific computer codes indicate that the defect rate is about seven faults per 1000 lines of Fortran. That's consistent with fault rates for other complex codes in areas as diverse as computer operating systems and real-time switching. Even if a code has few faults, its models and equations could be inadequate or wrong.

The existing peer review process for computational science is not effective. Seldom can a referee reproduce a paper's result. Generally a referee can only subject a paper to a series of fairly weak plausibility checks: Is the paper consistent with known physical laws? Is the author a reputable scientist? Referees of traditional theoretical and experimental papers place some reliance on such plausibility checks, but not nearly to the degree a computational-science referee must. The plausibility checks are, in fact, sometimes worse than inadequate.

This was written by two leading experts in computational science. Indeed, Post is a computational physicist at Los Alamos National Laboratory and an associate Editor-in-Chief of the journal Computing in Science and Engineering; Votta is a Distinguished Engineer at Sun Microsystems Inc. and an associate Editor of the IEEE Transactions on Software Engineering. No wonder their paper prompted numerous comments, published in [21]. In one of the comments, J. Loncaric from Los Alamos wrote, in particular:

Unfortunately, these days universities turn out users who employ codes as black boxes but do not understand what they do or when their results can be trusted.

Further, speaking of the components of a code, he added:

components can be combined, but their combination could be wrong even though the components test well individually. A combination that is insensitive to minor component errors could still give invalid results. Each component has an unstated region of applicability that is often horribly complicated to describe, yet the combination could unexpectedly exceed individual component limits.

Responding to the comments, Post and Votta wrote in particular [21]:

The second point Loncaric highlights is that a model for a natural system—physical, chemical, biological, and so forth—is often much more than the sum of the individual components. For physical systems, Robert Laughlin recently pointed out that much of science today is inherently reductionist. Present scientific research paradigms emphasize the detailed study of the individual elements that contribute to a complex system's behavior. High-energy physics, for example, involves the study of fundamental particles at progressively higher accelerator energies. Yet successful models of complex systems, such as low-temperature superconductors, are relatively insensitive to the detailed accuracy of the individual constituent effects. Laughlin stresses that successful models capture the emergent principles that determine the behavior of complex systems. Examples of these emergent principles are conservation laws, the laws of thermodynamics, detailed balance, and preservation of symmetries.

Since a computational simulation is only a model of nature, not nature itself, there is no assurance that a collection of highly accurate individual components will capture the emergent effects. Yet most computational simulations implicitly assume that if each component is accurate, the whole code will be accurate. Nature includes all of the emergent phenomena, but a computational model may not. This perspective underscores the importance of validation of the integrated code and of individual models.
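As a toy illustration of this point (ours, not Post and Votta's), consider the following Python sketch. It integrates a one-dimensional harmonic oscillator with two schemes whose per-step errors are comparable; yet the explicit Euler scheme lets the total energy drift without bound, while the symplectic leapfrog scheme keeps it bounded. Component-level accuracy alone does not guarantee that an emergent constraint, such as energy conservation, is respected.

import numpy as np

# Toy model: d2x/dt2 = -x (unit mass, unit frequency), exact energy 0.5*(v^2 + x^2).

def euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x          # explicit Euler: energy drifts
    return x, v

def leapfrog(x, v, dt, steps):
    for _ in range(steps):
        v_half = v - 0.5 * dt * x              # half kick
        x = x + dt * v_half                    # drift
        v = v_half - 0.5 * dt * x              # half kick (symplectic scheme)
    return x, v

def energy(x, v):
    return 0.5 * (v**2 + x**2)

x0, v0, dt, steps = 1.0, 0.0, 0.01, 100_000
e0 = energy(x0, v0)
for name, scheme in [("Euler", euler), ("leapfrog", leapfrog)]:
    x, v = scheme(x0, v0, dt, steps)
    print(f"{name:9s} relative energy error: {abs(energy(x, v) - e0) / e0:.3e}")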

The above general deficiencies of complicated codes have resulted in huge failures of important large-scale projects. Post and Votta described the following examples [20]:

Examples abound of large-scale software failures in fields like information technology and aerospace. The 1995 failure of the European Space Organization's Ariane 5 rocket and the 1999 loss of NASA's Mars Climate Orbiter are still fresh in memory. After the Columbia space shuttle's ill-fated February 2003 launch and first reports of possible problems with the mission, a NASA-Boeing team's computational assessment of potential failure modes yielded misleading conclusions that may have contributed to the tragedy.

The quest for fusion energy provides two more examples of problematic computation. By stretching boundary conditions far beyond what could be scientifically justified, computer simulations were able to “reproduce” the exciting but wrong experimental discovery of sonoluminescent fusion. With regard to the International Thermonuclear Experimental Reactor (ITER), preliminary computational predictions in 1996 of inadequate performance by the proposed facility were wrongly characterized as definitive. Those predictions contributed to the 1998 US withdrawal from that important and promising international undertaking.

As for the research area of SSSL, let us bring up just one example of the unjustifiable reliance on fully-numerical simulations that led to a conclusion contradicting first-principle-based analytical results obtained in various ways. The example concerns a direct coupling of the electron and ion microfields in plasmas. This coupling results from the Acceleration of the Electrons by the Ion Field (AEIF). The AEIF is a universal effect: it affects all kinds of spectral lines. The net result of the AEIF is a reduction of Stark widths and shifts.

This phenomenon was first described analytically in the binary approach in paper [22] with subsequent analytical improvements in paper [23]. Then it was also described analytically in the multiparticle approach in book [2] and paper [19].

More recently, fully-numerical simulations attempting to "mimic" the phenomenon of the AEIF have been conducted [24]. Based on their fully-numerical simulations, performed for the Hα line at just one value of the electron density and just one value of the temperature, the authors of [24] claimed that the AEIF leads to an increase of the electron-caused Stark width rather than to its decrease.

It should be emphasized that those simulations [24] had numerous limitations. The primary limitation was their employment of the binary version of the AEIF. Thus, their results have no bearing on the analytical results for the AEIF obtained in the multiparticle approach [2, 19]. Nevertheless, the controversial simulation results from [24] for the binary version of the AEIF required a resolution.

This issue has been resolved in [25] as follows. The previous analytical calculations of the AEIF [2, 19, 22, 23] were based on the dynamical treatment of the perturbing electrons. In other words, in [2, 19, 22, 23] the change of the trajectories and velocities of the individual perturbing electrons caused by the ion microfield was calculated analytically, and then their contribution to the broadening was averaged over the ensemble of electrons. In [25], a statistical approach was employed instead of the dynamical treatment. It started from the electron velocity distribution function modified by the presence of the ion microfield; this modified distribution function had been calculated (for a different purpose) by Romanovsky and Ebeling in the multiparticle description of the ion microfield [26]. With the help of the modified electron velocity distribution function from [26], the Stark broadening by electrons was then calculated in [25] within the framework of the conventional theory usually attributed to Griem [27] (who is one of the coauthors of [24]). The result showed that the electron Stark broadening decreases.
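To see schematically why accelerating the perturbing electrons tends to reduce the electron-impact width, one can note that in the conventional theory this width scales, up to a slowly varying logarithmic factor, roughly as the average of 1/v over the electron velocity distribution. The following Python sketch is our own toy illustration of the sign of the effect only: it replaces the actual modified distribution of [26] by a crude, hypothetical energy boost applied to Maxwellian electrons and keeps nothing but the 1/v scaling; it is not intended to reproduce the quantitative results of [25].

import numpy as np

# Schematic toy model (not the calculation of [25]): the electron-impact Stark
# width is taken to scale as <1/v> over the electron velocity distribution,
# up to a slowly varying logarithmic factor that is omitted here.

rng = np.random.default_rng(0)
n_electrons = 1_000_000
v_th = 1.0                                   # thermal velocity, arbitrary units

# Unperturbed case: Maxwellian speed distribution (3D Gaussian velocity components).
v_maxwell = np.linalg.norm(rng.normal(0.0, v_th, size=(n_electrons, 3)), axis=1)

# "Accelerated" case: a crude, hypothetical stand-in for the AEIF that adds to
# each electron a fixed energy gain of 0.3 * (m v_th^2 / 2), in units with m = 1.
delta_energy = 0.3 * 0.5 * v_th**2
v_accelerated = np.sqrt(v_maxwell**2 + 2.0 * delta_energy)

width_ratio = np.mean(1.0 / v_accelerated) / np.mean(1.0 / v_maxwell)
print(f"width(accelerated) / width(Maxwellian) ~ {width_ratio:.3f}  (below 1: a decrease)")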

Thus, two totally different analytical approaches (dynamical and statistical) agreed with each other, both predicting a decrease of the electron Stark broadening, and thereby disproved the fully-numerical simulations of [24], which claimed an increase of the electron Stark broadening.

In summary, while simulations are important as the third powerful research methodology, in addition to theories and experiments, there is a growing tendency in physics in general, and in the area of spectral line shapes in particular, to consider the ultimate test of a particular theory to be the comparison with the results of a code based on fully-numerical simulations. However, fully-numerical simulations are often not properly verified and validated, fail to capture emergent principles and phenomena, and lack physical insight. Last but not least: physics is an experimental science. Therefore, the ultimate test of any theory, including theories of spectral line shapes, should be the comparison with experiments conducted in well-controlled conditions (benchmark experiments).