Abstract

The transition from quarks to hadrons in a heavy-ion collision at high energy is usually studied in two different contexts that involve very different transverse scales: local and nonlocal. Models concerned with the p_T spectra and azimuthal anisotropy belong to the former, that is, hadronization at a local point in space, such as the recombination model. The nonlocal problem has to do with the quark-hadron phase transition, where collective behavior through near-neighbor interaction can generate patterns of varying sizes in the (η, φ) space. The two types of problems are brought together in this paper, first as separate brief reviews and then in a discussion of how they are related to each other. In particular, we ask how minijets produced at LHC can affect the investigation of multiplicity fluctuations as signals of critical behavior. It is suggested that the existing data from LHC have sufficient multiplicities in small p_T intervals to make feasible the observation of the distinctive features of clustering of soft particles, as well as voids, that characterize critical behavior at the phase transition from quarks to hadrons, without any ambiguity posed by the clustering of jet particles.

1. Introduction

Critical phenomena constitute a subject that has been extensively investigated in many fields but never explicitly identified in heavy-ion collisions. There are two types of phase transitions involved when nuclei collide. One is QCD deconfinement caused by compression of the nuclei such that quarks are liberated from the nucleons, as probed in the beam energy scan (BES) program at RHIC for collision energies ranging from 7.7 to 19.6 GeV [1]. The other is the quark-hadron phase transition of a quark-gluon plasma that was hot and dense after creation by nuclear collisions at very high energy but has expanded enough that confinement forces set in at the end of the evolution to form hadrons. One expects such a phase transition (PT) to be operative at a collision energy of 200 GeV at RHIC, but no definitive signal has been reported. A baryon-free central region suitable for the Ginzburg-Landau description of PT [2] is more likely to be created, and to offer a detectable signature, only at the Large Hadron Collider (LHC) in Pb-Pb collisions. Our concern in this paper is exclusively with the latter type of PT, where hadronization is a process far removed and distinct from the initial configurations of the colliding nuclei. However, at LHC there is the added complication of jets and shower partons that may contaminate the signals for PT. It is our objective here to clarify these various related issues.

The observables in a heavy-ion collision are the momentum variables (p_T, η, φ) of the produced hadrons, apart from their identities. Since the partons in the medium are not directly observable, one must infer from the multiparticle distribution in momentum space what the nature of the dynamical system is before hadronization occurs. Just as the medium exists over a period of time, so also does one expect the formation of hadrons to occur at various times. The experiment collects all the particles produced in its acceptance windows irrespective of the time of emission in each event. After averaging over many events, it is hard to recognize from the inclusive distributions, or even particle correlations, whether the quark-hadron transition is of a type characteristic of critical behavior.

Hydrodynamical models take the macroscopic approach in describing the evolution of the medium [3]. In treating the fluid flow of the energy-momentum tensor that averages over the microscopic variables of the constituents and adopting simple schemes for freeze-out, the approach relinquishes any interest in the question of the collective behavior of the underlying quarks. That is not to say that there is no collective motion of the fluid. The central theme of our study here is the critical behavior that arises from the tension between the ordered (collective) and the disordered (thermal) dynamics. The macroscopic variables used in hydromodels can describe collective flow but not near-neighbor interactions that generate cooperative behavior.

Event generators that incorporate microscopic dynamics such as parton scattering and hadronization by coalescence, as in a multiphase transport (AMPT) model [4, 5], do not treat the movements of large patches of partons in reaction to confinement forces, and so they also do not contain the dynamics of phase transition. They do reproduce the data on hadronic spectra in much wider ranges in phase space than hydromodels.

Since the systems created in heavy-ion collisions are so complex, any theoretical treatment of them must rely on some approximations and some degree of averaging that can render a tractable description of what is regarded as essential. The focus on p_T distributions and azimuthal anisotropy is orthogonal to the issues of critical behavior, although both sides of the landscape involve the interface of quarks and hadrons.

To extract the signature of quark-hadron PT from the massive data acquired in heavy-ion collisions, it is essential that averaging is not done that erases the signal from the start. Even before averaging, the nature of the structure in momentum space where the thousands of produced particles populate in one event may contain overlapping features that have origins from different dynamics. Clustering due to collective behavior may at some level appear to be similar to clusters of fragmentation products of hard jets. To be able to distinguish the different features is crucial to the construction of a program that can be successful in discovering any evidence for or against critical behavior in heavy-ion collisions.

A good measure of criticality for heavy-ion collisions was proposed two decades ago [6] on the basis of the Ginzburg-Landau theory of second-order PT [7]. It emphasizes the study of the scaling properties of multiplicity fluctuations over a wide range of bin sizes. The numerical value of a scaling exponent ν was derived analytically. Although the universality of ν was verified in a laser experiment at the onset of lasing [8], it has never been tested in heavy-ion experiments because of complications arising from insufficiently high multiplicities. Now, with the data available from LHC on Pb-Pb collisions at 2.76 TeV, dedicated analysis can begin. On the other hand, it has also been found in the study of p_T spectra of hadrons at LHC that shower partons dominate over thermal partons because of the profusely produced minijets at that high collision energy [9]. It is therefore important to understand how the shower partons affect the multiplicity fluctuations due to critical clustering. In clarifying those issues, we will review all the main features of each one and show why minijets present no problem in the search for scaling behavior, whether or not a critical quark-hadron PT exists.

2. Brief Review of Signature of Critical Behavior

Unlike most problems in statistical physics where critical phenomena occur, the temperature cannot be controlled in heavy-ion collisions. Thus, even the notion of criticality has no experimental relevance unless some phenomenological measure can be devised that can be shown, at least on theoretical grounds, to reveal the occurrence of the quark-hadron PT. In a high-energy collision, quarks turn into hadrons anyway, whether or not there is a PT, as in hadronic or leptonic collisions. Those are local properties, for which temperature is not an appropriate description of the system. To qualify as a PT, the system must have enough constituents to form a quasi-equilibrium state in which near-neighbor interactions can result in correlated behavior in patches of neighborhoods, as in ferromagnetism or in QCD-confined clusters. Those are semiglobal properties that require a spatially extended region to display their characteristics.

In heavy-ion collisions, the angular components (η, φ) of a particle's momentum can be related to a point on the surface of a cylinder that contains the quark-gluon plasma (QGP). Since the plasma lasts for some time, it is reasonable to expect, even without hydrodynamics, that the interior is hotter than the surface, and so hadrons are more likely to be formed on the surface as the system expands and cools. It is the spatial pattern of where the hadrons are formed that is of interest. But those patterns at different times are superimposed on one another as time progresses, and so, at the end, when the detector collects all the particles in any given event, the characteristic features of the fluctuation patterns disappear in the cumulative result. Since it is not feasible to cut the time sequence into small intervals, the best that we can do is to make cuts in p_T in the hope that the high- and intermediate-p_T particles are mostly emitted at early times and that only the bulk matter that remains at late times undergoes a more gradual PT at low p_T. Lacking the ability to tune the temperature T, we need a measure that captures the essence of the quark-hadron PT without reference to T.

The physical basis for our interest in studying the fluctuations of spatial patterns is that it is a characteristic property of critical behavior to exhibit patches of all sizes [10]. At T well below the critical temperature T_c, near-neighbor interactions dominate over thermal randomization, so there is an ordered state throughout, as exemplified by the alignment of spins in a ferromagnet. At T well above T_c, thermal agitation dominates and the state becomes disordered. At T ≈ T_c, the tension between the two forces results in patches of the ordered state of various sizes. For the quark-hadron PT in heavy-ion collisions to exhibit that kind of behavior, the lego plot of hadrons formed in the (η, φ) space should show clusters of all sizes. Since there is no characteristic scale in the problem, there should be a scaling property of the hadron multiplicities in bins of varying sizes. Concrete models have been devised to simulate such critical behavior for hadron observables, ranging from the Ising model [11] on the one end to contracting clusters due to color confinement [12] on the other. Power-law behaviors have indeed been found with characteristic scaling exponents.

To quantify the fluctuation properties of bin multiplicities, normalized factorial moments F_q have been used, where F_q is defined by [13, 14]

F_q = ⟨n(n − 1) ⋯ (n − q + 1)⟩ / ⟨n⟩^q, (1)

in which the averages involve P_n, the multiplicity distribution in a 2D bin of size δ², so that ⟨n⟩ = Σ_n n P_n. There are two ways of interpreting the average ⟨⋯⟩. One is to fix the location of the bin, average over all events, and study the result as a function of the bin width δ. In that case, P_n is the probability of having n particles in that bin after many events. Such an average is called the vertical average and can be denoted by ⟨⋯⟩_v. The other way is to partition the 2D space into square bins (each of width δ on each side) and to take the average over all bins in one event. That is called the horizontal average ⟨⋯⟩_h, for which P_n is the probability of having n particles in any of the bins for any particular event. If the space is large and uniform (which can be achieved by use of the cumulative variables), one expects on the basis of the ergodic principle that the two ways of averaging lead to the same result. Clearly, doing both averages yields better statistics. Since the spatial pattern displays explicitly the event structure, it is best to take the horizontal average first, yielding F_q^e(δ) for each event e, and then the vertical average over all events; that is,

F_q(δ) = ⟨F_q^e(δ)⟩_v. (2)

The advantage of studying F_q is that it filters out statistical fluctuations, for which the reader is referred to the original articles [13, 14] and later reviews [2, 15].
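As an illustration of the two-step averaging in (1) and (2), the following minimal Python sketch computes the horizontally averaged factorial moment per event and then averages vertically over events. The toy events with uniformly distributed particles are an assumption for demonstration only; they carry purely statistical fluctuations, which factorial moments are designed to filter out, so F_q stays near 1.

```python
import numpy as np

def factorial_moment_event(eta, phi, M, q):
    """Horizontally averaged factorial moment F_q^e for one event,
    with the unit (eta, phi) square partitioned into M x M bins."""
    H, _, _ = np.histogram2d(eta, phi, bins=M, range=[[0, 1], [0, 1]])
    n = H.ravel()
    # falling factorial n(n-1)...(n-q+1), averaged over all bins
    num = np.mean([np.prod(m - np.arange(q)) for m in n])
    return num / np.mean(n) ** q

def F_q(events, M, q):
    """Vertical average over events of the horizontal moments, Eq. (2)."""
    return np.mean([factorial_moment_event(e[0], e[1], M, q) for e in events])

rng = np.random.default_rng(0)
# toy events: uniformly distributed particles -> purely statistical fluctuations
events = [(rng.random(500), rng.random(500)) for _ in range(200)]
print(F_q(events, M=10, q=2))   # near 1, since Poisson noise is filtered out
```

For particles with genuine dynamical clustering, F_q would instead rise above 1 and grow as the bin width δ = 1/M decreases.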

If F_q has a power-law dependence on the bin width δ,

F_q(δ) ∝ δ^(−φ_q), (3)

the phenomenon is referred to as intermittency [13, 14]. It implies the lack of any particular spatial scale in the system. Various experiments have found intermittency in collisions of leptons, hadrons, and nuclei, with positive intermittency index φ_q [15, 16]. Since a critical phenomenon like the quark-hadron PT has scaling properties, one can expect intermittency to be indicative of the existence of QGP in nuclear collisions [17]. However, what is indicative is not a necessary condition. Hadronization of QGP may take various routes, and there is no consensus on what that route must necessarily be.

To build a more solid foundation on the quark-hadron PT, it is appealing to refer to the phenomenological theory of Ginzburg-Landau (GL) that has found universal validity in describing critical behaviors [7, 10]. The GL theory is a mean field theory and as such cannot predict reliably the critical exponents for systems with large fluctuations. However, it is sufficient for our purpose, since the temperature is not subject to control in heavy-ion experiments. The first question is how to relate the order parameter φ in that theory to the observables in heavy-ion collisions. In [6], |φ|² is identified with the hadron density, and a specific scaling behavior of F_q was found analytically. Two years later, the same treatment was applied to nonlinear optics, where the number of photons created at the threshold of lasing can be related to a second-order PT [8]. There, the GL order parameter is the complex field amplitude of the single-mode laser. Experimentally, it is possible to control the operating point of the laser in the same way that T is under control in a condensed-matter system. It turns out that at the threshold of lasing exactly the same scaling behavior is found as predicted for the quark-hadron PT. Regarding that as an experimental confirmation of the relevance of GL theory to the production of particles, photons or hadrons, let us now go back to the problem of heavy-ion collisions and describe what that theory predicts for multiplicity fluctuations if the QGP undergoes a quark-hadron PT.

The GL free-energy density is [7, 10]

F[φ] = a|φ|² + b|φ|⁴, (4)

where a gradient term is usually also present on the RHS but will be neglected, since we ignore the spatial dependence within a bin in the (η, φ) space that we will consider. We will do only the vertical average at one fixed bin, so according to our identification of |φ|² with the hadron density, the average multiplicity in the bin of size δ² is

⟨n⟩_φ = |φ|² δ². (5)

That means that hadrons are formed wherever φ is nonzero, which for a < 0 and b > 0 in (4) corresponds to having a (hadron) condensate in a second-order PT. The case of a first-order PT, with a > 0, b < 0, and a third term in (4), is more complicated and will be commented on below after the completion of our discussion of the second-order PT.

Since the system can fluctuate around the minimum of F[φ], the multiplicity distribution P_n involves an integration over all φ:

P_n = Z⁻¹ ∫ Dφ P_n⁰[⟨n⟩_φ] e^(−F[φ]), (6)

where

Z = ∫ Dφ e^(−F[φ]) (7)

and P_n⁰ is the Poisson distribution:

P_n⁰[⟨n⟩_φ] = (⟨n⟩_φ)ⁿ e^(−⟨n⟩_φ) / n!. (8)

Applying (6) to (1), one can obtain an analytical formula for F_q, whose dependence on δ is found not to satisfy a simple power-law description, as described by (3) for intermittency [6]. However, it can be established that it has the property

F_q ∝ F_2^(β_q) (9)

for an extended range of δ, a behavior that has been referred to as F-scaling. Furthermore, the scaling exponent satisfies

β_q = (q − 1)^ν, ν = 1.304, (10)

to a high degree of accuracy. The index ν is independent of the GL parameters a and b (and therefore independent of T) so long as a < 0 and b > 0, and it is independent of the bin size δ [6, 18]. It is an amazingly simple numerical quantity that connects GL theory to a hadronic observable in heavy-ion collisions. In photon counting at the threshold of lasing, this index has been verified accurately [8]. It is so far the only experiment that has examined the relevance of F-scaling to PT in any system.
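The content of (4)–(10) can be checked numerically in a single-bin approximation, where the factorial moments of P_n in (6) reduce to moments of |φ|² taken with the weight e^(−F[φ]). The sketch below uses illustrative values a = −1, b = 1, and a range of effective bin-size parameters t standing in for the δ dependence (all assumptions for demonstration, not the analytical treatment of [6]); it then fits β_q from the F-scaling plot (9) and ν from (10).

```python
import numpy as np

phi = np.linspace(0.0, 10.0, 4001)

def gl_moments(t, a=-1.0, b=1.0, qmax=5):
    """Factorial moments F_q from the GL weight exp(-t*(a*phi^2 + b*phi^4)).
    With Poisson emission at mean proportional to phi^2, the factorial
    moments of P_n in (6) reduce to moments of phi^2 over this weight."""
    w = np.exp(-t * (a * phi**2 + b * phi**4))
    m = [(phi**(2 * q) * w).sum() / w.sum() for q in range(1, qmax + 1)]
    return [m[q - 1] / m[0]**q for q in range(2, qmax + 1)]   # F_2 ... F_5

ts = np.geomspace(0.05, 5.0, 25)          # stand-in for the bin-size dependence
F = np.array([gl_moments(t) for t in ts])  # columns: F_2, F_3, F_4, F_5
lnF2 = np.log(F[:, 0])
# beta_q as slopes of ln F_q vs ln F_2, Eq. (9); then nu from Eq. (10)
betas = [1.0] + [np.polyfit(lnF2, np.log(F[:, j]), 1)[0] for j in (1, 2, 3)]
nu = np.polyfit(np.log([1.0, 2.0, 3.0, 4.0]), np.log(betas), 1)[0]
print(betas, nu)   # nu comes out in the neighborhood of 1.3
```

The insensitivity of the fitted ν to the choices of a and b can be verified by rerunning with other values satisfying a < 0, b > 0.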

There are event generators that simulate particle production in heavy-ion collisions. They are usually tuned to reproduce the data on p_T spectra and azimuthal anisotropy. Recently, the AMPT code [4, 5] has been used to generate multiparticle production in Pb-Pb collisions at 2.76 TeV, and the multiplicity fluctuations have been examined using the factorial moments defined in (2) [19]. It is found that not only is there no intermittency, but the exponents φ_q in (3) are negative for all q. That means that there are no large fluctuations. The absence of any sign of PT is not surprising, since no collective interaction has been built into the code. The analysis done in [19] is, however, preparatory to the application of the method to the real data from LHC.

Returning to theory, it should be noted that the GL theory of PT, being mean field, is too smooth to describe the fluctuation pattern in the whole (η, φ) space of any event. The Ising model in 2D, which simulates ferromagnetism on a square lattice, is well suited to amend that deficiency of the GL theory on the one hand and to make a more direct connection with the event structure of particle production on the other. That problem has been investigated in [11]. The hadron multiplicity in a cell of lattice sites is defined in proportion to the net spin of the cell along the direction of the net magnetization of the whole lattice and is treated as zero if the net spin is in the opposite direction. Thus, each Ising configuration corresponds to an event with hadron occupancy fluctuating throughout the lattice. Since T is under control in the Ising simulation, one can examine the T dependence of the average multiplicity in all cells and determine the critical temperature T_c such that the multiplicity is large for T < T_c and small for T > T_c, with a sharp transition at T_c. This is a simulation of PT because the Ising Hamiltonian has a near-neighbor interaction that favors parallel spins but is randomized by thermal motion in accordance with the Boltzmann factor. Thus, the Ising model is orthogonal to event generators like AMPT, which have parton-scattering dynamics but no collective interaction.

Since each event in a heavy-ion collision corresponds to an Ising configuration, the spatial pattern of the event structure can be analyzed by the horizontal factorial moments, and the vertical averaging can be achieved by simulating many Ising configurations at a fixed T. Thus, it is possible to calculate F_q, defined in (2), as a function of the bin size δ, with each bin consisting of many cells. It is found that the strict power-law behavior of intermittency defined by (3) occurs only at T_c; however, F-scaling as defined by (9) is valid for a range of T, not only at T_c. The dependence of β_q on q given in (10) is indeed verified by the Ising model, but the index ν shows a dependence on T [11]. It varies mildly between low T and T_c, and its average agrees well with the result from GL given in (10). Thus, we learn that if the Ising model is a valid description of the quark-hadron PT and supplements the gross prediction of GL theory, then hadrons can form over a range of temperatures after the quark system cools below the point at which the constituents of hadrons can no longer remain deconfined.
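A toy version of the Ising construction described above can be coded directly. In the sketch below, the lattice size, temperatures, and cell size are illustrative assumptions (with T_c ≈ 2.27 in these units); spins are flipped by checkerboard Metropolis updates, and cells are converted to "hadron multiplicities" by the rule of [11]: the net cell spin along the overall magnetization, or zero if it points the other way.

```python
import numpy as np

rng = np.random.default_rng(1)

def sweep(s, T):
    """One checkerboard Metropolis sweep of the 2D Ising model at temperature T."""
    for parity in (0, 1):
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2.0 * s * nb                       # energy cost of flipping each spin
        mask = (np.indices(s.shape).sum(0) % 2) == parity
        s[mask & (rng.random(s.shape) < np.exp(-dE / T))] *= -1
    return s

def cell_multiplicity(s, c=4):
    """Net spin of each c x c cell along the overall magnetization,
    zeroed where it points the other way (the hadron rule of [11])."""
    L = s.shape[0]
    sign = 1.0 if s.sum() >= 0 else -1.0
    cells = s.reshape(L // c, c, L // c, c).sum(axis=(1, 3)) * sign
    return np.maximum(cells, 0.0)

def mean_mult(T, L=32, sweeps=300):
    s = np.ones((L, L))                         # start ordered for simplicity
    for _ in range(sweeps):
        sweep(s, T)
    return cell_multiplicity(s).mean()

print(mean_mult(1.5), mean_mult(3.5))           # large below T_c, small above
```

Collecting many equilibrated configurations at a fixed T and applying the factorial-moment code of Section 2 to the resulting cell occupancies would reproduce the kind of analysis carried out in [11].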

The above discussion is all about the second-order PT. If the quark-hadron PT is first order, the GL free-energy density for that phenomenon corresponds to a > 0 and b < 0 in (4), so F must have a c|φ|⁶ term, with c > 0, in order that F be bounded at large |φ|. Thus, F has a maximum located at a value of |φ| between 0 and the point where F has its minimum, which is where the system jumps to in a transition. The strength of that transition is characterized by a dimensionless ratio x of the coefficients a, b, and c [20], which must be small enough for the nonzero minimum to lie below F = 0. When x is 0.6 or larger, the minimum is so deep compared to the bump at the maximum that the PT is very nearly second order. So the interesting region for a first-order PT is x < 0.6, where small x corresponds to a strongly first-order transition. A study of the behavior of F_q has been carried out in [20], where it is found that F-scaling (9) is well satisfied and that the index ν depends on x. It decreases monotonically as x increases, approaching asymptotically the second-order value of about 1.3 at large x. Since ν is a measurable quantity, it offers a simple procedure to characterize the quark-hadron PT in the absence of any way to determine the GL parameters or the temperature.
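The shape of the first-order free energy just described is easy to visualize numerically. In the sketch below, written in terms of u = |φ|², the coefficients (a > 0, b < 0, c > 0) are illustrative choices made only so that the nonzero minimum lies below F(0) = 0 with a bump in between.

```python
import numpy as np

# First-order GL free energy in terms of u = |phi|^2 (illustrative coefficients)
a, b, c = 1.0, -2.6, 1.0
u = np.linspace(0.0, 3.0, 3001)
F = a * u + b * u**2 + c * u**3

i_min = F.argmin()                  # the nonzero minimum the system jumps to
i_max = F[:i_min].argmax()          # the bump (barrier) between u = 0 and u_min
print(u[i_max], F[i_max], u[i_min], F[i_min])
```

As b² decreases toward 4ac, the nonzero minimum rises to meet F(0) = 0 and the transition weakens, approaching the second-order situation.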

From the results summarized above for first- and second-order PTs, it is evident that the scaling exponent ν does not provide a distinctive characterization of the nature of the PT, except when its value is very nearly 1.3. If ν is larger than 1.3, it could mean that the quark-hadron system hadronizes at a T lower than T_c in second order or is supercooled in first order. In the latter case, the system persists for some time in the deconfined phase despite the existence of a lower minimum in the GL free energy. On the other hand, it is also possible that other physical processes are at play that mask the expectations from the equilibrium dynamics of PT. Thus, we need other methods of analysis to exhibit a wider scope of the properties of hadronization.

If a system exhibits intermittency as in (3), or even if there is no strict power law but F_q increases with decreasing δ, then the existence of large F_q implies large spatial fluctuations at small bin size δ. To amplify the behavior at large F_q, it is natural to consider positive powers of the event moments before vertical averaging. If we denote the quantity inside ⟨⋯⟩_v in (2) by F_q^e for an event e, its fluctuation from the event average can be quantified by the double moment [21, 22]

C_{p,q}(δ) = ⟨Φ_q^p⟩_v, Φ_q = F_q^e / ⟨F_q^e⟩_v, (12)

where p need not be an integer. If C_{p,q} has a power-law behavior in δ,

C_{p,q}(δ) ∝ δ^(−ψ_q(p)), (13)

the phenomenon is referred to as erraticity. If, furthermore, the exponent ψ_q(p) depends linearly on p in an interval of p above 1, then the slope,

μ_q = dψ_q(p)/dp, (14)

is referred to as the erraticity index, which is independent of p and δ. It was found that this way of measuring fluctuations can be applied to common problems in classical chaos and that μ_q is as effective as the Lyapunov exponent [23]. Multiparticle production in hadron-proton collisions in fixed-target experiments at 250 GeV has been studied by use of the erraticity analysis; it is found that at such low collision energies the erraticity behavior is dominated by purely statistical fluctuations [24]. For light-ion collisions and emulsion data at low energies, the erraticity index has been determined, with results that cannot be used to shed any light on PT [25–27]. More recently, it has been considered in model studies at LHC energies [12, 19]. Now, it is eminently suited for the real data at 2.76 TeV.
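The double moment (12) can be formed from the same per-event factorial moments as before. The following self-contained sketch (with toy uniform events again, an assumption for illustration) evaluates C_{p,q} at a fixed bin size; mapping out its δ dependence and the slope in p would give ψ_q(p) and μ_q of (13) and (14).

```python
import numpy as np

rng = np.random.default_rng(2)

def Fq_event(eta, phi, M, q):
    """Horizontally averaged factorial moment F_q^e for one event (M x M bins)."""
    n, _, _ = np.histogram2d(eta, phi, bins=M, range=[[0, 1], [0, 1]])
    n = n.ravel()
    return np.mean([np.prod(m - np.arange(q)) for m in n]) / np.mean(n) ** q

def C_pq(events, M, q, p):
    """Double moment C_{p,q} = <Phi_q^p> with Phi_q = F_q^e / <F_q^e>, Eq. (12)."""
    F = np.array([Fq_event(e[0], e[1], M, q) for e in events])
    return np.mean((F / F.mean()) ** p)

events = [(rng.random(300), rng.random(300)) for _ in range(100)]
for p in (1.0, 1.5, 2.0):
    print(p, C_pq(events, M=8, q=2, p=p))   # C_{1,q} = 1 by construction
```

Note that C_{p,q} ≥ 1 for p > 1 by convexity, with the excess over 1 measuring the event-to-event fluctuation of the spatial pattern.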

In order to have a hadronization scheme that simulates the critical transition, an attempt is made in [12] to build in two opposing subprocesses at each time step: one is the contraction of dense regions, which mimics the confinement attraction, and the other is randomization, accomplished by redistributing the q and q̄ in the dilute bins randomly throughout the (η, φ) space. The former represents ordered motion and the latter disordered motion. Pionization occurs whenever a q and a q̄ are within an assigned small distance of each other. Applying the erraticity analysis to the multiplicity fluctuations generated in that model results in an erraticity index for the critical case that is half of that for the no-contraction case [12].

There is nothing special about p ≈ 1. If there is a region in which the dependence of ψ_q(p) on p is linear, we can define the slope in that region as

μ_q = Δψ_q(p)/Δp, (15)

which then becomes an index that is independent of the two parameters used in the analysis and can be regarded as a numerical quantity that characterizes erraticity.

3. Transverse-Momentum Spectra at RHIC and LHC

In the previous section we considered the multiplicity fluctuations in the (η, φ) space in small p_T intervals at low p_T in search of signatures of the quark-hadron PT. Now, we consider the orthogonal problem of p_T distributions over wide ranges of p_T, averaged over η and φ. We summarize here only the approach based on the recombination model (RM), in which hadronization is treated as the recombination of thermal and shower partons [28]. It is the effect of the increase of shower partons, as the energy is increased from RHIC to LHC, that we will discuss in the next section.

Since our aim is primarily focused on the effects of minijets on the signature for PT, it is sufficient to restrict our discussion here to the production of pions at midrapidity, η ≈ 0, averaged over φ for central collisions. Thus, the p_T distribution appears as a 1D problem with

p dN_π/dp = ∫ (dp₁/p₁) (dp₂/p₂) F_{q q̄}(p₁, p₂) R_π(p₁, p₂, p), (16)

where p₁ and p₂ are the transverse momenta of a quark and an antiquark that recombine to form a pion of transverse momentum p. R_π is the recombination function (RF), which includes a δ-function to conserve momentum. F_{q q̄}(p₁, p₂) is the invariant distribution of q and q̄ at the late time when the quark-hadron transition takes place. That transition is not referred to as PT because the RM is a microscopic description of hadron formation without collective dynamics, so it does not contain the macroscopic features discussed in the preceding section. As mentioned earlier, this is the orthogonal problem.

The kernel of attention in the RM is the determination of F_{q q̄}(p₁, p₂). Without following through the evolution of the system as in hydrodynamics (which involves assumptions that are inconsistent with our premise that minijets can be important), F_{q q̄} cannot be computed from the initial configuration at early time. The parton distributions due to hard and semihard jets can, however, be calculated. The shower partons (S) are the fragmentation products of the hard and semihard partons that emerge from the surface after their momenta are degraded by the medium they traverse. Without going into the details that can be found in [9, 28–31], let it simply be stated that the shower-parton distribution, denoted by S(p), can be calculated subject to a few adjustable parameters that are mainly associated with the momentum-degradation process over a wide range of hard- and semihard-parton momenta, extending beyond the validity of pQCD on the low-virtuality side. S(p) depends on the collision energy, but for any given energy it can be determined by fitting the pion distribution at high p_T using (16) and the identification of F_{q q̄} with SS recombination in a single jet. That is, if F_i(k) denotes the distribution of a hard or semihard parton of type i with momentum k upon emerging from the medium and S_i(z) is the unintegrated distribution of a shower parton fragmented in vacuum from parton i with momentum fraction z, then the integrated distribution is

S(p) = Σ_i ∫ (dk/k) F_i(k) S_i(p/k), (17)

and the two-shower contribution to F_{q q̄} in (16), with both shower partons coming from the same jet, is

F^{SS}_{q q̄}(p₁, p₂) = Σ_i ∫ (dk/k) F_i(k) {S_i(p₁/k) S_i(p₂/(k − p₁))}, (18)

where the curly brackets denote symmetrization of the momentum fractions; it will be abbreviated by the symbolic notation SS. It is the degradation parameters in F_i(k) that are adjusted to fit the pion distribution at high p_T, where SS dominates. Once F_i(k) is determined phenomenologically in this way, S(p) is known through (17) even at low p_T, where the thermal partons in the bulk medium cannot be neglected.

The notion of thermal partons deserves extensive discussion, especially since thermal interaction in the context of the PT discussed in the previous section has a slightly different meaning from what is considered here. Postponing that discussion until the next section, let us proceed with our brief summary of how the pion distribution is calculated in the RM. Minijets are clusters of particles that stand above the background, which consists of all the particles that the QGP transforms into. At the parton level just before hadronization, the former are the shower partons, while the latter are called thermal partons, since they are the constituents of the equilibrated medium. With T(p) denoting the distribution of the thermal partons, the two-parton distribution F_{q q̄} in (16) is a sum of all possible pairings of T and S and can therefore be represented symbolically as

F_{q q̄} = TT + TS + SS. (19)

If minijets are rarely produced, then TT recombination is all that is needed to reproduce the data on the pion spectrum.
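The structure of (16)–(19) can be illustrated with a schematic 1D computation. Here the δ-function in the pion RF has been used to reduce (16) to a single integral, and the thermal and shower distributions are illustrative stand-ins (their shapes and normalizations are assumptions, not the fitted forms of [9]); the point is only that TT dominates at low p_T while SS takes over at high p_T.

```python
import numpy as np

def Tdist(p):
    """Thermal distribution, exponential in p as in (20) (illustrative C and T)."""
    return 20.0 * p * np.exp(-p / 0.3)

def Sdist(p):
    """Shower distribution with a power-law-like tail (assumed shape)."""
    return 1.0 * p / (1.0 + p / 1.5) ** 6

def recombine(f1, f2, p, n=400):
    """1D remnant of (16) with the RF taken as (p1*p2/p^2)*delta(p1 + p2 - p)."""
    p1 = np.linspace(0.0, p, n)
    integrand = f1(p1) * f2(p - p1) * p1 * (p - p1) / p**2
    return integrand.sum() * (p1[1] - p1[0])

def pion(p):
    """TT, TS, SS components of the pion spectrum, as in (19)."""
    return (recombine(Tdist, Tdist, p),
            2.0 * recombine(Tdist, Sdist, p),
            recombine(Sdist, Sdist, p))

for p in (0.5, 2.0, 8.0):
    print(p, pion(p))
```

With any exponential thermal form and power-law shower tail, the exponential product TT falls away at high p, leaving SS (and, in between, TS) dominant, which is the qualitative hierarchy described in the text.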

At RHIC, the effects of S(p) at low p_T are negligible even though SS is dominant at high p_T, so TT recombination alone is sufficient to account for the low-p_T behavior of the pion spectrum, which is exponential. Thus, by adopting the form [29]

T(p₁) = C p₁ e^(−p₁/T), (20)

the pion distribution can be well fitted at low p_T with fixed values of the normalization C and the inverse slope T (21), phenomenological quantities determined from central Au-Au collisions at 200 GeV. The S(p) distribution for the same collisions, when calculated according to (17), is lower than T(p) below a crossover value of p_T and higher above it. Both T(p) and S(p) are shown in [9].

At LHC, there are many more shower partons because of the copious production of minijets at the higher collision energy. The ratio of the shower-parton distributions at LHC and RHIC is shown in [9], increasing from a value of ~7 at low p_T to ~30 at a few GeV/c (and more at higher p_T) for Pb-Pb collisions at 2.76 TeV. Such a large increase of S cannot be accompanied by a similar increase of T, or even a milder one, because the inclusive charged-particle multiplicity per participant pair at midrapidity increases by only a factor of ~2. In fact, the pion spectrum at LHC can be well reproduced over the whole measured range of p_T with T(p) remaining the same as at RHIC, specified by (20) and (21) [9]. The ratio S/T increases from ~1 at low p_T to ~10² at several GeV/c. With S being approximately equal in magnitude to T already at low p_T, it becomes necessary to ask some questions about the effect of shower partons on observables related to the quark-hadron PT.

4. Thermal and Shower Partons

In Section 2, we discussed multiplicity fluctuations in the (η, φ) space with the aim of finding possible signals of the quark-hadron PT. In Section 3, we reviewed the orthogonal problem of determining the p_T distribution for the purpose of exposing the underlying quark distributions of the two types, thermal and shower. At LHC, the two problems intersect at low p_T, since TS recombination dominates there down to quite low p_T. At RHIC, TS dominance does not occur until p_T ≈ 3 GeV/c, so there is a wider region below 3 GeV/c in which to examine the fluctuation patterns in (η, φ) with minimal effects of minijets. However, small p_T cuts still need to be made to minimize the overlap of spatial patterns, and at 200 GeV there are not enough produced particles to populate the (η, φ) space when the p_T interval is as narrow as 0.1 GeV/c. For that reason, LHC offers for the first time a realistic opportunity to study the quark-hadron PT. Hereafter, our attention will be focused only on the Pb-Pb collisions at LHC in the low-p_T region, although the ubiquitous shower partons are understood to arise from minijets that are due to semihard partons with momenta of a few GeV/c or more. It should be recognized, however, that the shower-parton distribution at low p_T could not have been determined without studying the pion distribution at high p_T, as discussed in Section 3.

The thermal partons are not derived from hydrodynamics, which would be inadequate in the presence of abundant shower partons if the parameters in it are determined by neglecting minijets. In the RM, the parametrization of T(p) is adjusted to fit the RHIC data at low p_T. It is the distribution of partons (quarks and antiquarks, with gluons converted to q q̄ pairs, ready for the confinement transition) at the end of the evolution of the medium. At midrapidity in central collisions, for which there is no azimuthal anisotropy, T(p) is the average transverse-momentum distribution that contains no information about the system outside the narrow η interval at midrapidity. Hence, it is insensitive to the edges of the rapidity plateau in the fragmentation regions of the leading particles. Indeed, it should be independent of the collision energy (provided that it is high enough to have a midrapidity region well separated from the edges) because the transition from deconfined quarks to confined quarks in hadrons occurs at such a late stage of the medium's life span that the local properties of the quarks retain no memory of the initial temperature or configuration [9]. That is analogous to water vapor condensing at 100°C independent of how hot it has previously been. Thus, (20) and (21) have been used for T(p) for both RHIC and LHC collisions, resulting in good agreement with the data on pion and proton spectra over a wide range of p_T; in fact, using higher values of C and T than those in (21) for LHC is found to yield too many hadrons at all p_T [9].

Despite the phenomenological success, the universality of T(p) stimulates further inquiry into what constitutes the thermal partons. In the RM, they are regarded as whatever partons there are in the neighborhood of a shower parton, so that a recombination process can take place between S and any of its near neighbors. Nothing is asked about the origin of that neighbor, other than that it is not another S. The shower parton S, whose distribution is S(p), is the in-vacuum fragmentation product of a hard or semihard parton that has emerged from the medium. While in the medium, the hard or semihard parton can undergo scattering, radiation, and other energy-losing processes. The energy lost to the medium enhances the disordered motion of the soft partons in it. Those are called the enhanced thermal partons in the neighborhood of the trajectory of the semihard parton in a series of studies of the ridge and azimuthal anisotropy as an alternative to the hydrodynamical explanation [31–34]. Being phenomenologically parametrized, T(p) includes the enhanced thermal partons, which are difficult to calculate. They evolve as more radiated gluons interact with more medium partons until, at late time, the distinction between the original soft partons and the in-medium shower partons becomes meaningless. They reach local equilibrium and are referred to as thermal partons in a generic sense. However, the choice of the word "thermal" does not imply that they are precluded from collective interaction when the local temperature gets down to T_c.

It is reasonable to ask why, if the shower-parton contribution increases with collision energy, the thermal distribution does not. In (17), the part that depends on the collision energy is the hard-parton distribution at the point of creation, before the parton's momentum is degraded and it evolves into shower partons [9, 30, 35]. Thus, the shower contribution increases because more minijets are created at higher energies. Although harder jets are also more numerous at higher energies, their production rates are orders of magnitude lower than those of the minijets, so their effects on the spectra at low p_T are negligible. With the increase of minijets, the enhanced thermal partons are also increased. However, the important point about the thermal distribution is that if the medium is denser as a result of the additional in-medium shower partons, the quarks in it would remain deconfined until further expansion reduces the density enough for them to become ready for recombination. The thermal distribution is always the distribution at the end of the expansion phase and has the same inverse slope T from RHIC to LHC, no matter how many more partons are produced initially or during expansion. The formalism of the RM is not precise at very low momenta, since the pion spectrum there can contain contributions from resonance decays that are not accounted for. Thus, the RM is not expected to be accurate at very low p_T, which is the region most likely to be where more thermal partons are enhanced at higher energy, owing to the prolongation of the momentum-degradation processes in the extended expansion phase. Above that region, the RM has succeeded in reproducing not only the pion distribution at LHC, as mentioned at the end of Section 3, but also the spectra of the proton (with the same T) and of strange particles (with a slightly higher T for strange quarks) at all p_T where particle species are identified [9].

5. Effects of Minijets on Multiplicity Fluctuations

We have now arrived at the point where enough has been presented to allow a meaningful discussion of the interplay between fluctuation analysis and minijet effects. The relevant region in momentum space is the low-p_T region at midrapidity. That region is lower in p_T than what the RM is formulated to treat reliably, but the detailed shape of the p_T distribution is not of interest in the fluctuation analysis. What we know from the preceding sections is that at higher p_T TS recombination is dominant at LHC and that the shower partons are developed outside the medium, carrying little information about the collective behavior of the medium. At low p_T, the thermal partons contain the in-medium shower partons that are equilibrated with the soft partons in the expanding medium, so that the two are not distinguishable. Though they are named thermal, those partons are not just in random disordered motion but can participate in ordered near-neighbor confinement interaction before hadronization.

To find the signal for quark-hadron PT, the low-p_T region should be divided into narrow intervals, for example, 0.1 GeV/c wide, in order to minimize the overlap of spatial patterns in (η, φ) generated in different time intervals, on the assumption that there is a close correlation between p_T and hadronization time, that is, larger p_T at earlier times and smaller p_T at later times. If that assumption turns out to be unrealistic, there is another method of analysis that will be discussed in the next section. An easy way to test the assumption is to check whether the signature, such as the scaling exponent, is essentially the same in a narrow 0.1 GeV/c interval as in a larger p_T region without partition. If not, then the smaller interval is likely to reveal the finer structure, at the cost of larger errors.

Since the method of analysis described in Section 2 is aimed at quantifying the nature of clusters of particles produced in heavy-ion collisions, the effects of minijets are of concern. High-p_T jets are known to be characterized by clusters of particles that are identified in experimental searches as towers in lego plots. Those are jets at very high p_T, so rarely produced that their effects on the fluctuation analysis are totally negligible. Minijets, with transverse momenta of a few GeV, are more copiously produced in heavy-ion collisions but are not extensively investigated by the LHC experiments. Each such minijet fragments into a cluster of a small number of particles which, if concentrated within a small cone in (η, φ), could contaminate the clustering effect due to critical behavior. However, a cluster of five or six such particles has a spread of p_T values, most of which are at low p_T. Since the p_T interval in which the fluctuation analysis is done should be small, for example, 0.1 GeV/c, the average multiplicity per minijet in such a small window is much less than 1. The addition of one particle of minijet origin to the particles from the bulk medium does not contribute to cluster formation. Although an event may have numerous minijets, they are uncorrelated and are therefore randomly distributed throughout the (η, φ) space. Thus, again, there is no reason to expect ordered clustering from all the minijets within a small window. Besides, Poissonian fluctuation is filtered out by the factorial moments.
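The claim that purely Poissonian fluctuations are filtered out can be checked numerically. The sketch below is illustrative only: the function names are ours, and the normalized moment F_q = ⟨n(n−1)⋯(n−q+1)⟩/⟨n⟩^q follows the standard textbook definition rather than being a transcription of (3). For a pure Poisson source, F_q = 1 for all q, so random noise contributes no intermittency signal:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm: sample a Poisson-distributed count with mean lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def normalized_factorial_moment(counts, q):
    """F_q = <n(n-1)...(n-q+1)> / <n>^q; equals 1 for a pure Poisson source."""
    num = sum(math.prod(n - i for i in range(q)) for n in counts) / len(counts)
    den = (sum(counts) / len(counts)) ** q
    return num / den

rng = random.Random(42)
counts = [poisson_sample(2.0, rng) for _ in range(20000)]
f2 = normalized_factorial_moment(counts, 2)
print(round(f2, 2))  # close to 1: Poisson noise alone gives no clustering signal
```

Any genuine clustering would push F_2 above unity, which is what the factorial-moment analysis is designed to detect.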

A concrete way to investigate the problem is to use an event generator that can reproduce the hadronic spectra and to examine the multiplicity fluctuations in narrow windows. That is exactly what Gupta and Sharma have done in [19] using the AMPT model [4, 5]. That model has no collective dynamics to simulate critical behavior but presumably has minijets. They have found negative intermittency, corresponding to the intermittency index in (3) being negative. Thus, AMPT does not generate any clusters that can be measured by factorial moments in small bins. A dedicated event generator designed to produce minijets would be a good candidate for a trial here to elucidate the effect of minijets on multiplicity fluctuations.

6. Signal for Critical Behavior by Void Analysis

In the preceding section, we have seen how the clustering of particles from minijets does not affect the fluctuation analysis by factorial moments, provided that small p_T windows are used, with the consequence that each cluster is broken up into several uncorrelated intervals. It is also hoped that making the small cuts prevents the overlap of spatial patterns of hadronic clusters created at different times. That hope is based on an approximate one-to-one correspondence between p_T and emission time. However, such a simple relationship may not be realistic. One can imagine that there is a Gaussian distribution in p_T at any given emission time and that the mean may decrease with increasing time while the width can be broader than the windows. Then, in each interval the spatial patterns can receive contributions from several neighboring emission times. In such a scenario, it is no longer persuasive to argue that the pattern of hadrons in (η, φ) in a p_T window of each event in a heavy-ion collision corresponds to a 2D configuration simulated in the Ising model. When there is a quark-hadron PT, it is not clear whether the factorial moments calculated according to (2) using real data can be interpreted by the same calculation in the Ising model, as done in [11].

A way to circumvent the complication discussed above is to eliminate the need to count the number of hadrons in a bin, which has to be at least q in order to contribute to the factorial moment of order q. If for a fixed p_T interval the number of bins M is large enough that on average the total number of particles detected in that interval is, for example, about half of M, then a random distribution of those particles in the (η, φ) space would generate a random array of occupied and empty bins, with more empty than occupied ones because more than one particle can be in a common bin. If collective interaction is at play, as in a PT, then the clustering effect would result in even more empty bins. Moreover, a region of connected empty bins can vary in size, both within one event and from event to event. Let us call such a region without hadrons a void, to be defined more precisely below. Focusing on the size variation of voids allows us to ignore how many particles are piled up in the nonempty bins. In that way, the effects of minijets are minimized. Moreover, it is appealing to make use of the physical sense that, if the hadronization time is divided into many steps, then a void in one time step is more likely to be followed by some hadrons emitted in that region in the next step. That is because hadrons are not emitted uniformly from the cylindrical surface of the plasma but in patches in a given time interval, and, as the medium expands, the quarks below the surface in the void region are subject to stronger confinement forces and thus are more likely to hadronize there in the next time interval. That kind of physics can be built into a model and be tested in the void analysis [36, 37].

On the Ising lattice, there are spins pointing up or down at each site. Let the hadron density in a cell (containing, e.g., 4 × 4 sites) be defined in proportion to the net spin of the cell if that net spin is positive, and be defined to be zero if it is negative. A bin containing several cells is defined to be empty if its average density is below a threshold; otherwise, it is occupied. A void is defined to be a region consisting of several empty bins that are connected by at least one common side between adjacent bins. It is the size of a void that is of interest, not the densities of the occupied bins. The reason is that at the critical point the void regions can have all sizes. The presence of a few particles here and there due to minijets has a negligible effect on the void sizes, even if the threshold is low, let alone when it is raised. The point is to suppress small fluctuations and concentrate on large patches of contiguous empty bins. An analogy is the study of the topography of a rough terrain by flooding it with water and measuring only the areas of the submerged regions at different water levels.
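The definition of a void translates directly into a connected-component search with 4-connectivity, in which bins sharing only a corner are not connected. A minimal sketch (the toy grid and the function name are ours):

```python
from collections import deque

def find_voids(empty):
    """Return the sizes of voids: maximal 4-connected regions of empty bins.

    `empty` is a 2D list of booleans, True where the bin's average density
    is below threshold.  Bins touching only at a corner are NOT connected.
    """
    rows, cols = len(empty), len(empty[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if not empty[r][c] or seen[r][c]:
                continue
            # breadth-first flood fill over side-sharing neighbors only
            queue, size = deque([(r, c)]), 0
            seen[r][c] = True
            while queue:
                i, j = queue.popleft()
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols \
                            and empty[ni][nj] and not seen[ni][nj]:
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            sizes.append(size)
    return sizes

grid = [[True,  True,  False],
        [False, True,  False],
        [False, False, True]]      # bottom-right bin touches only at a corner
print(sorted(find_voids(grid)))    # [1, 3]: corner contact does not merge voids
```

The same routine applied at different density thresholds realizes the flooded-terrain analogy: raising the threshold is raising the water level.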

Using V_k to denote the total number of empty bins in the kth void and defining x_k = V_k/M to be the fraction of bins in the (η, φ) space occupied by that void, where M is the total number of bins, let us define the void moments by averaging powers of x_k over the m voids of a configuration, where m is the total number of voids. We can consider two averages over all configurations (or events): one is the simple mean of the void moments, and the other is a normalized fluctuation measure defined in [36]. They have been simulated in the Ising model for various thresholds in [36] and found to have scaling behaviors in M, given in (24). Moreover, the corresponding scaling exponents exhibit linear dependences on the order q, as expressed in (25). The slopes vary by about ±10% as functions of the threshold, depending on q. They converge to approximately constant values, independent of q, when the threshold is on the low end, at about 8% of the maximum density. Those converged values provide the numerical characterization of critical behavior by void analysis in the Ising model [36].
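On a natural reading of the definitions above — with x_k = V_k/M, the qth moment averaged over the m voids of one configuration, and then a further average over events — the moment computation is only a few lines. The sketch below illustrates that reading; it is not a transcription of the formulas of [36], and the function names are ours:

```python
def void_moment(void_sizes, total_bins, q):
    """g_q for one configuration: average of (V_k / M)**q over its m voids."""
    m = len(void_sizes)
    if m == 0:
        return 0.0   # no voids in this configuration
    return sum((v / total_bins) ** q for v in void_sizes) / m

def event_average_moment(configs, total_bins, q):
    """G_q: the per-configuration moment averaged over all events."""
    return sum(void_moment(sizes, total_bins, q) for sizes in configs) / len(configs)

# Toy input: void sizes (numbers of connected empty bins) for two events
configs = [[3, 1], [2, 2, 4]]
print(event_average_moment(configs, total_bins=16, q=2))   # 0.025390625
```

In an actual analysis, the void sizes per event would come from the connected-empty-bin search, and the scaling test consists of repeating this computation while the number of bins M is varied.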

It is possible to go closer to the quark-hadron PT in heavy-ion collisions by generating temporally integrated configurations, for which a range of time steps in the Ising simulation is allowed to populate a p_T window according to a Gaussian distribution in p_T [37]. A preference for void regions to be occupied in successive steps can also be introduced. The result again exhibits the scaling behavior of (24) for a wide range of M. The slopes in (25) turn out also to be very similar to those given above; the values are quoted in (27). At a critical point there is so much fluctuation that many bins are empty. The scaling behavior found for the voids is evidently a property of the second-order PT that is independent of the details of the time evolution.

The void analysis can readily be applied to the real data at LHC. Dividing the (η, φ) space into a square lattice of bins and counting only the connected empty bins whose hadron densities are lower than a threshold (so as to calculate the void moments) is easier to perform than calculating the factorial moments, since there is no horizontal average to determine first. It is important, however, to be sure that the empty bins are connected to form a void: sharing only a corner is not a connection. The scaling behaviors of the void moments have been found in analyses of emulsion data at low energy [38], where the values of the slopes are lower than those given in (27), which is not a surprise since a PT is not expected there. We expect the void analysis to be an effective tool to study the critical behavior at LHC, if it exists.

7. Conclusion

We have reviewed two approaches to the diagnostics of critical behavior in heavy-ion collisions and considered the effects of minijets. The first approach is to analyze multiplicity fluctuations; the second is the complementary analysis of voids where no hadrons are formed. Scaling behaviors are found in both approaches, based on theoretical calculations in accordance with the Ginzburg-Landau theory of phase transition and on Ising model simulations. The lack of models that specifically address the phase-transition problem in heavy-ion collisions is an indication of the difficulty of incorporating collective dynamics in a quark-gluon system that is dilute and ready for confinement. Calling the process freeze-out avoids confronting the complexity of the problem; that may be necessary to get quickly to the hadronic distributions that are observed, but it does not yield insight into a very different realm of physics that has not been explored experimentally. There is, therefore, an urgent need for the existing data from LHC to be analyzed in the manner suggested here, in the hope of finding some preliminary signs of the quark-hadron PT that can stimulate a movement toward developing in-depth theories focused on criticality in nuclear collisions at high energy. Such theories may suggest new observables that can invigorate dedicated experiments to reveal new physics.

Minijets and maxijets may initially seem like detractors in the search for a genuine signature of clusters due to critical behavior. However, as we have seen, so long as p_T is divided into small intervals, the fragments of jets will not affect either the multiplicity or the void analysis in those independent intervals. That can be tested simply by applying the analyses to the data generated by any of the existing codes on heavy-ion collisions, or even on pp collisions.

The conventional wisdom in the nuclear community is that the quark-gluon system created in nuclear collisions at high energies, from RHIC to LHC, behaves like a fluid that flows according to hydrodynamics. The hydrodynamical treatment ignores the effects of the minijets that are abundantly produced at LHC. The hard and semihard partons that are responsible for those jets lose energy while traversing the medium, which takes some time, and the lost energy takes more time to thermalize the bulk partons. Thus, the assumption of rapid equilibration in the hydrodynamical formalism is not realistic, to say nothing of the neglect of the shower partons that dominate over the thermal partons. Does that mean that there is no thermal system to which one can apply the usual description of critical phenomena, such as that of Ginzburg-Landau? This is where the results from analyses of the present LHC data can provide some answers. If there are signs of the scaling behavior described here, they are evidence in support of the quark-hadron PT, which in turn suggests that by the time of hadronization the quark-gluon system has reached local thermal equilibrium, with temperature being a good characterization even though it is not directly measurable. The stage would then be set for further investigation of how a system with a turbulent beginning relaxes to a thermal system at the end, one that includes the shower partons as relics of hard processes at early times. More pertinent to the problems discussed here is not so much how the system gets there but what the detailed nature of the PT is, when quarks become bound to form hadrons in the semiglobal perspective of a collective phenomenon.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work relies heavily on previous collaborations with Z. Cao, C. B. Yang, Q.-H. Zhang, and L. Zhu, whose participation in this long journey is gratefully acknowledged.