Advances in High Energy Physics
Volume 2019, Article ID 3036904, 68 pages
Review Article

A Guide to Light-Cone PDFs from Lattice QCD: An Overview of Approaches, Techniques, and Results

Krzysztof Cichy1 and Martha Constantinou2

1Faculty of Physics, Adam Mickiewicz University, Umultowska 85, 61-614 Poznań, Poland
2Department of Physics, Temple University, Philadelphia, PA 19122-1801, USA

Correspondence should be addressed to Martha Constantinou; marthac@temple.edu

Received 17 November 2018; Accepted 15 January 2019; Published 2 June 2019

Guest Editor: Alexei Prokudin

Copyright © 2019 Krzysztof Cichy and Martha Constantinou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The publication of this article was funded by SCOAP3.


Within the theory of Quantum Chromodynamics (QCD), the rich structure of hadrons can be quantitatively characterized, among other ways, using a basis of universal nonperturbative functions: parton distribution functions (PDFs), generalized parton distributions (GPDs), transverse momentum dependent parton distributions (TMDs), and distribution amplitudes (DAs). For more than half a century, there has been a joint experimental and theoretical effort to obtain these partonic functions. However, the complexity of the strong interactions has imposed severe limitations, and first-principles information on these distributions was extracted mostly from their moments computed in Lattice QCD. Recently, breakthrough ideas changed the landscape and several approaches were proposed to access the distributions themselves on the lattice. In this paper, we review in considerable detail approaches directly related to partonic distributions. We highlight a recent idea proposed by X. Ji on extracting quasidistributions that spawned renewed interest in the whole field and sparked the largest number of numerical studies within Lattice QCD. We discuss theoretical and practical developments, including challenges that had to be overcome, with some yet to be handled. We also review numerical results, including a discussion based on evolving understanding of the underlying concepts and the theoretical and practical progress. Particular attention is given to important aspects that validated the quasidistribution approach, such as renormalization, matching to light-cone distributions, and lattice techniques. In addition to a thorough discussion of quasidistributions, we consider other approaches: hadronic tensor, auxiliary quark methods, pseudodistributions, OPE without OPE, and good lattice cross-sections. In the last part of the paper, we provide a summary and prospects of the field, with emphasis on the necessary conditions to obtain results with controlled uncertainties.

1. Introduction

Among the frontiers of nuclear and particle physics is the investigation of the structure of hadrons, the building blocks of visible matter. Hadrons consist of quarks and gluons (together called partons), which are governed by one of the four fundamental forces of nature, the strong force. The latter is described by the theory of Quantum Chromodynamics (QCD). Understanding QCD can have great impact on many aspects of science, from subnuclear interactions to astrophysics, and, thus, a quantitative description is imperative. However, this is a very challenging task, as QCD is a highly nonlinear theory. This led to the development of phenomenological tools such as models, which have provided important input on hadron structure. However, studies from first principles are desirable. An ideal ab initio formulation is Lattice QCD, a space-time discretization of the theory that allows the study of the properties of fundamental particles numerically, starting from the original QCD Lagrangian.

Despite the extensive experimental program that has been developed and has evolved since the first exploration of the structure of the proton [1, 2], a deep understanding of the hadrons’ internal dynamics is yet to be achieved. Hadrons have an immensely rich composition due to the complexity of the strong interactions that, for example, forces the partons to exist only inside hadrons (color confinement), making the extraction of information from experiments very difficult.

Understanding the internal properties of hadrons requires the development of a set of appropriate quantities that can be accessed both experimentally and theoretically. QCD factorization provides such a formalism and can relate measurements from different processes to parton distributions. These are nonperturbative quantities describing the parton dynamics within a hadron and have the advantage of being universal, that is, they do not depend on the process used for their extraction. The comprehensive study of parton distributions can provide a wealth of information on hadrons, in terms of variables defined in the longitudinal direction (with respect to the hadron momentum) in momentum space, and in two transverse directions. The latter can be defined either in position or in momentum space. These variables are as follows: the longitudinal momentum fraction x carried by the parton, the longitudinal momentum fraction ξ obtained via the longitudinal momentum transferred to the hadron, and the momentum k_T transverse to the hadron’s direction of motion. Parton distributions can be classified into three categories based on their dependence on x, ξ, k_T, and the momentum transferred to the hadron, t, as described below.

Parton distribution functions (PDFs) are one-dimensional objects and represent the number density of partons with longitudinal momentum fraction x while the hadron is moving with a large momentum.

Generalized parton distributions (GPDs) [3–7] depend on the longitudinal momentum fractions x and ξ and, in addition, on the momentum transferred to the parent hadron, t. They provide a partial description of the three-dimensional structure.

Transverse momentum dependent parton distribution functions (TMDs) [8–12] describe the parton distribution in terms of the longitudinal momentum fraction x and the transverse momentum k_T. They complement the three-dimensional picture of a hadron from GPDs.

As is clear from the above classification, PDFs, GPDs, and TMDs provide complementary information on parton distributions, and all of them are necessary to map out the three-dimensional structure of hadrons in spatial and momentum coordinates. Experimentally, these are accessed from different processes, with PDFs being measured in inclusive or semi-inclusive processes such as deep-inelastic scattering (DIS) and semi-inclusive DIS (SIDIS); see, e.g., [13] for a review of DIS. GPDs are accessed in exclusive scattering processes such as Deeply Virtual Compton Scattering (DVCS) [14], and TMDs in hard processes in SIDIS [10, 11]. Most of the knowledge on hadron structure is obtained from DIS and SIDIS data on PDFs, while the GPDs and TMDs are less well known. More recently, data have emerged from DVCS and Deeply Virtual Meson Production (DVMP) [15]. This includes measurements from HERMES, COMPASS, RHIC, Belle and BaBar, E906/SeaQuest, and the 12 GeV upgrade at JLab. A future Electron-Ion Collider (EIC), strongly endorsed by the National Academies of Sciences, Engineering, and Medicine [16], will be able to provide accurate data related to parton distributions and will dramatically advance our understanding of hadron tomography. Together with the experimental efforts, theoretical advances are imperative in order to obtain a complete picture of hadrons. First, to interpret experimental data, global QCD analyses [17–26] are necessary, which utilize the QCD factorization formalism and combine experimental data with theoretical calculations in perturbative QCD. Note that these are beyond the scope of this review and we refer the interested Reader to the above references and a recent community white paper [27]. Second, theoretical studies are needed to complement the experimental program and, in certain cases, provide valuable input. This is achieved using models of QCD and, more importantly, calculations from first principles.
Model calculations have evolved and constitute an important aspect of our understanding of parton structure. An example of such a model is the diquark spectator model [28] that has been used for studies of parton distributions (for more details, see Section 4). The main focus of the models discussed in Section 4 is the one-dimensional hadron structure (x-dependence of PDFs), but more recently the interest has been extended to the development of techniques that are also applicable to GPDs and TMDs (some aspects are discussed in this review). Let us note that there have been studies related to TMDs from the lattice, and there is intense interest in that direction (see, e.g., [29–31], and references therein).

Despite the tremendous progress in both the global analyses and the models of QCD, parton distributions are not fully known, due to several limitations: global analysis techniques are not uniquely defined [22]; certain kinematic regions are difficult to access, for instance, the very small-x region [32–34]; and models cannot capture the full QCD dynamics. Hence, an ab initio calculation within Lattice QCD is crucial, and synergy with global fits and model calculations can lead to progress in the extraction of distribution functions.

Lattice QCD provides an ideal formulation to study hadron structure and originates from the full QCD Lagrangian by defining the continuous equations on a discrete Euclidean four-dimensional lattice. This leads to equations with billions of degrees of freedom, and numerical simulations on supercomputers are carried out to obtain physical results. A nonperturbative tool, such as Lattice QCD, is particularly valuable at the hadronic energy scales, where perturbative methods are less reliable, or even fail altogether. Promising calculations from Lattice QCD have been reported for many years, with the calculations of the low-lying hadron spectrum being such an example. More recently, Lattice QCD has provided pioneering results related to hadron structure, addressing, for instance, open questions such as the spin decomposition [35] and the glue spin [36] of the proton. Another example of the advances of numerical simulations within Lattice QCD is the calculation of certain hadronic contributions to the muon g−2, for example, the connected and leading disconnected hadronic light-by-light contributions (see recent reviews of [37, 38]). Direct calculations of distribution functions on a Euclidean lattice have not been feasible due to the time dependence of these quantities. A way around this limitation is the calculation on the lattice of moments of distribution functions (historically for PDFs and GPDs), and the physical PDFs can, in principle, be obtained from the operator product expansion (OPE). Realistically, only the lowest moments of PDFs and GPDs can be computed (see, e.g., [39–44]) due to large gauge noise in high moments, and also unavoidable power-divergent mixing with lower-dimensional operators. The combination of the two prevents a reliable and accurate calculation of moments beyond the second or third, and the reconstruction of the PDFs becomes unrealistic.
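The impossibility of reconstructing a function from its lowest two or three moments can be made concrete with a small numerical toy example. The sketch below (plain Python with NumPy; the test shapes are arbitrary illustrations, not fitted PDFs) constructs two visibly different distributions on (0, 1) that share their lowest three Mellin moments, exploiting the fact that the degree-3 shifted Legendre polynomial is orthogonal to 1, x, and x² on that interval.

```python
import numpy as np

# Midpoint grid on (0, 1) for simple numerical integration
N = 200_000
x = (np.arange(N) + 0.5) / N
dx = 1.0 / N

def moment(q, n):
    """n-th Mellin moment: integral of x^n q(x) over (0, 1), midpoint rule."""
    return np.sum(x**n * q) * dx

# Toy distribution no. 1: a valence-like shape (an arbitrary choice, not a fit)
q1 = x**0.5 * (1 - x)**3

# Shifted Legendre polynomial of degree 3 on [0, 1]: orthogonal to 1, x, x^2,
# so adding it changes no moment below n = 3
p3 = 20 * x**3 - 30 * x**2 + 12 * x - 1

# Toy distribution no. 2: same lowest three moments, visibly different shape
q2 = q1 + 0.5 * p3
```

Here the moments n = 0, 1, 2 of q1 and q2 coincide (to integration accuracy), while the n = 3 moments differ by 0.5/140; with only two or three reliable moments, the x-dependence is therefore badly underdetermined.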

Recent pioneering work of X. Ji [45] has changed the landscape of lattice calculations with a proposal to compute equal-time correlators of momentum-boosted hadrons, the so-called quasidistributions. For large enough momenta, these can be related to the physical (light-cone) distributions via a matching procedure using Large Momentum Effective Theory (LaMET) (see Sections 3.1 and 8). This possibility has opened new avenues for direct calculation of distribution functions from Lattice QCD and first investigations have revealed promising results [46, 47] (see Section 3.2). Despite the encouraging calculations, many theoretical and technical challenges needed to be clarified. One concern was whether the Euclidean quasi-PDFs and Minkowski light-cone PDFs have the same collinear divergence, which underlies the matching programme. In addition, quasi-PDFs are computed from matrix elements of nonlocal operators that include a Wilson line. This results in a novel type of power divergences, and the question of whether these operators are multiplicatively renormalizable remained unanswered for some time. While the theoretical community was addressing such issues, the lattice groups had to overcome technical difficulties related to the calculation of matrix elements of nonlocal operators, including how to obtain reliable results for a fast-moving nucleon, and how to develop a nonperturbative renormalization prescription (see Section 7). For theoretical and technical challenges, see Sections 5-6. Our current understanding of various aspects of quasi-PDFs has improved significantly, and lattice calculations of quasi-PDFs have extended to quantities that are not easily or reliably measured in experiments (see Sections 9-10), such as the transversity PDF [48, 49]. This new era of Lattice QCD can provide high-precision input to experiments and test phenomenological models.

The first studies on Ji’s proposal have appeared for the quark quasi-PDFs of the proton (see Sections 3.2 and 9). Recently, the methodology has been extended to other hadrons, in particular mesonic PDFs and distribution amplitudes (DAs). Progress in this direction is presented in Section 10. Other recent reviews on the x-dependence of PDFs from Lattice QCD calculations can be found in [27, 50, 51]. The quasi-PDFs approach is certainly promising and can be generalized to study gluon quasi-PDFs, quasi-GPDs, and quasi-TMDs. In such investigations, technical difficulties of a different nature arise and must be explored. First studies are presented here. Apart from the quasidistribution approach, we also review other approaches for obtaining the x-dependence of partonic functions, both the theoretical ideas underlying them (see Section 2) and their numerical explorations (Section 11).

The central focus of the review is the studies of the x-dependence of PDFs. We present work that appears in the literature until November 10, 2018 (published, or on the arXiv). The discussion is extended to conference proceedings for recent work that has not been published elsewhere. The presentation is based on chronological order, unless there is a need to include follow-up work by the same group on the topic under discussion. Our main priority is to report on the progress of the field, but also to comment on important aspects of the described material based on theoretical developments that appeared in later publications, or follow-up work. To keep this review at a reasonable length, we present selected aspects of each publication discussed in the main text and we encourage the interested Reader to consult the referred work. Permission for reproduction of the figures has been granted by the Authors and the scientific journals (in case of published work).

The rest of the paper is organized as follows. In Section 2, we introduce methods that have been proposed to access the x-dependence of PDFs from the lattice, which include a method based on the hadronic tensor, auxiliary quark field approaches, quasi- and pseudodistributions, a method based on OPE, and the good lattice cross-sections approach. A major part of this review is dedicated to quasi-PDFs, which are presented in more detail in Section 3, together with preliminary studies within Lattice QCD. The numerical calculations of the early studies have motivated an intense theoretical activity to develop models of quasidistributions, which are presented in Section 4. In Section 5, we focus on theoretical aspects of the approach of quasi-PDFs, that is, whether a Euclidean definition can reproduce the light-cone PDFs, as well as the renormalizability of operators entering the calculations of quark and gluon quasi-PDFs. The lattice techniques for quasi-PDFs and difficulties that one must overcome are summarized in Section 6. Recent developments on the extraction of renormalization functions related to logarithmic and/or power divergences are explained in Section 7, while Section 8 is dedicated to the matching procedure within LaMET. Lattice results on the quark quasi-PDFs for the nucleon are presented in Section 9. The quasi-PDFs approach has been extended to gluon distributions, as well as studies of mesons, as demonstrated in Section 10. In Section 11, we briefly describe results from the alternative approaches presented in Section 2. We close the review with Section 12 that gives a summary and future prospects. We discuss the x-dependence of PDFs and DAs, as well as possibilities to study other quantities, such as GPDs and TMDs. A glossary of abbreviations is given in the Appendix.

2. x-Dependence of PDFs

In this section, we briefly outline different approaches for obtaining the x-dependence of partonic distribution functions, in particular collinear PDFs. We first recall the problem with directly employing the definitions of such functions on the lattice, using the example of unpolarized PDFs. The unpolarized PDF, denoted here by q(x), is defined on the light cone:

q(x) = \int_{-\infty}^{\infty} \frac{d\xi^-}{4\pi}\, e^{-i x P^+ \xi^-}\, \langle P |\, \bar{\psi}(\xi^-)\, \gamma^+\, W(\xi^-, 0)\, \psi(0)\, | P \rangle,    (1)

where |P⟩ is the hadron state with momentum P, in the standard relativistic normalization, the light-cone coordinates are taken as ξ^± = (ξ^0 ± ξ^3)/√2, and W(ξ^-, 0) is the Wilson line connecting the light-cone points ξ^- and 0, while the factorization scale is kept implicit. Such light-cone correlations are not accessible on a Euclidean spacetime, because the light-cone directions shrink to one point at the origin. As discussed in the Introduction, this fact prevented lattice extraction of PDFs for many years, apart from their low moments, reachable via local matrix elements and the operator product expansion (OPE). However, since the number of moments that can be reliably calculated is strongly limited, alternative approaches were sought to yield the full Bjorken-x dependence.

The common feature of all the approaches is that they rely to some extent on the factorization framework. For a lattice observable O that is to be used to extract PDFs, one can generically write the following:

O(x, \mu_R) = \int_{-1}^{1} \frac{dy}{|y|}\, C\!\left(\frac{x}{y}, \mu_R, \mu_F\right) q(y, \mu_F) + \ldots,    (2)

where C is a perturbatively computable function, q is the desired PDF, and the ellipsis denotes power-suppressed corrections. In the above expression, we distinguish between the factorization scale, μ_F, and the renormalization scale, μ_R. These scales are usually taken to be the same and, hence, from now on we will adopt this choice and take μ_F = μ_R ≡ μ. Lattice approaches use different observables that fall into two classes:
(1) observables that are generalizations of light-cone functions such that they can be accessed on the lattice; such generalized functions have a direct x-dependence, but this x does not have the same partonic interpretation as the Bjorken-x;
(2) observables in terms of which the hadronic tensor can be written; the hadronic tensor is then decomposed into structure functions, like F_1 and F_2, which are factorizable into PDFs.
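On a grid, the generic factorization formula above is just a weighted sum. The following sketch (restricted to positive momentum fractions for simplicity) convolves a toy input distribution with a toy kernel C(ξ) = δ(1 − ξ) + α K(ξ); both the input shape and K are arbitrary stand-ins, not an actual perturbative matching kernel.

```python
import numpy as np

N = 2000
y = (np.arange(N) + 0.5) / N        # midpoint grid on (0, 1)
dy = 1.0 / N

q = y**0.5 * (1 - y)**3             # toy input distribution (arbitrary shape)

def K(xi):
    """Toy O(alpha) kernel: an arbitrary smooth stand-in, not a real matching kernel."""
    return 1.0 + xi**2

def observable(q, alpha):
    """O(x) = q(x) + alpha * integral_x^1 dy/y K(x/y) q(y), evaluated on the grid."""
    O = q.copy()
    for i, xv in enumerate(y):
        mask = y >= xv
        O[i] += alpha * np.sum(K(xv / y[mask]) * q[mask] / y[mask]) * dy
    return O

O_lo = observable(q, 0.0)    # trivial kernel: the observable reduces to the input
O_nlo = observable(q, 0.1)   # the O(alpha) term shifts the observable
```

With α = 0 the observable reduces to the input distribution exactly; the perturbative correction then shifts it order by order, which is the structure every approach below relies on.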

Below, we provide the general idea for several proposals that were introduced in recent years.

2.1. Hadronic Tensor

All the information about a DIS cross-section is contained in the hadronic tensor [52–56], defined by

W_{\mu\nu}(p, q) = \frac{1}{4\pi} \int d^4 z\, e^{i q \cdot z}\, \langle p, s |\, [J_\mu(z), J_\nu(0)]\, | p, s \rangle,    (3)

where |p, s⟩ is the hadron state labeled by its momentum p and polarization s, q is the virtual photon momentum, and J_μ(z) is the electromagnetic current at the point z. The hadronic tensor can be related to DIS structure functions and hence, in principle, PDFs can be extracted from it. W_{μν} is the imaginary part of the forward Compton amplitude and can be written as the current-current correlation function

W_{\mu\nu}(p, q) = \frac{1}{2\pi} \int d^4 z\, e^{i q \cdot z}\, \langle p |\, J_\mu(z)\, J_\nu(0)\, | p \rangle_s,    (4)

where the subscript s denotes averaging over polarizations. The approach has been introduced as a possible way of investigating hadronic structure on the lattice by K.-F. Liu and S.-J. Dong already in 1993. They also proposed a decomposition of the contributions to the hadronic tensor according to different topologies of the quark paths, into valence and connected or disconnected sea ones. In this way, they addressed the origin of Gottfried sum rule violation.

A crucial aspect for the implementation in Lattice QCD is the fact that the hadronic tensor, defined in Minkowski spacetime, can be obtained from the Euclidean path-integral formalism [54–58], by considering ratios of suitable four-point and two-point functions. In the limit of the points being sufficiently away from both the source and the sink, where the hadron is created or annihilated, the matrix element receives contributions from only the ground state of the hadron. Reconstructing the Minkowski tensor from its Euclidean counterpart is formally defined by an inverse Laplace transform of the latter and can, in practice, be carried out using, e.g., the maximum entropy method or the Backus-Gilbert method. Nevertheless, this aspect is highly nontrivial and improvements thereof are being sought. As pointed out in [59], a significant role may be played by power-law finite volume effects related to the matrix elements being defined in Euclidean spacetime. A similar phenomenon was recently observed also in the context of mixing [60]. Another difficulty of the hadronic tensor approach on the lattice is the necessity to calculate four-point correlation functions, which is computationally more intensive than for three-point functions, the standard tools of hadron structure investigations on the lattice. However, the theoretical appeal of the hadronic tensor approach recently sparked renewed interest in it [61–63]. We describe some exploratory results in Section 11.1.
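To make the inversion step concrete, here is a minimal Backus-Gilbert sketch for a Laplace-type kernel, the generic shape of Euclidean-time problems. All inputs are mock, noise-free data; the grids and the regularization parameter are arbitrary choices for illustration only.

```python
import numpy as np

def backus_gilbert(taus, omegas, data, omega0, lam=1e-6):
    """Estimate rho(omega0) from data_t = integral dw exp(-w*t) rho(w).

    The coefficients c minimize the spread of the averaging function
    A(omega0, w) = sum_t c_t exp(-w*t) around omega0, subject to the
    normalization integral dw A = 1. `lam` is an ad hoc regularization
    (the spread matrix is badly conditioned, as in realistic applications).
    """
    K = np.exp(-np.outer(taus, omegas))               # kernel K[t, w]
    dw = omegas[1] - omegas[0]
    spread = (K * (omegas - omega0) ** 2) @ K.T * dw  # spread matrix
    spread += lam * np.eye(len(taus))
    R = K.sum(axis=1) * dw                            # R_t = integral dw K_t(w)
    w = np.linalg.solve(spread, R)
    c = w / (R @ w)                                   # enforce unit normalization of A
    return c @ data, c

# Mock problem: smooth "spectral function" and exact (noise-free) data
omegas = np.linspace(0.01, 5.0, 500)
taus = np.linspace(0.1, 2.0, 20)
dw = omegas[1] - omegas[0]
rho = np.exp(-0.5 * ((omegas - 1.0) / 0.2) ** 2)
data = np.exp(-np.outer(taus, omegas)) @ rho * dw

est, c = backus_gilbert(taus, omegas, data, omega0=1.0)
```

The resolution is limited by the width of the averaging function; with noisy lattice data the regularization must be tuned carefully, which is precisely the nontrivial aspect mentioned above.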

2.2. Auxiliary Scalar Quark

In 1998, a new method was proposed to calculate light-cone wave functions (LCWFs) on the lattice [64]. It finds its motivation in the fact that LCWFs enter the description of many processes, such as electroweak decays and meson production. The LCWF is the leading term of the full hadronic wave function in the 1/P expansion, where P is the hadron momentum. For concreteness, we write the defining expression for the most studied LCWF, the one of the pion, ψ_π(u), where u is the momentum fraction:

\langle 0 |\, \bar{d}(z)\, \gamma^+ \gamma_5\, W(z, 0)\, u(0)\, | \pi(P) \rangle = i f_\pi P^+ \int_0^1 du\, e^{-i u P \cdot z}\, \psi_\pi(u),    (5)

with |π(P)⟩ being the boosted pion state, |0⟩ the vacuum state, and f_π the pion decay constant. W(z, 0) is the Wilson line that ensures gauge invariance of the matrix element.

The essence of the idea is to “observe” and study on the lattice the partonic constituents of hadrons instead of the hadrons themselves [65, 66]. As shown in [64], the pion LCWF can be extracted by considering a vacuum-to-pion expectation value of the axial vector current with quark fields separated in spacetime. Gauge invariance is ensured by a scalar quark propagator with the color quantum numbers of a quark, evaluated at a large momentum transfer. The Fourier transform of this matrix element, computed on the lattice, is related to the pion LCWF, ψ_π, through a formula involving the momentum transfer, the scalar colored propagator, and the discrete set of partonic momentum fractions allowed by the discretized momenta in a finite volume. The spacetime points are explained in Figure 1, which shows the three-point function that needs to be computed. The relevant time interval needs to be large to have an on-shell pion. To extract the LCWF, several conditions need to be satisfied: the injected pion momentum needs to be large (to have a “frozen” pion and see its partonic constituents), the scalar quark needs to carry large energy, the time of the momentum transfer and “transformation” of a quark into a scalar quark has to be small (to prevent quantum decoherence and hadronization), and the lattice volume has to be large enough (to minimize the effects of discretizing parton momenta). We refer to the original papers for an extensive discussion of these conditions. An exploratory study of the approach was presented in [65, 66] and later in [67], both in the quenched approximation. Naturally, the conditions outlined above are very difficult to satisfy simultaneously on the lattice, due to restrictions from the finite lattice spacing and the finite volume. However, knowledge of the full hadronic wave function from first principles would be very much desired and further exploration of this approach may be interesting.
In particular, integrals of hadronic wave functions over transverse momenta yield distribution amplitudes and PDFs.

Figure 1: Schematic representation of the three-point function that needs to be computed to extract the pion light-cone wave function [64–66]. Source: arXiv version of [66], reprinted with permission by the Authors.
2.3. Auxiliary Heavy Quark

In 2005, another method was proposed [58] to access hadron structure on the lattice, including PDFs. The idea relies on simulating on the lattice the Compton scattering tensor, using currents that couple the physical light quarks of a hadron to a purely valence fictitious heavy quark. In the continuum limit, one can extract matrix elements of local operators in the OPE in the same renormalization scheme in which the Wilson coefficients are calculated. In this way, one gets the moments of PDFs. The crucial difference with respect to standard lattice calculations of moments is that the approach removes power-divergent mixings with lower-dimensional operators, unavoidable in the lattice formulation for the fourth and higher moments due to the breaking of rotational invariance down to the discrete hypercubic subgroup H(4). This is derived and discussed in detail in [58]. Thus, in principle, any PDF moment can be extracted and the whole PDF can be reconstructed. Moreover, the heavy fictitious quark suppresses long-range correlations between the currents and also removes many higher-twist contaminations (twist = dimension − spin). The results are independent of the mass of the heavy quark, as long as it satisfies a lattice window restriction; that is, it should be much larger than Λ_QCD, but much smaller than the ultraviolet cutoff 1/a, with a the lattice spacing. In practice, this means a requirement of rather small lattice spacings.

The considered heavy-light current is defined as follows:

J_\mu(x) = \bar{\Psi}(x)\, \Gamma_\mu\, \psi(x),    (7)

with ψ denoting the light quark field and Ψ the fictitious heavy quark field. The Dirac structure Γ can be general and is typically chosen according to the desired final observable. The Euclidean Compton scattering tensor is then constructed:

T_{\mu\nu}(p, q) = \sum_s \int d^4 x\, e^{i q \cdot x}\, \langle p, s |\, T\{ J_\mu(x)\, J^\dagger_\nu(0) \}\, | p, s \rangle,    (8)

where |p, s⟩ are hadron states with momentum p and spin s and the spins are summed over. The momentum transfer between the hadron and the scattered lepton in a DIS process should be smaller than or at most of the order of the heavy quark mass. Furthermore, the momenta should satisfy a constraint, spelled out in [58], which involves the Minkowski counterparts of the Euclidean momenta and guarantees that no on-shell intermediate states contribute. If this condition is satisfied, analytic continuation of the hadronic tensor to Euclidean spacetime is straightforward and can be achieved by relating the Euclidean q_4 to the Minkowski −i q_0. Expanding this tensor, in the continuum, using the OPE, one can relate it to moments of PDFs. On the lattice, one needs to compute four-point functions to access the Compton tensor for PDFs. An analogous procedure may be applied to compute distribution amplitudes, for which the hadronic interpolating operator needs to be applied only once, to the vacuum state, and hence a computation of only three-point functions is required.

Numerical exploration, in the quenched approximation, is in progress [68], aimed at extracting the moments of the pion DA. Since the matching to OPE has to be performed in the continuum, at least three values of the lattice spacing need to be employed for a reliable extrapolation. Preliminary results are presented in Section 11.2 demonstrating the feasibility of this method.

2.4. Auxiliary Light Quark

Another possibility for the extraction of light-cone distribution functions appeared in 2007, proposed by V. Braun and D. Müller [69], and is based on the lattice calculation of exclusive amplitudes in coordinate space. It is similar to the fictitious heavy quark approach, but the heavy quark is replaced by an auxiliary light quark. One considers a current-current correlator:

T_{\mu\nu}(q, p) = \int d^4 z\, e^{i q \cdot z}\, \langle 0 |\, T\{ j_\mu(z)\, j_\nu(0) \}\, | \pi(p) \rangle,    (9)

with j_μ being the electromagnetic current, but other choices of currents are also possible. For a discussion on extracting partonic distributions at light-like separations from such Euclidean correlators, we refer to Section II of [70]. On the lattice, T_{μν} can be computed as an appropriate ratio of a three-point and a two-point function. If the separation between the currents is small, the correlator can be computed perturbatively (using the OPE) and in such a case Equation (9) yields, at leading order and leading twist, an expression (Equation (10)) proportional to the Fourier transform, Φ_π(p·z), of the pion DA, φ_π(u), where u is the quark momentum fraction. The perturbative expression for the correlator was also derived in [69] to NNLO, including twist-4 corrections. The LO and leading-twist expression for the case of scalar-pseudoscalar densities in Equation (9) was given in [71]. It has been emphasized that the pion boost plays a different role than in some other approaches, as it does not suppress higher-twist contributions, but rather enters the Ioffe time p·z. Thus, going to large boosts is important to have full information on the coordinate-space pion DA, Φ_π(p·z), which can allow disentanglement between phenomenological models considered in the literature that disagree in the regime of large Ioffe times. Advantages of the approach include the possibility of having an arbitrary direction of q with respect to the boost direction, which may make it possible to minimize discretization effects.
Moreover, one avoids complications related to the renormalization in the presence of a Wilson line (see Section 7); that is, one only needs renormalization of standard local operators which is at most logarithmically divergent. Finally, different possible Dirac structures may give the possibility of better control of higher-twist contamination. Obviously, the approach can also be generalized to extract PDFs, which, however, would necessitate the computation of four-point functions (see also Section 2.8).

The first numerical investigation of this approach is under way by the Regensburg group [70, 71] and is aimed at computing the pion DA, using multiple channels. The results fully prove the feasibility of this method and establish its status as a promising way of studying hadron structure; see also Section 11.3. Nevertheless, the requirement of calculation of four-point functions for extracting PDFs may prove to be a serious restriction and an exploratory study for, for example, nucleon PDFs, is not yet available.

2.5. Quasidistributions

In 2013, X. Ji proposed a new approach to extracting the x-dependence of structure functions [45]. Although historically it was not the first such idea, it can now be judged to have been a breakthrough in the community’s thinking about obtaining the x-dependence from numerical simulations on a Euclidean lattice. In particular, it clearly renewed the interest also in approaches proposed earlier and described above. Ji’s approach, obviously, bears similarities with the earlier methods and is also based on the factorization framework, in which a lattice-computable function is factorized into a hard coefficient and a nonperturbative object like a PDF or a DA. The main difference is the type of object that is used to connect a quark and an antiquark separated by some distance and that ensures gauge invariance. In earlier proposals, different types of auxiliary quark propagators were used for this: scalar, heavy, or light quark propagators. In Ji’s technique, this role is played by a Wilson line, that is, the same object that is used in definitions of PDFs and other distribution functions. Thus, in general, the quasidistribution approach is the closest transcription of a light-cone definition to Euclidean spacetime, effectively boiling down to replacing light-cone correlations by equal-time correlators along the direction of the Wilson line.

We illustrate the idea using the example of PDFs, while analogous formulations can be used to define DAs, GPDs, etc. It is instructive to see the direct correspondence between the light-cone definition (Equation (1)) and the definition of quasi-PDFs. As pointed out above, since light-cone correlations cannot be accessed on a Euclidean lattice, Ji proposed to evaluate on the lattice the following distribution, now termed the quasidistribution:

\tilde{q}(x, \mu, P_3) = \int_{-\infty}^{\infty} \frac{dz}{4\pi}\, e^{-i x P_3 z}\, \langle P |\, \bar{\psi}(z)\, \gamma^3\, W(z, 0)\, \psi(0)\, | P \rangle,    (11)

where x = k_3/P_3, k_3 is the quark momentum in the 3-direction, and W(z, 0) is the Wilson line along the boost direction. The light-cone definition corresponds to the above expression at infinite momentum boost, in line with Feynman’s original parton model [72, 73]. Since the momentum of the nucleon on the lattice is obviously finite, the partonic interpretation is formally lost and some quarks can carry more momentum than the whole nucleon (x > 1) or move in the opposite direction to it (x < 0).
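In practice, the lattice delivers the matrix element at a discrete, truncated set of Wilson-line lengths z, and the quasidistribution is obtained by a discrete Fourier transform in z. The sketch below mocks up such a matrix element from a toy input distribution and performs the transform; the boost value, the z-range, and the toy shape are all arbitrary choices, and the oscillations outside the input support illustrate a truncation artifact of the Fourier step, separate from the physical x > 1 and x < 0 regions discussed above.

```python
import numpy as np

P3 = 10.0                              # hypothetical boost (arbitrary lattice-like units)
zs = np.arange(-20, 21) * 0.1          # symmetric, truncated set of Wilson-line lengths z
xs = np.linspace(-0.5, 1.5, 201)       # points at which the quasidistribution is sampled

# Toy input distribution on (0, 1) (an arbitrary shape, not a real PDF)
yg = (np.arange(4000) + 0.5) / 4000
dy = 1.0 / 4000
qy = yg**0.5 * (1 - yg)**3

# Mock "lattice matrix element" h(P3, z) = integral dy q(y) exp(i y P3 z)
h = (np.exp(1j * P3 * np.outer(zs, yg)) @ qy) * dy

# Truncated Fourier transform: qtilde(x) = (P3 / 2pi) * sum_z dz exp(-i x P3 z) h(z)
dz = zs[1] - zs[0]
qtilde = (P3 / (2 * np.pi)) * (np.exp(-1j * P3 * np.outer(xs, zs)) @ h) * dz
```

With a symmetric z-range and h(−z) = h(z)*, the result is real up to rounding, and its peak stays inside the input support; the finite z-range smears the shape with a resolution of order π/(P₃ z_max).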

The quasidistribution differs from the light-cone one by higher-twist corrections, suppressed as Λ_QCD²/P₃² and M_N²/P₃², where M_N is the nucleon mass; see Section 3.1 for more details. A vital observation of Ji was that the difference between the two types of distributions arises only in the UV region; that is, their structure in the IR is the same. This means that the UV difference can be computed in perturbation theory and subtracted from the result, which comes under the name of matching to a light-cone distribution or Large Momentum Effective Theory (LaMET) [74]. The possibility of correcting the higher-twist effects by LaMET is an important difference with respect to the previously mentioned approaches. However, explicit computation of such effects is also possible in those approaches, as demonstrated already in the original paper for the auxiliary light quark approach [69].

The quasidistribution approach received a lot of interest in the community and sparked most of the numerical work among all the direct x-dependence methods. In further sections, we discuss in more detail its various aspects and the plethora of numerical results obtained so far.

2.6. Pseudodistributions

The approach of quasidistributions was thoroughly analyzed by A. Radyushkin [75–77] in the framework of virtuality distribution functions introduced by the same Author [78, 79] and straight-link primordial TMDs. In the process, he discovered another, but strongly related, type of distribution that is accessible on the lattice and can be related to light-cone distributions via factorization. It can be extracted from precisely the same matrix element that appears in Equation (1), M(ν, z²), viewed as a function of two Lorentz invariants: the "Ioffe time" ν = p·z [52] and z². Thus, M(ν, z²) has been termed the Ioffe time distribution (ITD). As in Ji's approach, the vector z can be chosen to be purely spatial on a Euclidean lattice. Then, one defines a pseudodistribution:

P(x, z²) = (1/2π) ∫ dν e^{−iνx} M(ν, z²).

Thus, the variation with respect to a quasi-PDF is that the Fourier transform is taken over the Ioffe time ν (at fixed z²), as opposed to being over the Wilson line length (at fixed momentum P₃). A consequence of this difference is that pseudo-PDFs have considerably distinct properties from quasi-PDFs. In particular, the distribution P(x, z²) has the canonical support, −1 ≤ x ≤ 1.

We briefly mention here the issue of power divergences induced by the Wilson line, to be discussed more extensively in Sections 5.2.1 and 7. In the pseudodistribution approach, a convenient way of eliminating these (multiplicative) divergences is to take the ratio M(ν, z²)/M(0, z²) [77, 80]. The reduced ITD, 𝔐(ν, z²) ≡ M(ν, z²)/M(0, z²), can then be perturbatively matched to a light-cone Ioffe time PDF [81–86], as demonstrated in Section 8. The (inverse) length of the Wilson line plays the role of the renormalization scale and can be related to, e.g., the MS-bar scale μ.
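The cancellation of the multiplicative divergences in the ratio can be illustrated with a short numerical sketch. The "bare matrix element" below is a mock model, an arbitrary Ioffe-time shape multiplied by a z-dependent (boost-independent) factor standing in for the Wilson-line divergence; it is not lattice data, and all parameters are hypothetical:

```python
import numpy as np

a, L = 0.094, 32                  # hypothetical lattice spacing [fm] and size

def mock_bare_me(z, P3):
    """Mock bare matrix element: Ioffe-time shape exp(-nu^2/8) times a
    z-dependent factor standing in for the multiplicative UV divergence."""
    nu = P3 * z
    return np.exp(-nu ** 2 / 8.0) * np.exp(-0.4 * z / a)

zs = a * np.arange(0, 9)          # Wilson-line lengths 0..8a
reduced = {}
for n in (1, 2, 3):               # boosts in units of 2*pi/(L*a)
    P3 = n * 2 * np.pi / (L * a)
    reduced[n] = mock_bare_me(zs, P3) / mock_bare_me(zs, 0.0)

# The z-dependent divergent factor cancels in the ratio: points computed at
# different boosts but equal Ioffe time nu = P3*z collapse onto one curve,
# e.g., (n=1, z=2a) and (n=2, z=a).
```

The collapse of data at equal ν from different boosts onto a single curve is precisely what makes the reduced ITD a convenient, renormalization-group-friendly object.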

Numerical investigation of the pseudodistribution approach has proceeded in parallel with the theoretical developments, and promising results are being reported [81, 87–90] (see also Section 11.4).

2.7. OPE without OPE

Yet another recent proposal to compute hadronic structure functions was suggested in [91]. It is closely related to known ideas introduced around 20 years ago, dubbed “OPE without OPE” by G. Martinelli [92] and applied in, e.g., flavor physics [93]. The name originates from the fact that one directly computes the chronologically ordered product of two currents rather than matrix elements of local operators. In addition, one works in the regime of small spacetime separations between currents (to use perturbation theory to determine the expected form of the OPE), but large enough to avoid large discretization effects. The idea is also an ingredient of the proposal to compute LCWFs with the aid of a fictitious scalar quark [64].

The starting point is the forward Compton amplitude of the nucleon, T_{μν}(p, q), defined similarly as in Equation (3). It can be decomposed in terms of the DIS structure functions F₁ and F₂. With a particular choice of kinematics, one can obtain the following relation between the 33-component of the Compton amplitude and F₁:

T₃₃(p, q) = 4 Σ_{n≥1} ω^{2n} M_{2n}(Q²),

where ω = 2p·q/Q² and M_{2n}(Q²) are moments of F₁. Being able to access T₃₃ for a large enough number of values of ω, one can extract the moments of F₁ or even the whole function.
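As a toy illustration of this strategy, assume a truncated even-power expansion of the Compton amplitude in ω whose coefficients are the lowest moments (the numerical values below are invented for the demonstration). Values of the amplitude at several ω then determine the moments through a small linear solve:

```python
import numpy as np

true_moments = np.array([0.18, 0.05, 0.02])   # hypothetical lowest moments
omegas = np.array([0.2, 0.4, 0.6])            # accessible omega values

def mock_T33(w):
    """Truncated expansion: T33 = sum_n moments[n] * w**(2n+2)."""
    return sum(m * w ** (2 * n + 2) for n, m in enumerate(true_moments))

T_vals = mock_T33(omegas)

# Vandermonde-type linear system in omega^2: A[i, n] = omega_i**(2n+2)
A = np.array([[w ** (2 * n + 2) for n in range(len(true_moments))]
              for w in omegas])
extracted = np.linalg.solve(A, T_vals)   # recovers the input moments
```

With noisy lattice data, one would instead fit the expansion with a regularized least-squares procedure, but the counting is the same: each additional accessible ω constrains one more moment.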

Another important ingredient of the method proposed in [91] is the efficient computation of T₃₃, that is, one that avoids the computation of four-point functions. It relies on the Feynman-Hellmann relation [94]. One extends the QCD Lagrangian with a perturbation

λ Σ_f e_f ψ̄_f γ₃ ψ_f cos(q·x),

where e_f is the electric charge of the f-th flavor and λ is a parameter with dimension of mass. Evaluating derivatives of the nucleon energy with respect to λ, which requires dedicated simulations at a few λ values, leads to estimates of T₃₃. The Authors also showed first results obtained in this framework and pointed to directions of possible improvements and to prospects of computing the entire structure function based on this method (see Section 11.5).
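The Feynman-Hellmann step can be sketched as follows: one computes the nucleon energy at a few values of the coupling λ of the added perturbation, fits a polynomial in λ, and reads off the low-order Taylor coefficients, which carry the desired matrix-element information. The quadratic E(λ) below is a mock stand-in for lattice data, with invented coefficients:

```python
import numpy as np

E0, c1, c2 = 1.10, 0.35, 0.80     # hypothetical Taylor coefficients of E(lambda)

def mock_energy(lam):
    """Mock nucleon energy in the perturbed theory: E0 + c1*lam + c2*lam^2."""
    return E0 + c1 * lam + c2 * lam ** 2

lams = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])   # simulated lambda values
energies = mock_energy(lams)

coeffs = np.polyfit(lams, energies, 2)    # [quadratic, linear, constant]
dE_dlam = coeffs[1]                       # first derivative at lambda = 0
d2E_dlam2 = 2.0 * coeffs[0]               # second derivative at lambda = 0
```

In practice the λ values must be small enough for the truncated Taylor expansion to hold, yet large enough for the energy shifts to be resolvable above the statistical noise.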

2.8. Good Lattice Cross Sections

A novel approach to extracting PDFs or other partonic correlation functions from ab initio lattice calculations was proposed by Y.-Q. Ma and J.-W. Qiu [85, 86, 95]. They advocate a global fit of "lattice cross-sections" (LCSs), i.e., appropriate lattice observables defined below, to which many of the ones described above belong. The logic is that standard phenomenological extractions of PDFs rely on an analogous fit to hadronic cross-sections (HCSs) obtained in experiments, and a global fit approach can average out some of the systematics and ultimately yield good precision.

Good LCSs, i.e., ones that can be included in such a global fit, are the ones that have the following properties:
(1) they are calculable in Euclidean Lattice QCD,
(2) they have a well-defined continuum limit,
(3) they have the same, and factorizable, logarithmic collinear divergences as PDFs.

All of these properties are crucial and nontrivial. The first one excludes the direct use of observables defined on the light cone. In practice, the second one requires the observables to be renormalizable. Finally, the third property implies that the analogy with global fits to HCSs is even more appropriate; both strategies need to rely on the factorization framework: LCSs and HCSs are then written as a convolution of a perturbatively computable hard coefficient with a PDF.

Ma and Qiu also constructed a class of good LCSs in coordinate space that have the potential of being used in the proposed global fits, demonstrating that the three defining properties of LCSs are satisfied [86]. The considered class is very closely related to the one proposed by Braun and Müller (see Section 2.4), but the latter Authors concentrated on the pion DA, while the analysis of Ma and Qiu deals with the case of hadronic PDFs. In general, the relevant matrix element can be written as

σ_n(ω, ξ², P²) = ⟨P| T{𝒪_n(ξ)} |P⟩,

where 𝒪_n(ξ) stands for different possible operators that can be shown to be factorizable into the desired PDF, P is the hadron momentum, ξ is the largest separation of fields from which the n-th operator is constructed (ξ² ≠ 0), and ω = P·ξ. One suggested choice for 𝒪_n are the current-current correlators:

𝒪_{j₁j₂}(ξ) = ξ^{d_{j₁}+d_{j₂}−2} j₁(ξ) j₂(0),

where d_j stands for the dimension of the renormalized current j(ξ) = Z_j j_bare(ξ), with Z_j being the renormalization function of the current. Different possible options for the currents were outlined and, then, factorization was demonstrated for this whole class of LCSs. Renormalizability of these objects is straightforward, as they are constructed from local currents. Also, the feasibility of a lattice calculation is easy to establish if ξ has no time component. Thus, this class of matrix elements belongs to the set of good LCSs. It was also shown in [86] that three of the observables discussed above, quasi-PDFs, pseudo-PDFs, and the Compton amplitude T_{μν}, are also examples of good LCSs.

An explicit numerical investigation of the current-current correlators is in progress by the theory group of Jefferson Lab (JLab), and first promising results for pion PDFs, using around 10 different currents, have been presented. For more details, see Section 11.6.

3. Quasi-PDFs: More Details and Early Numerical Studies

We now discuss, in more detail, the quasidistribution approach, which is the main topic of this review. The focus of this section is on the theoretical principles of this method, and we closely follow the original discussion in Ji's first papers. Since these were soon followed by numerical calculations within Lattice QCD exploring the feasibility of the approach, we also summarize the progress on this side. We also identify the missing ingredients in these early studies and the aspects that needed significant improvement.

3.1. Theoretical Principles of Quasi-PDFs

Ji’s idea of quasi-PDFs [45] relies on the intuition that if light-cone PDFs can be equivalently formulated in the infinite momentum frame (IMF), then the physics of a hadron boosted to a large but finite momentum has to have much in common with the physics of the IMF. Moreover, the difference between a large momentum frame and the IMF should vanish when the hadron momentum approaches infinity. These intuitions were formalized by Ji in his original paper and we reproduce here his arguments.

Consider a local twist-2 operator

𝒪^{(μ₀μ₁⋯μₙ)} = ψ̄ γ^{(μ₀} iD^{μ₁} ⋯ iD^{μₙ)} ψ − trace terms,

where parentheses in the superscript indicate symmetrization of indices and the subtracted trace terms include operators of dimension n+3 with at most n−1 Lorentz indices. The matrix element of such an operator in the nucleon state reads

⟨P| 𝒪^{(μ₀μ₁⋯μₙ)} |P⟩ = 2 aₙ Π^{(μ₀μ₁⋯μₙ)}(P) + trace terms,

where Π^{(μ₀μ₁⋯μₙ)}(P) is a symmetric rank-(n+1) tensor [96] and the coefficients aₙ are moments of PDFs, i.e., aₙ = ∫ dx xⁿ q(x), with n even. Taking all indices μᵢ = +, one recovers the light-cone, time-dependent correlation that defines the PDF. We now consider a different choice of indices, without any temporal component, μᵢ = 3:

⟨P| 𝒪^{(33⋯3)} |P⟩ = 2 aₙ Π^{(33⋯3)}(P) + trace terms,

with the trace terms containing operators with again at most n−1 Lorentz indices. Because of Lorentz invariance, matrix elements of the trace terms in the nucleon state are at most of order (P₃)^{n−1}, multiplied by M_N². On the other hand [96],

Π^{(33⋯3)}(P) = (P₃)^{n+1} + cₙ M_N² (P₃)^{n−1} + …,

where cₙ is a combinatorial coefficient and M_N is the nucleon mass. As a consequence, we find that

q̃(x, P₃) = q(x) + 𝒪(Λ_QCD²/P₃², M_N²/P₃²).

The form of this expression implies that using an operator with the Wilson line in a spatial direction, in a nucleon state with finite momentum, leads to the light-cone PDF up to power-suppressed corrections in the inverse squared momentum. The corrections are of two kinds: generic higher-twist corrections and ones resulting from the nonzero mass of the nucleon. As we will discuss below, the latter can be calculated analytically and subtracted out. However, the former can only be overcome by simulating at a large enough nucleon boost and by using a matching procedure.

In the original paper that introduced the quasidistribution approach [45], Ji pointed out an intuitive way to understand the above result: "(…) consider the Lorentz transformation of a line segment connecting the point z with the origin of the coordinates. As the boost velocity approaches the speed of light, the space-like line segment is tilted to the light-cone direction. Of course, it cannot literally be on the light-cone because the invariant length cannot change for any amount of boost. However, this slight off-light-cone-ness only introduces power corrections which vanish asymptotically." This intuition is schematically represented in Figure 2.

Figure 2: Schematic illustration of the relation between a finite momentum frame, with the Wilson line in a spatial direction, and the light-cone frame of a hadron at rest. Due to Lorentz contraction, going to the light-cone frame increases the length of the line by a boost factor γ, which becomes infinite in the IMF. Source: [74], reprinted with permission by the Author and Springer Nature.

We turn now to discussing how to match results obtained on the lattice, with a hadron momentum that is finite and relatively small, to the IMF. The subtlety here results from the fact that regularizing the UV divergences does not commute with taking the infinite momentum limit. When defining PDFs, the latter has to be taken first, i.e., before removing the UV cutoff, whereas on the lattice one is bound to take all scales, including the momentum boost of the nucleon, much smaller than the cutoff, whose role is played by the inverse lattice spacing. To overcome this difficulty, one needs to formulate an effective field theory, termed Large Momentum Effective Theory (LaMET) [74], which takes the form of matching conditions that take the quasidistribution to the IMF, or light-cone, distribution. LaMET is an effective theory of QCD in the presence of a large momentum scale P₃, in a similar sense as Heavy Quark Effective Theory (HQET) [97] is an effective theory of QCD in the presence of a heavy quark, whose mass can even be larger than the lattice UV cutoff.

The parallels of LaMET with HQET are more than superficial. We again follow Ji's discussion [74]. In HQET, a generic observable O depends on the heavy quark mass m_Q and a cutoff Λ. The matching with an observable defined in the effective theory, in which the heavy quark has infinite mass, can be written in the following way, due to asymptotic freedom:

O(m_Q, Λ) = Z(m_Q/μ, Λ) o(μ) + 𝒪(1/m_Q),

where o(μ) is renormalized at a scale μ in the effective theory. Additionally, renormalization of the full theory translates the cutoff scale Λ to a renormalization scale. The crucial aspect is that O and o have the same infrared physics. Thus, the matching coefficient, Z, is perturbatively computable as an expansion in the strong coupling constant. Apart from the perturbative matching, there are power-suppressed corrections, which can also be calculated.

Using the same ideas, one can write the relation between an observable in the lattice theory, O(P₃, Λ), dependent on the analogue of a heavy mass, i.e., a large momentum P₃ (and on the cutoff scale Λ), and an observable in a theory in the IMF, o(μ), thus corresponding to Feynman's parton model or to a light-cone correlation. This is again valid because of asymptotic freedom. The matching reads as follows:

O(P₃, Λ) = Z(P₃/μ, Λ) o(μ) + 𝒪(Λ_QCD²/P₃², M_N²/P₃²).

We have, therefore, established the close analogy between HQET and the IMF parton model: the latter plays the role of an effective theory for a nucleon moving with a large momentum, just as HQET is an effective theory for QCD with a heavy mass. The infrared properties are, again, the same in both theories, and the matching coefficient, Z, can be computed in perturbation theory. There are power-suppressed corrections in inverse powers of P₃, vs. inverse powers of m_Q in HQET.

To summarize, the need for LaMET when transcribing the finite boost results to light-cone parton distributions is the consequence of the importance of the order of limits. Parton physics corresponds to taking the infinite momentum limit in the observable first, before renormalization. On the lattice, in turn, UV regularization is necessarily taken first, before the infinite momentum limit, since no scale in the problem can be larger than the UV cutoff. However, interchanging the order of limits does not influence infrared physics and, hence, only matching in the ultraviolet has to be carried out and can be done perturbatively. The underlying factorization can be proven order by order in perturbation theory. It is important to emphasize that any partonic observable can be accessed within this framework, with the same universal steps:
(1) construction of a Euclidean version of the light-cone definition; the Euclidean observable needs to approach its light-cone counterpart in the limit of infinite momentum,
(2) computation of the appropriate matrix elements on the lattice and their renormalization,
(3) calculation of the matching coefficient in perturbation theory and use of LaMET, Equation (24), to extract the light-cone distribution.

There is complete analogy also with accessing parton physics from scattering experiments, using factorization theorems and, thus, separating the nonperturbative (low-energy) and perturbative (high-energy) scales. To have similar access to partonic observables from lattice computations, LaMET plays the role of a tool for scale separation. Moreover, just as parton distributions can be extracted from a variety of different scattering processes, they can also be approached with distinct lattice operators.

We continue the discussion of LaMET by considering the matching process in more detail. In the first paper devoted to the matching in the framework of LaMET, the nonsinglet PDF case was discussed [98]. We remind here the definition of the quasi-PDF:

q̃(x, Λ, P₃) = ∫ dz/(4π) e^{−ixP₃z} ⟨P| ψ̄(z) γ₃ W(z, 0) ψ(0) |P⟩,

taking the original choice of the Dirac structure, i.e., γ₃, for the unpolarized case (see the discussion about mixing for certain Dirac structures in Section 7). The matching condition should take the form

q̃(x, μ, P₃) = ∫₋₁¹ (dy/|y|) Z(x/y, μ/P₃) q(y, μ) + 𝒪(Λ_QCD²/P₃², M_N²/P₃²),

where the quasi-PDF, q̃, is renormalized at a scale μ. The calculation of the matching is performed in a simple transverse momentum cutoff scheme, regulating the UV divergence; later, in Section 8, we will consider further developments, including matching from different schemes to the MS-bar scheme. The motivation behind using the transverse momentum cutoff scheme is to keep track of the linear divergence related to the presence of the Wilson line, which would not be possible when using dimensional regularization.
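The convolution structure of the matching condition can be illustrated numerically. The sketch below uses a mock smearing kernel (a narrow unit-area Gaussian replacing the tree-level delta function broadened by "radiative" effects) and a mock light-cone PDF; neither is the actual one-loop expression, and only y > 0 is kept for simplicity:

```python
import numpy as np

def q_lc(y):
    """Mock light-cone PDF, approximately unit-normalized, support (0, 1)."""
    return np.where((y > 0.0) & (y < 1.0), 9.84 * y ** 0.5 * (1.0 - y) ** 3, 0.0)

def Z_mock(xi, width=0.05):
    """Unit-area Gaussian centered at xi = 1: a broadened delta(1 - xi)."""
    return np.exp(-((xi - 1.0) ** 2) / (2 * width ** 2)) / (np.sqrt(2 * np.pi) * width)

def matched(x, n=4001):
    """qtilde(x) = int_0^1 dy/y * Z(x/y) * q(y), by a simple Riemann sum."""
    ys = np.linspace(1e-4, 1.0, n)
    dy = ys[1] - ys[0]
    return np.sum(Z_mock(x / ys) / ys * q_lc(ys)) * dy

# Inside (0, 1) the convolution roughly reproduces q_lc; at finite kernel
# width it leaks beyond x = 1, mimicking the support of quasi-PDFs.
```

In the actual LaMET program the kernel contains delta-function and plus-distribution pieces, so the numerical treatment is more delicate, but the convolution structure is the same.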

The tree level of both the quasi- and the light-cone distributions is the same, i.e., a Dirac delta, δ(1−x). At one-loop level, two kinds of contributions appear: the self-energy diagram (left one in Figure 3) and the vertex diagram (right one in Figure 3). The quasidistribution receives, hence, the following one-loop correction:

q̃(x, Λ, P₃) = δ(1−x) + (α_s/2π) [ q̃_self^{(1)}(x, Λ, P₃) + q̃_vertex^{(1)}(x, Λ, P₃) ] + 𝒪(α_s²),

where q̃_self^{(1)} are the one-loop self-energy corrections (wave function corrections) and q̃_vertex^{(1)} are the one-loop vertex corrections. Expressions for the explicit form of q̃_self^{(1)} and q̃_vertex^{(1)} are given in [98]. Their crucial aspect is that they are nonzero not only in the canonical range 0 ≤ x ≤ 1, but also outside of it, for any positive and negative x. This corresponds to the loss of the standard partonic interpretation mentioned above. An important aspect is particle number conservation, ∫ dx q̃(x) = 1. Different kinds of singularities appear:
(i) linear (UV) divergences due to the Wilson line, taking in this scheme the form of terms proportional to the transverse momentum cutoff divided by P₃,
(ii) collinear (IR) divergences, only in 0 < x < 1, expected to be the same as in the light-cone distribution,
(iii) soft (IR) divergences (singularities at x = 1), canceling between the vertex and the self-energy corrections ("plus prescription"),
(iv) logarithmic (UV) divergences in self-energy corrections, regulated with another cutoff.

Figure 3: One-loop diagrams entering the calculation of quasidistributions: self-energy corrections (left) and vertex corrections (right). Source: [98], reprinted with permission by the Authors and the American Physical Society.

We turn now to the light-cone distribution. It can be calculated in the same transverse momentum cutoff scheme by taking the infinite momentum limit à la Weinberg [99]. We do not write here the final formulae, which can be found again in [98]. The result is the same as the one obtained from the light-cone definition. Crucially, the collinear divergence is the same as in the quasi-PDF, as anticipated based on the physical argument that the construction of quasidistributions should not modify the IR properties. Obviously, the diagrams in this case are nonzero only for 0 ≤ x ≤ 1; i.e., the distribution has a partonic interpretation.

Having computed the one-loop diagrams, one is ready to calculate the matching coefficient Z in Equation (26). Its perturbative expansion can be written as

Z(ξ, μ/P₃) = δ(1−ξ) + (α_s/2π) Z^{(1)}(ξ, μ/P₃) + 𝒪(α_s²),

with the one-loop function Z^{(1)} determined by the difference of the one-loop corrections to the quasi- and light-cone distributions; its explicit form is given in [98]. Note that the matching process effectively trades the dependence on the large momentum P₃ for renormalization scale dependence (the term with the logarithm of μ/P₃), another characteristic feature of effective field theories. The antiquark distribution and the Z-factor satisfy crossing relations, q(−x) = −q̄(x) and analogously for Z; hence, including antiquarks is straightforward. Similar matching formulae were also derived for the case of helicity and transversity distributions [98].

The early papers of [45, 74, 98] provided the systematic framework for defining quasidistributions and matching them with their light-cone counterparts. Since then, there have been several improvements of many aspects of this programme, including renormalization, matching, target mass corrections, and other theoretical aspects, as well as developments for distributions other than the nonsinglet quark PDF of the nucleon discussed here. Before we turn to them, we report the early numerical efforts in Lattice QCD that illustrate the state-of-the-art calculations of that time.

3.2. Early Numerical Investigations

Ji’s proposal for a novel approach of extracting partonic quantities on the lattice, in particular PDFs, sparked an enormous wave of interest, including numerical implementation and model investigations (see Section 4).

The first lattice results were presented in 2014 in [46] by H.-W. Lin et al. and later in [47, 100] by the ETM Collaboration (ETMC). Lin et al. used a mixed action setup of clover valence quarks on a HISQ sea, lattice volume 24³×64, a ≈ 0.12 fm, and pion mass (m_π) around 310 MeV, while ETMC used a unitary setup with maximally twisted mass quarks, lattice volume 32³×64, a ≈ 0.082 fm, and m_π ≈ 370 MeV. Both papers implemented the bare matrix elements of the isovector unpolarized PDF (u−d flavor structure, Dirac structure γ₃). The statistics for Lin et al. is 1383 measurements, while ETMC used a larger statistics of 5430 measurements. The employed nucleon boosts were in both cases the three lowest multiples of 2π/L, i.e., 0.43, 0.86, and 1.29 GeV (Lin et al.) and 0.47, 0.94, and 1.42 GeV (ETMC), with a noticeable increase of noise for the larger boosts, resulting in larger statistical errors. In view of the missing renormalization programme, both collaborations used HYP smearing [101] to bring the renormalization functions closer to their tree-level values (ETMC also applied the renormalization factor Z_V to correctly renormalize the local matrix element, i.e., the one without the Wilson line). ETMC presented a study of the dependence of the bare matrix elements on the number of HYP smearing iterations, finding large sensitivity to this number, especially for the imaginary part (the matrix elements are real only in the local case). Furthermore, ETMC tested the contamination by excited states by using two source-sink separations, finding compatible results, but within large uncertainties. The source-sink separation in the study of Lin et al. was not reported. We note that separations below 1 fm are more susceptible to excited states contamination. However, the goal of these preliminary studies was to explore the approach of quasi-PDFs, postponing the investigation of excited states to later calculations. Having the bare matrix elements, the Fourier transform was taken to obtain the corresponding quasi-PDFs.
The quasi-PDFs were matched to light-cone PDFs using the formulae of [98], and nucleon mass corrections were also applied. The obtained final PDFs are shown in Figure 4 for each study. One observes a similar picture from both setups and a certain degree of qualitative agreement with phenomenological PDFs [102–104], shown for illustration purposes. Lin et al. also computed the helicity PDF (Dirac structure γ₃γ₅ in the matrix elements) and quoted the value of the sea quark asymmetry, but without showing the quasi- or final distributions.

Figure 4: Final isovector unpolarized PDFs (shaded bands) at the largest employed nucleon boost, left: Lin et al., 1.29 GeV, right: ETMC, 1.42 GeV. The right plot also shows the quasi-PDF and the matched PDF before nucleon mass corrections. For illustration purposes, selected phenomenological parametrizations are plotted (dashed/dotted lines, no uncertainties shown) [102–104]. The errors are only statistical. Source: [46, 100], reprinted with permission by the Authors and the American Physical Society.

The two earliest numerical investigations of Ji's approach showed the feasibility of a lattice extraction of PDFs. However, they also identified the challenges and difficulties. On the one hand, these were theoretical, like the necessity of developing the missing renormalization programme and the matching from the adopted renormalization scheme to the desired MS-bar scheme. On the other hand, it also became clear that the computation is technically challenging, in particular because of the decreasing signal-to-noise ratio when increasing the nucleon boost. The computational cost also increases with the source-sink separation, for which a large value (typically above 1 fm) is needed to suppress excited states. In addition, full control over typical lattice systematics, e.g., cutoff effects, finite volume effects, or the pion mass dependence, was also missing. At this stage, some difficulties were still unidentified, for example, the mixing between certain Dirac structures due to the chiral symmetry breaking of the used lattice discretizations, first identified by Constantinou and Panagopoulos [105, 106].

Further progress was reported in the next two papers by the same groups (with new members), early in 2016 by Chen et al. [107] and later in the same year by Alexandrou et al. (ETMC) [108]. Both groups used the same setups as in [46, 100] but implemented a number of improvements and considered all three types of collinear PDFs: unpolarized, helicity, and transversity. Chen et al. [107] considered two source-sink separations and performed measurements on an ensemble of 449 gauge field configurations with 3 source positions on each configuration, using the same set of nucleon momenta as in [46], 0.43, 0.86, and 1.29 GeV. They also derived and implemented nucleon mass corrections (NMCs, also called target mass corrections, TMCs) for all three cases of PDFs. The NMCs will be discussed below in Section 6. In the work of ETMC [108], a large-statistics study was performed with 30000 measurements for each of the three momenta, 0.47, 0.94, and 1.42 GeV, at an increased source-sink separation. In the course of this work, the method of momentum smearing was introduced [109] (see Section 6 for details) to overcome the difficulty of reaching larger nucleon boosts. The technique was implemented by ETMC and results were presented for additional momenta, 1.89 and 2.36 GeV, with small statistics of 150 and 300 measurements, respectively. Moreover, a test of compatibility between standard Gaussian smearing, applied in the earlier work of both groups, and the momentum smearing was performed at 1.42 GeV for the unpolarized case. This revealed the spectacular property that a statistical error similar to that of Gaussian smearing with 30000 measurements can be obtained with only 150 measurements employing momentum smearing.

As an illustration, we show the final helicity PDFs in Figure 5. A direct visual comparison between the two results is not possible, since the plot by Chen et al. shows the PDF multiplied by x. Nevertheless, the qualitative picture is similar, revealing that no striking differences occur due to the different lattice setups. The much smaller uncertainty in the plot by ETMC results predominantly from the over 20 times larger statistics. Analogous plots for the unpolarized and transversity cases can be seen in [107, 108].

Figure 5: Final isovector helicity PDFs (shaded bands; multiplied by x in the left plot, the PDF itself in the right plot) at the largest employed nucleon boost, left: Chen et al., 1.29 GeV, right: ETMC, 1.42 GeV. The right plot also shows the quasi-PDF and the matched PDF before nucleon mass corrections. For illustration purposes, also selected phenomenological parametrizations/models are plotted (left: dashed/dotted lines, no uncertainties shown, right: unfilled black bands representing uncertainties) [206–209]. The errors are only statistical. Source: left: [107], reprinted with permission by the Authors (article available under CC BY), and right: [108], reprinted with permission by the Authors and the American Physical Society.

This concludes our discussion of the early explorations of the quasi-PDF approach. References [46, 100, 107, 108] proved its feasibility on the lattice and initiated identification of the challenges, already mentioned above. Further progress was conditioned on theoretical and practical improvements that will be described in later sections.

4. Quasidistributions: Model Investigations

Apart from theoretical analyses and numerical investigations in Lattice QCD, insights about the quasidistribution approach were obtained also from considerations in the framework of phenomenological or large-N_c models. In this section, we take a closer look at quasi-PDFs, quasi-GPDs, and quasi-DAs in such models and review the conclusions obtained in setups where direct access to analytical forms of quasi- and light-cone distributions is possible.

4.1. Diquark Spectator Model

The aim of the early (2014) work of L. Gamberg et al. [110] and I. Vitev et al. [111] was to provide guidance about nucleon momenta needed for a reliable approach to the light-cone PDFs, for all collinear twist-2 distributions, i.e., unpolarized, helicity, and transversity PDFs. The Authors considered the diquark spectator model (DSM) [28], a phenomenological model that captures many of the essential features of the parton picture. The central idea of the DSM is to insert a completeness relation with intermediate states in the operator definition of PDFs (or some quark-quark correlation functions) and then truncate them to a single state with a definite mass. Such a state is called the diquark spectator. This procedure boils down to making a particular ansatz for the spectral decomposition of the considered observable. The diquark spectator, in the simplest picture, can have spin-0 (scalar diquark) or spin-1 (axial-vector diquark). Finally, the nucleon is viewed as a system consisting of a constituent quark of some mass and a scalar or axial-vector diquark. The basic object in this approximation is the nucleon-quark-diquark interaction vertex, which contains a suitably chosen form factor, taken in the so-called dipolar form in [110].

With such a setup, one can derive the model expressions for all kinds of collinear quasi-PDFs, combining the expressions for scalar and axial-vector diquarks. The obtained relations can be used to study the approach to the light-cone PDFs, also calculated in the DSM. The model of Gamberg et al. [110] has 3 couplings, fixed by normalization, and 9 parameters that were fixed by fitting to experimental data, with good quality of the fits. Then, they considered the quasi-PDFs for different boosts from 1 to 4 GeV. It was found that the shape of quasi-PDFs approaches the PDF for boosts of around 2 GeV and larger. The agreement is especially good in the small to intermediate-x regime, while large x needs a significantly larger boost for satisfactory agreement. The Authors also studied the Soffer inequality [112], stating that the transversity distribution should not be larger than the average of the unpolarized and helicity ones. It holds for the standard PDFs. For quasi-PDFs, it was found that the inequality is always satisfied for the u quark, while it is violated for the d quark in the entire range of x for small momenta of around 0.5 GeV.

A further model study of quasi-PDFs was presented in [113] in 2016. Bacchetta et al. confirmed the conclusions of [110] and, motivated by the observation that the large-x region of quasi-PDFs converges much more slowly to the appropriate light-cone PDF, they proposed a procedure to aid the large-x extraction of PDFs from the quasidistribution approach. This is a relevant aspect for computations of quasi-PDFs in Lattice QCD. The main idea is to combine the result for a quasi-PDF with the corresponding moments of PDFs. One divides the whole x-region into two intervals, with a "matching" point x₀. For x < x₀, one assumes that the computed quasi-PDF is already a good approximation to the standard PDF. In turn, for x > x₀, a parametrization is used, with parameters fixed by conditions of smoothness at x₀ (for the value of the quasi-PDF and for its derivative) and by the available moments of PDFs. The procedure to reconstruct unpolarized and helicity PDFs was tested numerically in the DSM for two matching points and two nucleon momenta. In all cases, there is a significant improvement of the agreement with respect to the standard PDF, especially at large x. Excellent agreement was observed for the u quark even for the lower nucleon momentum, while the d quark seems to require a larger value of x₀, which is due to the worse agreement of the quasi-PDF and the standard PDF at the matching point. Overall, the procedure was proven to be successful in the DSM and the Authors are hopeful that it can be effectively applied also to actual Lattice QCD data.
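A toy version of this reconstruction can be sketched as follows. The "trusted" small-x input below is an exactly known toy PDF, x^0.7 (1−x)^3 (so the recovered tail parameters can be checked), and the tail parametrization c·x^a·(1−x)^b is fixed by continuity of the value and of the logarithmic derivative at x₀ plus one known moment; all choices are illustrative, not those of [113]:

```python
import numpy as np

x0 = 0.4                                   # hypothetical matching point

def trusted(x):
    """Mock trusted result for x < x0: an exact toy PDF x^0.7 (1-x)^3."""
    return x ** 0.7 * (1.0 - x) ** 3

f0 = trusted(x0)
eps = 1e-6                                 # numerical log-derivative at x0
logd = (np.log(trusted(x0 + eps)) - np.log(trusted(x0 - eps))) / (2 * eps)

xs = np.linspace(x0, 1.0, 4001)
dx = xs[1] - xs[0]
target = np.sum(xs * trusted(xs)) * dx     # "known" moment of the tail

def tail_moment(b):
    a_ = x0 * (logd + b / (1.0 - x0))      # continuity of the log-derivative
    c = f0 / (x0 ** a_ * (1.0 - x0) ** b)  # continuity of the value
    return np.sum(xs * c * xs ** a_ * (1.0 - xs) ** b) * dx, a_

# the tail moment decreases monotonically in b, so bisect on b
lo_b, hi_b = 0.5, 8.0
for _ in range(60):
    mid_b = 0.5 * (lo_b + hi_b)
    m, _a = tail_moment(mid_b)
    lo_b, hi_b = (lo_b, mid_b) if m < target else (mid_b, hi_b)
b_rec = 0.5 * (lo_b + hi_b)
a_rec = x0 * (logd + b_rec / (1.0 - x0))   # should recover b = 3, a = 0.7
```

With real lattice input the "trusted" region carries statistical errors, so in practice the tail parameters would be obtained from a fit rather than an exact solve, but the counting of constraints is the same.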

The DSM (with scalar diquarks) was also employed as a framework for studying quasi-GPDs [114] in 2018. S. Bhattacharya, C. Cocuzza, and A. Metz calculated twist-2 unpolarized quasi-GPDs (the so-called H and E functions; for some definitions of GPD variables and functions, see Section 8.2), using two Dirac structures, γ0 and γ3, motivated by the discovery of mixing for one of them [105, 106]. They verified that, in the forward limit, their expressions reduce to the ones for the respective quasi-PDFs and, in the infinite momentum limit, their quasi-GPDs approach the appropriate GPDs. They found that all results for quasidistributions are continuous and argued that this feature should hold also at higher twist in the DSM. The Authors also checked the reliability of the cut-diagram approach, widely used in spectator models, and concluded that it does not reproduce certain terms that appear when the calculation is handled exactly. Thus, this approach is a simplification that should be avoided when dealing with quasidistributions. Having the final analytical expressions for quasi-GPDs and quasi-PDFs, they studied numerically the approach to the infinite nucleon boost. They found that boosts of order 2 GeV and larger yield quasifunctions close to their light-cone counterparts in a wide range of x. The problematic region, as for quasi-PDFs, is the large-x regime, and the discrepancies increase for larger skewness ξ. Interestingly, the derivation of matching for GPDs [115], described shortly in Section 8.2, indicates that no matching is required for one of the two functions (at leading order). However, in this model study, no significant differences in the convergence of the H and E functions were seen. In the ERBL region, |x| < ξ, the agreement with standard GPDs is good, provided the momentum is not too small. The results for both Dirac structures were found to be very similar at large enough momentum.
To verify and strengthen the conclusions, the Authors also checked the sensitivity to parameter variations (constituent mass and spectator mass) and found no significant differences. As numerical exploration of quasi-GPDs on the lattice is still missing, the DSM results can provide very useful guidance for such attempts.

4.2. Virtuality Distribution Functions

A model investigation of quasi-PDFs was performed also by A. Radyushkin in 2016-17 [75, 76]. He used his formalism of virtuality distribution functions (VDFs) [78, 79]. In the VDF formalism, a generic diagram for parton-hadron scattering corresponds to a double Fourier transform of the VDF, Φ(x, σ), where σ is related to the parton virtuality (giving the name to the VDF) and x is the momentum fraction. The variables conjugate to x and σ in the double Fourier transform are (Pz) (P is the hadron momentum and z is the separation of the fields) and z², respectively. The VDF representation holds for any P and z, but the case relevant for PDFs is the light-cone projection, z² = 0. Then, one can define primordial (straight-link) TMDs and derive relations between VDFs, TMDs, and quasi-PDFs. Working in a renormalizable theory, one can represent the VDF as a sum of a soft part, i.e., generating a nonperturbative evolution of PDFs, and a hard tail, vanishing as an inverse power of σ.

Numerical interest in these papers was in the investigation of the nonperturbative evolution generated by the soft part of the VDF or, equivalently, the soft part of the primordial TMD. Radyushkin considers two models thereof, one with a Gaussian-type dependence on the transverse momentum (“Gaussian model”) and one with a simple non-Gaussian form. These models are two extreme cases of a family of models, one with a too fast and one with a too slow fall-off in the impact parameter. In the numerical part of [75], the formalism was applied to a simple model PDF. Both TMD models give similar evolution patterns, implying that one observes universal features related to the properties of quasi-PDFs. It was also observed that the approach to the limiting PDF is not uniform in x and it can even be nonmonotonic for small nucleon momenta. These conclusions can provide very useful guidance for Lattice QCD calculations, implying, e.g., that simple extrapolations in the inverse squared momentum might not be justifiable.
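The Gaussian model can be made concrete in a few lines. For a Gaussian primordial TMD of width Λ, the quasi-PDF is the light-cone PDF smeared with a Gaussian of width Λ/P, so the limiting PDF is approached as the boost P grows, but not uniformly in x. The toy PDF 6x(1-x) and the value of Λ below are our own illustrative assumptions, not the inputs of [75]:

```python
import numpy as np

LAM = 0.3  # transverse-momentum scale in GeV (an assumed value)

def pdf(x):
    # toy light-cone PDF, chosen only for illustration
    return 6.0 * x * (1.0 - x)

def quasi_pdf(y, P):
    # Gaussian-model relation: smear the PDF with a Gaussian of width LAM / P
    x = np.linspace(0.0, 1.0, 4001)
    kern = P / (np.sqrt(np.pi) * LAM) * np.exp(-((y - x) * P / LAM) ** 2)
    return np.sum(kern * pdf(x)) * (x[1] - x[0])

ys = np.linspace(0.05, 0.95, 19)
for P in (0.5, 1.0, 2.0, 4.0):
    dev = max(abs(quasi_pdf(y, P) - pdf(y)) for y in ys)
    print(f"P = {P:3.1f} GeV: max |quasi - PDF| = {dev:.3f}")
```

As P grows, the maximal deviation shrinks, while at small P the quasi-PDF also leaks outside the canonical support 0 ≤ x ≤ 1, echoing the nonuniform approach noted above.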

In the work of [76], Radyushkin considered target mass corrections (TMCs) in quasi-PDFs using the same framework. In both TMD models, it was found that TMCs become negligible already significantly before the quasi-PDF approaches the standard PDF. The Author suggested that, given the realistic precision of lattice simulations, TMCs can be neglected for nucleon boosts larger than around twice the nucleon mass.

4.3. Chiral Quark Models and Modeling the Relation to TMDs

Further model studies of the quasidistribution approach were performed in 2017 by W. Broniowski and E. Ruiz-Arriola [116, 117].

In the first paper [116], the pion quasi-DA and quasi-PDFs were computed in the framework of chiral quark models, namely, the Nambu-Jona-Lasinio (NJL) model [118, 119] and the spectral quark model (SQM) [120–122]. The NJL model is a well-known toy model of QCD, a low-energy approximation that encompasses a mechanism of spontaneous chiral symmetry breaking arising from the presence of strong four-quark interactions. The SQM, in turn, is a spectral regularization of the chiral quark model based on the introduction of the Lehmann representation of the quark propagator. The Authors derived analytical expressions for the quasi-DA and the quasi-PDF, together with their underlying unintegrated versions dependent on the transverse momentum, as well as the ITDs. They also verified the relations between different kinds of distributions found by Radyushkin [75–77]. This allowed them to also study the approach of the quasi-DA and quasi-PDF towards their light-cone counterparts and they found clear convergence for pion momenta in the range of a few GeV. Moreover, a comparison to lattice data [123] was made. For the NJL model, very good agreement was found with the lattice results at both considered pion momenta. In the case of the SQM, similar agreement was observed at the larger momentum, while at the smaller one there were some differences between the model and the lattice data, but the agreement was still satisfactory. This implies that both models are able to capture the essential physical features.

In the second paper [117], Broniowski and Ruiz-Arriola explored further the relations between nucleon quasi-PDFs, PDFs, and TMDs, following the work of Radyushkin [75, 76]. They derived certain sum rules, e.g., relating the moments of quasi-PDFs, PDFs, and the width of TMDs. Furthermore, Broniowski and Ruiz-Arriola modeled the factorization separating the longitudinal and transverse parton dynamics. They applied this model to study the expected change of shape of ITDs and reduced ITDs, for both quarks and gluons. They also considered the breakdown of the longitudinal-transverse factorization induced by the evolution equations, in the context of consequences for present-day lattice simulations, finding that the effects should be rather mild in quasi-PDFs, but could be visible in ITDs. Finally, they also performed comparisons to actual lattice data of the ETM Collaboration [108] for isovector unpolarized PDFs of the nucleon. The model quasi-PDFs, resulting from the assumed factorization, do not agree well with the ETMC data at four values of the nucleon boost between 0.94 and 2.36 GeV (see Section 3.2 for more details about these results). The discrepancy was attributed to the large pion mass, shifting the distributions to the right of the phenomenological ones, and to other lattice systematics (see also the discussion about the role of the pion mass in Section 9.2.1 and [124]). More successful was the test of the aforementioned sum rule, predicting a deviation of the second central moment of the quasi-PDF from that of the PDF that grows linearly with the inverse square of the boost, with the slope giving the TMD width. Using the ETMC data, they indeed observed the linear behavior and, moreover, extrapolating to infinite momentum, they found the second central moment to be compatible with a phenomenological analysis. In the last part of the paper, the Authors offered considerations for the pion case, presenting predictions for the valence-quark quasi-PDFs and ITDs.
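This kind of moment sum rule can be verified in a Gaussian smearing toy model (our own construction with invented parameters, not the actual ETMC analysis): smearing a toy PDF with a Gaussian of width Λ/P adds Λ²/(2P²) to its second central moment, so plotting the quasi-PDF moment against 1/P² gives a straight line whose slope encodes the TMD width:

```python
import numpy as np

LAM = 0.3  # assumed TMD width parameter in GeV

def pdf(x):
    return 6.0 * x * (1.0 - x)  # toy PDF: <x> = 0.5, variance = 0.05

x = np.linspace(0.0, 1.0, 1001)
y = np.linspace(-2.0, 3.0, 2501)   # wide y-range to capture support leakage
dx, dy = x[1] - x[0], y[1] - y[0]

def central_moment2(P):
    # quasi-PDF from Gaussian smearing, then its second central moment
    kern = (P / (np.sqrt(np.pi) * LAM)
            * np.exp(-((y[:, None] - x[None, :]) * P / LAM) ** 2))
    q = kern @ pdf(x) * dx
    m1 = np.sum(y * q) * dy
    m2 = np.sum(y**2 * q) * dy
    return m2 - m1**2

Ps = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
sig2 = np.array([central_moment2(P) for P in Ps])
slope, intercept = np.polyfit(1.0 / Ps**2, sig2, 1)
print(slope, intercept)  # expect roughly LAM**2 / 2 and the PDF variance 0.05
```

Extrapolating the line to 1/P² = 0 recovers the light-cone moment, which is the logic of the infinite-momentum extrapolation performed with the ETMC data.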

4.4. Quasidistributions for Mesons in NRQCD and Two-Dimensional QCD

Meson DAs were first considered in the quasidistribution formalism in 2015 by Y. Jia and X. Xiong [125]. They calculated the one-loop corrections to quasi-DAs and light-cone DAs employing the framework of nonrelativistic QCD (NRQCD). This resulted in analytical formulae for quasi- and light-cone DAs for three S-wave charmonia: the pseudoscalar η_c and both the longitudinally and the transversely polarized vector J/ψ. They checked analytically the convergence of quasi-DAs to standard DAs and also performed a numerical investigation of the rate of convergence. A function called the degree of resemblance was introduced, which quantifies the difference between the quasi- and the standard DAs. In general, a momentum of around 3 times the meson mass is needed to bring the quasidistribution to within 5% of the light-cone one. The Authors also considered the first inverse moments of quasi- and light-cone DAs, concluding that their rate of convergence is somewhat slower and the difference at a momentum equal to three times the hadron mass may still be of order 20%, with 5% reached at a momentum six times larger than the meson mass.

Following the NRQCD investigation, Y. Jia and X. Xiong continued their work related to model quasidistributions of mesons. In 2018, together with S. Liang and R. Yu [126], they presented results on meson quasi-PDFs and quasi-DAs in two-dimensional QCD in its large-N limit, often referred to as the ’t Hooft model [127]. The Authors used the Hamiltonian operator approach and the Bars-Green equations in equal-time quantization [128], instead of the more standard diagrammatic approach in light-cone quantization. They performed a comprehensive study comparing the quasidistributions and their light-cone counterparts, studying the approach of the former to the latter at increasing meson momentum. Among the most interesting conclusions is the observation that the approach to the standard distributions is slower for lighter mesons than for the heavier quarkonia of [125]. This observation was illustrated with numerical studies of the derived analytical equations for the different distributions. It was found that, for the pion, even a momentum 8 times larger than the pion mass leads to significant discrepancies between the shapes of the quasidistributions and the light-cone ones. For heavier mesons, in turn, a momentum of between two and five times the meson mass already leads to the two types of distributions almost coinciding. An analogous phenomenon is also beginning to emerge in lattice studies and provides a warning that, e.g., pion PDFs might be more difficult to study than nucleon PDFs, i.e., require relatively larger momentum boosts.

Additionally, Jia et al. studied both types of distributions in perturbation theory, thus being able to consider the matching between quasi- and light-cone PDFs/DAs. The very important aspect of this part is that they were able to verify one of the crucial features underlying LaMET, namely that quasi- and light-cone distributions share the same infrared properties at leading order in the coupling. This is interesting, because the two-dimensional model has a more severe IR divergence than standard QCD.

As such, this work in two-dimensional QCD provides a benchmark for lattice studies of quasidistributions in four-dimensional QCD. It is expected that many of the obtained conclusions regarding the ’t Hooft model hold also in standard QCD. Moreover, the setup can also be used to study other proposals for obtaining the x-dependence of light-cone distributions, in particular pseudo-PDFs and LCSs.

5. Theoretical Challenges of Quasi-PDFs

In this section, we summarize the main theoretical challenges related to quasi-PDFs that were identified early on. Addressing and understanding these challenges was critical in order to establish sound foundations for the quasidistribution method. We concentrate on two of them: the role of the Euclidean signature (whether an equal-time correlator in Euclidean spacetime can be related to light-cone parton physics in Minkowski spacetime) and renormalizability. The latter is not trivial due to the power-law divergence inherited from the Wilson line included in the nonlocal operator. It is clear that problems related to either challenge could lead to abandoning the whole programme of quasi-PDFs. Therefore, it was absolutely crucial to prove that neither of these aspects hides insurmountable difficulties.

5.1. Euclidean vs. Minkowski Spacetime Signature

One of the crucial assumptions of the quasidistribution approach is that these distributions computed on the lattice with Euclidean spacetime signature are the same as their Minkowski counterparts. In particular, they should share the collinear divergences, such that the UV differences can be matched using LaMET. In [129], C. Monahan and K. Orginos considered the Mellin moments of bare PDFs and bare quasi-PDFs in the context of smeared quasidistributions that differ from the standard ones only in the UV region, by construction (see Section 7 for more details about the smeared quasi-PDFs). They found that the Wick rotation from the bare Euclidean quasi-PDF to the light-cone PDF is simple.

However, in [130], a perturbative investigation was performed by C. Carlson and M. Freid, who found qualitative differences between loop corrections in Euclidean and Minkowski spacetimes. In particular, it seemed that the IR divergence of the light-cone PDF is absent in the Euclidean quasi-PDF, which would undermine the very basis of LaMET. The complication emerged in certain diagrams, because the integration contours along the real and imaginary axes of the complex plane of the temporal loop momentum could not be linked by a smooth deformation, with physical observables related to the integration along the real axis and the lattice objects extracted from the integration along the imaginary axis. The Authors also gave a physical intuition justifying this finding. The IR divergence in Minkowski spacetime comes from collinear configurations of nearly on-shell quarks and gluons, with the quark mass preventing an exactly parallel configuration. Such a parallel situation is not possible in Euclidean spacetime and, hence, the Authors argued that no divergence can appear, thus invoking a mismatch of IR regions that could not be corrected for perturbatively.

The serious doubts about the importance of the spacetime signature were addressed in [131] by R. Briceño, M. Hansen, and C. Monahan. They formulated a general argument that, for a certain class of matrix elements computed on the lattice, observables from the Euclidean-time dependence and from the LSZ reduction formula in Minkowski spacetime coincide. The class of (single-particle) matrix elements requires currents local in time, but not necessarily in space, and it includes the matrix elements needed to obtain quasi-PDFs. More precisely, the correlation functions depend on the spacetime signature, but the matrix elements do not. The central issue was illustrated with a computation in a toy model, without the complications of QCD, which are irrelevant from the point of view of the argument. The focus was on a Feynman diagram directly analogous to the problematic one in [130]. The Authors, using the LSZ reduction formula, calculated its contribution to the quasi-PDF. They indeed found that the result depends on the contour of integration in the complex plane of the temporal loop momentum but pointed out that the contour along the imaginary axis does not coincide with what is done on the lattice. Instead, the connection can be made by computing the diagram's contribution to a Euclidean correlator in a mixed time-momentum representation. At large Euclidean times, the result is dominated by a term which is exactly the same as in the Minkowski calculation. After establishing the perturbative connection in the toy model for a specific kind of diagram, the proof was extended to all orders in perturbation theory. The Authors concluded their paper with a general statement about the proper prescription that yields the same result from Euclidean and Minkowski diagrams: the chosen contour must be an analytic deformation of the standard, Minkowski-signature definition of the diagram.
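The essence of the statement can be illustrated with a one-dimensional toy integral (a free scalar propagator, far simpler than the diagrams discussed in [130, 131]): the Euclidean-time correlator, obtained by integrating the temporal momentum along the Euclidean axis, reproduces exactly the single-particle pole content e^(-Et)/(2E) dictated by the Minkowski propagator, because the Euclidean contour is an analytic deformation of the Minkowski one:

```python
import numpy as np
from scipy.integrate import quad

E, t = 1.3, 2.0  # toy single-particle energy and Euclidean time (assumed values)

# Euclidean correlator: C(t) = integral dk/(2*pi) e^{i k t} / (k^2 + E^2),
# i.e., the temporal-momentum integral taken along the Euclidean axis.
# By symmetry this equals (1/pi) * integral_0^inf dk cos(k t) / (k^2 + E^2).
val, _ = quad(lambda k: 1.0 / (k**2 + E**2), 0.0, np.inf, weight='cos', wvar=t)
c_euclidean = val / np.pi

# Minkowski pole content: closing the contour around the pole at k = iE
# gives the single-particle contribution e^{-E t} / (2 E).
c_minkowski = np.exp(-E * t) / (2.0 * E)

print(c_euclidean, c_minkowski)  # the two agree to numerical precision
```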

Thus, the apparent contradiction pointed out in [130] was fully resolved. As its Authors identified, the problem lay in the definition of the integration contour in the complex plane of the temporal loop momentum. However, the contour along the imaginary axis does not correspond to the perturbative contribution to Euclidean matrix elements, as shown in [131]. Even though the arguments of [130] turned out to be misplaced, they certainly touched upon an interesting problem and they prompted very valuable insights and a general proof in [131]. To our knowledge, the arguments were accepted by the Authors of [130] and no further arguments were given that would question the connection between the Euclidean and Minkowski signatures in the context of quasidistributions.

5.2. Renormalizability of Quasi-PDFs

One of the indispensable components of the quasi-PDFs approach is the ability to match equal-time correlation functions (calculable on the lattice) to the light-cone PDFs using LaMET. For this approach to be successful, it is crucial that the quasi-PDFs can be factorized into light-cone PDFs to all orders in QCD perturbation theory, and this requires that quasi-PDFs can be multiplicatively renormalized [85]. However, the renormalization programme of quasi-PDFs is not straightforward due to the UV power divergences and, for quite some time, it was not well understood (see Section 7 for recent progress).

One of the main concerns is whether the nonlocal operators are renormalizable. The nonlocality of the operators does not guarantee that all divergences can be removed, due to the additional singularity structures compared to local operators and to divergences with nonpolynomial coefficients. Due to the different UV behavior of quasi-PDFs and light-cone PDFs, the usual renormalization procedure is not ensured. Based on the work of [98], this originates from the different definitions of the momentum fraction: x = k⁺/p⁺ (where k⁺ (p⁺) is the plus momentum of the quark in the loop (of the initial quark)) for light-cone PDFs and x = k·n/p·n (n: spacelike vector) for quasi-PDFs. In addition, the momentum fraction for the light-cone PDFs is restricted to |x| ≤ 1, while for the quasi-PDFs it can extend to infinity. As a consequence, the vertex correction (see diagrams in the second row of Figure 6) has a different behavior. In fact, the UV divergences appear in the self-energy, while the vertex correction is UV finite. As pointed out in [132], the renormalizability of nonlocal operators had been proven up to two loops in perturbation theory by analogy to static heavy-light currents.

Figure 6: One-loop diagrams entering the quasiquark PDFs in Feynman gauge. Self-energy diagrams are shown in the first row and vertex correction diagrams in the second row. Source: [133], reprinted with permission by the Authors and the American Physical Society.

Thus, it is of utmost importance for the renormalizability to be confirmed to all orders in perturbation theory. This issue has been addressed independently by two groups [133–135], concluding that the Euclidean spacelike correlation functions leading to the quasi-PDFs are indeed renormalizable. The proofs are based on two different approaches: the auxiliary heavy quark method [133, 135–137] and the diagrammatic expansion method [134, 138], employed for both quark and gluon quasi-PDFs. Below we highlight their main findings.

5.2.1. Renormalizability of Quark Quasi-PDFs

X. Ji and J.-H. Zhang in one of their early works [133] studied the renormalization of the unpolarized nonsinglet distribution. They performed an analytic calculation in the Feynman gauge to one-loop level using dimensional regularization (DR) to extract the full contributions of the diagrams entering the calculation, shown in Figure 6. These include the self-energy diagrams (top row) and the vertex correction diagrams (bottom row). We point out that the self-energy diagrams require integration over all components of the loop momentum, while in the vertex correction diagrams the component of the loop momentum parallel to the Wilson line remains unintegrated. This has implications for the discussion of renormalizability. The work shows that, in this gauge, the UV divergences in the vertex correction do not alter the renormalization of the quasi-PDFs, as they are removed by counterterms for subdiagrams generated by the interaction. Based on this, the renormalization of the quark quasidistribution reduces to the renormalization of two quark fields in the axial gauge. The study was also extended to two-loop corrections, assuming a one-to-one correspondence between the two-loop diagrams, as well as equivalence between the UV divergences of the two-loop self-energy in the quasiquark PDFs and of the two-loop corrections to the heavy-light quark current. Thus, multiplicative renormalizability was proven to hold up to two loops in perturbation theory. The arguments presented in this work can be generalized to include the helicity and transversity PDFs.

The renormalizability of quasi-PDFs to all orders in perturbation theory has been proven for the first time by T. Ishikawa et al. in [134]. They performed the complete one-loop calculation of the quasi-PDFs in coordinate space and in the Feynman gauge, which is convenient because the renormalization of the QCD Lagrangian is known in this gauge. The one-loop calculation shows explicitly the renormalizability (to that order) of the quasi-PDFs.

More interestingly, the Authors studied all sources of UV divergences for the nonlocal operators that enter the quasi-PDFs calculation using a primitive basis of diagrams (see Figures 3-6 in [134]). These diagrams were used to construct all possible higher-order Feynman diagrams, presented schematically in Figure 7, and the Authors explained in great detail the proof that both the power-law and the logarithmic divergences are renormalized multiplicatively to all orders. The argument can be summarized via the diagrams shown in Figure 7.

Figure 7: Topologies that may lead to UV divergent contributions to the quark quasi-PDFs. Source: [134], reprinted with permission by the Authors and the American Physical Society.

Diagrams of the topology shown in Figure 7(a) can be reordered in terms of one-particle-irreducible (1PI) diagrams and, therefore, all corresponding linear UV power divergences can be resummed explicitly into an exponential. The latter may be removed by a mass renormalization of a test particle moving along the gauge link [139]. In addition, these diagrams have logarithmic UV divergences that can be removed by a “wave function” renormalization of the test particle [140]. The second type of diagrams (Figure 7(b)) has only logarithmic UV divergences, which can be absorbed by the coupling constant renormalization of QCD [140]. The last type of UV divergent diagrams, shown in Figure 7(c), differs from types (a) and (b), because the loop momentum goes through an external quark, leading to divergences from higher-order loop corrections to the quark-gauge-link vertex. It was concluded that the UV divergent term of diagrams (c) is proportional to the tree level of the operator and, therefore, a constant counterterm is sufficient to remove it. All the above constitute a concrete proof that all remaining perturbative UV divergences of the quark quasi-PDFs can be removed by introducing multiplicative renormalization factors. Exact calculations to one-loop level show that quasi-PDFs of different types do not mix under renormalization, which completes the proof of the renormalizability in coordinate space [134].

The study of the renormalizability of quark quasi-PDFs has been complemented by the work of Ji et al. in [135], in which the auxiliary heavy quark field formalism was employed. The approach shows renormalizability to all orders in perturbation theory and was confirmed in both the dimensional and the lattice regularizations, the latter for the first time. As in other studies, the focus was on the unpolarized PDFs and it was shown explicitly that the procedure mimics the renormalization of two heavy-light quark currents; the latter is valid to all orders in perturbation theory. We note that the conclusions hold for all types of PDFs and can be confirmed following the same procedure as for the unpolarized one.

The introduction of a heavy quark auxiliary field modifies the QCD Lagrangian by an additional term. This allows one to replace the nonlocal straight Wilson line operator by a composite operator that is the product of two auxiliary heavy quark fields. Thus, the question of the renormalizability of the nonlocal operator can be addressed via the renormalization of this composite operator in the extended QCD theory. This has been demonstrated in DR and we urge the interested Reader to see the proof in [135]. Here, we discuss the case of the lattice regulator, which is particularly interesting for numerical simulations in Lattice QCD. Unlike the case of DR, in lattice regularization (LR) the self-energy of the auxiliary quark introduces a divergence beyond leading order in perturbation theory. This may be absorbed into an effective mass counterterm, δm [141]. Using the above, and for spacelike correlators, the linear divergence of the operator in Equation (31) can be factorized out of the renormalized operator as an overall exponential factor involving δm and the length of the Wilson line; the remaining divergence is at most logarithmic and can be canceled to all orders in perturbation theory.
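This renormalization pattern can be illustrated schematically (a toy numerical sketch with invented numbers and a toy sign convention, not the actual lattice procedure): a bare Wilson-line matrix element carries a multiplicative factor exp(-δm·z) with δm diverging like 1/a, and multiplying by the inverse exponential leaves a quantity that agrees between lattice spacings:

```python
import numpy as np

z = np.linspace(0.0, 3.0, 31)  # Wilson-line length in physical units

def h_phys(z):
    # hypothetical continuum (renormalized) matrix element
    return np.exp(-0.5 * z**2)

def h_bare(z, a, c=0.1):
    # toy bare matrix element: physical part times the linearly divergent
    # exponential, with delta_m = c / a diverging as a -> 0
    delta_m = c / a
    return np.exp(-delta_m * z) * h_phys(z)

# removing the exponential with the same delta_m renormalizes the operator:
h_ren_coarse = np.exp(+(0.1 / 0.10) * z) * h_bare(z, a=0.10)
h_ren_fine   = np.exp(+(0.1 / 0.05) * z) * h_bare(z, a=0.05)

# bare results differ strongly between spacings; renormalized ones agree
print(np.max(np.abs(h_bare(z, 0.10) - h_bare(z, 0.05))))
print(np.max(np.abs(h_ren_coarse - h_ren_fine)))  # essentially zero
```

In an actual lattice computation, δm must of course be determined from the data (or from an auxiliary observable) rather than being known in advance, which is the nontrivial part of the programme.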

5.2.2. Renormalizability of Gluon Quasi-PDFs

For completeness, we also address the renormalizability of the gluon quasi-PDFs, which are more complicated to study than the nonsinglet quark PDFs due to the presence of mixing. Their renormalizability was initially implied by arguments based on the quark quasi-PDFs [134, 135], but more recently there have been direct studies of the renormalization of gluon quasi-PDFs [136–138].

The first investigation appeared in 2017 by W. Wang and S. Zhao [136], who used the auxiliary field approach to study the renormalization of gluon nonlocal operators and, in particular, the power divergences. The mixing under renormalization was also addressed. This follows their work on the matching between the quasi- and the light-cone gluon PDFs [142], as described in Section 8.2. The light-cone gluon PDFs are nonlocal matrix elements built from the gluon field strength tensor. Based on this, the gluon quasidistribution can be defined by a nonlocal spacelike matrix element in which the sum over Lorentz indices runs over all directions except the direction of the Wilson line. This definition is slightly modified with respect to the one used in [45, 86, 142], where the sum is over the transverse directions only. Despite this modification, Equation (34) is still a proper definition of a gluon quasi-PDF, as demonstrated in [136] based on the energy-momentum tensor decomposition. This is also confirmed numerically, as the one-loop matching to the light-cone PDFs coincides for the two definitions.

Reference [136] presented the complete one-loop calculation for the gluon operator of Equation (34), introducing a UV cutoff on the transverse momentum. The calculation was performed in the Feynman gauge and in the adjoint representation. The relevant one-loop diagrams can be separated into two categories: (1) diagrams in which the vertex from the operator does not include gluons from the Wilson line (shown in Figure 8) and (2) diagrams that have at least one gluon from the Wilson line in the vertex of the operator, as shown in Figure 9. The calculation identified all divergences, including the linear divergence; unlike the case of the quark quasi-PDFs, the Wilson line self-energy (right diagram of Figure 9) is not the only source of linear divergences in the gluon distributions. As a consequence, it is not possible to absorb all linear divergences in the renormalization of the Wilson line, and a more complicated renormalization is needed. However, as argued in [137], this is due to the choice of a nongauge invariant regulator.

Figure 8: One-loop corrections to a gluon quasidistribution, without the Wilson line. The marked vertex denotes the nonlocal vertex from the operator. Source: [136], reprinted with permission by the Authors (article published under an open access license).
Figure 9: One-loop corrections to a gluon quasidistribution, which involve the Wilson line (double line). The marked vertex denotes the nonlocal vertex from the operator. Source: [136], reprinted with permission by the Authors (article published under an open access license).

One approach to studying the renormalization of the quasigluon PDFs is to introduce an auxiliary heavy quark field, as adopted in the renormalization of the quark distributions. This auxiliary field is in the adjoint representation of the color group and has no spin degrees of freedom. Therefore, this approach allows one to study local operators instead of the nonlocal operator of Equation (34). Mixing between these new operators and the gauge-invariant gluon field strength tensor is permitted. In addition, it was shown at the level of one-loop corrections that the power divergence can be absorbed into the matrix elements of the local operators, which is expected to hold to all orders in perturbation theory. This work by W. Wang and S. Zhao contributed to understanding the renormalization of the gluon quasi-PDFs, but a number of issues remained to be addressed. These included, but were not limited to, (a) the study of gauge fields in the fundamental representation and the corresponding mixing and (b) the study of the renormalization in the lattice regularization, preferably nonperturbatively. The latter is highly nontrivial and technically more complicated than in the case of quarks.

The renormalizability of both the unpolarized and the helicity gluon PDFs has been studied by J.-H. Zhang et al. in [137], including possible mixings that are permitted by the symmetries of the theory. The auxiliary field formalism was employed in a similar fashion as in the studies presented above [133, 135–137]. Explicit results were given for the unpolarized quasigluon PDFs in the dimensional and gauge-invariant cutoff regularizations.

In the auxiliary field formalism, the operator presented in Equation (34) (with the sum taken over the transverse directions) can be replaced by a new composite operator involving the auxiliary adjoint “heavy quark” field. For a proof, see [137]. Based on symmetry properties, such a composite operator can mix with lower-dimensional operators that are gauge invariant, are BRST variations, or vanish by the equations of motion. The identified mixing pattern helps to construct the proper operators for the gluon quasi-PDFs that are multiplicatively renormalizable. In particular, three (four) operators were identified for the unpolarized (helicity) gluon PDFs that do not suffer from mixing; for the unpolarized case, these are built from field strength components with one index fixed to the temporal direction or to the direction of the Wilson line, while the other index runs over all Lorentz components or only over the transverse ones. In a similar way, it was found that three operators related to the gluon helicity distributions can be renormalized multiplicatively. For details, see Section III C of [137]. This work provides crucial guidance for numerical simulations in Lattice QCD and the development of a nonperturbative renormalization prescription. Based on the mixing pattern, the Authors provided a renormalization prescription suitable for lattice simulations, as well as a factorization for gluon and quark quasi-PDFs.

Z.-Y. Li, Y.-Q. Ma, and J.-W. Qiu have studied the renormalizability of gluon quasi-PDFs in [138], a work that appeared simultaneously with [137]. Their work is based on the diagrammatic expansion approach, as previously applied to the quark quasi-PDFs [134, 138]. By studying the UV divergences of gluon operators, it was demonstrated that appropriate combinations can be constructed such that their renormalization is multiplicative to all orders in perturbation theory. Such operators are related to gluon quasi-PDFs. The demonstration is based on a quasigluon operator of a general form in which the Wilson line is built from gauge links in the adjoint representation.

The procedure followed in this work is based on a one-loop calculation of the Green’s functions, performed in dimensional regularization (DR). It was demonstrated that the linear UV divergences of the gluon-gauge-link vertex cancel explicitly. This was extended to all loops in perturbation theory by investigating all possible UV-divergent topologies of higher-order diagrams, showing that the corresponding linear UV divergences cancel to all orders in perturbation theory. It was also discussed in detail that the UV divergences of all 36 pure quasigluon operators (taking into account the antisymmetry of the gluon field strength tensor) can be multiplicatively renormalized. This work, thus, constitutes a powerful proof of the renormalizability of gluon quasi-PDFs.

6. Lattice Techniques and Challenges for Quasi-PDFs

Apart from the theoretical challenges of the quasidistribution approach, discussed in the previous section, the lattice implementation and the efficiency of the computations are also major issues for the feasibility of the whole programme. In this section, we discuss these aspects in some detail, showing that tremendous progress has been achieved also on this side. In addition, we discuss challenges for the lattice that need to be overcome for a fully reliable extraction of PDFs.

6.1. Lattice Computation of Matrix Elements

To access quasi-PDFs of the quarks in the nucleon, one needs to compute matrix elements of the form
$h_\Gamma(P_3,z) = \langle N(P_3)|\,\bar{\psi}(z)\,\Gamma\,W(z,0)\,\psi(0)\,|N(P_3)\rangle,$
where the Dirac structure $\Gamma$ determines the type of quasi-PDF (see below), $|N(P_3)\rangle$ is the boosted nucleon state with momentum $P_3$, and $W(z,0)$ is a Wilson line of length $z$ along the spatial direction of the boost. To obtain the above matrix elements, one constructs a ratio of three-point and two-point functions,
$R(t_s,\tau) = \frac{C^{\rm 3pt}(t_s,\tau)}{C^{\rm 2pt}(t_s)} \;\xrightarrow{\;0\ll\tau\ll t_s\;}\; K\, h_\Gamma(P_3,z),$
where $K$ is a kinematic factor that depends on the Dirac structure. The correlation functions are constructed from the proton interpolating operator $\mathcal{N}$, with $\tau$ the current insertion time, the parity-plus projector used for the two-point function, and a parity projector, dependent on the Dirac structure of the current, used for the three-point functions.
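The behavior of this ratio can be illustrated with a minimal numerical sketch. All energies, overlaps, and matrix elements below are hypothetical inputs of a two-state toy model, not lattice data; the sketch only shows that the midpoint-insertion ratio approaches the ground-state matrix element as the source-sink separation grows.

```python
import numpy as np

# Hypothetical two-state model (illustrative numbers, not lattice data):
E = np.array([1.0, 1.8])   # ground/excited state energies in lattice units
A = np.array([1.0, 0.7])   # interpolating-operator overlaps
M = np.array([[0.9, 0.3],  # matrix elements <i|O|j>; M[0,0] is the target
              [0.3, 0.5]])

def c2pt(t):
    # spectral decomposition of the two-point function
    return np.sum(A**2 * np.exp(-E * t))

def c3pt(ts, tau):
    # spectral decomposition of the three-point function
    return sum(A[i] * A[j] * M[i, j] * np.exp(-E[i] * (ts - tau) - E[j] * tau)
               for i in range(2) for j in range(2))

# Midpoint-insertion ratio approaches M[0,0] = 0.9 as t_s grows
for ts in (6, 10, 14):
    print(ts, c3pt(ts, ts // 2) / c2pt(ts))
```

With the energy gap above, excited-state contamination decays as $e^{-\Delta E\, t_s/2}$ at the midpoint, so the ratio settles onto the ground-state value within a per mille already at moderate separations.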

The Wick contractions for the three-point function lead, in general, to a quark-connected and a quark-disconnected diagram. Since the evaluation of the latter is far more demanding than that of the former, numerical efforts have so far been restricted to connected diagrams only. One uses the fact that disconnected diagrams cancel in the flavor nonsinglet combination $u-d$ in formulations of Lattice QCD with degenerate light quarks. The connected diagram that contributes to the three-point function is shown in Figure 10.

Figure 10: Diagram representing the three-point correlation function that needs to be evaluated to calculate quasi-PDFs. Source: arXiv version of [124], reprinted with permission by the Authors (article published under the terms of the Creative Commons Attribution 4.0 International license).

Special attention has to be paid to the Dirac structure of the insertion operator, because mixing appears among certain structures, as discovered in [106]. In particular, the originally suggested structure $\gamma_3$ for the unpolarized PDF mixes with the scalar operator; see Section 7 for details. Such mixing can be taken into account by explicitly computing a mixing matrix of renormalization functions and matrix elements for both Dirac structures. In practice, however, this leads to a much worse signal and, finally, to a much less precise estimate of the PDFs. For this reason, the strongly preferred choice for the unpolarized quasi-PDF is $\gamma_0$. Similar mixing occurs in the polarized cases for certain Dirac structures, with the choices $\gamma_5\gamma_3$ for helicity and $\sigma_{3j}$ ($j=1,2$) for transversity guaranteeing that no mixing is present [106].

We now turn to describing the lattice computation in more detail. For the two-point function, Wick contractions lead to standard point-to-all propagators that can be obtained from inversions of the Dirac operator matrix on a point source. The computation of the three-point function is more complicated. Apart from the point-to-all propagator, it requires knowledge of the all-to-all propagator. Two main techniques exist to evaluate this object: the sequential method [143] and the stochastic method [144]. In the former, one constructs a so-called sequential source from a suitable point-to-all propagator. Inverting the Dirac matrix on this source, one obtains the sequential propagator that enters the three-point function. The other method employs stochastic noise sources on a single time slice, leading to a stochastic estimate of the all-to-all propagator upon inversion of the Dirac matrix. In principle, the stochastic method is more flexible, as it allows one to obtain results for all Dirac structures and all momenta with the same inversions, which are the most costly part of the computation. The price to pay is the introduction of stochastic noise, but the overhead from the need to suppress this noise can be more than compensated by the gain in flexibility. Using the sequential method or, more precisely, its fixed-sink variant implies that the momentum at the sink is fixed and separate inversions are needed for each nucleon boost, as well as for each Dirac structure due to the different projectors.

In the early studies, both approaches were tested by ETMC [47], with the conclusion that they yield compatible results and the additional noise from the stochastic method can be suppressed by using 3-5 stochastic noise vectors. Given the flexibility of the stochastic method, ETMC decided to pursue studies with this approach in [100, 108]. In [108], the technique was changed to one involving the sequential propagator for reasons explained in the next subsection. The method for computing the all-to-all propagator was not revealed by the Authors of the other exploratory numerical study of quasi-PDFs in [46, 107].

Having computed the three-point and two-point functions, the relevant matrix elements can be obtained. The crucial issue that has to be paid special attention to is the contamination of the desired ground state matrix elements by excited states. Three major techniques are available: single-state (plateau), multistate, and summation fits. We briefly describe all of them below.

(i) Plateau method. The most straightforward way of obtaining the matrix element from the three-point and two-point functions is to identify a region where their ratio is independent of the insertion time and to fit it to a constant, which yields the matrix element of the ground state. As can be seen from the spectral decomposition of the three-point function, excited states manifest themselves as curvature in the ratio of Equation (88) and also as a shift of its central value. Under realistic statistical uncertainties, it is, therefore, not always clear whether an actual plateau has been reached and, thus, it is not advisable to use this method as the sole means of extracting the ground state properties.

(ii) Summation method. This approach [145, 146] consists in summing the ratios of three-point and two-point functions over the insertion time $\tau$. By decomposing the correlators into sums of exponential terms, one obtains a geometric series, leading finally to
$\sum_{\tau} R(t_s,\tau) = c + \mathcal{M}\, t_s + \mathcal{O}\!\left(e^{-\Delta E\, t_s}\right),$
where the source and sink timeslices are excluded from the sum to avoid contact terms, $c$ is a constant, and $\Delta E$ is the gap to the first excited state. The ground state matrix element, $\mathcal{M}$, is then extracted from a linear two-parameter fit to data at sufficiently large source-sink separations $t_s$. The method has the advantage that excited states are suppressed by a faster-decaying exponential than in the plateau fits, but the statistical uncertainties are, typically, much larger.

(iii) Multistate fits.
A natural generalization of the plateau method is to include higher-order exponential terms in the decomposition of the two-point and three-point functions, typically the first excited state (two-state fits) or the lowest two excited states (three-state fits). In general, the two-point correlator can be written as
$C^{\rm 2pt}(t) = \sum_{i} A_i^2\, e^{-E_i t},$
with amplitudes $A_i$ and energies $E_i$ of the subsequent states. The three-point function reads
$C^{\rm 3pt}(t_s,\tau) = \sum_{i,j} A_i A_j\, \mathcal{M}_{ij}\, e^{-E_i (t_s-\tau)}\, e^{-E_j \tau},$
with the matrix elements $\mathcal{M}_{ij}$ of the suitable operator in addition to the parameters of the two-point correlator. Note that, in practice, it is difficult to consistently go beyond one or two excited states, as the number of fitting parameters increases faster than linearly with the number of excited states taken into account, due to the presence of the mixed matrix elements, $\mathcal{M}_{ij}$, for a growing number of pairs $(i,j)$.

In principle, the multistate method (realistically, the two-state method) allows for better control of excited states contamination. However, in realistic lattice situations, the interpolating operators used to create the nucleon from the vacuum excite numerous states with the same quantum numbers. This contamination increases as the pion mass decreases towards the physical point; see, e.g., [39] for an illustration. Moreover, the number of contributing excited states increases with larger nucleon boosts. All the above imply that it is unlikely to achieve a regime of source-sink separations where precisely two states play a role. Thus, relying solely on two-state fits is not advisable either. Instead, ground state dominance should be established by aiming at compatibility between all three methods of extracting the ground state matrix elements. Such compatibility ensures that the probed regime of $t_s$ values has enough suppression of excited states and excludes the possibility that many excited states mimic a single excited state. Numerically, it is hard to disentangle several excited states, and the manifestation of many of them contributing would be a clear incompatibility of two-state fits with plateau fits. Note also that, with the exponentially decaying signal-to-noise ratio at larger source-sink separations, the danger of the two-state approach is that the fits may easily be dominated by data at the lower $t_s$ values, heavily contaminated by excited states. Thus, we are led to conclude that the most reliable estimates of ground state matrix elements ensue from compatible results obtained using all three of the above methods (with the summation method being, in most cases, inconclusive due to its large statistical uncertainty). It is still important to bear in mind that excited states are never fully eliminated, but only exponentially suppressed. This means that ground state dominance is always established only to some level of precision. Aiming at increased statistical precision, previously reliable source-sink separations may prove to be insufficient. Obviously, when targeting larger momenta and/or smaller pion masses, conclusions about the role of excited states drawn at a smaller boost or a larger pion mass do not apply; hence, a careful analysis is always needed, at least in the setup most prone to excited states contamination.
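The summation method can also be illustrated with a hypothetical two-state model (all numbers below are illustrative, not lattice data): summing the ratio over insertion times and fitting a straight line in the source-sink separation recovers the ground-state matrix element as the slope.

```python
import numpy as np

# Hypothetical two-state model (illustrative numbers only)
E = np.array([1.0, 1.8])   # state energies in lattice units
A = np.array([1.0, 0.7])   # interpolating-operator overlaps
M = np.array([[0.9, 0.3],  # matrix elements <i|O|j>; M[0,0] is the target
              [0.3, 0.5]])

def ratio(ts, tau):
    # ratio of spectral decompositions of three- and two-point functions
    c3 = sum(A[i] * A[j] * M[i, j] * np.exp(-E[i] * (ts - tau) - E[j] * tau)
             for i in range(2) for j in range(2))
    c2 = np.sum(A**2 * np.exp(-E * ts))
    return c3 / c2

# Summed ratio (source/sink timeslices excluded): S(ts) ~ c + M[0,0]*ts
ts_vals = np.arange(8, 16)
S = [sum(ratio(ts, tau) for tau in range(1, ts)) for ts in ts_vals]
slope, intercept = np.polyfit(ts_vals, S, 1)
print(slope)   # slope approaches the ground-state matrix element 0.9
```

The residual contamination of the slope decays as $e^{-\Delta E\, t_s}$ rather than $e^{-\Delta E\, t_s/2}$, which is the advantage of the method noted in the text.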

Having extracted the relevant matrix elements, one is finally ready to calculate the quasi-PDF. We rewrite here the definition of quasi-PDFs with a discretized form of the Fourier transform,
$\tilde{q}(x,P_3) = \frac{a\,P_3}{2\pi} \sum_{z/a=-z_{max}/a}^{z_{max}/a} e^{-i x P_3 z}\, h_\Gamma(P_3,z),$
where the factor of $P_3$ ensures correct normalization for different momenta and $z_{max}$ is to be chosen such that the matrix elements have decayed to zero both in the real and in the imaginary part (see also Section 6.3).
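A minimal numerical sketch of this step, with hypothetical model matrix elements in place of renormalized lattice data (the lattice spacing, boost, and functional form of $h(z)$ below are arbitrary choices for illustration), shows how the prefactor keeps the normalization momentum independent:

```python
import numpy as np

a = 0.09             # hypothetical lattice spacing in fm
P3 = 1.5 / 0.1973    # hypothetical boost of ~1.5 GeV converted to fm^-1
nmax = 20            # Wilson-line steps; h(z) has decayed at z = a*nmax

# Hypothetical renormalized matrix elements h(z) (model data, h(0) = 1)
z = a * np.arange(-nmax, nmax + 1)
h = np.exp(-z**2 / 0.5) * (1.0 + 0.4j * np.sin(P3 * z / 3))

def quasi_pdf(x):
    # Discretized Fourier transform; the prefactor a*P3/(2*pi) normalizes
    # the x-integral of the quasi-PDF to h(0) for any momentum
    return (a * P3 / (2 * np.pi)) * np.sum(np.exp(-1j * x * P3 * z) * h).real

xs = np.linspace(-1.5, 1.5, 61)
q = np.array([quasi_pdf(x) for x in xs])
print(q.sum() * (xs[1] - xs[0]))   # ~ h(0) = 1 when the support is covered
```

If the matrix elements had not decayed by $z_{max}$, the truncated sum would develop the nonphysical oscillations discussed in Section 6.3.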

6.2. Optimization of the Lattice Computation

In the previous subsection, we established the framework for the computation of quasi-PDF matrix elements on the lattice. Now, we describe additional techniques that are commonly used to perform the calculation as effectively as possible.

The first technique, commonly employed in lattice hadron structure computations, serves the purpose of optimizing the overlap of the interpolating operator that creates and annihilates the nucleon with the ground state. This can be achieved by employing Gaussian smearing [147, 148] of the fermionic fields, which reflects the fact that hadrons are not point-like, but extended objects. Moreover, the smearing is further optimized by combining it with a technique for reducing short-range (UV) fluctuations of the gauge fields: the gauge links used in the quark field smearing are subjected to APE smearing [149]. The procedure involves optimizing four parameters: the parameters regulating the “strength” of the Gaussian and APE smearing, respectively, and the numbers of Gaussian and APE smearing iterations. The typical criterion of optimization is that the root mean square (rms) radius of the proton should be around 0.5 fm.

Smearing techniques are also used to decrease UV fluctuations in the gauge links entering the Wilson line in the operator insertion. In principle, any kind of smearing can be used for this purpose, with the practical choices employed so far being HYP smearing [101] and stout smearing [150]. Smearing the Wilson line has the additional effect of shifting the values of the renormalization factors towards their tree-level values, thus suppressing the power-like divergence related to the Wilson line. For this reason, a relatively large number of smearing iterations was used in the early works, which was necessary in the absence of renormalization. In principle, the renormalized matrix elements should not depend on the amount of smearing applied to the operator, and it is an important consistency check to confirm this. We note that, before the advent of the full nonperturbative renormalization programme for quasi-PDFs [151], the role played by the smearing of the Wilson line links was somewhat different. Without explicit renormalization, the results were contaminated by the power divergence, and the smearing had the task of subtracting a possibly large part of this divergence, in the hope of obtaining preliminary results at not too small lattice spacings. After this premise lost its significance, this kind of smearing is applied only to reduce gauge noise to a certain extent. Alternatively, smearing of gauge links can also be applied to the whole gauge field, i.e., enter both the Wilson line and the Dirac operator matrix. Note, however, that in this way it is not possible to check explicitly that the renormalized results are independent of the smearing level, at least not without costly additional Dirac matrix inversions for different numbers of smearing iterations.

All the above techniques are rather standard and have been employed in quasi-PDF computations since the very first exploratory studies. However, the recent progress that we review in Section 9 would not have been possible without the technique of so-called momentum smearing [109, 152]. It is a relatively simple extension of the quark field smearing described above. The crucial observation is that a Gaussian-smeared nucleon state has maximal overlap with a nucleon at rest; i.e., it is centered around zero momentum in momentum space. It is, hence, enough to move this center to the desired momentum to obtain an improved signal for a boosted nucleon. The modification is the addition of a phase factor $e^{i\xi k\cdot x}$ in the position space definition of the smearing, where $k$ is the desired nucleon momentum and $\xi$ a tunable parameter. Explicitly, the modified Gaussian momentum smearing function reads
$\mathcal{S}_{\rm mom}\,\psi(x) = \frac{1}{1+6\alpha}\Big(\psi(x) + \alpha \sum_{j} U_j(x)\, e^{i\xi k\cdot \hat{\jmath}}\, \psi(x+\hat{\jmath})\Big),$
where $U_j(x)$ are the gauge links in the $j$-direction. For optimal results, the parameter $\xi$ should be tuned separately for every momentum and every ensemble. In the context of quasi-PDFs, momentum smearing was first applied in the ETMC study reported in Section 3.2 [108, 153]. By now, it has become a standard technique for enhancing the signal. We note, however, that momentum smearing does not fully solve the exponentially hard problem of the decaying signal at large boosts, but rather moves it towards larger momenta. Therefore, accessing a highly boosted nucleon on the lattice, necessary for a reliable matching to light-cone PDFs via LaMET, remains a challenge.
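The effect of the phase factor can be visualized in a one-dimensional free-field toy model (no gauge links; the lattice size, smearing strength, iteration count, and target mode below are arbitrary choices for illustration): iterating the smearing kernel with the extra phase produces a profile whose momentum-space weight is centered at the chosen mode instead of at zero.

```python
import numpy as np

N = 64                       # 1-D lattice sites (toy model, no gauge links)
k_target = 8                 # desired momentum mode
theta = 2 * np.pi * k_target / N   # phase per hop, playing the role of xi*k
alpha = 0.5                  # smearing strength

psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                 # point source

for _ in range(50):          # iterate the momentum-smearing kernel
    fwd = np.exp(-1j * theta) * np.roll(psi, -1)  # psi(x+1) with phase
    bwd = np.exp(+1j * theta) * np.roll(psi, +1)  # psi(x-1), conjugate phase
    psi = (psi + alpha * (fwd + bwd)) / (1 + 2 * alpha)

spectrum = np.abs(np.fft.fft(psi))
print(int(np.argmax(spectrum)))   # momentum-space weight sits at k_target
```

Setting the phase to zero recovers ordinary Gaussian smearing, with the weight centered at zero momentum.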

To finalize this subsection, we mention one more useful technique that is nowadays applied to decrease statistical uncertainties at fixed computing time. The most expensive part of the calculation of the correlation functions is the computation of the quark propagators, i.e., the inversion of the Dirac operator matrix on specified sources. This is typically done using specialized iterative algorithms, often tailored to the fermion discretization used. The iterative algorithm is run until the residual, quantifying the distance of the current solution from the true solution, falls below some tolerance level. The standard way is to set this tolerance to a very small number. However, obviously, that may require iterating the solver for a long time. To save a considerable fraction of computing time, truncated solver methods have been invented, in which the precision requirement is relaxed substantially. Naturally, the relaxed precision of the solver leads, in general, to a bias in the solution. Hence, the second ingredient of these methods is bias correction. Below, we shortly describe one such method, Covariant Approximation Averaging (CAA) [154]. One performs a certain number of low-precision (LP) inversions, $N_{LP}$, accompanied by a smaller number of standard, high-precision (HP) inversions, $N_{HP}$. The final correlation functions are defined as
$C = \frac{1}{N_{LP}} \sum_{j=1}^{N_{LP}} C_{LP}(x_j) + \frac{1}{N_{HP}} \sum_{i=1}^{N_{HP}} \left[ C_{HP}(x_i) - C_{LP}(x_i) \right],$
where $C_{LP}$ and $C_{HP}$ denote correlation functions obtained from LP and HP inversions, respectively. To correct the bias properly, the HP and LP inversions in the correction term have to be done for the same source positions. The numbers of LP and HP inversions have to be tuned in such a way as to maintain a large correlation coefficient (typically 0.99-0.999) between LP and HP correlators, which guarantees that the bias is properly subtracted.
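The structure of the bias correction can be checked in a few lines with synthetic numbers (the sample sizes, bias, and noise level below are arbitrary): if the LP and HP results share the same source-by-source fluctuations, the correction term removes the LP bias exactly.

```python
import numpy as np

rng = np.random.default_rng(7)
n_lp, n_hp = 96, 8      # numbers of low-/high-precision solves (illustrative)
truth = 1.0             # true value of the correlator (synthetic)
bias = 0.02             # systematic offset of the relaxed-precision solves

# Source-by-source fluctuations shared by LP and HP solves on the same source
common = rng.normal(0.0, 0.05, n_lp)
c_lp = truth + bias + common     # biased LP correlators, all sources
c_hp = truth + common[:n_hp]     # unbiased HP correlators, first n_hp sources

# CAA estimator: LP average plus bias correction from matched HP/LP pairs
estimator = c_lp.mean() + (c_hp - c_lp[:n_hp]).mean()
print(estimator)   # close to `truth`: the LP bias cancels
```

In this idealized setting the HP-LP difference is noiseless; in practice the two are only strongly correlated, which is why the correlation coefficient between them has to be monitored.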

6.3. Lattice Challenges

In this section, we discuss the challenges for lattice computations of quasi-PDFs. On the one side, these include “standard” lattice challenges, like control over different kinds of systematic effects, some of them enhanced by the specifics of the involved observables. On the other side, the calculation of quasi-PDFs poses new challenges that had to, or still have to, be overcome for a final reliable extraction of light-cone distributions. Below, we discuss these issues in considerable detail, starting with the “standard” ones and moving towards the more specific ones.

(1) Discretization effects. Lattice simulations are, necessarily, performed at finite lattice spacing. Nevertheless, the goal is to extract properties or observables of continuum QCD. At finite lattice spacing, these are contaminated by discretization (cutoff) effects, which need to be removed by a suitable continuum limit extrapolation. Obviously, prior to taking the continuum limit, the observables need to be renormalized; we discuss this issue in Section 7. Assuming divergences have been removed in a chosen renormalization scheme, the continuum limit can be taken by simulating at three or more lattice spacings and fitting the data to an appropriate ansatz, typically linear in the leading discretization effects, of order $\mathcal{O}(a)$ or $\mathcal{O}(a^2)$. In most Lattice QCD applications, $\mathcal{O}(a)$-improved fermionic discretizations or observables are used. In many cases, however, this requires the calculation of observable-specific improvement coefficients (e.g., for Wilson-clover fermions). It remains to be shown how to obtain $\mathcal{O}(a)$ improvement of quasi-PDFs, at least for some of the fermionic discretizations. To date, quasi-PDF studies have been performed at a single lattice spacing in any given setup and, hence, discretization effects have not been reliably estimated. Going to smaller lattice spacings remains a challenge for the future.
It is not a problem in principle, but it obviously requires huge computational resources, especially at the physical pion mass. However, there are indirect premises that discretization effects are not large. Firstly, they have been relatively small in lattice hadron structure calculations in general. Secondly, indirect evidence for the smallness of cutoff effects is provided by checks of the dispersion relation, i.e., the relation between the energy of a boosted nucleon and its momentum (see Section 9). In the absence of large discretization effects, the continuum relativistic dispersion relation, $E(P_3)=\sqrt{m_N^2+P_3^2}$, holds. Note, however, that cutoff effects can be enhanced if the nucleon boost becomes comparable to the lattice UV cutoff, i.e., if $aP_3 \sim 1$. In principle, no energy scale on the lattice should exceed the cutoff $1/a$. Precisely for this reason, lattice calculations involving the heavy $b$ quark need special treatment; with typical lattice spacings, the bottom quark mass exceeds $1/a$ and a reliable computation must involve an effective theory treatment, such as the one provided by HQET or by NRQCD.

(2) Finite volume effects. Apart from the finite lattice spacing, the volume of a numerical simulation is also necessarily finite. Thus, another lattice systematic uncertainty may stem from finite volume effects (FVE). FVE become important if the hadron size becomes significant in comparison with the box size. The hadron size is, to a large extent, dictated by the inverse mass of the lightest particle in the theory. Hence, leading-order FVE are related to the pion mass of the simulation, and smaller pion masses require larger lattice sizes in physical units to suppress FVE. Usually, FVE are exponentially suppressed as $e^{-m_\pi L}$, where $L$ is the spatial extent of the lattice. The typical rule adopted in lattice simulations is that this suppression is sufficient if $m_\pi L \gtrsim 4$. At nonphysical pion masses of order 300-400 MeV, this corresponds to a box size of 2-2.5 fm, which is easy to reach with typically used lattice spacings, 0.05-0.1 fm.
When simulating at the physical pion mass, the minimal box size that yields $m_\pi L \approx 4$ is about 6 fm and, thus, finer lattice spacings require huge lattices. Nevertheless, lattice hadron structure calculations have usually evinced rather small FVE already with $m_\pi L \approx 3$. Still, an explicit check of FVE is highly advisable when aiming at a fully reliable computation. Above, the main source of FVE that we considered was related to the size of hadrons. However, it was pointed out in [155] that, for quasi-PDFs, a relevant source of FVE may be the size of the Wilson line in the operator inserted in the matrix elements defining quasidistributions. The Authors studied perturbatively a toy scalar model with a light degree of freedom (mimicking the pion in QCD) and a heavy one (corresponding to the nucleon). The studied matrix element involved a product of two currents displaced by a vector of length $\xi$, and they found two kinds of FVE: one decaying exponentially with the distance $L-\xi$ and the mass of the light state, and the other governed by the mass of the heavy state. Moreover, both exponentials have power-law prefactors that can further enhance FVE for larger displacements $\xi$. In the case of pion matrix elements, the FVE may be particularly enhanced. Even though the studied case concerned a product of two currents, not quark fields connected by a Wilson line, some enhancement of FVE may also occur in the latter case. In view of this, an investigation of FVE in matrix elements for quasi-PDFs, especially ones with larger lengths of the Wilson line, is well motivated. It is also important to mention that the finite lattice extent in the direction of the boost, $L_3$, imposes a limit on the minimal Bjorken-$x$ that can be reached. The parton momentum is $xP_3$, which determines its correlation length to be of order $1/(xP_3)$. This value should be smaller than the physical size of the boost direction, $L_3$. At the same time, the boost should be smaller than the lattice UV cutoff; i.e., $P_3 \lesssim 1/a$.
Replacing the “$\lesssim$” symbols in the above inequalities with “$=$” signs, one arrives at the minimal $x$ accessible on the lattice: $x_{min} = 1/(P_3 L_3) = a/L_3 = 1/N_3$, where $N_3$ is the number of lattice sites in the boost direction. Note that it is, thus, the number of sites in the boost direction that determines $x_{min}$, not its physical size.

(3) Pion mass dependence. The computational cost of Lattice QCD calculations depends on the pion mass. Hence, exploratory studies are usually performed with heavier-than-physical pions, as was also the case for quasi-PDFs (see Section 3.2). Obviously, this introduces a systematic effect. If no physical pion mass calculations are available, one can extrapolate to the physical point, provided the fitting ansatz for this extrapolation is known (e.g., from chiral perturbation theory). However, the cleanest procedure is to simulate directly with pions of physical mass. Recently, quasi-PDF computations with physical pions have become available; see Section 9 for their review, including a direct comparison between ensembles with different pion masses [124].

(4) Number of flavors, isospin breaking. QCD encompasses six flavors of quarks. However, due to the orders of magnitude differences between their masses, only the lightest two, three, or four flavors are included in lattice simulations. Moreover, the up and down quarks are often taken to be degenerate; i.e., one assumes exact isospin symmetry. One then speaks of an $N_f=2$, $N_f=2+1$, or $N_f=2+1+1$ setup, respectively. Differences among these setups are observable dependent, but usually smaller than other systematic uncertainties and the statistical errors. For examples of the small dependence on the number of dynamical quarks in various observables, see, e.g., the FLAG review [156]. Hence, for most applications, all these setups can be considered equivalently suitable. Only when aiming at a total uncertainty well beyond the current precision of the field of lattice PDFs may it become necessary to include dynamical strange and charm quarks.
Similar or smaller effects are expected from isospin breaking by the different up and down quark masses (a QCD effect) and their different electric charges (a QED effect). The order of magnitude of these effects can be deduced from the difference of the proton and neutron masses, less than two per mille. Note that the setup with degenerate light quarks is very useful in lattice hadron structure calculations also for practical reasons; in such a setup, the disconnected contributions cancel in the flavor combination $u-d$ and, moreover, isovector PDFs do not mix under matching and renormalization. Thus, it is clear that all the effects discussed in this point are currently subleading, but they may become important in the future, when aiming at very precise extractions of PDFs.

(5) Source-sink separation and excited states contamination. As already discussed in Section 6.1, a significant systematic effect may emerge in lattice matrix elements due to excited states contamination. From the decomposition of the correlation functions, one can see that excited states are suppressed with the source-sink separation, $t_s$. Hence, a careful analysis of a few separations is needed to establish ground state dominance; see Section 6.1 for more details. Reaching large $t_s$ values is nontrivial from the computational point of view, as the signal-to-noise ratio decays exponentially with increasing $t_s$. For this reason, a compromise is needed to keep the computational cost under control. Yet, the compromise must not affect the reliability of the results.

(6) Momentum boost and higher-twist effects. Contact with the IMF via LaMET is established at large nucleon momenta. Hence, it is desirable to use large nucleon boosts on the lattice. However, this is highly nontrivial for several reasons. First, the signal-to-noise ratio decays exponentially with increasing hadron momentum, necessitating an increase of statistics to keep a similar statistical precision at larger boosts.
Second, excited states contamination increases considerably at larger momenta, calling for an increase of the source-sink separation to maintain the suppression of excited states at the same level. As argued in the previous point, the increase of $t_s$ further degrades the signal, enlarging the required statistics. Third, large hadron momenta may induce enhanced discretization effects, in particular when the boost becomes similar to or larger than the lattice UV cutoff, i.e., the inverse lattice spacing. Thus, momenta larger than the UV cutoffs of the currently employed lattice spacings, of order 2-2.5 GeV, should only be aimed at with ensembles at finer lattice spacings. We now consider effects that may appear if the nucleon momentum is too small. Looking at the formulation of LaMET, it is clear that higher-twist effects (HTE), suppressed as $\mathcal{O}(1/P_3^2)$, may become sizable and hinder the extraction of leading-twist PDFs. In principle, one can compute the HTE explicitly and subtract them. This would be an interesting direction of further studies, especially since HTE are of interest in their own right. Alternatively, one may compute the leading functional dependence of HTE and extrapolate them away. An example of such a computation was presented in [157], based on the study of renormalons in the coefficient functions within the bubble-chain approximation. The result for quasi-PDFs is a correction of order $\mathcal{O}\big(\Lambda_{\rm QCD}^2/(x^2 P_3^2)\big)$. Note, however, that the matrix elements underlying the quasi-PDFs in this analysis are normalized to unity at zero momentum, as done in the pseudo-PDF approach (see Section 2.6). This suppresses HTE at small distances at the price of an enhancement at large distances. Clearly, the renormalization programme employed for quasi-PDFs, e.g., based on a variant of RI/MOM (see Section 7), can lead to a different functional form of the HTE.
Moreover, the Authors of [157] issue another warning that a perturbative analysis might not capture all sources of HTE, and their results should rather be considered a minimal model that may miss nonperturbative features. Note also that knowing the functional form of the leading-order HTE (with unknown prefactors) does not clarify in what range of hadron momenta these terms are indeed leading. At too small momenta, it may still be that higher-order HTE are sizable and even change the overall sign of the correction, rendering the extrapolation unreliable. Another type of HTE is nucleon mass corrections (NMCs). These, in turn, can be corrected for exactly by using the formulae derived by Chen et al. [107]. The calculation presented in this reference provides closed expressions for the mass corrections relevant for all types of quasi-PDFs. An important feature of these NMCs is that the particle number is conserved. We note that NMCs are already small at momenta not much larger than the nucleon mass, as also argued by Radyushkin [76] from a model calculation. It is important to remark that the NMCs for quasi-PDFs (also commonly referred to as TMCs) are different from TMCs in phenomenological analyses for standard PDFs (see, e.g., [158] for a review). NMCs in quasi-PDFs result from the nonzero ratio of the nucleon mass to its momentum (a ratio that vanishes in the IMF), whereas TMCs in phenomenological analyses refer to corrections needed because of a nonzero mass of the target in a scattering experiment. At the level of matrix elements, the momentum dependence is manifested, inter alia, in the physical distance at which they decay to zero. This distance, which enters the limits of summation of the discretized Fourier transform in Equation (49), becomes smaller for larger values of $P_3$. If it is too large, the periodicity of the Fourier transform will induce nonphysical oscillations in the quasi-PDFs, especially at large $x$.
We note that these oscillations do not appear because of the truncation of the Fourier transform at finite $z_{max}$ per se, but rather because of the slow decay of the matrix elements at low momenta. This effect can be naturally suppressed by simulating at larger nucleon boosts and, indeed, as we show in Section 9, the oscillations are dampened at larger $P_3$. The uncertainty induced by this behavior can also be affected by uncertainties related to the renormalization of the bare matrix elements. The large values of the $Z$-factors amplify both the real and the imaginary part and, for complex $Z$-factors, also mix them with each other. The $Z$-factors should be purely real, but this feature holds only if the conversion between the intermediate lattice renormalization scheme and the $\overline{\rm MS}$ scheme is done to all orders in perturbation theory. Together with lattice artifacts appearing in the estimate of the intermediate scheme renormalization functions, this effectively induces a slower decay of the matrix elements with the Wilson line length and shifts the value of $z$ where the matrix elements become zero to larger distances. Hence, the combination of a too small boost and uncertainties in the $Z$-factors manifests itself in the oscillations. Note also that the problem may be more fundamental. It is not presently clear how reliable a reconstruction of a continuous distribution from a necessarily limited set of data can be. This issue is being investigated in the context of pseudodistributions [159]. The reconstruction techniques mentioned in the context of the hadronic tensor in Section 2.1 may be crucial for controlling this aspect. It was also speculated [51] that the Fourier transform may be a fundamental limitation of the quasi- and pseudodistribution approaches and that the fundamental object may be the renormalized matrix element, or ITD. A method to remove the nonphysical oscillations was proposed in [160] and was termed the derivative method.
One rewrites the Fourier transform using integration by parts,

\tilde{q}(x) \propto \int_{-z_{\rm max}}^{z_{\rm max}} dz\, e^{-ixP_3 z}\, h(z) = \left. \frac{i\, e^{-ixP_3 z}\, h(z)}{xP_3} \right|_{-z_{\rm max}}^{z_{\rm max}} + \frac{1}{ixP_3} \int_{-z_{\rm max}}^{z_{\rm max}} dz\, e^{-ixP_3 z}\, h'(z),   (52)

where the derivative of the matrix elements with respect to the Wilson line length gives the method its name. The integration by parts is exact, and this definition of the Fourier transform is equivalent to the standard one if the matrix elements have decayed to zero at the maximal Wilson-line length, up to discretization effects induced by the need to discretize the continuous derivative. Otherwise, one neglects the surface term in Equation (52), which effectively absorbs the oscillations. However, it is debatable whether the procedure is safe and whether the neglected surface term does not also hide physical contributions at a given nucleon boost. Moreover, the explicit factor of 1/x in the surface term leads to an uncontrolled approximation for small values of x. Other proposed methods to remove the oscillations are a low-pass filter [160] and the inclusion of a Gaussian weight in the Fourier transform [161]; however, they have not been used with real lattice data. Ideally, the nucleon momentum should be large enough to remove the oscillations in a natural way, instead of attempting to suppress them artificially.

(7) Other effects in the PDFs extraction procedure. For the sake of completeness, we mention other effects that can undermine the precision of the lattice extraction of PDFs, although they are not challenges for the lattice per se. In the previous point, we have already mentioned uncertainties related to renormalization. In RI/MOM-type schemes, they manifest themselves in the dependence of the Z-factors on the RI scales from which they were extracted, even after evolution to a common scale. This can be traced back to the breaking of continuum rotational invariance, O(4), down to a hypercubic subgroup, H(4).
A way to overcome this problem is to subtract lattice artifacts computed in lattice perturbation theory, which can be done to all orders in the lattice spacing at the one-loop level; see [162] for more details about this method and an application to Z-factors of local operators.

Another renormalization-related issue is the perturbative conversion from the intermediate lattice scheme to the MS-bar scheme and the evolution to a reference scale. Although not mandatory, the aim of the whole programme is to provide PDFs in the scheme of choice for phenomenological applications, i.e., the MS-bar scheme. The conversion and evolution are currently performed using one-loop formulae and are, hence, subject to perturbative truncation effects. A two-loop calculation of these steps will shed light on the magnitude of the truncation effects. Similarly, truncation effects also emerge in the matching of quasi-PDFs to light-cone PDFs, currently done at the one-loop level; see Section 8 for a more thorough discussion of matching.

(8) Finite and power-divergent mixings. A general feature of quantum field theory is that operator mixing under renormalization is bound to appear among operators that share the same symmetry properties. On the lattice, some continuum symmetries that otherwise prevent mixing are broken. For operators of the same dimension, the mixing is finite. An important example of such mixing was mentioned above: for some fermionic discretizations, the operator with the Dirac structure parallel to the Wilson line (used for the unpolarized PDF) has the same symmetries as the analogous scalar operator [106] and hence mixes with it, while the structure with a Dirac index perpendicular to the Wilson line has different symmetry properties and avoids the mixing. This mixing is a lattice effect stemming from the breaking of chiral symmetry by the lattice discretization and does not appear for lattice fermion formulations that preserve this symmetry, e.g., overlap fermions.
We discuss this finite mixing in more detail in the next section.

If the dimension of the operator with the same symmetries is lower, then the mixing is power divergent in the lattice spacing; i.e., it contributes a term behaving like an inverse power of the lattice spacing, with the power given by the difference of the operator dimensions. The possibility that such mixings occur for quasi-PDFs, as well as for pseudo-PDFs and LCSs, was considered by G.C. Rossi and M. Testa in [163, 164]. They considered a toy model, devoid of QCD complications irrelevant in the context of their argument, and showed that moments of quasi-PDFs evince power-divergent mixings with lower-dimensional operators coming from the trace terms. They argued that, for a proper lattice extraction, such mixings would have to be computed and subtracted.

However, it was counterargued in three papers [84, 90, 165] that the problem actually does not exist. In [165], it was pointed out that, indeed, all moments except the zeroth do not converge. However, the light-cone PDFs are extracted from the nonlocal quasidistributions, which avoid the power divergence problem; i.e., moments of quasi-PDFs are never intended to be computed. The Authors trace this back to the much simpler ultraviolet physics in the nonlocal formulation, where, apart from the Wilson-line-induced power divergence, shown to be renormalizable (see Sections 5 and 7), there are only logarithmic divergences. All of these divergences can be properly renormalized on the lattice, e.g., in an RI/MOM-type scheme.

It was also argued by Rossi and Testa that divergent moments of quasi-PDFs necessarily imply divergent moments of the extracted light-cone PDFs, since the latter are proportional to the former. However, this argument ignores the presence of the moments of the matching function: it is exactly the matching function that makes the moments of standard PDFs finite after the subtraction of the UV differences between the two types of distributions.
In other words, the divergence of the quasi-PDF moments is exactly canceled by the divergence of the moments of the matching function, yielding finite moments of light-cone PDFs.

Further explanations were provided in [84]. Radyushkin pointed out that Rossi and Testa rely on a Taylor expansion in the Wilson-line length. This expansion may be justified in the very soft case, when all derivatives with respect to this length exist at zero separation. However, in the general case, the use of the Taylor expansion for the hard logarithm "amounts to just asking for trouble." Crucially, it is the hard-logarithm part that contributes slowly decreasing terms to the large-x part of quasi-PDFs, and these terms lead to the divergence of the quasidistribution moments. They are not eliminated by merely taking the infinite momentum limit, but they do disappear upon the matching procedure. As a result, one can calculate the moments of light-cone PDFs from the quasi-PDF data.

Finally, J. Karpie, K. Orginos, and S. Zafeiropoulos demonstrated explicitly [90] that the problem does not appear for pseudo-PDFs, refuting the corresponding claim by Rossi and Testa. The reduced ITDs were expanded, via the OPE, in lattice-regularized twist-2 operators, which indeed have power divergences on the lattice due to the breaking of rotational symmetry. However, the Wilson coefficients in the OPE have exactly the same power divergences and cancel the power divergences of the matrix elements, order by order in the expansion, so that the final series is finite to all orders. The Authors also provided an explicit numerical demonstration for the first two moments, obtaining compatibility within errors with an earlier lattice calculation in the same quenched setup; see Section 11.4 for more details.

With all these developments, it has been convincingly established that the problem advocated by Rossi and Testa does not hinder the lattice extraction of light-cone PDFs. Thus, power-divergent mixings only manifest themselves in certain quantities, like moments of quasi-PDFs, which are nonphysical.
In turn, finite mixings can be avoided by a proper Dirac structure of matrix elements.
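As a toy numerical illustration of the oscillation issue and of the derivative method discussed earlier in this section, one can Fourier transform a model matrix element that has not decayed to zero at the maximal Wilson-line length. All inputs below are hypothetical model choices (not lattice data), and the momentum and lattice sizes are assumed for illustration only:

```python
import numpy as np

# Toy model of a (real, even) renormalized matrix element h(z, P3):
# it has NOT decayed to zero at z_max, mimicking a too-small nucleon boost.
L = 48                          # lattice extent in the boost direction (assumed)
P3 = 6 * 2 * np.pi / L          # nucleon boost in lattice units (assumed)
z_max = 12
z = np.arange(-z_max, z_max + 1)
h = np.exp(-np.abs(z) / 8.0) * np.cos(0.3 * z)   # toy h(z); h(z_max) != 0

x = np.linspace(0.05, 2.0, 100)  # avoid x = 0 because of the 1/x factor below

# Standard discretized Fourier transform (cf. Equation (49)):
# oscillations in x appear because h(z_max) is still sizable.
ft_standard = np.array([np.sum(np.exp(-1j * xi * P3 * z) * h) for xi in x])
ft_standard *= P3 / (2 * np.pi)

# Derivative method (cf. Equation (52)): integrate by parts and DROP the
# surface term; the continuum derivative is replaced by a finite difference.
dh = np.gradient(h, z.astype(float))
ft_deriv = np.array(
    [np.sum(np.exp(-1j * xi * P3 * z) * dh) / (1j * xi * P3) for xi in x])
ft_deriv *= P3 / (2 * np.pi)

# For an even, real h(z), both results are real up to floating-point error;
# the difference between the two approximates the neglected surface term.
surface = ft_standard.real - ft_deriv.real
```

Plotting `ft_standard` against x in this toy setup shows oscillations with a period controlled by 1/(P3 z_max), while `ft_deriv` suppresses them at large x but becomes unreliable at small x, where the dropped surface term carries the 1/x enhancement.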

We conclude this section with a schematic flowchart (Figure 11), prepared by C. Monahan, representing the various steps on the way from bare matrix elements to final light-cone PDFs. Some of the challenges for the lattice computations discussed above are indicated. We refer also to [51] for another discussion of systematic effects.

Figure 11: Schematic representation of different steps needed to extract light-cone PDFs from quasi-PDFs and of the challenges encountered at these steps. Source: [51] (arXiv), reprinted with permission by the Author.

7. Renormalization of Nonlocal Operators

The renormalization of nonlocal operators that include a Wilson line is a central component of lattice calculations related to quasi-PDFs. Lattice results from numerical simulations can only be related to physical quantities upon appropriate renormalization; only then does a comparison with experimental and phenomenological estimates become a real possibility. As discussed in Section 5.2, the renormalizability of the straight-Wilson-line bilinear operators was investigated early on by Ji and Zhang [133] at one loop in perturbation theory, concluding that such operators are multiplicatively renormalizable. The argument was also extended to the two-loop level. Ishikawa et al. showed in [132] the feasibility of subtracting the power divergence present in the operators under study, so as to achieve a well-defined matching between the quasi-PDFs and the light-cone PDFs. These studies were later expanded to prove renormalizability of the operators to all orders in perturbation theory [134, 135], including in the lattice regularization.

Since the proposal of Ji in 2013, several aspects of quasi-PDFs have been investigated, such as the feasibility of a calculation in Lattice QCD. This includes algorithmic developments [109, 152, 154] that led to simulations at the physical point, and the matching between the quasi- and light-cone PDFs [48, 83, 124, 142, 166]. Thus, the lattice calculations progressed with a missing ingredient: their renormalization. It was not until 2017 that a proper renormalization prescription was proposed, despite the theoretical developments on the renormalizability of the nonlocal operators of interest. This proved to be a challenging and delicate process due to the presence of the Wilson line, which brings in additional power divergences, the nonlocality, and the complex nature of the matrix elements. As a consequence, the first studies of quasi-PDFs on the lattice either neglected renormalization [46] or naively multiplied the matrix elements by the renormalization function of the corresponding local operators [100, 107, 108], a procedure to be understood as a normalization.

7.1. Power Divergence

Among the first attempts to understand the renormalization of nonlocal operators was to address the power divergence inherited from the Wilson line using the static potential approach, as described in this subsection. Eliminating the power divergence results in a well-defined matching between the quasi-PDFs and the light-cone PDFs.

The renormalization of nonlocal operators in gauge theories was investigated a long time ago [167-170], and later in the 1980s and 1990s [140, 141, 171-176]. In these seminal works, it was identified that Wilson loops along a smooth contour, computed in dimensional regularization (DR), are finite functions of the renormalized coupling constant, while other regularization schemes may lead to additional renormalization functions: a multiplicative factor, within which the logarithmic divergences can be factorized, and an exponential factor containing the mass renormalization, δm, of a test particle that moves along the contour, multiplied by the contour length. In particular, in the lattice regularization (LR), the latter divergence manifests itself as a power divergence with respect to the UV cutoff, the inverse of the lattice spacing, with a dimensionless coefficient. This is analogous to the heavy-quark case, where similar characteristics are observed: a straight Wilson line may represent a static heavy-quark propagator, with δm a corresponding mass shift. Inspired by this analogy, Chen et al. [177] and Ishikawa et al. [132] proposed a modification such that spacelike correlators do not suffer from any power divergence. In their work, the matrix element appearing in Equation (25) is multiplied by the exponential of the mass counterterm times the Wilson-line length, yielding an improved matrix element with only logarithmic divergences in the lattice regulator. Note that a general Dirac structure can be kept in this expression, as the methodology is applicable to all types of PDFs. A necessary component of this improved matrix element is the calculation of the mass counterterm δm. First attempts to obtain δm appear in the literature [132, 177] and are based on adopting a static potential [178].
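Schematically, and with normalization conventions assumed (the precise forms are Equations (54)-(56) of the original work), these statements can be summarized as:

```latex
% Renormalization of a Wilson loop along a smooth contour C of length L(C)
% in a generic regularization (schematic; conventions assumed):
W_{\rm bare}(\mathcal{C}) \;=\; e^{-\delta m\, L(\mathcal{C})}\; Z(\mathcal{C})\; W_{\rm ren}(\mathcal{C}),
\qquad
\delta m \;\overset{\rm LR}{=}\; \frac{c}{a} + \mathcal{O}(a^0),
% so that an improved quasi-PDF matrix element of the schematic form
h_{\rm imp}(z,P_3) \;=\; e^{\,\delta m\, |z|}\; h_{\rm bare}(z,P_3)
% carries only logarithmic divergences.
```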

Following the notation of [132], we define the static potential V(r) for a spatial separation r via a rectangular r × T Wilson loop, whose expectation value decays exponentially with V(r)T for large T. The Wilson loop is renormalized by removing the perimeter divergence, governed by the mass counterterm, together with an additional factor due to the corners of the chosen rectangle. Using Equations (57)-(58), one can relate the desired mass counterterm δm to the static potential. An additional condition is necessary to determine δm; it can be fixed by imposing a prescribed value of the renormalized static potential at a chosen reference distance, where that choice defines the scheme. The appearance of an arbitrary scale is not a surprise and is in accordance with the work of R. Sommer [179], suggesting a further finite dimensionful scale that appears in the exponential of Equation (55), based on arguments from Heavy Quark Effective Theory.
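A minimal numerical sketch of this procedure follows, using a toy Cornell-type potential with hypothetical parameter values and an assumed sign convention for δm; it is an illustration of the logic, not the actual lattice determination:

```python
import numpy as np

# Toy static potential: V(r) = -alpha/r + sigma*r + c/a, where c/a models the
# linearly divergent self-energy of the two static lines (values hypothetical).
a = 0.08                        # lattice spacing in fm (assumed)
alpha, sigma, c = 0.3, 1.0, 0.5
V = lambda r: -alpha / r + sigma * r + c / a

# One possible renormalization condition (scheme choice, cf. the discussion
# around Equation (60)): demand that the renormalized static potential vanish
# at a reference distance r0. Since V(r) contains the self-energy of TWO
# static lines, delta_m absorbs half of the divergent constant.
r0 = 4 * a
delta_m = -0.5 * V(r0)          # contains -c/(2a) plus finite pieces

# Improved quasi-PDF matrix element: multiply the bare one by exp(-delta_m*z)
# (sign convention assumed), canceling the exp(-c*z/(2a)) divergent decay.
z = np.arange(0, 9) * a
h_bare = np.exp(-(c / (2 * a)) * z) * np.exp(-0.4 * z)   # toy bare h(z)
h_imp = np.exp(-delta_m * z) * h_bare

# h_imp now decays at a physical, a-independent rate, while h_bare's decay
# is dominated by the divergent c/(2a) term.
```

Repeating the exercise at a smaller lattice spacing makes `h_bare` fall ever faster while the decay rate of `h_imp` is unchanged, which is the defining property of the subtracted power divergence.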

A proper determination of δm requires a nonperturbative evaluation on the same ensemble used for the calculation of the quasi-PDF. This is essential in order to eliminate the systematic uncertainty related to the truncation of a perturbative evaluation, which is typically limited to one or two loops. Furthermore, a nonperturbative δm yields a purely nonperturbative improved matrix element. Nevertheless, this quantity has also been computed at the one-loop level in perturbation theory [132, 177], in an effort to understand qualitatively the structure of the power divergence. Within these works, it was demonstrated that such a mass counterterm removes the power divergence to all orders in perturbation theory.

7.2. Lattice Perturbation Theory

The promising results from the first exploratory studies of the quasi-PDFs [46, 47] led to an interest in developing a renormalization prescription appropriate for Lattice QCD. Several features of the quasi-PDFs had been studied in continuum perturbation theory (see, e.g., Section 5.2), but more recently calculations in lattice perturbation theory have also appeared. Such developments are highly desirable, as the ultimate goal is to relate quasi-PDFs extracted from numerical simulations in Euclidean spacetime to standard PDFs in continuum Minkowski space. In this subsection, we highlight three calculations that provided important insights into quasi-PDFs.

7.2.1. IR Divergence in Lattice and Continuum

X. Xiong et al. computed in [180] the unpolarized quasi-PDF in lattice perturbation theory using clover fermions and Wilson gluons. The calculation was performed in Feynman gauge and included a nonzero quark mass. This allowed the study of the matching between lattice and continuum and, as discussed in that work, the massless and continuum limits do not commute, leading to different IR behaviors. The calculation contained the one-loop Feynman diagrams shown in Figure 12, in which the quark and internal gluon momenta are indicated explicitly.

Figure 12: One-loop diagrams for the calculation of Green’s function of nonlocal operators. The double line represents the gauge link in the operator. Source: [180] (arXiv), reprinted with permission by the Authors.

Here, we do not provide technical details and focus only on the qualitative conclusions, but we encourage the interested Reader to consult [180] for further details. The one-loop results show that the IR divergence of the continuum quasi-PDF is correctly recovered only for a specific ordering of the limits, reflecting the noncommutativity of the massless and continuum limits mentioned above. However, it is stressed that this is a concern only for perturbative calculations, as nonperturbative ones do not contain a collinear divergence. This is encouraging and serves as a proof that the matrix elements in Euclidean space are the same as the ones in Minkowski space. The same conclusions were obtained in [131, 165].

7.2.2. Renormalization of Nonlocal Operators in Lattice Perturbation Theory

M. Constantinou and H. Panagopoulos calculated in [106] the renormalization functions of the nonlocal operators in perturbation theory in the lattice regularization (LR). The calculation was performed at the one-loop level, evaluating the diagrams shown in Figure 12 for clover fermions and a variety of Symanzik-improved gluon actions, including Iwasaki and tree-level Symanzik. Note that the schematic representation of the diagrams in Figure 12 is the same for dimensional and lattice regularizations, but a calculation in LR is by far more complicated technically. This is a consequence of the discretization of the QCD Lagrangian, coupled with the additional divergences that depend on the lattice regulator. The renormalization functions were computed for massless fermions in the MS-bar scheme, for general values of the action parameters and in a general gauge. The latter served as a cross-check for gauge-independent quantities. In addition to the calculation in LR, the Green's function of the nonlocal operators was obtained in dimensional regularization (DR), which, in combination with the corresponding lattice results, gives a direct definition of the renormalization functions in the MS-bar scheme.

The operator under study includes a straight Wilson line of length z in a spatial direction. Only nonzero z is to be considered, due to the appearance of contact terms beyond tree level, which makes the limit of vanishing z nonanalytic. Green's functions of these operators are evaluated for all 16 independent combinations of the Dirac matrices, i.e., the scalar, pseudoscalar, vector, axial, and tensor structures. Note that this set includes twist-2 operators as well as higher twist. We will later see that it is important to distinguish between the cases in which the Dirac index is in the same direction as the Wilson line or perpendicular to it.

One of the main findings of this work is the difference between the bare lattice Green's functions and the MS-bar-renormalized ones (from DR). This difference contributes to the renormalization of the operator in LR and was found to consist of two contributions: one proportional to the tree-level structure of the operator and one with a different Dirac structure, each multiplied by a numerical coefficient that depends on the action parameters. Note that the term proportional to the tree-level structure contains the one-loop counterpart of the power divergence discussed in the previous subsection, and its numerical coefficient was computed in [106]. Perturbation theory is not reliable in providing numerical values for the mixing and power-divergence coefficients; nevertheless, it provides crucial qualitative input on the quantities under study.

The conclusion from Equation (63) is that an operator with a given Dirac structure renormalizes multiplicatively only if the anticommutator of that structure with the gamma matrix along the Wilson-line direction vanishes. This is true for the axial and tensor operators used for the helicity and transversity PDFs, which have one index in the direction of the Wilson line. On the contrary, the vector current with its index along the Wilson line turns out to mix with the scalar, a higher-twist operator. This finding significantly impacted all numerical simulations of the unpolarized PDFs, as they had been using this particular operator [46, 100, 107, 108, 181], unaware of the aforementioned mixing. In that work, the Authors proposed to use a vector operator with the Dirac index perpendicular to the Wilson line, the ideal candidate being the temporal direction, for reasons beyond the mixing. First, the matching procedure between the quasi-PDFs and normal PDFs also holds for the temporal vector operator, as it belongs to the same universality class [182]. In addition, the temporal vector operator offers a faster convergence to the light-cone PDFs, as discussed in [75]. Note, however, that the two vector operators do not share the same matching formula, with the one for the temporal structure calculated much later. A detailed discussion of the matching to light-cone PDFs is provided in Section 8.
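Schematically, and with hypothetical notation (the explicit formulae are given in [106] and in Section 7.3), the mixing found for the vector operator means that its renormalization takes a 2×2 matrix form involving the scalar operator:

```latex
% Schematic 2x2 mixing pattern (notation hypothetical): the vector operator
% with Dirac index along the Wilson line, O_V, mixes with the scalar O_S:
\begin{pmatrix} O_{V} \\ O_{S} \end{pmatrix}^{\!\rm ren}
=
\begin{pmatrix} Z_{VV} & Z_{VS} \\ Z_{SV} & Z_{SS} \end{pmatrix}^{\!-1}
\begin{pmatrix} O_{V} \\ O_{S} \end{pmatrix}^{\!\rm bare},
\qquad Z_{VS},\, Z_{SV} \neq 0,
% whereas for structures anticommuting with the gamma matrix along the line
% (axial, tensor, temporal vector) the off-diagonal elements vanish and the
% renormalization is purely multiplicative.
```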

The work of [106] has provided useful information not only on the renormalization pattern of nonlocal operators, but also on the conversion from a renormalization scheme of choice to the MS-bar scheme. The conversion factor is extracted from the ratio of the renormalization functions in the two schemes, computed in DR, and multiplies nonperturbative estimates in the intermediate scheme in order to obtain MS-bar results. Due to the mixing found in the lattice calculation, a convenient scheme that is applicable nonperturbatively is of RI type [183]. A well-defined prescription within RI-type schemes exists for both the multiplicative renormalization and the mixing coefficients, as described in Section 7.3. An RI-type scheme is a natural choice for nonperturbative evaluations of renormalization functions, because it does not require separating out finite contributions with tensor structures distinct from those at tree level (such as the terms that appear in the Green's functions of the local vector and axial operators even in the limit of zero quark mass). The conversion factor has been computed and was used in the renormalization programme of ETMC [151]. The conversion factor shares certain features with the matrix elements; that is, it is complex, with a real part symmetric and an imaginary part antisymmetric in the Wilson-line length. A representative example is shown in Figure 13 for the vector, axial, and tensor operators that have a Dirac index in the same direction as the Wilson line.