pith. machine review for the scientific record. sign in

arxiv: 2604.12416 · v1 · submitted 2026-04-14 · ✦ hep-lat · cs.LG

Recognition: unknown

Machine learning for four-dimensional SU(3) lattice gauge theories

Authors on Pith no claims yet

Pith reviewed 2026-05-10 14:21 UTC · model grok-4.3

classification ✦ hep-lat cs.LG
keywords machine learninglattice gauge theorySU(3)fixed-point actionnormalizing flowsrenormalization groupgauge-equivariant networkscontinuum limit
0
0 comments X

The pith

Machine learning produces fixed-point gauge actions that scale to the continuum limit in four-dimensional SU(3) lattice theory.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The review explains how generative models such as normalizing flows and diffusion processes, together with renormalization-group transformations implemented by gauge-equivariant neural networks, can sample gauge-field configurations more efficiently than standard Monte Carlo methods. It presents concrete scaling tests of one such machine-learned fixed-point action in four-dimensional SU(3) gauge theory, using observables built from gradient-flow scales that eliminate all tree-level lattice artifacts. These tests also cover the static quark potential and the deconfinement transition. A sympathetic reader would care because the approach directly attacks the critical slowing down and autocorrelation problems that currently limit the reach of lattice calculations toward finer spacings and larger volumes.

Core claim

A fixed-point action obtained by training gauge-equivariant convolutional neural networks on renormalization-group transformations yields ensembles whose gradient-flow scales, static potential, and deconfinement observables approach their continuum values without residual tree-level discretization errors, as demonstrated by explicit scaling studies in four-dimensional SU(3) gauge theory.

What carries the argument

Gauge-equivariant convolutional neural networks that learn renormalization-group improved fixed-point actions, paired with generative models for unbiased configuration sampling.

If this is right

  • Continuum extrapolations of physical quantities become feasible with smaller lattice artifacts.
  • Autocorrelation times and critical slowing down are reduced, allowing finer lattices to be simulated at comparable cost.
  • The same learned-action framework can be applied to other observables and to theories with dynamical fermions.
  • Direct comparison of the learned fixed-point action with perturbative and non-perturbative benchmarks becomes a quantitative test of the method.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The technique could be extended to full QCD with sea quarks once the pure-gauge case is validated.
  • Learned actions might reveal non-perturbative fixed-point structures not captured by conventional blocking schemes.
  • Integration with existing lattice QCD codes would allow immediate use in production calculations of hadron spectroscopy or thermodynamics.

Load-bearing premise

The machine-learned fixed-point action and generative models produce unbiased ensembles whose continuum extrapolations are controlled by the stated observables without residual systematic errors from the training procedure or the neural network architecture.

What would settle it

A clear mismatch between the machine-learned action and established continuum results for the gradient-flow scales or the static potential when the lattice spacing is decreased further while keeping physical volume fixed.

Figures

Figures reproduced from arXiv: 2604.12416 by Urs Wenger.

Figure 1
Figure 1. Figure 1: Illustration of the continuum limit for asymptotically free lattice field theories: the lattice spacing decreases from right to left, 𝑎 < 𝑎 ′ < 𝑎 ′′ , as the coupling decreases, 𝑔 < 𝑔 ′ < 𝑔 ′′ , or equivalently 𝛽 > 𝛽′ > 𝛽′′ increases. In the limit 𝛽 → ∞ the lattice spacing 𝑎 → 0 vanishes, which for a fixed physical length scale 𝜉 is equivalent to 𝜉/𝑎 → ∞, i.e., the continuum limit is realized as a second-o… view at source ↗
Figure 2
Figure 2. Figure 2: Illustration of the forward diffusion process where stochastic noise is added and the backward denoising process employed in generative diffusion models. Figure taken from Ref. [17]. 5 [PITH_FULL_IMAGE:figures/full_fig_p005_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Illustration of the stochastic non-equilibrium MCMC scheme. Left plot: Configurations obtained on a lattice with open boundary conditions (OBC) using standard MCMC updates are connected via non￾equilibrium MCMC updates to configurations on a lattice with periodic boundary conditions (PBC). Right plot: Connecting OBC to PBC on a localized defect using machine-learned stochastic normalizing flows on the link… view at source ↗
Figure 4
Figure 4. Figure 4: Illustration of the RGT flow. The plane at 𝛽 → ∞ represents the critical surface where all the irrelevant couplings {𝑐𝛼} flow to the fixed point (FP). An action perturbed from the FP in direction of the only relevant coupling 𝑔 2 ∼ 1/𝛽 flows, under repeated (continuous) RGTs, along the renormalized trajectory (RT) defining quantum perfect actions with no lattice artefacts at finite lattice spacing. The act… view at source ↗
Figure 5
Figure 5. Figure 5: Illustrations of the L-CNN: the convolutional layer (L-Conv) in the left panel parallel transports gauge-covariant objects to a common position (red dot), where they are combined in the bilinear layer (L￾Bilin) in the middle panel and eventually mapped to gauge invariant objects with the trace layer (L-Tr) in the right panel. The lower row depicts simple examples of applying the sequence of the layers abov… view at source ↗
Figure 6
Figure 6. Figure 6: Illustrations of the architecture search. Shown are the relative action error (top row) and derivative error (bottom row) of the L-CNN w.r.t. to the exact FP values as a function of the architecture parameters, i.e., the number of layers (left plots), the total number of channels (middle plots), and the kernel size (right plots). is finite, the L-CNN produces an ultralocal action and therefore a truncated … view at source ↗
Figure 7
Figure 7. Figure 7: Continuum-limit extrapolations for the ratios 𝑡0.3/𝑤 2 0.3 and 𝑡0.5/𝑡0.3. Results from Wilson and Symanzik MC simulations are shown using plaquette and clover discretizations of the action density. shown in [PITH_FULL_IMAGE:figures/full_fig_p012_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Left plot: Comparison of various PDFs obtained from AIC-weighted continuum limits of the ratio 𝑡0.5/𝑤 2 0.5 using a variety of fit functions and ranges. Right plot: Comparison of continuum predictions for four-dimensional SU(3) gauge theory from MC simulations using either the FP, Wilson or tree-level Symanzik improved lattice action. The 𝛽-function results are rescaled by a factor of 50 for visibility. 12… view at source ↗
Figure 9
Figure 9. Figure 9: Left plot: Static quark-antiquark potential obtained on lattices with lattice spacings ranging between 0.08 fm ≤ 𝑎 ≤ 0.3 fm. Right plot: Thermodynamic limit of the critical coupling 𝛽𝑐 obtained from the position of the Polyakov loop susceptibility at a lattice spacing of 𝑎 ≃ 0.3 fm corresponding to a temporal lattice extent of 𝐿𝑡 = 2. shows the AIC-weighted PDFs of the continuum limits for the ratio 𝑡0.5/𝑤… view at source ↗
read the original abstract

In this review I summarize how machine learning can be used in lattice gauge theory simulations and what ap\-proaches are currently available to improve the sampling of gauge field configurations, with a focus on applications in four-dimensional SU(3) gauge theories. These include approaches based on generative machine-learning models such as (stochastic) normalizing flows and diffusion processes, and an approach based on renormalization group (RG) transformations, more specifically the machine learning of RG-improved gauge actions using gauge-equivariant convolutional neural networks. In particular, I present scaling results for a machine-learned fixed-point action in four-dimensional SU(3) gauge theory towards the continuum limit. The results include observables based on the classically perfect gradient-flow scales, which are free of tree-level lattice artefacts to all orders, and quantities related to the static potential and the deconfinement transition.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper reviews machine learning methods for lattice gauge theory simulations in four-dimensional SU(3) gauge theories, covering generative models (normalizing flows, diffusion processes) for configuration sampling and renormalization-group improved actions learned via gauge-equivariant convolutional neural networks. It presents explicit scaling results for a machine-learned fixed-point action, demonstrating controlled continuum extrapolations using classically perfect gradient-flow scales (free of tree-level artifacts to all orders), static-potential observables, and the deconfinement transition.

Significance. If the scaling results hold with controlled systematics, the work demonstrates that ML-derived actions and sampling methods can achieve reliable continuum limits in 4D SU(3) theories, with the choice of classically perfect observables providing a clear advantage in eliminating discretization errors. This strengthens the case for ML techniques as practical tools in lattice QCD simulations.

major comments (2)
  1. [§4] §4 (scaling results): The continuum extrapolations for the deconfinement transition and static potential rely on the assumption that the machine-learned action produces unbiased ensembles; the manuscript should include a quantitative assessment of residual training biases (e.g., via comparison of autocorrelation times or reweighting factors) to confirm that these do not affect the extrapolated values at the reported precision.
  2. [§3.2] §3.2 (equivariant CNN architecture): The claim that the learned action approximates a fixed-point action requires explicit verification that the RG transformation converges under iteration; without a demonstration that the learned coupling flow stabilizes (e.g., via a plot of effective couplings after multiple RG steps), the 'fixed-point' designation remains approximate rather than demonstrated.
minor comments (3)
  1. [Abstract] The abstract states that results are 'free of tree-level lattice artefacts to all orders' but does not specify the lattice spacings or number of ensembles used in the scaling study; adding these details would improve clarity.
  2. [Figures in §4] Figure captions for the gradient-flow scale plots should include the fit form and goodness-of-fit metrics to allow readers to assess the quality of the continuum extrapolation.
  3. [§2.3] Notation for the neural-network layers (e.g., the precise definition of gauge-equivariant convolutions) is introduced without a compact summary table; a small table listing layer types, channel dimensions, and activation functions would aid readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading of our manuscript and the constructive comments. We are pleased with the positive assessment of the work and the recommendation for minor revision. We address each major comment below and will update the manuscript accordingly.

read point-by-point responses
  1. Referee: [§4] §4 (scaling results): The continuum extrapolations for the deconfinement transition and static potential rely on the assumption that the machine-learned action produces unbiased ensembles; the manuscript should include a quantitative assessment of residual training biases (e.g., via comparison of autocorrelation times or reweighting factors) to confirm that these do not affect the extrapolated values at the reported precision.

    Authors: We agree that a quantitative assessment of residual training biases is important to confirm the reliability of the continuum extrapolations. While the current manuscript presents the scaling results under the assumption of unbiased ensembles from the generative models, we will add an explicit analysis in the revised §4. This will include comparisons of autocorrelation times for the gradient-flow scales and static-potential observables, as well as estimates of reweighting factors, to demonstrate that any residual biases do not affect the extrapolated values at the reported precision. revision: yes

  2. Referee: [§3.2] §3.2 (equivariant CNN architecture): The claim that the learned action approximates a fixed-point action requires explicit verification that the RG transformation converges under iteration; without a demonstration that the learned coupling flow stabilizes (e.g., via a plot of effective couplings after multiple RG steps), the 'fixed-point' designation remains approximate rather than demonstrated.

    Authors: We acknowledge that an explicit demonstration of convergence under iterated RG transformations would strengthen the fixed-point claim. The manuscript supports the approximation through the gauge-equivariant CNN training objective and the observed continuum scaling behavior. To address this directly, we will add to the revised §3.2 a plot of effective couplings after multiple RG steps, along with a discussion showing stabilization of the coupling flow. revision: yes

Circularity Check

0 steps flagged

No significant circularity in derivation or scaling claims

full rationale

The paper is a review summarizing ML methods for lattice gauge theory in 4D SU(3), covering generative models like normalizing flows and an RG-based approach using gauge-equivariant CNNs to learn improved actions. It presents scaling results for a machine-learned fixed-point action, employing classically perfect gradient-flow scales (free of tree-level artifacts) plus static potential and deconfinement observables for continuum extrapolations. No load-bearing step reduces a claimed prediction or result to a fitted parameter or self-citation by construction; the observables are chosen precisely to control artifacts independently of the training procedure. Self-citations to prior literature on flows and equivariant networks are present but not used to justify uniqueness or force the central scaling demonstrations, which remain empirically grounded and externally falsifiable via the stated observables.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only the abstract is available. No free parameters, axioms, or invented entities are explicitly introduced in the provided text; the work relies on standard lattice gauge theory and machine-learning assumptions not detailed here.

pith-pipeline@v0.9.0 · 5431 in / 1139 out tokens · 58338 ms · 2026-05-10T14:21:25.779101+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Testing machine-learned distributions against Monte Carlo data for the QCD chiral phase transition

    hep-lat 2026-05 unverdicted novelty 7.0

    Conditional MAFs interpolate QCD chiral phase structure across coupling, mass, and volume, reproducing reweighting while cutting required ensembles despite bias near transitions.

  2. Lattice fermion formulation via Physics-Informed Neural Networks: Ginsparg-Wilson relation and Overlap fermions

    hep-lat 2026-05 unverdicted novelty 7.0

    Physics-informed neural networks construct overlap fermions by optimizing to the Ginsparg-Wilson relation and autonomously discover both the standard and generalized Fujikawa-type versions of the relation.

  3. Lattice fermion formulation via Physics-Informed Neural Networks: Ginsparg-Wilson relation and Overlap fermions

    hep-lat 2026-05 unverdicted novelty 7.0

    Physics-Informed Neural Networks construct lattice Dirac operators satisfying the Ginsparg-Wilson relation, reproducing overlap fermions to high accuracy and discovering a Fujikawa-type generalized relation via algebr...

Reference graph

Works this paper leans on

53 extracted references · 50 canonical work pages · cited by 2 Pith papers · 2 internal anchors

  1. [1]

    Lawrence,Machine-learning approaches to accelerating lattice simulations,PoS LATTICE2024(2025) 010 [2502.02670]

    S. Lawrence,Machine-learning approaches to accelerating lattice simulations,PoS LATTICE2024(2025) 010, [2502.02670]

  2. [2]

    Abbott, D

    R. Abbott, D. Boyda, Y. Fu, D. C. Hackett, G. Kanwar, F. Romero-López et al.,Variance reduction in lattice QCD observables via normalizing flows,2603.02984. 14 Machine learning for 4𝑑SU(3) lattice gauge theoriesUrs Wenger

  3. [3]

    Detmold, G

    W. Detmold, G. Kanwar, Y. Lin, P. E. Shanahan and M. L. Wagman,Exploring gauge-fixing conditions with gradient-based optimization, in41st International Symposium on Lattice Field Theory, 10, 2024.2410.03602

  4. [4]

    Wilson loops with neural networks

    V. Bellscheidt, N. Brambilla, A. S. Kronfeld and J. Mayer-Steudte,Wilson loops with neural networks,2602.02436

  5. [5]

    Spriggs, E

    T. Spriggs, E. Greplova, J. Carrasquilla and J. Nys,Accurate Ground States of SU(2) Lattice Gauge Theory in 2+1D and 3+1D,Phys. Rev. Lett.136(2026) 101902, [2509.12323]

  6. [6]

    Romiti,SU(N) lattice gauge theories with physics-informed neural networks,Phys

    S. Romiti,SU(N) lattice gauge theories with physics-informed neural networks,Phys. Rev. D 113(2026) 054511, [2510.26904]

  7. [7]

    Kanwar,Flow-based sampling for lattice field theories, in40th International Symposium on Lattice Field Theory, 1, 2024.2401.01297

    G. Kanwar,Flow-based sampling for lattice field theories, in40th International Symposium on Lattice Field Theory, 1, 2024.2401.01297

  8. [8]

    Kanwar, M

    G. Kanwar, M. S. Albergo, D. Boyda, K. Cranmer, D. C. Hackett, S. Racanière et al., Equivariant flow-based sampling for lattice gauge theory,Phys. Rev. Lett.125(2020) 121601, [2003.06413]

  9. [9]

    Boyda, G

    D. Boyda, G. Kanwar, S. Racanière, D. J. Rezende, M. S. Albergo, K. Cranmer et al., Sampling using𝑆𝑈(𝑁)gauge equivariant flows,Phys. Rev. D103(2021) 074504, [2008.05456]

  10. [10]

    Abbott et al.,Gauge-equivariant flow models for sampling in lattice field theories with pseudofermions,Phys

    R. Abbott et al.,Gauge-equivariant flow models for sampling in lattice field theories with pseudofermions,Phys. Rev. D106(2022) 074506, [2207.08945]

  11. [11]

    Abbott, et al., Normalizing flows for lattice gauge theory in arbitrary space-time dimension (2023)

    R. Abbott et al.,Normalizing flows for lattice gauge theory in arbitrary space-time dimension,2305.02402

  12. [12]

    Abbott, D

    R. Abbott, D. Boyda, D. C. Hackett, G. Kanwar, F. Romero-López, P. E. Shanahan et al., Practical applications of machine-learned flows on gauge fields,PoSLATTICE2023(2024) 011, [2404.11674]

  13. [13]

    Abbott, D

    R. Abbott, D. Boyda, G. Kanwar, F. Romero-López, D. C. Hackett, P. E. Shanahan et al., Progress in Normalizing Flows for 4d Gauge Theories,PoSLATTICE2024(2025) 066, [2502.00263]

  14. [14]

    R.Abbott, A.Botev, D.Boyda, D.C.Hackett, G.Kanwar, S.Racanièreetal.,Applicationsof flow models to the generation of correlated lattice QCD ensembles,Phys. Rev. D109(2024) 094514, [2401.10874]

  15. [15]

    Abbottet al., Eur

    R. Abbott et al.,Aspects of scaling and scalability for flow-based sampling of lattice QCD, Eur. Phys. J. A59(2023) 257, [2211.07541]

  16. [16]

    C.-H. Lai, Y. Song, D. Kim, Y. Mitsufuji and S. Ermon,The principles of diffusion models, 2025. 15 Machine learning for 4𝑑SU(3) lattice gauge theoriesUrs Wenger

  17. [17]

    L. Wang, G. Aarts and K. Zhou,Diffusion models as stochastic quantization in lattice field theory,JHEP05(2024) 060, [2309.17082]

  18. [18]

    Fukushima, S

    K. Fukushima, S. Kamata and Y. Hirono,Stochastic Quantization and Diffusion Models,J. Phys. Soc. Jap.94(2025) 031010, [2411.11297]

  19. [19]

    Q. Zhu, G. Aarts, W. Wang, K. Zhou and L. Wang,Physics-conditioned diffusion models for lattice gauge theory,JHEP03(2026) 111, [2502.05504]

  20. [20]

    Aarts, D

    G. Aarts, D. E. Habibi, L. Wang and K. Zhou,Combining complex Langevin dynamics with score-based and energy-based diffusion models,JHEP12(2025) 160, [2510.01328]

  21. [21]

    O. Vega, J. Komijani, A. El-Khadra and M. Marinkovic,Group-Equivariant Diffusion Models for Lattice Field Theory,2510.26081

  22. [22]

    A. Lou, M. Xu and S. Ermon,Scaling riemannian diffusion models, 2023

  23. [23]

    Kanwar and O

    G. Kanwar and O. Vega,Spectral Diffusion for Sampling onSU(𝑁), in42th International Symposium on Lattice Field Theory, 12, 2025.2512.19877

  24. [24]

    Aarts, D.E

    G. Aarts, D. E. Habibi, A. Ipp, D. I. Müller, T. R. Ranner, L. Wang et al.,Generalizable Equivariant Diffusion Models for Non-Abelian Lattice Gauge Theory,2601.19552

  25. [25]

    Alharazin, J.Y

    H. Alharazin, J. Y. Panteleeva and B. D. Sun,Diffusion Models for SU(2) Lattice Gauge Theory in Two Dimensions,2602.09045

  26. [26]

    Bonanno, A

    C. Bonanno, A. Nada and D. Vadacchino,Mitigating topological freezing using out-of-equilibrium simulations,JHEP04(2024) 126, [2402.06561]

  27. [27]

    Vadacchino, A

    D. Vadacchino, A. Nada and C. Bonanno,Topological susceptibility of SU(3) pure-gauge theoryfromout-of-equilibriumsimulations,PoSLATTICE2024(2025)415,[2411.00620]

  28. [28]

    Hasenbusch,Fighting topological freezing in the two-dimensional𝐶𝑃𝑁−1 model,Phys

    M. Hasenbusch,Fighting topological freezing in the two-dimensional CPN-1 model,Phys. Rev. D96(2017) 054504, [1706.04443]

  29. [29]

    Bonanno, C

    C. Bonanno, C. Bonati and M. D’Elia,Large-𝑁 𝑆𝑈(𝑁)Yang-Mills theories with milder topological freezing,JHEP03(2021) 111, [2012.14000]

  30. [30]

    Jarzynski,Nonequilibrium equality for free energy differences,Phys

    C. Jarzynski,Nonequilibrium equality for free energy differences,Phys. Rev. Lett.78(Apr,

  31. [31]

    2690–2693, [cond-mat/9610209]

  32. [32]

    Caselle, G

    M. Caselle, G. Costagliola, A. Nada, M. Panero and A. Toniato,Jarzynski’s theorem for lattice gauge theory,Phys. Rev. D94(2016) 034503, [1604.05544]

  33. [33]

    Bonanno, A

    C. Bonanno, A. Bulgarelli, E. Cellini, A. Nada, D. Panfalone, D. Vadacchino et al.,A scalable flow-based approach to mitigate topological freezing, in42th International Symposium on Lattice Field Theory, 1, 2026.2601.20708

  34. [34]

    Caselle, E

    M. Caselle, E. Cellini, A. Nada and M. Panero,Stochastic normalizing flows as non-equilibrium transformations,JHEP07(2022) 015, [2201.08862]. 16 Machine learning for 4𝑑SU(3) lattice gauge theoriesUrs Wenger

  35. [35]

    Bulgarelli, E

    A. Bulgarelli, E. Cellini and A. Nada,Sampling SU(3) pure gauge theory with Stochastic Normalizing Flows,PoSLATTICE2024(2025) 040, [2409.18861]

  36. [36]

    Bulgarelli, E

    A. Bulgarelli, E. Cellini and A. Nada,Scaling of stochastic normalizing flows in SU(3) lattice gauge theory,Phys. Rev. D111(2025) 074517, [2412.00200]

  37. [37]

    Bulgarelli, E

    A. Bulgarelli, E. Cellini, K. Jansen, S. Kühn, A. Nada, S. Nakajima et al.,Flow-Based Sampling for Entanglement Entropy and the Machine Learning of Defects,Phys. Rev. Lett. 134(2025) 151601, [2410.14466]

  38. [38]

    Scaling flow-based approaches for topology sampling in $\mathrm{SU}(3)$ gauge theory

    C. Bonanno, A. Bulgarelli, E. Cellini, A. Nada, D. Panfalone, D. Vadacchino et al.,Scaling flow-based approaches for topology sampling in SU(3) gauge theory,JHEP04(2026) 051, [2510.25704]

  39. [39]

    Fixed point actions from convolutional neural networks,

    K. Holland, A. Ipp, D. I. Müller and U. Wenger,Fixed point actions from convolutional neural networks,PoSLATTICE2023(2024) 038, [2311.17816]

  40. [40]

    Machine learning a fixed point action for SU(3) 20 gauge theory with a gauge equivariant convolutional neural network,

    K. Holland, A. Ipp, D. I. Müller and U. Wenger,Machine learning a fixed point action for SU(3) gauge theory with a gauge equivariant convolutional neural network,Phys. Rev. D110 (2024) 074502, [2401.06481]

  41. [41]

    Perfect lattice action for asymptotically free theories

    P. Hasenfratz and F. Niedermayer,Perfect lattice action for asymptotically free theories, Nucl. Phys. B414(1994) 785–814, [hep-lat/9308004]

  42. [42]

    Blatter, R

    M. Blatter, R. Burkhalter, P. Hasenfratz and F. Niedermayer,Instantons and the fixed point topological charge in the two-dimensional O(3) sigma model,Phys. Rev. D53(1996) 923–932, [hep-lat/9508028]

  43. [43]

    T. A. DeGrand, A. Hasenfratz, P. Hasenfratz and F. Niedermayer,The Classically perfect fixed point action for SU(3) gauge theory,Nucl. Phys. B454(1995) 587–614, [hep-lat/9506030]

  44. [44]

    T. A. DeGrand, A. Hasenfratz, P. Hasenfratz and F. Niedermayer,Nonperturbative tests of the fixed point action for SU(3) gauge theory,Nucl. Phys. B454(1995) 615–637, [hep-lat/9506031]

  45. [45]

    T. A. DeGrand, A. Hasenfratz, P. Hasenfratz and F. Niedermayer,Fixed point actions for SU(3) gauge theory,Phys. Lett. B365(1996) 233–238, [hep-lat/9508024]

  46. [46]

    HMC and gradient flow with machine-learned classically perfect fixed point actions,

    U. Wenger, K. Holland, A. Ipp and D. I. Müller,HMC and gradient flow with machine-learned classically perfect fixed point actions,PoSLATTICE2024(2025) 466, [2502.03315]

  47. [47]

    Machine-Learned Renormalization-Group-Improved Gauge Actions and Classically Perfect Gradient Flows,

    K. Holland, A. Ipp, D. I. Müller and U. Wenger,Machine-Learned Renormalization-Group-Improved Gauge Actions and Classically Perfect Gradient Flows, Phys. Rev. Lett.136(2026) 031901, [2504.15870]

  48. [48]

    Blatter and F

    M. Blatter and F. Niedermayer,New fixed point action for SU(3) lattice gauge theory,Nucl. Phys. B482(1996) 286–304, [hep-lat/9605017]. 17 Machine learning for 4𝑑SU(3) lattice gauge theoriesUrs Wenger

  49. [49]

    Niedermayer, P

    F. Niedermayer, P. Rufenacht and U. Wenger,Fixed point gauge actions with fat links: Scaling and glueballs,Nucl. Phys. B597(2001) 413–450, [hep-lat/0007007]

  50. [50]

    Favoni, A

    M. Favoni, A. Ipp, D. I. Müller and D. Schuh,Lattice Gauge Equivariant Convolutional Neural Networks,Phys. Rev. Lett.128(2022) 032003, [2012.12901]

  51. [51]

    Properties and uses of the Wilson flow in lattice QCD,

    M. Lüscher,Properties and uses of the Wilson flow in lattice QCD,JHEP08(2010) 071, [1006.4518]. [51]BMWcollaboration, S. Borsányi, S. Dürr, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg et al., High-precision scale setting in lattice QCD,JHEP09(2012) 010, [1203.4469]

  52. [52]

    Ramos and S

    A. Ramos and S. Sint,Symanzik improvement of the gradient flow in lattice gauge theories, Eur. Phys. J. C76(2016) 15, [1508.05552]

  53. [53]

    Ihssen, R

    F. Ihssen, R. Kapust and J. M. Pawlowski,Generative sampling with physics-informed kernels,2510.26678. 18