pith. machine review for the scientific record. sign in

arxiv: 2603.22407 · v2 · submitted 2026-03-23 · ✦ hep-ph

Recognition: no theorem link

MadNIS at NLO

Authors on Pith no claims yet

Pith reviewed 2026-05-15 00:28 UTC · model grok-4.3

classification ✦ hep-ph
keywords neural importance samplingNLO calculationsamplitude surrogatesFKS subtractionmulti-jet processesMonte Carlo integrationvariance reductionelectron-positron scattering
0
0 comments X

The pith

Combining neural importance sampling with amplitude surrogates accelerates NLO calculations while preserving precision.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that fast learned surrogates for scattering amplitudes can be paired with neural importance sampling to speed up next-to-leading-order Monte Carlo integrations. For virtual corrections the method learns a ratio to the Born matrix element and supplies calibrated uncertainties that remain reliable over the full phase space. For real-emission contributions it retains standard FKS subtraction but trains sector-conditioned surrogates on the regularized integrands. Validation on electron-positron scattering into three and four jets shows clear gains in integration speed and variance reduction. The approach keeps the full multi-channel structure and FKS sector information as conditioning inputs to the sampler.

Core claim

MadNIS at NLO trains fast amplitude surrogates and feeds them into a neural importance sampler whose channels are conditioned on the usual multi-channel mappings and FKS sectors. Virtual pieces are handled by learning a ratio to the Born matrix element together with per-point uncertainty estimates; real-emission pieces use ordinary FKS subtraction but replace the regularized integrand with a sector-specific surrogate. The resulting integrator delivers substantial speed-ups and variance reduction on e+e- to three- and four-jet final states while the calibrated uncertainties keep the precision under control across phase space.

What carries the argument

Neural importance sampler conditioned on multi-channel mappings and FKS sectors, using learned ratios of virtual amplitudes to the Born matrix element with attached uncertainty estimates.

If this is right

  • NLO cross sections for multi-jet final states can be obtained with far fewer Monte Carlo events for the same target precision.
  • The same trained surrogates can be reused across different integration channels and observables without retraining.
  • The calibrated per-point uncertainties propagate directly into the final error budget of the NLO prediction.
  • The method remains compatible with existing FKS subtraction and multi-channel infrastructure.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same surrogate-plus-neural-sampler pattern could be applied to higher-multiplicity or hadron-collider processes once the training cost is amortized.
  • Because the surrogates are fast and differentiable they open the door to gradient-based optimization of analysis cuts inside the NLO integration.
  • If the uncertainty calibration generalizes, the approach could reduce the need for large fixed-order event samples in phenomenological studies.

Load-bearing premise

The learned ratio to the Born matrix element plus its calibrated uncertainties remains accurate enough everywhere in phase space to control the final integration error.

What would settle it

Re-running the three- and four-jet validation integrals with the surrogate switched off should recover the original MadGraph or MadEvent timing and variance; any claimed speed-up or variance reduction must disappear when the surrogate is removed.

read the original abstract

We combine fast amplitude surrogates with neural importance sampling to accelerate NLO calculations. For virtual corrections, a learned ratio to the Born matrix element with calibrated uncertainties guarantees reliable precision across phase space. For real emission, we stick to the standard FKS subtraction and train sector-conditioned surrogates of the regularized integrands away from divergences. MadNIS then uses multi-channel mappings and FKS sectors as conditions. We validate our approach for electron-positron scattering to three and four jets and find significant speed-ups and variance reduction in the integration.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces MadNIS at NLO, combining fast neural amplitude surrogates with neural importance sampling to accelerate NLO QCD calculations. Virtual corrections are handled via a learned ratio of the virtual amplitude to the Born matrix element equipped with calibrated uncertainties; real-emission contributions retain standard FKS subtraction and employ sector-conditioned surrogates of the regularized integrands. Multi-channel mappings and FKS sectors serve as conditioning inputs to the neural importance sampler. The approach is validated on e+e− → 3-jet and 4-jet processes, where significant speed-ups and variance reduction are reported.

Significance. If the surrogate uncertainty calibration proves reliable and the method preserves NLO accuracy, the framework could materially reduce the computational cost of multi-jet NLO phenomenology, enabling studies at higher multiplicities or with more differential observables than are currently feasible with exact amplitudes. The reuse of established FKS subtraction together with learned surrogates is a concrete and potentially reusable advance.

major comments (2)
  1. [Abstract] Abstract and validation section: the assertion that 'a learned ratio to the Born matrix element with calibrated uncertainties guarantees reliable precision across phase space' is load-bearing for the NLO claim, yet the manuscript supplies no description of the calibration procedure, coverage diagnostics, bias tests, or extrapolation checks (especially near soft/collinear boundaries). Without these, the cancellation between virtual and real contributions cannot be shown to remain under control at the per-mille level required for NLO cross sections.
  2. [Validation] Validation for 3- and 4-jet processes: the reported speed-ups and variance reduction are presented without quantitative error budgets, direct comparisons to exact virtual amplitudes on the same phase-space points, or propagation of surrogate uncertainties into the final integrated results and differential distributions. This leaves the central claim of maintained NLO precision only partially supported.
minor comments (2)
  1. Clarify how the neural importance sampler conditions on both multi-channel mappings and individual FKS sectors; an explicit equation or pseudocode block would remove ambiguity.
  2. Add a short table comparing wall-clock time and statistical uncertainty per event against a standard MadGraph5_aMC@NLO run for the same processes.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive feedback on our manuscript. We address the major comments point by point below and will revise the manuscript to strengthen the documentation of our methods.

read point-by-point responses
  1. Referee: [Abstract] Abstract and validation section: the assertion that 'a learned ratio to the Born matrix element with calibrated uncertainties guarantees reliable precision across phase space' is load-bearing for the NLO claim, yet the manuscript supplies no description of the calibration procedure, coverage diagnostics, bias tests, or extrapolation checks (especially near soft/collinear boundaries). Without these, the cancellation between virtual and real contributions cannot be shown to remain under control at the per-mille level required for NLO cross sections.

    Authors: We agree that the calibration procedure requires explicit documentation to support the NLO accuracy claim. While the manuscript states that uncertainties are calibrated, the detailed methodology, coverage diagnostics, bias tests, and boundary checks were not included. In the revised manuscript we will add a dedicated subsection describing the calibration protocol, including how coverage is verified, bias is assessed, and extrapolation is controlled near soft/collinear limits, thereby demonstrating control of the virtual-real cancellation at the per-mille level. revision: yes

  2. Referee: [Validation] Validation for 3- and 4-jet processes: the reported speed-ups and variance reduction are presented without quantitative error budgets, direct comparisons to exact virtual amplitudes on the same phase-space points, or propagation of surrogate uncertainties into the final integrated results and differential distributions. This leaves the central claim of maintained NLO precision only partially supported.

    Authors: We acknowledge that the validation would be more convincing with quantitative error budgets and direct comparisons. In the revised version we will add tables and figures showing point-by-point comparisons of surrogate versus exact virtual amplitudes, explicit error budgets for the surrogate contributions, and the propagation of these uncertainties through the Monte Carlo integration into both total cross sections and differential distributions. revision: yes

Circularity Check

0 steps flagged

Standard FKS and external Born inputs; neural surrogates trained externally with empirical validation

full rationale

The paper combines amplitude surrogates with neural importance sampling using standard FKS subtraction for real emissions and external Born matrix elements for virtual corrections. The learned ratio surrogate for virtuals is trained on data with asserted calibrated uncertainties, but the reported speed-ups and variance reductions are presented as empirical outcomes from validation on e+e- to 3/4 jets rather than quantities that reduce by construction to the fitted parameters. No self-definitional loops, fitted inputs renamed as predictions, or load-bearing self-citation chains appear in the derivation; self-citations to prior MadNIS work support the method but do not force the central results.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract supplies no explicit free parameters, axioms, or invented entities; all technical details are omitted.

pith-pipeline@v0.9.0 · 5397 in / 983 out tokens · 31533 ms · 2026-05-15T00:28:58.902056+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Open LHC Monte Carlo Event Generation

    hep-ph 2026-05 unverdicted novelty 2.0

    A review of initiatives to make LHC Monte Carlo event generations available as open data to minimize redundant simulations and resource use.