pith. machine review for the scientific record.

arxiv: 2512.13255 · v3 · submitted 2025-12-15 · 💻 cs.LG

Recognition: no theorem link

BézierFlow: Learning Bézier Stochastic Interpolant Schedulers for Few-Step Generation

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 21:47 UTC · model grok-4.3

classification 💻 cs.LG
keywords BézierFlow · stochastic interpolant schedulers · few-step generation · diffusion models · flow models · trajectory learning · lightweight training · sampling optimization

The pith

BézierFlow learns Bézier schedulers that transform sampling trajectories, outperforming discrete timestep selection for few-step generation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents BézierFlow, a lightweight training method that optimizes sampling trajectories for pretrained diffusion and flow models by parameterizing stochastic interpolant schedulers as Bézier functions. This parameterization uses control points to automatically satisfy boundary conditions, differentiability, and monotonicity of the signal-to-noise ratio, turning the problem into learning an ordered set of points rather than discrete timesteps. The approach requires only 15 minutes of training yet delivers 2-3 times better performance when generating samples with 10 or fewer steps. By expanding the search space from isolated points to continuous curve transformations, the method consistently exceeds prior timestep-learning techniques across multiple models.

Core claim

BézierFlow represents scheduler functions as Bézier curves whose control points are learned, converting the task of selecting optimal timesteps into the task of learning continuous trajectory transformations that meet all required constraints for stochastic interpolant schedulers and thereby improve few-step sampling quality.

What carries the argument

Bézier functions parameterizing stochastic interpolant schedulers, where ordered control points enforce boundary conditions, differentiability, and SNR monotonicity.
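To make the mechanism concrete, here is a minimal sketch of a Bézier-parameterized scheduler built from ordered control points, assuming the standard Bernstein-basis construction; the function and variable names are illustrative, not the paper's code. With endpoints pinned at 0 and 1, the boundary conditions hold by construction, the curve is smooth in t, and non-decreasing control points give a non-decreasing scheduler.

```python
import math

def bezier_scheduler(control_points):
    """Degree-n Bezier function r(t) = sum_i B_{i,n}(t) * p_i over
    Bernstein basis B_{i,n}. With p_0 = 0 and p_n = 1, the boundary
    conditions r(0) = 0 and r(1) = 1 hold by construction, and r is
    infinitely differentiable in t."""
    n = len(control_points) - 1
    def r(t):
        return sum(
            math.comb(n, i) * t**i * (1 - t)**(n - i) * p
            for i, p in enumerate(control_points)
        )
    return r

# Ordered control points in [0, 1] with fixed endpoints.
points = [0.0, 0.1, 0.35, 0.7, 1.0]
r = bezier_scheduler(points)

assert abs(r(0.0) - 0.0) < 1e-12 and abs(r(1.0) - 1.0) < 1e-12
# Ordered control points imply a monotone curve, since
# r'(t) = n * sum_i B_{i,n-1}(t) * (p_{i+1} - p_i) >= 0.
vals = [r(k / 100) for k in range(101)]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
```

Learning the scheduler then amounts to optimizing the interior control points while keeping them ordered, which is what reduces the problem to "learning an ordered set of points in the time range."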

If this is right

  • Outperforms prior timestep-learning methods in sample quality at 10 or fewer NFEs across diffusion and flow models.
  • Adapts pretrained models in 15 minutes without changing their weights.
  • Expands optimization from discrete points to continuous Bézier-based trajectory transformations.
  • Maintains all scheduler constraints by construction through the choice of control points.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Continuous curve-based parameterizations may prove more expressive than discrete selection for scheduler optimization in other generative settings.
  • The same control-point learning idea could be tested with different curve families to check whether Bézier is uniquely effective.
  • Quick adaptation of this form may enable practical use in latency-sensitive applications that still rely on pretrained backbones.

Load-bearing premise

Bézier functions can represent the optimal transformation of the sampling trajectory while satisfying boundary conditions, differentiability, and monotonicity of the SNR.
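The SNR half of this premise depends on how the scheduler maps to (α_t, σ_t). As a hedged sanity check, the sketch below assumes a variance-preserving-style mapping α_t = r(t), σ_t = √(1 − r(t)²) (an illustrative choice, not necessarily the paper's exact interpolant), under which the SNR ratio ρ = α/σ is an increasing function of r, so a monotone scheduler transfers its monotonicity to the SNR.

```python
import math

# Assumed VP-style mapping: alpha_t = r(t), sigma_t = sqrt(1 - r(t)^2).
# Then rho(t) = alpha_t / sigma_t = r / sqrt(1 - r^2) is strictly
# increasing in r on (0, 1), so scheduler monotonicity implies SNR
# monotonicity under this mapping.
def snr(r_val):
    return r_val / math.sqrt(1.0 - r_val**2)

# A strictly increasing scheduler sampled on an interior grid
# (endpoints excluded so that sigma_t > 0).
ts = [k / 100 for k in range(1, 100)]
r_vals = [t**2 * (3 - 2 * t) for t in ts]  # smoothstep: monotone on (0, 1)
snr_vals = [snr(r) for r in r_vals]

assert all(a < b for a, b in zip(r_vals, r_vals[1:]))
assert all(a < b for a, b in zip(snr_vals, snr_vals[1:]))
```

For other interpolant families the r-to-SNR map would need to be checked separately, which is exactly the gap the referee report below presses on.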

What would settle it

A controlled comparison on a held-out pretrained model in which the best discrete timestep selection matches or exceeds the performance of any learned Bézier parameterization would falsify the advantage.

Figures

Figures reproduced from arXiv: 2512.13255 by Juil Koo, Minhyuk Sung, Seungwoo Yoo, Yunhong Min.

Figure 1. Illustration of sampling trajectories and Bézier functions. Left: visualizations of different sampling trajectories; while initial trajectories deviate from the target distribution, BézierFlow aligns them with the teacher's using only NFE=3. Right: examples of degree-8 Bézier functions with different arrangements of control points.
Figure 2. Qualitative comparisons of samples generated using NFEs 6 and 8 on FFHQ and AFHQv2, with UniPC as the base solver.
Figure 3. Qualitative comparisons of samples generated using NFEs 6 and 8 on CIFAR-10 and ImageNet, with RK2 as the base solver.
Figure 4. Effect of the degree of Bézier functions. Each line reports FID on CIFAR-10 across Bézier degrees n from 4 to 32; higher degrees yield lower FID, with convergence around n = 32.
Figure 5. Qualitative comparisons of samples generated using NFEs 6 and 8 on FFHQ and AFHQv2, with UniPC as the base solver.
Figure 6. Qualitative comparisons of samples generated using NFEs 6 and 8 on CIFAR-10 and ImageNet, with RK2 as the base solver.
Figure 7. Qualitative comparisons of samples generated using NFEs 6 and 8 with Stable Diffusion v3.5 (Esser et al., 2024), using RK1 and RK2 as base solvers.
Figure 8. Qualitative comparisons of 3D point cloud samples generated using NFEs 6 and 8 with PVD (Zhou et al., 2021), using iPNDM as the base solver.
Figure 9. Qualitative comparisons of layout samples generated using NFEs 6 and 8 with LayoutFlow (Guerreiro et al., 2024), using RK1 as the base solver; the rightmost column shows teacher samples from the RK45 solver.
read the original abstract

We introduce BézierFlow, a lightweight training approach for few-step generation with pretrained diffusion and flow models. BézierFlow achieves a 2-3x performance improvement for sampling with ≤ 10 NFEs while requiring only 15 minutes of training. Recent lightweight training approaches have shown promise by learning optimal timesteps, but their scope remains restricted to ODE discretizations. To broaden this scope, we propose learning the optimal transformation of the sampling trajectory by parameterizing stochastic interpolant (SI) schedulers. The main challenge lies in designing a parameterization that satisfies critical desiderata, including boundary conditions, differentiability, and monotonicity of the SNR. To effectively meet these requirements, we represent scheduler functions as Bézier functions, where control points naturally enforce these properties. This reduces the problem to learning an ordered set of points in the time range, while the interpretation of the points changes from ODE timesteps to Bézier control points. Across a range of pretrained diffusion and flow models, BézierFlow consistently outperforms prior timestep-learning methods, demonstrating the effectiveness of expanding the search space from discrete timesteps to Bézier-based trajectory transformations.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces BézierFlow, a lightweight training method for few-step sampling from pretrained diffusion and flow models. It parameterizes stochastic interpolant (SI) schedulers as Bézier curves, learning an ordered set of control points to transform the sampling trajectory. This is claimed to naturally satisfy boundary conditions, differentiability, and SNR monotonicity, reducing the problem to point learning and yielding 2-3x performance gains over prior timestep-learning methods with ≤10 NFEs after only 15 minutes of training.

Significance. If the central claims hold, the work would offer a practical advance in efficient generative sampling by expanding the search space from discrete timesteps to continuous Bézier-based trajectory transformations. The short training time and applicability across model classes could make few-step generation more accessible, provided the parameterization reliably produces valid schedulers.

major comments (2)
  1. §3.2 (Bézier parameterization): The assertion that Bézier functions with ordered control points in [0,1] automatically guarantee monotonicity of the SNR (in addition to boundary conditions and differentiability) lacks a formal derivation or explicit verification. SNR monotonicity is a derived property that depends on the specific SI interpolant and curve-to-scheduler mapping; without a proof or a post-training check on the learned points, invalid schedulers could be produced, directly undermining the validity of the reported gains.
  2. §5 (Experiments): The central claim of consistent 2-3x outperformance with ≤10 NFEs rests on assertions of superiority over timestep-learning baselines, yet no quantitative metrics, error bars, number of runs, or detailed controls are referenced. This absence makes it impossible to assess whether the Bézier expansion of the search space is responsible for the gains, or whether results are sensitive to particular model choices.
minor comments (2)
  1. Abstract: Consistently format 'Bézier' with the proper accent throughout; the current mixed rendering reduces readability.
  2. §2: Provide a brief reminder of the SNR definition for stochastic interpolants before discussing monotonicity constraints, as readers from the broader diffusion community may need it.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive comments. We agree that additional formal justification for the Bézier parameterization and more rigorous experimental reporting are needed. We will revise the manuscript to incorporate a derivation of SNR monotonicity and expanded experimental details with quantitative metrics and controls. Point-by-point responses follow.

read point-by-point responses
  1. Referee: §3.2 (Bézier parameterization): The assertion that Bézier functions with ordered control points in [0,1] automatically guarantee monotonicity of the SNR (in addition to boundary conditions and differentiability) lacks a formal derivation or explicit verification. SNR monotonicity is a derived property that depends on the specific SI interpolant and curve-to-scheduler mapping; without a proof or a post-training check on the learned points, invalid schedulers could be produced, directly undermining the validity of the reported gains.

    Authors: We acknowledge that the current manuscript presents the monotonicity property as following from the ordered control points without an explicit derivation. In the revised version we will add a short appendix section deriving that, for the SI interpolants and scheduler mapping used throughout the paper, any Bézier curve with strictly ordered control points in [0,1] produces a strictly increasing scheduler function; because the SNR is a strictly decreasing function of the scheduler value under these interpolants, monotonicity of the SNR is thereby guaranteed. We will also report a post-training verification step that confirms all learned schedulers satisfy the monotonicity condition on the evaluation benchmarks. These additions directly address the possibility of invalid schedulers. revision: yes

  2. Referee: §5 (Experiments): The central claim of consistent 2-3x outperformance with ≤10 NFEs rests on assertions of superiority over timestep-learning baselines, yet no quantitative metrics, error bars, number of runs, or detailed controls are referenced. This absence makes it impossible to assess whether the Bézier expansion of the search space is responsible for the gains, or whether results are sensitive to particular model choices.

    Authors: We agree that the experimental reporting is insufficient for rigorous assessment. In the revision we will expand Section 5 to include: (i) full FID (or equivalent) tables with mean and standard deviation over at least three independent runs per method and model, (ii) explicit statement of the number of runs and random seeds, (iii) additional controls that isolate the effect of the Bézier parameterization versus the baseline timestep optimization under identical training budgets and model checkpoints, and (iv) results across the full set of pretrained diffusion and flow models mentioned in the abstract. These changes will allow readers to evaluate both the magnitude and robustness of the reported gains. revision: yes

Circularity Check

0 steps flagged

No circularity: parameterization and learning are independent of outputs

full rationale

The paper selects Bézier curves as a parameterization for SI schedulers because their control points are asserted to enforce boundary conditions, differentiability, and SNR monotonicity by the mathematical properties of the curve family itself. Control points are then optimized via a training objective that measures downstream sampling quality on pretrained models. This choice does not define the target scheduler in terms of the learned points, nor does any equation reduce a performance metric to a fitted quantity by construction. No self-citations appear in the provided text as load-bearing premises, and the method does not rename or smuggle in prior results. The derivation therefore remains self-contained: the search space expansion is a modeling decision whose validity is tested empirically rather than tautologically.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

The central claim rests on the assumption that Bézier curves inherently satisfy the required scheduler properties and that learning their control points yields a superior trajectory transformation.

free parameters (1)
  • Bézier control points
    Positions of the ordered control points are learned during the short training procedure to define the scheduler function.
axioms (1)
  • domain assumption: Bézier functions satisfy boundary conditions, differentiability, and monotonicity of the SNR when used as scheduler functions
    Invoked to justify the parameterization choice over direct timestep selection.

pith-pipeline@v0.9.0 · 5522 in / 1155 out tokens · 30136 ms · 2026-05-16T21:47:31.541793+00:00 · methodology

discussion (0)

