Pith · machine review for the scientific record

arxiv: 2601.21764 · v2 · submitted 2026-01-29 · 🧮 math.NA · cs.NA · math.OC

Recognition: 2 Lean theorem links

Solving Hamilton-Jacobi equations by minimizing residuals of monotone discretizations

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 09:39 UTC · model grok-4.3

classification 🧮 math.NA · cs.NA · math.OC
keywords Hamilton-Jacobi equations · monotone finite differences · residual minimization · viscosity solutions · a posteriori error estimates · numerical optimization · Eikonal equations

The pith

Minimizing the residual of monotone finite-difference schemes yields the exact discrete solution to Hamilton-Jacobi equations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a method to solve Hamilton-Jacobi equations by minimizing the least-squares residual of monotone finite-difference discretizations. Under monotonicity conditions, every critical point of this loss functional is the unique global minimizer and equals the solution of the discrete scheme. This supplies explicit a posteriori error bounds relating the approximation error directly to the residual size. The same framework covers time-dependent problems with implicit time stepping and second-order elliptic or parabolic equations. When combined with the Barles-Souganidis convergence theorem, the computed solutions approach the unique viscosity solution as the mesh is refined.
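The mechanism can be sketched on the 1D Eikonal equation |u'| = 1 on (0,1) with u(0) = u(1) = 0, whose viscosity solution is the distance function min(x, 1-x). The scheme, step size, and iteration count below are illustrative choices, not the paper's setup: a Rouy-Tourin-style monotone upwind residual is minimized by plain gradient descent, and the iterate approaches the discrete solution as the residual shrinks.

```python
import numpy as np

def residual(u, dx):
    # Monotone upwind (Rouy-Tourin-style) residual for |u'| = 1, u(0) = u(1) = 0.
    ub = np.concatenate(([0.0], u, [0.0]))      # pad with boundary values
    left = (ub[1:-1] - ub[:-2]) / dx            # backward difference
    right = (ub[1:-1] - ub[2:]) / dx            # minus forward difference
    return np.maximum(np.maximum(left, right), 0.0) - 1.0

def grad_loss(u, dx):
    # Gradient of L(u) = 0.5 * ||r(u)||^2 through the active branch of the max.
    ub = np.concatenate(([0.0], u, [0.0]))
    left = (ub[1:-1] - ub[:-2]) / dx
    right = (ub[1:-1] - ub[2:]) / dx
    r = np.maximum(np.maximum(left, right), 0.0) - 1.0
    use_left = (left >= right) & (left >= 0.0)
    use_right = (~use_left) & (right >= 0.0)
    g = np.zeros_like(ub)
    active = use_left | use_right
    g[1:-1] += np.where(active, r / dx, 0.0)    # d r_i / d u_i = 1/dx when active
    g[:-2] -= np.where(use_left, r / dx, 0.0)   # d r_i / d u_{i-1} = -1/dx
    g[2:] -= np.where(use_right, r / dx, 0.0)   # d r_i / d u_{i+1} = -1/dx
    return g[1:-1]

N = 20
dx = 1.0 / N
x = dx * np.arange(1, N)                        # interior grid points
u = x * (1.0 - x)                               # smooth initial guess
for _ in range(20000):
    u -= 0.2 * dx**2 * grad_loss(u, dx)         # step ~ dx^2 for stability

exact = np.minimum(x, 1.0 - x)                  # distance function = viscosity solution
err = np.max(np.abs(u - exact))
res = np.linalg.norm(residual(u, dx))
```

In this toy run the residual norm and the error against the discrete (here also exact) solution shrink together, which is the well-posedness claim in miniature: the only place gradient descent can stall is at zero residual.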

Core claim

We establish a well-posedness and error-estimation framework that solves Hamilton-Jacobi equations by minimizing the least-squares residual of monotone finite-difference discretizations. Under suitable monotonicity conditions, every critical point of the residual loss functional is the unique global minimizer and coincides with the solution of the discrete scheme. A posteriori error estimates bound the approximation error by the magnitude of the residual with explicit, computable constants. The analysis extends to time-dependent problems with implicit discretization of the time derivatives. A spectral analysis shows that the condition number scales as O(Δx^{-1}) for proper schemes.

What carries the argument

The residual loss functional built from monotone finite-difference discretizations of the Hamilton-Jacobi equation.

If this is right

  • A posteriori error estimates bound the approximation error by the magnitude of the residual with explicit, computable constants.
  • The analysis extends to time-dependent problems with implicit discretization of the time derivatives.
  • The condition number of the linearized system scales as O(Δx^{-1}) for proper schemes and as O(exp(Δx^{-1})) under uniform ellipticity.
  • Combined with the Barles-Souganidis convergence theorem for monotone and consistent schemes, the obtained solutions converge to the unique viscosity solution as the mesh is refined.
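The O(Δx^{-1}) conditioning claim for proper schemes can be probed on a toy linearization. The bidiagonal matrix below is an illustrative stand-in for the Jacobian of a first-order upwind scheme, not the paper's linearized system; doubling the node count n = 1/Δx roughly doubles the 2-norm condition number.

```python
import numpy as np

def upwind_jacobian(n):
    # Bidiagonal linearization of a first-order upwind scheme on n nodes:
    # +1/dx on the diagonal, -1/dx on the upwind (lower) off-diagonal.
    dx = 1.0 / n
    return (np.eye(n) - np.diag(np.ones(n - 1), -1)) / dx

conds = {n: np.linalg.cond(upwind_jacobian(n)) for n in (20, 40, 80)}
ratio_40_20 = conds[40] / conds[20]
ratio_80_40 = conds[80] / conds[40]
```

Note that the uniform 1/dx scaling cancels in the condition number, so the growth is driven entirely by the triangular structure: the inverse of I - S is a dense lower-triangular matrix of ones whose norm grows linearly in n.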

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The residual-minimization approach may extend naturally to other nonlinear PDEs that admit monotone consistent discretizations.
  • Progressive multi-level warm-start strategies could be tested on problems with discontinuities to measure practical scaling beyond the spectral analysis.
  • The same framework offers a route to high-dimensional stochastic control problems whose value functions satisfy Hamilton-Jacobi-Isaacs equations with second-order diffusion.

Load-bearing premise

The finite-difference schemes must satisfy monotonicity conditions so that critical points of the residual are exactly the discrete solutions.

What would settle it

A numerical example in which a critical point of the residual loss functional differs from the solution of the corresponding monotone discrete scheme would falsify the main well-posedness result.

Figures

Figures reproduced from arXiv: 2601.21764 by Carlos Esteve-Yagüe, Olivier Bokanowski, Richard Tsai.

Figure 1
Figure 1: Evolution of the loss function L(u) := ‖R(u)‖₂² with different choices of Δx, from 0.05 to 0.00625. Full gradient descent is implemented in the space of grid functions ℝ^{N+1}, with N = 1/Δx. On the left, the evolution of the loss on a fixed grid, i.e. with constant Δx; on the right, the loss evolution on a varying grid with decreasing values of Δx. Every time the grid is refined, the grid f… view at source ↗
Figure 2
Figure 2: On the left, the solution u associated to the experiments in [PITH_FULL_IMAGE:figures/full_fig_p023_2.png] view at source ↗
Figure 3
Figure 3: Summary of the experiments described in section 5.1.2, for the 1D eikonal equation. [PITH_FULL_IMAGE:figures/full_fig_p024_3.png] view at source ↗
Figure 4
Figure 4: The zero-level set of the solution at time [PITH_FULL_IMAGE:figures/full_fig_p026_4.png] view at source ↗
Figure 5
Figure 5: Above: DNN scheme; below: reference solution. [PITH_FULL_IMAGE:figures/full_fig_p027_5.png] view at source ↗
read the original abstract

We establish a well-posedness and error-estimation framework that solves Hamilton-Jacobi equations by minimizing the least-squares residual of monotone finite-difference discretizations. This approach also applies naturally to second-order elliptic and parabolic problems. We prove that, under suitable monotonicity conditions, every critical point of the residual loss functional is the unique global minimizer and coincides with the solution of the discrete scheme. We derive a posteriori error estimates that bound the approximation error by the magnitude of the residual with explicit, computable constants, and extend the full analysis to time-dependent problems with implicit discretization of the time derivatives. A spectral analysis of the linearized system shows that the condition number scales as O(Δx^{-1}) for proper schemes, and as O(exp(Δx^{-1})) under a uniform ellipticity condition. These results quantify the increasing difficulty of solving the optimization problem on finer meshes, and motivates a progressive multi-level warm-start strategy using Artificial Neural Networks. Combined with the convergence theorem of Barles and Souganidis for monotone and consistent schemes, our results guarantee that the solutions obtained converge to the unique viscosity solution as the mesh is refined. Numerical experiments demonstrate the scalability of the approach to high-dimensional Eikonal equations, level-set problems, and Hamilton-Jacobi-Isaacs equations with genuine second-order diffusion arising from stochastic differential games.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper develops a framework for solving Hamilton-Jacobi equations (and extensions to elliptic/parabolic problems) by minimizing the least-squares residual of monotone finite-difference discretizations. It proves well-posedness by showing that under suitable monotonicity every critical point of the residual loss L(u)=½‖r(u)‖² is the unique global minimizer and coincides with the discrete solution, derives a posteriori error estimates bounding approximation error by residual magnitude with explicit constants, extends the analysis to time-dependent problems, provides spectral analysis of the linearized system (condition number O(Δx^{-1}) or worse), and combines with the Barles-Souganidis theorem to guarantee convergence to the viscosity solution as the mesh is refined. Numerical experiments demonstrate scalability to high-dimensional Eikonal, level-set, and HJ-Isaacs problems.

Significance. If the central claims hold, the work provides a theoretically grounded optimization-based alternative for HJ equations that exploits monotonicity to ensure a benign loss landscape with unique minimizer at the discrete solution. The explicit a posteriori bounds and conditioning analysis are practically useful, and the combination with Barles-Souganidis yields convergence guarantees. The approach is particularly relevant for high-dimensional problems where direct solvers struggle, and the suggested multi-level warm-start strategy with ANNs addresses the quantified increase in difficulty on fine meshes.

major comments (2)
  1. [§3] §3 (well-posedness theorem): The claim that every critical point of L(u) coincides with the unique discrete solution requires proving that J(u)^T r(u)=0 implies r(u)=0 globally for arbitrary grid functions u. Monotonicity guarantees a unique zero of r and typically makes J an M-matrix at that point, but the argument must also establish that J(u) has full range (or is nonsingular) away from the solution to exclude spurious critical points; the provided spectral analysis focuses on conditioning at the solution and does not explicitly address this global property.
  2. [§4] §4 (a posteriori estimates): The error bounds in the main theorem relating ‖u - u_h‖ to ‖r(u_h)‖ rely on explicit computable constants derived from monotonicity; these constants should be shown to remain bounded independently of Δx (or their growth rate quantified), as degradation would limit utility on refined meshes where the optimization difficulty already increases.
minor comments (2)
  1. [Abstract] Abstract: the phrase 'and motivates a progressive multi-level warm-start strategy' contains a subject-verb agreement issue ('motivates' should be 'motivate').
  2. [Numerical experiments] Numerical experiments section: figure captions and text should specify the exact optimization algorithm, tolerance, and initialization strategy used, along with the precise dimensions tested, to support reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive report. The comments highlight important points regarding the rigor of the well-posedness argument and the mesh dependence of the a posteriori constants. We address each major comment below and will incorporate the necessary clarifications and additions into the revised manuscript.

read point-by-point responses
  1. Referee: [§3] §3 (well-posedness theorem): The claim that every critical point of L(u) coincides with the unique discrete solution requires proving that J(u)^T r(u)=0 implies r(u)=0 globally for arbitrary grid functions u. Monotonicity guarantees a unique zero of r and typically makes J an M-matrix at that point, but the argument must also establish that J(u) has full range (or is nonsingular) away from the solution to exclude spurious critical points; the provided spectral analysis focuses on conditioning at the solution and does not explicitly address this global property.

    Authors: We agree that the global nonsingularity of J(u) away from the solution requires explicit justification to rule out spurious critical points. The proof of Theorem 3.1 uses the strict monotonicity of the residual operator r to show that L is strictly convex with a unique minimizer at the discrete solution, which already implies that any critical point must satisfy r(u)=0. However, to strengthen the argument as suggested, we will insert a new lemma (Lemma 3.2) proving that J(u) remains nonsingular for all grid functions u under the monotonicity assumptions (by showing it is a strictly diagonally dominant M-matrix with positive diagonal and nonpositive off-diagonals). The spectral analysis in §3.3 will be referenced only for the conditioning at the solution, while the new lemma handles the global property. This revision will be included in the next version. revision: yes

  2. Referee: [§4] §4 (a posteriori estimates): The error bounds in the main theorem relating ‖u - u_h‖ to ‖r(u_h)‖ rely on explicit computable constants derived from monotonicity; these constants should be shown to remain bounded independently of Δx (or their growth rate quantified), as degradation would limit utility on refined meshes where the optimization difficulty already increases.

    Authors: We thank the referee for this observation. The constants appearing in Theorem 4.1 are derived from the Lipschitz constant of the Hamiltonian and the monotonicity parameters of the scheme; for consistent monotone discretizations these quantities are independent of Δx and depend only on the problem data (domain, Hamiltonian, boundary conditions). We will add a short remark immediately after the statement of Theorem 4.1 explicitly confirming that the constants remain uniformly bounded as Δx → 0, with the explicit bound stated in terms of the problem data alone. This clarification will also be cross-referenced in the discussion of optimization difficulty on fine meshes. revision: yes
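The sign structure invoked in the first response (positive diagonal, nonpositive off-diagonals, strict diagonal dominance) can be spot-checked numerically. The scheme below is an illustrative stand-in, not the authors' Lemma 3.2: a "proper" monotone upwind residual for the 1D Eikonal equation with a δ·u zeroth-order term, whose finite-difference Jacobian at a generic grid function should exhibit exactly that M-matrix sign pattern.

```python
import numpy as np

def residual(u, dx, delta=0.1):
    # "Proper" monotone upwind residual for |u'| = 1 with zero Dirichlet data,
    # plus a delta*u zeroth-order term (illustrative stand-in scheme).
    ub = np.concatenate(([0.0], u, [0.0]))
    left = (ub[1:-1] - ub[:-2]) / dx            # backward difference
    right = (ub[1:-1] - ub[2:]) / dx            # minus forward difference
    return np.maximum(np.maximum(left, right), 0.0) - 1.0 + delta * u

n, dx, h = 15, 1.0 / 16, 1e-7
rng = np.random.default_rng(0)
u = rng.uniform(0.2, 1.0, size=n)               # generic positive grid function
J = np.empty((n, n))
for j in range(n):                               # forward-difference Jacobian
    e = np.zeros(n)
    e[j] = h
    J[:, j] = (residual(u + e, dx) - residual(u, dx)) / h

diag = np.diag(J)
offdiag = J - np.diag(diag)                      # off-diagonal part only
```

At this test point the diagonal entries are δ or 1/Δx + δ (hence strictly positive), the off-diagonals are 0 or -1/Δx, and each row is strictly diagonally dominant with margin δ, matching the sign structure the proposed lemma asserts.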

Circularity Check

0 steps flagged

No circularity detected; central claims rest on standard monotonicity properties and the external Barles-Souganidis theorem.

full rationale

The paper's key result—that critical points of the residual loss coincide with the unique discrete solution under monotonicity—follows from the definition of the residual r(u) for monotone finite-difference schemes and the fact that monotonicity implies a unique zero of r. This is combined with the external Barles-Souganidis convergence theorem for viscosity solutions, neither of which reduces to a self-referential fit, ansatz, or self-citation chain within the paper. The spectral analysis of conditioning at the solution and a posteriori error bounds are derived directly from the scheme properties without redefining inputs as outputs. No load-bearing step equates a prediction to its own fitted parameter or imports uniqueness solely via overlapping-author citations.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The claims rest on standard numerical-analysis assumptions for monotone consistent schemes and on the Barles-Souganidis convergence theorem; no free parameters or new entities are introduced.

axioms (2)
  • domain assumption Finite-difference schemes satisfy monotonicity conditions
    Invoked to guarantee that every critical point of the residual is the unique discrete solution.
  • standard math Barles-Souganidis convergence theorem for monotone and consistent schemes
    Used to conclude that the discrete solutions converge to the unique viscosity solution as the mesh is refined.

pith-pipeline@v0.9.0 · 5558 in / 1316 out tokens · 24224 ms · 2026-05-16T09:39:05.836332+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

18 extracted references · 18 canonical work pages

  1. [1] M. Bardi and S. Osher. The nonconvex multidimensional Riemann problem for Hamilton–Jacobi equations. SIAM Journal on Mathematical Analysis, 22(2):344–351, 1991.
  2. [2] G. Barles and P. E. Souganidis. Convergence of approximation schemes for fully nonlinear second order equations. Asymptotic Analysis, 4(3):271–283, 1991.
  3. [3] P. Chen, J. Darbon, and T. Meng. Lax–Oleinik-type formulas and efficient algorithms for certain high-dimensional optimal control problems. Communications on Applied Mathematics and Computation, 6(2):1428–1471, 2024.
  4. [4] Y. T. Chow, J. Darbon, S. Osher, and W. Yin. Algorithm for overcoming the curse of dimensionality for state-dependent Hamilton–Jacobi equations. Journal of Computational Physics, 387:376–409, 2019.
  5. [5] J. Darbon and S. Osher. Algorithms for overcoming the curse of dimensionality for certain Hamilton–Jacobi equations arising in control theory and elsewhere. Research in the Mathematical Sciences, 3(1):19, 2016.
  6. [6] C. Esteve-Yagüe, R. Tsai, and A. Massucco. Finite-difference least square methods for solving Hamilton–Jacobi equations using neural networks. Journal of Computational Physics, 524:113721, 2025.
  7. [7] J. Hadamard. Sur les transformations ponctuelles. Bulletin de la Société Mathématique de France, 34:71–84, 1906.
  8. [8] K. Ito, C. Reisinger, and Y. Zhang. A neural network-based policy iteration algorithm with global H²-superlinear convergence for stochastic games on domains. Foundations of Computational Mathematics, 21(2):331–374, 2021.
  9. [9] C.-Y. Kao, S. Osher, and Y.-H. Tsai. Fast sweeping methods for static Hamilton–Jacobi equations. SIAM Journal on Numerical Analysis, 42(6):2612–2632, 2005.
  10. [10] Y. Park and S. Osher. Neural implicit solution formula for efficiently solving Hamilton–Jacobi equations. SIAM Journal on Scientific Computing, 47(6):C1223–C1263, 2025.
  11. [11] B. H. Pourciau. Hadamard's theorem for locally Lipschitzian maps. Journal of Mathematical Analysis and Applications, 85(1):279–285, 1982.
  12. [12] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
  13. [13] E. Rouy and A. Tourin. A viscosity solutions approach to shape-from-shading. SIAM Journal on Numerical Analysis, 29(3):867–884, 1992.
  14. [14] J. A. Sethian. Fast marching methods. SIAM Review, 41(2):199–235, 1999.
  15. [15] J. A. Sethian and A. Vladimirsky. Ordered upwind methods for static Hamilton–Jacobi equations. Proceedings of the National Academy of Sciences, 98(20):11069–11074, 2001.
  16. [16] Y.-H. R. Tsai, L.-T. Cheng, S. Osher, and H.-K. Zhao. Fast sweeping algorithms for a class of Hamilton–Jacobi equations. SIAM Journal on Numerical Analysis, 41(2):673–694, 2003.
  17. [17] J. N. Tsitsiklis. Efficient algorithms for globally optimal trajectories. IEEE Transactions on Automatic Control, 40(9):1528–1538, 1995.
  18. [18] H. Zhao. A fast sweeping method for eikonal equations. Mathematics of Computation, 74(250):603–627, 2005.