pith. machine review for the scientific record.

arxiv: 2511.18998 · v3 · submitted 2025-11-24 · 🧮 math.OC · cs.NA · math.NA

Recognition: 2 theorem links

· Lean Theorem

A trust-region funnel algorithm for gray-box optimization

Authors on Pith · no claims yet

Pith reviewed 2026-05-17 05:31 UTC · model grok-4.3

classification 🧮 math.OC · cs.NA · math.NA

keywords gray-box optimization · trust-region methods · funnel algorithm · reduced models · global convergence · first-order critical point · black-box models · approximation error bound
0 comments

The pith

A trust-region funnel algorithm maintains a monotonically decreasing upper bound on reduced model approximation error for gray-box optimization and proves global convergence to a first-order critical point.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to establish that trust-region methods for gray-box optimization can be simplified by swapping the usual filter for a uni-dimensional funnel that enforces a steadily tightening upper bound on the error of local reduced models for black-box parts. A sympathetic reader would care because gray-box problems mix known algebraic equations with derivative-free black-box simulations, and current trust-region approaches demand heavy parameter tuning that limits their use in engineering applications. If the funnel rules succeed in keeping the error bound decreasing while still ensuring objective progress, the result is a cleaner algorithm with a convergence guarantee that supports multiple reduced model types and both filter and funnel globalization options.

Core claim

The authors propose a trust-region funnel algorithm for gray-box optimization that replaces the filter acceptance criterion with a uni-dimensional funnel. This funnel maintains a monotonically decreasing upper bound on the approximation error of the local black-box reduced models. They prove global convergence of the algorithm to a first-order critical point. Benchmark tests indicate that the new method achieves performance comparable to or better than the classical trust-region filter method.

What carries the argument

The uni-dimensional funnel, which replaces the filter acceptance criterion and maintains a monotonically decreasing upper bound on the approximation error of local black-box reduced models via specific update rules.
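The update mechanics described above can be sketched as follows, based on the convex-combination rule quoted in the paper's Appendix A (φ⁽ᵏ⁺¹⁾ = (1 − κ_f)θ⁽ᵏ⁺¹⁾ + κ_f φ⁽ᵏ⁾, with trial points admitted only when θ⁽ᵏ⁺¹⁾ ≤ τ φ⁽ᵏ⁾). The parameter values and function names are illustrative, not the authors' defaults.

```python
def funnel_step(phi_k, theta_next, kappa_f=0.9, tau=0.5):
    """One funnel acceptance/update step (illustrative sketch).

    phi_k      -- current funnel level: upper bound on reduced-model error
    theta_next -- measured approximation error at the trial point
    kappa_f, tau in (0, 1) -- funnel parameters (placeholder values)
    """
    if theta_next > tau * phi_k:
        # Trial point violates the funnel test: reject, bound unchanged.
        return phi_k, False
    # Accepted: the convex combination pulls the bound strictly down,
    # since (1 - kappa_f) * tau + kappa_f < 1 whenever tau < 1.
    phi_next = (1 - kappa_f) * theta_next + kappa_f * phi_k
    return phi_next, True

phi, accepted = funnel_step(1.0, 0.4)   # 0.4 <= 0.5 * 1.0, so accepted
# phi is now (1 - 0.9) * 0.4 + 0.9 * 1.0 = 0.94 < 1.0
```

Because the new level is a convex combination of a value below τ·φ and φ itself, the bound can only move down on accepted steps, which is exactly the monotonicity the funnel is meant to enforce.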

If this is right

  • The algorithm reaches a first-order critical point from any starting point under the stated assumptions.
  • The method works with multiple forms of reduced models and both filter and funnel globalization strategies.
  • Benchmark results show performance comparable to or better than the trust-region filter method.
  • An open-source implementation supports direct use on gray-box problems mixing algebraic and black-box components.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the error bound holds reliably, the funnel structure could be adapted to other trust-region solvers that mix known models with black-box elements.
  • Applying the algorithm to problems with higher-dimensional or noisier black-box components would test whether the monotonic bound remains practical outside the reported benchmarks.
  • The uni-dimensional funnel may offer a template for simplifying acceptance mechanisms in related derivative-free or gray-box methods where error control is central.

Load-bearing premise

The funnel update rules can keep the upper bound on reduced model approximation error monotonically decreasing while still allowing sufficient objective decrease at each accepted step.
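Granting that premise, the decrease is in fact geometric. A minimal numeric check, using the convex-combination update quoted in the paper's Appendix A with illustrative (not the authors') parameter values:

```python
# If every accepted step satisfies theta^(k+1) <= tau * phi^(k), the
# Appendix A update contracts the funnel level by a fixed factor
# q = 1 - (1 - tau) * (1 - kappa_f) < 1. Parameter values are illustrative.
kappa_f, tau = 0.9, 0.5
q = 1 - (1 - tau) * (1 - kappa_f)      # contraction factor, 0.95 here

phi = 1.0
history = [phi]
for _ in range(200):
    theta_next = tau * phi             # worst case the acceptance test allows
    phi = (1 - kappa_f) * theta_next + kappa_f * phi
    history.append(phi)

# The sequence is strictly decreasing and phi^(k) <= q**k -> 0.
assert all(b < a for a, b in zip(history, history[1:]))
```

The contraction factor q approaches 1 as κ_f → 1, so the bound still shrinks, only slowly; the open question is whether sufficient objective decrease survives alongside this contraction, not the contraction itself.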

What would settle it

A specific gray-box test problem or iteration sequence in which the upper bound on reduced model error increases under the stated funnel rules, or in which the algorithm stops making progress without reaching a first-order critical point.

read the original abstract

Gray-box optimization, where parts of optimization problems are represented by algebraic models while others are treated as black-box models lacking analytic derivatives, remains a challenge. Trust-region (TR) methods provide a robust framework for gray-box problems through local reduced models (RMs) for black-box components, but they are complex and require extensive parameter tuning. Motivated by recent advances in funnel-based convergence theory for nonlinear optimization, we propose a novel TR funnel algorithm for gray-box optimization, replacing the filter acceptance criterion with a uni-dimensional funnel, maintaining a monotonically decreasing upper bound on approximation error of local black-box RMs. A global convergence proof to a first-order critical point is established. The algorithm, implemented open-source in Pyomo, supports multiple RM forms and globalization strategies (filter or funnel). Benchmark tests show the TR funnel algorithm achieves comparable and often improved performance relative to the classical TR filter method, thus providing a simpler, effective alternative for gray-box optimization.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes a trust-region funnel algorithm for gray-box optimization problems, in which algebraic components are treated exactly while black-box components are approximated via local reduced models (RMs). The filter acceptance criterion is replaced by a uni-dimensional funnel that enforces a monotonically decreasing upper bound on RM approximation error. A global convergence proof to a first-order critical point is claimed, the method is implemented open-source in Pyomo, and benchmark comparisons indicate performance that is comparable to or better than the classical TR filter approach.

Significance. If the claimed convergence result is valid, the work supplies a simpler, lower-parameter alternative to filter-based TR methods for mixed algebraic/black-box problems while retaining global convergence guarantees. The open-source Pyomo implementation and empirical benchmarks constitute concrete strengths that would facilitate adoption and further testing.

major comments (2)
  1. [§4] §4 (Convergence Analysis), funnel update rules: the proof that every limit point is first-order critical rests on the uni-dimensional funnel simultaneously maintaining a strictly monotonically decreasing upper bound on RM approximation error and delivering sufficient objective decrease at accepted steps. It is not evident from the stated rules how this holds for black-box functions with high curvature without additional assumptions (e.g., uniform Lipschitz continuity of the black-box Hessian); a concrete counter-example or explicit bound derivation would be needed to close the argument.
  2. [§3.2] §3.2, Eq. (12)–(15): the funnel parameter update that replaces the filter is presented as independent of prior fitted quantities, yet the error-bound maintenance appears to depend on the specific choice of RM form and the radius update; clarification is required on whether the decrease inequality remains valid when the black-box curvature violates the implicit smoothness used to derive the funnel contraction.
minor comments (2)
  1. [Numerical Results] Table 1 and Figure 3: the benchmark exclusion criteria and the precise definition of 'improved performance' (e.g., function evaluations or final objective value) should be stated explicitly so that the cross-method comparison is reproducible.
  2. [Preliminaries] Notation: the symbol for the funnel upper bound is introduced without a dedicated definition paragraph; adding a short table of symbols would improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading of our manuscript and the constructive comments. We address each major comment below and indicate the revisions made to strengthen the presentation of the convergence analysis.

read point-by-point responses
  1. Referee: [§4] §4 (Convergence Analysis), funnel update rules: the proof that every limit point is first-order critical rests on the uni-dimensional funnel simultaneously maintaining a strictly monotonically decreasing upper bound on RM approximation error and delivering sufficient objective decrease at accepted steps. It is not evident from the stated rules how this holds for black-box functions with high curvature without additional assumptions (e.g., uniform Lipschitz continuity of the black-box Hessian); a concrete counter-example or explicit bound derivation would be needed to close the argument.

    Authors: The convergence analysis in Section 4 establishes that the funnel maintains a monotonically decreasing upper bound on the RM error while ensuring sufficient model decrease at accepted steps. The proof proceeds by showing that the trust-region radius remains bounded away from zero only if the limit point satisfies the first-order criticality condition, with the funnel contraction factor chosen to dominate the local approximation error. We do not invoke a global Lipschitz assumption on the black-box Hessian; instead, the local quadratic or higher-order RM fitting within each trust region, combined with the radius update, ensures the error bound holds locally. To make this explicit, we have added a lemma in the revised Section 4 deriving the bound on the approximation error in terms of the current funnel parameter and trust-region radius, confirming that the decrease inequality is preserved without requiring uniform curvature bounds. revision: yes

  2. Referee: [§3.2] §3.2, Eq. (12)–(15): the funnel parameter update that replaces the filter is presented as independent of prior fitted quantities, yet the error-bound maintenance appears to depend on the specific choice of RM form and the radius update; clarification is required on whether the decrease inequality remains valid when the black-box curvature violates the implicit smoothness used to derive the funnel contraction.

    Authors: The funnel parameter updates in Equations (12)–(15) are intentionally formulated to depend only on the current objective value, model prediction, and the previous funnel level, preserving generality across RM forms. The error-bound maintenance is guaranteed by the standard trust-region assumption that the RM approximates the black-box function with error O(Δ²) inside the trust region of radius Δ. When curvature is high, the radius is reduced by the algorithm, which in turn tightens the funnel contraction to maintain the inequality. We have inserted a clarifying paragraph after Equation (15) that explicitly states the local smoothness implicit in the RM construction and verifies that the decrease inequality continues to hold under the algorithm's radius adjustment rule. revision: yes
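The O(Δ²) error assumption invoked in this response can be illustrated with a toy one-dimensional example: for a smooth stand-in black box, the worst-case error of a fully linear reduced model inside a trust region of radius Δ shrinks roughly fourfold when Δ is halved. The function, radii, and sampling below are hypothetical and only illustrate the scaling, not the paper's RM construction.

```python
import math

def blackbox(x):
    # Hypothetical smooth simulation output (no analytic derivatives used).
    return math.exp(x)

def linear_rm(x, x0, h=1e-6):
    # Fully linear reduced model built from a finite-difference slope at x0.
    g = (blackbox(x0 + h) - blackbox(x0 - h)) / (2 * h)
    return blackbox(x0) + g * (x - x0)

def worst_error(delta, x0=0.0, n=201):
    # Worst model error over a grid covering the trust region [x0-d, x0+d].
    xs = [x0 - delta + 2 * delta * i / (n - 1) for i in range(n)]
    return max(abs(blackbox(x) - linear_rm(x, x0)) for x in xs)

e_big, e_small = worst_error(0.1), worst_error(0.05)
ratio = e_big / e_small   # close to 4: worst error scales like delta**2
```

Halving Δ roughly quarters the worst-case error, which is the mechanism the authors appeal to: when curvature is high the radius update shrinks Δ, and the quadratic error decay lets the funnel inequality survive.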

Circularity Check

0 steps flagged

No circularity: convergence proof rests on algorithm rules and standard TR analysis, not self-definition or fitted inputs

full rationale

The paper defines a uni-dimensional funnel to replace the filter, states update rules that enforce a monotonically decreasing upper bound on RM approximation error, and then proves global convergence to first-order criticality using the standard trust-region decrease condition plus the error bound vanishing at limit points. No equation reduces the claimed bound or the criticality result to a fitted parameter or prior self-citation by construction; the funnel mechanics and proof are presented as self-contained once the update rules are accepted. The abstract and method sections treat the bound maintenance as a consequence of the new acceptance criterion rather than an input assumption that is redefined as output.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on standard assumptions from nonlinear optimization theory regarding the existence of first-order critical points and the ability to construct local reduced models with controllable approximation error for black-box components.

axioms (2)
  • domain assumption Black-box components admit local reduced models whose approximation error can be bounded and controlled monotonically.
    Invoked to support the funnel's monotonically decreasing upper bound on error.
  • standard math The objective function satisfies conditions allowing sufficient decrease within the trust region.
    Standard assumption for trust-region convergence proofs.

pith-pipeline@v0.9.0 · 5473 in / 1312 out tokens · 37562 ms · 2026-05-17T05:31:27.157681+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

46 extracted references · 46 canonical work pages · 1 internal anchor

  1. [1]

    A trust region filter method for glass box/black box optimization,

    J. P. Eason and L. T. Biegler, “A trust region filter method for glass box/black box optimization,” AIChE Journal, vol. 62, no. 9, pp. 3124–3136, 2016

  2. [2]

    CONOPT: A GRG code for large sparse dynamic nonlinear optimization problems,

    A. Drud, “CONOPT: A GRG code for large sparse dynamic nonlinear optimization problems,” Math Program, vol. 31, no. 2, pp. 153–191, 1985, doi: 10.1007/BF02591747

  3. [3]

    B. A. Murtagh and M. A. Saunders, MINOS 5.0 user’s guide, vol. 83, no. 20. Systems Optimization Laboratory, Department of Operations Research, Stanford …, 1983

  4. [4]

    SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization,

    P. E. Gill, W. Murray, and M. A. Saunders, “SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization,” SIAM Review, vol. 47, no. 1, pp. 99–131, 2005, doi: 10.1137/S0036144504446096

  5. [5]

    KNITRO: An Integrated Package for Nonlinear Optimization,

    R. H. Byrd, J. Nocedal, and R. A. Waltz, “KNITRO: An Integrated Package for Nonlinear Optimization,” in Large-Scale Nonlinear Optimization , G. Di Pillo and M. Roma, Eds., Boston, MA: Springer US, 2006, pp. 35–59. doi: 10.1007/0-387-30065-1_4

  6. [6]

    LOQO: an interior point code for quadratic programming,

    R. J. Vanderbei, “LOQO: an interior point code for quadratic programming,” Optim Methods Softw, vol. 11, no. 1–4, pp. 451–484, Jan. 1999, doi: 10.1080/10556789908805759

  7. [7]

    On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,

    A. Wächter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” Math Program, vol. 106, no. 1, pp. 25–57, 2006, doi: 10.1007/s10107-004-0559-y

  8. [8]

    Error estimation for surrogate models with noisy small-sized training sets,

    J. Wackers, H. P. Solak, R. Pellegrini, A. Serani, and M. Diez, “Error estimation for surrogate models with noisy small-sized training sets,” in Adaptive Modelling and Simulation (ADMOS 2023), 2023

  9. [9]

    Advanced trust region optimization strategies for glass box/black box models,

    J. P. Eason and L. T. Biegler, “Advanced trust region optimization strategies for glass box/black box models,” AIChE Journal, vol. 64, no. 11, pp. 3934–3943, 2018

  10. [10]

    A Trust Region Filter Algorithm for Surrogate-based Optimization,

    J. P. Eason, “A Trust Region Filter Algorithm for Surrogate-based Optimization,” Doctoral Dissertation, Carnegie Mellon University, Feb. 2018, doi: 10.1184/R1/6714431.v1

  11. [11]

    DEFT-FUNNEL: an open-source global optimization solver for constrained grey-box and black-box problems,

    P. R. Sampaio, “DEFT-FUNNEL: an open-source global optimization solver for constrained grey-box and black-box problems,” Computational and Applied Mathematics, vol. 40, no. 5, p. 176, 2021, doi: 10.1007/s40314-021-01562-y

  12. [12]

    Self-Correcting Geometry in Model-Based Algorithms for Derivative-Free Unconstrained Optimization,

    K. Scheinberg and Ph. L. Toint, “Self-Correcting Geometry in Model-Based Algorithms for Derivative-Free Unconstrained Optimization,” SIAM Journal on Optimization, vol. 20, no. 6, pp. 3512–3532, 2010, doi: 10.1137/090748536

  13. [13]

    ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems,

    F. Boukouvala and C. A. Floudas, “ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems,” Optim Lett, vol. 11, no. 5, pp. 895 –913, 2017, doi: 10.1007/s11590-016-1028-2

  14. [14]

    A trust region-based two phase algorithm for constrained black-box and grey-box optimization with infeasible initial point,

    I. Bajaj, S. S. Iyer, and M. M. Faruque Hasan, “A trust region-based two phase algorithm for constrained black-box and grey-box optimization with infeasible initial point,” Comput Chem Eng, vol. 116, pp. 306–321, 2018, doi: https://doi.org/10.1016/j.compchemeng.2017.12.011

  15. [15]

    Global optimization of general constrained grey-box models: new method and its application to constrained PDEs for pressure swing adsorption,

    F. Boukouvala, M. M. F. Hasan, and C. A. Floudas, “Global optimization of general constrained grey-box models: new method and its application to constrained PDEs for pressure swing adsorption,” Journal of Global Optimization, vol. 67, no. 1, pp. 3–42, 2017, doi: 10.1007/s10898-015-0376-2

  16. [16]

    A. R. Conn, K. Scheinberg, and L. N. Vicente, Introduction to Derivative-Free Optimization. Society for Industrial and Applied Mathematics, 2009. doi: 10.1137/1.9780898718768

  17. [17]

    Optimization of pressure swing adsorption via a trust-region filter algorithm and equilibrium theory,

    J. Uebbing, L. T. Biegler, L. Rihko-Struckmann, S. Sager, and K. Sundmacher, “Optimization of pressure swing adsorption via a trust-region filter algorithm and equilibrium theory,” Comput Chem Eng, vol. 151, p. 107340, 2021, doi: https://doi.org/10.1016/j.compchemeng.2021.107340

  18. [18]

    A trust-region framework for constrained optimization using reduced order modeling,

    A. Agarwal and L. T. Biegler, “A trust-region framework for constrained optimization using reduced order modeling,” Optimization and Engineering, vol. 14, no. 1, pp. 3–35, 2013, doi: 10.1007/s11081-011-9164-0

  19. [19]

    A trust region framework for heat exchanger network synthesis with detailed individual heat exchanger designs,

    S. R. Kazi, M. Short, and L. T. Biegler, “A trust region framework for heat exchanger network synthesis with detailed individual heat exchanger designs,” Comput Chem Eng, vol. 153, p. 107447, 2021, doi: https://doi.org/10.1016/j.compchemeng.2021.107447

  20. [20]

    Synthesis of Combined Heat and Mass Exchange Networks Via a Trust Region Filter Optimisation Algorithm Including Detailed Unit Designs,

    S. R. Kazi, M. Short, and L. T. Biegler, “Synthesis of Combined Heat and Mass Exchange Networks Via a Trust Region Filter Optimisation Algorithm Including Detailed Unit Designs,” in Computer Aided Chemical Engineering, vol. 50, M. Türkay and R. Gani, Eds., Elsevier, 2021, pp. 13–18. doi: https://doi.org/10.1016/B978-0-323-88506-5.50003-6

  21. [21]

    Demand‐based optimization of a chlorobenzene process with high‐fidelity and surrogate reactor models under trust region strategies,

    N. Yoshio and L. T. Biegler, “Demand‐based optimization of a chlorobenzene process with high‐fidelity and surrogate reactor models under trust region strategies,” AIChE Journal, vol. 67, no. 1, p. e17054, 2021

  22. [22]

    Real-time refinery optimization with reduced-order fluidized catalytic cracker model and surrogate-based trust region filter method,

    X. Chen et al., “Real-time refinery optimization with reduced-order fluidized catalytic cracker model and surrogate-based trust region filter method,” Comput Chem Eng, vol. 153, p. 107455, 2021, doi: https://doi.org/10.1016/j.compchemeng.2021.107455

  23. [23]

    Including CFD rigorous models in the optimal design of carbon capture plants through trust-region methods,

    H. A. Pedrozo, G. Panagakos, and L. T. Biegler, “Including CFD rigorous models in the optimal design of carbon capture plants through trust-region methods,” Chem Eng Sci, vol. 286, p. 119646, 2024, doi: https://doi.org/10.1016/j.ces.2023.119646

  24. [24]

    Optimization strategies for produced water networks with integrated desalination facilities,

    S. Naik, M. Zamarripa, M. Drouven, and L. T. Biegler, “Optimization strategies for produced water networks with integrated desalination facilities,” Comput Chem Eng, vol. 187, p. 108738, 2024, doi: https://doi.org/10.1016/j.compchemeng.2024.108738

  25. [25]

    Efficient Trust Region Filter Modeling Strategies for Computationally Expensive Black-Box Optimization,

    R. Liang, Y. Han, H. Hu, B. Chen, Z. Yuan, and L. T. Biegler, “Efficient Trust Region Filter Modeling Strategies for Computationally Expensive Black-Box Optimization,” Comput Chem Eng, p. 108816, 2024, doi: https://doi.org/10.1016/j.compchemeng.2024.108816

  26. [26]

    Trust-region filter algorithms utilizing Hessian information for gray-box optimization,

    G. Hameed, T. Chen, A. del R. Chanona, L. T. Biegler, and M. Short, “Trust-region Filter Algorithms utilising Hessian Information for Grey-Box Optimisation,” arXiv preprint arXiv:2509.01651, 2025

  27. [27]

    The trust region filter strategy: Survey of a rigorous approach for optimization with surrogate models,

    L. T. Biegler, “The trust region filter strategy: Survey of a rigorous approach for optimization with surrogate models,” Digital Chemical Engineering, vol. 13, p. 100197, 2024, doi: https://doi.org/10.1016/j.dche.2024.100197

  28. [28]

    A unified funnel restoration SQP algorithm,

    D. Kiessling, S. Leyffer, and C. Vanaret, “A unified funnel restoration SQP algorithm,” Math Program, 2025, doi: 10.1007/s10107-025-02284-3

  29. [29]

    Nonlinear programming without a penalty function,

    R. Fletcher and S. Leyffer, “Nonlinear programming without a penalty function,” Math Program, vol. 91, no. 2, pp. 239–269, 2002, doi: 10.1007/s101070100244

  30. [30]

    A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust Region Methods. Society for Industrial and Applied Mathematics, 2000. doi: 10.1137/1.9780898719857

  31. [31]

    A trust-region framework for managing the use of approximation models in optimization,

    N. M. Alexandrov, J. E. Dennis, R. M. Lewis, and V. Torczon, “A trust-region framework for managing the use of approximation models in optimization,” Structural optimization, vol. 15, no. 1, pp. 16–23, 1998, doi: 10.1007/BF01197433

  32. [32]

    UOBYQA: unconstrained optimization by quadratic approximation,

    M. J. D. Powell, “UOBYQA: unconstrained optimization by quadratic approximation,” Math Program, vol. 92, no. 3, pp. 555–582, 2002, doi: 10.1007/s101070100290

  33. [33]

    Nonlinear programming without a penalty function or a filter,

    N. I. M. Gould and Ph. L. Toint, “Nonlinear programming without a penalty function or a filter,” Math Program, vol. 122, no. 1, pp. 155–196, 2010, doi: 10.1007/s10107-008-0244-7

  34. [34]

    Pyomo: modeling and solving mathematical programs in Python,

    W. E. Hart, J.-P. Watson, and D. L. Woodruff, “Pyomo: modeling and solving mathematical programs in Python,” Math Program Comput, vol. 3, no. 3, pp. 219–260, 2011, doi: 10.1007/s12532-011-0026-8

  35. [35]

    gulhameed361/TRF-Solver: TRF-Solver v0.0.1,

    G. Hameed, “gulhameed361/TRF-Solver: TRF-Solver v0.0.1,” Nov. 2025, Zenodo. doi: 10.5281/zenodo.17698421

  36. [36]

    A generalized chemical processing model for the investigation of computer control,

    T. J. Williams and R. E. Otto, “A generalized chemical processing model for the investigation of computer control,” Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics, vol. 79, no. 5, pp. 458–473, 1960, doi: 10.1109/TCE.1960.6367296

  37. [37]

    Principles of Mathematical Analysis, 3rd ed.

    W. Rudin, Principles of Mathematical Analysis (International Series in Pure and Applied Mathematics), 3rd ed. McGraw Hill, 1976

  38. [38]

    Numerical Optimization,

    J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York, NY: Springer New York, NY, 2006. doi: https://doi.org/10.1007/978-0-387-40065-5

  39. [39]

    Global Convergence of a Trust-Region SQP-Filter Algorithm for General Nonlinear Programming,

    R. Fletcher, N. I. M. Gould, S. Leyffer, P. L. Toint, and A. Wächter, “Global Convergence of a Trust-Region SQP-Filter Algorithm for General Nonlinear Programming,” SIAM Journal on Optimization, vol. 13, no. 3, pp. 635–659, 2002, doi: 10.1137/S1052623499357258

  40. [40]

    On the Global Convergence of a Filter–SQP Algorithm,

    R. Fletcher, S. Leyffer, and P. L. Toint, “On the Global Convergence of a Filter–SQP Algorithm,” SIAM Journal on Optimization, vol. 13, no. 1, pp. 44–59, 2002, doi: 10.1137/S105262340038081X

  41. [41]

    Nonlinear programming,

    H. W. Kuhn and A. W. Tucker, “Nonlinear programming,” in Traces and emergence of nonlinear programming, Springer, 2013, pp. 247–258

  42. [42]

    A contextualized historical analysis of the Kuhn–Tucker theorem in nonlinear programming: the impact of World War II,

    T. H. Kjeldsen, “A contextualized historical analysis of the Kuhn–Tucker theorem in nonlinear programming: the impact of World War II,” Historia mathematica, vol. 27, no. 4, pp. 331–361, 2000

  43. [43]

    Introduction to sensitivity and stability analysis in nonlinear programming,

    A. V. Fiacco, Introduction to sensitivity and stability analysis in nonlinear programming, vol. 165. Academic Press, 1983. Accessed: Oct. 17, 2025. [Online]. Available: https://shop.elsevier.com/books/introduction-to-sensitivity-and-stability-analysis-in-nonlinear-programming/fiacco/978-0-12-254450-7

  44. [44]

    Generalized equations and their solutions, part II: Applications to nonlinear programming,

    S. M. Robinson, “Generalized equations and their solutions, part II: Applications to nonlinear programming,” in Optimality and Stability in Mathematical Programming, M. Guignard, Ed., Berlin, Heidelberg: Springer Berlin Heidelberg, 1982, pp. 200–221. doi: 10.1007/BFb0120989

  45. [45]

    Fragment of the paper's Appendix A (infinite θ-type steps) captured during reference extraction; not a citation. The recoverable step: from the update rule (A4), $\phi^{(k+1)} = (1-\kappa_f)\,\theta^{(k+1)} + \kappa_f\,\phi^{(k)} \le (1-\kappa_f)\,\tau\,\phi^{(k)} + \kappa_f\,\phi^{(k)} = \bigl(1-(1-\tau)(1-\kappa_f)\bigr)\,\phi^{(k)}$. Defining $q := 1-(1-\tau)(1-\kappa_f) \in (0,1)$, the bound $\phi^{(k+1)} \le q\,\phi^{(k)}$ forces $\phi^{(k)} \to 0$ as $k \to \infty$.

  46. [46]

    Fragment of Appendix A (finite θ-type steps), likewise captured during extraction. If only finitely many θ-type steps occur, there exists an iteration index $k'$ such that for all $k \ge k'$ only f-type steps are taken, satisfying both the switching condition (14) and the Armijo sufficient decrease condition (15). Summing (A5) from $k = k'$ to $k = k' + N$ gives $\sum_{k=k'}^{k'+N} \bigl(f^{(k)} - f^{(k+1)}\bigr) \ge \eta\,\delta \sum_{k=k'}^{k'+N} \bigl(\theta^{(k)}\bigr)^{\gamma_s}$, …