pith. machine review for the scientific record.

arxiv: 2605.14019 · v1 · submitted 2026-05-13 · 💰 econ.EM · cs.LG · math.ST · stat.CO · stat.TH

Recognition: no theorem link

Regret Equals Covariance: A Closed-Form Characterization for Stochastic Optimization

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 02:30 UTC · model grok-4.3

classification 💰 econ.EM · cs.LG · math.ST · stat.CO · stat.TH
keywords regret · covariance · stochastic optimization · linear programming · quadratic programming · portfolio optimization · sample average approximation

The pith

Expected regret equals the covariance between uncertain costs and optimal decisions in stochastic optimization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that in any stochastic optimization problem, the expected regret decomposes exactly into the covariance between the uncertain parameter vector and the optimal decision plus a residual term. For linear programs and unconstrained quadratic programs, including portfolio optimization, this residual is zero, making regret identical to the covariance. Regret can then be computed from historical cost-decision pairs in a single pass, in time linear in the number of observations, rather than through costly Monte Carlo simulation. The result comes with concentration bounds and a central limit theorem for the estimator, and it is verified on both synthetic problems and real financial data.

Core claim

The central discovery is that Regret(c) = Cov(c, π*(c)) + R(c) holds exactly for any stochastic optimization problem, with R(c) = 0 for linear programs and unconstrained quadratic programs. This provides a closed-form characterization that replaces simulation-based estimation with direct covariance calculation from observed pairs.
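
A quick numerical sanity check of the zero-residual case, in expectation: the Monte Carlo sketch below treats an unconstrained quadratic program, using the closed form π*(c) = −Q⁻¹c and the regret definition cᵀ(π*(c) − π*(E[c])) quoted in the rebuttal further down. Problem sizes and distributions are illustrative, not taken from the paper.

```python
# Monte Carlo sanity check of E[Regret(c)] = Cov(c, pi*(c)) for an
# unconstrained QP with pi*(c) = -Q^{-1} c. Editorial sketch; the sign
# convention follows the regret definition quoted in the rebuttal.
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
Q = A @ A.T + d * np.eye(d)               # symmetric positive definite
Qinv = np.linalg.inv(Q)

c_bar = rng.standard_normal(d)
C = c_bar + rng.standard_normal((200_000, d))   # scenarios c_i ~ N(c_bar, I)

Pi = -C @ Qinv                            # pi*(c_i); Qinv is symmetric
pi_bar = -Qinv @ C.mean(axis=0)           # pi*(E[c]), via the sample mean

regret = np.einsum('ij,ij->i', C, Pi - pi_bar).mean()
cov = np.einsum('ij,ij->i', C - C.mean(0), Pi - Pi.mean(0)).mean()
print(f"mean regret = {regret:.4f}, covariance = {cov:.4f}")
# The two agree up to Monte Carlo error (both equal -trace(Q^{-1}) here).
```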

What carries the argument

The decomposition of regret into covariance between costs and optimal decisions plus a residual term, which vanishes exactly for linear programs and unconstrained quadratic programs.

If this is right

  • For linear programs, regret is exactly equal to the covariance without any approximation or simulation.
  • Unconstrained quadratic programs, such as the Markowitz portfolio problem, also satisfy the exact equality.
  • Regret can be estimated in O(nd²) time from historical data, compared to O(Bn²d³) for sample average approximation (a minimal estimator sketch follows this list).
  • The estimator admits concentration bounds, a central limit theorem, and an asymptotically unbiased residual estimator.
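
To make the single-pass claim concrete, here is a minimal sketch of the covariance-based regret estimator on historical pairs; the function name and array layout are editorial assumptions, not the paper's API.

```python
import numpy as np

def regret_from_history(C: np.ndarray, Pi: np.ndarray) -> float:
    """Estimate expected regret as the scalar sample covariance between
    cost vectors C (n x d) and optimal decisions Pi (n x d), per the
    paper's identity for LPs and unconstrained QPs. One vectorized pass
    over centered data; no re-optimization. Editorial sketch."""
    Cc = C - C.mean(axis=0)
    Pc = Pi - Pi.mean(axis=0)
    return float(np.einsum('ij,ij->', Cc, Pc) / (len(C) - 1))
```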

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Practitioners could monitor regret in real time for ongoing optimization tasks by tracking covariance in streaming data (a streaming sketch follows this list).
  • This identity might guide the design of new optimization algorithms that directly target low covariance between costs and decisions.
  • The framework could be tested on mixed-integer programs to see how small the residual becomes in practice.
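
On the first bullet: a Welford-style single-pass update maintains the covariance estimate, and hence, if the paper's identity applies, a running regret estimate, in O(d) work per observation. The class and interface below are editorial inventions for illustration.

```python
import numpy as np

class StreamingRegretMonitor:
    """Online estimate of Cov(c, pi*(c)) -- and thus, under the paper's
    identity for LPs and unconstrained QPs, of expected regret -- from a
    stream of (cost, decision) pairs. Welford-style single pass; this is
    an editorial sketch, not an estimator from the paper."""

    def __init__(self, d: int):
        self.n = 0
        self.c_mean = np.zeros(d)
        self.pi_mean = np.zeros(d)
        self.cross = 0.0   # running sum of (c - old c_mean)^T (pi - new pi_mean)

    def update(self, c: np.ndarray, pi: np.ndarray) -> None:
        self.n += 1
        dc = c - self.c_mean             # delta against the pre-update mean
        self.c_mean += dc / self.n
        self.pi_mean += (pi - self.pi_mean) / self.n
        self.cross += dc @ (pi - self.pi_mean)   # post-update mean of pi

    def regret_estimate(self) -> float:
        return self.cross / (self.n - 1) if self.n > 1 else float("nan")
```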

Load-bearing premise

The exact zero-residual equality requires the optimization problem to be a linear program or an unconstrained quadratic program; general problems only have a bounded residual.

What would settle it

For a specific linear program, generate many scenarios, compute the average regret via full optimization per scenario, and compare it directly to the covariance between the cost vectors and the corresponding optimal solutions; any significant mismatch would disprove the equality.
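
This test is straightforward to script. A minimal sketch on a toy LP, with scipy's linprog standing in for any solver; all problem data are invented for illustration:

```python
# Falsification sketch for a toy LP: min c^T x  s.t.  A x <= b, x >= 0.
# Compares Monte Carlo average regret (one solve per scenario) against
# the sample covariance between costs and optimal decisions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d = 4
A_ub = rng.uniform(0.5, 2.0, size=(3, d))   # positive rows keep the LP bounded
b_ub = np.ones(3)

def solve(c):
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * d).x

B = 2000
C = -1.0 + 0.3 * rng.standard_normal((B, d))   # random (mostly negative) costs
Pi = np.array([solve(c) for c in C])           # one full optimization per scenario
pi_bar = solve(C.mean(axis=0))                 # decision under the mean cost

regret = np.einsum('ij,ij->i', C, Pi - pi_bar).mean()   # E[c^T(pi*(c) - pi*(c_bar))]
cov = np.einsum('ij,ij->i', C - C.mean(0), Pi - Pi.mean(0)).mean()
print(f"avg regret = {regret:.4f}  vs  covariance = {cov:.4f}")
# A mismatch well beyond Monte Carlo error would falsify the equality.
```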

Figures

Figures reproduced from arXiv: 2605.14019 by Irene Aldridge.

Figure 1. Logical dependency graph for Section 3.2 (solid arrows indicate that the target result uses the source). view at source ↗
Figure 2. Linear programming: empirical vs. theoretical regret. view at source ↗
Figure 3. Quadratic programming: empirical vs. theoretical regret. view at source ↗
Figure 4. Integer programming: empirical vs. theoretical regret. view at source ↗
Figure 5. Covariance-predicted vs. realized regret in real-data financial portfolios. view at source ↗
read the original abstract

Regret is the cost of uncertainty in algorithmic decision-making. Quantifying regret typically requires computationally expensive simulation via Sample Average Approximation (SAA), with complexity $\mathcal{O}(Bn^{2}d^{3})$ in the number of scenarios $B$, variables $n$, and constraints $d$. This paper proves that expected regret in any stochastic optimization problem admits the exact decomposition \begin{equation*} \mathrm{Regret}(c) = \mathrm{Cov}(c,\,\pi^{*}(c)) + R(c), \end{equation*} where $c$ is the vector of uncertain parameters, $\pi^{*}(c)$ is the optimal decision, and $R(c)$ is a residual whose magnitude we bound explicitly under Lipschitz, smooth, and strongly convex conditions. For linear programs and unconstrained quadratic programs, including the classical Markowitz portfolio problem, we prove $R(c)=0$ exactly, so that $\mathrm{Regret}(c) = \mathrm{Cov}(c,\pi^{*}(c))$ holds without approximation. When historical cost-decision pairs $\{(c_i, \pi^*(c_i))\}$ are available, the covariance can be estimated in $\mathcal{O}(nd^{2})$ time, which is orders of magnitude faster than SAA; the estimation is performed by a single pass through the data. We derive concentration bounds, a central limit theorem, and an asymptotically unbiased residual estimator, and we validate all results on synthetic LP, QP, and integer programming instances and on a rolling-window portfolio experiment using ten years of CRSP equity data.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript claims to prove that expected regret in any stochastic optimization problem admits the exact decomposition Regret(c) = Cov(c, π*(c)) + R(c), where c denotes the random parameters, π*(c) the optimal decision, and R(c) a residual term. For linear programs and unconstrained quadratic programs (including Markowitz portfolio optimization), it asserts R(c) = 0 exactly, so regret equals covariance. When historical pairs {(c_i, π*(c_i))} are available, the covariance is estimated in O(nd²) time via a single pass, with accompanying concentration bounds, a CLT, and an asymptotically unbiased residual estimator. Results are validated on synthetic LP/QP/IP instances and a rolling-window experiment on ten years of CRSP equity data.

Significance. If the decomposition holds, the result supplies a closed-form, parameter-free link between regret and covariance that replaces expensive SAA simulation with direct sample-covariance estimation. This would be a substantial computational and conceptual advance for stochastic programming and portfolio theory, enabling rapid regret quantification and new theoretical bounds. The explicit residual bounds under Lipschitz/smooth/strongly-convex assumptions, together with the statistical guarantees (concentration, CLT), further strengthen applicability.

major comments (2)
  1. [Abstract / main theorem] Abstract and central theorem statement: the claimed identity is written as Regret(c) = Cov(c, π*(c)) + R(c). Cov(c, π*(c)) is defined via an outer expectation and is therefore a fixed scalar, while Regret(c) is indexed by the random c. The manuscript must clarify whether both sides are understood as unconditional expectations (i.e., E[Regret(c)] = Cov + E[R(c)]) or whether Cov is redefined as a random variable; without this the scalar equality cannot hold as stated and the exact R(c)=0 claim for LPs/QPs inherits the same ambiguity.
  2. [Proofs for LPs and unconstrained QPs] LP and QP sections: the proof that R(c)=0 exactly for linear programs and unconstrained quadratic programs is load-bearing for the “regret equals covariance” headline. The derivation should be exhibited in full (including the precise definition of the residual R(c)) so that readers can verify it does not rely on additional hidden assumptions beyond linearity or unconstrained quadratic structure.
minor comments (2)
  1. [Complexity discussion] The complexity statements O(Bn²d³) for SAA and O(nd²) for the covariance estimator should be derived or referenced to standard algorithmic results rather than asserted.
  2. [Numerical experiments] Figure captions and table footnotes should explicitly state the number of Monte Carlo replications and the precise data-exclusion rules used in the CRSP rolling-window experiment.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful and constructive report. The two major comments identify genuine issues of notation and proof presentation that we will resolve in revision. We respond to each below.

read point-by-point responses
  1. Referee: [Abstract / main theorem] Abstract and central theorem statement: the claimed identity is written as Regret(c) = Cov(c, π*(c)) + R(c). Cov(c, π*(c)) is defined via an outer expectation and is therefore a fixed scalar, while Regret(c) is indexed by the random c. The manuscript must clarify whether both sides are understood as unconditional expectations (i.e., E[Regret(c)] = Cov + E[R(c)]) or whether Cov is redefined as a random variable; without this the scalar equality cannot hold as stated and the exact R(c)=0 claim for LPs/QPs inherits the same ambiguity.

    Authors: We agree the notation is ambiguous as written. The covariance term is a fixed scalar, Cov(c, π*(c)) := E[(c - E[c])ᵀ(π*(c) - E[π*(c)])]. The intended identity is E[Regret(c)] = Cov(c, π*(c)) + E[R(c)]. For linear programs and unconstrained quadratic programs we in fact prove R(c) = 0 almost surely, so the equality holds pathwise. We will revise the abstract, the statement of the main theorem, and the surrounding text to state the decomposition explicitly in terms of expectations and to note the almost-sure vanishing of R(c) under the stated structural assumptions. revision: yes

  2. Referee: [Proofs for LPs and unconstrained QPs] LP and QP sections: the proof that R(c)=0 exactly for linear programs and unconstrained quadratic programs is load-bearing for the “regret equals covariance” headline. The derivation should be exhibited in full (including the precise definition of the residual R(c)) so that readers can verify it does not rely on additional hidden assumptions beyond linearity or unconstrained quadratic structure.

    Authors: We accept that the current exposition of the R(c) = 0 proofs is too condensed. In the revision we will insert complete derivations. We first recall the precise definition R(c) := Regret(c) - Cov(c, π*(c)), where Regret(c) is the random variable cᵀ(π*(c) - π*(E[c])) for linear objectives (and the analogous quadratic expression). For linear programs we then use the fact that π*(c) is piecewise linear in c and show that the covariance term exactly cancels the expected regret term by direct expansion. For unconstrained quadratic programs we substitute the closed-form solution π*(c) = -Q⁻¹c (or its affine generalization) and verify that the quadratic and linear terms cancel identically, again without extra assumptions. The expanded proofs will appear in the main text rather than the appendix. revision: yes
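
For readers who want the expectation-level bookkeeping spelled out, the following two-line derivation uses the rebuttal's definitions (an editorial reconstruction, not the paper's proof):

```latex
% Editorial reconstruction under the rebuttal's definitions:
% Regret(c) := c^T (pi*(c) - pi*(E[c])); Cov is the scalar trace cross-covariance.
\begin{align*}
\mathbb{E}[\mathrm{Regret}(c)]
  &= \mathbb{E}\big[c^{\top}\pi^{*}(c)\big] - \bar{c}^{\top}\pi^{*}(\bar{c}),
     \qquad \bar{c} := \mathbb{E}[c], \\
  &= \underbrace{\mathbb{E}\big[(c-\bar{c})^{\top}\big(\pi^{*}(c)-\mathbb{E}[\pi^{*}(c)]\big)\big]}_{\mathrm{Cov}(c,\,\pi^{*}(c))}
     \;+\; \bar{c}^{\top}\big(\mathbb{E}[\pi^{*}(c)]-\pi^{*}(\bar{c})\big).
\end{align*}
```

The expected residual is therefore E[R(c)] = c̄ᵀ(E[π*(c)] − π*(c̄)), which vanishes whenever π* is affine in c; in particular, π*(c) = −Q⁻¹c gives E[π*(c)] = π*(c̄) in the unconstrained quadratic case.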

Circularity Check

0 steps flagged

No significant circularity; decomposition derived from problem structure

full rationale

The paper claims to prove an exact decomposition of expected regret into covariance plus residual directly from the definitions of regret, optimal policy, and covariance in stochastic optimization. For LPs and unconstrained QPs the residual vanishes by the linearity or quadratic structure of the objective, which is a structural property rather than a fitted or self-referential quantity. The covariance term is estimated via the ordinary sample covariance on observed (c, π*(c)) pairs, a direct statistic independent of the regret identity. No load-bearing self-citations, imported uniqueness theorems, ansatzes smuggled via prior work, or renaming of known empirical patterns appear in the derivation. The result is therefore self-contained against the problem definitions and does not reduce to its inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on standard optimization assumptions rather than new free parameters or invented entities.

axioms (2)
  • domain assumption: the objective is Lipschitz continuous, smooth, and strongly convex, to bound the residual R(c).
    Invoked to control the magnitude of the residual term in the general decomposition.
  • domain assumption: the problem is a linear program or an unconstrained quadratic program, for R(c) = 0.
    Used to obtain the exact equality Regret(c) = Cov(c, π*(c)) without approximation.

pith-pipeline@v0.9.0 · 5594 in / 1245 out tokens · 74810 ms · 2026-05-15T02:30:51.911619+00:00 · methodology

discussion (0)

