Optimal Lower Bounds for Online Multicalibration
Pith reviewed 2026-05-16 16:01 UTC · model grok-4.3
The pith
Online multicalibration requires Ω(T^{2/3}) expected error even with three groups.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
In the general online setting where group functions may depend on context and predictions, expected multicalibration error is at least Ω(T^{2/3}). The bound is realized by an adversarial sequence over three disjoint binary groups. The same asymptotic rate holds, up to logarithmic factors, when groups depend only on context and an orthonormal-basis construction supplies an O(log^3 T)-sized family.
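To fix ideas, here is a minimal sketch of how a multicalibration error of the kind discussed above can be computed. This assumes a standard max-over-(group, prediction-level) definition with discretized predictions; the paper's exact normalization and expectation may differ, and the function and variable names are illustrative, not the paper's.

```python
# Sketch of a multicalibration error measure (assumed definition, not
# necessarily the paper's exact one): for each group S and each predicted
# value v, accumulate the signed gap sum over rounds t with x_t in S and
# p_t = v of (y_t - p_t); the error is the largest absolute gap.
from collections import defaultdict

def multicalibration_error(predictions, outcomes, group_memberships):
    """predictions: floats in [0, 1] (assumed already discretized);
    outcomes: 0/1 labels; group_memberships: per-round dicts mapping
    group name -> 0/1 membership indicator."""
    gaps = defaultdict(float)  # keyed by (group name, predicted value)
    for p, y, groups in zip(predictions, outcomes, group_memberships):
        for name, member in groups.items():
            if member:
                gaps[(name, p)] += y - p
    return max((abs(g) for g in gaps.values()), default=0.0)

# Two rounds in one group, predictions 0.5, outcomes 1 then 0:
# the signed gaps cancel, so the group is perfectly calibrated at 0.5.
err = multicalibration_error([0.5, 0.5], [1, 0], [{"S1": 1}, {"S1": 1}])
```

Marginal calibration is the special case with a single group containing every round, which is one way to see why multicalibration can only be harder.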
What carries the argument
An adversarial online construction with three disjoint binary groups whose membership functions adapt to the learner's current predictions.
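As a toy illustration of prediction-dependent group membership (not the paper's actual adversary, whose construction is more delicate), the key freedom is that the adversary sees the learner's prediction before deciding which of the three disjoint groups the current point belongs to:

```python
# Toy adaptive adversary (illustrative only): group membership for the
# current round is chosen AFTER observing the learner's prediction,
# routing the point into one of three disjoint groups by prediction range
# and then picking an outcome that pushes against the prediction.
def adaptive_adversary_round(prediction):
    if prediction < 1 / 3:
        groups = {"S1": 1, "S2": 0, "S3": 0}
    elif prediction < 2 / 3:
        groups = {"S1": 0, "S2": 1, "S3": 0}
    else:
        groups = {"S1": 0, "S2": 0, "S3": 1}
    # Outcome chosen adversarially relative to the prediction.
    outcome = 1 if prediction < 0.5 else 0
    return groups, outcome
```

The point of the sketch is only the information flow: because membership is decided after the prediction, the learner cannot pre-commit to being calibrated on each group separately.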
If this is right
- No online algorithm can guarantee multicalibration error smaller than Ω(T^{2/3}) in the worst case.
- Multicalibration is strictly harder than marginal calibration in the online adversarial model.
- The Ω(T^{2/3}) rate remains necessary even when group functions ignore the learner's predictions, using O(log^3 T) groups.
- Existing upper bounds for online multicalibration are tight up to logarithmic factors.
Where Pith is reading between the lines
- Algorithm designers for online fairness should expect to tolerate at least this error level when groups are defined adaptively.
- Similar separations may appear in other online problems where constraints depend on the algorithm's own outputs.
- Relaxing the model to approximate or probabilistic group membership could change the achievable rate.
Load-bearing premise
Group membership functions can be chosen adversarially at each step depending on both the current context and the algorithm's predictions.
What would settle it
An online algorithm that achieves expected multicalibration error o(T^{2/3}) against every adversarial sequence defined by three adaptive groups would disprove the lower bound.
Original abstract
We prove tight lower bounds for online multicalibration, establishing an information-theoretic separation from marginal calibration. In the general setting where group functions can depend on both context and the learner's predictions, we prove an $\Omega(T^{2/3})$ lower bound on expected multicalibration error using just three disjoint binary groups. This matches the upper bounds of Noarov et al. (2025) up to logarithmic factors and exceeds the $O(T^{2/3-\varepsilon})$ upper bound for marginal calibration (Dagan et al., 2025), thereby separating the two problems. We then turn to lower bounds for the more difficult case of group functions that may depend on context but not on the learner's predictions. In this case, we establish an $\widetilde{\Omega}(T^{2/3})$ lower bound for online multicalibration via an $O(\log^3 T)$-sized group family constructed from an orthonormal basis, again matching upper bounds up to logarithmic factors.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proves tight information-theoretic lower bounds for online multicalibration. In the prediction-dependent setting it establishes an Ω(T^{2/3}) lower bound on expected multicalibration error via an explicit adversary over three disjoint binary groups; this matches the upper bounds of Noarov et al. (2025) up to logarithmic factors and strictly exceeds the O(T^{2/3-ε}) upper bound known for marginal calibration (Dagan et al., 2025). In the context-only setting it gives a matching ~Ω(T^{2/3}) lower bound via an O(log^3 T)-sized orthonormal-basis family.
Significance. If the constructions and error-accumulation arguments hold, the result is significant: it supplies the first optimal lower bounds for online multicalibration, demonstrates a clean separation from marginal calibration, and does so with explicit, non-circular adversarial constructions that rely only on standard online-learning primitives.
Minor comments (2)
- §3.2: the definition of the multicalibration error measure should explicitly state whether the expectation is over the algorithm's randomness alone or also over the adversary's coins; this affects the precise statement of the Ω(T^{2/3}) bound.
- Theorem 4.1: the O(log^3 T) group-family size is stated without a concrete constant; adding the leading constant would make the bound easier to compare with existing upper-bound constructions.
Simulated Author's Rebuttal
We thank the referee for their positive review and recommendation to accept the manuscript. We appreciate the recognition that the explicit adversarial constructions establish the first tight information-theoretic lower bounds for online multicalibration and cleanly separate it from marginal calibration.
Circularity Check
No significant circularity: lower bounds derived from explicit adversarial constructions independent of cited upper bounds
Full rationale
The paper establishes its central Ω(T^{2/3}) lower bound on multicalibration error through two explicit adversarial constructions: one using three disjoint binary groups in the prediction-dependent case, and another using an O(log^3 T)-sized orthonormal-basis family for the context-only case. These constructions are information-theoretic and do not rely on fitting parameters, self-defining quantities, or reducing to the cited upper-bound results from Noarov et al. (2025). The upper-bound citations serve only for asymptotic comparison and separation from marginal calibration; they are not load-bearing in the lower-bound proofs themselves. No equations or steps in the derivation chain reduce by construction to the paper's inputs or prior self-citations in a circular manner. The result is self-contained against external benchmarks via the adversary arguments.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: standard adversarial online-learning model in which data arrives sequentially and the algorithm must output each prediction before seeing the next example.
Forward citations
Cited by 3 Pith papers
- Instance-Adaptive Online Multicalibration: A single online multicalibration algorithm adaptively refines a dyadic grid and achieves instance-dependent rates: O(T^{2/3}) worst-case, O(sqrt T) for marginal stochastic data, and O(sqrt(JT)) for J-piecewise station...
- An Efficient Black-Box Reduction from Online Learning to Multicalibration, and a New Route to $\Phi$-Regret Minimization: Black-box reductions from no-regret online learning to multicalibration and from multicalibration to Phi-regret minimization are established, resolving the main open question in Garg et al. (SODA '24).
- The Sample Complexity of Multicalibration: Multicalibration has minimax sample complexity Θ̃(ε^{-3}) when the number of groups is at most ε^{-κ} for fixed κ>0, versus Θ̃(ε^{-2}) for marginal calibration, with a sharp threshold at κ=0.