Self-Normalized Martingales and Uniform Regret Bounds for Linear Regression
Pith reviewed 2026-05-09 13:47 UTC · model grok-4.3
The pith
Nontrivial scale-invariant self-normalized martingale bounds exist only in d=1 (yielding O(log T) doubly-uniform regret); in d>1 they are impossible without further assumptions but are recovered under a smoothness condition on the covariates.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Without further assumptions, nontrivial scale-invariant bounds on self-normalized martingales exist only in dimension d=1, where the paper obtains O(log T) scale-invariant self-normalized bounds with no assumptions on the covariates; for d>1, no nontrivial scale-invariant bound can hold in full generality. This dichotomy yields O(log T) doubly-uniform regret in d=1 and rules out sublinear doubly-uniform regret in d>1, resolving the open question of Gaillard et al. (ALT 2019).
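To fix notation (one standard formulation; the paper's exact normalization may differ): given covariates $x_t \in \mathbb{R}^d$ adapted to a filtration and martingale-difference noise $\varepsilon_t$, set

$$ S_T = \sum_{t=1}^{T} \varepsilon_t x_t, \qquad V_T = \sum_{t=1}^{T} x_t x_t^\top, $$

and consider the self-normalized quantity $\|V_T^{-1/2} S_T\|$ (on the event that $V_T$ is invertible). Rescaling all covariates $x_t \mapsto c\,x_t$ sends $S_T \mapsto c\,S_T$ and $V_T \mapsto c^2 V_T$, leaving $\|V_T^{-1/2} S_T\|$ unchanged; this is the scale invariance at issue. The standard confidence-ellipsoid bounds instead control $\|(\lambda I + V_T)^{-1/2} S_T\|$ by terms involving the regularizer $\lambda$ and a bound on the covariates, and are therefore not scale-invariant.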
Load-bearing premise
The natural smoothness condition (bounded Radon-Nikodym derivatives of the conditional covariate laws with respect to a fixed base measure) used to recover sublinear regret and a regularization-free self-normalized inequality for d>1.
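Spelled out (our minimal reading of the parenthetical; the paper's quantifiers and constants may differ): there exist a fixed base measure $\mu$ on $\mathbb{R}^d$ and a constant $C < \infty$ such that, for every round $t$, the conditional law $P_t$ of $x_t$ given the past satisfies

$$ P_t \ll \mu \qquad \text{and} \qquad \frac{\mathrm{d}P_t}{\mathrm{d}\mu} \le C \quad \mu\text{-a.e.} $$

Whenever $\mu$ is atomless this excludes deterministic (point-mass) covariate choices while still permitting adaptive, non-i.i.d. sequences, consistent with the abstract's claim of a scale-invariant bound for adaptive, non-i.i.d. vector martingales.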
Original abstract
Self-normalized martingale inequalities lie at the heart of confidence ellipsoids for online least squares and, more broadly, many bandit and reinforcement-learning results. Yet existing vector and scalar results typically rely on bounded covariates and an explicit regularization matrix, producing bounds that are \emph{not scale-invariant}: although the self-normalized quantity is scale-invariant by definition, its standard upper bounds are not. We characterize when scale-invariant upper bounds on self-normalized martingales are possible. Without further assumptions, we prove that nontrivial scale-invariant bounds exist only in dimension $d=1$; moreover, in $d=1$ we obtain $O(\log T)$ scale-invariant self-normalized bounds without any assumptions on the covariates. In contrast, for $d>1$ we show that no nontrivial scale-invariant bound can hold in full generality. We then connect this dichotomy to \emph{doubly-uniform} regret in online linear regression (i.e., regret bounds that are simultaneously independent of the covariate scale and the comparator norm) and use it to resolve the open question of Gaillard, Gerchinovitz, Huard, and Stoltz, \emph{``Uniform regret bounds over $\mathbb{R}^d$ for the sequential linear regression problem with the square loss''} (ALT 2019): in $d=1$ we give an explicit algorithm with $O(\log T)$ doubly-uniform regret, whereas for $d>1$ sublinear doubly-uniform regret is impossible. Finally, under a natural \emph{smoothness} condition (bounded Radon--Nikodym derivatives of the conditional covariate laws with respect to a fixed base measure), we recover sublinear regret for $d>1$ without bounded covariates and derive a self-normalized concentration inequality free of the usual regularization penalties, yielding arguably a first natural scale-invariant bound for adaptive, non-i.i.d. vector martingales.
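Two terms from the abstract, made concrete. First, doubly-uniform regret (our reading; Gaillard et al.'s exact formulation, e.g. the assumed range of the observations $y_t$, may differ): with square loss, the regret against the best fixed comparator is

$$ R_T = \sum_{t=1}^{T} (\hat y_t - y_t)^2 \;-\; \inf_{\theta \in \mathbb{R}^d} \sum_{t=1}^{T} \big(\langle \theta, x_t\rangle - y_t\big)^2, $$

and a bound on $R_T$ is doubly uniform when it depends neither on the covariate scale $\max_t \|x_t\|$ nor on the norm of the optimal comparator (dependence on the range of the $y_t$ is typically still allowed).

Second, a minimal sketch of what a scale-invariant d=1 forecaster can look like: unregularized least squares with clipped predictions. This illustrates scale invariance only; it is not the paper's algorithm, and the known label bound B and the exact estimator are assumptions of the sketch.

def one_d_forecaster(xs, ys, B):
    """Sketch: unregularized 1-d least squares with predictions clipped
    to [-B, B]. Illustration of scale invariance only; not the paper's
    algorithm. B is an assumed known bound on |y_t|."""
    sxx, sxy = 0.0, 0.0  # running sums of x_t^2 and x_t * y_t
    preds = []
    for x, y in zip(xs, ys):
        theta = sxy / sxx if sxx > 0 else 0.0  # least-squares fit on the past
        preds.append(max(-B, min(B, theta * x)))  # predict, then observe y
        sxx += x * x
        sxy += x * y
    return preds

Rescaling every covariate by $c > 0$ multiplies sxx by $c^2$ and sxy by $c$, hence theta by $1/c$, so each prediction theta * x, and with it the regret, is unchanged.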
Editorial analysis
A structured set of objections, weighed in public.
Axiom & Free-Parameter Ledger
axioms (2)
- standard math: Standard properties of vector and scalar martingales and self-normalized processes
- domain assumption: Bounded Radon-Nikodym derivatives of conditional covariate laws under the smoothness condition