pith. machine review for the scientific record.

arxiv: 1805.02542 · v1 · submitted 2018-05-07 · 🧮 math.ST · stat.TH

Recognition: unknown

Robustness of shape-restricted regression estimators: an envelope perspective

Authors on Pith: no claims yet
classification: 🧮 math.ST · stat.TH
keywords: moment · regression · estimators · model · rate · additive · envelope · error
0 comments
read the original abstract

Classical least squares estimators are well-known to be robust with respect to moment assumptions concerning the error distribution in a wide variety of finite-dimensional statistical problems; generally only a second moment assumption is required for least squares estimators to maintain the same rate of convergence that they would satisfy if the errors were assumed to be Gaussian. In this paper, we give a geometric characterization of the robustness of shape-restricted least squares estimators (LSEs) to error distributions with an $L_{2,1}$ moment, in terms of the 'localized envelopes' of the model. This envelope perspective gives a systematic approach to proving oracle inequalities for the LSEs in shape-restricted regression problems in the random design setting, under a minimal $L_{2,1}$ moment assumption on the errors. The canonical isotonic and convex regression models, and a more challenging additive regression model with shape constraints are studied in detail. Strikingly enough, in the additive model both the adaptation and robustness properties of the LSE can be preserved, up to error distributions with an $L_{2,1}$ moment, for estimating the shape-constrained proxy of the marginal $L_2$ projection of the true regression function. This holds essentially regardless of whether or not the additive model structure is correctly specified. The new envelope perspective goes beyond shape constrained models. Indeed, at a general level, the localized envelopes give a sharp characterization of the convergence rate of the $L_2$ loss of the LSE between the worst-case rate as suggested by the recent work of the authors [25], and the best possible parametric rate.
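For readers unfamiliar with the isotonic model the abstract studies: given responses $y_1,\dots,y_n$ observed at ordered design points, the isotonic LSE is the monotone nondecreasing vector closest to $y$ in squared error, and it is classically computed by the pool-adjacent-violators algorithm (PAVA). A minimal plain-NumPy sketch of that standard algorithm (illustrative only, not code from the paper):

```python
import numpy as np

def pava(y):
    """Isotonic least squares fit of a 1-D sequence y via
    pool-adjacent-violators: scan left to right, merging adjacent
    blocks whenever their means violate monotonicity."""
    level = []   # current block means
    weight = []  # current block sizes
    for v in np.asarray(y, dtype=float):
        level.append(v)
        weight.append(1)
        # merge while the nondecreasing constraint is violated
        while len(level) > 1 and level[-2] > level[-1]:
            w = weight[-2] + weight[-1]
            m = (level[-2] * weight[-2] + level[-1] * weight[-1]) / w
            level[-2:] = [m]
            weight[-2:] = [w]
    return np.repeat(level, weight)

# e.g. pava([1, 3, 2, 4]) pools the violating pair (3, 2) into 2.5
```

The paper's point is about the statistical behavior of this estimator, not its computation: its $L_2$ rate survives when the Gaussian error assumption is weakened to an $L_{2,1}$ moment, and the localized envelopes of the monotone class are what control this.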

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Calibeating Prediction-Powered Inference

    stat.ML · 2026-04 · unverdicted · novelty 7.0

    Post-hoc calibration of miscalibrated black-box predictions on a labeled sample improves efficiency of prediction-powered inference for semisupervised mean estimation.