pith. machine review for the scientific record.

arxiv: 1905.12580 · v1 · submitted 2019-05-29 · 💻 cs.LG · stat.ML


Model Similarity Mitigates Test Set Overuse

classification 💻 cs.LG stat.ML
keywords similarity · test · data · models · reuse · benchmarks · beyond · bounds
original abstract

Excessive reuse of test data has become commonplace in today's machine learning workflows. Popular benchmarks, competitions, industrial scale tuning, among other applications, all involve test data reuse beyond guidance by statistical confidence bounds. Nonetheless, recent replication studies give evidence that popular benchmarks continue to support progress despite years of extensive reuse. We proffer a new explanation for the apparent longevity of test data: Many proposed models are similar in their predictions and we prove that this similarity mitigates overfitting. Specifically, we show empirically that models proposed for the ImageNet ILSVRC benchmark agree in their predictions well beyond what we can conclude from their accuracy levels alone. Likewise, models created by large scale hyperparameter search enjoy high levels of similarity. Motivated by these empirical observations, we give a non-asymptotic generalization bound that takes similarity into account, leading to meaningful confidence bounds in practical settings.
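The abstract's central empirical claim is that models agree in their predictions well beyond what their accuracy levels alone imply. A minimal sketch of how one might check this on two models' class predictions, assuming integer class labels and, as a baseline, that independent errors are spread uniformly over the remaining classes (the function names and the uniform-error baseline are my assumptions, not the paper's construction):

```python
def agreement(preds_a, preds_b):
    """Fraction of examples on which two models make the same prediction."""
    assert len(preds_a) == len(preds_b) and preds_a
    same = sum(a == b for a, b in zip(preds_a, preds_b))
    return same / len(preds_a)

def expected_agreement_if_independent(acc_a, acc_b, num_classes):
    """Agreement expected if the two models erred independently.

    Both correct: they agree (probability acc_a * acc_b).
    Both wrong: they agree only if they pick the same wrong class,
    which happens with probability 1 / (num_classes - 1) under the
    uniform-error baseline.  One right, one wrong: they disagree.
    """
    wrong_a, wrong_b = 1.0 - acc_a, 1.0 - acc_b
    return acc_a * acc_b + wrong_a * wrong_b / (num_classes - 1)
```

For example, two models at 80% top-1 accuracy on a 10-class problem would agree only about 64.4% of the time if their errors were independent, so an observed agreement of, say, 90% would indicate the kind of similarity the paper exploits.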

This paper has not been read by Pith yet.

discussion (0)
