Pith: machine review for the scientific record

arxiv: 2512.01667 · v2 · submitted 2025-12-01 · 📊 stat.ME · stat.CO


Detecting Model Misspecification in Bayesian Inverse Problems via Variational Gradient Descent

classification 📊 stat.ME stat.CO
keywords: model, bayesian, misspecification, theta, descent, distribution, gradient, inference
Original abstract

Bayesian inference is optimal when the statistical model is well-specified, while outside this setting Bayesian inference can catastrophically fail; accordingly a wealth of post-Bayesian methodologies have been proposed. Predictively oriented (PrO) approaches lift the statistical model $P_\theta$ to an (infinite) mixture model $\int P_\theta \; \mathrm{d}Q(\theta)$ and fit this predictive distribution via minimising an entropy-regularised objective functional. In the well-specified setting one expects the mixing distribution $Q$ to concentrate around the true data-generating parameter in the large data limit, while such singular concentration will typically not be observed if the model is misspecified. Our contribution is to demonstrate that one can empirically detect model misspecification by comparing the standard Bayesian posterior to the PrO `posterior' $Q$. To operationalise this, we present an efficient numerical algorithm based on variational gradient descent. A simulation study, and a more detailed case study involving a Bayesian inverse problem in seismology, confirm that model misspecification can be automatically detected using this framework.
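The core diagnostic idea in the abstract, that the mixing distribution $Q$ concentrates under a well-specified model but stays diffuse under misspecification, can be seen in a closed-form toy case. The sketch below is an illustration of that idea, not the paper's algorithm: for the Gaussian location model $P_\theta = N(\theta, 1)$ with a Gaussian mixing family $Q = N(m, s^2)$, the predictive mixture is $N(m, 1 + s^2)$, and fitting it by maximum likelihood gives $s^2 = \max(0, \operatorname{var}(y) - 1)$. The mixing variance only stays away from zero when the data are over-dispersed relative to the model.

```python
# Toy illustration (not the paper's variational-gradient-descent method):
# in a Gaussian location model P_theta = N(theta, 1), lift to the mixture
# predictive N(m, 1 + s^2) with mixing distribution Q = N(m, s^2).
# Maximum-likelihood fitting yields s^2 = max(0, var(y) - 1), so Q
# concentrates iff the data dispersion matches the model.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def fitted_mixing_variance(y):
    """Variance s^2 of the fitted Gaussian mixing distribution Q."""
    return max(0.0, y.var() - 1.0)

y_well = rng.normal(0.0, 1.0, n)           # well-specified: Var(y) = 1
y_mis = rng.normal(0.0, np.sqrt(2.0), n)   # misspecified: Var(y) = 2

s2_well = fitted_mixing_variance(y_well)
s2_mis = fitted_mixing_variance(y_mis)
posterior_var = 1.0 / n  # standard Bayesian posterior variance (flat prior)

print(f"Bayesian posterior variance ~ {posterior_var:.4f}")
print(f"Q variance, well-specified data: {s2_well:.3f}")  # near 0: Q concentrates
print(f"Q variance, misspecified data:   {s2_mis:.3f}")   # near 1: Q stays wide
```

In this toy case, the standard posterior contracts at rate $1/n$ regardless of misspecification, while the fitted $Q$ stays wide exactly when the model is wrong; comparing the two spreads is the flavour of the detection strategy the abstract describes, which the paper operationalises with variational gradient descent for general (e.g. inverse-problem) models.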

This paper has not been read by Pith yet.


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Concentration and Calibration in Predictive Bayesian Inference

    stat.ME · 2026-05 · unverdicted · novelty 6.0

    Predictive Bayesian inference posteriors concentrate onto a forward-model-dependent quantity and produce miscalibrated credible sets unless the predictive model contains the true data-generating process.