The entropic optimal (self-)transport problem: Limit distributions for decreasing regularization with application to score function estimation
We study the statistical properties of the entropic optimal (self-)transport problem for smooth probability measures. We provide an accurate description of the limit distribution for entropic (self-)potentials and plans as the regularization parameter shrinks with the sample size; this regime is largely unexplored in the prior statistical literature, where $\epsilon$ is typically held fixed. Additionally, we show that a rescaling of the barycentric projection of the empirical entropic optimal self-transport plans converges to the score function, a central object for diffusion models, and characterize the asymptotic fluctuations both pointwise and in $L^2$. Finally, we describe under what conditions the methods used enable us to derive (pointwise) limiting distribution results for the empirical entropic optimal transport potentials in the case of two different measures and an appropriately chosen shrinking regularization parameter. This endeavour requires a better understanding of the composition of Sinkhorn operators in the small-$\epsilon$ limit, a result of independent interest.
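The construction described in the abstract can be illustrated with a small NumPy sketch: run Sinkhorn iterations for the entropic self-transport of an empirical sample with itself, form the barycentric projection of the resulting plan, and rescale the displacement to estimate the score. The sample size, regularization level, and the rescaling constant $2/\epsilon$ below are illustrative assumptions, not the paper's exact estimator or rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 400, 0.5                  # sample size and regularization (illustrative choices)
x = rng.standard_normal((n, 1))    # samples from N(0, 1); true score is s(x) = -x

# Quadratic cost |x_i - x_j|^2 / 2 between the sample and itself
C = 0.5 * (x - x.T) ** 2
K = np.exp(-C / eps)

# Sinkhorn fixed-point iterations for the entropic self-transport plan
# between the empirical measure (uniform weights 1/n) and itself
u = np.ones(n)
for _ in range(2000):
    v = 1.0 / (n * (K.T @ u))
    u = 1.0 / (n * (K @ v))
P = u[:, None] * K * v[None, :]    # entropic plan; both marginals ~ uniform 1/n

# Barycentric projection b(x_i) = E_P[y | x_i] (each row of P sums to 1/n)
b = n * (P @ x)

# Rescaled displacement as a score estimate; the constant 2/eps is an
# assumed normalization for illustration only
score_est = 2.0 * (b - x) / eps
```

For a standard Gaussian sample, the estimate should correlate with the true score $-x$: the plan pulls each point toward high-density regions, so the rescaled displacement points toward the mode. In the paper's regime $\epsilon$ shrinks with $n$; here both are fixed for simplicity.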
Forward citations
Cited by 1 Pith paper
- Learning Monge maps with constrained drifting models: A new constrained gradient flow on the space of transport maps converges to the OT map and enables more stable and accurate training of convexity-constrained neural networks for learning Monge maps.