Mismatched Guesswork
Abstract
We study the problem of mismatched guesswork, where we evaluate the number of symbols $y \in \mathcal{Y}$ that have higher likelihood than $X \sim \mu$ according to a mismatched distribution $\nu$. We discuss the role of the tilted/exponential families of the source distribution $\mu$ and of the mismatched distribution $\nu$. We show that the value of guesswork can be characterized using the tilted family of the mismatched distribution $\nu$, while the probability of guessing is characterized by an exponential family which passes through $\mu$. Using this characterization, we demonstrate that the mismatched guesswork follows a large deviation principle (LDP), where the rate function is described implicitly using information-theoretic quantities. We apply these results to one-to-one source coding (without the prefix-free constraint) to obtain the cost of mismatch in terms of average codeword length. We show that the cost of mismatch in one-to-one codes is no larger than that of prefix-free codes, i.e., $D(\mu\|\nu)$. Further, the cost of mismatch vanishes if and only if $\nu$ lies on the tilted family of the true distribution $\mu$, which is in stark contrast to prefix-free codes. These results imply that one-to-one codes are inherently more robust to mismatch.
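For intuition, here is a minimal numerical sketch (not from the paper) of mismatched guesswork and the resulting one-to-one codeword length. The alphabet size, the tilt parameter `beta`, and the function names are illustrative assumptions. Guesses are made in decreasing order of probability under $\nu$, and in a one-to-one (non-prefix-free) code the $k$-th guessed symbol receives a binary string of length $\lfloor \log_2 k \rfloor$.

```python
import numpy as np

def guesswork_ranks(nu):
    """G_nu(x): 1-indexed rank of symbol x when guessing in decreasing
    order of the (possibly mismatched) distribution nu."""
    order = np.argsort(-nu)                  # indices sorted by decreasing nu
    ranks = np.empty(len(nu), dtype=int)
    ranks[order] = np.arange(1, len(nu) + 1)
    return ranks

def avg_one_to_one_length(mu, nu):
    """Average one-to-one codeword length in bits when codewords are
    assigned in decreasing order of nu; the k-th guessed symbol gets a
    binary string of length floor(log2 k)."""
    return float(np.sum(mu * np.floor(np.log2(guesswork_ranks(nu)))))

rng = np.random.default_rng(0)
mu = rng.dirichlet(np.ones(8))               # true source distribution
beta = 0.7                                   # hypothetical tilt parameter
nu_tilted = mu**beta / np.sum(mu**beta)      # nu on the tilted family of mu
nu_off = rng.dirichlet(np.ones(8))           # generic mismatched distribution

print(avg_one_to_one_length(mu, mu))         # matched baseline
print(avg_one_to_one_length(mu, nu_tilted))  # same order as mu: no extra cost
print(avg_one_to_one_length(mu, nu_off))     # off the tilted family: extra cost
```

Because any $\nu \propto \mu^{\beta}$ with $\beta > 0$ induces the same guessing order as $\mu$, the matched and tilted runs print identical average lengths, illustrating the claim that the cost of mismatch vanishes exactly on the tilted family of $\mu$.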
Forward citations
Cited by 1 Pith paper
- Theoretical Limits of Language Model Alignment
The maximum reward gain under KL-regularized LM alignment is a Jeffreys divergence term that can be estimated as a covariance from base-model samples; best-of-N sampling approaches this theoretical limit.
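A hedged sketch of the covariance-style estimate that summary alludes to (this is a standard identity consistent with the blurb, not the cited paper's code; `reward_gain_from_base`, the synthetic rewards, and `beta` are assumptions). For the KL-regularized optimum $\pi^*(y) \propto \pi_0(y)\exp(r(y)/\beta)$, the reward gain satisfies $\mathbb{E}_{\pi^*}[r] - \mathbb{E}_{\pi_0}[r] = \mathrm{Cov}_{\pi_0}(w, r)/\mathbb{E}_{\pi_0}[w]$ with $w = \exp(r/\beta)$, and also equals $\beta$ times the Jeffreys divergence $\mathrm{KL}(\pi^*\|\pi_0) + \mathrm{KL}(\pi_0\|\pi^*)$, so it is computable from base samples alone.

```python
import numpy as np

def reward_gain_from_base(rewards, beta):
    """Estimate E_{pi*}[r] - E_{pi0}[r] for the KL-regularized optimal
    policy pi*(y) ∝ pi0(y) exp(r(y)/beta), using only rewards of samples
    drawn from the base policy pi0 (self-normalized importance weights).
    Algebraically equal to Cov_{pi0}(w, r) / E_{pi0}[w] with w = exp(r/beta)."""
    r = np.asarray(rewards, dtype=float)
    w = np.exp((r - r.max()) / beta)   # constant shift for numerical stability
    return float(np.sum(w * r) / np.sum(w) - r.mean())

# Toy usage: rewards scored on base-model samples (synthetic here).
rng = np.random.default_rng(1)
base_rewards = rng.normal(size=10_000)
print(reward_gain_from_base(base_rewards, beta=1.0))
```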