The reparameterization trick for acquisition functions
Abstract
Bayesian optimization is a sample-efficient approach to solving global optimization problems. Along with a surrogate model, this approach relies on theoretically motivated value heuristics (acquisition functions) to guide the search process. Maximizing acquisition functions yields the best performance; unfortunately, this ideal is difficult to achieve since optimizing acquisition functions per se is frequently non-trivial. This statement is especially true in the parallel setting, where acquisition functions are routinely non-convex, high-dimensional, and intractable. Here, we demonstrate how many popular acquisition functions can be formulated as Gaussian integrals amenable to the reparameterization trick and, ensuingly, gradient-based optimization. Further, we use this reparameterized representation to derive an efficient Monte Carlo estimator for the upper confidence bound acquisition function in the context of parallel selection.
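As a concrete illustration of the technique the abstract describes (this sketch is not taken from the paper itself), parallel acquisition values can be estimated by reparameterizing posterior draws as y = mu + L z with z ~ N(0, I) and L the Cholesky factor of the posterior covariance. Below, `qei_reparam` is the standard Monte Carlo parallel expected improvement, and `qucb_reparam` is one reparameterized form of parallel UCB using the mean-absolute-deviation identity; the exact UCB form is an assumption here, not quoted from the abstract:

```python
import numpy as np

def qei_reparam(mu, cov, best, n_samples=20000, rng=None):
    """Monte Carlo parallel expected improvement via the
    reparameterization trick: y = mu + L z, z ~ N(0, I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n_samples, len(mu)))
    y = mu + z @ L.T                       # draws from N(mu, cov)
    return np.maximum(y.max(axis=1) - best, 0.0).mean()

def qucb_reparam(mu, cov, beta, n_samples=20000, rng=None):
    """One reparameterized form of parallel UCB (an assumed form,
    not quoted from the abstract):
    E[ max_i ( mu_i + sqrt(beta * pi / 2) * |y_i - mu_i| ) ]."""
    rng = np.random.default_rng(0) if rng is None else rng
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n_samples, len(mu)))
    dev = np.abs(z @ L.T)                  # |y - mu| under N(mu, cov)
    return (mu + np.sqrt(beta * np.pi / 2.0) * dev).max(axis=1).mean()
```

Because the samples are smooth functions of `mu` and `L`, the same estimators admit pathwise gradients when written in an autodiff framework, which is what enables the gradient-based optimization the abstract refers to. For q = 1 both reduce to closed forms (EI at mu = best, sigma = 1 is about 0.399; UCB is mu + sqrt(beta) * sigma), which makes the estimators easy to sanity-check.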
This paper has not been read by Pith yet.
Forward citations
Cited by 2 Pith papers
- OrthoBO: Orthogonal Bayesian Hyperparameter Optimization
  OrthoBO introduces an orthogonal acquisition estimator that subtracts an optimally weighted score-function control variate to reduce Monte Carlo variance, preserve the acquisition target, and improve ranking stability in...
- Physics-informed automated surface reconstruction via low-energy electron diffraction based on Bayesian optimization
  A trust-region Bayesian optimization framework integrates LEED multiple-scattering models to jointly optimize structural and experimental parameters for automated surface reconstruction.