Freeze-Thaw Bayesian Optimization
Abstract
In this paper we develop a dynamic form of Bayesian optimization for machine learning models with the goal of rapidly finding good hyperparameter settings. Our method uses the partial information gained during the training of a machine learning model in order to decide whether to pause training and start a new model, or resume the training of a previously-considered model. We specifically tailor our method to machine learning problems by developing a novel positive-definite covariance kernel to capture a variety of training curves. Furthermore, we develop a Gaussian process prior that scales gracefully with additional temporal observations. Finally, we provide an information-theoretic framework to automate the decision process. Experiments on several common machine learning models show that our approach is extremely effective in practice.
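The abstract's "novel positive-definite covariance kernel to capture a variety of training curves" is built from exponentially decaying curves. A minimal sketch of that kind of kernel, assuming the closed form k(t, t') = β^α / (t + t' + β)^α over training iterations t, t' > 0 (hyperparameter values here are illustrative, not tuned):

```python
import numpy as np

def freeze_thaw_kernel(t, tp, alpha=1.0, beta=0.5):
    # Covariance between training-curve observations at iterations t and t':
    #   k(t, t') = beta^alpha / (t + t' + beta)^alpha
    # This arises as a mixture of exponential decays exp(-lambda * t)
    # with a Gamma(alpha, beta) prior over the decay rate lambda.
    return beta**alpha / (t + tp + beta)**alpha

# Covariance matrix over the first few epochs of one training curve
ts = np.arange(1, 6, dtype=float)
K = freeze_thaw_kernel(ts[:, None], ts[None, :])

# Sanity checks: symmetric, numerically positive semi-definite,
# and variance shrinks as the curve flattens out at later iterations.
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) > -1e-10)
assert K[0, 0] > K[-1, -1]
```

Under this kernel, correlations decay as iterations grow, which matches curves that change quickly early in training and settle later; a GP with this covariance can then extrapolate a partially trained model's final performance.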
Forward citations
Cited by 4 Pith papers
- Open-Ended Task Discovery via Bayesian Optimization
  Generate-Select-Refine is an open-ended Bayesian optimization method that generates tasks and concentrates evaluations on the best one with only logarithmic regret overhead relative to standard single-task optimization.
- HARBOR: Automated Harness Optimization
  HARBOR formalizes harness optimization as constrained noisy Bayesian optimization over mixed-variable spaces and reports a case study where it outperforms manual tuning on a production coding agent.
- Adaptive Candidate Point Thompson Sampling for High-Dimensional Bayesian Optimization
  ACTS improves Thompson sampling in high-dimensional Bayesian optimization by adaptively reducing the search space using gradients from surrogate samples to produce better maximizer samples.
- A Tutorial on Bayesian Optimization
  Bayesian optimization uses Gaussian process regression to build a surrogate model and acquisition functions to guide sampling for optimizing costly objective functions, including a new formal generalization of expecte...
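The tutorial summary above mentions acquisition functions guiding where to sample next. As a generic illustration of that idea (not code from any of the papers listed), here is a minimal expected-improvement sketch for minimization; the candidate means and standard deviations are made-up surrogate predictions:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    # EI for minimization: expected amount by which a point with
    # predictive mean `mu` and std `sigma` beats the best value seen,
    # with a small exploration margin `xi`.
    if sigma <= 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (best - mu - xi) * cdf + sigma * pdf

# Hypothetical surrogate predictions at three candidate points: (mean, std).
candidates = [(0.30, 0.05), (0.25, 0.20), (0.28, 0.10)]
best_seen = 0.27
scores = [expected_improvement(m, s, best_seen) for m, s in candidates]
# The uncertain candidate with the lowest mean wins here:
# EI trades off exploiting low means against exploring high variance.
next_idx = scores.index(max(scores))
```

Here the second candidate is chosen: its mean is best and its large uncertainty adds further expected improvement, which is the exploration/exploitation trade-off an acquisition function encodes.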