Pith: machine review for the scientific record

arXiv: 1605.07079 · v2 · submitted 2016-05-23 · cs.LG · cs.AI · stat.ML

Recognition: unknown

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

Authors on Pith: no claims yet
classification: cs.LG · cs.AI · stat.ML
keywords: optimization, bayesian, training, dataset, datasets, deep, fabolas, function
Abstract

Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.
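The core idea in the abstract — measure a configuration's validation error on small training subsets and extrapolate to the full dataset — can be illustrated with a toy sketch. This is not the paper's method (Fabolas uses a Gaussian-process model over configuration and dataset size jointly); it only fits a simple power-law learning curve to made-up subset measurements and extrapolates, assuming hypothetical numbers throughout.

```python
# Hypothetical learning-curve observations: validation error of one
# hyperparameter configuration trained on growing data subsets.
# (Illustrative numbers, not from the paper.)
subset_sizes = [500, 1000, 2000, 4000]
val_errors = [0.30, 0.24, 0.20, 0.17]

def curve(s, a, b, c):
    # Power-law learning curve: error decays with training set size s
    # toward an asymptote c -- a common parametric assumption.
    return a * s ** (-b) + c

def fit_curve(sizes, errors):
    # Coarse grid search for (a, b, c) minimizing squared error;
    # a crude stand-in for the probabilistic model Fabolas learns.
    best, best_loss = None, float("inf")
    for a in [x / 10 for x in range(1, 51)]:
        for b in [x / 100 for x in range(5, 100, 5)]:
            for c in [x / 100 for x in range(0, 30)]:
                loss = sum((curve(s, a, b, c) - e) ** 2
                           for s, e in zip(sizes, errors))
                if loss < best_loss:
                    best, best_loss = (a, b, c), loss
    return best

a, b, c = fit_curve(subset_sizes, val_errors)
full_size = 50_000
predicted = curve(full_size, a, b, c)
print(f"extrapolated error at n={full_size}: {predicted:.3f}")
```

Cheap subset evaluations like these let an optimizer discard poor configurations early and reserve full-dataset training for promising ones, which is where the reported 10-100x speedups come from.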

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A Tutorial on Bayesian Optimization

    stat.ML · 2018-07 · unverdicted · novelty 4.0

    Bayesian optimization uses Gaussian process regression to build a surrogate model and acquisition functions to guide sampling for optimizing costly objective functions, including a new formal generalization of expecte...
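The citing tutorial's snippet mentions acquisition functions that guide sampling under a Gaussian-process surrogate. A minimal sketch of the standard expected-improvement acquisition for minimization, evaluated at hypothetical posterior values (the numbers are made up for illustration):

```python
import math

def expected_improvement(mu, sigma, f_best):
    # Expected improvement (for minimization) under a Gaussian posterior
    # N(mu, sigma^2) at a candidate point; f_best is the incumbent best.
    if sigma <= 0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

# Two hypothetical candidates: one with a good predicted mean,
# one with high posterior uncertainty.
ei_low_mean = expected_improvement(mu=0.15, sigma=0.02, f_best=0.20)
ei_high_var = expected_improvement(mu=0.22, sigma=0.10, f_best=0.20)
print(ei_low_mean, ei_high_var)
```

Both candidates get positive acquisition value: the first because its mean already beats the incumbent, the second because its uncertainty leaves room for improvement. This is the exploration/exploitation trade-off the snippet alludes to.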