pith. machine review for the scientific record.

arxiv: 1502.02312 · v2 · submitted 2015-02-08 · 📊 stat.AP


Bayesian and empirical Bayesian forests

keywords: bayesian, large, empirical, forest, forests, gains, samples, able
read the original abstract

We derive ensembles of decision trees through a nonparametric Bayesian model, allowing us to view random forests as samples from a posterior distribution. This insight provides large gains in interpretability, and motivates a class of Bayesian forest (BF) algorithms that yield small but reliable performance gains. Based on the BF framework, we are able to show that the high-level tree hierarchy is stable in large samples. This leads to an empirical Bayesian forest (EBF) algorithm for building approximate BFs on massive distributed datasets, and we show that EBFs outperform subsampling-based alternatives by a large margin.
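The abstract's two ideas can be sketched together: treat each tree as a posterior draw by refitting under i.i.d. exponential(1) observation weights (the Bayesian bootstrap), and observe that the top of the tree stays stable across draws. A minimal sketch, not the paper's implementation; the `best_stump` helper and the toy data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: only x0 carries signal, so the "true" root
# split of any tree should land on feature 0.
n = 500
X = rng.normal(size=(n, 3))
y = (X[:, 0] > 0).astype(float) + 0.1 * rng.normal(size=n)

def best_stump(X, y, w):
    """Best weighted single split (feature index, threshold) by
    weighted squared error -- a stand-in for the root node of CART."""
    best_sse, best_j, best_thr = np.inf, None, None
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        xs, ys, ws = X[order, j], y[order], w[order]
        cw = np.cumsum(ws)
        cwy = np.cumsum(ws * ys)
        cwy2 = np.cumsum(ws * ys ** 2)
        for i in range(1, len(ys)):  # split between positions i-1 and i
            wl, wr = cw[i - 1], cw[-1] - cw[i - 1]
            if wl <= 0 or wr <= 0:
                continue
            left = cwy2[i - 1] - cwy[i - 1] ** 2 / wl
            right = (cwy2[-1] - cwy2[i - 1]) - (cwy[-1] - cwy[i - 1]) ** 2 / wr
            if left + right < best_sse:
                best_sse, best_j, best_thr = left + right, j, 0.5 * (xs[i - 1] + xs[i])
    return best_j, best_thr

# One posterior draw of the forest root per iteration: i.i.d. Exp(1)
# observation weights (the Bayesian bootstrap) replace the usual
# multinomial bootstrap resample.
roots = [best_stump(X, y, rng.exponential(1.0, size=n))[0] for _ in range(20)]
```

In this toy, nearly every posterior draw roots the tree at feature 0, illustrating the trunk stability the abstract refers to: an EBF-style algorithm could then fix such a stable trunk once and grow independent forests within each trunk leaf on separate machines.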

This paper has not been read by Pith yet.

discussion (0)
