Pith: machine review for the scientific record

arxiv: 1510.02558 · v1 · submitted 2015-10-09 · 📊 stat.ML · cs.LG

Recognition: unknown

Functional Frank-Wolfe Boosting for General Loss Functions

Authors on Pith: no claims yet
classification: 📊 stat.ML, cs.LG
keywords: boosting, algorithm, fwboost, frank-wolfe, loss, base, classification, existing
0 comments
read the original abstract

Boosting is a generic learning method for classification and regression. Yet, as the number of base hypotheses becomes larger, boosting can lead to a deterioration of test performance. Overfitting is an important and ubiquitous phenomenon, especially in regression settings. To avoid overfitting, we consider using $l_1$ regularization. We propose a novel Frank-Wolfe type boosting algorithm (FWBoost) applied to general loss functions. With the exponential loss, the FWBoost algorithm can be rewritten as a variant of AdaBoost for binary classification. FWBoost algorithms have exactly the same form as existing boosting methods, making calls to a base learning algorithm with updated example weights. This direct connection between boosting and Frank-Wolfe yields a new algorithm that is as practical as existing boosting methods but with new guarantees and rates of convergence. Experimental results show that the test performance of FWBoost does not degrade as the number of boosting rounds grows, which is consistent with the theoretical analysis.
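The Frank-Wolfe view described in the abstract — optimize the loss over an $l_1$-ball of base-hypothesis combinations, calling a base learner each round with updated example weights — can be sketched as below. This is a hedged illustration, not the paper's exact FWBoost: decision stumps as the base class, the exponential loss, the classic step size γ_t = 2/(t+2), and the $l_1$ radius `delta` are all assumptions made for the demo.

```python
import numpy as np

def fit_stump(X, y, w):
    """Linear minimization oracle: decision stump maximizing the
    weighted edge sum_i w_i * y_i * h(x_i)."""
    best = (-np.inf, 0, 0.0, 1.0)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = s * np.where(X[:, j] <= thr, 1.0, -1.0)
                edge = float(np.sum(w * y * pred))
                if edge > best[0]:
                    best = (edge, j, thr, s)
    return best[1:]                       # (feature, threshold, sign)

def stump_predict(stump, X):
    j, thr, s = stump
    return s * np.where(X[:, j] <= thr, 1.0, -1.0)

def fwboost(X, y, delta=5.0, rounds=10):
    """Frank-Wolfe-type boosting sketch: minimize exponential loss over
    the l1-ball of radius `delta` spanned by decision stumps."""
    F = np.zeros(len(X))                  # current ensemble scores f_t(x_i)
    ensemble = []                         # list of (coefficient, stump)
    for t in range(rounds):
        w = np.exp(-y * F)                # example weights, as in AdaBoost
        w /= w.sum()
        stump = fit_stump(X, y, w)        # one call to the base learner
        gamma = 2.0 / (t + 2.0)           # classic Frank-Wolfe step size
        F = (1.0 - gamma) * F + gamma * delta * stump_predict(stump, X)
        # shrink old coefficients so the ensemble stays inside the l1-ball
        ensemble = [((1.0 - gamma) * a, st) for a, st in ensemble]
        ensemble.append((gamma * delta, stump))
    return ensemble

def predict(ensemble, X):
    F = sum(a * stump_predict(st, X) for a, st in ensemble)
    return np.sign(F)
```

Note how the only moving parts relative to AdaBoost are the convex-combination step (which keeps the $l_1$ norm of the coefficients bounded by `delta`) and the fixed 2/(t+2) schedule, matching the abstract's claim that the algorithm keeps the form of existing boosting methods.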

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. BoostLLM: Boosting-inspired LLM Fine-tuning for Few-shot Tabular Classification

    cs.LG 2026-05 unverdicted novelty 6.0

    BoostLLM trains sequential PEFT adapters as weak learners in a residual process, using decision-tree paths as a second input view, to improve few-shot tabular classification over standard LLM fine-tuning and match or exceed XGBoost.
