Proximal boosting: aggregating weak learners to minimize non-differentiable losses

Journal article in Neurocomputing, 2023


Abstract

Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable. When the empirical risk to minimize is not differentiable, this paper proposes to build upon the proximal point algorithm in order to introduce a novel boosting approach, called proximal boosting. It comes with a companion algorithm, inspired by [1] and called residual proximal boosting, which aims to better control the approximation error. Theoretical convergence is proved for both procedures under different hypotheses on the empirical risk, and the advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison with gradient boosting in terms of convergence rate and prediction accuracy.
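To give a concrete sense of the idea, here is a minimal, hypothetical sketch of a proximal-boosting round for the absolute (L1) loss, which is non-differentiable at zero. It replaces the negative-gradient target of gradient boosting with the displacement of the proximal point: for the loss |f - y|, the proximal operator is a soft-thresholding, so the displacement is clip(y - f, -gamma, gamma). All names, the toy stump learner, and the parameter values below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the stump learner and all parameter choices
# are assumptions for exposition, not the paper's algorithm.

def prox_step_l1(y, f, gamma):
    """Displacement prox_{gamma*|. - y_i|}(f_i) - f_i for the absolute loss.

    For L1, the proximal point moves f toward y by at most gamma,
    so the displacement is clip(y - f, -gamma, gamma).
    """
    return [max(-gamma, min(gamma, yi - fi)) for yi, fi in zip(y, f)]

def fit_stump(x, r):
    """Fit a depth-1 regression stump to pseudo-residuals r (toy weak learner)."""
    best = None
    for s in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - (ml if xi <= s else mr)) ** 2 for xi, ri in zip(x, r))
        if best is None or sse < best[0]:
            best = (sse, s, ml, mr)
    _, s, ml, mr = best
    return lambda xi: ml if xi <= s else mr

def proximal_boost(x, y, rounds=50, gamma=1.0, nu=0.5):
    """Aggregate stumps, fitting each to the proximal displacement
    instead of a (sub)gradient of the non-differentiable loss."""
    f = [0.0] * len(y)
    for _ in range(rounds):
        r = prox_step_l1(y, f, gamma)   # proximal target, not a gradient
        h = fit_stump(x, r)
        f = [fi + nu * h(xi) for fi, xi in zip(f, x)]
    return f

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 1.0, 3.0, 3.0]
f = proximal_boost(x, y)
```

On this toy dataset the aggregated stumps drive the fit toward y; the point of the sketch is only that each round's target comes from a proximal step, which stays well defined where the loss has no gradient.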

Dates and versions

hal-01853244 , version 1 (02-08-2018)
hal-01853244 , version 2 (22-01-2020)
hal-01853244 , version 3 (27-07-2021)
hal-01853244 , version 4 (29-11-2022)

Identifiers

Cite

Erwan Fouillen, Claire Boyer, Maxime Sangnier. Proximal boosting: aggregating weak learners to minimize non-differentiable losses. Neurocomputing, 2023, ⟨10.1016/j.neucom.2022.11.065⟩. ⟨hal-01853244v4⟩

