Episode 23: Why do ensemble methods work?
This episode of the Data Science at Home podcast, hosted by Francesco Gadaleta and titled "Episode 23: Why do ensemble methods work?", was published on October 3, 2017 and runs 18 minutes.
Summary
Ensemble methods are designed to improve on the performance of a single model when that model is not very accurate. In general, ensembling consists of building a number of individual classifiers and then combining or aggregating their predictions into one classifier that is usually stronger than any single one. The key idea behind ensembling is that some models do well at capturing certain aspects of the data while others do well at capturing other aspects. In this episode I show with a numeric example why and when ensemble methods work.
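The episode itself walks through a numeric example; as a rough illustration of the same idea, a minimal sketch (assuming n independent classifiers, each correct with probability p, combined by majority vote) shows how the ensemble can beat any single member:

```python
from math import comb

def majority_vote_accuracy(n, p):
    """Probability that a majority of n independent classifiers,
    each correct with probability p, gives the right prediction.
    This is the tail of a Binomial(n, p) distribution."""
    k_min = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# A single classifier that is 70% accurate vs. ensembles of 3 and 5:
print(majority_vote_accuracy(1, 0.7))  # 0.7 (no ensemble)
print(majority_vote_accuracy(3, 0.7))  # 0.784
print(majority_vote_accuracy(5, 0.7))  # ~0.837
```

Note the independence assumption: if all members make the same mistakes, voting gains nothing, which is why ensembles work best when the models capture different aspects of the data.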