EuroPython 2016

Machine Learning: Power of Ensembles

Speaker(s) Bargava Subramanian

It is relatively easy to build a first-cut machine learning model. But what does it take to build a reasonably good model, or even a state-of-the-art one?

Ensemble models. They are our best friends. They help us exploit the power of computing. Ensemble methods aren't new: they form the basis of some extremely powerful machine learning algorithms, such as random forests and gradient boosting machines. The key insight behind ensembles is that a consensus from diverse models is more reliable than any single source. This talk will cover how we can combine the outputs of various base models (logistic regression, support vector machines, decision trees, neural networks, etc.) to create a stronger, better model.
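To make the idea concrete, here is a minimal sketch (not taken from the talk itself) of a soft-voting ensemble in scikit-learn that averages the predicted class probabilities of three diverse base models; the synthetic dataset and hyperparameters are placeholders:

    # Soft voting: average predicted class probabilities across diverse models.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder data standing in for a real problem.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Diverse base models tend to make different errors, so their consensus
    # is more reliable than any one of them alone.
    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("svm", SVC(probability=True)),  # probability=True enables soft voting
            ("tree", DecisionTreeClassifier(max_depth=5)),
        ],
        voting="soft",
    )
    ensemble.fit(X_train, y_train)
    print("ensemble accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))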

This talk will cover various strategies to create ensemble models.

Using third-party Python libraries along with scikit-learn, this talk will demonstrate the following ensemble methodologies:

1. Bagging
2. Boosting
3. Stacking
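The sketch below, assuming a reasonably recent scikit-learn and another synthetic placeholder dataset, illustrates all three: bagging and boosting via the built-in estimators, and stacking built by hand from out-of-fold predictions (the particular base models and meta-model here are illustrative choices, not prescribed by the talk):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict, train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1) Bagging: fit many trees on bootstrap samples and average their votes.
    bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100)
    bagging.fit(X_train, y_train)

    # 2) Boosting: fit trees sequentially, each one correcting its predecessors.
    boosting = GradientBoostingClassifier(n_estimators=100)
    boosting.fit(X_train, y_train)

    # 3) Stacking: out-of-fold predictions of base models become the features
    #    of a second-level (meta) model.
    base_models = [LogisticRegression(max_iter=1000),
                   SVC(probability=True),
                   DecisionTreeClassifier(max_depth=5)]
    meta_features = np.column_stack([
        cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")[:, 1]
        for m in base_models
    ])
    meta_model = LogisticRegression().fit(meta_features, y_train)

    # At prediction time, base models refit on all training data feed the meta model.
    test_features = np.column_stack([
        m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in base_models
    ])
    print("bagging accuracy: ", bagging.score(X_test, y_test))
    print("boosting accuracy:", boosting.score(X_test, y_test))
    print("stacking accuracy:", meta_model.score(test_features, y_test))

Out-of-fold predictions are used for the meta-features so that the meta-model never sees predictions a base model made on its own training data, which would leak information and overstate performance.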

Real-life examples from the enterprise world will be showcased in which ensemble models consistently produced better results than the single best-performing model.

There will also be emphasis on the following: feature engineering, model selection, the bias-variance tradeoff, and generalization.
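As a small illustration of the last two points (again with placeholder data and estimators), cross-validation estimates generalization error, and comparing a high-variance single model against an ensemble makes the bias-variance effect visible:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

    # A fully grown single tree: low bias, but high variance across folds.
    tree_scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)

    # Averaging many randomized trees trades a little bias for much less variance.
    forest_scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5)

    print("single tree:   %.3f +/- %.3f" % (tree_scores.mean(), tree_scores.std()))
    print("random forest: %.3f +/- %.3f" % (forest_scores.mean(), forest_scores.std()))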

Creating better models is the critical component of building a good data science product.

A preliminary version of the slides is available here

On Friday 22 July at 14:00

Comments

  1. Slides of my talk: https://speakerdeck.com/bargava/power-of-ensembles-1
     — Bargava Subramanian
