Understanding the SHAP interpretation method: Kernel SHAP

Posted on Sat 29 February 2020 in posts • Tagged with python, machine learning, interpretable machine learning

SHAP is certainly one of the most important tools in the interpretable machine learning toolbox nowadays. It is used by a wide range of practitioners, discussed extensively in the research community, and in my experience it provides the best insights into a model's behavior.

SHAP schema (from https://github.com/slundberg/shap)

This blog article gives a detailed yet simple explanation of Kernel SHAP, the algorithm at the heart of the SHAP library.
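
To give a concrete flavor before diving in, here is a minimal sketch of how Kernel SHAP is typically invoked through the shap library; the random forest and the iris data below are just placeholders so the snippet runs end to end.

```python
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Placeholder model and data, just to have something to explain
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Kernel SHAP is model-agnostic: it only needs a prediction function
# and a background dataset used to simulate "absent" features
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)

# One additive contribution per feature (and per class) for this sample
shap_values = explainer.shap_values(X[0])
```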


Continue reading

Understanding the SHAP interpretation method: Shapley values

Posted on Fri 27 December 2019 in posts • Tagged with python, machine learning, interpretable machine learning

Explainable artificial intelligence (XAI, a.k.a. interpretable machine learning) is quite a thing these days. The goal of XAI is to provide explanations for the predictions of machine learning models, so that humans can understand the reasons that lead to those predictions.

"Interpretable machine learning" Google Scholar searches over the last few years.

It is important to know the reasons behind an algorithm's predictions in a variety of contexts:


Continue reading

Few-shot learning in NLP: many-class classification from few examples

Posted on Sun 19 August 2018 in posts • Tagged with python, machine learning, deep learning, natural language processing

If you're doing machine learning and run into a classification problem with many categories and only a few examples per category, you might think you're in trouble 😨. Acquiring new data to fix this is not always easy, or even possible. Luckily, efficient techniques such as Siamese Neural Networks exist to deal with this situation 🕺.
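
To make the idea a bit more concrete, here is a minimal sketch of a Siamese architecture in Keras (not the exact model from the post): two inputs go through the same shared encoder, and the network is trained on pairs to predict whether they belong to the same class. All layer sizes and the input dimension are arbitrary placeholders.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers

INPUT_DIM = 100  # placeholder feature size

# Shared encoder: both branches reuse the exact same weights
encoder_input = layers.Input(shape=(INPUT_DIM,))
encoded = layers.Dense(64, activation="relu")(encoder_input)
encoded = layers.Dense(32)(encoded)
encoder = Model(encoder_input, encoded)

left = layers.Input(shape=(INPUT_DIM,))
right = layers.Input(shape=(INPUT_DIM,))
emb_left, emb_right = encoder(left), encoder(right)

# Compare the two embeddings and predict "same class or not"
distance = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_left, emb_right])
same_class = layers.Dense(1, activation="sigmoid")(distance)

siamese = Model([left, right], same_class)
siamese.compile(optimizer="adam", loss="binary_crossentropy")
# siamese.fit([X_left, X_right], pairs_have_same_label, ...) on example pairs
```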


Continue reading

LIME of words: interpreting Recurrent Neural Networks predictions

Posted on Tue 12 September 2017 in posts • Tagged with python, machine learning, deep learning

This is the second part of my blog post on the LIME interpretation method. For a reminder of what LIME is and its purpose, please read the first part. This second part is a quick application of the same algorithm to a deep learning (LSTM) model, whereas the first part focused on explaining the predictions of a random forest.
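
For a rough idea of what this looks like in code, below is a minimal sketch using LIME's text explainer; the classifier_fn here is a random stand-in, whereas a real use would wrap the LSTM's tokenization and prediction step.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

# Stand-in for a real text model: maps a list of raw strings to class
# probabilities (replace with your own tokenizer + LSTM predict step)
def classifier_fn(texts):
    rng = np.random.default_rng(0)
    probs = rng.random((len(texts), 2))
    return probs / probs.sum(axis=1, keepdims=True)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "a short review to explain",  # the sample whose prediction we interpret
    classifier_fn,
    num_features=6,               # how many words to keep in the explanation
)
print(explanation.as_list())      # (word, contribution weight) pairs
```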


Continue reading

LIME of words: how to interpret your machine learning model predictions

Posted on Mon 07 August 2017 in posts • Tagged with python, machine learning

In this blog post I will share experiments with the LIME (Local Interpretable Model-agnostic Explanations) interpretation method. LIME was introduced in 2016 by Marco Ribeiro and his collaborators in the paper “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. The purpose of this method is to explain a model's prediction for a specific sample in a human-interpretable way.
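
As a preview of the kind of experiment discussed in the post, here is a minimal sketch with LIME's tabular explainer on a throwaway model; the iris data and the random forest are placeholders, not the actual setup used in the article.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Placeholder model and data, just to have a prediction to explain
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
)

# Explain the model's prediction for one specific sample
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, contribution weight) pairs
```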


Continue reading