Explaining explainable AI: understanding SHAP

Posted on Fri 27 December 2019 in posts • Tagged with python, machine learning, interpretable machine learning

Explaining explainable AI: understanding SHAP

Explainable artificial intelligence (XAI, a.k.a. interpretable machine learning) is a thing these days. The goal of XAI is to provide explanations for machine learning models' predictions, so that humans can understand the reasons that lead to those predictions.
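To give a feel for what such an explanation looks like in practice, here is a minimal sketch of my own (not code from the post), assuming a tree-based scikit-learn model and the shap library: SHAP attributes one specific prediction to the individual input features.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train any tree-based model; SHAP then attributes each prediction to the features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # per-feature contributions for one sample
print(shap_values)
```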

It is important to know the reasons behind an algorithm's predictions in a variety of contexts:


Continue reading

Few-shot learning in NLP: many-classes classification from few examples

Posted on Sun 19 August 2018 in posts • Tagged with python, machine learning, deep learning, natural language processing

Few-shot learning on textual data with siamese neural networks

If you're doing machine learning and run into a classification problem with many categories but only a few examples per category, it is usually thought that you're in trouble 😨. Acquiring new data to solve this issue is not always easy, or even possible. Luckily, we'll see that efficient techniques exist to deal with this situation 🕺.
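As a rough illustration of the kind of technique the post covers (a minimal Keras sketch of my own, not the post's exact architecture), a siamese network trains a shared encoder to decide whether two examples belong to the same class, which is what lets it generalise from very few examples per class:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_siamese(input_dim, embedding_dim=64):
    # Shared encoder: both inputs go through the same weights.
    encoder = keras.Sequential([
        layers.Dense(128, activation="relu"),
        layers.Dense(embedding_dim),
    ])
    left = keras.Input(shape=(input_dim,))
    right = keras.Input(shape=(input_dim,))
    # Compare the two embeddings and predict "same class?" with a sigmoid head.
    diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([encoder(left), encoder(right)])
    out = layers.Dense(1, activation="sigmoid")(diff)
    model = keras.Model([left, right], out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Toy usage: pairs of feature vectors, label 1 if both come from the same category.
model = build_siamese(input_dim=300)
x1, x2 = np.random.rand(32, 300), np.random.rand(32, 300)
same = np.random.randint(0, 2, size=(32,))
model.fit([x1, x2], same, epochs=1, verbose=0)
```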


Continue reading

LIME of words: interpreting Recurrent Neural Networks predictions

Posted on Tue 12 September 2017 in posts • Tagged with python, machine learning, deep learning

Interpreting recurrent neural networks with LIME

This is the second part of my blog post on the LIME interpretation model. For a reminder of what LIME is and what it is for, please read the first part. This second part is a quick application of the same algorithm to a deep learning (LSTM) model, whereas the first part focused on explaining the predictions of a random forest.
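For a quick feel for the API, here is a minimal, self-contained sketch of my own; it swaps in a small scikit-learn text pipeline in place of the post's LSTM, since LIME only needs a function that maps raw texts to class probabilities (a thin wrapper around the Keras model's predict plays the same role for an LSTM):

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus and classifier standing in for real data and the LSTM model.
texts = ["great movie, loved it", "awful plot and terrible acting",
         "wonderful and touching story", "boring, bad and predictable"]
labels = [1, 0, 1, 0]
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "a wonderful movie with a boring ending",
    pipeline.predict_proba,   # any texts -> probabilities function works here
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...]
```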


Continue reading

LIME of words: how to interpret your machine learning model predictions

Posted on Mon 07 August 2017 in posts • Tagged with python, machine learning

Experiments with the LIME interpretation model

In this blog post I will share experiments with the LIME (Local Interpretable Model-agnostic Explanations) interpretation model. LIME was introduced in 2016 by Marco Ribeiro and his collaborators in the paper “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. The purpose of this method is to explain a model's prediction for a specific sample in a human-interpretable way.
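As a minimal, self-contained sketch of my own (not the exact setup from the post), this is roughly what asking LIME for an explanation looks like with a random forest on tabular data:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any classifier; LIME treats it as a black box through predict_proba.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0],          # the specific sample whose prediction we want explained
    model.predict_proba,   # LIME only needs the probability function
    num_features=4,
)
print(explanation.as_list())  # [(feature condition, weight), ...]
```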


Continue reading