Understanding the SHAP interpretation method: Kernel SHAP

Posted on Sat 29 February 2020 in posts • Tagged with python, machine learning, interpretable machine learning

SHAP is certainly one of the most important tools in the interpretable machine learning toolbox nowadays. It is used by a wide variety of actors, mentioned extensively in the research literature, and in my experience it provides the best insights into a model's behavior.

Shap schema (from https://github.com/slundberg/shap)

This blog article gives a detailed yet simple explanation of Kernel SHAP, the core of the SHAP framework.


Continue reading
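As a taste of what the post covers: Kernel SHAP estimates Shapley values by fitting a weighted linear regression over feature coalitions. The sketch below is a minimal illustration of that idea, not the `shap` library's actual implementation; the coalition enumeration, the large weight used to enforce the efficiency constraint, and the toy linear model are all simplifying assumptions for small inputs.

```python
import numpy as np
from itertools import product
from math import comb

def kernel_shap(f, x, reference):
    """Toy Kernel SHAP: weighted regression over all coalitions (small M only)."""
    M = len(x)
    Z, y, w = [], [], []
    for z in product([0, 1], repeat=M):
        z = np.array(z)
        s = z.sum()
        if s == 0:
            continue  # empty coalition is absorbed into the f(reference) offset
        # SHAP kernel weight; the full coalition gets a very large weight as a
        # common trick to approximately enforce the efficiency constraint.
        weight = 1e6 if s == M else (M - 1) / (comb(M, s) * s * (M - s))
        # Features in the coalition take their value from x, others from reference.
        y.append(f(np.where(z == 1, x, reference)) - f(reference))
        Z.append(z)
        w.append(weight)
    Z, y, w = np.array(Z, float), np.array(y, float), np.array(w, float)
    # Weighted least squares: phi = (Z' W Z)^-1 Z' W y
    ZtW = Z.T * w
    return np.linalg.solve(ZtW @ Z, ZtW @ y)

# For a linear model with a zero reference, the attributions should
# recover each feature's contribution w_i * x_i.
w_lin = np.array([1.0, 2.0, 3.0])
f = lambda x: float(w_lin @ x)
phi = kernel_shap(f, np.array([1.0, 1.0, 1.0]), np.zeros(3))
```

Enumerating all coalitions is only feasible for a handful of features; the real library samples coalitions instead.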

Understanding the SHAP interpretation method: Shapley values

Posted on Fri 27 December 2019 in posts • Tagged with python, machine learning, interpretable machine learning

Explainable artificial intelligence (XAI, a.k.a. interpretable machine learning) is a hot topic these days. The goal of XAI is to provide explanations for machine learning models' predictions, such that humans can understand the reasons that lead to those predictions.

Interpretable machine learning Google Scholar searches over the last few years.

It is important to know the reasons behind an algorithm's predictions in a variety of contexts:


Continue reading
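As a taste of what the post builds on: Shapley values come from cooperative game theory and can be computed exactly for small games by averaging each player's marginal contribution over all coalitions. The snippet below is an illustrative sketch under that definition; the toy additive game and the helper names are my own, not from the post.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating every coalition (small games only)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Probability that coalition S forms before player i joins.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Marginal contribution of player i to coalition S.
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi[i] = total
    return phi

# Toy additive game: each player contributes a fixed amount, so the
# Shapley values should recover exactly those contributions.
weights = {0: 1.0, 1: 2.0, 2: 3.0}
phi = shapley_values([0, 1, 2], lambda S: sum(weights[p] for p in S))
```

The enumeration is exponential in the number of players, which is precisely why approximations like Kernel SHAP exist for machine learning models with many features.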