SHAP is certainly one of the most important tools in the interpretable machine learning toolbox today. It is widely used by practitioners, cited extensively in the research literature, and in my experience it provides the best insights into a model's behavior.
This blog article gives a detailed yet simple explanation of Kernel SHAP, the algorithm at the core of SHAP.
Explainable artificial intelligence (XAI, a.k.a. interpretable machine learning) is a hot topic these days. The goal of XAI is to provide explanations for machine learning models' predictions, so that humans can understand the reasons that lead to those predictions.
It is important to know the reasons behind an algorithm's predictions in a variety of contexts: