On Local Interpretable Model-agnostic Explanations

Read this post at Medium.
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability, as it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.
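To make the black-box idea concrete, here is a minimal sketch (not taken from the post itself) using the `lime` Python package with a scikit-learn classifier on the Iris dataset; the dataset and model are stand-ins chosen only for illustration. The key point is that the explainer never looks inside the model, it only calls a prediction function.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Stand-in "black box": any model exposing a probability function would do.
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction: LIME perturbs this instance, queries the
# black box on the perturbed samples, and fits a local linear surrogate.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```

Swapping the random forest for a neural network only requires changing the prediction function passed to `explain_instance`, which is what makes the method model-agnostic.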