A popular deep learning explainability approach is to approximate the behaviour of a pre-trained deep learning model with a less complex but interpretable learning method. Decision trees are well suited here because they are easy to interpret while still offering good performance. In this post, I summarise an explainability method that uses Linear Model U-Trees (LMUTs).
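The surrogate idea behind this family of methods can be sketched in a few lines. This is not the LMUT algorithm itself; it uses a plain scikit-learn decision tree as a stand-in interpretable model, and the toy data and model sizes are illustrative assumptions.

```python
# Surrogate-model sketch: fit an interpretable decision tree to mimic
# the *predictions* of a black-box model (here, a small MLP).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # toy target function

# The black box we want to explain.
black_box = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
black_box.fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels,
# so it approximates the model's behaviour rather than the data.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how well the shallow tree reproduces the black box.
fidelity = surrogate.score(X, black_box.predict(X))
```

A shallow `max_depth` keeps the tree readable; fidelity measures how much of the black box's behaviour the readable tree actually captures.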

Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability: it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.
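The core LIME recipe can be sketched from scratch for tabular data: perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear model locally. This is a minimal illustration of the idea, not the `lime` library; the black-box function, kernel width, and sample count are illustrative assumptions.

```python
# From-scratch sketch of the LIME idea for tabular data.
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Stand-in for any model's predict function: f(x) = x0^2 + 3*x1.
    return X[:, 0] ** 2 + 3 * X[:, 1]

def lime_explain(x, predict_fn, n_samples=1000, width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance locally.
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    # 2. Weight samples by proximity to x (exponential kernel).
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    # 3. Fit a weighted linear model; its coefficients are the explanation.
    local = Ridge(alpha=1.0).fit(Z, predict_fn(Z), sample_weight=weights)
    return local.coef_

coefs = lime_explain(np.array([1.0, 0.0]), black_box)
# Near x = (1, 0) the local slopes are roughly 2 (for x0) and 3 (for x1).
```

The coefficients recover the black box's local gradient, which is exactly what makes the explanation "local": a different instance would yield different attributions.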

Neural Networks (NNs) are the basic units of deep learning models. Therefore, the first step towards understanding how deep learning models work is to understand how NNs work. In this post, I take a mathematical approach, using plots and matrix algebra, to shed light on how the elements of an NN come together to form a powerful learning mechanism.
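The matrix-algebra view mentioned above can be condensed into a toy forward pass: each layer is an affine transform followed by a nonlinearity. The layer sizes and random weights here are purely illustrative.

```python
# Toy forward pass through a one-hidden-layer NN, written as matrix algebra.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4,))      # input vector: 4 features
W1 = rng.normal(size=(3, 4))   # hidden layer weights: 3 units x 4 inputs
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))   # output layer weights: 1 unit x 3 hidden units
b2 = np.zeros(1)

h = np.tanh(W1 @ x + b1)       # affine transform + elementwise nonlinearity
y = W2 @ h + b2                # output: weighted sum of hidden activations
```

Every dense layer repeats this same pattern; stacking them, with a nonlinearity in between, is what gives the network its expressive power.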

Building upon the previous post, in this part I use layer fusion to explain the working of vanilla Recurrent Neural Networks using embeddings.

Recurrent Neural Networks (RNNs) are difficult to explain or interpret because of both the underlying neural networks and their recurrent nature. In this post, I look at the elements of RNNs to explain how they work, both individually and in conjunction with one another.

I learned the ropes of deep learning model building with Keras. It's easy to learn and use, and it gives the user options to try many useful APIs like early stopping, learning-rate scheduling, etc. As a beginner, you may want to familiarize yourself with some basics of debugging. In this post, I list a few things that I found useful for that.