Understanding what RNN learns: Part 2
Published:
Read this post at Medium.
Building on the previous post, in this part I use layer fusion to explain how vanilla Recurrent Neural Networks work with embeddings.
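As a toy illustration of one kind of layer fusion (this is my sketch, not necessarily the exact analysis in the post): because an embedding lookup is equivalent to multiplying a one-hot vector by the embedding matrix, the embedding matrix and the RNN's input-to-hidden weights can be fused into a single lookup table. All matrices below are made up for illustration.

```python
# Hypothetical sketch: fusing an embedding layer with the input-to-hidden
# weights of a vanilla RNN. Since an embedding lookup is a one-hot matrix
# product, E @ W_xh can be precomputed into one per-token lookup table.

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

E = [[1.0, 0.0],      # embedding matrix: 3 tokens -> 2-dim vectors
     [0.0, 1.0],
     [0.5, 0.5]]
W_xh = [[2.0, -1.0],  # input-to-hidden weights: 2-dim input -> 2 hidden units
        [0.0, 3.0]]

# Fused table: one row per token, mapping a token id straight to W_xh's output.
fused = matmul(E, W_xh)

token = 2
# Unfused path: look up the embedding, then multiply by W_xh.
unfused_out = matmul([E[token]], W_xh)[0]
# Fused path: a single row lookup, same result.
fused_out = fused[token]
```

Both paths produce identical hidden-layer inputs, which is why the two layers can be studied (or deployed) as one.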
Ph.D. Student at UMD, working on robotic perception and planning
Published:
Read this post at Medium.
A popular deep learning explainability approach is to approximate the behaviour of a pre-trained deep learning model with a less complex but interpretable model. Decision trees are quite useful here because they are often easy to interpret while also providing good performance. In this post, I summarise an explainability method using Linear Model U-Trees (LMUTs) by Guiliang Liu, Oliver Schulte, Wang Zhu and Qingcan Li in their paper Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees.
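The core surrogate idea can be sketched in a few lines (this is a generic illustration of tree-based approximation, not the paper's LMUT algorithm): query the black-box model on sample inputs, then fit a one-split regression stump to mimic its outputs. The `black_box` function and probe points below are made up.

```python
# Hypothetical sketch of a surrogate model: approximate a black-box model's
# predictions with a depth-1 regression tree (a "stump") fit by exhaustive
# split search.

def black_box(x):
    # Stand-in for a pre-trained model's prediction.
    return 1.0 if x > 0.3 else -1.0

xs = [i / 10 for i in range(-10, 11)]   # probe points
ys = [black_box(x) for x in xs]         # black-box labels to imitate

def fit_stump(xs, ys):
    """Find the threshold minimising squared error of the two leaf means."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]  # threshold, left-leaf mean, right-leaf mean

threshold, left_mean, right_mean = fit_stump(xs, ys)
```

The fitted threshold recovers the black-box's decision boundary, and the tree is trivially readable: "if x ≤ threshold predict left_mean, else right_mean". LMUTs extend this idea with linear models at the leaves and an RL-specific training procedure.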
Published:
Read this post at Medium.
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability, as it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.
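LIME's core loop can be sketched for a one-dimensional toy problem (this is my simplified illustration, not the official `lime` package): sample perturbations around one instance, weight them by proximity, and fit a weighted linear model whose slope serves as the local explanation. The `black_box` function and kernel width below are made up.

```python
import math
import random

# Toy sketch of LIME: perturb around one instance, weight samples by a
# proximity kernel, and fit a weighted linear model as the local explanation.

def black_box(x):
    # Stand-in model: nonlinear globally, roughly linear near x = 1.
    return x ** 2

random.seed(0)
x0 = 1.0
samples = [x0 + random.uniform(-0.5, 0.5) for _ in range(200)]
labels = [black_box(x) for x in samples]
# Proximity kernel: perturbations closer to x0 count more.
weights = [math.exp(-((x - x0) ** 2) / 0.1) for x in samples]

# Weighted least squares for y ~ a + b*x (closed form).
sw = sum(weights)
mx = sum(w * x for w, x in zip(weights, samples)) / sw
my = sum(w * y for w, y in zip(weights, labels)) / sw
b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(weights, samples, labels))
     / sum(w * (x - mx) ** 2 for w, x in zip(weights, samples)))
a = my - b * mx
```

Near x0 = 1 the derivative of x² is 2, so the fitted slope `b` lands close to 2: the linear surrogate explains the black box locally even though it would be a poor global fit.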
Published:
Read this post at Medium.
Neural Networks (NNs) are the basic units of deep learning models. Therefore, the first step towards understanding how deep learning models work is to understand how NNs work. In this post, I take a mathematical approach, using plots and matrix algebra, to shed light on how the elements of an NN come together to generate a powerful learning mechanism.
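The matrix-algebra view mentioned above boils down to a short computation (weights below are made up for illustration): a hidden layer is a matrix-vector product followed by a nonlinearity, and the output is a linear readout of the hidden activations.

```python
import math

# Minimal sketch of a one-hidden-layer network in matrix-algebra form:
# h = sigma(W x + b), y = V . h + c, with hypothetical weights.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W, b, V, c):
    # Hidden layer: weighted sums of the inputs, then a nonlinearity.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + bi)
         for row, bi in zip(W, b)]
    # Output layer: a linear combination of hidden activations.
    return sum(v * hi for v, hi in zip(V, h)) + c

W = [[2.0, -1.0], [-1.0, 2.0]]  # 2 inputs -> 2 hidden units
b = [0.0, 0.0]
V = [1.0, 1.0]                  # 2 hidden units -> 1 output
c = -1.0

y = forward([1.0, 1.0], W, b, V, c)
```

Without the nonlinearity, stacking layers would collapse into a single matrix product; the sigmoid between the two products is what makes the composition more expressive than a linear model.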
Published:
Read this post at Medium.
Recurrent Neural Networks (RNNs) are difficult to explain or interpret because of both the underlying neural networks and their recurrent nature. In this post, I look at the elements of RNNs to explain how they work individually and in conjunction with other elements.
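The recurrence that makes RNNs hard to interpret is itself only one line of algebra. A toy single-unit version (weights below are made up) shows how the hidden state carries information from earlier inputs forward in time:

```python
import math

# Toy sketch of the vanilla RNN recurrence, h_t = tanh(w_xh*x_t + w_hh*h_{t-1} + b),
# for a single hidden unit on scalar inputs, with hypothetical weights.

def rnn_step(x, h_prev, w_xh, w_hh, b):
    # One recurrence step: current input plus a weighted copy of the past state.
    return math.tanh(w_xh * x + w_hh * h_prev + b)

def run(seq, w_xh=1.0, w_hh=0.5, b=0.0):
    h = 0.0          # initial hidden state
    states = []
    for x in seq:
        h = rnn_step(x, h, w_xh, w_hh, b)
        states.append(h)
    return states

states = run([1.0, 0.0, 0.0])
```

With all later inputs zero, the state decays geometrically through `w_hh` but still reflects the first input several steps later, which is the recurrent "memory" whose interaction with the other elements the post unpacks.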