Posts by Tags

Deep Learning

Explaining Deep Reinforcement Learning models with Linear Model U-Trees

Published:

Read this post at Medium.
A popular deep learning explainability approach is to approximate the behavior of a pre-trained deep learning model with a less complex but interpretable model. Decision trees are quite useful here because they are easy to interpret while also providing good performance. In this post, I summarise an explainability method using Linear Model U-Trees (LMUTs) by Guiliang Liu, Oliver Schulte, Wang Zhu and Qingcan Li in their paper Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees.
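As a rough illustration of the approximation idea (not the paper's actual method), here is a minimal mimic-learning sketch that fits an ordinary scikit-learn regression tree, rather than an LMUT, to the outputs of an invented stand-in for a pre-trained Q-function:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Stand-in for a pre-trained deep RL model's Q-function
# (in the paper this would be a deep Q-network).
def pretrained_q(states):
    return np.sin(states[:, 0]) + 0.5 * states[:, 1] ** 2

# 1. Query the black-box model to build a "mimic" dataset.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(5000, 2))
q_values = pretrained_q(states)

# 2. Fit an interpretable surrogate on the mimic dataset.
surrogate = DecisionTreeRegressor(max_depth=3).fit(states, q_values)

# 3. Inspect the learned rules; fidelity measures how well the
#    surrogate reproduces the original model's behavior.
print(export_text(surrogate, feature_names=["s0", "s1"]))
print("fidelity (R^2 vs. black box):", surrogate.score(states, q_values))
```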

On Local Interpretable Model-agnostic Explanations

Published:

Read this post at Medium.
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability, as it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.
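For a flavor of the mechanism, here is a minimal from-scratch sketch of the LIME idea on tabular data; the black_box model and all parameters are invented for illustration, and in practice one would use the lime package:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box classifier: returns P(class=1) for 2-D inputs.
def black_box(X):
    return 1 / (1 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1])))

def lime_explain(x, predict, n_samples=1000, width=0.75):
    """Explain one prediction with a locally weighted linear model."""
    rng = np.random.default_rng(0)
    # 1. Perturb the instance to probe the model's local behavior.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Weight perturbations by proximity to x (an RBF kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / width ** 2)
    # 3. Fit an interpretable (linear) model to the black box's outputs.
    local = Ridge(alpha=1.0).fit(Z, predict(Z), sample_weight=weights)
    return local.coef_  # per-feature local importance around x

print(lime_explain(np.array([0.2, -0.4]), black_box))
```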

Understanding what RNN learns: Part 1

Published:

Read this post at Medium.
Recurrent Neural Networks (RNNs) are difficult to explain or interpret because of both the underlying neural networks and their recurrent nature. In this post, I look at the elements of an RNN to explain how they work, both individually and in conjunction with one another.
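As a taste of those elements, here is a minimal sketch of a vanilla RNN cell in NumPy, with illustrative sizes and weights:

```python
import numpy as np

# A minimal sketch of one vanilla RNN cell, assuming hidden size 4
# and input size 3; all names and sizes are illustrative.
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(4, 3))  # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(4, 4))  # hidden-to-hidden (recurrent) weights
b = np.zeros(4)

def rnn_step(x_t, h_prev):
    # The same weights are reused at every time step -- this weight
    # sharing is what makes the network "recurrent".
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(4)                      # initial hidden state
for x_t in rng.normal(size=(5, 3)):  # a toy sequence of 5 inputs
    h = rnn_step(x_t, h)             # state carries sequence context
print(h)
```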

Keras Functional models: Few pointers for debugging

Published:

Read this post at Medium.
I learned the ropes of deep learning model building with Keras. It’s easy to learn and use, and it offers many useful APIs, such as early stopping and learning-rate scheduling. As a beginner, you may want to familiarize yourself with some basics of debugging. In this post, I list a few things that I found useful for that.
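For a flavor of such pointers (illustrative examples, not necessarily the ones from the post), here is a small Functional model with a shape check, an intermediate-activation probe, and an early-stopping callback:

```python
import numpy as np
import tensorflow as tf

# A small Functional model; layer names are illustrative.
inputs = tf.keras.Input(shape=(8,))
hidden = tf.keras.layers.Dense(16, activation="relu", name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="out")(hidden)
model = tf.keras.Model(inputs, outputs)

# Pointer 1: check layer names and output shapes early and often.
model.summary()

# Pointer 2: build a sub-model to inspect intermediate activations.
probe = tf.keras.Model(inputs, model.get_layer("hidden").output)
print(probe(np.zeros((2, 8), dtype="float32")).shape)  # (2, 16)

# Pointer 3: callbacks such as EarlyStopping are easy to attach.
model.compile(optimizer="adam", loss="mse")
model.fit(
    np.random.rand(64, 8), np.random.rand(64, 1),
    validation_split=0.25, epochs=5, verbose=0,
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)],
)
```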

Embedding

Explainability

Explaining Deep Reinforcement Learning models with Linear Model U-Trees

Published:

Read this post at Medium.
A popular deep learning explainability approach is to approximate the behavior of a pre-trained deep learning model with a less complex but interpretable model. Decision trees are quite useful here because they are easy to interpret while also providing good performance. In this post, I summarise an explainability method using Linear Model U-Trees (LMUTs) by Guiliang Liu, Oliver Schulte, Wang Zhu and Qingcan Li in their paper Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees.

On Local Interpretable Model-agnostic Explanations

Published:

Read this post at Medium.
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability, as it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.

Interpretability

Explaining Deep Reinforcement Learning models with Linear Model U-Trees

Published:

Read this post at Medium.
A popular deep learning explainability approach is to approximate the behavior of a pre-trained deep learning model with a less complex but interpretable model. Decision trees are quite useful here because they are easy to interpret while also providing good performance. In this post, I summarise an explainability method using Linear Model U-Trees (LMUTs) by Guiliang Liu, Oliver Schulte, Wang Zhu and Qingcan Li in their paper Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees.

On Local Interpretable Model-agnostic Explanations

Published:

Read this post at Medium.
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability, as it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.

Keras

Understanding what RNN learns: Part 1

Published:

Read this post at Medium.
Recurrent Neural Networks (RNNs) are difficult to explain or interpret because of both the underlying neural networks and their recurrent nature. In this post, I look at the elements of an RNN to explain how they work, both individually and in conjunction with one another.

Keras Functional models: Few pointers for debugging

Published:

Read this post at Medium.
I learned the ropes of deep learning model building with Keras. It’s easy to learn and use, and it offers many useful APIs, such as early stopping and learning-rate scheduling. As a beginner, you may want to familiarize yourself with some basics of debugging. In this post, I list a few things that I found useful for that.

Machine Learning

Explaining Deep Reinforcement Learning models with Linear Model U-Trees

Published:

Read this post at Medium.
A popular deep learning explainability approach is to approximate the behavior of a pre-trained deep learning model with a less complex but interpretable model. Decision trees are quite useful here because they are easy to interpret while also providing good performance. In this post, I summarise an explainability method using Linear Model U-Trees (LMUTs) by Guiliang Liu, Oliver Schulte, Wang Zhu and Qingcan Li in their paper Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees.

On Local Interpretable Model-agnostic Explanations

Published:

Read this post at Medium.
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular techniques for deep learning explainability, as it covers a wide range of input types (e.g. images, text, tabular data) and treats the model as a black box, which means it can be used with any deep learning model. In this post, I explain how LIME works with the help of some intermediate outputs.

Neural Networks: Mathematics and Interpretation

Published:

Read this post at Medium.
Neural Networks (NNs) are the basic units of deep learning models. Therefore, the first step towards understanding how deep learning models work is to understand how NNs work. In this post, I use a mathematical approach, with plots and matrix algebra, to shed light on how the elements of an NN come together to form a powerful learning mechanism.
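As a small taste of the matrix-algebra view (with invented weights and sizes), here is a one-hidden-layer forward pass in NumPy:

```python
import numpy as np

# A minimal sketch of a one-hidden-layer NN forward pass in matrix
# form; all weights and sizes are illustrative, not from the post.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # 2 samples, 2 features

W1 = np.array([[0.5, -0.3, 0.8],
               [0.1,  0.7, -0.2]])     # 2 features -> 3 hidden units
b1 = np.zeros(3)
W2 = np.array([[0.6], [-0.4], [0.9]])  # 3 hidden units -> 1 output
b2 = np.zeros(1)

H = np.tanh(X @ W1 + b1)  # hidden layer: affine map + nonlinearity
Y = H @ W2 + b2           # output layer: another affine map
print(H)                  # intermediate representation
print(Y)                  # network output
```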

Understanding what RNN learns: Part 1

Published:

Read this post at Medium.
Recurrent Neural Networks (RNNs) are difficult to explain or interpret because of both the underlying neural networks and their recurrent nature. In this post, I look at the elements of an RNN to explain how they work, both individually and in conjunction with one another.

Keras Functional models: Few pointers for debugging

Published:

Read this post at Medium.
I learned the ropes of deep learning model building with Keras. It’s easy to learn and use, and it offers many useful APIs, such as early stopping and learning-rate scheduling. As a beginner, you may want to familiarize yourself with some basics of debugging. In this post, I list a few things that I found useful for that.

Neural Networks

Neural Networks: Mathematics and Interpretation

Published:

Read this post at Medium.
Neural Networks (NNs) are the basic units of deep learning models. Therefore, the first step towards understanding how deep learning models work is to understand how NNs work. In this post, I use a mathematical approach, with plots and matrix algebra, to shed light on how the elements of an NN come together to form a powerful learning mechanism.

RNN

Understanding what RNN learns: Part 1

Published:

Read this post at Medium.
Recurrent Neural Networks (RNNs) are difficult to explain or interpret because of both the underlying neural networks and their recurrent nature. In this post, I look at the elements of an RNN to explain how they work, both individually and in conjunction with one another.