## Explaining Deep Reinforcement Learning models with Linear Model U-Trees

**Published:**

Read this post at Medium.

A popular deep learning explainability approach is to approximate the behavior of a pre-trained deep learning model with a less complex but interpretable model. Decision trees are quite useful here because they are often easy to interpret while also providing good performance. In this post, I summarise an explainability method using Linear Model U-Trees (LMUTs) by *Guiliang Liu*, *Oliver Schulte*, *Wang Zhu* and *Qingcan Li* from their paper *Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees*.
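To make the general idea concrete, here is a minimal sketch of this kind of mimic learning: a black-box "teacher" model is trained first, and an interpretable decision tree "student" is then fitted to the teacher's predictions rather than the original labels. This is only an illustration of the distillation principle, not the paper's LMUT algorithm; the models, data, and variable names are my own assumptions.

```python
# Hypothetical distillation sketch: approximate a black-box model
# with a shallow decision tree (not the paper's LMUT method).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # stand-in target signal

# "Black-box" teacher: a small neural network.
teacher = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
teacher.fit(X, y)

# Interpretable student: a shallow tree fitted to the teacher's outputs.
student = DecisionTreeRegressor(max_depth=4, random_state=0)
student.fit(X, teacher.predict(X))

# Fidelity: how closely the student mimics the teacher (R^2 on teacher outputs).
fidelity = student.score(X, teacher.predict(X))
print(f"fidelity R^2: {fidelity:.3f}")
```

The tree can then be read as a small set of if/else rules, which is what makes this family of methods attractive for explainability; the LMUT approach in the paper extends this idea to the reinforcement learning setting.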