arXiv: 2108.06898
Neural-to-Tree Policy Distillation with Policy Improvement Criterion
16 August 2021
Zhaorong Li, Yang Yu, Yingfeng Chen, Ke Chen, Zhipeng Hu, Changjie Fan
Papers citing "Neural-to-Tree Policy Distillation with Policy Improvement Criterion" (3 of 3 papers shown):
- "Practical Knowledge Distillation: Using DNNs to Beat DNNs" by Chungman Lee, Pavlos Anastasios Apostolopulos, Igor L. Markov. FedML. 23 Feb 2023.
- "MSVIPER: Improved Policy Distillation for Reinforcement-Learning-Based Robot Navigation" by Aaron M. Roth, Jing Liang, Ram D. Sriram, Elham Tabassi, Tianyi Zhou. 19 Sep 2022.
- "Towards A Rigorous Science of Interpretable Machine Learning" by Finale Doshi-Velez, Been Kim. XAI, FaML. 28 Feb 2017.