Title | Authors
---|---
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes | Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
Towards A Rigorous Science of Interpretable Machine Learning | Finale Doshi-Velez, Been Kim