LLM-Pruner: On the Structural Pruning of Large Language Models
Neural Information Processing Systems (NeurIPS), 2023

When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario
Annual Meeting of the Association for Computational Linguistics (ACL), 2023

Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
International Joint Conference on Artificial Intelligence (IJCAI), 2022

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
AAAI Conference on Artificial Intelligence (AAAI), 2022