Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting

Hand-crafting high-quality prompts to optimize the performance of language models is a complicated and labor-intensive process. Furthermore, when migrating to newer, smaller, or weaker models (possibly for latency or cost gains), prompts need to be updated to re-optimize task performance. We propose Concept Distillation (CD), an automatic prompt optimization technique for enhancing weaker models on complex tasks. CD involves: (1) collecting mistakes made by the weak model with a base prompt (initialization), (2) using a strong model to generate reasons for these mistakes and derive rules/concepts for the weak model (induction), and (3) filtering these rules based on validation-set performance and integrating them into the base prompt (deduction/verification). We evaluated CD on NL2Code and mathematical reasoning tasks, observing significant performance boosts for small and weaker language models. Notably, Mistral-7B's accuracy on MultiArith increased by 20%, and Phi-3-mini-3.8B's accuracy on HumanEval rose by 34%. Compared to other automated methods, CD offers an effective, cost-efficient strategy for improving weak models' performance on complex tasks and enables seamless workload migration across different language models without compromising performance.
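The three CD stages described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the model callables, prompt wording, and the greedy validation filter are hypothetical stand-ins used only to show the initialization/induction/deduction flow under simple exact-match grading.

from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (question, gold_answer)


def collect_mistakes(weak: Callable[[str], str], base_prompt: str,
                     train: List[Example]) -> List[Tuple[str, str, str]]:
    """Initialization: run the weak model and keep (question, gold, wrong_answer)."""
    mistakes = []
    for question, gold in train:
        answer = weak(f"{base_prompt}\n\nQ: {question}\nA:")
        if answer.strip() != gold.strip():
            mistakes.append((question, gold, answer))
    return mistakes


def induce_rules(strong: Callable[[str], str],
                 mistakes: List[Tuple[str, str, str]]) -> List[str]:
    """Induction: ask the strong model to explain each mistake and propose a rule."""
    rules = []
    for question, gold, wrong in mistakes:
        prompt = (
            "A smaller model answered incorrectly.\n"
            f"Question: {question}\nIts answer: {wrong}\nCorrect answer: {gold}\n"
            "Explain the likely error and state one general rule (one sentence) "
            "that would prevent it."
        )
        rules.append(strong(prompt).strip())
    return rules


def accuracy(weak: Callable[[str], str], prompt: str, val: List[Example]) -> float:
    """Exact-match accuracy of the weak model under a given prompt."""
    correct = sum(
        weak(f"{prompt}\n\nQ: {q}\nA:").strip() == gold.strip() for q, gold in val
    )
    return correct / max(len(val), 1)


def verify_and_integrate(weak: Callable[[str], str], base_prompt: str,
                         rules: List[str], val: List[Example]) -> str:
    """Deduction/verification: keep only rules that improve validation accuracy."""
    prompt = base_prompt
    best = accuracy(weak, prompt, val)
    for rule in rules:
        candidate = f"{prompt}\nRule: {rule}"
        score = accuracy(weak, candidate, val)
        if score > best:  # greedy filter; an assumed criterion, not necessarily the paper's
            prompt, best = candidate, score
    return prompt

In practice, `weak` and `strong` would wrap chat-completion calls to the weak and strong models; the returned prompt is the distilled prompt used at inference time.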
@article{boateng2025_2408.09365,
  title={Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting},
  author={Emmanuel Aboah Boateng and Cassiano O. Becker and Nabiha Asghar and Kabir Walia and Ashwin Srinivasan and Ehi Nosakhare and Soundar Srinivasan and Victor Dibia},
  journal={arXiv preprint arXiv:2408.09365},
  year={2025}
}