Contrastive Distillation of Emotion Knowledge from LLMs for Zero-Shot Emotion Recognition

Abstract

The ability to handle various emotion labels without dedicated training is crucial for building adaptable Emotion Recognition (ER) systems. Conventional ER models rely on training with fixed label sets and struggle to generalize beyond them. On the other hand, Large Language Models (LLMs) have shown strong zero-shot ER performance across diverse label spaces, but their scale limits their use on edge devices. In this work, we propose a contrastive distillation framework that transfers rich emotional knowledge from LLMs into a compact model without human annotations. We use GPT-4 to generate descriptive emotion annotations, offering supervision beyond fixed label sets. By aligning text samples with emotion descriptors in a shared embedding space, our method enables zero-shot prediction across different emotion classes, granularities, and label schemas. The distilled model is effective across multiple datasets and label spaces, outperforming strong baselines of similar size and approaching GPT-4's zero-shot performance, while being over 10,000 times smaller.
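
To illustrate how a shared embedding space supports zero-shot prediction over arbitrary label sets, the sketch below pairs a contrastive (InfoNCE) objective with similarity-based inference. It is a minimal illustration under stated assumptions, not the authors' released implementation: the SentenceTransformer checkpoint ("all-MiniLM-L6-v2"), the emotion descriptor texts, and the temperature value are all illustrative stand-ins for the distilled student model and the GPT-4-generated descriptors described in the paper.

import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# Stand-in compact student encoder; the paper's distilled model would go here.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def embed(texts):
    """Encode texts and L2-normalize so dot products equal cosine similarities."""
    return F.normalize(encoder.encode(texts, convert_to_tensor=True), dim=-1)

def info_nce_loss(text_emb, desc_emb, temperature=0.07):
    """Contrastive loss: pull each text toward its paired LLM-generated emotion
    descriptor and push it away from the other descriptors in the batch."""
    logits = text_emb @ desc_emb.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(len(text_emb), device=text_emb.device)  # i-th text matches i-th descriptor
    return F.cross_entropy(logits, targets)

def zero_shot_predict(text, label_descriptors):
    """Pick the label whose descriptor embedding is closest to the text embedding.
    The label set is supplied at inference time, so no retraining is needed."""
    sims = embed([text]) @ embed(list(label_descriptors.values())).T
    return list(label_descriptors.keys())[sims.argmax().item()]

# Hypothetical descriptor texts for one possible label schema.
labels = {
    "joy": "The speaker expresses happiness, delight, or satisfaction.",
    "anger": "The speaker expresses irritation, frustration, or rage.",
    "sadness": "The speaker expresses sorrow, disappointment, or grief.",
}
print(zero_shot_predict("I can't believe they cancelled my flight again!", labels))

Because prediction reduces to nearest-descriptor matching, the same trained encoder can be queried with coarser or finer label sets simply by swapping the descriptor dictionary.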

@article{niu2025_2505.18040,
  title={Contrastive Distillation of Emotion Knowledge from LLMs for Zero-Shot Emotion Recognition},
  author={Minxue Niu and Emily Mower Provost},
  journal={arXiv preprint arXiv:2505.18040},
  year={2025}
}