Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs

21 March 2025
Reem Gody
Mohamed Abdelghaffar
Mohammed Jabreel
Ahmed Tawfik
Abstract

Large language models (LLMs) have showcased remarkable capabilities in conversational AI, enabling open-domain responses in chatbots as well as advanced processing of conversations such as summarization, intent classification, and insights generation. However, these models are resource-intensive, demanding substantial memory and computational power. To address this, we propose a cost-effective solution that filters conversational snippets of interest for LLM processing, tailored to the target downstream application, rather than processing every snippet. In this work, we introduce an approach that leverages knowledge distillation from LLMs to develop an intent-based filter for multi-party conversations, optimized for compute-constrained environments. Our method combines different strategies to create a diverse multi-party conversational dataset that is annotated with the target intents and is then used to fine-tune the MobileBERT model for multi-label intent classification. This model achieves a balance between efficiency and performance, effectively filtering conversation snippets based on their intents. By passing only the relevant snippets to the LLM for further processing, our approach significantly reduces overall operational costs, with savings that depend on the intents and the data distribution, as demonstrated in our experiments.
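As a rough illustration of the inference-time filtering step described above, here is a minimal sketch assuming a Hugging Face `transformers` setup. The intent labels, the 0.5 threshold, and the checkpoint name are illustrative assumptions, not values from the paper; in the paper's pipeline the classifier would be a MobileBERT model fine-tuned on the distilled, intent-annotated dataset, whereas this sketch loads the base checkpoint with a freshly initialized classification head.

```python
# Hypothetical sketch of an intent-based pre-filter in front of an LLM.
# Intent names and threshold are illustrative, not from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INTENTS = ["schedule_meeting", "action_item", "question", "decision"]  # assumed labels
THRESHOLD = 0.5  # assumed per-label decision threshold

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
# In practice this would load the fine-tuned checkpoint; here the
# classification head is untrained, so outputs are only for illustration.
model = AutoModelForSequenceClassification.from_pretrained(
    "google/mobilebert-uncased",
    num_labels=len(INTENTS),
    problem_type="multi_label_classification",  # sigmoid per label, not softmax
)
model.eval()

def detect_intents(snippet: str) -> list[str]:
    """Return the target intents detected in a conversation snippet, if any."""
    inputs = tokenizer(snippet, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Multi-label: independent per-intent probabilities via sigmoid.
    probs = torch.sigmoid(logits).squeeze(0).tolist()
    return [intent for intent, p in zip(INTENTS, probs) if p >= THRESHOLD]

snippet = "Can we move the sync to Thursday and send out the agenda?"
detected = detect_intents(snippet)
if detected:
    # Only snippets matching a target intent are forwarded to the LLM;
    # everything else is dropped, which is where the cost savings come from.
    print(f"Forward to LLM, matched intents: {detected}")
else:
    print("Discard: no target intent detected.")
```

The multi-label setup (sigmoid per intent rather than a softmax over intents) matters here because a single snippet can express several intents at once, and the filter should forward it if any target intent fires.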

View on arXiv
@article{gody2025_2503.17336,
  title={Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs},
  author={Reem Gody and Mohamed Abdelghaffar and Mohammed Jabreel and Ahmed Tawfik},
  journal={arXiv preprint arXiv:2503.17336},
  year={2025}
}