Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction

21 February 2025
Chen Wang
Fangxin Wang
Ruocheng Guo
Yueqing Liang
Philip S. Yu
Abstract

In Sequential Recommendation Systems (SRecsys), traditional training approaches that rely on Cross-Entropy (CE) loss often prioritize accuracy but fail to align well with user satisfaction metrics. CE loss focuses on maximizing the confidence of the ground truth item, which is challenging to achieve universally across all users and sessions. It also overlooks the practical acceptability of ranking the ground truth item within the top-K positions, a common metric in SRecsys. To address these limitations, we propose CPFT, a novel fine-tuning framework that integrates Conformal Prediction (CP)-based losses with CE loss to optimize accuracy alongside confidence that better aligns with widely used top-K metrics. CPFT embeds CP principles into the training loop using differentiable proxy losses and computationally efficient calibration strategies, enabling the generation of high-confidence prediction sets. These sets focus on items with high relevance while maintaining robust coverage guarantees. Extensive experiments on five real-world datasets and four distinct sequential models demonstrate that CPFT improves precision metrics and confidence calibration. Our results highlight the importance of confidence-aware fine-tuning in delivering accurate, trustworthy recommendations that enhance user satisfaction.

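The abstract outlines combining the standard Cross-Entropy objective with a differentiable Conformal Prediction proxy loss during fine-tuning. The following PyTorch sketch shows one way such a combined objective could be written, assuming a softmax-based nonconformity score (1 minus the predicted probability), a batch-level quantile for calibration, and hypothetical names cp_proxy_loss, cpft_loss, alpha, temperature, and lambda_cp; it is an illustration of the idea, not the paper's implementation.

import torch
import torch.nn.functional as F

def cp_proxy_loss(logits, targets, alpha=0.1, temperature=0.1):
    # Differentiable surrogate for the conformal prediction-set size.
    # Nonconformity score of an item: 1 - its softmax probability.
    probs = F.softmax(logits, dim=-1)                     # (batch, num_items)
    true_prob = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    nonconf = 1.0 - true_prob                             # scores of ground-truth items
    # The (1 - alpha)-quantile of the ground-truth scores acts as the threshold;
    # the paper uses dedicated calibration strategies, while the batch is used
    # here only to keep the sketch self-contained.
    q_hat = torch.quantile(nonconf.detach(), 1.0 - alpha)
    # Soft indicator that an item enters the prediction set (its nonconformity
    # score falls below the threshold); the sigmoid keeps the set-size
    # surrogate differentiable.
    soft_membership = torch.sigmoid((q_hat - (1.0 - probs)) / temperature)
    set_size = soft_membership.sum(dim=-1)                # soft prediction-set size
    return set_size.mean()

def cpft_loss(logits, targets, lambda_cp=0.5):
    # Weighted combination of CE and the CP set-size surrogate.
    ce = F.cross_entropy(logits, targets)
    return ce + lambda_cp * cp_proxy_loss(logits, targets)

# Example fine-tuning step with a hypothetical sequential recommender:
#   logits = model(item_sequences)            # (batch, num_items)
#   loss = cpft_loss(logits, next_items)      # next_items: (batch,) item indices
#   loss.backward(); optimizer.step()

Minimizing the soft set size pushes probability mass onto a small set of relevant items, while the calibrated quantile is what ties the set to a coverage level; the Cross-Entropy term still anchors accuracy on the ground-truth item.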
@article{wang2025_2402.08976,
  title={Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction},
  author={Chen Wang and Fangxin Wang and Ruocheng Guo and Yueqing Liang and Philip S. Yu},
  journal={arXiv preprint arXiv:2402.08976},
  year={2025}
}