In Sequential Recommendation Systems (SRecsys), traditional training approaches that rely on Cross-Entropy (CE) loss often prioritize accuracy but fail to align well with user satisfaction metrics. CE loss focuses on maximizing the confidence assigned to the ground truth item, which is difficult to achieve universally across all users and sessions. It also overlooks the practical acceptability of ranking the ground truth item within the top-$K$ positions, a common evaluation criterion in SRecsys. To address this limitation, we propose \textbf{CPFT}, a novel fine-tuning framework that integrates Conformal Prediction (CP)-based losses with CE loss, optimizing accuracy alongside a notion of confidence that better aligns with widely used top-$K$ metrics. CPFT embeds CP principles into the training loop through differentiable proxy losses and computationally efficient calibration strategies, enabling the generation of high-confidence prediction sets. These sets concentrate on items with high relevance while maintaining robust coverage guarantees. Extensive experiments on five real-world datasets and four distinct sequential models demonstrate that CPFT improves precision metrics and confidence calibration. Our results highlight the importance of confidence-aware fine-tuning in delivering accurate, trustworthy recommendations that enhance user satisfaction.
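To make the idea concrete, below is a minimal PyTorch sketch of one way a CE-plus-CP objective could be assembled; it is not the paper's actual implementation. It assumes split-conformal calibration on a held-out set with 1 - p_true conformity scores and a sigmoid-relaxed prediction-set-size penalty; the names calibrate_threshold, cp_set_size_proxy, and cpft_style_loss, and the trade-off weight lam, are hypothetical.

import math

import torch
import torch.nn.functional as F


def calibrate_threshold(cal_logits, cal_targets, alpha=0.1):
    # Split-conformal calibration: the conformity score of a calibration
    # example is 1 minus the softmax probability of its ground-truth item;
    # tau is the ceil((n+1)(1-alpha))/n empirical quantile of those scores.
    probs = F.softmax(cal_logits, dim=-1)
    true_prob = probs.gather(1, cal_targets.unsqueeze(1)).squeeze(1)
    scores = 1.0 - true_prob
    n = scores.numel()
    q = min(1.0, math.ceil((n + 1) * (1 - alpha)) / n)
    return torch.quantile(scores, q)


def cp_set_size_proxy(logits, tau, temperature=0.1):
    # Soft prediction-set membership: an item belongs to the set when its
    # softmax probability exceeds 1 - tau; the sigmoid relaxes that hard
    # indicator so gradients can flow during fine-tuning.
    probs = F.softmax(logits, dim=-1)
    soft_membership = torch.sigmoid((probs - (1.0 - tau)) / temperature)
    return soft_membership.sum(dim=-1).mean()


def cpft_style_loss(logits, targets, tau, lam=0.1):
    # Standard CE plus the CP-based set-size penalty; lam trades accuracy
    # against set compactness and is a hypothetical knob, not a paper value.
    return F.cross_entropy(logits, targets) + lam * cp_set_size_proxy(logits, tau)


# Example usage: 64 sessions scored over a 1,000-item catalog, with a
# separate 256-example calibration split (random data for illustration).
logits = torch.randn(64, 1000, requires_grad=True)
targets = torch.randint(0, 1000, (64,))
with torch.no_grad():
    tau = calibrate_threshold(torch.randn(256, 1000), torch.randint(0, 1000, (256,)))
loss = cpft_style_loss(logits, targets, tau)
loss.backward()

Keeping the calibration step outside the gradient path mirrors the usual split-conformal recipe: the threshold tau is treated as a constant during fine-tuning and periodically refreshed, which keeps the extra cost per epoch small.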
@article{wang2025_2402.08976,
  title   = {Confidence-aware Fine-tuning of Sequential Recommendation Systems via Conformal Prediction},
  author  = {Chen Wang and Fangxin Wang and Ruocheng Guo and Yueqing Liang and Philip S. Yu},
  journal = {arXiv preprint arXiv:2402.08976},
  year    = {2025}
}