Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback

Abstract

This paper introduces an interactive continual learning paradigm in which AI models dynamically learn new skills from real-time human feedback while retaining prior knowledge. This paradigm addresses two major limitations of traditional continual learning: (1) reliance on static datasets with fixed labels, which we replace with dynamic model updates driven by streaming, real-time human-annotated data, and (2) the assumption of clean labels, which we relax by explicitly handling the noisy feedback common in real-world interactions. To tackle these problems, we propose RiCL, a Reinforced interactive Continual Learning framework leveraging Large Language Models (LLMs) to learn new skills effectively from dynamic feedback. RiCL incorporates three key components: a temporal consistency-aware purifier that automatically discerns clean from noisy samples in data streams; an interaction-aware direct preference optimization strategy that aligns model behavior with human intent by reconciling AI-generated and human-provided feedback; and a noise-resistant contrastive learning module that captures robust representations by exploiting inherent data relationships, thereby avoiding reliance on potentially unreliable labels. Extensive experiments on two benchmark datasets (FewRel and TACRED), contaminated with realistic noise patterns, demonstrate that RiCL substantially outperforms existing combinations of state-of-the-art online continual learning and noisy-label learning methods.
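The abstract does not specify how the temporal consistency-aware purifier operates; a common realization of this idea is to track, per sample, how often the model's prediction agrees with the provided label over successive encounters in the stream, and to flag persistently disagreeing samples as noisy. The following is a minimal illustrative sketch under that assumption; the class name, threshold, and agreement criterion are hypothetical, not the paper's actual algorithm.

```python
from collections import defaultdict

class TemporalConsistencyPurifier:
    """Illustrative sketch (not the paper's method): flag a streamed sample's
    label as noisy when the model's predictions disagree with it too often
    across repeated observations over time."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold     # minimum agreement rate to count as clean
        self.agree = defaultdict(int)  # sample_id -> times prediction matched label
        self.seen = defaultdict(int)   # sample_id -> times sample was observed

    def update(self, sample_id, predicted_label, given_label):
        """Record one observation of the sample from the data stream."""
        self.seen[sample_id] += 1
        if predicted_label == given_label:
            self.agree[sample_id] += 1

    def is_clean(self, sample_id):
        """Treat unseen samples as clean; otherwise require a high enough
        historical agreement rate between predictions and the given label."""
        if self.seen[sample_id] == 0:
            return True
        return self.agree[sample_id] / self.seen[sample_id] >= self.threshold
```

In a full system, the clean/noisy split produced by such a purifier would feed the downstream preference optimization and contrastive learning stages, which train primarily on samples judged clean.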

@article{yang2025_2505.09925,
  title={Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback},
  author={Yutao Yang and Jie Zhou and Junsong Li and Qianjun Pan and Bihao Zhan and Qin Chen and Xipeng Qiu and Liang He},
  journal={arXiv preprint arXiv:2505.09925},
  year={2025}
}