Few-shot Continual Relation Extraction is a crucial challenge for enabling AI systems to identify and adapt to evolving relationships in dynamic real-world domains. Traditional memory-based approaches often overfit to the limited stored samples and fail to reinforce old knowledge, and the scarcity of data in few-shot scenarios exacerbates these issues by hindering effective data augmentation in the latent space. In this paper, we propose a novel retrieval-based solution, starting with a large language model that generates descriptions for each relation. From these descriptions, we introduce a bi-encoder retrieval training paradigm to enrich both sample and class representation learning. Leveraging these enhanced representations, we design a retrieval-based prediction method in which each sample "retrieves" the best-fitting relation via a reciprocal rank fusion score that integrates both relation description vectors and class prototypes. Extensive experiments on multiple datasets demonstrate that our method significantly advances the state of the art, maintaining robust performance across sequential tasks and effectively addressing catastrophic forgetting.
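To make the retrieval-based prediction step concrete, the sketch below scores a query sample against both relation-description embeddings and class prototypes, then fuses the two resulting rankings with a standard reciprocal rank fusion formula. This is a minimal illustration under assumptions: the function names, the cosine-similarity ranking, and the constant k=60 (a conventional RRF default) are illustrative choices, not details taken from the paper.

import numpy as np

def cosine_sim(query, matrix):
    # Cosine similarity between one query vector and each row of `matrix`.
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

def rrf_predict(sample_emb, desc_embs, prototypes, k=60):
    # Hypothetical sketch of retrieval-style prediction: rank relations by
    # similarity to (1) relation-description vectors and (2) class prototypes,
    # then combine the two rankings with reciprocal rank fusion.
    desc_rank = np.argsort(-cosine_sim(sample_emb, desc_embs))    # best match first
    proto_rank = np.argsort(-cosine_sim(sample_emb, prototypes))  # best match first

    n_relations = desc_embs.shape[0]
    scores = np.zeros(n_relations)
    for rank_list in (desc_rank, proto_rank):
        for rank, rel_idx in enumerate(rank_list):
            scores[rel_idx] += 1.0 / (k + rank + 1)   # RRF contribution
    return int(np.argmax(scores))  # index of the best-fitting relation

In this sketch, each of the two retrieval views contributes a score that decays with the relation's rank in that view, so a relation ranked highly by both the description vectors and the prototypes receives the largest fused score.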
@article{thanh2025_2502.20596,
  title={Few-Shot, No Problem: Descriptive Continual Relation Extraction},
  author={Nguyen Xuan Thanh and Anh Duc Le and Quyen Tran and Thanh-Thien Le and Linh Ngo Van and Thien Huu Nguyen},
  journal={arXiv preprint arXiv:2502.20596},
  year={2025}
}