ResearchTrend.AI
PICLe: Pseudo-Annotations for In-Context Learning in Low-Resource Named Entity Detection

16 December 2024
Sepideh Mamooler
Syrielle Montariol
Alexander Mathis
Antoine Bosselut
Abstract

In-context learning (ICL) enables Large Language Models (LLMs) to perform tasks using few demonstrations, facilitating task adaptation when labeled examples are hard to obtain. However, ICL is sensitive to the choice of demonstrations, and it remains unclear which demonstration attributes enable in-context generalization. In this work, we conduct a perturbation study of in-context demonstrations for low-resource Named Entity Detection (NED). Our surprising finding is that in-context demonstrations with partially correct annotated entity mentions can be as effective for task transfer as fully correct demonstrations. Based on these findings, we propose Pseudo-annotated In-Context Learning (PICLe), a framework for in-context learning with noisy, pseudo-annotated demonstrations. PICLe leverages LLMs to annotate many demonstrations in a zero-shot first pass. We then cluster these synthetic demonstrations, sample specific sets of in-context demonstrations from each cluster, and predict entity mentions using each set independently. Finally, we use self-verification to select the final set of entity mentions. We evaluate PICLe on five biomedical NED datasets and show that, with zero human annotation, PICLe outperforms ICL in low-resource settings where limited gold examples can be used as in-context demonstrations.
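The pipeline described in the abstract — zero-shot pseudo-annotation, clustering, independent prediction per demonstration set, and self-verification — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `llm` callable, the round-robin stand-in for embedding-based clustering, and the majority-vote form of self-verification are all simplifying assumptions.

```python
import random
from collections import Counter

def zero_shot_annotate(llm, sentence):
    # Step 1: hypothetical zero-shot LLM call that returns a list of
    # entity mentions for one unlabeled sentence.
    return llm(f"List the entity mentions in: {sentence}")

def picle_predict(llm, unlabeled_pool, test_sentence,
                  n_clusters=3, k_demos=4, n_sets=5):
    # Pseudo-annotate the whole pool in a zero-shot first pass.
    demos = [(s, zero_shot_annotate(llm, s)) for s in unlabeled_pool]

    # Step 2: group the synthetic demonstrations. A round-robin split
    # stands in here for real embedding-based clustering.
    clusters = [demos[i::n_clusters] for i in range(n_clusters)]

    # Step 3: sample a demonstration set per round and predict
    # entity mentions with each set independently.
    predictions = []
    for _ in range(n_sets):
        demo_set = [random.choice(c) for c in clusters if c][:k_demos]
        prompt = "".join(f"Sentence: {s}\nEntities: {e}\n" for s, e in demo_set)
        predictions.append(llm(prompt + f"Sentence: {test_sentence}\nEntities:"))

    # Step 4: self-verification, simplified to a majority vote --
    # keep mentions predicted by more than half of the sets.
    counts = Counter(m for p in predictions for m in set(p))
    return [m for m, c in counts.items() if c > n_sets // 2]
```

Any callable that maps a prompt string to a list of mention strings can be plugged in as `llm`, so the sketch runs end-to-end with a stub model.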

@article{mamooler2025_2412.11923,
  title={PICLe: Pseudo-Annotations for In-Context Learning in Low-Resource Named Entity Detection},
  author={Sepideh Mamooler and Syrielle Montariol and Alexander Mathis and Antoine Bosselut},
  journal={arXiv preprint arXiv:2412.11923},
  year={2025}
}