ResearchTrend.AI
CrossICL: Cross-Task In-Context Learning via Unsupervised Demonstration Transfer

30 May 2025
Jinglong Gao
Xiao Ding
Lingxiao Zou
Bing Qin
Ting Liu
arXiv (abs) · PDF · HTML
Main: 8 pages · 11 figures · 6 tables · Bibliography: 3 pages · Appendix: 22 pages
Abstract

In-Context Learning (ICL) enhances the performance of large language models (LLMs) through demonstrations. However, obtaining such demonstrations relies primarily on manual effort, and in most real-world scenarios users are unwilling or unable to provide them. Inspired by how humans draw analogies across tasks, we explore a new ICL paradigm, CrossICL, which studies how to utilize existing source-task demonstrations for ICL on target tasks, thereby obtaining reliable guidance without any additional manual effort. To this end, we first design a two-stage alignment strategy that mitigates the interference caused by gaps across tasks, serving as the foundation for our experimental exploration. Building on it, we conduct a comprehensive study of CrossICL, covering 875 NLP tasks from the Super-NI benchmark and six types of LLMs, including GPT-4o. Experimental results demonstrate the effectiveness of CrossICL and provide valuable insights into questions such as the criteria for selecting cross-task demonstrations and the types of task-gap-induced interference in CrossICL.
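To make the CrossICL setting concrete, the sketch below illustrates the general idea of transferring demonstrations from a source task to a target task: rank a pool of existing source-task demonstrations by similarity to the target instruction, then prepend the top matches to the target query. This is a minimal, hypothetical illustration using bag-of-words cosine similarity as the selector; the paper's actual two-stage alignment strategy is not specified in the abstract, and all function names and the demo pool here are invented for illustration.

```python
from collections import Counter
import math


def cosine_bow(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_cross_task_demos(target_instruction: str, source_pool: list, k: int = 2) -> list:
    """Rank source-task demonstrations by instruction similarity; keep the top-k."""
    ranked = sorted(
        source_pool,
        key=lambda d: cosine_bow(target_instruction, d["instruction"]),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(target_instruction: str, target_input: str, demos: list) -> str:
    """Prepend the transferred demonstrations to the target-task query."""
    parts = [target_instruction]
    for d in demos:
        parts.append(f"Input: {d['input']}\nOutput: {d['output']}")
    parts.append(f"Input: {target_input}\nOutput:")
    return "\n\n".join(parts)


# Hypothetical pool of demonstrations from existing source tasks.
pool = [
    {"instruction": "Classify the sentiment of the review.",
     "input": "Great movie!", "output": "positive"},
    {"instruction": "Translate the sentence to French.",
     "input": "Hello.", "output": "Bonjour."},
    {"instruction": "Classify the sentiment of the tweet.",
     "input": "Worst day ever.", "output": "negative"},
]

target_instruction = "Classify the sentiment of this comment."
demos = select_cross_task_demos(target_instruction, pool, k=2)
prompt = build_prompt(target_instruction, "I loved it.", demos)
```

Under this similarity criterion, the two sentiment-classification demonstrations outrank the translation demonstration, so the assembled prompt carries only task-relevant guidance — the kind of unsupervised selection question the paper studies empirically.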

@article{gao2025_2505.24143,
  title={CrossICL: Cross-Task In-Context Learning via Unsupervised Demonstration Transfer},
  author={Jinglong Gao and Xiao Ding and Lingxiao Zou and Bing Qin and Ting Liu},
  journal={arXiv preprint arXiv:2505.24143},
  year={2025}
}