Deep Learning for Cross-Domain Few-Shot Visual Recognition: A Survey

15 March 2023
Huali Xu
Shuaifeng Zhi
Shuzhou Sun
Vishal M. Patel
Li Liu
ArXiv · PDF · HTML
Abstract

While deep learning excels in computer vision tasks with abundant labeled data, its performance diminishes significantly when labeled samples are scarce. To address this, few-shot learning (FSL) enables models to perform target tasks with very few labeled examples by leveraging prior knowledge from related tasks. However, traditional FSL assumes that the related and target tasks come from the same domain, a restrictive assumption in many real-world scenarios where domain differences are common. To overcome this limitation, cross-domain few-shot learning (CDFSL) has gained attention, as it allows source and target data to come from different domains and label spaces. This paper presents the first comprehensive review of CDFSL, a field that has received less attention than traditional FSL due to its unique challenges. We aim to provide both a position paper and a tutorial for researchers, covering key problems, existing methods, and future research directions. The review begins with a formal definition of CDFSL and its core challenges, followed by a systematic analysis of current approaches organized under a clear taxonomy. Finally, we discuss promising future directions in terms of problem setups, applications, and theoretical advances.
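The formal definition promised in the abstract is not reproduced on this page. As a rough gloss of the setup standard in the CDFSL literature (our notation, not quoted from the paper): a model learns from a source dataset drawn from distribution $P_s$ with label space $\mathcal{Y}_s$, then is evaluated on $N$-way $K$-shot episodes from a target dataset drawn from $P_t$ with label space $\mathcal{Y}_t$, where

$$P_s \neq P_t \quad \text{and} \quad \mathcal{Y}_s \cap \mathcal{Y}_t = \emptyset .$$

Each episode supplies a support set of $K$ labeled examples for each of $N$ target classes plus a query set to classify.

To make one such episode concrete, here is a minimal Python sketch of nearest-prototype classification (in the style of Prototypical Networks, a common few-shot baseline; the survey does not single out this specific method). The function name and the random toy embeddings are ours; in a real CDFSL evaluation the embeddings would come from an encoder pretrained on the source domain.

import numpy as np

def prototype_accuracy(support_x, support_y, query_x, query_y):
    """Score one N-way K-shot episode by nearest-prototype classification.

    support_x: (N*K, d) embeddings of the labeled target-domain support set
    support_y: (N*K,)   class indices in [0, N)
    query_x:   (Q, d)   embeddings of the unlabeled queries
    query_y:   (Q,)     ground-truth query labels, used only for scoring
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of its support embeddings.
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Assign each query to the nearest prototype by Euclidean distance.
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    preds = classes[np.argmin(dists, axis=1)]
    return (preds == query_y).mean()

# Toy 5-way 1-shot episode with random 64-d "embeddings" (illustration only).
rng = np.random.default_rng(0)
N, K, Q, d = 5, 1, 15, 64
sx = rng.normal(size=(N * K, d)); sy = np.repeat(np.arange(N), K)
qx = rng.normal(size=(Q, d));     qy = rng.integers(0, N, size=Q)
print(f"episode accuracy: {prototype_accuracy(sx, sy, qx, qy):.2f}")

Averaging this accuracy over many sampled episodes is how few-shot results are conventionally reported; the cross-domain twist lies only in where the encoder was trained versus where the episodes are drawn.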

@article{xu2025_2303.08557,
  title={Deep Learning for Cross-Domain Few-Shot Visual Recognition: A Survey},
  author={Huali Xu and Shuaifeng Zhi and Shuzhou Sun and Vishal M. Patel and Li Liu},
  journal={arXiv preprint arXiv:2303.08557},
  year={2025}
}