
Towards Robust Few-Shot Text Classification Using Transformer Architectures and Dual Loss Strategies

Abstract

Few-shot text classification has important application value in low-resource environments. This paper proposes a strategy that combines adaptive fine-tuning, contrastive learning, and regularization optimization to improve the classification performance of Transformer-based models. Experiments on the FewRel 2.0 dataset show that T5-small, DeBERTa-v3, and RoBERTa-base perform well on few-shot tasks, especially in the 5-shot setting, where they capture text features more effectively and improve classification accuracy. The experiments also reveal significant differences in classification difficulty across relation categories: some categories have fuzzy semantic boundaries or complex feature distributions, making it difficult for the standard cross-entropy loss alone to learn the discriminative information needed to separate them. Introducing a contrastive loss and a regularization loss enhances the model's generalization ability and effectively alleviates overfitting in few-shot environments. In addition, the results show that Transformer models with stronger self-attention mechanisms, or generative architectures, help improve the stability and accuracy of few-shot classification.
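As a rough illustration of the dual loss idea described in the abstract, the sketch below combines a standard cross-entropy term with a supervised contrastive term and an explicit L2 regularization penalty over model parameters. The weighting coefficients, temperature, and the shape of the `embeddings` input are assumptions for illustration only, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dual_loss(logits, embeddings, labels, model,
              alpha=0.5, beta=1e-4, temperature=0.1):
    """Hypothetical dual-loss objective: cross-entropy + supervised
    contrastive loss + L2 regularization (weights are illustrative)."""
    # Standard cross-entropy on the classifier logits.
    ce = F.cross_entropy(logits, labels)

    # Supervised contrastive term: pull same-class embeddings together,
    # push different-class embeddings apart.
    z = F.normalize(embeddings, dim=1)                 # (batch, dim)
    sim = z @ z.t() / temperature                      # pairwise similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class pairs
    mask.fill_diagonal_(False)                         # exclude self-pairs
    logits_mask = ~torch.eye(len(labels), dtype=torch.bool, device=z.device)
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_per_row = mask.sum(dim=1).clamp(min=1)
    contrastive = (-(log_prob * mask).sum(dim=1) / pos_per_row).mean()

    # Simple L2 regularization over model parameters.
    l2 = sum(p.pow(2).sum() for p in model.parameters())

    return ce + alpha * contrastive + beta * l2
```

In this sketch the contrastive term supplies the discriminative signal that plain cross-entropy struggles to learn for semantically fuzzy categories, while the L2 term limits overfitting to the handful of labeled examples.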

@article{han2025_2505.06145,
  title={Towards Robust Few-Shot Text Classification Using Transformer Architectures and Dual Loss Strategies},
  author={Xu Han and Yumeng Sun and Weiqiang Huang and Hongye Zheng and Junliang Du},
  journal={arXiv preprint arXiv:2505.06145},
  year={2025}
}