Exploiting Ensemble Learning for Cross-View Isolated Sign Language Recognition

4 February 2025
Fei Wang
Kun Li
Yiqi Nie
Zhangling Duan
Peng Zou
Zhiliang Wu
Yuwei Wang
Yanyan Wei
    SLR
Abstract

In this paper, we present our solution to the Cross-View Isolated Sign Language Recognition (CV-ISLR) challenge held at WWW 2025. CV-ISLR addresses a critical issue in traditional Isolated Sign Language Recognition (ISLR): existing datasets predominantly capture sign language videos from a frontal perspective, while real-world camera angles often vary. To accurately recognize sign language from different viewpoints, models must be able to understand gestures from multiple angles, which makes cross-view recognition challenging. To address this, we explore the advantages of ensemble learning, which enhances model robustness and generalization across diverse views. Our approach, built on a multi-dimensional Video Swin Transformer model, leverages this ensemble strategy to achieve competitive performance. Our solution ranked 3rd in both the RGB-based and RGB-D-based ISLR tracks, demonstrating its effectiveness in handling the challenges of cross-view recognition. The code is available at: this https URL.
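
The core idea described above is score-level ensembling of video classifiers. Below is a minimal Python sketch of that idea, assuming each backbone (for example, a Video Swin Transformer variant trained with a different configuration or input modality) maps a video clip to class logits; the class name, tensor shapes, and uniform averaging used here are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class ScoreLevelEnsemble(nn.Module):
    """Average class probabilities from several video backbones.

    Each backbone is assumed to map a clip tensor of shape
    (batch, channels, frames, height, width) to class logits of shape
    (batch, num_classes). Hypothetical setup for illustration only.
    """

    def __init__(self, backbones):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)

    @torch.no_grad()
    def forward(self, clip):
        # Softmax each model's logits, then average the probabilities.
        probs = [m(clip).softmax(dim=-1) for m in self.backbones]
        return torch.stack(probs, dim=0).mean(dim=0)

if __name__ == "__main__":
    num_classes = 100

    def make_backbone():
        # Stand-in classifier; in the paper's setting this would be a full
        # video backbone such as a Video Swin Transformer (assumption).
        return nn.Sequential(nn.Flatten(), nn.LazyLinear(num_classes))

    ensemble = ScoreLevelEnsemble([make_backbone() for _ in range(3)])
    clip = torch.randn(2, 3, 16, 112, 112)  # (batch, C, T, H, W)
    preds = ensemble(clip).argmax(dim=-1)
    print(preds.shape)  # torch.Size([2])

In practice, per-model weights or modality-specific backbones (e.g., separate RGB and RGB-D streams) could replace the uniform average used in this sketch.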

View on arXiv
@article{wang2025_2502.02196,
  title={Exploiting Ensemble Learning for Cross-View Isolated Sign Language Recognition},
  author={Fei Wang and Kun Li and Yiqi Nie and Zhangling Duan and Peng Zou and Zhiliang Wu and Yuwei Wang and Yanyan Wei},
  journal={arXiv preprint arXiv:2502.02196},
  year={2025}
}