SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization

10 September 2024
Kohei Tsuji, Tatsuya Hiraoka, Yuchang Cheng, Tomoya Iwakura
Abstract

NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have attempted to develop methods that automatically reduce the adverse effect of such errors. However, existing methods are time-consuming because they require many trained models to detect errors. This paper proposes a time-saving method that uses a tokenization technique called subword regularization to simulate multiple error detection models. Our proposed method, SubRegWeigh, can perform annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance in document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identified them as annotation errors. Our code is available at this https URL.
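The core idea can be sketched as follows. This is a minimal illustration, not the authors' released implementation: it assumes a trained SentencePiece model file (spm.model) and a hypothetical classify function standing in for the single trained task model. Subword regularization samples different tokenizations of the same sentence, so one model behaves like an ensemble of error detectors, and the agreement rate between its predictions and the gold label serves as the annotation weight.

# Sketch of weighting an annotation via subword regularization (assumed setup).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm.model")  # assumed trained model

def annotation_weight(text, gold_label, classify, k=5):
    """Return the fraction of k sampled tokenizations whose prediction
    matches the gold label; a low value flags a likely annotation error."""
    agreements = 0
    for _ in range(k):
        # enable_sampling=True draws a tokenization from the unigram LM lattice
        tokens = sp.encode(text, out_type=str, enable_sampling=True,
                           alpha=0.1, nbest_size=-1)
        if classify(tokens) == gold_label:  # classify is a placeholder task model
            agreements += 1
    return agreements / k  # used as a per-example weight when retraining

Because only a single trained model is needed, the tokenization sampling replaces the expensive ensemble of error detection models that existing methods require.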

@article{tsuji2025_2409.06216,
  title={SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization},
  author={Kohei Tsuji and Tatsuya Hiraoka and Yuchang Cheng and Tomoya Iwakura},
  journal={arXiv preprint arXiv:2409.06216},
  year={2025}
}