Positional Attention for Efficient BERT-Based Named Entity Recognition

3 May 2025
Mo Sun
Siheng Xiong
Yuankai Cai
Bowen Zuo
Abstract

This paper presents a framework for Named Entity Recognition (NER) that leverages the Bidirectional Encoder Representations from Transformers (BERT) model. NER is a fundamental task in natural language processing (NLP) with broad applicability across downstream applications. While BERT has established itself as a state-of-the-art model for entity recognition, fine-tuning it anew for each application is computationally expensive and time-consuming. To address this, we propose a cost-efficient approach that integrates positional attention mechanisms into the entity recognition process and enables effective customization using pre-trained parameters. The framework is evaluated on a Kaggle dataset derived from the Groningen Meaning Bank corpus and achieves strong performance with fewer training epochs. This work offers a practical solution for reducing the training cost of BERT-based NER systems while maintaining high accuracy.
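To make the idea concrete, the sketch below shows one plausible reading of the approach: a pre-trained BERT encoder is frozen so that its parameters are reused rather than fine-tuned, and only a lightweight positional-attention layer plus a token-classification head are trained for the NER task. This is a minimal sketch under stated assumptions (learned positional queries attending over frozen BERT features via standard multi-head attention); the paper's actual mechanism, hyperparameters, and label set may differ.

# Minimal sketch: frozen BERT + trainable positional attention for NER.
# The positional-attention design here is an assumption, not the paper's
# confirmed architecture.
import torch
import torch.nn as nn
from transformers import BertModel

class PositionalAttentionNER(nn.Module):
    def __init__(self, num_labels: int, max_len: int = 512,
                 model_name: str = "bert-base-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        # Reuse the pre-trained parameters; freezing them is what keeps
        # per-task training cheap.
        for p in self.bert.parameters():
            p.requires_grad = False
        hidden = self.bert.config.hidden_size
        # Learned positional embeddings used as attention queries (assumed design).
        self.pos_embed = nn.Embedding(max_len, hidden)
        self.pos_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual features from the frozen encoder.
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        q = self.pos_embed(positions).unsqueeze(0).expand(h.size(0), -1, -1)
        # Position-aware attention over BERT features; padding tokens are masked out.
        attn_out, _ = self.pos_attn(q, h, h,
                                    key_padding_mask=~attention_mask.bool())
        return self.classifier(attn_out)  # (batch, seq_len, num_labels)

# Usage: logits = PositionalAttentionNER(num_labels=17)(input_ids, attention_mask)
# (17 is a plausible tag count for the GMB-derived Kaggle NER dataset.)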

@article{sun2025_2505.01868,
  title={Positional Attention for Efficient BERT-Based Named Entity Recognition},
  author={Mo Sun and Siheng Xiong and Yuankai Cai and Bowen Zuo},
  journal={arXiv preprint arXiv:2505.01868},
  year={2025}
}