On Cross-Domain Pre-Trained Language Models for Clinical Text Mining: How Do They Perform on Data-Constrained Fine-Tuning?

BigData Congress [Services Society] (BSS), 2022
Abstract

Fine-tuning Large Language Models (LLMs) pre-trained on general- or related-domain data for a specific domain and task, using the limited resources available in the new task, has been a popular practice in NLP. In this work, we revisit this assumption and carry out an investigation in clinical NLP, specifically named-entity recognition of Drugs and their related Attributes. We compare Transformer models trained from scratch with fine-tuned BERT-based LLMs, including BERT-base, BioBERT, and ClinicalBERT. We also compare these models with their extensions that add a CRF layer for continuous learning. We use the n2c2-2018 shared-task data for model development and evaluation. The experimental outcomes show that 1) the CRF layer makes a difference for all neural models; 2) on BIO-strict span-level evaluation using macro-average F1, the fine-tuned LLMs achieved scores of 0.83+, while the TransformerCRF model trained from scratch achieved 0.78+, a comparable performance at much lower cost, e.g. 39.80\% fewer training parameters; 3) on BIO-strict span-level evaluation using weighted-average F1, the score gaps are even smaller (97.59\%, 97.44\%, and 96.84\% for ClinicalBERT-CRF, BERT-CRF, and TransformerCRF, respectively); and 4) efficient training with down-sampling for a better data distribution (SamBD) further reduced the data needed for model learning while producing similar outcomes, around 0.02 points lower than training on the full set. Our models, including source code, will be hosted at \url{https://github.com/HECTA-UoM/TransformerCRF}
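The abstract evaluates models with BIO-strict span-level F1, where a predicted entity counts as correct only if its label and both boundaries exactly match a gold span. A minimal illustrative sketch of that matching criterion is below; this is a generic reconstruction for clarity, not the authors' evaluation script, and the `Drug`/`Dose` labels are hypothetical examples in the spirit of the n2c2-2018 entity types.

```python
def bio_spans(tags):
    """Extract (label, start, end) spans from a BIO tag sequence.

    `end` is exclusive. A stray I-X without a preceding B-X is treated
    as starting a new span (one common convention; schemes vary).
    """
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("I-") and label == tag[2:]:
            continue  # span continues
        if start is not None:
            spans.append((label, start, i))  # close the open span
        if tag.startswith(("B-", "I-")):
            start, label = i, tag[2:]        # open a new span
        else:  # "O"
            start, label = None, None
    if start is not None:
        spans.append((label, start, len(tags)))
    return spans

def strict_span_f1(gold_tags, pred_tags):
    """Strict span-level F1: a span is a true positive only when
    label, start, and end all match a gold span exactly."""
    gold, pred = set(bio_spans(gold_tags)), set(bio_spans(pred_tags))
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Example: the second predicted span has correct boundaries but the
# wrong label, so it does not count under strict matching.
gold = ["B-Drug", "I-Drug", "O", "B-Dose"]
pred = ["B-Drug", "I-Drug", "O", "B-Drug"]
print(strict_span_f1(gold, pred))  # → 0.5
```

The reported macro-average F1 averages this score over entity classes equally, whereas the weighted average weights each class by its support, which explains why the weighted-average gaps between models are smaller on class-imbalanced clinical data.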
