BERT for Long Documents: A Case Study of Automated ICD Coding

International Workshop on Health Text Mining and Information Analysis (LOUHI), 2022
Abstract

Transformer models have achieved great success across many NLP problems. However, previous studies in automated ICD coding concluded that these models fail to outperform some earlier solutions, such as CNN-based models. In this paper, we challenge this conclusion. We present a simple and scalable method for processing long text with existing transformer models such as BERT. We show that this method significantly improves on the results previously reported for transformer models in ICD coding, and outperforms one of the prominent CNN-based methods.
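The abstract does not spell out the mechanism, but a standard way to run BERT over documents longer than its 512-token limit is to split the text into chunks, encode each chunk separately, and aggregate the chunk representations before a multi-label classifier. The sketch below illustrates that general chunk-and-aggregate idea only; the model name, label count, and max-pooling aggregation are assumptions for illustration, not the paper's specific method.

```python
# Minimal chunk-and-pool sketch for long-document classification with BERT.
# Illustrative only: model choice, label count, and pooling are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # placeholder; a clinical BERT variant would be typical
CHUNK_LEN = 512                   # BERT's maximum input length
NUM_LABELS = 50                   # hypothetical ICD label count

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
classifier = torch.nn.Linear(encoder.config.hidden_size, NUM_LABELS)

def encode_long_document(text: str) -> torch.Tensor:
    """Map a document of arbitrary length to per-label logits."""
    # Tokenize without truncation, then split into chunks that leave
    # room for the [CLS] and [SEP] special tokens.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i : i + CHUNK_LEN - 2] for i in range(0, len(ids), CHUNK_LEN - 2)]

    cls_vectors = []
    for chunk in chunks:
        chunk_ids = torch.tensor(
            [[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]]
        )
        with torch.no_grad():
            out = encoder(input_ids=chunk_ids)
        cls_vectors.append(out.last_hidden_state[:, 0])  # [CLS] embedding per chunk

    # Max-pool chunk embeddings into one document vector (one simple choice;
    # mean pooling or attention over chunks are common alternatives).
    doc_vector = torch.stack(cls_vectors, dim=0).max(dim=0).values
    return classifier(doc_vector)  # apply sigmoid for multi-label probabilities

logits = encode_long_document("Patient admitted with chest pain ..." * 200)
print(logits.shape)  # torch.Size([1, NUM_LABELS])
```

This design keeps memory linear in document length, since each chunk is encoded independently, which is what makes the approach scalable to long clinical notes.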
