Active Learning with Siamese Twins for Sequence Tagging
Deep learning in general, and natural language processing methods in particular, rely heavily on annotated samples to achieve good performance. However, manually annotating data is expensive and time consuming. Active Learning (AL) strategies reduce the need for huge volumes of labelled data by iteratively selecting a small number of examples for manual annotation based on their estimated utility in training the given model. In this paper, we argue that since AL strategies choose examples independently, they may potentially select similar examples, all of which may not aid the learning process. We propose a method, referred to as Active² Learning (A²L), that actively adapts to the sequence tagging model being trained, to further eliminate such redundant examples chosen by an AL strategy. We empirically demonstrate that A²L improves the performance of state-of-the-art AL strategies on different sequence tagging tasks. Furthermore, we show that A²L is widely applicable by using it in conjunction with different AL strategies and sequence tagging models. We demonstrate that the proposed A²L is able to reach the full-data F-score with less data than state-of-the-art AL strategies on different sequence tagging datasets.
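To make the redundancy argument concrete, here is a minimal sketch of the filtering idea: given a batch of examples chosen independently by an AL strategy, drop those that are too similar to examples already kept. This is my own illustration under stated assumptions, not the paper's method — the paper learns similarity with a Siamese network adapted to the tagging model, whereas this sketch uses plain cosine similarity on toy embeddings with an assumed threshold.

```python
import numpy as np


def deduplicate_batch(embeddings, threshold=0.9):
    """Greedily keep examples whose cosine similarity to every
    already-kept example is below `threshold`.

    `embeddings` is an (n, d) array of example representations;
    both the representation and the threshold are illustrative
    assumptions, not values from the paper.
    Returns the indices of the redundancy-reduced subset.
    """
    kept = []
    for i, e in enumerate(embeddings):
        e = e / np.linalg.norm(e)  # unit-normalize for cosine similarity
        if all(
            float(e @ (embeddings[j] / np.linalg.norm(embeddings[j]))) < threshold
            for j in kept
        ):
            kept.append(i)
    return kept


# Toy batch: the first two vectors are near-duplicates, so only one survives.
batch = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
print(deduplicate_batch(batch))  # → [0, 2]
```

The greedy scan keeps the first of any group of mutual near-duplicates, so the annotation budget is spent on mutually dissimilar examples; a learned similarity (as in the paper) would replace the cosine score here.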