Enhancing BERTopic with Intermediate Layer Representations

BERTopic is a topic modeling algorithm that leverages transformer-based embeddings to form dense clusters, enabling the estimation of topic structures and allowing users to efficiently extract meaningful insights from large corpora of documents. While BERTopic is a powerful tool, embedding preparation can vary, including extracting representations from intermediate model layers and applying transformations to these embeddings. In this study, we evaluate 18 embedding representations and present findings from experiments conducted on three diverse datasets. To assess the algorithm's performance, we report topic coherence and topic diversity metrics for all experiments. Our results demonstrate that, for each dataset, there is an embedding configuration that outperforms BERTopic's default setting. Additionally, we investigate the influence of stop words on the different embedding configurations.
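The abstract does not specify how the intermediate-layer representations are prepared, but a common way to obtain such document embeddings is to mean-pool token vectors from a chosen hidden layer (rather than only the final one) before passing them to BERTopic, which accepts precomputed embeddings. The sketch below simulates the per-layer hidden states that a Hugging Face model returns with `output_hidden_states=True`; the layer index and array shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def pool_layer(hidden_states, layer, attention_mask):
    """Mean-pool token vectors from one intermediate layer.

    hidden_states: list of (batch, seq_len, dim) arrays, one per layer,
    in the same layout Hugging Face models return with
    output_hidden_states=True. Padding tokens are excluded via the mask.
    """
    layer_out = hidden_states[layer]                # (batch, seq, dim)
    mask = attention_mask[..., None].astype(float)  # (batch, seq, 1)
    summed = (layer_out * mask).sum(axis=1)         # sum over real tokens
    counts = mask.sum(axis=1)                       # tokens per document
    return summed / counts                          # (batch, dim)

# Simulated hidden states: 13 "layers" (embedding layer + 12 blocks),
# a batch of 2 documents, 5 tokens each, 8-dimensional vectors.
rng = np.random.default_rng(0)
hidden = [rng.normal(size=(2, 5, 8)) for _ in range(13)]
mask = np.array([[1, 1, 1, 0, 0],   # doc 0: 3 real tokens, 2 padding
                 [1, 1, 1, 1, 1]])  # doc 1: 5 real tokens

# Layer 9 is an arbitrary illustrative choice of intermediate layer.
doc_embeddings = pool_layer(hidden, layer=9, attention_mask=mask)
print(doc_embeddings.shape)  # (2, 8)
```

The resulting matrix can be supplied directly to BERTopic, e.g. `BERTopic().fit_transform(docs, embeddings=doc_embeddings)`, so swapping embedding configurations only requires recomputing this pooling step.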
@article{koterwa2025_2505.06696,
  title   = {Enhancing BERTopic with Intermediate Layer Representations},
  author  = {Dominik Koterwa and Maciej Świtała},
  journal = {arXiv preprint arXiv:2505.06696},
  year    = {2025}
}