SMOTExT: SMOTE meets Large Language Models

Data scarcity and class imbalance are persistent challenges in training robust NLP models, especially in specialized domains or low-resource settings. We propose a novel technique, SMOTExT, that adapts the idea of the Synthetic Minority Over-sampling Technique (SMOTE) to textual data. Our method generates new synthetic examples by interpolating between BERT-based embeddings of two existing examples and then decoding the resulting latent point into text with the xRAG architecture. By leveraging xRAG's cross-modal retrieval-generation framework, we can effectively turn interpolated vectors into coherent text. While this is preliminary work supported by qualitative outputs only, the method shows strong potential for knowledge distillation and data augmentation in few-shot settings. Notably, our approach also shows promise for privacy-preserving machine learning: in early experiments, models trained solely on generated data achieved performance comparable to models trained on the original dataset. This suggests a viable path toward safe and effective learning under data protection constraints.
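To make the interpolation step concrete, the sketch below shows a SMOTE-style convex combination of two BERT sentence embeddings. It is a minimal illustration under assumptions not stated in the abstract (mean pooling over `bert-base-uncased` hidden states, a uniformly sampled interpolation weight); the xRAG decoding stage, which maps the interpolated vector back into text, is only indicated by a comment since its projector and generator are not reproduced here.

```python
# Minimal sketch of the SMOTE-style interpolation in embedding space.
# Assumptions: mean-pooled bert-base-uncased embeddings and a uniform
# interpolation weight; the actual paper's pooling choice is not specified.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of a single text (pooling is an assumption)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state        # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)            # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (1, 768)

def smote_interpolate(text_a: str, text_b: str, lam: float = None) -> torch.Tensor:
    """SMOTE-style point on the line segment between two example embeddings."""
    if lam is None:
        lam = torch.rand(1).item()  # random gap along the segment, as in SMOTE
    z_a, z_b = embed(text_a), embed(text_b)
    return z_a + lam * (z_b - z_a)

# The interpolated vector would then be passed to the xRAG projector/generator
# to decode a synthetic sentence (decoding stage not reproduced here).
z_new = smote_interpolate(
    "The patient reported mild chest pain.",
    "The patient complained of shortness of breath.",
)
print(z_new.shape)  # torch.Size([1, 768])
```

In standard SMOTE the two endpoints are nearest neighbors from the same minority class; the same pairing strategy carries over here, with the embedding space taking the place of the original feature space.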
@article{bystroński2025_2505.13434,
  title={SMOTExT: SMOTE meets Large Language Models},
  author={Mateusz Bystroński and Mikołaj Hołysz and Grzegorz Piotrowski and Nitesh V. Chawla and Tomasz Kajdanowicz},
  journal={arXiv preprint arXiv:2505.13434},
  year={2025}
}