Explainable AI: XAI-Guided Context-Aware Data Augmentation

9 pages (main), 3 pages (bibliography), 1 page (appendix); 3 figures, 6 tables
Abstract

Explainable AI (XAI) has emerged as a powerful tool for improving the performance of AI models, going beyond providing model transparency and interpretability. The scarcity of labeled data remains a fundamental challenge in developing robust and generalizable AI models, particularly for low-resource languages. Conventional data augmentation techniques introduce noise, cause semantic drift, disrupt contextual coherence, lack control, and lead to overfitting. To address these challenges, we propose XAI-Guided Context-Aware Data Augmentation, a novel framework that leverages XAI techniques to modify less critical features while selectively preserving the most task-relevant ones. Our approach integrates an iterative feedback loop that refines the augmented data over multiple augmentation cycles based on explainability-driven insights and model performance gains. Our experimental results demonstrate that XAI-SR-BT and XAI-PR-BT improve model accuracy on hate speech and sentiment analysis tasks by 6.6% and 8.1%, respectively, over the baseline on the Amharic dataset with the XLM-R model, and that they outperform existing augmentation techniques by 4.8% and 5.0%, respectively, on the same dataset and model. Overall, XAI-SR-BT and XAI-PR-BT consistently outperform both the baseline and conventional augmentation techniques across all tasks and models. This study provides a more controlled, interpretable, and context-aware approach to data augmentation, addressing critical limitations of existing techniques and offering a new paradigm for leveraging XAI to enhance AI model training.
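
To make the core idea concrete, the sketch below illustrates the attribution-guided replacement step as the abstract describes it: per-token attribution scores identify the most task-relevant tokens, which are frozen, while low-importance tokens become candidates for substitution. This is a minimal illustration, not the authors' implementation; the attribution source, the get_synonym helper, and all thresholds are assumptions, and the paper's actual XAI-SR-BT and XAI-PR-BT variants combine this kind of selection with back-translation inside an iterative feedback loop.

import random

# Placeholder synonym lookup; in the paper, substitutes would come from a
# richer source (e.g., synonym replacement paired with back-translation).
SYNONYMS = {"movie": "film", "great": "excellent", "bad": "poor"}

def get_synonym(token):
    return SYNONYMS.get(token, token)

def xai_guided_replacement(tokens, scores, keep_ratio=0.5, p_replace=0.5):
    """Replace low-attribution tokens while freezing the most task-relevant ones.

    tokens: list of word tokens; scores: per-token attribution values
    (e.g., from Integrated Gradients or another XAI method).
    """
    n_keep = max(1, int(keep_ratio * len(tokens)))
    # Indices of the highest-attribution tokens are protected from edits.
    protected = set(sorted(range(len(tokens)), key=lambda i: scores[i],
                           reverse=True)[:n_keep])
    return [get_synonym(t) if i not in protected and random.random() < p_replace
            else t
            for i, t in enumerate(tokens)]

# Example: the high-attribution word "great" is preserved; low-importance
# tokens such as "movie" may be swapped for a synonym.
print(xai_guided_replacement(
    ["the", "movie", "was", "great"], [0.05, 0.2, 0.05, 0.9], keep_ratio=0.25))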

@article{mersha2025_2506.03484,
  title={Explainable AI: XAI-Guided Context-Aware Data Augmentation},
  author={Melkamu Abay Mersha and Mesay Gemeda Yigezu and Atnafu Lambebo Tonja and Hassan Shakil and Samer Iskander and Olga Kolesnikova and Jugal Kalita},
  journal={arXiv preprint arXiv:2506.03484},
  year={2025}
}