Unpacking Robustness in Inflectional Languages: Adversarial Evaluation and Mechanistic Insights

Various techniques are used to generate adversarial examples. Methods such as TextBugger introduce minor, barely visible perturbations to words that change model behaviour. Another class of techniques substitutes words with synonyms so that the text's meaning is preserved while its predicted class changes; TextFooler is a prominent example of such attacks. Most adversarial example generation methods are developed and evaluated primarily on non-inflectional languages, typically English. In this work, we evaluate and explain how adversarial attacks perform in inflectional languages. To explain the impact of inflection on model behaviour and robustness under attack, we design a novel protocol inspired by mechanistic interpretability and based on the Edge Attribution Patching (EAP) method. The proposed evaluation protocol relies on parallel task-specific corpora that include both inflected and syncretic variants of texts in two languages: Polish and English. To analyse the models and explain the relationship between inflection and adversarial robustness, we create a new benchmark based on the task-oriented MultiEmo dataset, enabling the identification of inflection-related elements of circuits within the model and the analysis of their behaviour under attack.
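
To make the synonym-substitution idea concrete, below is a minimal, self-contained Python sketch of a TextFooler-style attack. Everything here is a toy stand-in: the keyword-count classifier and the SYNONYMS table are hypothetical, and the greedy loop omits the word-importance ranking and semantic-similarity filtering that the real TextFooler uses.

    # Hypothetical synonym table; real attacks draw candidates from
    # counter-fitted word embeddings rather than a hand-written dict.
    SYNONYMS = {
        "great": ["fine", "decent"],
        "terrible": ["poor", "weak"],
    }

    POSITIVE = {"great", "good"}
    NEGATIVE = {"terrible", "bad"}

    def toy_classifier(tokens):
        """Toy victim model: +1 (positive) or -1 (negative) from keyword counts."""
        score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
        return 1 if score > 0 else -1

    def attack(text):
        """Greedily try synonym substitutions until the predicted class flips."""
        tokens = text.lower().split()
        original = toy_classifier(tokens)
        for i, tok in enumerate(tokens):
            for syn in SYNONYMS.get(tok, []):
                candidate = tokens[:i] + [syn] + tokens[i + 1:]
                if toy_classifier(candidate) != original:
                    return " ".join(candidate)
        return None  # no successful perturbation found

    print(attack("the movie was great"))  # -> "the movie was fine"

Even in this toy setting, replacing "great" with "fine" preserves the sentence's meaning for a human reader while flipping the model's prediction, which is exactly the failure mode such attacks exploit.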
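
The abstract does not spell out how EAP works, so the following is a minimal sketch of its first-order approximation, assuming a toy model with a single upstream component u feeding a downstream component v over one edge. The weight matrices and inputs are hypothetical; actual EAP scores every edge of a transformer's computational graph this way, from one clean forward/backward pass plus one corrupted forward pass.

    import torch

    torch.manual_seed(0)
    W_u = torch.randn(8, 8)   # hypothetical upstream component u
    W_v = torch.randn(8, 1)   # hypothetical downstream component v

    def forward(x):
        z_u = torch.tanh(x @ W_u)       # activation u writes onto the edge
        return z_u, (z_u @ W_v).sum()   # v reads the edge; scalar loss

    x_clean = torch.randn(1, 8, requires_grad=True)
    x_corrupt = torch.randn(1, 8)

    # One clean pass: keep the gradient of the loss w.r.t. the edge activation.
    z_u_clean, loss = forward(x_clean)
    grad_edge = torch.autograd.grad(loss, z_u_clean)[0]

    with torch.no_grad():
        # One corrupted pass, then the first-order EAP attribution for u -> v:
        # (corrupt activation - clean activation) . gradient on the clean run,
        # i.e. an estimate of the loss change if this edge were patched.
        z_u_corrupt, _ = forward(x_corrupt)
        score = ((z_u_corrupt - z_u_clean) * grad_edge).sum().item()

    print(f"EAP attribution for edge u->v: {score:+.4f}")

Edges with large attribution scores are candidates for the circuit driving the behaviour under study; this is the kind of signal the proposed protocol relies on to identify inflection-related circuit elements and track them under attack.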
@article{walkowiak2025_2505.07856,
  title={Unpacking Robustness in Inflectional Languages: Adversarial Evaluation and Mechanistic Insights},
  author={Paweł Walkowiak and Marek Klonowski and Marcin Oleksy and Arkadiusz Janz},
  journal={arXiv preprint arXiv:2505.07856},
  year={2025}
}