Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study

3 April 2025
Aryan Agrawal
Lisa Alazraki
Shahin Honarvar
Marek Rei
Abstract

Large Language Models (LLMs) are highly vulnerable to input perturbations, as even a small prompt change may result in a substantially different output. Existing methods to enhance LLM robustness are primarily focused on perturbed data samples, whereas improving resiliency to perturbations of task-level instructions has remained relatively underexplored. In this work, we focus on character- and word-level edits of task-specific instructions, which substantially degrade downstream performance. We experiment with a variety of techniques to enhance the robustness of LLMs, including self-denoising and representation alignment, testing different models (Llama 3 and Flan-T5), datasets (CoLA, QNLI, SST-2) and instructions (both task-oriented and role-oriented). We find that, on average, self-denoising -- whether performed by a frozen LLM or a fine-tuned model -- achieves substantially higher performance gains than alternative strategies, including more complex baselines such as ensembling and supervised methods.
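
To make the two ideas in the abstract concrete, below is a minimal illustrative sketch in Python, not the authors' released code: character- and word-level edits applied to a task instruction, and a self-denoising step in which the LLM is asked to reconstruct a clean instruction before answering. The specific edit operations, prompt wording, and the generate callable are assumptions made for illustration only.

import random

def char_perturb(text: str, n_edits: int = 3, seed: int = 0) -> str:
    """Character-level edit: randomly delete a few characters from the instruction."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(min(n_edits, len(chars) - 1)):
        chars.pop(rng.randrange(len(chars)))
    return "".join(chars)

def word_perturb(text: str, n_swaps: int = 1, seed: int = 0) -> str:
    """Word-level edit: swap a pair of adjacent words in the instruction."""
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_swaps):
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def self_denoise(perturbed_instruction: str, generate) -> str:
    """Self-denoising sketch: ask the (frozen or fine-tuned) LLM to rewrite the
    corrupted instruction before it is used for the downstream task.
    `generate` is a placeholder for whatever text-generation call is available."""
    denoise_prompt = (
        "The following task instruction contains typos and word-order errors. "
        "Rewrite it as a clean, correct instruction and output only the rewrite.\n\n"
        f"Instruction: {perturbed_instruction}"
    )
    return generate(denoise_prompt)

if __name__ == "__main__":
    instruction = "Classify the sentiment of the sentence as positive or negative."
    noisy = word_perturb(char_perturb(instruction))
    print(noisy)
    # cleaned = self_denoise(noisy, generate=my_llm_call)  # plug in an actual model call

In this sketch the downstream task prompt would then be built from the denoised instruction rather than the perturbed one; how the denoiser is instantiated (frozen LLM vs. fine-tuned model) is the comparison the paper reports on.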

@article{agrawal2025_2504.02733,
  title={Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study},
  author={Aryan Agrawal and Lisa Alazraki and Shahin Honarvar and Marek Rei},
  journal={arXiv preprint arXiv:2504.02733},
  year={2025}
}