Evaluating the Instruction-following Abilities of Language Models using Knowledge Tasks

16 October 2024
Rudra Murthy, Prince Kumar, Praveen Venkateswaran, Danish Contractor
Communities: KELM, ALM, ELM
Abstract

LLM evaluation benchmarks have traditionally separated the testing of knowledge/reasoning capabilities from instruction following. In this work, we study the interaction between knowledge and instruction following, and observe that LLMs struggle to follow simple answer-modifying instructions and are also distracted by instructions that should have no bearing on the original knowledge task answer. We leverage existing multiple-choice, answer-based knowledge benchmarks and apply a set of simple instructions that include manipulating text (e.g., changing case), manipulating numeric quantities (e.g., increasing values, changing formatting), operating on lists (e.g., sorting answer candidates), and distractor instructions (e.g., changing the case of numeric answers).
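
To make the setup concrete, below is a minimal sketch of how such instruction perturbations might be paired with a multiple-choice knowledge item. This is not the authors' released code; all function names, instruction wordings, and the example question are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code): pair a
# multiple-choice knowledge question with a simple instruction that
# either modifies the expected answer or should have no effect on it.

def uppercase_answer(answer: str) -> str:
    """Answer-modifying instruction: the gold answer becomes upper case."""
    return answer.upper()

def sort_candidates(candidates: list[str]) -> list[str]:
    """List instruction: the model should output the candidates sorted."""
    return sorted(candidates)

def build_prompt(question: str, candidates: list[str], instruction: str) -> str:
    """Combine a knowledge question, its options, and an extra instruction."""
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(candidates))
    return f"{question}\n{options}\n\nInstruction: {instruction}\nAnswer:"

if __name__ == "__main__":
    question = "Which planet is closest to the Sun?"
    candidates = ["Venus", "Mercury", "Earth", "Mars"]
    gold = "Mercury"

    # Answer-modifying instruction: the expected output changes to "MERCURY".
    prompt = build_prompt(question, candidates,
                          "Write your answer in upper case.")
    expected = uppercase_answer(gold)
    print(prompt, "->", expected)

    # Distractor instruction: it mentions numeric answers, but the gold
    # answer here is text, so the expected output stays "Mercury".
    distractor = build_prompt(question, candidates,
                              "If your answer is a number, write it in words.")
    print(distractor, "->", gold)

Under this framing, a model is scored correct only if it both knows the answer and applies (or correctly ignores) the attached instruction, which is the interaction the paper measures.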

View on arXiv
@article{murthy2025_2410.12972,
  title={Evaluating the Instruction-following Abilities of Language Models using Knowledge Tasks},
  author={Rudra Murthy and Praveen Venkateswaran and Prince Kumar and Danish Contractor},
  journal={arXiv preprint arXiv:2410.12972},
  year={2025}
}