LLMs can implicitly learn from mistakes in-context

12 February 2025
Lisa Alazraki, Maximilian Mozes, Jon Ander Campos, Yi Chern Tan, Marek Rei, Max Bartolo
Abstract

Learning from mistakes is a fundamental feature of human intelligence. Previous work has shown that Large Language Models (LLMs) can also learn from incorrect answers when provided with a comprehensive rationale detailing why an answer is wrong or how to correct it. In this work, we examine whether LLMs can learn from mistakes in mathematical reasoning tasks when these explanations are not provided. We investigate whether LLMs are able to implicitly infer such rationales simply from observing both incorrect and correct answers. Surprisingly, we find that LLMs perform better, on average, when rationales are eliminated from the context and incorrect answers are simply shown alongside correct ones. This approach also substantially outperforms chain-of-thought prompting in our evaluations. We show that these results are consistent across LLMs of different sizes and varying reasoning abilities. Further, we carry out an in-depth analysis and show that prompting with both wrong and correct answers leads to greater performance and better generalisation than introducing additional, more diverse question-answer pairs into the context. Finally, we show that new rationales generated by models that have only observed incorrect and correct answers are scored as highly by humans as those produced with the aid of exemplar rationales. Our results demonstrate that LLMs are indeed capable of in-context implicit learning.
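As an illustration of the setup the abstract describes, the sketch below builds the kind of rationale-free prompt studied in the paper: each in-context exemplar pairs a question with an incorrect answer and the correct answer, with no explanation of the error. The exemplar questions, answers, and the build_prompt helper are illustrative placeholders and assumptions, not the authors' actual prompt template or data.

# Minimal sketch (not the authors' exact template): an in-context prompt that
# shows each exemplar question with an incorrect answer and the correct answer,
# omitting any rationale. Exemplars and question text are made up for illustration.

EXEMPLARS = [
    {
        "question": "A shop sells pens at $3 each. How much do 7 pens cost?",
        "incorrect_answer": "24",
        "correct_answer": "21",
    },
    {
        "question": "What is 15% of 200?",
        "incorrect_answer": "35",
        "correct_answer": "30",
    },
]


def build_prompt(new_question: str) -> str:
    """Concatenate exemplars (wrong + right answers, no rationales) and append the new question."""
    parts = []
    for ex in EXEMPLARS:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Incorrect answer: {ex['incorrect_answer']}\n"
            f"Correct answer: {ex['correct_answer']}\n"
        )
    parts.append(f"Question: {new_question}\nCorrect answer:")
    return "\n".join(parts)


if __name__ == "__main__":
    # The resulting string would be sent to an LLM of choice for completion.
    print(build_prompt("A train travels 60 km/h for 2.5 hours. How far does it go?"))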

View on arXiv: https://arxiv.org/abs/2502.08550
@article{alazraki2025_2502.08550,
  title={LLMs can implicitly learn from mistakes in-context},
  author={Lisa Alazraki and Maximilian Mozes and Jon Ander Campos and Yi Chern Tan and Marek Rei and Max Bartolo},
  journal={arXiv preprint arXiv:2502.08550},
  year={2025}
}