Understanding the Logical Capabilities of Large Language Models via Out-of-Context Representation Learning

13 March 2025
Jonathan Shaki
Emanuele La Malfa
Michael Wooldridge
Sarit Kraus
Abstract

We study the capabilities of Large Language Models (LLMs) on binary relations, a ubiquitous concept in mathematics that underpins most reasoning, math, and logic benchmarks. This work focuses on equality, inequality, and inclusion, along with the properties they satisfy, such as (ir)reflexivity, (a)symmetry, transitivity, and logical complexity (e.g., the number of reasoning "hops"). We propose an alternative to in-context learning that trains only the representations of newly introduced tokens, which we call out-of-context representation learning. This method mitigates linguistic biases already present in a model and, unlike in-context learning, does not rely on external information or illustrations. We argue that out-of-context representation learning is a better alternative to in-context learning and fine-tuning for evaluating the capabilities of LLMs on logic tasks that are the building blocks of more complex reasoning benchmarks.
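
The abstract describes the core recipe of out-of-context representation learning: introduce fresh tokens for abstract entities, freeze the pretrained weights, and optimize only the embeddings of the new tokens on statements expressing a binary relation. The sketch below illustrates that general idea with Hugging Face Transformers and PyTorch; the model name (gpt2), the entity tokens, the training facts, and the hyperparameters are illustrative assumptions and are not taken from the paper.

# Minimal sketch of out-of-context representation learning (assumed setup, not the authors' code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; the paper's choice may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Introduce fresh tokens for abstract entities so no linguistic prior attaches to them.
new_tokens = ["<ent_a>", "<ent_b>", "<ent_c>"]
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
new_ids = tokenizer.convert_tokens_to_ids(new_tokens)

# Freeze every pretrained parameter; only the embedding matrix stays trainable.
for p in model.parameters():
    p.requires_grad = False
emb = model.get_input_embeddings()
emb.weight.requires_grad = True

# Hypothetical training facts expressing a binary relation over the new entities.
facts = ["<ent_a> equals <ent_b>", "<ent_b> equals <ent_c>"]

optimizer = torch.optim.Adam([emb.weight], lr=1e-3)
mask = torch.zeros_like(emb.weight)
mask[new_ids] = 1.0  # keep gradients only on the newly added token rows

model.train()
for step in range(100):
    for fact in facts:
        batch = tokenizer(fact, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])
        optimizer.zero_grad()
        out.loss.backward()
        emb.weight.grad *= mask  # update only the new tokens' representations
        optimizer.step()

# After training, held-out statements (e.g. "<ent_a> equals <ent_c>") can be scored
# to probe properties such as symmetry or transitivity of the learned relation.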

@article{shaki2025_2503.10408,
  title={Understanding the Logical Capabilities of Large Language Models via Out-of-Context Representation Learning},
  author={Jonathan Shaki and Emanuele La Malfa and Michael Wooldridge and Sarit Kraus},
  journal={arXiv preprint arXiv:2503.10408},
  year={2025}
}