Evaluating the Ability of Large Language Models to Reason about Cardinal Directions, Revisited

16 July 2025
Anthony G. Cohn
Robert E. Blackwell
arXiv:2507.12059 (abs | PDF | HTML) · GitHub (3★)
Main: 7 pages, 5 figures; bibliography: 1 page
Abstract

We investigate the abilities of 28 Large Language Models (LLMs) to reason about cardinal directions (CDs) using a benchmark generated from a set of templates, extensively testing an LLM's ability to determine the correct CD given a particular scenario. The templates allow for a number of degrees of variation, such as the means of locomotion of the agent involved and whether the scenario is set in the first, second or third person. Even the newer Large Reasoning Models are unable to reliably determine the correct CD for all questions. This paper summarises and extends earlier work presented at COSIT-24.
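The abstract describes questions generated from templates whose slots vary the agent's means of locomotion and the grammatical person while the ground-truth answer stays fixed. The sketch below is a hypothetical illustration of that idea, not the authors' released benchmark code; the slot values, phrasing, and helper functions (generate, answer) are assumptions made for the example.

```python
"""Hypothetical sketch (not the authors' benchmark): expanding a question
template over locomotion and grammatical-person slots, with a known answer."""
from itertools import product

DIRECTIONS = ["north", "east", "south", "west"]
TURNS = {"left": -1, "right": 1, "around": 2}          # quarter-turn offsets
LOCOMOTION = ["walking", "cycling", "driving"]
SUBJECTS = {"first": "I am", "second": "You are", "third": "Alex is"}
FACING = {"first": "am I", "second": "are you", "third": "is Alex"}

def answer(heading: str, turn: str) -> str:
    """Ground-truth direction after applying a turn to an initial heading."""
    i = DIRECTIONS.index(heading)
    return DIRECTIONS[(i + TURNS[turn]) % 4]

def generate():
    """Yield (question, gold answer) pairs for every combination of slots."""
    for person, verb, heading, turn in product(SUBJECTS, LOCOMOTION,
                                               DIRECTIONS, TURNS):
        verb_s = "s" if person == "third" else ""
        q = (f"{SUBJECTS[person]} {verb} {heading}, then turn{verb_s} {turn}. "
             f"Which cardinal direction {FACING[person]} now facing?")
        yield q, answer(heading, turn)

if __name__ == "__main__":
    for question, gold in list(generate())[:3]:
        print(question, "->", gold)
```

A generator like this makes the benchmark exhaustive over the slot combinations, so an LLM's accuracy can be broken down by locomotion, person, and turn type rather than reported as a single aggregate score.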
