
LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning

Abstract

Modern large language models (LLMs) employ various forms of logical inference, both implicitly and explicitly, when addressing reasoning tasks. Understanding how to optimally leverage these inference paradigms is critical for advancing LLMs' reasoning capabilities. This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning -- a fundamental cognitive task -- that is systematically parameterized across three dimensions: modality (textual, visual, symbolic), difficulty (easy, medium, hard), and task format (multiple-choice or free-text generation). We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines across these dimensions, and demonstrate that our findings generalize to broader in-context learning tasks. Additionally, we investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference in LLM reasoning. This exploratory study provides a foundation for future research in enhancing LLM reasoning through systematic logical inference strategies. Resources are available at this https URL.
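For concreteness, the abstract's three-dimensional parameterization implies a 3 x 3 x 2 grid of task configurations, each evaluated under three inference pipelines. The sketch below illustrates how such a grid might be enumerated; the dimension values come from the abstract, but all function names and the scoring stub are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

# Dimension values as listed in the abstract; the pipeline names come
# from the three inference paradigms the paper compares.
MODALITIES = ("textual", "visual", "symbolic")
DIFFICULTIES = ("easy", "medium", "hard")
FORMATS = ("multiple-choice", "free-text")
PIPELINES = ("inductive", "abductive", "deductive")


def run_pipeline(pipeline: str, modality: str, difficulty: str, fmt: str) -> float:
    """Hypothetical placeholder: query an LLM with the given inference
    pipeline on one task configuration and return an accuracy score."""
    return 0.0  # real model calls and scoring would go here


def evaluate_grid() -> dict:
    # One evaluation cell per (pipeline, modality, difficulty, format)
    # tuple: 3 pipelines x 18 task configurations = 54 cells.
    return {
        (pipeline, modality, difficulty, fmt): run_pipeline(
            pipeline, modality, difficulty, fmt
        )
        for pipeline, modality, difficulty, fmt in product(
            PIPELINES, MODALITIES, DIFFICULTIES, FORMATS
        )
    }
```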

@article{zheng2025_2502.11176,
  title={LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning},
  author={Tianshi Zheng and Jiayang Cheng and Chunyang Li and Haochen Shi and Zihao Wang and Jiaxin Bai and Yangqiu Song and Ginny Y. Wong and Simon See},
  journal={arXiv preprint arXiv:2502.11176},
  year={2025}
}