
Order Doesn't Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation

Abstract

Logical reasoning is essential for large language models (LLMs) to ensure accurate and coherent inference. However, LLMs struggle with variations in reasoning order and fail to generalize across logically equivalent transformations, often relying on fixed sequential patterns rather than genuine logical understanding. To address this issue, we introduce an order-centric data augmentation framework based on commutativity in logical reasoning. We first randomly shuffle independent premises to obtain condition-order augmentation. For reasoning steps, we construct a directed acyclic graph (DAG) that models dependencies between steps, allowing us to identify valid reorderings of steps while preserving logical correctness. By leveraging order-centric augmentations, models can develop a more flexible and generalized reasoning process. Finally, we conduct extensive experiments across multiple logical reasoning benchmarks, demonstrating that our method significantly enhances LLMs' reasoning performance and adaptability to diverse logical structures. We release our code and augmented data at this https URL.
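The DAG-based step reordering described above can be viewed as sampling random topological orders of the dependency graph: any ordering in which each step appears after all of its prerequisites is a valid reordering. The sketch below illustrates this idea; the function name and graph encoding are illustrative assumptions, not the authors' released code:

```python
import random
from collections import defaultdict

def random_valid_orderings(steps, deps, k=3, seed=0):
    """Sample up to k orderings of reasoning steps that respect the
    dependency DAG: a step may appear only after all steps it depends on.

    steps: list of step identifiers
    deps:  dict mapping a step to the list of steps it depends on
    """
    rng = random.Random(seed)
    orderings = []
    for _ in range(k):
        # Rebuild in-degrees and adjacency for each fresh sample.
        indegree = {s: len(deps.get(s, [])) for s in steps}
        children = defaultdict(list)
        for s, parents in deps.items():
            for p in parents:
                children[p].append(s)
        ready = [s for s in steps if indegree[s] == 0]
        order = []
        while ready:
            # Random tie-breaking among currently available steps
            # yields a uniform-ish spread over valid topological orders.
            s = ready.pop(rng.randrange(len(ready)))
            order.append(s)
            for c in children[s]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    ready.append(c)
        orderings.append(order)
    return orderings
```

For example, with two independent premises A and B and a conclusion step C depending on both, every sampled ordering places C last, while A and B may swap freely.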

@article{he2025_2502.19907,
  title={Order Doesn't Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation},
  author={Qianxi He and Qianyu He and Jiaqing Liang and Yanghua Xiao and Weikang Zhou and Zeye Sun and Fei Yu},
  journal={arXiv preprint arXiv:2502.19907},
  year={2025}
}