Chain-of-Thought Prompting for Out-of-Distribution Samples: A Latent-Variable Study

17 April 2025
Yu Wang, Fu-Chieh Chang, Pei-Yuan Wu

Topics: OODD, ReLM, LRM
Abstract

Chain-of-Thought (CoT) prompting has emerged as a powerful technique for improving in-context learning (ICL) in large language models (LLMs) by breaking complex reasoning into intermediate steps. However, the ability of CoT to generalize under distribution shift remains poorly understood. In this work, we extend a latent-variable framework for CoT prompting and study its behavior in two prototypical out-of-distribution (OOD) scenarios: (i) the latent variables for the CoT steps are permuted into novel combinations, and (ii) the latent variables are uniformly scaled by a factor. Our experiments demonstrate that CoT inference generalizes effectively to OOD samples whose latent variables closely resemble those seen during training, but that its performance degrades as this similarity decreases. These findings provide foundational insight into the strengths and limitations of CoT prompting under OOD conditions and suggest directions for developing more resilient reasoning strategies in future LLMs.
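To make the two OOD constructions concrete, below is a minimal NumPy sketch of how such latent-variable shifts might be simulated. Everything here (the latent shapes, the nearest-neighbor distance proxy, and all names such as `train_latents` and `nearest_train_distance`) is a hypothetical illustration of the abstract's two scenarios, not the authors' actual framework or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training-time latents: each CoT chain is modeled as a
# sequence of step-level latent vectors (num_chains x num_steps x dim).
num_chains, num_steps, dim = 100, 4, 8
train_latents = rng.normal(size=(num_chains, num_steps, dim))

# OOD scenario (i): permute which chain each step's latent comes from,
# independently per step. Every individual step latent was seen during
# training, but the *combination* of step latents is novel.
permuted = np.stack(
    [train_latents[rng.permutation(num_chains), s, :] for s in range(num_steps)],
    axis=1,
)

# OOD scenario (ii): uniformly scale every latent by a factor; larger
# factors push the latents further from the training distribution.
scale = 2.0
scaled = scale * train_latents

def nearest_train_distance(latents, train):
    """Crude similarity proxy: mean distance from each shifted chain to
    its nearest training chain (chains flattened to single vectors)."""
    flat = latents.reshape(len(latents), -1)
    train_flat = train.reshape(len(train), -1)
    dists = np.linalg.norm(flat[:, None, :] - train_flat[None, :, :], axis=-1)
    return dists.min(axis=1).mean()

print("permuted latents:", nearest_train_distance(permuted, train_latents))
print("scaled latents:  ", nearest_train_distance(scaled, train_latents))
```

The scaling factor gives a direct knob on how far the latents drift from the training set, which mirrors the abstract's claim that CoT performance degrades as similarity to the training-time latents decreases.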

@article{wang2025_2504.12991,
  title={Chain-of-Thought Prompting for Out-of-Distribution Samples: A Latent-Variable Study},
  author={Yu Wang and Fu-Chieh Chang and Pei-Yuan Wu},
  journal={arXiv preprint arXiv:2504.12991},
  year={2025}
}