Solving Situation Puzzles with Large Language Model and External Reformulation

In recent years, large language models (LLMs) have shown an impressive ability to perform arithmetic and symbolic reasoning tasks. However, we find that LLMs (e.g., ChatGPT) perform poorly on reasoning that requires multiple rounds of dialogue, especially when solving situation puzzles. Specifically, after several rounds of Q&A, LLMs tend to ask overly detailed questions focused on a single aspect, or to repeat the same or similar questions. To help LLMs break out of this dilemma, we propose a novel external reformulation methodology, in which the situation puzzle is reformulated after several rounds of Q&A or whenever the LLM makes an incorrect guess. Experiments show that our method outperforms direct use of LLMs for solving situation puzzles (e.g., in win rate and in the number of question/guess attempts), highlighting the potential of strategic problem reformulation to enhance the reasoning capabilities of LLMs in complex interactive scenarios.
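To make the interaction concrete, below is a minimal sketch of the kind of question-then-reformulate loop the abstract describes. The function names (ask_llm, host_answer), the trigger constant REFORMULATE_EVERY, and the prompt wording are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an external-reformulation loop for situation puzzles.
# Assumptions (not from the paper): ask_llm(prompt) -> str is the solver LLM;
# host_answer(question) -> one of {"yes", "no", "irrelevant", "wrong_guess",
# "correct"} is the puzzle host; REFORMULATE_EVERY is a tunable round budget.

REFORMULATE_EVERY = 5   # assumed: reformulate after this many Q&A rounds
MAX_ROUNDS = 30         # assumed: overall budget on question/guess attempts

def solve_situation_puzzle(puzzle, ask_llm, host_answer):
    """Iteratively question the host; reformulate the puzzle when stuck."""
    statement = puzzle
    history = []  # accumulated (question, answer) pairs

    for round_idx in range(1, MAX_ROUNDS + 1):
        qa_log = "\n".join(f"Q: {q} A: {a}" for q, a in history)
        question = ask_llm(
            f"Puzzle: {statement}\nPrevious Q&A:\n{qa_log}\n"
            "Ask one new yes/no question, or state a final guess."
        )
        answer = host_answer(question)
        if answer == "correct":
            return question  # the winning final guess
        history.append((question, answer))

        # External reformulation: triggered periodically or after a wrong
        # final guess, folding what has been learned back into the puzzle
        # statement so the solver stops circling the same narrow aspect.
        if round_idx % REFORMULATE_EVERY == 0 or answer == "wrong_guess":
            statement = ask_llm(
                "Rewrite this puzzle so it stays self-contained and "
                "incorporates the confirmed facts below.\n"
                f"Puzzle: {statement}\nFacts:\n{qa_log}"
            )
    return None  # no solution found within the round budget
```

The key design point is that the reformulation step is external to the solver's dialogue: the rewritten statement replaces the original puzzle in subsequent prompts, rather than being appended as yet another turn of conversation.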
@article{li2025_2503.18394,
  title   = {Solving Situation Puzzles with Large Language Model and External Reformulation},
  author  = {Kun Li and Xinwei Chen and Tianyou Song and Chengrui Zhou and Zhuoran Liu and Zhenyan Zhang and Jiangjian Guo and Qing Shan},
  journal = {arXiv preprint arXiv:2503.18394},
  year    = {2025}
}