Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs

Two lines of approaches have been adopted for complex reasoning with LLMs. One line of work prompts LLMs with various reasoning structures, whose structured outputs can naturally be regarded as intermediate reasoning steps. Another line of work adopts LLM-free declarative solvers for the reasoning task, achieving higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers. Aiming to resolve the trade-off between answer accuracy and interpretability, we present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning proofs. As long as LLMs correctly translate problem descriptions into Prolog representations, the corresponding reasoning proofs are guaranteed to be causal and reliable. On two logical reasoning datasets and one arithmetic reasoning dataset, our framework obtains significant improvements in both answer accuracy and reasoning proof accuracy. Our code is released at this https URL
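To make the core idea concrete, here is a minimal sketch of why a solver's search log doubles as a proof: a toy backward-chaining prover in Python that records every rule application it commits to. This is an illustrative assumption, not the authors' released code; the paper works with a real Prolog interpreter's search logs, and all names below (Rule, prove, kb) are hypothetical. Variables and unification are omitted for brevity.

from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    head: str         # conclusion, e.g. "mortal(socrates)"
    body: List[str]   # premises that must hold first; [] for a fact

# A knowledge base an LLM might produce by translating:
# "Socrates is a man. All men are mortal."
kb = [
    Rule("man(socrates)", []),
    Rule("mortal(socrates)", ["man(socrates)"]),
]

def prove(goal: str, kb: List[Rule], log: List[str], depth: int = 0) -> bool:
    """Depth-first backward chaining; each committed step is appended to `log`."""
    for rule in kb:
        if rule.head == goal:
            mark = len(log)
            premises = ", ".join(rule.body) if rule.body else "fact"
            log.append("  " * depth + f"{goal}  <=  {premises}")
            if all(prove(sub, kb, log, depth + 1) for sub in rule.body):
                return True
            del log[mark:]  # backtrack: discard this failed branch from the log
    return False

log: List[str] = []
if prove("mortal(socrates)", kb, log):
    print("\n".join(log))
# Output:
# mortal(socrates)  <=  man(socrates)
#   man(socrates)  <=  fact

Because every line of the log is a rule application that actually contributed to the derivation, the trace is causal by construction: the answer cannot be reached without it. A full setup along the paper's lines would instead run the LLM-generated program through a Prolog interpreter (e.g. SWI-Prolog) and post-process its search log into the same kind of human-readable proof.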
@article{yang2025_2311.09802,
  title   = {Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs},
  author  = {Sen Yang and Xin Li and Leyang Cui and Lidong Bing and Wai Lam},
  journal = {arXiv preprint arXiv:2311.09802},
  year    = {2025}
}