Can Large Language Models Reason and Plan?
Abstract
While humans sometimes do show the ability to correct their own erroneous guesses through self-critique, there seems to be no basis for that assumption in the case of LLMs.
