
Can Large Language Models Reason and Plan?

Abstract

While humans sometimes show the ability to correct their own erroneous guesses through self-critique, there seems to be no basis for that assumption in the case of LLMs.
