Fine-Tuning Language Models to Know What They Know

Sangjun Park
Elliot Meyerson
Xin Qiu
Risto Miikkulainen
Main: 8 pages · Appendix: 10 pages · Bibliography: 3 pages · 18 figures · 9 tables
Abstract

Metacognition, the awareness of one's own knowledge, is a critical component of intelligence. While humans rely on a shared internal memory both to answer questions and to report their knowledge state, whether LLMs exhibit a similar dependency remains underexplored. This study proposes a framework that measures metacognitive ability $d'_{\rm type2}$ with a dual-prompt method, and then introduces Evolution Strategy for Metacognitive Alignment (ESMA) to bind a model's internal knowledge to its explicit behaviors. ESMA generalizes robustly across diverse untrained settings, indicating an enhancement in the model's ability to reference its own knowledge. Furthermore, parameter analysis attributes these improvements to a sparse set of significant modifications.
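The metric named in the abstract, $d'_{\rm type2}$, is the standard type-2 sensitivity measure from signal detection theory: the z-scored rate of high confidence on correct answers minus the z-scored rate of high confidence on incorrect answers. The paper's exact dual-prompt estimation procedure is not given here, so the sketch below is a generic, illustrative implementation with a common log-linear correction; the function name and correction constant are assumptions, not the authors' code.

```python
from statistics import NormalDist

def d_prime_type2(correct, confident, eps=0.5):
    """Illustrative type-2 d': how well confidence reports discriminate
    correct from incorrect answers. `correct` and `confident` are parallel
    boolean sequences (one entry per question)."""
    hits = sum(1 for c, f in zip(correct, confident) if c and f)       # confident & correct
    fas = sum(1 for c, f in zip(correct, confident) if not c and f)    # confident & wrong
    n_correct = sum(bool(c) for c in correct)
    n_wrong = len(correct) - n_correct
    # Log-linear correction keeps rates strictly inside (0, 1),
    # avoiding infinite z-scores when a rate is exactly 0 or 1.
    hit_rate = (hits + eps) / (n_correct + 2 * eps)
    fa_rate = (fas + eps) / (n_wrong + 2 * eps)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

A model whose confidence tracks its correctness yields a positive score; confidence that is independent of correctness yields a score near zero.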
