Conceptual Metaphor Theory as a Prompting Paradigm for Large Language Models

Abstract

We introduce Conceptual Metaphor Theory (CMT) as a framework for enhancing large language models (LLMs) through cognitive prompting in complex reasoning tasks. CMT leverages metaphorical mappings to structure abstract reasoning, improving a model's ability to process and explain intricate concepts. By incorporating CMT-based prompts, we guide LLMs toward more structured and human-like reasoning patterns. To evaluate this approach, we compare four baseline models (Llama3.2, Phi3, Gemma2, and Mistral) against their CMT-augmented counterparts on benchmark tasks spanning domain-specific reasoning, creative insight, and metaphor interpretation. Responses were evaluated automatically using the Llama3.3 70B model. Experimental results indicate that CMT prompting significantly enhances reasoning accuracy, clarity, and metaphorical coherence, outperforming the baseline models across all evaluated tasks.
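To make the idea concrete, the following is a minimal, hypothetical sketch of what CMT-based prompt augmentation might look like. The template wording, function name, and example domains are illustrative assumptions; the paper's actual prompts are not reproduced here.

```python
# Hypothetical sketch of CMT-style prompt augmentation. The scaffold below
# is an illustrative assumption, not the authors' actual prompt template.

def cmt_prompt(question: str, source_domain: str, target_domain: str) -> str:
    """Wrap a question with a conceptual-metaphor scaffold that asks the
    model to map structure from a familiar source domain onto the abstract
    target domain before answering."""
    return (
        f"Use the conceptual metaphor '{target_domain.upper()} IS "
        f"{source_domain.upper()}'.\n"
        f"1. List the key structural elements of {source_domain}.\n"
        f"2. Map each element onto {target_domain}.\n"
        f"3. Use the mapping to reason step by step before answering.\n\n"
        f"Question: {question}"
    )

# The augmented prompt would replace the plain question sent to the model.
baseline_question = "Why do interest rates affect inflation?"
augmented = cmt_prompt(baseline_question,
                       source_domain="a thermostat",
                       target_domain="monetary policy")
print(augmented)
```

Under this reading, the baseline condition sends `baseline_question` directly, while the CMT condition sends the scaffolded prompt, and both responses are then scored by the judge model.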

@article{kramer2025_2502.01901,
  title={Conceptual Metaphor Theory as a Prompting Paradigm for Large Language Models},
  author={Oliver Kramer},
  journal={arXiv preprint arXiv:2502.01901},
  year={2025}
}