
Geometry of Knowledge Allows Extending Diversity Boundaries of Large Language Models

Mateusz Bystroński
Doheon Han
Nitesh V. Chawla
Tomasz Kajdanowicz
Main: 7 pages · 4 figures · 3 tables · Bibliography: 3 pages · Appendix: 3 pages
Abstract

Starting from the hypothesis that knowledge in semantic space is organized along structured manifolds, we argue that this geometric structure renders the space explorable. By traversing it and using the resulting continuous representations to condition an LLM's generation distribution, we can systematically expand the model's reachable semantic range. We introduce a framework that requires no modification of the LLM's parameters and operationalizes this idea by constructing a conditioning distribution from a small set of diverse anchor generations. This distribution conditions the LLM's generation via an xRAG-style projector. Our experiments demonstrate that this manifold-based conditioning substantially increases generative diversity, with direct benefits for enhancing divergent thinking, a core facet of creativity, in language models.
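The abstract leaves the mechanics implicit. The minimal PyTorch sketch below illustrates one plausible reading, not the authors' implementation: anchor generations are encoded into embeddings, a point on the region they span is sampled (here via a random convex combination, one simple choice of "traversal"), and an xRAG-style projector maps that continuous vector into the LLM's input-embedding space as a single soft token. All names, dimensions, and the projector architecture are assumptions.

```python
# Hypothetical sketch of manifold-based conditioning (not the paper's code).
# Assumes: anchor generations already encoded to d_enc-dim vectors, and an
# xRAG-style projector mapping one continuous vector into the LLM's
# token-embedding space, to be prepended as a single "soft token".
import torch
import torch.nn as nn

class XRAGStyleProjector(nn.Module):
    """Two-layer MLP from encoder space to the LLM's input-embedding space."""
    def __init__(self, d_enc: int, d_llm: int, d_hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_enc, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_llm),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def sample_manifold_point(anchor_embs: torch.Tensor) -> torch.Tensor:
    """Sample a random convex combination (Dirichlet weights) of anchor
    embeddings: one simple stand-in for traversing the anchor manifold."""
    weights = torch.distributions.Dirichlet(
        torch.ones(anchor_embs.size(0))).sample()
    return weights @ anchor_embs  # shape: (d_enc,)

# Toy example with made-up sizes: 5 anchors, 768-dim encoder, 4096-dim LLM.
anchors = torch.randn(5, 768)        # stand-in for encoded anchor generations
projector = XRAGStyleProjector(768, 4096)
z = sample_manifold_point(anchors)   # continuous point among the anchors
soft_token = projector(z)            # (4096,): prepend to LLM input embeddings
print(soft_token.shape)
```

In this reading, diversity comes from resampling the manifold point per generation while the LLM itself stays frozen, consistent with the paper's claim that no LLM parameters are modified.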
