LLM-Powered Preference Elicitation in Combinatorial Assignment

We study the potential of large language models (LLMs) as proxies for humans to simplify preference elicitation (PE) in combinatorial assignment. While traditional PE methods rely on iterative queries to capture preferences, LLMs offer a one-shot alternative with reduced human effort. We propose a framework for LLM proxies that can work in tandem with state-of-the-art ML-powered preference elicitation schemes. Our framework handles the novel challenges introduced by LLMs, such as response variability and increased computational costs. We experimentally evaluate the efficiency of LLM proxies against human queries in the well-studied course allocation domain, and we investigate the model capabilities required for success. We find that our approach improves allocative efficiency by up to 20%, and that these results are robust across different LLMs and to differences in the quality and accuracy of reporting.
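The abstract describes using an LLM as a stand-in respondent that answers preference queries on a student's behalf. A minimal sketch of such a proxy is below; the prompt wording, the 0-100 value scale, and the use of repeated sampling with median aggregation to damp response variability are illustrative assumptions, not the paper's actual mechanism, and `ask_llm` is a hypothetical stand-in for a real model endpoint.

```python
import re
from statistics import median

def parse_value(text):
    """Extract the first number from a model response; None if absent."""
    m = re.search(r"-?\d+(?:\.\d+)?", text)
    return float(m.group()) if m else None

def proxy_value_query(bundle, ask_llm, n_samples=5):
    """Ask an LLM proxy to value a course bundle.

    Sampling the model several times and taking the median is one
    plausible way to handle the response variability the abstract
    mentions (an assumption for illustration, not the paper's method).
    """
    prompt = (f"On a scale of 0-100, how much would this student value "
              f"the course bundle {sorted(bundle)}? Reply with a number.")
    values = [parse_value(ask_llm(prompt)) for _ in range(n_samples)]
    values = [v for v in values if v is not None]
    return median(values) if values else None

# Stub standing in for a real LLM endpoint, so the sketch is runnable.
def stub_llm(prompt):
    return "I'd say about 72 out of 100."

print(proxy_value_query({"CS101", "ECON2"}, stub_llm))  # -> 72.0
```

In a full pipeline, the elicited values would feed the downstream ML-powered elicitation scheme in place of (or alongside) direct human reports.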
@article{soumalias2025_2502.10308,
  title={LLM-Powered Preference Elicitation in Combinatorial Assignment},
  author={Ermis Soumalias and Yanchen Jiang and Kehang Zhu and Michael Curry and Sven Seuken and David C. Parkes},
  journal={arXiv preprint arXiv:2502.10308},
  year={2025}
}