Human Choice Prediction in Language-based Persuasion Games: Simulation-based Off-Policy Evaluation

Abstract

Recent advances in Large Language Models (LLMs) have spurred interest in designing LLM-based agents for tasks that involve interaction with human and artificial agents. This paper addresses a key aspect of the design of such agents: predicting human decisions in off-policy evaluation (OPE). We focus on language-based persuasion games, in which an expert aims to influence a decision-maker through verbal messages. In our OPE framework, the prediction model is trained on human interaction data collected from encounters with one set of expert agents, and its performance is evaluated on interactions with a different set of experts. Using a dedicated application, we collected a dataset of 87K decisions from humans playing a repeated decision-making game against artificial agents. To enhance off-policy performance, we propose a simulation technique that spans the entire agent space and employs simulated decision-makers. Our learning strategy yields significant OPE gains, e.g., improving prediction accuracy on the top 15% most challenging cases by 7.1%. Our code and the large dataset we collected and generated are submitted as supplementary material and are publicly available in our GitHub repository.

@article{shapira2025_2305.10361,
  title={Human Choice Prediction in Language-based Persuasion Games: Simulation-based Off-Policy Evaluation},
  author={Eilam Shapira and Omer Madmon and Reut Apel and Moshe Tennenholtz and Roi Reichart},
  journal={arXiv preprint arXiv:2305.10361},
  year={2025}
}