Despite the growing integration of AI chatbots as conversational agents in public discourse, empirical evidence regarding their capacity to foster intercultural empathy remains limited. Using a randomized dialogue experiment, we examined how different types of AI chatbot interaction (deliberative versus non-deliberative, and culturally aligned versus non-aligned) affect intercultural empathy across cultural groups. Deliberative conversations increased intercultural empathy among American participants but not among Latin American participants, who perceived the AI's responses as culturally inaccurate and as failing to represent their cultural contexts and perspectives authentically. Analyses of the real-time interactions indicate that these differences stem from cultural knowledge gaps inherent in large language models: despite explicit prompting to represent cultural perspectives in participants' native languages, the AI systems still exhibited significant disparities in cultural representation. These findings highlight the importance of designing AI systems capable of culturally authentic engagement in deliberative conversations. Our study contributes to deliberation theory and AI alignment research by underscoring AI's role in intercultural dialogue and the persistent challenge of representational asymmetry in democratic discourse.
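The abstract notes that the chatbots were explicitly prompted to represent cultural perspectives in participants' native languages. The paper's actual prompts are not reproduced here; the sketch below is only a hypothetical illustration of how a culturally aligned, deliberative system prompt for the 2x2 conditions might be assembled. The persona wording, the Condition fields, and the build_messages helper are assumptions for illustration, not the authors' materials.

from dataclasses import dataclass

# Hypothetical sketch: assembling a system prompt that reflects the
# deliberative / culturally aligned experimental conditions described in the
# abstract. The wording and structure are illustrative assumptions.

@dataclass
class Condition:
    deliberative: bool          # deliberative vs. non-deliberative dialogue
    culturally_aligned: bool    # culturally aligned vs. non-aligned persona
    language: str               # participant's native language, e.g. "es" or "en"

def build_system_prompt(cond: Condition) -> str:
    """Compose a system prompt matching one cell of the 2x2 design."""
    parts = []
    if cond.culturally_aligned:
        parts.append(
            "Adopt the perspective of a member of the participant's cultural "
            "community and ground your arguments in that cultural context."
        )
    else:
        parts.append("Respond from a generic, culturally neutral perspective.")
    if cond.deliberative:
        parts.append(
            "Engage deliberatively: acknowledge the participant's view, give "
            "reasons and evidence, and invite them to weigh trade-offs."
        )
    else:
        parts.append("Answer questions directly without inviting further debate.")
    parts.append(f"Reply only in the participant's native language ({cond.language}).")
    return " ".join(parts)

def build_messages(cond: Condition, participant_turn: str) -> list[dict]:
    """Package the system prompt and the participant's turn for a chat-style API."""
    return [
        {"role": "system", "content": build_system_prompt(cond)},
        {"role": "user", "content": participant_turn},
    ]

if __name__ == "__main__":
    cond = Condition(deliberative=True, culturally_aligned=True, language="es")
    for msg in build_messages(cond, "Creo que la inmigración beneficia a mi país."):
        print(msg["role"], ":", msg["content"][:80])

One design point worth noting: expressing the conditions as a small dataclass keeps the experimental manipulation explicit and auditable, which matters when the abstract's central finding is that prompting alone did not close the cultural representation gap.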
@article{villanueva2025_2504.13887,
  title   = {AI as a deliberative partner fosters intercultural empathy for Americans but fails for Latin American participants},
  author  = {Isabel Villanueva and Tara Bobinac and Binwei Yao and Junjie Hu and Kaiping Chen},
  journal = {arXiv preprint arXiv:2504.13887},
  year    = {2025}
}