The ongoing evolution of AI paradigms has propelled AI research into the era of Agentic AI. Consequently, the focus of research has shifted from single agents and simple applications toward multi-agent autonomous decision-making and task collaboration in complex environments. As Large Language Models (LLMs) advance, their applications become more diverse and complex, giving rise to risks that are increasingly situational and systemic. This has drawn significant attention to value alignment for AI agents, which aims to ensure that an agent's goals, preferences, and behaviors align with human values and societal norms. This paper reviews value alignment in agent systems within specific application scenarios, connecting advances in large-model-driven AI with the demands of social governance. Our review covers value principles, application scenarios for agent systems, and agent value alignment evaluation. Specifically, we organize value principles hierarchically from a top-down perspective, spanning the macro, meso, and micro levels; we categorize and review application scenarios of agent systems from a general-to-specific viewpoint; and for evaluation, we systematically examine datasets for value alignment assessment and the corresponding alignment methods. Additionally, we discuss value coordination among multiple agents within agent systems. Finally, we propose several promising research directions in this field.
@article{zeng2025_2506.09656,
  title={Application-Driven Value Alignment in Agentic AI Systems: Survey and Perspectives},
  author={Wei Zeng and Hengshu Zhu and Chuan Qin and Han Wu and Yihang Cheng and Sirui Zhang and Xiaowei Jin and Yinuo Shen and Zhenxing Wang and Feimin Zhong and Hui Xiong},
  journal={arXiv preprint arXiv:2506.09656},
  year={2025}
}