
Adaptive Social Learning via Mode Policy Optimization for Language Agents

Main: 9 pages · 8 figures · 32 tables · Bibliography: 7 pages · Appendix: 29 pages
Abstract

Effective social intelligence simulation requires language agents to dynamically adjust their reasoning depth, a capability notably absent in current studies. Existing methods either lack explicit reasoning or apply lengthy Chain-of-Thought reasoning uniformly across all scenarios, resulting in excessive token usage and inflexible social behavior in tasks such as negotiation or collaboration. To address this, we propose an Adaptive Social Learning (ASL) framework that improves the adaptive reasoning ability of language agents in dynamic social interactions. We first identify hierarchical reasoning modes in this context, ranging from intuitive response to deep deliberation, based on cognitive control theory. We then develop the Adaptive Mode Policy Optimization (AMPO) algorithm to learn context-aware mode adaptation and reasoning. Our framework advances existing research in three key aspects: (1) multi-granular reasoning mode design, (2) context-aware mode switching in rich social interactions, and (3) token-efficient reasoning with depth adaptation. Extensive experiments on a benchmark social intelligence environment show that ASL achieves 15.6% higher task performance than GPT-4o. Notably, AMPO outperforms GRPO by 7.0% with 32.8% shorter thinking chains, demonstrating the advantage of AMPO's learned adaptive reasoning over GRPO's fixed-depth solution.
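To make the idea of context-aware mode switching concrete, the following toy sketch shows one way a learned policy could score hierarchical reasoning modes and spend a token budget proportional to the chosen depth. This is an illustrative assumption, not the paper's AMPO implementation; all names (`MODES`, `BUDGETS`, `mode_policy`, `select_mode`, the linear scoring weights) are hypothetical.

```python
# Illustrative sketch only: a softmax policy over hypothetical reasoning
# modes, conditioned on a scalar context-difficulty signal. Deeper modes
# get larger thinking-token budgets, mirroring depth-adaptive reasoning.
import math
import random

MODES = ["intuitive", "shallow_think", "deep_deliberation"]  # assumed hierarchy
BUDGETS = {"intuitive": 16, "shallow_think": 128, "deep_deliberation": 512}

def mode_policy(difficulty, weights):
    """Return a probability over MODES via a linear score + softmax."""
    scores = [w * difficulty + b for (w, b) in weights]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_mode(difficulty, weights, rng):
    """Sample a mode, returning it with its thinking-token budget."""
    probs = mode_policy(difficulty, weights)
    mode = rng.choices(MODES, weights=probs, k=1)[0]
    return mode, BUDGETS[mode]

# Toy weights (slope, bias) per mode: deeper modes score higher as
# difficulty grows, so hard contexts shift mass toward deliberation.
weights = [(-2.0, 1.0), (0.0, 0.5), (2.0, -1.0)]
rng = random.Random(0)
easy_mode, easy_budget = select_mode(0.1, weights, rng)
hard_mode, hard_budget = select_mode(0.9, weights, rng)
```

In a trained system the scoring weights would be optimized by the reinforcement-learning objective rather than hand-set; the sketch only illustrates how a single policy can trade token cost against reasoning depth per context.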
