IPCGRL: Language-Instructed Reinforcement Learning for Procedural Level Generation

16 March 2025
In-Chang Baek
Sung-Hyun Kim
Seo-Young Lee
Dong-Hyeon Kim
Kyung-Joong Kim
Abstract

Recent research has highlighted the significance of natural language in enhancing the controllability of generative models. While various efforts have been made to leverage natural language for content generation, research on deep reinforcement learning (DRL) agents utilizing text-based instructions for procedural content generation remains limited. In this paper, we propose IPCGRL, an instruction-based procedural content generation method via reinforcement learning, which incorporates a sentence embedding model. IPCGRL fine-tunes task-specific embedding representations to effectively compress game-level conditions. We evaluate IPCGRL in a two-dimensional level generation task and compare its performance with a general-purpose embedding method. The results indicate that IPCGRL achieves up to a 21.4% improvement in controllability and a 17.2% improvement in generalizability for unseen instructions. Furthermore, the proposed method extends the modality of conditional input, enabling a more flexible and expressive interaction framework for procedural content generation.
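The abstract describes conditioning a DRL level-generation agent on a compressed sentence embedding of a natural-language instruction. The toy sketch below illustrates that conditioning pattern only: a stand-in embedder maps an instruction to a fixed-size vector, which is concatenated with the agent's observation before the policy head. All names, dimensions, and the random bag-of-words projection are hypothetical; IPCGRL itself fine-tunes a learned sentence-embedding model, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a fine-tuned sentence embedder: a fixed
# random bag-of-words projection, used purely for illustration.
VOCAB = {w: i for i, w in enumerate(
    ["make", "the", "level", "with", "many", "few", "walls", "enemies"])}
EMBED_DIM = 8
W_embed = rng.normal(size=(len(VOCAB), EMBED_DIM))

def embed_instruction(text: str) -> np.ndarray:
    """Mean-pool word projections into a fixed-size condition vector."""
    ids = [VOCAB[w] for w in text.lower().split() if w in VOCAB]
    return W_embed[ids].mean(axis=0) if ids else np.zeros(EMBED_DIM)

# Toy conditioned policy: the observation (e.g. a flattened level patch)
# is concatenated with the instruction embedding before a linear head.
OBS_DIM, N_ACTIONS = 16, 4
W_pi = rng.normal(size=(OBS_DIM + EMBED_DIM, N_ACTIONS))

def policy_logits(obs: np.ndarray, instruction: str) -> np.ndarray:
    cond = embed_instruction(instruction)
    return np.concatenate([obs, cond]) @ W_pi

obs = rng.normal(size=OBS_DIM)
logits = policy_logits(obs, "make the level with many walls")
action = int(np.argmax(logits))  # greedy action under the toy policy
```

Changing the instruction changes the condition vector and hence the action distribution, which is the interaction mode the paper evaluates for controllability on seen and unseen instructions.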

@article{baek2025_2503.12358,
  title={IPCGRL: Language-Instructed Reinforcement Learning for Procedural Level Generation},
  author={In-Chang Baek and Sung-Hyun Kim and Seo-Young Lee and Dong-Hyeon Kim and Kyung-Joong Kim},
  journal={arXiv preprint arXiv:2503.12358},
  year={2025}
}