
Deep Reinforcement Learning via Object-Centric Attention

Abstract

Deep reinforcement learning agents trained on raw pixel inputs often fail to generalize beyond their training environments, relying on spurious correlations and irrelevant background details. Object-centric agents have recently emerged to address this issue, but they require representations tailored to each task's specification; unlike pixel-based deep agents, no single object-centric architecture can be applied to any environment. Inspired by principles of cognitive science and Occam's Razor, we introduce Object-Centric Attention via Masking (OCCAM), which selectively preserves task-relevant entities while filtering out irrelevant visual information. Specifically, OCCAM takes advantage of the object-centric inductive bias. Empirical evaluations on Atari benchmarks demonstrate that OCCAM significantly improves robustness to novel perturbations and reduces sample complexity while matching or exceeding the performance of conventional pixel-based RL. These results suggest that structured abstraction can enhance generalization without requiring explicit symbolic representations or domain-specific object extraction pipelines.
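The paper's exact masking mechanism is not detailed in this abstract; as a minimal illustrative sketch, object-centric masking of pixel observations could look like the following, where the object masks and the choice of task-relevant objects are hypothetical inputs (e.g., produced upstream by an object detector and a relevance criterion):

```python
import numpy as np

def apply_object_mask(frame, masks, relevant_ids):
    """Zero out pixels outside task-relevant object masks.

    frame: (H, W, C) uint8 observation
    masks: (N, H, W) boolean masks, one per detected object (hypothetical input)
    relevant_ids: indices of objects deemed task-relevant (hypothetical input)
    """
    keep = np.zeros(frame.shape[:2], dtype=bool)
    for i in relevant_ids:
        keep |= masks[i]
    # Broadcast the 2D keep-mask over the channel axis.
    return frame * keep[..., None]

# Toy example: a 4x4 white frame with two "objects"; keep only object 0.
frame = np.full((4, 4, 3), 255, dtype=np.uint8)
masks = np.zeros((2, 4, 4), dtype=bool)
masks[0, :2, :2] = True   # object 0 occupies the top-left 2x2 block
masks[1, 2:, 2:] = True   # object 1 occupies the bottom-right 2x2 block
out = apply_object_mask(frame, masks, relevant_ids=[0])
```

The masked frame can then be fed to a standard pixel-based policy network, so the only change to a conventional RL pipeline is the preprocessing step.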

@article{blüml2025_2504.03024,
  title={Deep Reinforcement Learning via Object-Centric Attention},
  author={Jannis Blüml and Cedric Derstroff and Bjarne Gregori and Elisabeth Dillies and Quentin Delfosse and Kristian Kersting},
  journal={arXiv preprint arXiv:2504.03024},
  year={2025}
}