Beyond Static Perception: Integrating Temporal Context into VLMs for Cloth Folding

Manipulating clothes is challenging due to their complex dynamics, high deformability, and frequent self-occlusions. Garments exhibit a nearly infinite number of configurations, making explicit state representations difficult to define. In this paper, we analyze BiFold, a model that predicts language-conditioned pick-and-place actions from visual observations and implicitly encodes garment state through end-to-end learning. To handle scenarios such as crumpled garments or recovery from failed manipulations, BiFold leverages temporal context to improve state estimation. We examine the model's internal representations and present evidence that its fine-tuning and temporal context enable effective alignment between text and image regions, as well as temporal consistency.
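To make the setup concrete, below is a minimal sketch of the kind of interface the abstract describes: a language-conditioned pick-and-place predictor that consumes a short history of frames rather than a single static observation. This is not BiFold's actual architecture; every module choice, name, and dimension here (the class `TemporalPickPlaceModel`, the GRU-based temporal fusion, the coordinate heads) is an illustrative assumption.

```python
# A hedged sketch (NOT BiFold's implementation) of a language-conditioned
# pick-and-place predictor with temporal context: per-frame visual features
# are conditioned on a text embedding, aggregated over time, and decoded
# into normalized pick/place pixel coordinates.
import torch
import torch.nn as nn


class TemporalPickPlaceModel(nn.Module):
    def __init__(self, vocab_size: int = 1000, dim: int = 128):
        super().__init__()
        # Per-frame visual encoder (stand-in for a pretrained VLM image tower).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=4, padding=2), nn.ReLU(),
            nn.Conv2d(32, dim, 5, stride=4, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Bag-of-tokens text encoder (stand-in for a VLM text tower).
        self.text = nn.EmbeddingBag(vocab_size, dim)
        # Temporal fusion over the observation history.
        self.temporal = nn.GRU(dim, dim, batch_first=True)
        # Separate heads for normalized pick and place image coordinates.
        self.pick_head = nn.Linear(dim, 2)
        self.place_head = nn.Linear(dim, 2)

    def forward(self, frames: torch.Tensor, tokens: torch.Tensor):
        # frames: (B, T, 3, H, W) observation history; tokens: (B, L) instruction.
        b, t = frames.shape[:2]
        feats = self.visual(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, dim)
        lang = self.text(tokens)                                  # (B, dim)
        fused = feats + lang.unsqueeze(1)   # condition each frame on language
        _, h = self.temporal(fused)         # summarize the history
        ctx = h[-1]                         # (B, dim)
        # Coordinates in [0, 1] relative to image size.
        return torch.sigmoid(self.pick_head(ctx)), torch.sigmoid(self.place_head(ctx))


if __name__ == "__main__":
    model = TemporalPickPlaceModel()
    frames = torch.randn(2, 4, 3, 128, 128)   # two sequences of four frames
    tokens = torch.randint(0, 1000, (2, 6))   # tokenized "fold the sleeve ..."
    pick, place = model(frames, tokens)
    print(pick.shape, place.shape)            # torch.Size([2, 2]) twice
```

The recurrent fusion is one simple way to encode the temporal context the abstract refers to: the action prediction depends on the whole recent observation history, so a crumpled intermediate state or a failed fold can be disambiguated by earlier frames instead of a single ambiguous view.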
@article{barbany2025_2505.07600,
  title   = {Beyond Static Perception: Integrating Temporal Context into VLMs for Cloth Folding},
  author  = {Oriol Barbany and Adrià Colomé and Carme Torras},
  journal = {arXiv preprint arXiv:2505.07600},
  year    = {2025}
}