
LCLA: Language-Conditioned Latent Alignment for Vision-Language Navigation

Nitesh Subedi
Adam Haroon
Samuel Tetteh
Prajwal Koirala
Cody Fleming
Soumik Sarkar
Main: 8 pages · 5 figures · 4 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

We propose LCLA (Language-Conditioned Latent Alignment), a framework for vision-language navigation that learns modular perception-action interfaces by aligning sensory observations to a latent representation of an expert policy. The expert is first trained with privileged state information, inducing a latent space sufficient for control, after which its latent interface and action head are frozen. A lightweight adapter is then trained to map raw visual-language observations, via a frozen vision-language model, into the expert's latent space, reducing the problem of visuomotor learning to supervised latent alignment rather than end-to-end policy optimization. This decoupling enforces a stable contract between perception and control, enabling expert behavior to be reused across sensing modalities and environmental variations. We instantiate LCLA and evaluate it on a vision-language indoor navigation task, where aligned latent spaces yield strong in-distribution performance and robust zero-shot generalization to unseen environments, lighting conditions, and viewpoints while remaining lightweight at inference time.
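The core training recipe described above — freeze an expert's latent interface, then fit a lightweight adapter that regresses frozen vision-language features onto the expert's latents — can be sketched in a few lines. This is a minimal illustrative stand-in, not the authors' implementation: the encoders, dimensions, and the least-squares adapter fit are all assumptions made for the sketch.

```python
# Hypothetical sketch of LCLA-style latent alignment (illustrative only).
# A frozen "expert" encoder maps privileged state to a latent z; a linear
# adapter is fit by supervised regression so that (frozen) VLM features of
# the same scenes land in the expert's latent space.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, FEAT_DIM, LATENT_DIM = 6, 32, 8

# Frozen expert latent encoder (stands in for the privileged-state policy trunk).
W_expert = rng.normal(size=(STATE_DIM, LATENT_DIM))
def expert_latent(s):
    return np.tanh(s @ W_expert)

# Frozen vision-language feature extractor (stands in for the VLM backbone).
W_vlm = rng.normal(size=(STATE_DIM, FEAT_DIM))
def vlm_features(s):
    return np.tanh(s @ W_vlm)

# Collect paired data: same underlying scenes, two views of them.
states = rng.normal(size=(512, STATE_DIM))
feats = vlm_features(states)      # adapter input (sensor side)
targets = expert_latent(states)   # alignment target (expert side)

# Only the adapter is trainable; here a least-squares fit stands in for
# the supervised alignment step (no policy gradients are needed).
A, *_ = np.linalg.lstsq(feats, targets, rcond=None)

mse = float(np.mean((feats @ A - targets) ** 2))
print(f"latent alignment MSE: {mse:.4f}")
```

At inference, the frozen action head would consume `feats @ A` in place of the expert's own latent, which is the "stable contract" between perception and control that the abstract refers to.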
