LangDA: Building Context-Awareness via Language for Domain Adaptive Semantic Segmentation

17 March 2025
Chang Liu
Bavesh Balaji
Saad Hossain
C Thomas
Kwei-Herng Lai
Raviteja Vemulapalli
Alexander Wong
Sirisha Rambhatla
Abstract

Unsupervised domain adaptation for semantic segmentation (DASS) aims to transfer knowledge from a label-rich source domain to a target domain with no labels. Two key approaches in DASS are (1) vision-only approaches using masking or multi-resolution crops, and (2) language-based approaches that use generic class-wise prompts informed by the target domain (e.g., "a {snowy} photo of a {class}"). However, the former is susceptible to noisy pseudo-labels that are biased toward the source domain. The latter does not fully capture the intricate spatial relationships of objects, which are key for dense prediction tasks. To this end, we propose LangDA. LangDA addresses these challenges by, first, learning contextual relationships between objects via VLM-generated scene descriptions (e.g., "a pedestrian is on the sidewalk, and the street is lined with buildings."). Second, LangDA aligns the entire image's features with the text representation of this context-aware scene caption and learns generalized representations via text. With this, LangDA sets a new state of the art across three DASS benchmarks, outperforming existing methods by 2.6%, 1.4%, and 3.9%.
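The released implementation is not reproduced here. As an illustration of the image-caption alignment idea described in the abstract, the following minimal PyTorch sketch pools backbone features over the whole image, projects them into the text-embedding space, and penalizes the cosine distance to the embedding of a VLM-generated scene caption. The function name caption_alignment_loss, the linear projection, and all dimensions are hypothetical placeholders rather than the authors' code.

# Minimal sketch (assumptions, not the paper's released code): align pooled
# whole-image features with the text embedding of a scene caption.
import torch
import torch.nn.functional as F

def caption_alignment_loss(image_feats: torch.Tensor,
                           caption_emb: torch.Tensor,
                           proj: torch.nn.Linear) -> torch.Tensor:
    """image_feats: (B, C, H, W) segmentation-backbone features for the whole image.
    caption_emb:  (B, D) text-encoder embedding of the scene caption (kept frozen).
    proj:         learned linear projection from C to D so the two spaces match."""
    pooled = image_feats.mean(dim=(2, 3))        # global average pool -> (B, C)
    img_emb = F.normalize(proj(pooled), dim=-1)  # project and L2-normalize -> (B, D)
    txt_emb = F.normalize(caption_emb, dim=-1)
    # Cosine-distance loss: pull each image embedding toward its own caption embedding.
    return (1.0 - (img_emb * txt_emb).sum(dim=-1)).mean()

# Usage with random tensors standing in for real encoder outputs (hypothetical sizes).
B, C, H, W, D = 2, 256, 32, 32, 512
proj = torch.nn.Linear(C, D)
loss = caption_alignment_loss(torch.randn(B, C, H, W), torch.randn(B, D), proj)

A global pooling step is used above only to keep the example compact; the key point it illustrates is that the alignment target is a context-aware caption of the whole scene rather than a generic per-class prompt.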

@article{liu2025_2503.12780,
  title={LangDA: Building Context-Awareness via Language for Domain Adaptive Semantic Segmentation},
  author={Chang Liu and Bavesh Balaji and Saad Hossain and C Thomas and Kwei-Herng Lai and Raviteja Vemulapalli and Alexander Wong and Sirisha Rambhatla},
  journal={arXiv preprint arXiv:2503.12780},
  year={2025}
}