Hearing and Seeing Through CLIP: A Framework for Self-Supervised Sound Source Localization

8 May 2025
Sooyoung Park, Arda Senocak, Joon Son Chung
Abstract

Large-scale vision-language models demonstrate strong multimodal alignment and generalization across diverse tasks. Among them, CLIP stands out as one of the most successful approaches. In this work, we extend the application of CLIP to sound source localization, proposing a self-supervised method that operates without explicit text input. We introduce a framework that maps audio into tokens compatible with CLIP's text encoder, producing audio-driven embeddings. These embeddings are used to generate sounding region masks, from which visual features are extracted and aligned with the audio embeddings through a contrastive audio-visual correspondence objective. Our findings show that the alignment knowledge of a pre-trained multimodal foundation model enables our method to generate more complete and compact localization maps for sounding objects. We further propose an LLM-guided extension that distills object-aware audio-visual scene understanding into the model during training to enhance alignment. Extensive experiments across five diverse tasks demonstrate that our method, in all variants, outperforms state-of-the-art approaches and achieves strong generalization in zero-shot settings.
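The pipeline the abstract describes (audio mapped to CLIP-compatible tokens, an audio-driven embedding, a soft sounding-region mask over image patches, and a contrastive audio-visual objective) can be outlined in a minimal PyTorch sketch. All module names, dimensions, and pooling choices below are illustrative assumptions, not the authors' implementation; the frozen CLIP text and image encoders are replaced by simple stand-ins.

# Minimal sketch of the audio-visual alignment described in the abstract.
# Names and shapes are assumptions; frozen CLIP encoders are stand-ins here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioTokenMapper(nn.Module):
    """Maps an audio clip feature into pseudo text tokens sized for a CLIP-style encoder."""
    def __init__(self, audio_dim=512, clip_width=512, num_tokens=8):
        super().__init__()
        self.proj = nn.Linear(audio_dim, num_tokens * clip_width)
        self.num_tokens, self.clip_width = num_tokens, clip_width

    def forward(self, audio_feat):                       # (B, audio_dim)
        tokens = self.proj(audio_feat)                   # (B, num_tokens * clip_width)
        return tokens.view(-1, self.num_tokens, self.clip_width)


def sounding_region_mask(audio_emb, patch_feats, temperature=0.07):
    """Cosine similarity between the audio-driven embedding and image patch features,
    turned into a soft localization mask over patches."""
    sim = F.cosine_similarity(patch_feats, audio_emb.unsqueeze(1), dim=-1)  # (B, P)
    return torch.sigmoid(sim / temperature)              # soft mask in (0, 1)


def contrastive_av_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE over the batch: matching audio/visual pairs are positives."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature                     # (B, B)
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B, P, D = 4, 49, 512                                 # batch, patches (7x7), embedding width
    audio_feat = torch.randn(B, D)                       # stand-in audio backbone output
    patch_feats = torch.randn(B, P, D)                   # stand-in CLIP image patch features

    mapper = AudioTokenMapper()
    audio_tokens = mapper(audio_feat)                    # pseudo tokens for the text encoder
    audio_emb = audio_tokens.mean(dim=1)                 # stand-in for frozen text-encoder pooling

    mask = sounding_region_mask(audio_emb, patch_feats)  # (B, P) soft sounding-region mask
    visual_emb = (patch_feats * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True).clamp(min=1e-6)

    loss = contrastive_av_loss(audio_emb, visual_emb)
    print(f"mask shape: {tuple(mask.shape)}, loss: {loss.item():.4f}")

In this sketch the masked mean-pooling of patch features plays the role of "visual features extracted from the sounding region"; the paper's actual mask generation and LLM-guided extension are not reproduced here.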

@article{park2025_2505.05343,
  title={Hearing and Seeing Through CLIP: A Framework for Self-Supervised Sound Source Localization},
  author={Sooyoung Park and Arda Senocak and Joon Son Chung},
  journal={arXiv preprint arXiv:2505.05343},
  year={2025}
}