Can Hallucination Correction Improve Video-Language Alignment?

20 February 2025
Lingjun Zhao
Mingyang Xie
Paola Cascante-Bonilla
Hal Daumé III
Kwonjoon Lee
Abstract

Large Vision-Language Models often generate hallucinated content that is not grounded in their visual inputs. While prior work focuses on mitigating hallucinations, we instead explore leveraging hallucination correction as a training objective to improve video-language alignment. We introduce HACA, a self-training framework that learns to correct hallucinations in descriptions that do not align with the video content. By identifying and correcting inconsistencies, HACA enhances the model's ability to align video and textual representations for spatio-temporal reasoning. Our experimental results show consistent gains in video-caption binding and text-to-video retrieval tasks, demonstrating that hallucination-correction-inspired tasks serve as an effective strategy for improving vision-language alignment.
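The correction objective the abstract describes lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of what a hallucination-correction training step could look like, assuming a generic encoder-decoder video-language model trained with a teacher-forced sequence loss; all names here (the model interface, video_feats, hallucinated_ids, etc.) are illustrative assumptions, not the paper's actual implementation.

import torch

def hallucination_correction_loss(model, video_feats, hallucinated_ids, corrected_ids):
    # Hypothetical interface: the model encodes the video, reads a caption
    # containing hallucinated content, and is trained (teacher-forced
    # cross-entropy) to emit the grounded, corrected caption instead.
    outputs = model(
        video_inputs=video_feats,      # precomputed video features
        input_ids=hallucinated_ids,    # tokenized caption with hallucinated spans
        labels=corrected_ids,          # tokenized corrected (grounded) caption
    )
    return outputs.loss

def training_step(model, optimizer, batch):
    # Self-training sketch: misaligned captions (e.g., the model's own
    # generations paired with the wrong video) supply the hallucinated
    # inputs the model then learns to correct.
    loss = hallucination_correction_loss(
        model,
        batch["video_feats"],
        batch["hallucinated_ids"],
        batch["corrected_ids"],
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The design intuition, per the abstract, is that correction (unlike plain captioning) forces the model to localize exactly where the text and the video disagree, which is what sharpens the alignment used in binding and retrieval.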

@article{zhao2025_2502.15079,
  title={Can Hallucination Correction Improve Video-Language Alignment?},
  author={Lingjun Zhao and Mingyang Xie and Paola Cascante-Bonilla and Hal Daumé III and Kwonjoon Lee},
  journal={arXiv preprint arXiv:2502.15079},
  year={2025}
}