
What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models

arXiv 2403.13513 · 20 March 2024
Junho Kim, Yeonju Kim, Yonghyun Ro
Topics: LRM, MLLM

Papers citing "What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models"

2 / 2 papers shown
CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models
Junho Kim, Hyunjun Kim, Yeonju Kim, Yong Man Ro
MLLM · 04 Jun 2024
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs
Sangmin Woo, Jaehyuk Jang, Donguk Kim, Yubin Choi, Changick Kim
28 May 2024