ResearchTrend.AI

Hallucination Augmented Contrastive Learning for Multimodal Large Language Model

Computer Vision and Pattern Recognition (CVPR), 2023
12 December 2023
Chaoya Jiang
Haiyang Xu
Mengfan Dong
Jiaxing Chen
Wei Ye
Mingshi Yan
Qinghao Ye
Ji Zhang
Fei Huang
Shikun Zhang
    VLM
ArXiv (abs) · PDF · HTML · GitHub (95★)

Papers citing "Hallucination Augmented Contrastive Learning for Multimodal Large Language Model"

50 / 71 papers shown
Through the Magnifying Glass: Adaptive Perception Magnification for Hallucination-Free VLM Decoding
  Shunqi Mao, Chaoyi Zhang, Weidong Cai · MLLM · 10 Apr 2026
DashFusion: Dual-stream Alignment with Hierarchical Bottleneck Fusion for Multimodal Sentiment Analysis. IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2025.
  Yuhua Wen, Qifei Li, Yingying Zhou, Yingming Gao, Zhengqi Wen, Jianhua Tao, Ya Li · 05 Dec 2025
Mitigating Object and Action Hallucinations in Multimodal LLMs via Self-Augmented Contrastive Alignment
  Kai-Po Chang, Wei-Yuan Cheng, Chi-Pin Huang, Fu-En Yang, Yu-Jie Wang · 04 Dec 2025
Med-VCD: Mitigating Hallucination for Medical Large Vision Language Models through Visual Contrastive Decoding. Computers in Biology and Medicine (Comput. Biol. Med.), 2025.
  Zahra Mahdavi, Zahra Khodakaramimaghsoud, Hooman Khaloo, Sina Bakhshandeh Taleshani, Erfan Hashemi, Javad Mirzapour Kaleybar, Omid Nejati Manzari · MLLM, VLM · 01 Dec 2025
VeriSciQA: An Auto-Verified Dataset for Scientific Visual Question Answering
  Yuyi Li, Daoyuan Chen, Zhen Wang, Yutong Lu, Yaliang Li · 25 Nov 2025
Intervene-All-Paths: Unified Mitigation of LVLM Hallucinations across Alignment Formats
  Jiaye Qian, Ge Zheng, Yuchen Zhu, Sibei Yang · MLLM · 21 Nov 2025
Insight-A: Attribution-aware for Multimodal Misinformation Detection
  Junjie Wu, Yumeng Fu, Chen Gong, Guohong Fu · 17 Nov 2025
Why LVLMs Are More Prone to Hallucinations in Longer Responses: The Role of Context
  Ge Zheng, Jiaye Qian, Jiajin Tang, Sibei Yang · 23 Oct 2025
Beyond Single Models: Mitigating Multimodal Hallucinations via Adaptive Token Ensemble Decoding
  Jinlin Li, Y. X. R. Wang, Yifei Yuan, Xiao Zhou, Y. Zhang, Xixian Yong, Yefeng Zheng, X. Wu · MLLM · 21 Oct 2025
Reallocating Attention Across Layers to Reduce Multimodal Hallucination
  H. Lu, Bolun Chu, Weiye Fu, Guoshun Nan, Junning Liu, Minghui Pan, Qiankun Li, Yi Yu, Hua Wang, Kun Wang · LRM · 11 Oct 2025
Mitigating Visual Hallucinations via Semantic Curriculum Preference Optimization in MLLMs
  Yuanshuai Li, Yuping Yan, Junfeng Tang, Yunxuan Li, Zeqi Zheng, Yaochu Jin · 29 Sep 2025
Mitigating Hallucination in Multimodal LLMs with Layer Contrastive Decoding
  Bingkui Tong, Jiaer Xia, Kaiyang Zhou · MLLM · 29 Sep 2025
Pay More Attention To Audio: Mitigating Imbalance of Cross-Modal Attention in Large Audio Language Models
  Junyu Wang, Ziyang Ma, Zhengding Luo, Tianrui Wang, Meng Ge, Xiaobao Wang, Longbiao Wang · AuLLM · 23 Sep 2025
Measuring Epistemic Humility in Multimodal Large Language Models
  Bingkui Tong, Jiaer Xia, Sifeng Shang, Kaiyang Zhou · HILM · 11 Sep 2025
Tracing and Mitigating Hallucinations in Multimodal LLMs via Dynamic Attention Localization
  Tiancheng Yang, L. Zhang, J. Lin, Guimin Hu, Haiyan Zhao, Lijie Hu · 09 Sep 2025
Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
  Yuyao Ge, Shenghua Liu, Yiwei Wang, Shansong Liu, Baolong Bi, Xuanshan Zhou, Jiayu Yao, Jiafeng Guo, Xueqi Cheng · 08 Sep 2025
Mitigating Multimodal Hallucinations via Gradient-based Self-Reflection
  Shan Wang, Maying Shen, Nadine Chang, Chuong H. Nguyen, Hongdong Li, J. Álvarez · 03 Sep 2025
MM-SeR: Multimodal Self-Refinement for Lightweight Image Captioning
  Junha Song, Yongsik Jo, So Yeon Min, Quanting Xie, Taehwan Kim, Yonatan Bisk, Jaegul Choo · VLM · 29 Aug 2025
Empowering Multimodal LLMs with External Tools: A Comprehensive Survey
  Wenbin An, Jiahao Nie, Yaqiang Wu, Feng Tian, Shijian Lu, Q. Zheng · MLLM · 14 Aug 2025
WeatherPrompt: Multi-modality Representation Learning for All-Weather Drone Visual Geo-Localization
  Jiahao Wen, Hang Yu, Zhedong Zheng · 13 Aug 2025
Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
  Yuchu Jiang, Jian Zhao, Yuchen Yuan, Tianle Zhang, Yao Huang, ..., Ya Zhang, Shuicheng Yan, Chi Zhang, Z. He, Xuelong Li · SILM · 12 Aug 2025
ChartCap: Mitigating Hallucination of Dense Chart Captioning
  Junyoung Lim, Jaewoo Ahn, Gunhee Kim · 05 Aug 2025
TARS: MinMax Token-Adaptive Preference Strategy for Hallucination Reduction in MLLMs
  Kejia Zhang, Keda Tao, Zhiming Luo, Chang Liu, Jiasheng Tang, Huan Wang · 29 Jul 2025
Extracting Visual Facts from Intermediate Layers for Mitigating Hallucinations in Multimodal Large Language Models
  Haoran Zhou, Zihan Zhang, Hao Chen · 21 Jul 2025
INTER: Mitigating Hallucination in Large Vision-Language Models by Interaction Guidance Sampling
  Xin Dong, S. Dong, Jin Wang, Jing Huang, Li Zhou, Zenghui Sun, Lihua Jing, Jingsong Lan, Xiaoyong Zhu, Bo Zheng · MLLM · 07 Jul 2025
MALM: A Multi-Information Adapter for Large Language Models to Mitigate Hallucination
  Ao Jia, Haiming Wu, Guohui Yao, D. Song, Songkun Ji, Yazhou Zhang · 14 Jun 2025
Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs
  Yaniv Nikankin, Dana Arad, Yossi Gandelsman, Yonatan Belinkov · 10 Jun 2025
Preemptive Hallucination Reduction: An Input-Level Approach for Multimodal Language Model
  Nokimul Hasan Arif, Shadman Rabby, Md Hefzul Hossain Papon, Sabbir Ahmed · MLLM, VLM · 29 May 2025
Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding. Computer Vision and Pattern Recognition (CVPR), 2025.
  Feilong Tang, Chengzhi Liu, Zhongxing Xu, Ming Hu, Zelin Peng, ..., Minquan Lin, Yifan Peng, Xuelian Cheng, Imran Razzak, Zongyuan Ge · 22 May 2025
CAD-Llama: Leveraging Large Language Models for Computer-Aided Design Parametric 3D Model Generation. Computer Vision and Pattern Recognition (CVPR), 2025.
  Jiahao Li, Weijian Ma, Xueyang Li, Yunzhong Lou, G. Zhou, Xiangdong Zhou · 07 May 2025
Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception. Computer Vision and Pattern Recognition (CVPR), 2025.
  Yuanchen Wu, Lu Zhang, Hang Yao, Junlong Du, Ke Yan, Shouhong Ding, Yunsheng Wu, Xuzhao Li · MLLM · 29 Apr 2025
Efficient Contrastive Decoding with Probabilistic Hallucination Detection: Mitigating Hallucinations in Large Vision Language Models
  Laura Fieback, Nishilkumar Balar, Jakob Spiegelberg, Hanno Gottschalk · MLLM, VLM · 16 Apr 2025
Mitigating Object Hallucinations in MLLMs via Multi-Frequency Perturbations
  Shuo Li, Jiajun Sun, Guodong Zheng, Xiaoran Fan, Yujiong Shen, ..., Wenming Tan, Changzhi Sun, Tao Gui, Qi Zhang · AAML, VLM · 19 Mar 2025
Where do Large Vision-Language Models Look at when Answering Questions?
  X. Xing, Chia-Wen Kuo, Li Fuxin, Yulei Niu, Fan Chen, Ming Li, Ying Wu, Longyin Wen, Sijie Zhu · LRM · 18 Mar 2025
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models. Computer Vision and Pattern Recognition (CVPR), 2025.
  Hao Yin, Guangzong Si, Zilei Wang · 17 Mar 2025
Grounded Chain-of-Thought for Multimodal Large Language Models
  Qiong Wu, Xiangcong Yang, Weihao Ye, Chenxin Fang, Baiyang Song, Xiaoshuai Sun, Rongrong Ji · LRM · 17 Mar 2025
Attention Reallocation: Towards Zero-cost and Controllable Hallucination Mitigation of MLLMs
  Chongjun Tu, Peng Ye, Dongzhan Zhou, Wenlong Zhang, Gang Yu, Tao Chen, Wanli Ouyang · 13 Mar 2025
TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction
  Chao Wang, Weiwei Fu, Yang Zhou · MLLM, VLM · 06 Mar 2025
Unlocking a New Rust Programming Experience: Fast and Slow Thinking with LLMs to Conquer Undefined Behaviors. Design Automation Conference (DAC), 2025.
  Renshuang Jiang, Pan Dong, Zhenling Duan, Yu Shi, Xiaoxiang Fang, Yan Ding, Jun Ma, Shuai Zhao, Zhe Jiang · 04 Mar 2025
Octopus: Alleviating Hallucination via Dynamic Contrastive Decoding. Computer Vision and Pattern Recognition (CVPR), 2025.
  Wei Suo, Lijun Zhang, Mengyang Sun, Lin Yuanbo Wu, Peng Wang, Yujiao Shi · MLLM, VLM · 01 Mar 2025
Towards Statistical Factuality Guarantee for Large Vision-Language Models
  Hao Sun, Chao Yan, Nicholas J. Jackson, Wendi Cui, B. Li, Jiaxin Zhang, Sricharan Kumar · 27 Feb 2025
Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink
  Yining Wang, Mi Zhang, Junjie Sun, Chenyue Wang, Min Yang, Hui Xue, Jialing Tao, Ranjie Duan, Qingbin Liu · 28 Jan 2025
Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection. Computer Vision and Pattern Recognition (CVPR), 2024.
  Le Yang, Ziwei Zheng, Boxu Chen, Subrat Kishore Dutta, Chenhao Lin, Chao Shen · VLM · 18 Dec 2024
Who Brings the Frisbee: Probing Hidden Hallucination Factors in Large Vision-Language Model via Causality Analysis. IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024.
  Po-Hsuan Huang, Jeng-Lin Li, Chin-Po Chen, Ming-Ching Chang, Wei-Chao Chen · LRM · 04 Dec 2024
VaLiD: Mitigating the Hallucination of Large Vision Language Models by Visual Layer Fusion Contrastive Decoding
  Yuan Liu, Yifei Gao, Jitao Sang · MLLM · 24 Nov 2024
Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens. Computer Vision and Pattern Recognition (CVPR), 2024.
  Zhangqi Jiang, Junkai Chen, Beier Zhu, Tingjin Luo, Yankun Shen, Xu Yang · 23 Nov 2024
Dissecting Representation Misalignment in Contrastive Learning via Influence Function
  Lijie Hu, Chenyang Ren, Huanyi Xie, Khouloud Saadi, Shu Yang, Jing Zhang, Di Wang · 18 Nov 2024
SymDPO: Boosting In-Context Learning of Large Multimodal Models with Symbol Demonstration Direct Preference Optimization. Computer Vision and Pattern Recognition (CVPR), 2024.
  Hongrui Jia, Chaoya Jiang, Haiyang Xu, Wei Ye, Mengfan Dong, Ming Yan, Ji Zhang, Fei Huang, Shikun Zhang · MLLM · 17 Nov 2024
Mitigating Object Hallucination via Concentric Causal Attention. Neural Information Processing Systems (NeurIPS), 2024.
  Yun Xing, Yiheng Li, Ivan Laptev, Shijian Lu · 21 Oct 2024
A Survey of Hallucination in Large Visual Language Models
  Wei Lan, Wenyi Chen, Qingfeng Chen, Shirui Pan, Huiyu Zhou, Yi-Lun Pan · LRM · 20 Oct 2024