ResearchTrend.AI


OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
Computer Vision and Pattern Recognition (CVPR), 2024
29 November 2023
Qidong Huang, Xiaoyi Dong, Pan Zhang, Sijin Yu, Conghui He, Yuan Liu, Dahua Lin, Weiming Zhang, Nenghai Yu
MLLM
arXiv: 2311.17911 (abs) · PDF · HTML · HuggingFace (2 upvotes) · GitHub (341★)

Papers citing "OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation"

Showing 50 of 238 citing papers.
Mitigating Cross-Image Information Leakage in LVLMs for Multi-Image Tasks
Yeji Park, Minyoung Lee, Sanghyuk Chun, Junsuk Choe
19 Aug 2025
MRFD: Multi-Region Fusion Decoding with Self-Consistency for Mitigating Hallucinations in LVLMs
Haonan Ge, Yiwei Wang, Ming-Hsuan Yang, Yujun Cai
14 Aug 2025
Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
Yuchu Jiang, Jian Zhao, Yuchen Yuan, Tianle Zhang, Yao Huang, ..., Ya Zhang, Shuicheng Yan, Chi Zhang, Z. He, Xuelong Li
SILM
12 Aug 2025
Analyzing and Mitigating Object Hallucination: A Training Bias Perspective
Yifan Li, K. Zhou, Wayne Xin Zhao, Lei Fang, Ji-Rong Wen
06 Aug 2025
SAVER: Mitigating Hallucinations in Large Vision-Language Models via Style-Aware Visual Early Revision
Zhaoxu Li, Chenqi Kong, Yi Yu, Qiangqiang Wu, Xinghao Jiang, Ngai-Man Cheung, Bihan Wen, Alex Chichung Kot, Xudong Jiang
VLM
05 Aug 2025
IKOD: Mitigating Visual Attention Degradation in Large Vision-Language Models
Jiabing Yang, Chenhang Cui, Yiyang Zhou, Yixiang Chen, Peng Xia, Ying Wei, Tao Yu, Yan Huang, Liang Wang
MLLM, VLM
05 Aug 2025
A Survey on AgentOps: Categorization, Challenges, and Future Directions
Zexin Wang, Jingjing Li, Quan Zhou, Haotian Si, Yuanhao Liu, Jianhui Li, Gaogang Xie, Fei Sun, Dan Pei, Changhua Pei
LLMAG, AI4TS
04 Aug 2025
Modality Bias in LVLMs: Analyzing and Mitigating Object Hallucination via Attention Lens
Haohan Zheng, Zhenguo Zhang
04 Aug 2025
MAP: Mitigating Hallucinations in Large Vision-Language Models with Map-Level Attention Processing
Chenxi Li, Yichen Guo, Benfang Qian, Jinhao You, Kai Tang, Yaosong Du, Zonghao Zhang, Xiande Huang
MLLM
03 Aug 2025
MIHBench: Benchmarking and Mitigating Multi-Image Hallucinations in Multimodal Large Language Models
Jiale Li, Mingrui Wu, Zixiang Jin, Hao Chen, Jinfa Huang, Xiaoshuai Sun, Liujuan Cao, Rongrong Ji
VLM
01 Aug 2025
TARS: MinMax Token-Adaptive Preference Strategy for MLLM Hallucination Reduction
Kejia Zhang, Keda Tao, Zhiming Luo, Chang Liu, Jiasheng Tang, Huan Wang
LRM
29 Jul 2025
Self-Improvement for Audio Large Language Model using Unlabeled Speech
S. Wang, Xinyuan Chen, Yao Xu
AuLLM
27 Jul 2025
LISA: A Layer-wise Integration and Suppression Approach for Hallucination Mitigation in Multimodal Large Language Models
Zhihui Guo, Xin Man, Hui Xu, Jie Shao, Zhiguo Jiang, X. Zhang, Heng Tao Shen
MLLM
25 Jul 2025
A Survey of Multimodal Hallucination Evaluation and Detection
Zhiyuan Chen, Yuecong Min, Jie M. Zhang, Bei Yan, Jiahao Wang, X. Wang, Shiguang Shan
HILM
25 Jul 2025
LOTUS: A Leaderboard for Detailed Image Captioning from Quality to Societal Bias and User Preferences
Yusuke Hirota, Boyi Li, Ryo Hachiuma, Yueh-Hua Wu, Boris Ivanovic, Yuta Nakashima, Marco Pavone, Yejin Choi, Yu-Chun Wang, Chao-Han Huck Yang
VLM
25 Jul 2025
Extracting Visual Facts from Intermediate Layers for Mitigating Hallucinations in Multimodal Large Language Models
Haoran Zhou, Zihan Zhang, Hao Chen
21 Jul 2025
Mitigating Object Hallucinations via Sentence-Level Early Intervention
Shangpin Peng, Senqiao Yang, Li Jiang, Zhuotao Tian
MLLM
16 Jul 2025
MCA-LLaVA: Manhattan Causal Attention for Reducing Hallucination in Large Vision-Language Models
Qiyan Zhao, Xiaofeng Zhang, Yiheng Li, Yun Xing, Xiaosong Yuan, Feilong Tang, Sinan Fan, Xuhang Chen, Xuyao Zhang, Dahan Wang
12 Jul 2025
ReLoop: "Seeing Twice and Thinking Backwards" via Closed-loop Training to Mitigate Hallucinations in Multimodal understanding
JianJiang Yang, Yanshu Li, Ziyan Huang
VLM, LRM
07 Jul 2025
Identify, Isolate, and Purge: Mitigating Hallucinations in LVLMs via Self-Evolving Distillation
Wenhao Li, Xiu Su, Jingyi Wu, Feng Yang, Yang-Yang Liu, Yi-Ling Chen, Shan You, Chang Xu
VLM
07 Jul 2025
INTER: Mitigating Hallucination in Large Vision-Language Models by Interaction Guidance Sampling
Xin Dong, S. Dong, Jin Wang, Jing Huang, Li Zhou, Zenghui Sun, Lihua Jing, Jingsong Lan, Xiaoyong Zhu, Bo Zheng
MLLM
07 Jul 2025
ASCD: Attention-Steerable Contrastive Decoding for Reducing Hallucination in MLLM
Yujun Wang, Aniri, Jinhe Bi, Soeren Pirk, Yunpu Ma
MLLM
17 Jun 2025
MALM: A Multi-Information Adapter for Large Language Models to Mitigate Hallucination
Ao Jia, Haiming Wu, Guohui Yao, D. Song, Songkun Ji, Yazhou Zhang
14 Jun 2025
Not All Tokens and Heads Are Equally Important: Dual-Level Attention Intervention for Hallucination Mitigation
Lexiang Tang, Xianwei Zhuang, Bang Yang, Zhiyuan Hu, Hongxiang Li, Lu Ma, Jinghan Ru, Yuexian Zou
14 Jun 2025
Revisit What You See: Disclose Language Prior in Vision Tokens for LVLM Decoding
Beomsik Cho, Jaehyung Kim
11 Jun 2025
SECOND: Mitigating Perceptual Hallucination in Vision-Language Models via Selective and Contrastive Decoding
Woohyeon Park, Woojin Kim, Jaeik Kim, Jaeyoung Do
VLM
10 Jun 2025
Mitigating Behavioral Hallucination in Multimodal Large Language Models for Sequential Images
Liangliang You, Junchi Yao, Shu Yang, Guimin Hu, Lijie Hu, Di Wang
MLLM
08 Jun 2025
Mitigating Hallucinations in Large Vision-Language Models via Entity-Centric Multimodal Preference Optimization
Jiulong Wu, Zhengliang Shi, Shuaiqiang Wang, J. Huang, Dawei Yin, Lingyong Yan, Min Cao, Min Zhang
MLLM
04 Jun 2025
CLAIM: Mitigating Multilingual Object Hallucination in Large Vision-Language Models with Cross-Lingual Attention Intervention
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Zekai Ye, Qiming Li, Xiaocheng Feng, L. Qin, Yichong Huang, ..., Zhirui Zhang, Yunfei Lu, Duyu Tang, Dandan Tu, Bing Qin
VLM, LRM
03 Jun 2025
BIMA: Bijective Maximum Likelihood Learning Approach to Hallucination Prediction and Mitigation in Large Vision-Language Models
Huu-Thien Tran, Thanh-Dat Truong, Khoa Luu
MLLM
30 May 2025
MMBoundary: Advancing MLLM Knowledge Boundary Awareness through Reasoning Step Confidence Calibration
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Zhitao He, Sandeep Polisetty, Zhiyuan Fan, Yuchen Huang, Shujin Wu, Yi R.
LRM
29 May 2025
Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information
Xu Chu, Xinrong Chen, Guanyu Wang, Zhijie Tan, Kui Huang, Wenyu Lv, Tong Mo, Weiping Li
LRM, VLM
29 May 2025
Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration
Mehrdad Fazli, Bowen Wei, Ahmet Sari, Ziwei Zhu
VLM
27 May 2025
Interpreting Social Bias in LVLMs via Information Flow Analysis and Multi-Round Dialogue Evaluation
Zhengyang Ji, Yifan Jia, Shang Gao, Yutao Yue
27 May 2025
Causal-LLaVA: Causal Disentanglement for Mitigating Hallucination in Multimodal Large Language Models
Xinmiao Hu, C. Wang, Ruihe An, ChenYu Shao, Xiaojun Ye, Sheng Zhou, Liangcheng Li
MLLM, LRM
26 May 2025
Enhancing Visual Reliance in Text Generation: A Bayesian Perspective on Mitigating Hallucination in Large Vision-Language Models
Nanxing Hu, Xiaoyue Duan, Jinchao Zhang, Guoliang Kang
MLLM
26 May 2025
Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs
Hao Fang, Changle Zhou, Jiawei Kong, Kuofeng Gao, Bin Chen, Tao Liang
MLLM
26 May 2025
Reasoning Segmentation for Images and Videos: A Survey
Yiqing Shen, Chenjia Li, Fei Xiong, Jeong-O Jeong, Tianpeng Wang, Michael Latman, Mathias Unberath
VOS
24 May 2025
Do You Keep an Eye on What I Ask? Mitigating Multimodal Hallucination via Attention-Guided Ensemble Decoding
International Conference on Learning Representations (ICLR), 2025
Yeongjae Cho, Keonwoo Kim, Taebaek Hwang, Sungzoon Cho
23 May 2025
Mitigating Hallucinations in Vision-Language Models through Image-Guided Head Suppression
Sreetama Sarkar, Yue Che, Alex Gavin, Peter A. Beerel, Souvik Kundu
MLLM, VLM
22 May 2025
Seeing Far and Clearly: Mitigating Hallucinations in MLLMs with Attention Causal Decoding
Computer Vision and Pattern Recognition (CVPR), 2025
Feilong Tang, Chengzhi Liu, Zhongxing Xu, Ming Hu, Zelin Peng, ..., Minquan Lin, Yifan Peng, Xuelian Cheng, Imran Razzak, Zongyuan Ge
22 May 2025
The Hallucination Tax of Reinforcement Finetuning
Linxin Song, Taiwei Shi, Jieyu Zhao
HILM, LRM
20 May 2025
GMSA: Enhancing Context Compression via Group Merging and Layer Semantic Alignment
Jiwei Tang, Zhicheng Zhang, Shunlong Wu, Jingheng Ye, Lichen Bai, ..., Tingwei Lu, Jiaqi Chen, Lin Hai, Hai-Tao Zheng, Hong-Gee Kim
18 May 2025
Emotion Knowledge Enhancement for Vision Large Language Models: A Self-Verification Approach for High-Quality Emotion Instruction Data Generation
Feifan Wang, Tengfei Song, Minggui He, Yan Yu, Zhanglin Wu, Hao Yang, Wenming Zheng, Osamu Yoshie
14 May 2025
Bias and Generalizability of Foundation Models across Datasets in Breast Mammography
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2025
Elodie Germani, Selin Türk Ilayda, Zeineddine Fatima, Mourad Charbel, Shadi Albarqouni
AI4CE
14 May 2025
Mapping User Trust in Vision Language Models: Research Landscape, Challenges, and Prospects
Agnese Chiatti, Sara Bernardini, Lara Shibelski Godoy Piccolo, Viola Schiaffonati, Matteo Matteucci
08 May 2025
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models
Liqiang Jing, Guiming Hardy Chen, Ehsan Aghazadeh, Xin Eric Wang, Xinya Du
04 May 2025
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Sangmin Woo, Kang Zhou, Yun Zhou, Shuai Wang, Sheng Guan, Haibo Ding, Lin Lee Cheong
VPVLM
30 Apr 2025
Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs
Dung Nguyen, Minh Khoi Ho, Huy Ta, T. Nguyen, Qi Chen, ..., Zhibin Liao, Minh-Son To, Johan Verjans, Phi Le Nguyen, Vu Minh Hieu Phan
30 Apr 2025
Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception
Computer Vision and Pattern Recognition (CVPR), 2025
Yuanchen Wu, Lu Zhang, Hang Yao, Junlong Du, Ke Yan, Shouhong Ding, Yunsheng Wu, Xuzhao Li
MLLM
29 Apr 2025