Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models

3 December 2023 · arXiv:2312.02219
Andrés Villa, Juan Carlos León Alcázar, Alvaro Soto, Bernard Ghanem
MLLM, VLM

Papers citing "Behind the Magic, MERLIM: Multi-modal Evaluation Benchmark for Large Image-Language Models"

13 / 13 papers shown
Mapping User Trust in Vision Language Models: Research Landscape, Challenges, and Prospects
Agnese Chiatti, Sara Bernardini, Lara Shibelski Godoy Piccolo, Viola Schiaffonati, Matteo Matteucci
08 May 2025
TruthPrInt: Mitigating LVLM Object Hallucination Via Latent Truthful-Guided Pre-Intervention
Jinhao Duan, Fei Kong, Hao-Ran Cheng, James Diffenderfer, B. Kailkhura, Lichao Sun, Xiaofeng Zhu, Xiaoshuang Shi, Kaidi Xu
13 Mar 2025
A Survey of Hallucination in Large Visual Language Models
Wei Lan, Wenyi Chen, Qingfeng Chen, Shirui Pan, Huiyu Zhou, Yi-Lun Pan
LRM
20 Oct 2024
Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models
Mingrui Wu, Jiayi Ji, Oucheng Huang, Jiale Li, Yuhang Wu, Xiaoshuai Sun, Rongrong Ji
24 Jun 2024
Explaining Multi-modal Large Language Models by Analyzing their Vision Perception
Loris Giulivi, Giacomo Boracchi
23 May 2024
Hallucination of Multimodal Large Language Models: A Survey
Zechen Bai, Pichao Wang, Tianjun Xiao, Tong He, Zongbo Han, Zheng Zhang, Mike Zheng Shou
VLM, LRM
29 Apr 2024
VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models
Haoyi Qiu, Wenbo Hu, Zi-Yi Dou, Nanyun Peng
22 Apr 2024
Pensieve: Retrospect-then-Compare Mitigates Visual Hallucination
Dingchen Yang, Bowen Cao, Guang Chen, Changjun Jiang
21 Mar 2024
InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model
Xiao-wen Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, ..., Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, Jiaqi Wang
VLM, MLLM
29 Jan 2024
LightHouse: A Survey of AGI Hallucination
Feng Wang
LRM, HILM, VLM
08 Jan 2024
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
MLLM, BDL, VLM, CLIP
28 Jan 2022
Pix2seq: A Language Modeling Framework for Object Detection
Ting-Li Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey E. Hinton
MLLM, ViT, VLM
22 Sep 2021
Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
VLM, CLIP
11 Feb 2021