Evaluating and Steering Modality Preferences in Multimodal Large Language Model
arXiv: 2505.20977 · 27 May 2025
Yu Zhang, Jinlong Ma, Yongshuai Hou, Xuefeng Bai, Kehai Chen, Yang Xiang, Jun Yu, Min Zhang

Papers citing "Evaluating and Steering Modality Preferences in Multimodal Large Language Model"

10 papers shown

MLLMEraser: Achieving Test-Time Unlearning in Multimodal Large Language Models through Activation Steering
Chenlu Ding, Jiancan Wu, Leheng Sheng, Fan Zhang, Yancheng Yuan, Xiang Wang, Xiangnan He
MU, KELM · 05 Oct 2025

Compose and Fuse: Revisiting the Foundational Bottlenecks in Multimodal Reasoning
Yucheng Wang, Yifan Hou, Aydin Javadov, Mubashara Akhtar, Mrinmaya Sachan
LRM · 28 Sep 2025

From Bias to Balance: Exploring and Mitigating Spatial Bias in LVLMs
Yingjie Zhu, Xuefeng Bai, Kehai Chen, Yang Xiang, Weili Guan, Jun-chen Yu, Min Zhang
26 Sep 2025

Bridge the Gap: From Weak to Full Supervision for Temporal Action Localization with PseudoFormer
Ziyi Liu, Rahul Gupta
21 Apr 2025

MoMa-Kitchen: A 100K+ Benchmark for Affordance-Grounded Last-Mile Navigation in Mobile Manipulation
P. Zhang, Xianqiang Gao, Yuhan Wu, Kehui Liu, Dong Wang, Zechuan Wang, Bin Zhao, Yan Ding, Xiaochen Li
LM&Ro · 14 Mar 2025

Qwen2.5-VL Technical Report
S. Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, ..., Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, Junyang Lin
VLM · 20 Feb 2025

Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding
Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, Kyomin Jung
MLLM · 20 Feb 2025

Improving Instruction-Following in Language Models through Activation Steering
Alessandro Stolfo, Vidhisha Balachandran, Safoora Yousefi, Eric Horvitz, Besmira Nushi
LLMSV · 15 Oct 2024

VLind-Bench: Measuring Language Priors in Large Vision-Language Models
Kang-il Lee, Minbeom Kim, Seunghyun Yoon, Minsung Kim, Dongryeol Lee, Hyukhun Koh, Kyomin Jung
CoGe, VLM · 13 Jun 2024

Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?
Letitia Parcalabescu, Anette Frank
MLLM, CoGe, VLM · 29 Apr 2024