ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2408.16500 · Cited By
CogVLM2: Visual Language Models for Image and Video Understanding

29 August 2024
Wenyi Hong
Weihan Wang
Ming Ding
Wenmeng Yu
Qingsong Lv
Yan Wang
Yean Cheng
Shiyu Huang
Junhui Ji
Zhao Xue
Lei Zhao
Zhuoyi Yang
Xiaotao Gu
Xiaohan Zhang
Guanyu Feng
Da Yin
Zihan Wang
Ji Qi
Xixuan Song
Peng Zhang
Debing Liu
Bin Xu
Juanzi Li
Yuxiao Dong
Jie Tang
    VLM
    MLLM

Papers citing "CogVLM2: Visual Language Models for Image and Video Understanding"

18 / 18 papers shown
Seeing the Abstract: Translating the Abstract Language for Vision Language Models
Davide Talon
Federico Girella
Ziyue Liu
Marco Cristani
Yiming Wang
VLM
42
0
0
06 May 2025
Antidote: A Unified Framework for Mitigating LVLM Hallucinations in Counterfactual Presupposition and Object Perception
Yuanchen Wu
Lu Zhang
Hang Yao
Junlong Du
Ke Yan
Shouhong Ding
Yunsheng Wu
X. Li
MLLM
68
0
0
29 Apr 2025
Guiding VLM Agents with Process Rewards at Inference Time for GUI Navigation
Zhiyuan Hu
Shiyun Xiong
Yifan Zhang
See-Kiong Ng
Anh Tuan Luu
Bo An
Shuicheng Yan
Bryan Hooi
31
0
0
22 Apr 2025
DomainCQA: Crafting Expert-Level QA from Domain-Specific Charts
Ling Zhong
Yujing Lu
Jing Yang
Weiming Li
Peng Wei
Yongheng Wang
Manni Duan
Qing Zhang
45
0
0
25 Mar 2025
MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression
Linjie Mu
Zhongzhen Huang
Shengqian Qin
Yakun Zhu
S. Zhang
Xiaofan Zhang
38
0
0
17 Feb 2025
MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation
Haibo Tong
Zhaoyang Wang
Z. Chen
Haonian Ji
Shi Qiu
...
Peng Xia
Mingyu Ding
Rafael Rafailov
Chelsea Finn
Huaxiu Yao
EGVM
VGen
84
2
0
03 Feb 2025
ComposeAnyone: Controllable Layout-to-Human Generation with Decoupled Multimodal Conditions
Shiyue Zhang
Zheng Chong
Xi Lu
Wenqing Zhang
Haoxiang Li
Xujie Zhang
Jiehui Huang
Xiao Dong
Xiaodan Liang
DiffM
40
0
0
21 Jan 2025
Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation
Yuhui Zhang
Yuchang Su
Yiming Liu
Xiaohan Wang
James Burgess
...
Josiah Aklilu
Alejandro Lozano
Anjiang Wei
Ludwig Schmidt
Serena Yeung-Levy
50
3
0
06 Jan 2025
ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding
Xiao Wang
Qingyi Si
Jianlong Wu
Shiyu Zhu
Li Cao
Liqiang Nie
VLM
70
6
0
29 Dec 2024
JMMMU: A Japanese Massive Multi-discipline Multimodal Understanding Benchmark for Culture-aware Evaluation
Shota Onohara
Atsuyuki Miyai
Yuki Imajuku
Kazuki Egashira
Jeonghun Baek
Xiang Yue
Graham Neubig
Kiyoharu Aizawa
OSLM
71
1
0
22 Oct 2024
S$^4$ST: A Strong, Self-transferable, faSt, and Simple Scale Transformation for Transferable Targeted Attack
Yongxiang Liu
Bowen Peng
Li Liu
X. Li
31
0
0
13 Oct 2024
Vision Language Models See What You Want but not What You See
Qingying Gao
Yijiang Li
Haiyun Lyu
Haoran Sun
Dezhi Luo
Hokin Deng
LRM
VLM
32
3
0
01 Oct 2024
Probing Mechanical Reasoning in Large Vision Language Models
Haoran Sun
Qingying Gao
Haiyun Lyu
Dezhi Luo
Yijiang Li
Hokin Deng
LRM
31
2
0
01 Oct 2024
Vision Language Models Know Law of Conservation without Understanding More-or-Less
Dezhi Luo
Haiyun Lyu
Qingying Gao
Haoran Sun
Yijiang Li
Hokin Deng
13
1
0
01 Oct 2024
CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer
Zhuoyi Yang
Jiayan Teng
Wendi Zheng
Ming Ding
Shiyu Huang
...
Weihan Wang
Yean Cheng
Xiaotao Gu
Yuxiao Dong
Jie Tang
DiffM
VGen
69
384
0
12 Aug 2024
ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools
Team GLM:
Aohan Zeng
Bin Xu
Bowen Wang
...
Zhaoyu Wang
Zhen Yang
Zhengxiao Du
Zhenyu Hou
Zihan Wang
ALM
53
473
0
18 Jun 2024
LVBench: An Extreme Long Video Understanding Benchmark
Weihan Wang
Zehai He
Wenyi Hong
Yean Cheng
Xiaohan Zhang
...
Shiyu Huang
Bin Xu
Yuxiao Dong
Ming Ding
Jie Tang
ELM
VLM
38
63
0
12 Jun 2024
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu
Swaroop Mishra
Tony Xia
Liang Qiu
Kai-Wei Chang
Song-Chun Zhu
Oyvind Tafjord
Peter Clark
A. Kalyan
ELM
ReLM
LRM
198
1,089
0
20 Sep 2022