ViperGPT: Visual Inference via Python Execution for Reasoning
14 March 2023
Dídac Surís, Sachit Menon, Carl Vondrick
MLLM, LRM, ReLM
ArXiv · PDF · HTML

Papers citing "ViperGPT: Visual Inference via Python Execution for Reasoning"

50 / 339 papers shown
Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning
Brandon Huang, Chancharik Mitra, Assaf Arbelle, Leonid Karlinsky, Trevor Darrell, Roei Herzig
37 · 12 · 0
21 Jun 2024

Whiteboard-of-Thought: Thinking Step-by-Step Across Modalities
Sachit Menon, Richard Zemel, Carl Vondrick
LRM
28 · 1 · 0
20 Jun 2024

VDebugger: Harnessing Execution Feedback for Debugging Visual Programs
Xueqing Wu, Zongyu Lin, Songyan Zhao, Te-Lin Wu, Pan Lu, Nanyun Peng, Kai-Wei Chang
LRM
45 · 2 · 0
19 Jun 2024

DrVideo: Document Retrieval Based Long Video Understanding
Ziyu Ma, Chenhui Gou, Hengcan Shi, Bin Sun, Shutao Li, Hamid Rezatofighi, Jianfei Cai
VLM
34 · 12 · 0
18 Jun 2024

Automatic benchmarking of large multimodal models via iterative experiment programming
Alessandro Conti, Enrico Fini, Paolo Rota, Yiming Wang, Massimiliano Mancini, Elisa Ricci
30 · 0 · 0
18 Jun 2024

CodeNav: Beyond tool-use to using real-world codebases with LLM agents
Tanmay Gupta, Luca Weihs, Aniruddha Kembhavi
LLMAG, ELM
56 · 1 · 0
18 Jun 2024

Investigating Video Reasoning Capability of Large Language Models with Tropes in Movies
Hung-Ting Su, Chun-Tong Chao, Ya-Ching Hsu, Xudong Lin, Yulei Niu, Hung-Yi Lee, Winston H. Hsu
LRM
31 · 1 · 0
16 Jun 2024

What is the Visual Cognition Gap between Humans and Multimodal LLMs?
Xu Cao, Bolin Lai, Wenqian Ye, Yunsheng Ma, Joerg Heintz, Jintai Chen, Jianguo Cao, James M. Rehg
37 · 8 · 0
14 Jun 2024

Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Ranjay Krishna
LRM
32 · 34 · 0
13 Jun 2024

Too Many Frames, Not All Useful: Efficient Strategies for Long-Form Video QA
Jongwoo Park, Kanchana Ranasinghe, Kumara Kahatapitiya, Wonjeong Ryoo, Donghyun Kim, Michael S. Ryoo
57 · 20 · 0
13 Jun 2024

Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?
Xingyu Fu, Muyu He, Yujie Lu, William Yang Wang, Dan Roth
EGVM, LRM
26 · 15 · 0
11 Jun 2024

LogiCode: an LLM-Driven Framework for Logical Anomaly Detection
Yiheng Zhang, Yunkang Cao, Xiaohao Xu, Weiming Shen
29 · 14 · 0
07 Jun 2024

Re-ReST: Reflection-Reinforced Self-Training for Language Agents
Zi-Yi Dou, Cheng-Fu Yang, Xueqing Wu, Kai-Wei Chang, Nanyun Peng
LRM
81 · 7 · 0
03 Jun 2024

ParSEL: Parameterized Shape Editing with Language
Aditya Ganeshan, Ryan Y. Huang, Xianghao Xu, R. K. Jones, Daniel E. Ritchie
KELM
37 · 1 · 0
30 May 2024

LLM-based Hierarchical Concept Decomposition for Interpretable Fine-Grained Image Classification
Renyi Qu, Mark Yatskar
16 · 1 · 0
29 May 2024

VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos
Ziyang Wang, Shoubin Yu, Elias Stengel-Eskin, Jaehong Yoon, Feng Cheng, Gedas Bertasius, Mohit Bansal
40 · 56 · 0
29 May 2024

MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning
Somnath Kumar, Yash Gadhia, T. Ganu, A. Nambi
LRM
45 · 1 · 0
28 May 2024

A Human-Like Reasoning Framework for Multi-Phases Planning Task with Large Language Models
Chengxing Xie, Difan Zou
LRM, LLMAG
27 · 4 · 0
28 May 2024

Tool Learning with Large Language Models: A Survey
Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, Jirong Wen
LLMAG
31 · 77 · 0
28 May 2024

UDKAG: Augmenting Large Vision-Language Models with Up-to-Date Knowledge
Chuanhao Li, Zhen Li, Chenchen Jing, Shuo Liu, Wenqi Shao, Yuwei Wu, Ping Luo, Yu Qiao, Kaipeng Zhang
ELM
23 · 3 · 0
23 May 2024

Libra: Building Decoupled Vision System on Large Language Models
Yifan Xu, Xiaoshan Yang, Y. Song, Changsheng Xu
MLLM, VLM
31 · 6 · 0
16 May 2024

G-VOILA: Gaze-Facilitated Information Querying in Daily Scenarios
Zeyu Wang, Yuanchun Shi, Yuntao Wang, Yuchen Yao, Kun Yan, Yuhan Wang, Lei Ji, Xuhai Xu, Chun Yu
16 · 7 · 0
13 May 2024

LogoMotion: Visually Grounded Code Generation for Content-Aware Animation
Vivian Liu, Rubaiat Habib Kazi, Li-Yi Wei, Matthew Fisher, Timothy Langlois, Seth Walker, Lydia B. Chilton
33 · 0 · 0
11 May 2024

Memory-Maze: Scenario Driven Benchmark and Visual Language Navigation Model for Guiding Blind People
Masaki Kuribayashi, Kohei Uehara, Allan Wang, Daisuke Sato, Simon Chu, Shigeo Morishima
30 · 1 · 0
11 May 2024

ChatHuman: Language-driven 3D Human Understanding with Retrieval-Augmented Tool Reasoning
Jing Lin, Yao Feng, Weiyang Liu, Michael J. Black
3DH, LRM
40 · 5 · 0
07 May 2024

WorldQA: Multimodal World Knowledge in Videos through Long-Chain Reasoning
Yuanhan Zhang, Kaichen Zhang, Bo-wen Li, Fanyi Pu, Christopher Arif Setiadharma, Jingkang Yang, Ziwei Liu
VGen
47 · 7 · 0
06 May 2024

Large Language Models Synergize with Automated Machine Learning
Jinglue Xu, Jialong Li, Zhen Liu, Nagar Anthel Venkatesh Suryanarayanan, Guoyuan Zhou, Jia Guo, Hitoshi Iba, Kenji Tei
33 · 4 · 0
06 May 2024

Simplifying Multimodality: Unimodal Approach to Multimodal Challenges in Radiology with General-Domain Large Language Model
Seonhee Cho, Choonghan Kim, Jiho Lee, Chetan Chilkunda, Sujin Choi, Joo Heung Yoon
44 · 0 · 0
29 Apr 2024

Anywhere: A Multi-Agent Framework for User-Guided, Reliable, and Diverse Foreground-Conditioned Image Generation
Tianyidan Xie, Rui Ma, Qian Wang, Xiaoqian Ye, Feixuan Liu, Ying Tai, Zhenyu Zhang, Lanjun Wang, Zili Yi
DiffM, MLLM
42 · 2 · 0
29 Apr 2024

Position: Do Not Explain Vision Models Without Context
Paulina Tomaszewska, Przemysław Biecek
24 · 1 · 0
28 Apr 2024

PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
Lin Xu, Yilin Zhao, Daquan Zhou, Zhijie Lin, See Kiong Ng, Jiashi Feng
MLLM, VLM
34 · 156 · 0
25 Apr 2024

Leveraging Large Language Models for Multimodal Search
Oriol Barbany, Michael Huang, Xinliang Zhu, Arnab Dhua
23 · 8 · 0
24 Apr 2024

Think-Program-reCtify: 3D Situated Reasoning with Large Language Models
Qingrong He, Kejun Lin, Shizhe Chen, Anwen Hu, Qin Jin
LRM
37 · 1 · 0
23 Apr 2024

From Matching to Generation: A Survey on Generative Information Retrieval
Xiaoxi Li, Jiajie Jin, Yujia Zhou, Yuyao Zhang, Peitian Zhang, Yutao Zhu, Zhicheng Dou
3DV
67 · 45 · 0
23 Apr 2024

Describe-then-Reason: Improving Multimodal Mathematical Reasoning through Visual Comprehension Training
Mengzhao Jia, Zhihan Zhang, W. Yu, Fangkai Jiao, Meng-Long Jiang
VLM, ReLM, LRM
48 · 7 · 0
22 Apr 2024

Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales
Minghe Gao, Shuang Chen, Liang Pang, Yuan Yao, Jisheng Dang, Wenqiao Zhang, Juncheng Li, Siliang Tang, Yueting Zhuang, Tat-Seng Chua
LRM
32 · 5 · 0
17 Apr 2024

OpenBias: Open-set Bias Detection in Text-to-Image Generative Models
Moreno D'Incà, E. Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, N. Sebe
53 · 31 · 0
11 Apr 2024

Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs
Kanchana Ranasinghe, Satya Narayan Shukla, Omid Poursaeed, Michael S. Ryoo, Tsung-Yu Lin
LRM
38 · 21 · 0
11 Apr 2024

VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning
Alexandros Xenos, Niki Maria Foteinopoulou, Ioanna Ntinou, Ioannis Patras, Georgios Tzimiropoulos
19 · 15 · 0
10 Apr 2024

MoReVQA: Exploring Modular Reasoning Models for Video Question Answering
Juhong Min, Shyamal Buch, Arsha Nagrani, Minsu Cho, Cordelia Schmid
LRM
34 · 20 · 0
09 Apr 2024

Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement
Zaid Khan, B. Vijaykumar, S. Schulter, Yun Fu, Manmohan Chandraker
LRM, ReLM
20 · 6 · 0
06 Apr 2024

JRDB-Social: A Multifaceted Robotic Dataset for Understanding of Context and Dynamics of Human Interactions Within Social Groups
Simindokht Jahangard, Zhixi Cai, Shiki Wen, Hamid Rezatofighi
26 · 6 · 0
06 Apr 2024

Idea-2-3D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs
Junhao Chen, Xiang Li, Xiaojun Ye, Chao Li, Zhaoxin Fan, Hao Zhao
VGen, 3DV
197 · 4 · 0
05 Apr 2024

PREGO: online mistake detection in PRocedural EGOcentric videos
Alessandro Flaborea, Guido Maria D'Amely di Melendugno, Leonardo Plini, Luca Scofano, Edoardo De Matteis, Antonino Furnari, G. Farinella, Fabio Galasso
EgoV
48 · 11 · 0
02 Apr 2024

Evaluating Text-to-Visual Generation with Image-to-Text Generation
Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, Deva Ramanan
EGVM
37 · 125 · 0
01 Apr 2024

Chat Modeling: Natural Language-based Procedural Modeling of Biological Structures without Training
Donggang Jia, Yunhai Wang, Ivan Viola
29 · 1 · 0
01 Apr 2024

An Image Grid Can Be Worth a Video: Zero-shot Video Question Answering Using a VLM
Wonkyun Kim, Changin Choi, Wonseok Lee, Wonjong Rhee
VLM
43 · 50 · 0
27 Mar 2024

PropTest: Automatic Property Testing for Improved Visual Programming
Jaywon Koo, Ziyan Yang, Paola Cascante-Bonilla, Baishakhi Ray, Vicente Ordonez
LRM
24 · 2 · 0
25 Mar 2024

Synthesize Step-by-Step: Tools, Templates and LLMs as Data Generators for Reasoning-Based Chart VQA
Zhuowan Li, Bhavan A. Jasani, Peng Tang, Shabnam Ghadar
LRM
22 · 8 · 0
25 Mar 2024

VURF: A General-purpose Reasoning and Self-refinement Framework for Video Understanding
Ahmad A Mahmood, Ashmal Vayani, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan
LRM
49 · 7 · 0
21 Mar 2024