Multimodal Few-Shot Learning with Frozen Language Models (arXiv:2106.13884)

25 June 2021
Maria Tsimpoukelli
Jacob Menick
Serkan Cabi
S. M. Ali Eslami
Oriol Vinyals
Felix Hill
    MLLM

Papers citing "Multimodal Few-Shot Learning with Frozen Language Models"

50 / 532 papers shown
Visual Instruction Tuning with Chain of Region-of-Interest
Yixin Chen
Shuai Zhang
Boran Han
Bernie Wang
11 May 2025
A Survey on Progress in LLM Alignment from the Perspective of Reward Design
Miaomiao Ji
Yanqiu Wu
Zhibin Wu
Shoujin Wang
Jian Yang
Mark Dras
Usman Naseem
05 May 2025
VIST-GPT: Ushering in the Era of Visual Storytelling with LLMs?
Mohamed Gado
Towhid Taliee
Muhammad Memon
D. Ignatov
Radu Timofte
27 Apr 2025
A Large Vision-Language Model based Environment Perception System for Visually Impaired People
Zezhou Chen
Zhaoxiang Liu
Kai Wang
Kohou Wang
Shiguo Lian
25 Apr 2025
CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey
Jindong Li
Y. Li
Yali Fu
Jiahong Liu
Yixin Liu
Menglin Yang
Irwin King
VLM
19 Apr 2025
Enhancing Multimodal In-Context Learning for Image Classification through Coreset Optimization
Huiyi Chen
Jiawei Peng
Kaihua Tang
Xin Geng
Xu Yang
19 Apr 2025
Analysing the Robustness of Vision-Language-Models to Common Corruptions
Muhammad Usama
Syeda Aishah Asim
Syed Bilal Ali
Syed Talal Wasim
Umair Bin Mansoor
VLM
18 Apr 2025
DeepMLF: Multimodal language model with learnable tokens for deep fusion in sentiment analysis
Efthymios Georgiou
V. Katsouros
Yannis Avrithis
Alexandros Potamianos
15 Apr 2025
ConceptFormer: Towards Efficient Use of Knowledge-Graph Embeddings in Large Language Models
Joel Barmettler
Abraham Bernstein
Luca Rossetto
KELM
3DV
10 Apr 2025
CubeRobot: Grounding Language in Rubik's Cube Manipulation via Vision-Language Model
Feiyang Wang
Xiaomin Yu
Wangyu Wu
LM&Ro
25 Mar 2025
ImageGen-CoT: Enhancing Text-to-Image In-context Learning with Chain-of-Thought Reasoning
Jiaqi Liao
Z. Yang
Linjie Li
Dianqi Li
Kevin Qinghong Lin
Yu-Xi Cheng
Lijuan Wang
MLLM
LRM
25 Mar 2025
LLaVAction: evaluating and training multi-modal large language models for action recognition
Shaokai Ye
Haozhe Qi
Alexander Mathis
Mackenzie W. Mathis
24 Mar 2025
MM-Spatial: Exploring 3D Spatial Understanding in Multimodal LLMs
Erik Daxberger
Nina Wenzel
David Griffiths
Haiming Gang
Justin Lazarow
...
Kai Kang
Marcin Eichner
Y. Yang
Afshin Dehghan
Peter Grasch
17 Mar 2025
TLAC: Two-stage LMM Augmented CLIP for Zero-Shot Classification
Ans Munir
Faisal Z. Qureshi
M. H. Khan
Mohsen Ali
VLM
15 Mar 2025
DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation
Amin Karimi
Charalambos Poullis
VLM
06 Mar 2025
See What You Are Told: Visual Attention Sink in Large Multimodal Models
Seil Kang
Jinyeong Kim
Junhyeok Kim
Seong Jae Hwang
VLM
05 Mar 2025
Advancing Multimodal In-Context Learning in Large Vision-Language Models with Task-aware Demonstrations
Yanshu Li
05 Mar 2025
Enhancing Spoken Discourse Modeling in Language Models Using Gestural Cues
Varsha Suresh
Muhammad Hamza Mughal
Christian Theobalt
Vera Demberg
05 Mar 2025
R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts
Zhongyang Li
Ziyue Li
Tianyi Zhou
MoE
27 Feb 2025
MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks
Hyeonjeong Ha
Qiusi Zhan
Jeonghwan Kim
Dimitrios Bralios
Saikrishna Sanniboina
Nanyun Peng
Kai-Wei Chang
Daniel Kang
Heng Ji
KELM
AAML
25 Feb 2025
FilterRAG: Zero-Shot Informed Retrieval-Augmented Generation to Mitigate Hallucinations in VQA
S M Sarwar
25 Feb 2025
Interaction2Code: Benchmarking MLLM-based Interactive Webpage Code Generation from Interactive Prototyping
Jingyu Xiao
Yuxuan Wan
Yintong Huo
Z. Wang
Xinyi Xu
Wenxuan Wang
Zhiyao Xu
Y. Wang
Michael R. Lyu
21 Feb 2025
Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation
Y. Yang
Ajay Patel
Matt Deitke
Tanmay Gupta
Luca Weihs
...
Mark Yatskar
Chris Callison-Burch
Ranjay Krishna
Aniruddha Kembhavi
Christopher Clark
SyDa
21 Feb 2025
TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents
Geon Lee
Wenchao Yu
Kijung Shin
Wei Cheng
Haifeng Chen
AI4TS
LLMAG
17 Feb 2025
Activating Distributed Visual Region within LLMs for Efficient and Effective Vision-Language Training and Inference
Siyuan Wang
Dianyi Wang
Chengxing Zhou
Zejun Li
Zhihao Fan
Xuanjing Huang
Zhongyu Wei
VLM
17 Dec 2024
Style-Pro: Style-Guided Prompt Learning for Generalizable Vision-Language Models
Niloufar Alipour Talemi
Hossein Kashiani
Fatemeh Afghah
CLIP
VLM
25 Nov 2024
Visual-Oriented Fine-Grained Knowledge Editing for MultiModal Large Language Models
Zhen Zeng
Leijiang Gu
Xun Yang
Zhangling Duan
Zenglin Shi
Meng Wang
KELM
19 Nov 2024
IP-MOT: Instance Prompt Learning for Cross-Domain Multi-Object Tracking
Run Luo
Zikai Song
Longze Chen
Yunshui Li
Min Yang
Wei-Guo Yang
30 Oct 2024
Analyzing Multimodal Interaction Strategies for LLM-Assisted Manipulation of 3D Scenes
Junlong Chen
Jens Grubert
Per Ola Kristensson
29 Oct 2024
Knowledge-Guided Prompt Learning for Request Quality Assurance in Public Code Review
Lin Li
Xinchun Yu
Xinyu Chen
Peng Liang
29 Oct 2024
What Factors Affect Multi-Modal In-Context Learning? An In-Depth Exploration
L. Qin
Qiguang Chen
Hao Fei
Zhi Chen
Min Li
Wanxiang Che
27 Oct 2024
A Stack-Propagation Framework for Low-Resource Personalized Dialogue Generation
Haoyu Song
W. Zhang
Kaiyan Zhang
Ting Liu
26 Oct 2024
Visual Text Matters: Improving Text-KVQA with Visual Text Entity Knowledge-aware Large Multimodal Assistant
A. S. Penamakuri
Anand Mishra
24 Oct 2024
Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Under Ambiguities
Zheyuan Zhang
Fengyuan Hu
Jayjun Lee
Freda Shi
Parisa Kordjamshidi
Joyce Chai
Ziqiao Ma
22 Oct 2024
OpenMU: Your Swiss Army Knife for Music Understanding
Mengjie Zhao
Zhi-Wei Zhong
Zhuoyuan Mao
Shiqi Yang
Wei-Hsiang Liao
Shusuke Takahashi
Hiromi Wakaki
Yuki Mitsufuji
OSLM
21 Oct 2024
RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training
Muhe Ding
Yang Ma
Pengda Qin
Jianlong Wu
Yuhong Li
Liqiang Nie
18 Oct 2024
Transforming Game Play: A Comparative Study of DCQN and DTQN Architectures in Reinforcement Learning
William A. Stigall
14 Oct 2024
Towards Interpreting Visual Information Processing in Vision-Language Models
Clement Neo
Luke Ong
Philip H. S. Torr
Mor Geva
David M. Krueger
Fazl Barez
09 Oct 2024
TuneVLSeg: Prompt Tuning Benchmark for Vision-Language Segmentation Models
Rabin Adhikari
Safal Thapaliya
Manish Dhakal
Bishesh Khanal
MLLM
VLM
07 Oct 2024
MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning
Haotian Zhang
Mingfei Gao
Zhe Gan
Philipp Dufter
Nina Wenzel
...
Haoxuan You
Zirui Wang
Afshin Dehghan
Peter Grasch
Yinfei Yang
VLM
MLLM
30 Sep 2024
Efficient Long-Form Speech Recognition for General Speech In-Context Learning
Hao Yen
Shaoshi Ling
Guoli Ye
29 Sep 2024
EAGLE: Towards Efficient Arbitrary Referring Visual Prompts Comprehension for Multimodal Large Language Models
Jiacheng Zhang
Yang Jiao
Shaoxiang Chen
Jingjing Chen
Yu-Gang Jiang
25 Sep 2024
Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond
Hong Chen
Xin Wang
Yuwei Zhou
Bin Huang
Yipeng Zhang
Wei Feng
Houlun Chen
Zeyang Zhang
Siao Tang
Wenwu Zhu
DiffM
23 Sep 2024
ChefFusion: Multimodal Foundation Model Integrating Recipe and Food Image Generation
Peiyu Li
Xiaobao Huang
Yijun Tian
Nitesh V. Chawla
18 Sep 2024
Benchmarking VLMs' Reasoning About Persuasive Atypical Images
Sina Malakouti
Aysan Aghazadeh
Ashmit Khandelwal
Adriana Kovashka
VLM
16 Sep 2024
Prompt-and-Transfer: Dynamic Class-aware Enhancement for Few-shot Segmentation
Hanbo Bi
Yingchao Feng
Wenhui Diao
Peijin Wang
Yongqiang Mao
Kun Fu
Hongqi Wang
Xian Sun
VLM
16 Sep 2024
Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale
Rogerio Bonatti
Dan Zhao
Francesco Bonacci
Dillon Dupont
Sara Abdali
...
Justin Wagle
K. Koishida
A. Bucker
Lawrence Jang
Zack Hui
LLMAG
12 Sep 2024
Shaking Up VLMs: Comparing Transformers and Structured State Space Models for Vision & Language Modeling
Georgios Pantazopoulos
Malvina Nikandrou
Alessandro Suglia
Oliver Lemon
Arash Eshghi
Mamba
09 Sep 2024
Multi-modal Situated Reasoning in 3D Scenes
Xiongkun Linghu
Jiangyong Huang
Xuesong Niu
Xiaojian Ma
Baoxiong Jia
Siyuan Huang
04 Sep 2024
Multi-Modal Adapter for Vision-Language Models
Dominykas Seputis
Serghei Mihailov
Soham Chatterjee
Zehao Xiao
VLM
03 Sep 2024