arXiv:2203.07519 (v2, latest)

Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer
14 March 2022
Woojeong Jin, Dong-Ho Lee, Chenguang Zhu, Jay Pujara, Xiang Ren
Tags: CLIP, VLM
Papers citing "Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer" (7 papers):
A Thousand Words or An Image: Studying the Influence of Persona Modality in Multimodal LLMs
Julius Broomfield, Kartik Sharma, Srijan Kumar
27 Feb 2025

WinoViz: Probing Visual Properties of Objects Under Different States
Woojeong Jin, Tejas Srinivasan, Jesse Thomason, Xiang Ren
21 Feb 2024

Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs
Yunxin Li, Baotian Hu, Wei Wang, Xiaochun Cao, Min Zhang
27 Nov 2023

MPCHAT: Towards Multimodal Persona-Grounded Conversation
Jaewoo Ahn, Yeda Song, Sangdoo Yun, Gunhee Kim
27 May 2023

Learning to Imagine: Visually-Augmented Natural Language Generation
Tianyi Tang, Yushuo Chen, Yifan Du, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
Tags: DiffM
26 May 2023

Towards Versatile and Efficient Visual Knowledge Integration into Pre-trained Language Models with Cross-Modal Adapters
Xinyun Zhang, Haochen Tan, Han Wu, Bei Yu
Tags: KELM
12 May 2023

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
Tejas Srinivasan, Ting-Yun Chang, Leticia Pinto-Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason
Tags: VLM, CLL
18 Jun 2022