When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
30 October 2023
Aleksandar Petrov, Philip H. S. Torr, Adel Bibi
Tags: VPVLM
Papers citing "When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations" (21 of 21 papers shown)
Efficient Knowledge Transfer in Multi-Task Learning through Task-Adaptive Low-Rank Representation
Xiao Zhang, Kangsheng Wang, Tianyu Hu, Huimin Ma
45 / 2 / 0 · 20 Apr 2025

Re-Imagining Multimodal Instruction Tuning: A Representation View
Yiyang Liu, James Liang, Ruixiang Tang, Yugyung Lee, Majid Rabbani, ..., Raghuveer M. Rao, Lifu Huang, Dongfang Liu, Qifan Wang, Cheng Han
51 / 0 / 0 · 02 Mar 2025

PromptExp: Multi-granularity Prompt Explanation of Large Language Models
Ximing Dong, Shaowei Wang, Dayi Lin, Gopi Krishnan Rajbahadur, Boquan Zhou, Shichao Liu, Ahmed E. Hassan
Tags: AAML, LRM
16 / 1 / 0 · 16 Oct 2024

Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim, Wonjun Kang, Yuchen Zeng, H. Koo, Kangwook Lee
29 / 4 / 0 · 11 Oct 2024

How Much Can RAG Help the Reasoning of LLM?
Jingyu Liu, Jiaen Lin, Yong Liu
Tags: LRM
18 / 9 / 0 · 03 Oct 2024

Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts
Minh Le, Chau Nguyen, Huy Nguyen, Quyen Tran, Trung Le, Nhat Ho
30 / 3 / 0 · 03 Oct 2024

Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely
Siyun Zhao, Yuqing Yang, Zilong Wang, Zhiyuan He, Luna Qiu, Lili Qiu
Tags: SyDa, RALM, 3DV
32 / 31 / 0 · 23 Sep 2024

PromptDSI: Prompt-based Rehearsal-free Instance-wise Incremental Learning for Document Retrieval
Tuan-Luc Huynh, Thuy-Trang Vu, Weiqing Wang, Yinwei Wei, Trung Le, D. Gašević, Yuan-Fang Li, Thanh-Toan Do
Tags: VLM, CLL
38 / 0 / 0 · 18 Jun 2024

Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation
Abhinav C. P. Jain, Swarat Chaudhuri, Thomas W. Reps, Christopher M. Jermaine
18 / 1 / 0 · 24 May 2024

Towards Incremental Learning in Large Language Models: A Critical Review
M. Jovanovic, Peter Voss
Tags: ELM, CLL, KELM
26 / 5 / 0 · 28 Apr 2024

$R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding
Ye Liu, Jixuan He, Wanhua Li, Junsik Kim, D. Wei, Hanspeter Pfister, Chang Wen Chen
34 / 13 / 0 · 31 Mar 2024

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
136 / 301 / 0 · 21 Mar 2024

Code Simulation Challenges for Large Language Models
Emanuele La Malfa, Christoph Weinhuber, Orazio Torre, Fangru Lin, Samuele Marro, Anthony Cohn, Nigel Shadbolt, Michael Wooldridge
Tags: LLMAG, LRM
14 / 8 / 0 · 17 Jan 2024

The Expressive Power of Low-Rank Adaptation
Yuchen Zeng, Kangwook Lee
20 / 49 / 0 · 26 Oct 2023

The Learnability of In-Context Learning
Noam Wies, Yoav Levine, Amnon Shashua
114 / 89 / 0 · 14 Mar 2023

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Tags: ReLM, LRM
291 / 2,712 / 0 · 24 May 2022

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
Tags: VLM, LRM
134 / 276 / 0 · 15 Oct 2021

Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, ..., Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou
Tags: VLM, VPVLM
100 / 24 / 0 · 15 Oct 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
278 / 3,784 / 0 · 18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Tags: AAML
248 / 340 / 0 · 01 Jan 2021

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
223 / 4,424 / 0 · 23 Jan 2020