Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning (arXiv 2205.05638)
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel · 11 May 2022
Papers citing "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" (46 of 96 papers shown)
Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach
Xingshuai Huang, Di Wu, Benoit Boulet · 12 Dec 2023 · OffRL
Diversified in-domain synthesis with efficient fine-tuning for few-shot classification
Victor G. Turrisi da Costa, Nicola Dall’Asen, Yiming Wang, N. Sebe, Elisa Ricci · 05 Dec 2023
A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA
Damjan Kalajdzievski · 28 Nov 2023 · ALM
PrivateLoRA For Efficient Privacy Preserving LLM
Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang · 23 Nov 2023
PEFT-MedAware: Large Language Model for Medical Awareness
Keivalya Pandya · 17 Nov 2023 · MedIm, AI4MH, LM&MA
Tied-Lora: Enhancing parameter efficiency of LoRA with weight tying
Adithya Renduchintala, Tugrul Konuk, Oleksii Kuchaiev · 16 Nov 2023 · MoMe
SiRA: Sparse Mixture of Low Rank Adaptation
Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, ..., Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng · 15 Nov 2023 · MoE
When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks
Hao Peng, Xiaozhi Wang, Jianhui Chen, Weikai Li, Y. Qi, ..., Zhili Wu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li · 15 Nov 2023
Decomposed Prompt Tuning via Low-Rank Reparameterization
Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, Xiaoli Li · 16 Oct 2023 · VLM
In-Context Learning for Few-Shot Molecular Property Prediction
Christopher Fifty, J. Leskovec, Sebastian Thrun · 13 Oct 2023
Comparing Styles across Languages: A Cross-Cultural Exploration of Politeness
Shreya Havaldar, Matthew Pressimone, Eric Wong, Lyle Ungar · 11 Oct 2023
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia · 21 Sep 2023
SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding
Tianyu Yu, Chengyue Jiang, Chao Lou, Shen Huang, Xiaobin Wang, ..., Haitao Zheng, Ningyu Zhang, Pengjun Xie, Fei Huang, Yong-jia Jiang · 21 Aug 2023 · LRM
Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning
Xindi Wang, Yufei Wang, Can Xu, Xiubo Geng, Bowen Zhang, Chongyang Tao, Frank Rudzicz, Robert E. Mercer, Daxin Jiang · 28 Jul 2023
PASTA: Pretrained Action-State Transformer Agents
Raphael Boige, Yannis Flet-Berliac, Arthur Flajolet, Guillaume Richard, Thomas Pierrot · 20 Jul 2023 · LM&Ro, OffRL
Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models
Lautaro Estienne, Luciana Ferrer, Matías Vera, Pablo Piantanida · 13 Jul 2023 · VLM
Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions
Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, Jinghua Bai · 16 Jun 2023 · VPVLM
Parameter-Efficient Fine-Tuning without Introducing New Latency
Baohao Liao, Yan Meng, Christof Monz · 26 May 2023
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
Jesus Solano, Oana-Maria Camburu, Pasquale Minervini · 22 May 2023
Automated Few-shot Classification with Instruction-Finetuned Language Models
Rami Aly, Xingjian Shi, Kaixiang Lin, Aston Zhang, A. Wilson · 21 May 2023
CoEdIT: Text Editing by Task-Specific Instruction Tuning
Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang · 17 May 2023 · ALM
Improving Small Language Models on PubMedQA via Generative Data Augmentation
Zhen Guo, Peiqi Wang, Yanwei Wang, Shangdi Yu · 12 May 2023 · LM&MA, MedIm
A Comprehensive Analysis of Adapter Efficiency
Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra · 12 May 2023
Towards Zero-Shot Functional Compositionality of Language Models
Hangyeol Yu, Myeongho Jeong, Jamin Shin, Hyeongdon Moon, Juneyoung Park, Seungtaek Choi · 06 Mar 2023
Few-shot Multimodal Multitask Multilingual Learning
Aman Chadha, Vinija Jain · 19 Feb 2023
Towards Agile Text Classifiers for Everyone
Maximilian Mozes, Jessica Hoffmann, Katrin Tomanek, Muhamed Kouate, Nithum Thain, Ann Yuan, Tolga Bolukbasi, Lucas Dixon · 13 Feb 2023
(QA)²: Question Answering with Questionable Assumptions
Najoung Kim, Phu Mon Htut, Sam Bowman, Jackson Petty · 20 Dec 2022
BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, ..., Genta Indra Winata, Stella Biderman, Edward Raff, Dragomir R. Radev, Vassilina Nikoulina · 19 Dec 2022 · CLL, VLM, AI4CE, LRM
Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi · 01 Dec 2022
Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek F. Abdelzaher, Jiawei Han · 06 Nov 2022 · VLM
Continued Pretraining for Better Zero- and Few-Shot Promptability
Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy · 19 Oct 2022 · VLM
Automatic Chain of Thought Prompting in Large Language Models
Zhuosheng Zhang, Aston Zhang, Mu Li, Alexander J. Smola · 07 Oct 2022 · ReLM, LRM
Efficient Few-Shot Learning Without Prompts
Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg · 22 Sep 2022 · VLM
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi · 24 May 2022
A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement
Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, J. Dong, Feifei Ma, Jian Zhang · 21 Mar 2022 · AAML, SILM
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, ..., Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, Alexander M. Rush · 02 Feb 2022 · VLM
Co-training Improves Prompt-based Learning for Large Language Models
Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag · 02 Feb 2022 · VLM, LRM
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush · 15 Oct 2021 · LRM
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer · 15 Oct 2021 · VLM, LRM
Meta-learning via Language Model In-context Tuning
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He · 15 Oct 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang · 14 Oct 2021 · VLM
RAFT: A Real-World Few-Shot Text Classification Benchmark
Neel Alex, Eli Lifland, Lewis Tunstall, A. Thakur, Pegah Maham, ..., Carolyn Ashurst, Paul Sedille, A. Carlier, M. Noetel, Andreas Stuhlmüller · 28 Sep 2021 · RALM
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · 18 Apr 2021 · VPVLM
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze · 21 Jan 2020