PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer

3 May 2023 · Lichang Chen, Heng-Chiao Huang, Varun Madhavan · AAML

Papers citing "PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer"

5 / 5 papers shown
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 18 Apr 2021 · 2,999 citations
Robust Optimization as Data Augmentation for Large-scale Graphs
Kezhi Kong, G. Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, Tom Goldstein · 19 Oct 2020 · 63 citations
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze · 21 Jan 2020 · 1,382 citations
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Chen Zhu, Yu Cheng, Zhe Gan, S. Sun, Tom Goldstein, Jingjing Liu · AAML · 25 Sep 2019 · 390 citations
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro · MoE · 17 Sep 2019 · 1,436 citations