HyperPrompt: Prompt-based Task-Conditioning of Transformers

1 March 2022
Yun He, H. Zheng, Yi Tay, Jai Gupta, Yu Du, V. Aribandi, Zhe Zhao, Yaguang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, Ed H. Chi
Tags: LRM, VLM
Links: arXiv, PDF, HTML

Papers citing "HyperPrompt: Prompt-based Task-Conditioning of Transformers"

21 / 21 papers shown
A Survey of Controllable Learning: Methods and Applications in Information Retrieval (03 Jan 2025)
Chenglei Shen, Xiao Zhang, Teng Shi, Changshuo Zhang, Guofu Xie, Jun Xu

Global and Local Prompts Cooperation via Optimal Transport for Federated Learning (29 Feb 2024)
Hongxia Li, Wei Huang, Jingya Wang, Ye Shi
Tags: FedML, VLM

Investigating the Effectiveness of HyperTuning via Gisting (26 Feb 2024)
Jason Phang

Decomposed Prompt Tuning via Low-Rank Reparameterization (16 Oct 2023)
Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, Xiaoli Li
Tags: VLM

Focus Your Attention (with Adaptive IIR Filters) (24 May 2023)
Shahar Lutati, Itamar Zimerman, Lior Wolf

Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency (18 May 2023)
Lingfeng Shen, Weiting Tan, Boyuan Zheng, Daniel Khashabi
Tags: VLM

Full Scaling Automation for Sustainable Development of Green Data Centers (01 May 2023)
Shiyu Wang, Yinbo Sun, X. Shi, Shiyi Zhu, Lintao Ma, James Y. Zhang, Yifei Zheng, Jian Liu

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (06 Mar 2023)
Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, Yoon Kim
Tags: VLM, VPVLM

Task Bias in Vision-Language Models (08 Dec 2022)
Sachit Menon, I. Chandratreya, Carl Vondrick
Tags: VLM, SSL

HyperTuning: Toward Adapting Large Language Models without Back-propagation (22 Nov 2022)
Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen

TEMPERA: Test-Time Prompting via Reinforcement Learning (21 Nov 2022)
Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, Joseph E. Gonzalez
Tags: VLM

Prompt Tuning for Parameter-efficient Medical Image Segmentation (16 Nov 2022)
Marc Fischer, Alexander Bartler, Bin Yang
Tags: SSeg

Two-stage LLM Fine-tuning with Less Specialization and More Generalization (01 Nov 2022)
Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix X. Yu, Cho-Jui Hsieh, Inderjit S. Dhillon, Sanjiv Kumar

HyperHawkes: Hypernetwork based Neural Temporal Point Process (01 Oct 2022)
Manisha Dubey, P. K. Srijith, M. Desarkar
Tags: AI4TS

UL2: Unifying Language Learning Paradigms (10 May 2022)
Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason W. Wei, ..., Tal Schuster, H. Zheng, Denny Zhou, N. Houlsby, Donald Metzler
Tags: AI4CE

Hyperdecoders: Instance-specific decoders for multi-task NLP (15 Mar 2022)
Hamish Ivison, Matthew E. Peters
Tags: AI4CE

HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks (08 Mar 2022)
Zhengkun Zhang, Wenya Guo, Xiaojun Meng, Yasheng Wang, Yadao Wang, Xin Jiang, Qun Liu, Zhenglu Yang

Multitask Prompted Training Enables Zero-Shot Task Generalization (15 Oct 2021)
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
Tags: LRM

The Power of Scale for Parameter-Efficient Prompt Tuning (18 Apr 2021)
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM

WARP: Word-level Adversarial ReProgramming (01 Jan 2021)
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Tags: AAML

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM