
arXiv:2112.08348

Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts

15 December 2021
Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, Yejin Choi
arXiv (abs) | PDF | HTML

Papers citing "Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts"

10 / 60 papers shown
Reducing Retraining by Recycling Parameter-Efficient Prompts
Brian Lester, Joshua Yurtsever, Siamak Shakeri, Noah Constant
10 Aug 2022
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, Zhiting Hu
25 May 2022
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
24 May 2022
Representation Projection Invariance Mitigates Representation Collapse
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Anastasia Razdaibiedina, A. Khetan, Zohar Karnin, Daniel Khashabi, Vishaal Kapoor, V. Madan
23 May 2022
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Neural Information Processing Systems (NeurIPS), 2022
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Joey Tianyi Zhou, Colin Raffel
11 May 2022
Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Mor Geva, Avi Caciularu, Ke Wang, Yoav Goldberg
28 Mar 2022
GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Archiki Prasad, Peter Hase, Xiang Zhou, Joey Tianyi Zhou
14 Mar 2022
Describing Differences between Text Distributions with Natural Language
International Conference on Machine Learning (ICML), 2022
Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt
28 Jan 2022
The Power of Prompt Tuning for Low-Resource Semantic Parsing
Nathan Schucher, Siva Reddy, H. D. Vries
16 Oct 2021
Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick
02 Sep 2021
Page 2 of 2