Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Albert Webson, Ellie Pavlick
2 September 2021 · arXiv:2109.01247

Papers citing "Do Prompt-Based Models Really Understand the Meaning of their Prompts?"

Showing 27 of 277 citing papers.

Shortcut Learning of Large Language Models in Natural Language Understanding
Communications of the ACM (CACM), 2022
Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, Xia Hu
25 Aug 2022

Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022
Hendrik Strobelt, Albert Webson, Victor Sanh, Benjamin Hoover, Johanna Beyer, Hanspeter Pfister, Alexander M. Rush
16 Aug 2022

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
14 Jul 2022

BioTABQA: Instruction Learning for Biomedical Table Question Answering
Conference and Labs of the Evaluation Forum (CLEF), 2022
Man Luo, S. Saxena, Swaroop Mishra, Mihir Parmar, Chitta Baral
06 Jul 2022

MVP: Multi-task Supervised Pre-training for Natural Language Generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
24 Jun 2022

Using cognitive psychology to understand GPT-3
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2022
Marcel Binz, Eric Schulz
21 Jun 2022

InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, M. Eskénazi, Jeffrey P. Bigham
25 May 2022

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, Zhiting Hu
25 May 2022

Large Language Models are Zero-Shot Reasoners
Neural Information Processing Systems (NeurIPS), 2022
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
24 May 2022

Improving Short Text Classification With Augmented Data Using GPT-3
Natural Language Engineering (NLE), 2022
Salvador Balkus, Donghui Yan
23 May 2022

Instruction Induction: From Few Examples to Natural Language Task Descriptions
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
22 May 2022

Can Foundation Models Wrangle Your Data?
Proceedings of the VLDB Endowment (PVLDB), 2022
A. Narayan, Ines Chami, Laurel J. Orr, Simran Arora, Christopher Ré
20 May 2022

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Neural Information Processing Systems (NeurIPS), 2022
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel
11 May 2022

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning
Neural Information Processing Systems (NeurIPS), 2022
Xi Ye, Greg Durrett
06 May 2022

Language Models in the Loop: Incorporating Prompting into Weak Supervision
ACM / IMS Journal of Data Science (JDS), 2022
Ryan Smith, Jason Alan Fries, Braden Hancock, Stephen H. Bach
04 May 2022

OPT: Open Pre-trained Transformer Language Models
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, ..., Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer
02 May 2022

Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Neural Information Processing Systems (NeurIPS), 2022
Stephanie C. Y. Chan, Adam Santoro, Andrew Kyle Lampinen, Jane X. Wang, Aaditya K. Singh, Pierre Harvey Richemond, J. McClelland, Felix Hill
22 Apr 2022

In-BoXBART: Get Instructions into Biomedical Multi-Task Learning
Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, M. H. Murad, Chitta Baral
15 Apr 2022

Can language models learn from explanations in context?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill
05 Apr 2022

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Ves Stoyanov, Majid Yazdani
03 Apr 2022

GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal
14 Mar 2022

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, M. Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer
25 Feb 2022

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, ..., Khalid Almubarak, Xiangru Tang, Dragomir R. Radev, Mike Tian-Jian Jiang, Alexander M. Rush
02 Feb 2022

Analyzing the Limits of Self-Supervision in Handling Bias in Language
Lisa Bauer, Karthik Gopalakrishnan, Spandana Gella, Yang Liu, Mohit Bansal, Dilek Z. Hakkani-Tür
16 Dec 2021

True Few-Shot Learning with Prompts -- A Real-World Perspective
Transactions of the Association for Computational Linguistics (TACL), 2021
Timo Schick, Hinrich Schütze
26 Nov 2021

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
ACM Computing Surveys (CSUR), 2021
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth
01 Nov 2021

Systematic human learning and generalization from a brief tutorial with explanatory feedback
Open Mind (OM), 2021
A. Nam, James L. McClelland
10 Jul 2021