How are Prompts Different in Terms of Sensitivity? North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization. International Conference on Learning Representations (ICLR), 2024.
Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
Automatic Prompt Optimization with "Gradient Descent" and Beam Search. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery. Neural Information Processing Systems (NeurIPS), 2023.
A Survey on In-context Learning. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
Large Language Models Are Human-Level Prompt Engineers. International Conference on Learning Representations (ICLR), 2023.
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2022.
An Explanation of In-context Learning as Implicit Bayesian Inference. International Conference on Learning Representations (ICLR), 2022.
Towards Understanding Knowledge Distillation. International Conference on Machine Learning (ICML), 2019.