On the Power of Foundation Models. International Conference on Machine Learning (ICML), 2022.
Few-shot Query-Focused Summarization with Prefix-Merging. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Alignment-Enriched Tuning for Patch-Level Pre-trained Document Image Models. AAAI Conference on Artificial Intelligence (AAAI), 2022.
Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning. North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
RNTrajRec: Road Network Enhanced Trajectory Recovery with Spatial-Temporal Transformer. IEEE International Conference on Data Engineering (ICDE), 2022.
VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval. Computer Vision and Pattern Recognition (CVPR), 2022.
TEMPERA: Test-Time Prompting via Reinforcement Learning. International Conference on Learning Representations (ICLR), 2022.
Validating Large Language Models with ReLM. Conference on Machine Learning and Systems (MLSys), 2022.
Multitask Vision-Language Prompt Tuning. IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022.
QAmeleon: Multilingual QA with Only 5 Examples. Transactions of the Association for Computational Linguistics (TACL), 2022.
A Universal Discriminator for Zero-Shot Generalization. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
SPE: Symmetrical Prompt Enhancement for Fact Probing. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
ADEPT: A DEbiasing PrompT Framework. AAAI Conference on Artificial Intelligence (AAAI), 2022.
MACSum: Controllable Summarization with Mixed Attributes. Transactions of the Association for Computational Linguistics (TACL), 2022.
ConsPrompt: Exploiting Contrastive Samples for Fewshot Prompt Learning. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022.
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for Unsupervised Sentence Embedding. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Prompt-based Text Entailment for Low-Resource Named Entity Recognition. International Conference on Computational Linguistics (COLING), 2022.
Could Giant Pretrained Image Models Extract Universal Representations? Neural Information Processing Systems (NeurIPS), 2022.
Large Language Models Are Human-Level Prompt Engineers. International Conference on Learning Representations (ICLR), 2022.
Fine-grained Visual-Text Prompt-Driven Self-Training for Open-Vocabulary Object Detection. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022.
Parameter-Efficient Tuning Makes a Good Classification Head. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Evaluating Parameter Efficient Learning for Generation. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
NVIDIA FLARE: Federated Learning from Simulation to Real-World. IEEE Data Engineering Bulletin (DEB), 2022.
Generative Knowledge Graph Construction: A Review. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Model Ensemble Instead of Prompt Fusion: A Sample-Specific Knowledge Transfer Method for Few-shot Prompt Tuning. International Conference on Learning Representations (ICLR), 2022.
Generative Prompt Tuning for Relation Classification. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. Xiangyang Liu, Tianxiang Sun, Xuanjing Huang, Xipeng Qiu.
TabLLM: Few-shot Classification of Tabular Data with Large Language Models. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog. Findings, 2022.
Can Language Models Be Specific? How? Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
XPrompt: Exploring the Extreme of Prompt Tuning. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.