Zero-to-Strong Generalization: Eliciting Strong Capabilities of Large Language Models Iteratively without Gold Labels. International Conference on Computational Linguistics (COLING), 2024.
Rectifying Demonstration Shortcut in In-Context Learning. North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
Gen-Z: Generative Zero-Shot Text Classification with Contextualized Label Descriptions. International Conference on Learning Representations (ICLR), 2023.
Fusing Models with Complementary Expertise. International Conference on Learning Representations (ICLR), 2023.
LEAP: Efficient and Automated Test Method for NLP Software. International Conference on Automated Software Engineering (ASE), 2023.
Overthinking the Truth: Understanding how Language Models Process False Demonstrations. International Conference on Learning Representations (ICLR), 2023.
Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark. Neural Information Processing Systems (NeurIPS), 2023.
Mitigating Label Biases for In-context Learning. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Active Learning Principles for In-Context Learning with Large Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Prompting with Pseudo-Code Instructions. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning. Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
Knowledge is a Region in Weight Space for Fine-tuned Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
Few-shot Adaptation Works with UnpredicTable Data. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Low-Resource NLP. International Conference on Learning Representations (ICLR), 2022.
Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Assessment of Massively Multilingual Sentiment Classifiers. Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), 2022.
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
MetaICL: Learning to Learn In Context. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Augmenting Poetry Composition with Verse by Verse. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.