Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data
Versions: v1, v2, v3 (latest)

International Conference on Learning Representations (ICLR), 2020
19 September 2020
Jonathan Pilault, Amine Elhattami, C. Pal
Communities: CLL, MoE
Links: arXiv (abs) · PDF · HTML · GitHub (56★)

Papers citing "Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data"

Showing 3 of 53 citing papers

Supervising Model Attention with Human Explanations for Robust Natural Language Inference
AAAI Conference on Artificial Intelligence (AAAI), 2021
Joe Stacey, Yonatan Belinkov, Marek Rei
16 Apr 2021

Self-Explaining Structures Improve NLP Models
Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Leilei Gan, Jiwei Li
Communities: MILM, XAI, LRM, FAtt
03 Dec 2020

Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation
Journal of Computer Science and Technology (JCST), 2020
Yige Xu, Xipeng Qiu, L. Zhou, Xuanjing Huang
24 Feb 2020