
Multi-task learning for natural language processing in the 2020s: where are we going?
  Joseph Worsham, Jugal Kalita (22 July 2020) · arXiv:2007.16008

Papers citing "Multi-task learning for natural language processing in the 2020s: where are we going?" (28 papers)
Transfer Learning Study of Motion Transformer-based Trajectory Predictions
  Lars Ullrich, Alex McMaster, Knut Graichen (12 Apr 2024)
Multi-Task Learning for Features Extraction in Financial Annual Reports
  Syrielle Montariol, Matej Martinc, Andraz Pelicon, Senja Pollak, Boshko Koloski, Igor Loncarski, Aljoša Valentinčič (08 Apr 2024)
Learning to Maximize Mutual Information for Chain-of-Thought Distillation
  Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding (05 Mar 2024)
A Unified Causal View of Instruction Tuning
  Luyao Chen, Wei Huang, Ruqing Zhang, Wei Chen, Jiafeng Guo, Xueqi Cheng (09 Feb 2024)
OmniDialog: An Omnipotent Pre-training Model for Task-Oriented Dialogue System
  Mingtao Yang, See-Kiong Ng, Jinlan Fu (28 Dec 2023)
Challenges and Opportunities of Using Transformer-Based Multi-Task Learning in NLP Through ML Lifecycle: A Survey
  Lovre Torbarina, Tin Ferkovic, Lukasz Roguski, Velimir Mihelčić, Bruno Šarlija, Z. Kraljevic (16 Aug 2023)
Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and Toxic Language Detection
  Angel Felipe Magnossão de Paula, Paolo Rosso, Damiano Spina (07 Jul 2023)
Extending Memory for Language Modelling
  A. Nugaliyadde (19 May 2023)
Less is More: Selective Layer Finetuning with SubTuning
  Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai Shalev-Shwartz, Eran Malach (13 Feb 2023)
A Survey on Arabic Named Entity Recognition: Past, Recent Advances, and Future Trends
  Xiaoye Qu, Yingjie Gu, Qingrong Xia, Zechang Li, Zhefeng Wang, Baoxing Huai (07 Feb 2023)
TCBERT: A Technical Report for Chinese Topic Classification BERT
  Ting Han, Kunhao Pan, Xinyu Chen, Dingjie Song, Yuchen Fan, Xinyu Gao, Ruyi Gan, Jiaxing Zhang (21 Nov 2022)
Cold Start Streaming Learning for Deep Networks
  Cameron R. Wolfe, Anastasios Kyrillidis (09 Nov 2022)
MVP: Multi-task Supervised Pre-training for Natural Language Generation
  Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen (24 Jun 2022)
A Domain-adaptive Pre-training Approach for Language Bias Detection in News
  Jan-David Krieger, Timo Spinde, Terry Ruas, Juhi Kulshrestha, Bela Gipp (22 May 2022)
Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine MRI Reports
  Arijit Sehanobish, M. Sandora, Nabila Abraham, Jayashri Pawar, Danielle Torres, Anasuya Das, M. Becker, Richard Herzog, Benjamin Odry, Ron Vianu (06 May 2022)
Efficient Extraction of Pathologies from C-Spine Radiology Reports using Multi-Task Learning
  Arijit Sehanobish, Nathaniel K. Brown, Ishita Daga, Jayashri Pawar, Danielle Torres, Anasuya Das, M. Becker, Richard Herzog, Benjamin Odry, Ron Vianu (09 Apr 2022)
Bench-Marking And Improving Arabic Automatic Image Captioning Through The Use Of Multi-Task Learning Paradigm
  Muhy Eddin Za'ter, Bashar Talafha (11 Feb 2022)
Generative multitask learning mitigates target-causing confounding
  Taro Makino, Krzysztof J. Geras, Kyunghyun Cho (08 Feb 2022)
New Tight Relaxations of Rank Minimization for Multi-Task Learning
  Wei Chang, Feiping Nie, Rong Wang, Xuelong Li (09 Dec 2021)
Learning to Transfer for Traffic Forecasting via Multi-task Learning
  Y. Lu (27 Nov 2021)
Finetuned Language Models Are Zero-Shot Learners
  Jason W. Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le (03 Sep 2021)
MEDIC: A Multi-Task Learning Dataset for Disaster Image Classification
  Firoj Alam, Tanvirul Alam, Md. Arid Hasan, A. Hasnat, Muhammad Imran, Ferda Ofli (29 Aug 2021)
AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing
  Katikapalli Subramanyam Kalyan, A. Rajasekharan, S. Sangeetha (12 Aug 2021)
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing
  David Peer, Sebastian Stabinger, Stefan Engl, A. Rodríguez-Sánchez (31 May 2021)
Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling
  Yutai Hou, Y. Lai, Cheng Chen, Wanxiang Che, Ting Liu (25 May 2021)
WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction
  Jay Mundra, Rohan Gupta, Sagnik Mukherjee (20 Apr 2021)
FewJoint: A Few-shot Learning Benchmark for Joint Language Understanding
  Yutai Hou, Jiafeng Mao, Y. Lai, Cheng Chen, Wanxiang Che, Zhigang Chen, Ting Liu (17 Sep 2020)
MT-Clinical BERT: Scaling Clinical Information Extraction with Multitask Learning
  Andriy Mulyar, Bridget T. McInnes (21 Apr 2020)