Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning

20 November 2021 · arXiv:2111.10603
Baijiong Lin, Feiyang Ye, Yu Zhang, Ivor W. Tsang

Papers citing "Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning"

14 papers shown

MTL-UE: Learning to Learn Nothing for Multi-Task Learning
Yi Yu, Song Xia, Siyuan Yang, Chenqi Kong, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex Chichung Kot
08 May 2025

Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learning
Hanwen Zhong, Jiaxin Chen, Yutong Zhang, Di Huang, Yunhong Wang
Communities: MoE
12 Jan 2025

Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate
Zhiqi Bu, Xiaomeng Jin, Bhanukiran Vinzamuri, Anil Ramakrishna, Kai-Wei Chang, V. Cevher, Mingyi Hong
Communities: MU
29 Oct 2024

Using dynamic loss weighting to boost improvements in forecast stability
Daan Caljon, Jeff Vercauteren, Simon De Vos, Wouter Verbeke, Jente Van Belle
26 Sep 2024

Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences
Nikolaos Dimitriadis, Pascal Frossard, F. Fleuret
Communities: MoE
10 Jul 2024

CoTBal: Comprehensive Task Balancing for Multi-Task Visual Instruction Tuning
Yanqi Dai, Dong Jing, Nanyi Fei, Zhiwu Lu, Guoxing Yang
07 Mar 2024

Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement
Dayou Mao, Yuhao Chen, Yifan Wu, Maximilian Gilles, Alexander Wong
Communities: AAML
05 Feb 2024

A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization
Feiyang Ye, Baijiong Lin, Xiao-Qun Cao, Yu Zhang, Ivor Tsang
17 Jan 2024

Concurrent ischemic lesion age estimation and segmentation of CT brain using a Transformer-based network
A. Marcus, P. Bentley, Daniel Rueckert
Communities: MedIm
21 Jun 2023

Sample-Level Weighting for Multi-Task Learning with Auxiliary Tasks
Emilie Grégoire, M. H. Chaudhary, Sam Verboven
07 Jun 2023

Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond
Zhu Liu, Jinyuan Liu, Guanyao Wu, Long Ma, Xin-Yue Fan, Risheng Liu
11 May 2023

Pareto Manifold Learning: Tackling multiple tasks via ensembles of single-task models
Nikolaos Dimitriadis, P. Frossard, François Fleuret
18 Oct 2022

Efficiently Identifying Task Groupings for Multi-Task Learning
Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, Chelsea Finn
10 Sep 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Communities: ODL
15 Sep 2016