Uniform Loss vs. Specialized Optimization: A Comparative Analysis in Multi-Task Learning

Specialized Multi-Task Optimizers (SMTOs) balance task learning in Multi-Task Learning by addressing issues such as conflicting gradients and differing gradient norms, which hinder equal-weighted task training. However, recent critiques suggest that equally weighted tasks can achieve competitive results compared to SMTOs, arguing that previous SMTO results were influenced by poor hyperparameter optimization and a lack of regularization. In this work, we evaluate these claims through an extensive empirical evaluation of SMTOs, including some of the latest methods, on more complex multi-task problems to clarify this behavior. Our findings indicate that SMTOs perform well compared to uniform loss and that fixed weights can achieve competitive performance compared to SMTOs. Furthermore, we demonstrate why uniform loss performs similarly to SMTOs in some instances. The code will be made publicly available.
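To make the two issues mentioned above concrete, the following minimal NumPy sketch (an illustration, not code from the paper) contrasts uniform loss weighting with a hypothetical fixed-weight scheme on two toy per-task gradients, and checks for the "conflicting gradients" condition (negative cosine similarity) that SMTOs are designed to mitigate:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy per-task gradients w.r.t. shared parameters (illustrative values only).
g1 = np.array([1.0, 2.0])    # gradient of task-1 loss
g2 = np.array([-1.0, 0.4])   # gradient of task-2 loss

# Conflicting gradients: the tasks pull shared parameters in opposing
# directions when their cosine similarity is negative.
conflict = cosine(g1, g2) < 0

# Uniform loss: every task weighted equally, so the update is a plain average.
uniform_update = 0.5 * (g1 + g2)

# Fixed weights (hypothetical values): a static reweighting, one of the
# simple baselines the abstract reports as competitive with SMTOs.
w = np.array([0.8, 0.2])
fixed_update = w[0] * g1 + w[1] * g2

print(conflict)        # True for these toy gradients
print(uniform_update)  # [0.  1.2]
```

SMTOs go further than fixed weights by adapting the combination at each step (e.g., from the observed gradient geometry); this sketch only shows the static baselines the abstract compares them against.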
@article{gama2025_2505.10347,
  title={Uniform Loss vs. Specialized Optimization: A Comparative Analysis in Multi-Task Learning},
  author={Gabriel S. Gama and Valdir Grassi Jr},
  journal={arXiv preprint arXiv:2505.10347},
  year={2025}
}