MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models

17 February 2025
Zhen Zhang
Yifan Yang
Kai Zhen
Nathan Susanj
Athanasios Mouchtaris
Siegfried Kunzmann
Zheng Zhang
Abstract

Large language models have demonstrated exceptional capabilities across diverse tasks, but their fine-tuning demands significant memory, posing challenges for resource-constrained environments. Zeroth-order (ZO) optimization provides a memory-efficient alternative by eliminating the need for backpropagation. However, ZO optimization suffers from high gradient variance, and prior research has largely focused on single-task learning, leaving its application to multi-task learning unexplored. Multi-task learning is crucial for leveraging shared knowledge across tasks to improve generalization, yet it introduces unique challenges under ZO settings, such as amplified gradient variance and collinearity. In this paper, we present MaZO, the first framework specifically designed for multi-task LLM fine-tuning under ZO optimization. MaZO tackles these challenges at the parameter level through two key innovations: a weight importance metric to identify critical parameters and a multi-task weight update mask to selectively update these parameters, reducing the dimensionality of the parameter space and mitigating task conflicts. Experiments demonstrate that MaZO achieves state-of-the-art performance, surpassing even multi-task learning methods designed for first-order optimization.
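To make the update rule concrete, the sketch below shows one masked, SPSA-style zeroth-order step in PyTorch, in the spirit of MeZO-style in-place perturbation. The function name `mazo_step`, the `update_mask` argument, and the mask construction are illustrative assumptions for exposition, not the paper's released implementation; in MaZO the mask would be derived from the weight importance metric described in the abstract.

```python
# Minimal sketch of a masked zeroth-order (SPSA-style) update step.
# Names such as `mazo_step` and `update_mask` are hypothetical, chosen for
# illustration only; they do not come from the MaZO codebase.
import torch

@torch.no_grad()
def mazo_step(params, update_mask, loss_fn, lr=1e-6, eps=1e-3, seed=0):
    """One masked ZO step: perturb only the selected parameters, estimate the
    directional derivative from two forward passes, and apply the update.

    params      : list of parameter tensors
    update_mask : list of 0/1 float tensors, same shapes as `params`,
                  marking the "critical" parameters to update
    loss_fn     : closure that runs a forward pass and returns the loss
    """
    def perturb(scale):
        # Re-seeding regenerates the same random direction z each time,
        # so it never has to be stored (memory-efficient, MeZO-style trick).
        torch.manual_seed(seed)
        for p, m in zip(params, update_mask):
            z = torch.randn_like(p)
            p.add_(scale * eps * z * m)  # mask restricts the perturbation

    perturb(+1.0); loss_plus = loss_fn()    # loss at theta + eps * (z * m)
    perturb(-2.0); loss_minus = loss_fn()   # loss at theta - eps * (z * m)
    perturb(+1.0)                           # restore original parameters

    # Two-point estimate of the directional derivative along the masked z.
    grad_scale = (loss_plus - loss_minus) / (2 * eps)

    torch.manual_seed(seed)
    for p, m in zip(params, update_mask):
        z = torch.randn_like(p)
        p.add_(-lr * grad_scale * z * m)    # update only the selected weights
```

Restricting both the perturbation and the update to the masked coordinates is what reduces the effective dimensionality of the ZO gradient estimate, which is the mechanism the abstract credits for lowering gradient variance and easing task conflicts in the multi-task setting.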

@article{zhang2025_2502.11513,
  title={MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models},
  author={Zhen Zhang and Yifan Yang and Kai Zhen and Nathan Susanj and Athanasios Mouchtaris and Siegfried Kunzmann and Zheng Zhang},
  journal={arXiv preprint arXiv:2502.11513},
  year={2025}
}