Accelerating Deep Neural Network Training via Distributed Hybrid Order Optimization

2 May 2025
Shunxian Gu
Chaoqun You
Bangbang Ren
Lailong Luo
Junxu Xia
Deke Guo
Abstract

Scaling deep neural network (DNN) training across more devices can reduce time-to-solution, but doing so is impractical for users with limited computing resources. FOSI, a hybrid-order optimizer, converges faster than conventional optimizers by exploiting both gradient and curvature information when updating the DNN model, and thus offers a new opportunity to accelerate DNN training in resource-constrained settings. In this paper, we explore its distributed design, namely DHO₂, which distributes the calculation of curvature information and updates the model with partial curvature information, accelerating DNN training at a low per-device memory burden. To further reduce the training time, we design a novel strategy that parallelizes the curvature calculation and the model update on different devices. Experimentally, our distributed design achieves an approximately linear reduction in per-device memory burden as the number of devices increases, while delivering a 1.4×∼2.1× speedup in total training time compared with other distributed designs based on conventional first- and second-order optimizers.
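The hybrid-order idea that FOSI (and hence DHO₂) builds on can be illustrated with a small single-device sketch: approximate a few extreme Hessian eigenpairs with Lanczos iterations, take a Newton-like step inside that low-dimensional curvature subspace, and apply a plain first-order step on the orthogonal complement. The sketch below is illustrative only; the function name hybrid_order_step, the choice of a plain gradient step for the complement, and all hyperparameters are assumptions, not the paper's implementation, and the distributed sharding of the curvature computation that defines DHO₂ is omitted.

import numpy as np

def hybrid_order_step(params, grad, hvp, k=4, lr=0.01, n_iter=20):
    # Illustrative hybrid-order update (not the paper's implementation).
    # params : flat parameter vector (np.ndarray)
    # grad   : gradient at params (np.ndarray)
    # hvp    : callable v -> Hessian-vector product H @ v (e.g. via autodiff)
    # k      : number of extreme Hessian eigenpairs to approximate
    d = params.size

    # Curvature information: approximate extreme eigenpairs with Lanczos.
    # (In a distributed design this part would be sharded across devices.)
    Q = np.zeros((d, n_iter))
    alphas, betas = np.zeros(n_iter), np.zeros(n_iter - 1)
    q = np.random.default_rng(0).standard_normal(d)
    q /= np.linalg.norm(q)
    for i in range(n_iter):
        Q[:, i] = q
        w = hvp(q)
        alphas[i] = q @ w
        w = w - alphas[i] * q - (betas[i - 1] * Q[:, i - 1] if i > 0 else 0.0)
        if i < n_iter - 1:
            betas[i] = np.linalg.norm(w)
            q = w / (betas[i] + 1e-12)
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    evals, evecs = np.linalg.eigh(T)
    idx = np.argsort(np.abs(evals))[-k:]      # k largest-magnitude Ritz values
    lam = evals[idx]
    V = Q @ evecs[:, idx]                     # approximate Hessian eigenvectors

    # Model update with partial curvature information:
    # Newton-like step in the curvature subspace, first-order step elsewhere.
    coeffs = V.T @ grad
    newton = V @ (coeffs / np.abs(lam))       # scale by inverse |eigenvalue|
    g_rest = grad - V @ coeffs                # component orthogonal to the subspace
    return params - newton - lr * g_rest

In DHO₂, as described in the abstract, the curvature computation and the model update are additionally partitioned and parallelized across devices so that each device holds only part of the curvature information, which is what yields the reported near-linear reduction in per-device memory.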

@article{gu2025_2505.00982,
  title={Accelerating Deep Neural Network Training via Distributed Hybrid Order Optimization},
  author={Shunxian Gu and Chaoqun You and Bangbang Ren and Lailong Luo and Junxu Xia and Deke Guo},
  journal={arXiv preprint arXiv:2505.00982},
  year={2025}
}