Efficient AlphaFold2 Training using Parallel Evoformer and Branch Parallelism

1 November 2022
Guoxia Wang
Zhihua Wu
Xiaomin Fang
Yingfei Xiang
Yiqun Liu
Dianhai Yu
Yanjun Ma
arXiv:2211.00235
Abstract

The accuracy of AlphaFold2, a frontier end-to-end structure prediction system, is already close to that of experimental determination techniques. However, due to its complex model architecture and large memory consumption, training AlphaFold2 from scratch requires substantial computational resources and time. More efficient AlphaFold2 training could accelerate research across the life sciences. In this paper, we propose a Parallel Evoformer and Branch Parallelism to speed up the training of AlphaFold2. We conduct extensive experiments on UniFold, implemented in PyTorch, and HelixFold, implemented in PaddlePaddle; Branch Parallelism improves training performance by 38.67% and 36.93%, respectively. We also demonstrate that the accuracy of the Parallel Evoformer is on par with AlphaFold2 on the CASP14 and CAMEO datasets. The source code is available at https://github.com/PaddlePaddle/PaddleFleetX.
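To make the two ideas concrete, here is a minimal PyTorch-style sketch, not the authors' implementation (which is released in PaddlePaddle at the link above). It assumes the key property the abstract describes: in a Parallel Evoformer block, the MSA branch and the pair branch both read the block's inputs rather than chaining one after the other, so the two branches are independent and can be placed on separate devices (Branch Parallelism). The class name ParallelEvoformerBlock, the dimensions d_msa/d_pair, and the simple MLP branch bodies are hypothetical stand-ins for the real attention and triangle-update stacks.

```python
import torch
import torch.nn as nn


class ParallelEvoformerBlock(nn.Module):
    """Toy Parallel Evoformer block: the MSA and pair branches each
    apply a residual update to this block's *inputs*, so neither
    branch waits on the other's output."""

    def __init__(self, d_msa: int = 256, d_pair: int = 128):
        super().__init__()
        # Hypothetical stand-ins for the real MSA/pair update stacks
        # (row/column attention, triangle multiplicative updates, ...).
        self.msa_branch = nn.Sequential(
            nn.LayerNorm(d_msa), nn.Linear(d_msa, d_msa),
            nn.ReLU(), nn.Linear(d_msa, d_msa),
        )
        self.pair_branch = nn.Sequential(
            nn.LayerNorm(d_pair), nn.Linear(d_pair, d_pair),
            nn.ReLU(), nn.Linear(d_pair, d_pair),
        )

    def forward(self, msa: torch.Tensor, pair: torch.Tensor):
        # Both updates start from the block inputs. When the branches
        # live on different GPUs, CUDA kernel launches are asynchronous
        # from the host, so the two branches overlap in time.
        new_msa = msa + self.msa_branch(msa)
        new_pair = pair + self.pair_branch(pair)
        return new_msa, new_pair


if __name__ == "__main__" and torch.cuda.device_count() >= 2:
    block = ParallelEvoformerBlock()
    # Branch Parallelism: place each independent branch on its own device.
    block.msa_branch.to("cuda:0")
    block.pair_branch.to("cuda:1")
    msa = torch.randn(8, 64, 256, device="cuda:0")    # (num_seqs, num_res, d_msa)
    pair = torch.randn(64, 64, 128, device="cuda:1")  # (num_res, num_res, d_pair)
    new_msa, new_pair = block(msa, pair)
    print(new_msa.shape, new_pair.shape)
```

The sketch only illustrates why removing the intra-block dependency between the two representations exposes parallelism; the paper's reported speedups come from applying this branch placement across the full Evoformer stack during training.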
