
Sequence Parallelism: Long Sequence Training from System Perspective

Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Abstract

Self-attention suffers from quadratic memory requirements with respect to the sequence length. In this work, we propose sequence parallelism, a memory-efficient parallelism method that breaks the input sequence length limitation and enables training with longer sequences on GPUs efficiently. Our approach is compatible with most existing parallelism methods. More importantly, we no longer require a single device to hold the whole sequence. Specifically, we split the input sequence into multiple chunks and feed each chunk to its corresponding device (i.e., GPU). To compute the attention output, we integrate ring-style communication with the self-attention calculation and propose Ring Self-Attention (RSA). Experiments show that sequence parallelism scales well with batch size and sequence length. Compared with tensor parallelism, our approach achieves a 13.7× larger maximum batch size and a 3.0× longer maximum sequence length when scaling up to 64 NVIDIA P100 GPUs.
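
The two ring passes behind RSA can be illustrated with a small, single-process sketch. The NumPy code below simulates P devices as list entries: each "device" keeps its own query chunk, while key chunks (first ring pass) and value chunks (second ring pass) are rotated around the ring, so every device attends over the full sequence without ever storing it locally. This is only a minimal sketch under those assumptions; the function and variable names (ring_self_attention, q_chunks, etc.) are illustrative and not taken from the paper's released implementation, and a real sequence-parallel setup would replace the list rotation with point-to-point GPU communication.

```python
import numpy as np


def ring_self_attention(q_chunks, k_chunks, v_chunks):
    """Simulate Ring Self-Attention (RSA) across P logical devices.

    Each device owns one sequence chunk of Q, K and V. Key chunks (first ring
    pass) and value chunks (second ring pass) are rotated around the ring so
    that every device computes attention over the full sequence while only
    ever holding one chunk of K or V at a time.
    """
    p = len(q_chunks)                 # number of simulated devices
    d = q_chunks[0].shape[-1]         # per-head hidden dimension
    chunk_len = q_chunks[0].shape[0]  # local sequence length on each device

    # Ring pass 1: circulate key chunks to assemble the full attention scores.
    scores = [[None] * p for _ in range(p)]  # scores[i][j] = Q_i @ K_j^T / sqrt(d)
    k_ring = list(k_chunks)
    for step in range(p):
        for i in range(p):
            j = (i + step) % p               # key chunk currently on device i
            scores[i][j] = q_chunks[i] @ k_ring[i].T / np.sqrt(d)
        k_ring = k_ring[1:] + k_ring[:1]     # "send" local key chunk to the next device

    # Each device normalises over the full sequence dimension.
    attn = []
    for i in range(p):
        s = np.concatenate(scores[i], axis=-1)
        s = s - s.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(s)
        attn.append(e / e.sum(axis=-1, keepdims=True))

    # Ring pass 2: circulate value chunks and accumulate the local output.
    out = [np.zeros_like(q) for q in q_chunks]
    v_ring = list(v_chunks)
    for step in range(p):
        for i in range(p):
            j = (i + step) % p               # value chunk currently on device i
            out[i] += attn[i][:, j * chunk_len:(j + 1) * chunk_len] @ v_ring[i]
        v_ring = v_ring[1:] + v_ring[:1]
    return out


if __name__ == "__main__":
    # Sanity check: the chunked ring result matches ordinary self-attention.
    rng = np.random.default_rng(0)
    p, chunk_len, d = 4, 8, 16
    q, k, v = (rng.standard_normal((p * chunk_len, d)) for _ in range(3))

    split = lambda x: [x[i * chunk_len:(i + 1) * chunk_len] for i in range(p)]
    ring_out = np.concatenate(ring_self_attention(split(q), split(k), split(v)))

    s = q @ k.T / np.sqrt(d)
    s = s - s.max(axis=-1, keepdims=True)
    ref = (np.exp(s) / np.exp(s).sum(axis=-1, keepdims=True)) @ v
    assert np.allclose(ring_out, ref)
```

The key design point the sketch tries to capture is that no device ever materialises the full key or value tensors: memory per device stays proportional to the chunk length, while the ring exchange supplies the rest of the sequence one chunk at a time.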
