Stabilizing Transformer Training Through Consensus

Shyam Venkatasubramanian, Sean Moushegian, Michael Lin, Mir Park, Ankit Singhal, Connor Lee
Main: 8 pages · 10 figures · 12 tables · Bibliography: 3 pages · Appendix: 16 pages
Abstract

Standard attention-based transformers are known to exhibit training instability when the learning rate is overspecified, i.e., set too high. While various methods have been proposed to improve resilience to such overspecification by modifying the optimization procedure, architectural innovations toward this end remain underexplored. In this work, we show that the consensus mechanism, a drop-in replacement for attention, stabilizes transformer training across a wider effective range of learning rates than standard attention. We formulate consensus as a graphical model and provide extensive empirical evidence of improved stability across learning-rate sweeps on text, DNA, and protein modalities. We further propose a hybrid consensus-attention framework that preserves performance while improving stability, and we give a theoretical analysis characterizing the properties of consensus.
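
The abstract does not detail the consensus update itself, so the sketch below should not be read as the authors' method. It is a minimal, hedged illustration of one common reading of "consensus": tokens iteratively average toward agreement over a learned similarity graph, wrapped in a layer with the same input/output contract as self-attention so it can serve as a drop-in replacement. All class names, projections, and hyperparameters here are hypothetical.

```python
# Illustrative consensus-style layer (NOT the paper's mechanism).
# Same shape contract as self-attention: (batch, seq_len, d_model) in and out.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConsensusLayer(nn.Module):
    def __init__(self, d_model: int, num_steps: int = 3, step_size: float = 0.5):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)  # hypothetical projections
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        self.num_steps = num_steps    # number of consensus iterations
        self.step_size = step_size    # mixing rate toward neighbor average

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build a row-stochastic "agreement" graph from token similarities.
        q, k = self.query(x), self.key(x)
        logits = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        w = F.softmax(logits, dim=-1)  # (batch, seq, seq), rows sum to 1
        h = self.value(x)
        # Discrete consensus dynamic: each step pulls every token toward
        # the weighted mean of its neighbors, driving tokens to agree.
        for _ in range(self.num_steps):
            h = (1 - self.step_size) * h + self.step_size * (w @ h)
        return h

# Drop-in usage: output shape matches the input, as with attention.
x = torch.randn(2, 16, 64)  # (batch, seq_len, d_model)
out = ConsensusLayer(64)(x)
assert out.shape == x.shape
```

One intuition for why such a dynamic could tolerate higher learning rates: the repeated convex averaging is a contraction toward the graph's consensus state, which bounds how far a single forward pass can amplify activations, whereas a single softmax-attention pass imposes no comparable smoothing. This reading is an assumption, not a claim from the paper.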
