
BurTorch: Revisiting Training from First Principles by Coupling Autodiff, Math Optimization, and Systems

Abstract

In this work, we introduce BurTorch, a compact high-performance framework designed to optimize Deep Learning (DL) training on single-node workstations through an exceptionally efficient CPU-based backpropagation (Rumelhart et al., 1986; Linnainmaa, 1970) implementation. Although modern DL frameworks rely on compiler-like optimizations internally, BurTorch takes a different path. It adopts a minimalist design and demonstrates that, in these circumstances, classical compiled programming languages can play a significant role in DL research. By eliminating the overhead of large frameworks and making efficient implementation choices, BurTorch achieves orders-of-magnitude improvements in performance and memory efficiency when computing ∇f(x) on a CPU. BurTorch features a compact codebase designed to achieve two key goals simultaneously. First, it provides a user experience similar to script-based programming environments. Second, it dramatically minimizes runtime overheads. In large DL frameworks, the primary source of memory overhead for relatively small computation graphs f(x) is their feature-heavy implementations. We benchmarked BurTorch against widely used DL frameworks in their execution modes: JAX (Bradbury et al., 2018), PyTorch (Paszke et al., 2019), TensorFlow (Abadi et al., 2016); and several standalone libraries: Autograd (Maclaurin et al., 2015), Micrograd (Karpathy, 2020), Apple MLX (Hannun et al., 2023). For small compute graphs, BurTorch outperforms best-practice solutions by up to ×2000 in runtime and reduces memory consumption by up to ×3500. For a miniaturized GPT-3 model (Brown et al., 2020), BurTorch achieves up to a ×20 speedup and reduces memory consumption by up to ×80 compared to PyTorch.
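
The operation the abstract benchmarks is a single backpropagation pass over a small computation graph f(x). As context only, below is a minimal, self-contained sketch of reverse-mode automatic differentiation over scalars in C++ (the compiled-language setting the paper advocates), in the spirit of the Micrograd-style baselines listed above. The types and functions here (Value, add, mul, backward) are illustrative assumptions and are not BurTorch's actual API.

// Minimal sketch of reverse-mode automatic differentiation (backpropagation)
// over a tiny scalar computation graph. Illustrative only; not BurTorch's API.
#include <cstdio>
#include <functional>
#include <memory>
#include <vector>

struct Value {
    double data = 0.0;                            // forward value
    double grad = 0.0;                            // accumulated d(output)/d(this)
    std::function<void()> backward_fn;            // local backward rule
    std::vector<std::shared_ptr<Value>> parents;  // inputs of this node
};

using V = std::shared_ptr<Value>;

V make(double x) { auto v = std::make_shared<Value>(); v->data = x; return v; }

V add(V a, V b) {
    auto out = make(a->data + b->data);
    out->parents = {a, b};
    Value* o = out.get();  // raw pointer avoids a shared_ptr cycle in the closure
    out->backward_fn = [a, b, o]() { a->grad += o->grad; b->grad += o->grad; };
    return out;
}

V mul(V a, V b) {
    auto out = make(a->data * b->data);
    out->parents = {a, b};
    Value* o = out.get();
    out->backward_fn = [a, b, o]() {
        a->grad += b->data * o->grad;  // d(a*b)/da = b
        b->grad += a->data * o->grad;  // d(a*b)/db = a
    };
    return out;
}

// Reverse pass: visit nodes in reverse topological order and apply local rules.
void backward(V root) {
    std::vector<V> topo;
    std::vector<const Value*> visited;
    std::function<void(const V&)> build = [&](const V& v) {
        for (const Value* p : visited) if (p == v.get()) return;  // already seen
        visited.push_back(v.get());
        for (auto& p : v->parents) build(p);
        topo.push_back(v);
    };
    build(root);
    root->grad = 1.0;
    for (auto it = topo.rbegin(); it != topo.rend(); ++it)
        if ((*it)->backward_fn) (*it)->backward_fn();
}

int main() {
    // f(x, y) = x * y + x  =>  df/dx = y + 1 = 5, df/dy = x = 3
    V x = make(3.0), y = make(4.0);
    V f = add(mul(x, y), x);
    backward(f);
    std::printf("f = %g, df/dx = %g, df/dy = %g\n", f->data, x->grad, y->grad);
    return 0;
}

The graph-and-closure structure above is what dynamic ("eager") autodiff frameworks build per forward pass; the paper's claim is that, for graphs this small, the bookkeeping of large feature-rich frameworks dominates runtime and memory, which a lean compiled implementation avoids.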

@article{burlachenko2025_2503.13795,
  title={BurTorch: Revisiting Training from First Principles by Coupling Autodiff, Math Optimization, and Systems},
  author={Konstantin Burlachenko and Peter Richtárik},
  journal={arXiv preprint arXiv:2503.13795},
  year={2025}
}