
scDataset: Scalable Data Loading for Deep Learning on Large-Scale Single-Cell Omics

Main: 9 pages
5 figures
Bibliography: 3 pages
2 tables
Abstract

Modern single-cell datasets now comprise hundreds of millions of cells, presenting significant challenges for training deep learning models that require shuffled, memory-efficient data loading. While the AnnData format is the community standard for storing single-cell datasets, existing data loading solutions for AnnData are often inadequate: some require loading all data into memory, others convert to dense formats that increase storage demands, and many are hampered by slow random disk access. We present scDataset, a PyTorch IterableDataset that operates directly on one or more AnnData files without the need for format conversion. The core innovation is a combination of block sampling and batched fetching, which together balance randomness and I/O efficiency. On the Tahoe 100M dataset, scDataset achieves up to a 48× speed-up over AnnLoader, a 27× speed-up over HuggingFace Datasets, and an 18× speed-up over BioNeMo in single-core settings. These advances democratize large-scale single-cell model training for the broader research community.
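
To illustrate the block sampling and batched fetching idea described above, the following is a minimal single-process sketch, not the actual scDataset API: the class name BlockShuffleDataset and the parameters block_size and fetch_factor are illustrative assumptions. Blocks of contiguous rows are shuffled at the block level (cheap, sequential disk reads), several blocks are fetched per disk access, and the fetched rows are re-shuffled in memory to approximate per-example randomness.

    # Hypothetical sketch of block sampling + batched fetching over a backed
    # AnnData file; names and parameters are illustrative, not the scDataset API.
    import numpy as np
    import anndata as ad
    import torch
    from torch.utils.data import IterableDataset

    class BlockShuffleDataset(IterableDataset):
        def __init__(self, h5ad_path, block_size=64, fetch_factor=16, seed=0):
            self.h5ad_path = h5ad_path
            self.block_size = block_size      # contiguous rows read per block
            self.fetch_factor = fetch_factor  # blocks grouped into one fetch
            self.seed = seed

        def __iter__(self):
            # Backed mode keeps the matrix on disk; only requested rows are read.
            adata = ad.read_h5ad(self.h5ad_path, backed="r")
            n_obs = adata.n_obs
            rng = np.random.default_rng(self.seed)

            # Block sampling: shuffle block start offsets rather than single
            # rows, so each read touches a contiguous slice of the file.
            starts = np.arange(0, n_obs, self.block_size)
            rng.shuffle(starts)

            # Batched fetching: load several blocks per disk access, then
            # shuffle the fetched rows in memory to recover randomness.
            for i in range(0, len(starts), self.fetch_factor):
                idx = np.concatenate([
                    np.arange(s, min(s + self.block_size, n_obs))
                    for s in starts[i:i + self.fetch_factor]
                ])
                X = adata[np.sort(idx)].to_memory().X
                X = np.asarray(X.todense()) if hasattr(X, "todense") else np.asarray(X)
                for row in X[rng.permutation(X.shape[0])]:
                    yield torch.as_tensor(row, dtype=torch.float32)

In this sketch, a larger block_size increases contiguous-read throughput at the cost of within-block correlation, while a larger fetch_factor mixes more blocks per fetch and restores shuffling quality; trading these two off against each other is the balance of randomness and I/O efficiency the abstract refers to.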

@article{dascenzo2025_2506.01883,
  title={scDataset: Scalable Data Loading for Deep Learning on Large-Scale Single-Cell Omics},
  author={Davide D'Ascenzo and Sebastiano Cultrera di Montesano},
  journal={arXiv preprint arXiv:2506.01883},
  year={2025}
}