
Single layer tiny Co⁴ outpaces GPT-2 and GPT-BERT

Main: 5 pages · Bibliography: 4 pages · Appendix: 1 page · 1 figure · 4 tables
Abstract

We show that a tiny Co⁴ machine (Adeel, 2025) with a single layer, two heads, and 8M parameters, operating at an approximate cost of O(N) (where N is the number of input tokens), outpaces the BabyLM Challenge baselines GPT-2 (124M, 12 layers, O(N²)) and GPT-BERT (30M, 12 layers, O(N²)) in just two epochs, whereas both baselines are trained for ten. Co⁴ achieves orders-of-magnitude greater training efficiency on 10M tokens, demonstrating highly sample-efficient pretraining. Using the BabyLM Challenge evaluation pipeline across complex benchmarks, Co⁴ exhibits strong zero-shot and fine-tuning performance on SuperGLUE tasks. Specifically, Co⁴ outperforms GPT-2 on 5 out of 7 zero-shot metrics and 6 out of 7 fine-tuning tasks, and GPT-BERT on 4 out of 7 metrics in both settings. These results suggest a need to rethink prevailing deep learning paradigms and the associated scaling laws.
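As a rough, back-of-envelope illustration of the asymptotic gap the abstract cites (this is not the authors' cost model for Co⁴ or the baselines; the interaction counts below are hypothetical placeholders), the following sketch contrasts how O(N) and O(N²) per-sequence costs diverge as the number of input tokens N grows:

```python
# Hypothetical illustration of the O(N) vs O(N^2) cost gap mentioned in the abstract.
# Constants and functions here are placeholders, not the Co^4 / GPT-2 cost model.

def cost_linear(n_tokens: int) -> int:
    """Approximate work for a mechanism whose cost scales as O(N)."""
    return n_tokens

def cost_quadratic(n_tokens: int) -> int:
    """Approximate work for standard pairwise self-attention, O(N^2)."""
    return n_tokens * n_tokens

for n in (128, 1024, 8192):
    ratio = cost_quadratic(n) / cost_linear(n)
    print(f"N={n:>5}: quadratic/linear cost ratio ≈ {ratio:,.0f}x")
```

Under these assumptions the ratio grows linearly with N, which is why a linear-cost model can be trained far more cheaply at long sequence lengths.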
