Small Models, Smarter Learning: The Power of Joint Task Training
Multi-task learning improves generalization, but when does it reduce the model capacity required to learn? We present a systematic study of how joint training shifts the learning transition (the minimum model size at which a task can be learned), using nested arithmetic (ListOps) and permutation groups as controlled testbeds. Certain task pairings dramatically reduce model size requirements: combining easy operations (MAX, MIN, PROD) with hard ones (modular addition, permutation products) enables learning with 2-7 times fewer parameters. Crucially, we also identify when synergies fail: pairing structurally similar hard tasks (e.g., ADD with alternating-sign NADD) provides no benefit, nor does pairing tasks that lack shared computational primitives. PCA of the learned embeddings reveals that successful joint training induces structured number representations (ordering, parity, modular structure) that are absent in single-task models. Transfer experiments confirm these representations are causal: models pretrained on easy tasks learn addition at model sizes 7 times smaller. Our results establish that task compatibility, not mere diversity, determines whether joint training reduces capacity requirements, providing quantitative guidance for curriculum design.
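To make the testbed concrete, here is a minimal sketch of ListOps-style nested expressions that mix an easy operation (MAX, MIN) with a hard one (modular ADD). The generator, token format, digit range, and nesting depth are illustrative assumptions for this post, not the paper's exact data pipeline.

```python
import random

# Hypothetical ListOps-style generator (illustrative, not the paper's exact setup).
# Operands are digits 0-9; "easy" ops (MAX, MIN) reduce by comparison,
# while the "hard" op (ADD) is addition mod 10.
OPS = {
    "MAX": max,
    "MIN": min,
    "ADD": lambda *xs: sum(xs) % 10,  # modular addition: the hard task
}

def sample_expr(depth=2, arity=3):
    """Recursively build a nested expression as a token list plus its value."""
    if depth == 0:
        d = random.randint(0, 9)
        return [str(d)], d
    op = random.choice(list(OPS))
    tokens, values = [f"[{op}"], []
    for _ in range(arity):
        sub_tokens, sub_value = sample_expr(depth - 1, arity)
        tokens += sub_tokens
        values.append(sub_value)
    tokens.append("]")
    return tokens, OPS[op](*values)

if __name__ == "__main__":
    toks, val = sample_expr()
    print(" ".join(toks), "->", val)  # e.g. "[MAX [ADD 3 5 7 ] [MIN 2 9 4 ] 6 ] -> 6"
```

Because the easy and hard tasks share the same digit tokens and nesting syntax, joint training on both forces a single embedding table to serve comparison and modular arithmetic at once, which is the kind of shared structure the abstract credits for the reduced capacity requirements.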
View on arXiv