Many recent breakthroughs in deep learning were achieved by training
increasingly large models on massive datasets. However, training such models
can be prohibitively expensive. For instance, the cluster used to train GPT-3
costs over \$250 million. As a result, most researchers cannot afford to train
state-of-the-art models and contribute to their development. Hypothetically, a
researcher could crowdsource the training of large neural networks with
thousands of regular PCs provided by volunteers. The raw computing power of a
hundred thousand \$2500 desktops dwarfs that of a \$250M server pod, but one
cannot utilize that power efficiently with conventional distributed training
methods. In this work, we propose Learning@home: a novel neural network
training paradigm designed to handle large numbers of poorly connected
participants. We analyze the performance, reliability, and architectural
constraints of this paradigm and compare it against existing distributed
training techniques.