Large language models (LLMs) are considered an important approach toward foundational machine intelligence, achieving remarkable success in Natural Language Processing and multimodal tasks, among others. However, the carbon footprint and financial cost of heavy pre-training computation are non-negligible issues. Progressive training methods, inspired by the neurogenesis process that grows neural structures, have shown potential to accelerate LLM pre-training. However, the algorithms, implementation, and practices for progressively training LLMs beyond 100B parameters remain underexplored. In this paper, we show that our model, namely FLM-101B, trained with our growth strategy under a budget of $100K, reaches 80% of the baselines' performances with only 10% of their floating-point operations. We believe that further studies on progressive training will benefit the community by cutting down the costs and promoting green AI. The checkpoint of FLM-101B is released at this https URL.