
arXiv:2201.03801
Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019

11 January 2022
Zhengying Liu
Adrien Pavao
Zhen Xu
Sergio Escalera
Fabio Ferreira
Isabelle M Guyon
Sirui Hong
Frank Hutter
Rongrong Ji
Julio C. S. Jacques Junior
Ge Li
Marius Lindauer
Zhipeng Luo
Meysam Madadi
Thomas Nierhoff
Kangning Niu
Chunguang Pan
Daniel Stoll
Sébastien Treguer
Jin Wang
Peng Wang
Chenglin Wu
Youcheng Xiong
Arber Zela
Yang Zhang
Abstract

This paper reports the results and post-challenge analyses of ChaLearn's AutoDL challenge series, which helped sort out a profusion of AutoML solutions for Deep Learning (DL) that had been introduced in a variety of settings but lacked fair comparisons. All input data modalities (time series, images, videos, text, tabular) were formatted as tensors, and all tasks were multi-label classification problems. Code submissions were executed on hidden tasks with limited time and computational resources, favoring solutions that obtain results quickly. In this setting, DL methods dominated, though popular Neural Architecture Search (NAS) proved impractical. Solutions relied on fine-tuned pre-trained networks, with architectures matched to the data modality. Post-challenge tests did not reveal improvements beyond the imposed time limit. While no single component is particularly original or novel, a high-level modular organization emerged, featuring a "meta-learner", "data ingestor", "model selector", "model/learner", and "evaluator". This modularity enabled ablation studies, which revealed the importance of (off-platform) meta-learning, ensembling, and efficient data management. Experiments on heterogeneous module combinations further confirm the (local) optimality of the winning solutions. Our challenge legacy includes an everlasting benchmark (http://autodl.chalearn.org), the open-sourced code of the winners, and a free "AutoDL self-service".
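The modular organization the abstract names (meta-learner, data ingestor, model selector, model/learner, evaluator) can be sketched as a minimal anytime pipeline. This is an illustrative assumption, not the winners' actual code: every class, method, and configuration value below is hypothetical, and the "learner" is a trivial majority-class stand-in for a fine-tuned pre-trained network.

```python
import time

class DataIngestor:
    """Formats a raw dataset into (example, label) pairs (hypothetical)."""
    def ingest(self, raw):
        return [(x, y) for x, y in raw]

class MetaLearner:
    """Suggests a starting configuration from prior, off-platform
    experience with each modality (values are made up)."""
    def suggest(self, modality):
        priors = {"image": {"lr": 0.01}, "tabular": {"lr": 0.1}}
        return priors.get(modality, {"lr": 0.05})

class Learner:
    """Toy majority-class model standing in for a fine-tuned network."""
    def __init__(self, config):
        self.config = config
        self.majority = None
    def partial_fit(self, data):
        labels = [y for _, y in data]
        self.majority = max(set(labels), key=labels.count)
    def predict(self, x):
        return self.majority

class Evaluator:
    """Scores a model; in the challenge this drives anytime learning."""
    def score(self, model, data):
        return sum(model.predict(x) == y for x, y in data) / len(data)

class ModelSelector:
    """Keeps the best learner seen so far (ensembling in spirit)."""
    def __init__(self):
        self.best, self.best_score = None, -1.0
    def update(self, model, score):
        if score > self.best_score:
            self.best, self.best_score = model, score

def autodl_pipeline(raw, modality, budget_s=1.0):
    """Run the five modules under a wall-clock budget, as in the
    challenge's limited-time protocol."""
    data = DataIngestor().ingest(raw)
    config = MetaLearner().suggest(modality)
    selector = ModelSelector()
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        model = Learner(config)
        model.partial_fit(data)
        selector.update(model, Evaluator().score(model, data))
        break  # one round suffices for this toy learner
    return selector.best, selector.best_score
```

In the real solutions each module is far richer (pre-trained backbones, progressive data loading, ensembles), but the same interfaces made the paper's ablation and module-swapping studies possible.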
