arXiv:2010.11724

LID 2020: The Learning from Imperfect Data Challenge Results

17 October 2020
Yunchao Wei, Shuai Zheng, Ming-Ming Cheng, Hang Zhao, Liwei Wang, Errui Ding, Yi Yang, Antonio Torralba, Ting Liu, Guolei Sun, Wenguan Wang, Luc Van Gool, Wonho Bae, Junhyug Noh, Jinhwan Seo, Gunhee Kim, Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang, Chuangchuang Tan, Tao Ruan, Guanghua Gu, Shikui Wei, Yao Zhao, Mariia Dobko, Ostap Viniavskyi, Oles Dobosevych, Zhendong Wang, Zhenyuan Chen, Chen Gong, Huanqing Yan, Jun He
Abstract

Learning from imperfect data has become an important problem in many industrial applications, now that the research community has made profound progress in supervised learning from perfectly annotated datasets. The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate research on novel approaches that harness imperfect data and improve data efficiency during training. A massive amount of user-generated data is nowadays available on multiple internet services, and leveraging such data to improve machine learning models is a high-impact problem. We organized challenges in conjunction with the workshop. The goal of these challenges is to identify state-of-the-art approaches in the weakly supervised learning setting for object detection, semantic segmentation, and scene parsing. The challenge comprises three tracks: weakly supervised semantic segmentation (Track 1), weakly supervised scene parsing (Track 2), and weakly supervised object localization (Track 3). In Track 1, based on ILSVRC DET, we provide pixel-level annotations of 15K images from 200 categories for evaluation. In Track 2, we provide point-based annotations for the training set of ADE20K. In Track 3, based on ILSVRC CLS-LOC, we provide pixel-level annotations of 44,271 images for evaluation. In addition, we introduce a new evaluation metric proposed by Zhang et al. \cite{zhang2020rethinking}, the IoU curve, to measure the quality of the generated object localization maps. This technical report summarizes the highlights of the challenge. The challenge submission server and the leaderboard will remain open to interested researchers. More details regarding the challenge and the benchmarks are available at https://lidchallenge.github.io
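The abstract does not spell out how the IoU curve is computed; below is a minimal sketch, assuming the metric sweeps a binarization threshold over a normalized localization map and records the IoU against the ground-truth mask at each threshold (the function name and the sweep granularity are illustrative assumptions, not taken from \cite{zhang2020rethinking}):

```python
import numpy as np

def iou_curve(loc_map, gt_mask, thresholds=np.linspace(0.0, 1.0, 101)):
    """Sketch of an IoU-curve metric for object localization maps.

    loc_map: float array in [0, 1], the predicted localization heat map.
    gt_mask: binary array of the same shape, the ground-truth object mask.
    Returns the thresholds and the IoU obtained at each one.
    """
    gt = gt_mask.astype(bool)
    ious = []
    for t in thresholds:
        pred = loc_map >= t                      # binarize at this threshold
        inter = np.logical_and(pred, gt).sum()   # overlap with ground truth
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / union if union > 0 else 0.0)
    return thresholds, np.asarray(ious)
```

The curve as a whole characterizes how robust a localization map is to the choice of binarization threshold; if a single summary score is needed, one can for instance read off the peak IoU over all thresholds.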
