arXiv:2008.11894

Webly Supervised Image Classification with Self-Contained Confidence

27 August 2020
Jingkang Yang
Litong Feng
Weirong Chen
Xiaopeng Yan
Huabin Zheng
Ping Luo
Wayne Zhang
Abstract

This paper focuses on webly supervised learning (WSL), where datasets are built by crawling samples from the Internet and directly using the search queries as web labels. Although WSL benefits from fast and low-cost data collection, noise in the web labels hinders the performance of the image classification model. To alleviate this problem, recent works utilize a self-label supervised loss $\mathcal{L}_s$ together with the webly supervised loss $\mathcal{L}_w$. $\mathcal{L}_s$ relies on pseudo labels predicted by the model itself. Since the correctness of the web label or pseudo label is usually decided on a case-by-case basis for each web sample, it is desirable to adjust the balance between $\mathcal{L}_s$ and $\mathcal{L}_w$ at the sample level. Inspired by the ability of deep neural networks (DNNs) to predict confidence, we introduce Self-Contained Confidence (SCC) by adapting model uncertainty to the WSL setting, and use it to balance $\mathcal{L}_s$ and $\mathcal{L}_w$ sample-wise. On this basis, a simple yet effective WSL framework is proposed. A series of SCC-friendly regularization approaches are investigated, among which the proposed graph-enhanced mixup is the most effective at providing the high-quality confidence that enhances our framework. The proposed WSL framework achieves state-of-the-art results on two large-scale WSL datasets, WebVision-1000 and Food101-N. Code is available at https://github.com/bigvideoresearch/SCC.
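
To make the sample-wise balancing concrete, here is a minimal PyTorch sketch. It assumes the per-sample convex combination $c\,\mathcal{L}_w + (1-c)\,\mathcal{L}_s$ and estimates the confidence $c$ as the softmax probability the model assigns to the web label; both choices, and all names below, are illustrative rather than the authors' exact formulation, for which see the linked repository.

```python
import torch
import torch.nn.functional as F

def scc_weighted_loss(logits, web_labels):
    """Sample-wise convex combination of the webly supervised loss L_w
    (against the noisy web labels) and the self-label supervised loss L_s
    (against pseudo labels predicted by the model itself)."""
    with torch.no_grad():
        probs = logits.softmax(dim=1)
        # Self-contained confidence: the probability the model itself
        # assigns to the web label (one plausible instantiation).
        confidence = probs.gather(1, web_labels.unsqueeze(1)).squeeze(1)
        # Pseudo label: the model's own top prediction.
        pseudo_labels = probs.argmax(dim=1)
    loss_w = F.cross_entropy(logits, web_labels, reduction="none")     # L_w
    loss_s = F.cross_entropy(logits, pseudo_labels, reduction="none")  # L_s
    # High confidence -> trust the web label; low -> lean on the pseudo label.
    return (confidence * loss_w + (1.0 - confidence) * loss_s).mean()

# Toy usage with random logits standing in for a classifier's output.
logits = torch.randn(8, 1000, requires_grad=True)  # batch of 8, 1000 classes
web_labels = torch.randint(0, 1000, (8,))          # labels from search queries
loss = scc_weighted_loss(logits, web_labels)
loss.backward()
```

In this reading, confidently labeled samples are trained mostly against their web labels, while samples the model doubts fall back to its own predictions; the SCC-friendly regularizers mentioned above (e.g. the proposed graph-enhanced mixup) serve to make that confidence estimate better calibrated.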
