UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks

Bingyin Zhao, Yingjie Lao
17 December 2023 · arXiv:2312.10657
AAML

Papers citing "UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks"

3 / 3 papers shown

1. A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks. Orson Mengara. AAML. 29 Mar 2024.
2. Backdooring and Poisoning Neural Networks with Image-Scaling Attacks. Erwin Quiring, Konrad Rieck. AAML. 19 Mar 2020.
3. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems. Edward Chou, Florian Tramèr, Giancarlo Pellegrino. AAML. 02 Dec 2018.