
A black-box adversarial attack for poisoning clustering
9 September 2020
Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo
AAML

Papers citing "A black-box adversarial attack for poisoning clustering" (4 / 4 papers shown)
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework
Anshuman Chhabra, Peizhao Li, P. Mohapatra, Hongfu Liu
OOD
04 Oct 2022
On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses
Anshuman Chhabra, Ashwin Sekhari, P. Mohapatra
OOD, AAML
04 Oct 2022
Adversarial attacks on an optical neural network
Shuming Jiao, Z. Song, S. Xiang
AAML
29 Apr 2022
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML
23 Mar 2021