ResearchTrend.AI
Indiscriminate Data Poisoning Attacks on Neural Networks
19 April 2022
Yiwei Lu, Gautam Kamath, Yaoliang Yu
Tags: AAML

Papers citing "Indiscriminate Data Poisoning Attacks on Neural Networks"

21 papers shown
1. What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
   Jiamin Chang, H. Li, Hammond Pearce, Ruoxi Sun, Bo-wen Li, Minhui Xue (28 Apr 2025)

2. Position: Curvature Matrices Should Be Democratized via Linear Operators
   Felix Dangel, Runa Eschenhagen, Weronika Ormaniec, Andres Fernandez, Lukas Tatzel, Agustinus Kristiadi (31 Jan 2025)

3. BridgePure: Limited Protection Leakage Can Break Black-Box Data Protection
   Yihan Wang, Yiwei Lu, Xiao-Shan Gao, Gautam Kamath, Yaoliang Yu (30 Dec 2024)

4. SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach
   Ruoxi Sun, Jiamin Chang, Hammond Pearce, Chaowei Xiao, B. Li, Qi Wu, Surya Nepal, Minhui Xue (17 Nov 2024)

5. Machine Unlearning Fails to Remove Data Poisoning Attacks
   Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel (AAML, MU; 25 Jun 2024)

6. Disguised Copyright Infringement of Latent Diffusion Models
   Yiwei Lu, Matthew Y.R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu (WIGM; 10 Apr 2024)

7. Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
   Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu (AAML, SILM; 20 Feb 2024)

8. Game-Theoretic Unlearnable Example Generator
   Shuang Liu, Yihan Wang, Xiao-Shan Gao (AAML; 31 Jan 2024)

9. Detection and Defense of Unlearnable Examples
   Yifan Zhu, Lijia Yu, Xiao-Shan Gao (AAML; 14 Dec 2023)

10. Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
    Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao (SILM, DiffM; 20 Oct 2023)

11. Dropout Attacks
    Andrew Yuan, Alina Oprea, Cheng Tan (04 Sep 2023)

12. What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
    Fnu Suya, X. Zhang, Yuan Tian, David E. Evans (OOD, AAML; 03 Jul 2023)

13. Sharpness-Aware Data Poisoning Attack
    Pengfei He, Han Xu, J. Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Jiliang Tang (AAML; 24 May 2023)

14. Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
    Yiwei Lu, Gautam Kamath, Yaoliang Yu (AAML; 07 Mar 2023)

15. Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
    Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari (MU; 21 Dec 2022)

16. A law of adversarial risk, interpolation, and label noise
    Daniel Paleka, Amartya Sanyal (NoLa, AAML; 08 Jul 2022)

17. Manipulating SGD with Data Ordering Attacks
    Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson (AAML; 19 Apr 2021)

18. Unlearnable Examples: Making Personal Data Unexploitable
    Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang (MIACV; 13 Jan 2021)

19. The Pile: An 800GB Dataset of Diverse Text for Language Modeling
    Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy (AIMat; 31 Dec 2020)

20. Threats to Federated Learning: A Survey
    Lingjuan Lyu, Han Yu, Qiang Yang (FedML; 04 Mar 2020)

21. On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach
    Yuanhao Wang, Guodong Zhang, Jimmy Ba (16 Oct 2019)