LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

20 January 2021
Valeriia Cherepanova
Micah Goldblum
Harrison Foley
Shiyuan Duan
John P. Dickerson
Gavin Taylor
Tom Goldstein
    AAML
    PICV

Papers citing "LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition"

30 / 30 papers shown
Dormant: Defending against Pose-driven Human Image Animation
Jiachen Zhou
Mingsi Wang
Tianlin Li
Guozhu Meng
Kai Chen
67
3
0
22 Sep 2024
Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI
Robert Honig
Javier Rando
Nicholas Carlini
Florian Tramèr
WIGM
AAML
55
16
0
17 Jun 2024
Talking Nonsense: Probing Large Language Models' Understanding of Adversarial Gibberish Inputs
Valeriia Cherepanova
James Zou
AAML
33
4
0
26 Apr 2024
NeRFTAP: Enhancing Transferability of Adversarial Patches on Face Recognition using Neural Radiance Fields
Xiaoliang Liu
Shen Furao
Feng Han
Jian Zhao
Changhai Nie
AAML
28
0
0
29 Nov 2023
IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI
Bochuan Cao
Changjiang Li
Ting Wang
Jinyuan Jia
Bo Li
Jinghui Chen
DiffM
31
21
0
30 Oct 2023
Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World
Zhiling Zhang
Jie Zhang
Kui Zhang
Wenbo Zhou
Weiming Zhang
Neng H. Yu
23
1
0
24 Oct 2023
My Art My Choice: Adversarial Protection Against Unruly AI
Anthony Rhodes
Ram Bhagat
U. Ciftci
Ilke Demir
DiffM
45
4
0
06 Sep 2023
Face Encryption via Frequency-Restricted Identity-Agnostic Attacks
Xinjie Dong
Rui Wang
Siyuan Liang
Aishan Liu
Lihua Jing
AAML
PICV
29
8
0
11 Aug 2023
CryptoMask : Privacy-preserving Face Recognition
Jianli Bai
Xiaowu Zhang
Xiangfu Song
Hang Shao
Qifan Wang
Shujie Cui
Giovanni Russello
PICV
36
3
0
22 Jul 2023
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data
Xinzhe Li
Ming Liu
Shang Gao
MU
35
8
0
02 Jul 2023
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis
T. Le
Hao Phung
Thuan Hoang Nguyen
Quan Dao
Ngoc N. Tran
Anh Tran
28
92
0
27 Mar 2023
Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
Shawn Shan
Jenna Cryan
Emily Wenger
Haitao Zheng
Rana Hanocka
Ben Y. Zhao
WIGM
17
176
0
08 Feb 2023
Unlocking Metaverse-as-a-Service The three pillars to watch: Privacy and Security, Edge Computing, and Blockchain
Vesal Ahsani
Alireza Rahimi
Mehdi Letafati
B. Khalaj
36
15
0
01 Jan 2023
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang
Xingjun Ma
Qiaomin Yi
Jitao Sang
Yugang Jiang
Yaowei Wang
Changsheng Xu
21
24
0
31 Dec 2022
UPTON: Preventing Authorship Leakage from Public Text Release via Data Poisoning
Ziyao Wang
Thai Le
Dongwon Lee
36
1
0
17 Nov 2022
Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection
Simin Li
Huangxinxin Xu
Jiakai Wang
Aishan Liu
Fazhi He
Xianglong Liu
Dacheng Tao
AAML
26
5
0
23 Aug 2022
ReFace: Real-time Adversarial Attacks on Face Recognition Systems
Shehzeen Samarah Hussain
Todd P. Huster
Chris Mesterharm
Paarth Neekhara
Kevin R. An
Malhar Jere
Harshvardhan Digvijay Sikka
F. Koushanfar
AAML
12
6
0
09 Jun 2022
Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Hao He
Kaiwen Zha
Dina Katabi
AAML
34
32
0
22 Feb 2022
SoK: Anti-Facial Recognition Technology
Emily Wenger
Shawn Shan
Haitao Zheng
Ben Y. Zhao
PICV
32
13
0
08 Dec 2021
Availability Attacks Create Shortcuts
Da Yu
Huishuai Zhang
Wei Chen
Jian Yin
Tie-Yan Liu
AAML
31
57
0
01 Nov 2021
Addressing Privacy Threats from Machine Learning
Mary Anne Smart
26
2
0
25 Oct 2021
Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit
Sanghyun Hong
Nicholas Carlini
Florian Tramèr
AAML
PICV
15
57
0
28 Jun 2021
Adversarial Examples Make Strong Poisons
Liam H. Fowl
Micah Goldblum
Ping Yeh-Chiang
Jonas Geiping
Wojtek Czaja
Tom Goldstein
SILM
32
132
0
21 Jun 2021
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
Yue Gao
Ilia Shumailov
Kassem Fawaz
AAML
27
10
0
18 Apr 2021
Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack
Shahroz Tariq
Sowon Jeon
Simon S. Woo
32
25
0
01 Mar 2021
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam H. Fowl
Ping Yeh-Chiang
Micah Goldblum
Jonas Geiping
Arpit Bansal
W. Czaja
Tom Goldstein
21
43
0
16 Feb 2021
Technical Challenges for Training Fair Neural Networks
Valeriia Cherepanova
V. Nanda
Micah Goldblum
John P. Dickerson
Tom Goldstein
FaML
22
22
0
12 Feb 2021
On Success and Simplicity: A Second Look at Transferable Targeted Attacks
Zhengyu Zhao
Zhuoran Liu
Martha Larson
AAML
38
122
0
21 Dec 2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
A. Madry
Bo-wen Li
Tom Goldstein
SILM
27
270
0
18 Dec 2020
Adversarial examples in the physical world
Alexey Kurakin
Ian Goodfellow
Samy Bengio
SILM
AAML
287
5,842
0
08 Jul 2016