Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks

Conference on Computer and Communications Security (CCS), 2023
11 March 2023
Ryan Feng
Ashish Hooda
Neal Mangaokar
Kassem Fawaz
S. Jha
Atul Prakash
    AAML
ArXiv (abs) · PDF · HTML · GitHub (17★)

Papers citing "Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks"

8 / 8 papers shown
Benchmarking Misuse Mitigation Against Covert Adversaries
Davis Brown
Mahdi Sabbaghi
Luze Sun
Avi Schwarzschild
George Pappas
Eric Wong
Hamed Hassani
198
5
0
06 Jun 2025
How stealthy is stealthy? Studying the Efficacy of Black-Box Adversarial Attacks in the Real World
IFIP International Information Security Conference (IFIP SEC), 2025
Francesco Panebianco
Mario D'Onghia
Stefano Zanero
Michele Carminati
AAML
207
0
0
03 Jun 2025
Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
Kevin Eykholt
Farhan Ahmed
Pratik Vaishnavi
Amir Rahmati
AAML
395
2
0
15 Oct 2024
AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning
ACM Multimedia (MM), 2024
Xin Wang
Kai-xiang Chen
Jiabo He
Zhineng Chen
Yue Yu
Yu-Gang Jiang
AAML
361
11
0
04 Aug 2024
Stealing Part of a Production Language Model
International Conference on Machine Learning (ICML), 2024
Nicholas Carlini
Daniel Paleka
Krishnamurthy Dvijotham
Thomas Steinke
Jonathan Hayase
...
Arthur Conmy
Itay Yona
Eric Wallace
David Rolnick
Florian Tramèr
MLAU
AAML
402
154
0
11 Mar 2024
L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks
Ping Guo
Fei Liu
Xi Lin
Qingchuan Zhao
Qingfu Zhang
375
0
0
27 Jan 2024
PubDef: Defending Against Transfer Attacks From Public Models
International Conference on Learning Representations (ICLR), 2023
Chawin Sitawarin
Jaewon Chang
David Huang
Wesson Altoyan
David Wagner
AAML
344
9
0
26 Oct 2023
D4: Detection of Adversarial Diffusion Deepfakes Using Disjoint Ensembles
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Ashish Hooda
Neal Mangaokar
Ryan Feng
Kassem Fawaz
S. Jha
Atul Prakash
356
15
0
11 Feb 2022