"Real Attackers Don't Compute Gradients": Bridging the Gap Between
  Adversarial ML Research and Practice

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice

29 December 2022
Giovanni Apruzzese
Hyrum S. Anderson
Savino Dambra
D. Freeman
Fabio Pierazzi
Kevin A. Roundy
    AAML
ArXivPDFHTML

Papers citing "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice

Showing 50 of 53 citing papers.

On the Robustness of Transformers against Context Hijacking for Linear Classification
Tianle Li, Chenyang Zhang, Xingwu Chen, Yuan Cao, Difan Zou
24 Feb 2025

OverThink: Slowdown Attacks on Reasoning LLMs
A. Kumar, Jaechul Roh, A. Naseh, Marzena Karpinska, Mohit Iyyer, Amir Houmansadr, Eugene Bagdasarian
Tags: LRM
04 Feb 2025

Defending against Adversarial Malware Attacks on ML-based Android Malware Detection Systems
Ping He, Lorenzo Cavallaro, Shouling Ji
Tags: AAML
23 Jan 2025

Lessons From Red Teaming 100 Generative AI Products
Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot, ..., Pete Bryan, Ram Shankar Siva Kumar, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich
Tags: AAML, VLM
13 Jan 2025

Position: A taxonomy for reporting and describing AI security incidents
L. Bieringer, Kevin Paeth, Andreas Wespi, Kathrin Grosse, Alexandre Alahi
19 Dec 2024

Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
Kevin Eykholt, Farhan Ahmed, Pratik Vaishnavi, Amir Rahmati
Tags: AAML
15 Oct 2024

TA3: Testing Against Adversarial Attacks on Machine Learning Models
Yuanzhe Jin, Min Chen
06 Oct 2024

Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI
Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia, Muhammad Zaid Hameed, ..., Elizabeth M. Daly, Mark Purcell, P. Sattigeri, Pin-Yu Chen, Kush R. Varshney
Tags: AAML
23 Sep 2024

Introducing Perturb-ability Score (PS) to Enhance Robustness Against Problem-Space Evasion Adversarial Attacks on Flow-based ML-NIDS
Mohamed elShehaby, Ashraf Matrawy
Tags: AAML
11 Sep 2024

Characterizing and Evaluating the Reliability of LLMs against Jailbreak Attacks
Kexin Chen, Yi Liu, Dongxia Wang, Jiaying Chen, Wenhai Wang
18 Aug 2024

From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks
Aditya Kulkarni, Vivek Balachandran, D. Divakaran, Tamal Das
Tags: AAML
29 Jul 2024

Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers
Yibo Jiang, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
Tags: CLL, KELM
26 Jun 2024

GPT-4 Jailbreaks Itself with Near-Perfect Success Using Self-Explanation
Govind Ramesh, Yao Dou, Wei-ping Xu
Tags: PILM
21 May 2024

Machine Learning in Space: Surveying the Robustness of on-board ML models to Radiation
Kevin Lange, Federico Fontana, Francesco Rossi, Mattia Varile, Giovanni Apruzzese
04 May 2024

"Are Adversarial Phishing Webpages a Threat in Reality?" Understanding
  the Users' Perception of Adversarial Webpages
"Are Adversarial Phishing Webpages a Threat in Reality?" Understanding the Users' Perception of Adversarial Webpages
Ying Yuan
Qingying Hao
Giovanni Apruzzese
Mauro Conti
Gang Wang
AAML
29
5
0
03 Apr 2024
FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids
Emad Efatinasab, Francesco Marchiori, Alessandro Brighente, M. Rampazzo, Mauro Conti
Tags: AAML
26 Mar 2024

Towards Non-Adversarial Algorithmic Recourse
Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci
Tags: AAML
15 Mar 2024

SoK: Analyzing Adversarial Examples: A Framework to Study Adversary Knowledge
L. Fenaux, Florian Kerschbaum
Tags: AAML
22 Feb 2024

Adversarial Robustness on Image Classification with $k$-means
Rollin Omari, Junae Kim, Paul Montague
Tags: OOD, VLM
15 Dec 2023

Tree of Attacks: Jailbreaking Black-Box LLMs Automatically
Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi
04 Dec 2023

"Do Users fall for Real Adversarial Phishing?" Investigating the Human
  response to Evasive Webpages
"Do Users fall for Real Adversarial Phishing?" Investigating the Human response to Evasive Webpages
Ajka Draganovic
Savino Dambra
Javier Aldana-Iuit
Kevin A. Roundy
Giovanni Apruzzese
10
6
0
28 Nov 2023
Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems
Jung-Woo Chang, Ke Sun, Nasimeh Heydaribeni, Seira Hidano, Xinyu Zhang, F. Koushanfar
Tags: AAML
01 Nov 2023

PubDef: Defending Against Transfer Attacks From Public Models
Chawin Sitawarin, Jaewon Chang, David Huang, Wesson Altoyan, David A. Wagner
Tags: AAML
26 Oct 2023

SoK: Pitfalls in Evaluating Black-Box Attacks
Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David E. Evans
Tags: AAML
26 Oct 2023

An LLM can Fool Itself: A Prompt-Based Adversarial Attack
Xilie Xu, Keyi Kong, Ning Liu, Li-zhen Cui, Di Wang, Jingfeng Zhang, Mohan S. Kankanhalli
Tags: AAML, SILM
20 Oct 2023

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh
Tags: AAML
16 Oct 2023

Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
Shawqi Al-Maliki, Adnan Qayyum, Hassan Ali, M. Abdallah, Junaid Qadir, D. Hoang, Dusit Niyato, Ala I. Al-Fuqaha
Tags: AAML
05 Oct 2023

Your Battery Is a Blast! Safeguarding Against Counterfeit Batteries with Authentication
Francesco Marchiori, Mauro Conti
07 Sep 2023

Provably safe systems: the only path to controllable AGI
Max Tegmark, Steve Omohundro
05 Sep 2023

Efficient Query-Based Attack against ML-Based Android Malware Detection under Zero Knowledge Setting
Ping He, Yifan Xia, Xuhong Zhang, Shouling Ji
Tags: AAML
05 Sep 2023

MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots
Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu
Tags: SILM
16 Jul 2023

Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
Marco Alecci, Mauro Conti, Francesco Marchiori, L. Martinelli, Luca Pajola
Tags: AAML
27 Jun 2023

Prompt Injection attack against LLM-integrated Applications
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, ..., Tianwei Zhang, Yepang Liu, Haoyu Wang, Yanhong Zheng, Yang Liu
Tags: SILM
08 Jun 2023

Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings
Klim Kireev, Maksym Andriushchenko, Carmela Troncoso, Nicolas Flammarion
Tags: OOD
06 Jun 2023

Evading Black-box Classifiers Without Breaking Eggs
Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr
Tags: MLAU, AAML
05 Jun 2023

Web Content Filtering through knowledge distillation of Large Language Models
Tamás Vörös, Sean P. Bergeron, Konstantin Berlin
08 May 2023

SoK: Pragmatic Assessment of Machine Learning for Network Intrusion Detection
Giovanni Apruzzese, P. Laskov, J. Schneider
30 Apr 2023

Boosting Big Brother: Attacking Search Engines with Encodings
Nicholas Boucher, Luca Pajola, Ilia Shumailov, Ross J. Anderson, Mauro Conti
Tags: SILM
27 Apr 2023

Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, C. Endres, Thorsten Holz, Mario Fritz
Tags: SILM
23 Feb 2023

MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection
Aqib Rashid, Jose Such
Tags: AAML
21 Feb 2023

Benchmarking Robustness to Adversarial Image Obfuscations
Florian Stimberg, Ayan Chakrabarti, Chun-Ta Lu, Hussein Hazimeh, Otilia Stretcu, ..., Merve Kaya, Cyrus Rashtchian, Ariel Fuxman, Mehmet Tek, Sven Gowal
Tags: AAML
30 Jan 2023

Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Stephen Casper, K. Hariharan, Dylan Hadfield-Menell
Tags: AAML
18 Nov 2022

Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples
Giovanni Apruzzese, Rodion Vladimirov, A.T. Tastemirova, P. Laskov
Tags: AAML
04 Jul 2022

StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection
Aqib Rashid, Jose Such
Tags: AAML
15 Feb 2022

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
Tags: AAML
13 Oct 2021

A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels
R. Joyce, Edward Raff, Charles K. Nicholas
23 Sep 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
Tags: AAML
04 May 2021

Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems
Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley
Tags: AAML
01 Feb 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
Tags: FedML
27 Dec 2020

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
Tags: MLAU, SILM
14 Dec 2020