arXiv:2302.05120
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks
10 February 2023
Piotr Gaiński
Klaudia Bałazy
Papers citing "Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks" (5 / 5 papers shown):

1. "Pointing out the Shortcomings of Relation Extraction Models with Semantically Motivated Adversarials". Gennaro Nolano, Moritz Blum, Basil Ell, Philipp Cimiano. (27 / 1 / 0). 29 Feb 2024.
2. "Privacy in Large Language Models: Attacks, Defenses and Future Directions". Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song. PILM. (40 / 41 / 0). 16 Oct 2023.
3. "Gradient-based Adversarial Attacks against Text Transformers". Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela. SILM. (98 / 227 / 0). 15 Apr 2021.
4. "Token-Modification Adversarial Attacks for Natural Language Processing: A Survey". Tom Roth, Yansong Gao, A. Abuadbba, Surya Nepal, Wei Liu. AAML. (23 / 12 / 0). 01 Mar 2021.
5. "Recent Advances in Adversarial Training for Adversarial Robustness". Tao Bai, Jinqi Luo, Jun Zhao, B. Wen, Qian Wang. AAML. (73 / 473 / 0). 02 Feb 2021.