A Differentiable Language Model Adversarial Attack on Text Classifiers
23 July 2021
I. Fursov, Alexey Zaytsev, Pavel Burnyshev, Ekaterina Dmitrieva, Nikita Klyuchnikov, A. Kravchenko, Ekaterina Artemova, Evgeny Burnaev
SILM
arXiv: 2107.11275

Papers citing "A Differentiable Language Model Adversarial Attack on Text Classifiers" (10 of 10 shown)

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets
Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth
AAML · 03 Jan 2025

Uncertainty Estimation of Transformers' Predictions via Topological Analysis of the Attention Matrices
Elizaveta Kostenok, D. Cherniavskii, Alexey Zaytsev
22 Aug 2023

Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing
Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi
AAML · 29 Jan 2023

Can Language Representation Models Think in Bets?
Zhi-Bin Tang, Mayank Kejriwal
14 Oct 2022

Usage of specific attention improves change point detection
Anna Dmitrienko, Evgenia Romanenkova, Alexey Zaytsev
18 Apr 2022

Adversarial Bone Length Attack on Action Recognition
Nariki Tanaka, Hiroshi Kera, K. Kawamoto
AAML · 13 Sep 2021

It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations
Samson Tan, Shafiq R. Joty, Min-Yen Kan, R. Socher
09 May 2020

Robust Encodings: A Framework for Combating Adversarial Typos
Erik Jones, Robin Jia, Aditi Raghunathan, Percy Liang
AAML · 04 May 2020

Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Christopher D. Manning
AI4TS · 16 Mar 2020

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML · 04 Nov 2016