Cited By
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
19 March 2022 · AAML · arXiv: 2203.11199
Papers citing "Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model" (5 papers shown)
DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization
Songyang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Yingchun Shan
27 Jun 2023 · OOD
VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, S. Kiyomoto, Isao Echizen
02 Jun 2023
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text
Ashim Gupta, Carter Blum, Temma Choji, Yingjie Fei, Shalin S Shah, Alakananda Vempala, Vivek Srikumar
25 May 2023 · AAML
Certified Robustness to Adversarial Word Substitutions
Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang
03 Sep 2019 · AAML
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer
17 Apr 2018 · AAML · GAN