Assessing Adversarial Robustness of Large Language Models: An Empirical Study
arXiv:2405.02764 · 4 May 2024
Zeyu Yang, Zhao Meng, Xiaochen Zheng, Roger Wattenhofer
Topics: ELM, AAML
Papers citing "Assessing Adversarial Robustness of Large Language Models: An Empirical Study" (3 papers):
1. Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks
   Zhao Meng, Yihan Dong, Mrinmaya Sachan, Roger Wattenhofer · AAML · 15 Jul 2021

2. Gradient-based Adversarial Attacks against Text Transformers
   Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela · SILM · 15 Apr 2021

3. Generating Natural Language Adversarial Examples
   M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang · AAML · 21 Apr 2018