ResearchTrend.AI
arXiv:2405.02764
Assessing Adversarial Robustness of Large Language Models: An Empirical Study

4 May 2024
Zeyu Yang, Zhao Meng, Xiaochen Zheng, Roger Wattenhofer
Topics: ELM, AAML

Papers citing "Assessing Adversarial Robustness of Large Language Models: An Empirical Study"

3 / 3 papers shown
Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks
Zhao Meng, Yihan Dong, Mrinmaya Sachan, Roger Wattenhofer
AAML · 15 Jul 2021
Gradient-based Adversarial Attacks against Text Transformers
Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela
SILM · 15 Apr 2021
Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018