TextBugger: Generating Adversarial Text Against Real-world Applications

13 December 2018 · arXiv: 1812.05271
Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang
Tags: SILM, AAML
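
For readers skimming the citing papers below, a brief refresher on the attack they build on: TextBugger perturbs text by ranking important words and applying small "bugs" to them, drawn from five operations (insert a space, delete a character, swap adjacent characters, substitute a visually similar character, or substitute a semantically similar word), keeping whichever bug most degrades the target model. The following is a minimal, illustrative sketch of the character-level operations only; the VISUAL_SUBS map and the random bug placement are assumptions for demonstration, not the authors' implementation, which scores each candidate bug against the victim classifier before committing to it.

import random

# Hypothetical map of visually similar substitutions; illustrative, not the paper's.
VISUAL_SUBS = {"o": "0", "l": "1", "a": "@", "e": "3", "i": "1"}

def insert_bug(word: str) -> str:
    """Insert a space inside the word so tokenizers split it in two."""
    if len(word) < 2:
        return word
    pos = random.randint(1, len(word) - 1)
    return word[:pos] + " " + word[pos:]

def delete_bug(word: str) -> str:
    """Delete a random interior character (first/last kept for readability)."""
    if len(word) < 3:
        return word
    pos = random.randint(1, len(word) - 2)
    return word[:pos] + word[pos + 1:]

def swap_bug(word: str) -> str:
    """Swap two adjacent interior characters."""
    if len(word) < 4:
        return word
    pos = random.randint(1, len(word) - 3)
    chars = list(word)
    chars[pos], chars[pos + 1] = chars[pos + 1], chars[pos]
    return "".join(chars)

def sub_c_bug(word: str) -> str:
    """Replace one character with a visually similar one, if any exists."""
    candidates = [i for i, c in enumerate(word) if c in VISUAL_SUBS]
    if not candidates:
        return word
    pos = random.choice(candidates)
    return word[:pos] + VISUAL_SUBS[word[pos]] + word[pos + 1:]

# Example: each call yields a human-readable but tokenizer-confusing variant.
print(insert_bug("foolish"), delete_bug("foolish"), swap_bug("foolish"), sub_c_bug("foolish"))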

Papers citing "TextBugger: Generating Adversarial Text Against Real-world Applications"

Showing 50 of 69 citing papers.
aiXamine: Simplified LLM Safety and Security
  Fatih Deniz, Dorde Popovic, Yazan Boshmaf, Euisuh Jeong, M. Ahmad, Sanjay Chawla, Issa M. Khalil · ELM · 21 Apr 2025

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks
  Yue Gao, Ilia Shumailov, Kassem Fawaz · AAML · 21 Feb 2025

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets
  Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth · AAML · 03 Jan 2025

Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context
  Nilanjana Das, Edward Raff, Manas Gaur · AAML · 20 Dec 2024

TaeBench: Improving Quality of Toxic Adversarial Examples
  Xuan Zhu, Dmitriy Bespalov, Liwen You, Ninad Kulkarni, Yanjun Qi · AAML · 08 Oct 2024

Reducing and Exploiting Data Augmentation Noise through Meta Reweighting Contrastive Learning for Text Classification
  Guanyi Mou, Yichuan Li, Kyumin Lee · 26 Sep 2024

Jailbreaking Text-to-Image Models with LLM-Based Agents
  Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo · LLMAG · 01 Aug 2024

Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification
  Boyang Zhang, Yicong Tan, Yun Shen, Ahmed Salem, Michael Backes, Savvas Zannettou, Yang Zhang · LLMAG, AAML · 30 Jul 2024

Human-Interpretable Adversarial Prompt Attack on Large Language Models with Situational Context
  Nilanjana Das, Edward Raff, Manas Gaur · AAML · 19 Jul 2024

IDT: Dual-Task Adversarial Attacks for Privacy Protection
  Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras · SILM, AAML · 28 Jun 2024

Spiking Convolutional Neural Networks for Text Classification
  Changze Lv, Jianhan Xu, Xiaoqing Zheng · 27 Jun 2024

Adversarial Evasion Attack Efficiency against Large Language Models
  João Vitorino, Eva Maia, Isabel Praça · AAML · 12 Jun 2024

Advancing the Robustness of Large Language Models through Self-Denoised Smoothing
  Jiabao Ji, Bairu Hou, Zhen Zhang, Guanhua Zhang, Wenqi Fan, Qing Li, Yang Zhang, Gaowen Liu, Sijia Liu, Shiyu Chang · AAML · 18 Apr 2024

VertAttack: Taking advantage of Text Classifiers' horizontal vision
  Jonathan Rusert · AAML · 12 Apr 2024

RITFIS: Robust input testing framework for LLMs-based intelligent software
  Ming-Ming Xiao, Yan Xiao, Hai Dong, Shunhui Ji, Pengcheng Zhang · AAML · 21 Feb 2024

Black-Box Access is Insufficient for Rigorous AI Audits
  Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell · AAML · 25 Jan 2024

Towards Effective Paraphrasing for Information Disguise
  Anmol Agarwal, Shrey Gupta, Vamshi Bonagiri, Manas Gaur, Joseph M. Reagle, Ponnurangam Kumaraguru · 08 Nov 2023

Toward Stronger Textual Attack Detectors
  Pierre Colombo, Marine Picot, Nathan Noiry, Guillaume Staerman, Pablo Piantanida · 21 Oct 2023

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks
  Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren · SILM · 31 Jul 2023

FATRER: Full-Attention Topic Regularizer for Accurate and Robust Conversational Emotion Recognition
  Yuzhao Mao, Di Lu, Xiaojie Wang, Yang Zhang · 23 Jul 2023

Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations
  Lifan Yuan, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Fangyuan Zou, Xingyi Cheng, Heng Ji, Zhiyuan Liu, Maosong Sun · 07 Jun 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
  Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, ..., Longtao Huang, H. Xue, Zhiyuan Liu, Maosong Sun, Heng Ji · AAML, ELM · 29 May 2023

Another Dead End for Morphological Tags? Perturbed Inputs and Parsing
  Alberto Muñoz-Ortiz, David Vilares · 24 May 2023

Adversarial Demonstration Attacks on Large Language Models
  Jiong Wang, Zi-yang Liu, Keun Hee Park, Zhuojun Jiang, Zhaoheng Zheng, Zhuofeng Wu, Muhao Chen, Chaowei Xiao · SILM · 24 May 2023

A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
  Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa · ALM · 19 May 2023

Iterative Adversarial Attack on Image-guided Story Ending Generation
  Youze Wang, Wenbo Hu, Richang Hong · 16 May 2023

Assessing Hidden Risks of LLMs: An Empirical Study on Robustness, Consistency, and Credibility
  Wen-song Ye, Mingfeng Ou, Tianyi Li, Yipeng Chen, Xuetao Ma, ..., Sai Wu, Jie Fu, Gang Chen, Haobo Wang, J. Zhao · 15 May 2023

No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning
  Thorsten Eisenhofer, Erwin Quiring, Jonas Moller, Doreen Riepel, Thorsten Holz, Konrad Rieck · AAML · 25 Mar 2023

RETVec: Resilient and Efficient Text Vectorizer
  Elie Bursztein, Marina Zhang, Owen Vallis, Xinyu Jia, Alexey Kurakin · VLM · 18 Feb 2023

Semantic Adversarial Attacks on Face Recognition through Significant Attributes
  Yasmeen M. Khedr, Yifeng Xiong, Kun He · AAML · 28 Jan 2023

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
  Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy · AAML · 29 Dec 2022

Generating Textual Adversaries with Minimal Perturbation
  Xingyi Zhao, Lu Zhang, Depeng Xu, Shuhan Yuan · DeLMO, AAML · 12 Nov 2022

Preserving Semantics in Textual Adversarial Attacks
  David Herel, Hugo Cisneros, Tomáš Mikolov · AAML · 08 Nov 2022

RoChBert: Towards Robust BERT Fine-tuning for Chinese
  Zihan Zhang, Jinfeng Li, Ning Shi, Bo Yuan, Xiangyu Liu, Rong Zhang, Hui Xue, Donghong Sun, Chao Zhang · AAML · 28 Oct 2022

ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation
  Fan Yin, Yao Li, Cho-Jui Hsieh, Kai-Wei Chang · AAML · 22 Oct 2022

TCAB: A Large-Scale Text Classification Attack Benchmark
  Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd · 21 Oct 2022

An Empirical Analysis of SMS Scam Detection Systems
  Muhammad Salman, Muhammad Ikram, M. Kâafar · 19 Oct 2022

PromptAttack: Prompt-based Attack for Language Models via Gradient Search
  Yundi Shi, Piji Li, Changchun Yin, Zhaoyang Han, Lu Zhou, Zhe Liu · AAML, SILM · 05 Sep 2022

CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models
  Akshita Jha, Chandan K. Reddy · SILM, ELM, AAML · 31 May 2022

Learning to Ignore Adversarial Attacks
  Yiming Zhang, Yan Zhou, Samuel Carton, Chenhao Tan · 23 May 2022

A Simple Yet Efficient Method for Adversarial Word-Substitute Attack
  Tianle Li, Yi Yang · AAML · 07 May 2022

Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks
  Jonathan Rusert, P. Srinivasan · AAML · 03 May 2022

BERTops: Studying BERT Representations under a Topological Lens
  Jatin Chauhan, Manohar Kaul · 02 May 2022

DDDM: a Brain-Inspired Framework for Robust Classification
  Xiyuan Chen, Xingyu Li, Yi Zhou, Tianming Yang · AAML, DiffM · 01 May 2022

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation
  Hanjie Chen, Yangfeng Ji · OOD, AAML, VLM · 23 Mar 2022

On The Robustness of Offensive Language Classifiers
  Jonathan Rusert, Zubair Shafiq, P. Srinivasan · AAML · 21 Mar 2022

Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
  Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao · AAML · 19 Mar 2022

MaMaDroid2.0 -- The Holes of Control Flow Graphs
  Harel Berger, Chen Hajaj, Enrico Mariconti, A. Dvir · 28 Feb 2022

Identifying Adversarial Attacks on Text Classifiers
  Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd · AAML · 21 Jan 2022

Measure and Improve Robustness in NLP Models: A Survey
  Xuezhi Wang, Haohan Wang, Diyi Yang · 15 Dec 2021