Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems

27 March 2019
Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych
AAML

Papers citing "Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems"

Showing 50 of 101 citing papers.

Tokenization is Sensitive to Language Variation
Anna Wegmann, Dong Nguyen, David Jurgens
24 Feb 2025

Confidence Elicitation: A New Attack Vector for Large Language Models
Brian Formento, Chuan-Sheng Foo, See-Kiong Ng
AAML
07 Feb 2025

Tougher Text, Smarter Models: Raising the Bar for Adversarial Defence Benchmarks
Yang Wang, Chenghua Lin
ELM
05 Jan 2025

Are Language Models Agnostic to Linguistically Grounded Perturbations? A Case Study of Indic Languages
Poulami Ghosh, Raj Dabre, Pushpak Bhattacharyya
AAML
14 Dec 2024

BinarySelect to Improve Accessibility of Black-Box Attack Research
Shatarupa Ghosh, Jonathan Rusert
AAML
13 Dec 2024

Pay Attention to the Robustness of Chinese Minority Language Models! Syllable-level Textual Adversarial Attack on Tibetan Script
Xi Cao, Dolma Dawa, Nuo Qun, Trashi Nyima
AAML
03 Dec 2024

IAE: Irony-based Adversarial Examples for Sentiment Analysis Systems
Xiaoyin Yi, Jiacheng Huang
AAML
12 Nov 2024

Legilimens: Practical and Unified Content Moderation for Large Language Model Services
Jialin Wu, Jiangyi Deng, Shengyuan Pang, Yanjiao Chen, Jiayang Xu, Xinfeng Li, Wenyuan Xu
28 Aug 2024

Probing the Robustness of Vision-Language Pretrained Models: A Multimodal Adversarial Attack Approach
Jiwei Guan, Tianyu Ding, Longbing Cao, Lei Pan, Chen Wang, Xi Zheng
AAML
24 Aug 2024

Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification
Boyang Zhang, Yicong Tan, Yun Shen, Ahmed Salem, Michael Backes, Savvas Zannettou, Yang Zhang
LLMAG, AAML
30 Jul 2024

Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations
Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park
SILM, AAML
22 Apr 2024

VertAttack: Taking advantage of Text Classifiers' horizontal vision
Jonathan Rusert
AAML
12 Apr 2024

SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks
Brian Formento, Wenjie Feng, Chuan-Sheng Foo, Anh Tuan Luu, See-Kiong Ng
AAML
27 Mar 2024

The Impact of Quantization on the Robustness of Transformer-based Text Classifiers
Seyed Parsa Neshaei, Yasaman Boreshban, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel
MQ
08 Mar 2024

Investigating the Impact of Model Instability on Explanations and Uncertainty
Sara Vera Marjanović, Isabelle Augenstein, Christina Lioma
AAML
20 Feb 2024

Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks
Yichen Wang, Shangbin Feng, Abe Bohan Hou, Xiao Pu, Chao Shen, Xiaoming Liu, Yulia Tsvetkov, Tianxing He
DeLMO
18 Feb 2024

Benchmarking Large Multimodal Models against Common Corruptions
Jiawei Zhang, Tianyu Pang, Chao Du, Yi Ren, Bo-wen Li, Min-Bin Lin
MLLM
22 Jan 2024

PIXAR: Auto-Regressive Language Modeling in Pixel Space
Yintao Tai, Xiyang Liao, Alessandro Suglia, Antonio Vergari
MLLM
06 Jan 2024

A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models
Daniel Wankit Yip, Aysan Esmradi, C. Chan
AAML
02 Jan 2024

SenTest: Evaluating Robustness of Sentence Encoders
Tanmay Chavan, Shantanu Patankar, Aditya Kane, Omkar Gokhale, Geetanjali Kale, Raviraj Joshi
29 Nov 2023

Generating Valid and Natural Adversarial Examples with Large Language Models
Zimu Wang, Wei Wang, Qi Chen, Qiufeng Wang, Anh Nguyen
AAML
20 Nov 2023

CT-GAT: Cross-Task Generative Adversarial Attack based on Transferability
Minxuan Lv, Chengwei Dai, Kun Li, Wei Zhou, Song Hu
AAML
22 Oct 2023

Fooling the Textual Fooler via Randomizing Latent Representations
Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, MinLong Peng, Kok-Seng Wong, Khoa D. Doan
SILM, AAML
02 Oct 2023

The Trickle-down Impact of Reward (In-)consistency on RLHF
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu
28 Sep 2023

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution
Zhongjie Ba, Jieming Zhong, Jiachen Lei, Pengyu Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren
25 Sep 2023

LEAP: Efficient and Automated Test Method for NLP Software
Ming-Ming Xiao, Yan Xiao, Hai Dong, Shunhui Ji, Pengcheng Zhang
AAML
22 Aug 2023

Hiding Backdoors within Event Sequence Data via Poisoning Attacks
Elizaveta Kovtun, A. Ermilova, Dmitry Berestnev, Alexey Zaytsev
SILM, AAML
20 Aug 2023

SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification
J. Wu, Dit-Yan Yeung
SILM
04 Jul 2023

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks
Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng
AAML
16 Jun 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, ..., Longtao Huang, H. Xue, Zhiyuan Liu, Maosong Sun, Heng Ji
AAML, ELM
29 May 2023

Assessing Hidden Risks of LLMs: An Empirical Study on Robustness, Consistency, and Credibility
Wen-song Ye, Mingfeng Ou, Tianyi Li, Yipeng Chen, Xuetao Ma, ..., Sai Wu, Jie Fu, Gang Chen, Haobo Wang, J. Zhao
15 May 2023

In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT
Xinyue Shen, Z. Chen, Michael Backes, Yang Zhang
18 Apr 2023

Masked Language Model Based Textual Adversarial Example Detection
Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, L. Zhang
AAML
18 Apr 2023

No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning
Thorsten Eisenhofer, Erwin Quiring, Jonas Möller, Doreen Riepel, Thorsten Holz, Konrad Rieck
AAML
25 Mar 2023

NoisyHate: Mining Online Human-Written Perturbations for Realistic Robustness Benchmarking of Content Moderation Models
Yiran Ye, Thai Le, Dongwon Lee
AAML, DeLMO
18 Mar 2023

Verifying the Robustness of Automatic Credibility Assessment
Piotr Przybyła, A. Shvets, Horacio Saggion
DeLMO, AAML
14 Mar 2023

Learning the Legibility of Visual Text Perturbations
D. Seth, Rickard Stureborg, Danish Pruthi, Bhuwan Dhingra
AAML
09 Mar 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML
14 Feb 2023

MTTM: Metamorphic Testing for Textual Content Moderation Software
Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu
11 Feb 2023

TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification
Lingfeng Shen, Ze Zhang, Haiyun Jiang, Ying Chen
AAML
03 Feb 2023

On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex
Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh Shiri, Weiqing Wang, Gholamreza Haffari, Yuan-Fang Li
AAML
30 Jan 2023

CRYPTEXT: Database and Interactive Toolkit of Human-Written Text Perturbations in the Wild
Thai Le, Ye Yiran, Yifan Hu, Dongwon Lee
16 Jan 2023

A Mutation-based Text Generation for Adversarial Machine Learning Applications
Jesus Guerrero, G. Liang, I. Alsmadi
DeLMO, MedIm
21 Dec 2022

Accelerating Adversarial Perturbation by 50% with Semi-backward Propagation
Zhiqi Bu
AAML
09 Nov 2022

TCAB: A Large-Scale Text Classification Attack Benchmark
Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd
21 Oct 2022

AugCSE: Contrastive Sentence Embedding with Diverse Augmentations
Zilu Tang, Muhammed Yusuf Kocyigit, Derry Wijaya
20 Oct 2022

The State of Profanity Obfuscation in Natural Language Processing
Debora Nozza, Dirk Hovy
14 Oct 2022

Layer or Representation Space: What makes BERT-based Evaluation Metrics Robust?
Doan Nam Long Vu, N. Moosavi, Steffen Eger
06 Sep 2022

MockingBERT: A Method for Retroactively Adding Resilience to NLP Models
Jan Jezabek, A. Singh
SILM, KELM
21 Aug 2022

Adversarial Robustness of Visual Dialog
Lu Yu, Verena Rieser
AAML
06 Jul 2022