Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems

27 March 2019
Steffen Eger, Gözde Gül Sahin, Andreas Rucklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych
AAML

Papers citing "Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems"

50 / 101 papers shown
Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection
Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu
AAML · 17 Jun 2022
Adversarial Text Normalization
Joanna Bitton, Maya Pavlova, Ivan Evtimov
AAML · 08 Jun 2022
Certified Robustness Against Natural Language Attacks by Causal Intervention
Haiteng Zhao, Chang Ma, Xinshuai Dong, A. Luu, Zhi-Hong Deng, Hanwang Zhang
AAML · 24 May 2022
Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks
Jonathan Rusert, P. Srinivasan
AAML · 03 May 2022
What do we Really Know about State of the Art NER?
Sowmya Vajjala, Ramya Balasubramaniam
29 Apr 2022
"That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks
Edoardo Mosca, Shreyash Agarwal, Javier Rando, Georg Groh
AAML · 10 Apr 2022
On The Robustness of Offensive Language Classifiers
Jonathan Rusert, Zubair Shafiq, P. Srinivasan
AAML · 21 Mar 2022
Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger
AAML, ELM · 21 Mar 2022
On Robust Prefix-Tuning for Text Classification
Zonghan Yang, Yang Liu
VLM · 19 Mar 2022
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense
Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
AAML · 19 Mar 2022
Data-Driven Mitigation of Adversarial Text Perturbation
Rasika Bhalerao, Mohammad Al-Rubaie, Anand Bhaskar, Igor L. Markov
19 Feb 2022
Identifying Adversarial Attacks on Text Classifiers
Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd
AAML · 21 Jan 2022
Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations
Chris Emmery, Ákos Kádár, Grzegorz Chrupała, Walter Daelemans
17 Jan 2022
Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions
Marwan Omar, Soohyeon Choi, Daehun Nyang, David A. Mohaisen
03 Jan 2022
Understanding and Measuring Robustness of Multimodal Learning
Nishant Vishwamitra, Hongxin Hu, Ziming Zhao, Long Cheng, Feng Luo
AAML · 22 Dec 2021
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
Xinhsuai Dong, Anh Tuan Luu, Min-Bin Lin, Shuicheng Yan, Hanwang Zhang
SILM, AAML · 22 Dec 2021
A Black-box NLP Classifier Attacker
Yueyang Liu, Hunmin Lee, Zhipeng Cai
AAML · 22 Dec 2021
NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, ..., Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
06 Dec 2021
Effective and Imperceptible Adversarial Textual Attack via Multi-objectivization
Shengcai Liu, Ning Lu, W. Hong, Chao Qian, Ke Tang
AAML · 02 Nov 2021
Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions
I. Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala I. Al-Fuqaha, Abdallah Khreishah, A. Algosaibi
AAML · 26 Oct 2021
Generating Watermarked Adversarial Texts
Mingjie Li, Hanzhou Wu, Xinpeng Zhang
AAML, WaLM · 25 Oct 2021
Interpreting the Robustness of Neural NLP Models to Textual Perturbations
Yunxiang Zhang, Liangming Pan, Samson Tan, Min-Yen Kan
14 Oct 2021
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun
AAML, SILM · 14 Oct 2021
Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors
Marvin Kaster, Wei-Ye Zhao, Steffen Eger
08 Oct 2021
Multi-granularity Textual Adversarial Attack with Behavior Cloning
Yangyi Chen, Jingtong Su, Wei Wei
AAML · 09 Sep 2021
Efficient Combinatorial Optimization for Word-level Adversarial Textual Attack
Shengcai Liu, Ning Lu, Cheng Chen, Ke Tang
AAML · 06 Sep 2021
Towards Robustness Against Natural Language Word Substitutions
Xinshuai Dong, A. Luu, Rongrong Ji, Hong Liu
SILM, AAML · 28 Jul 2021
A Differentiable Language Model Adversarial Attack on Text Classifiers
I. Fursov, Alexey Zaytsev, Pavel Burnyshev, Ekaterina Dmitrieva, Nikita Klyuchnikov, A. Kravchenko, Ekaterina Artemova, Evgeny Burnaev
SILM · 23 Jul 2021
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation
Bingqian Lin, Yi Zhu, Yanxin Long, Xiaodan Liang, QiXiang Ye, Liang Lin
AAML · 23 Jul 2021
BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks
Yannik Keller, J. Mackensen, Steffen Eger
AAML · 02 Jun 2021
Certified Robustness to Text Adversarial Attacks by Randomized [MASK]
Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang
AAML · 08 May 2021
Robust Open-Vocabulary Translation from Visual Text Representations
Elizabeth Salesky, David Etter, Matt Post
VLM · 16 Apr 2021
Token-Modification Adversarial Attacks for Natural Language Processing: A Survey
Tom Roth, Yansong Gao, A. Abuadbba, Surya Nepal, Wei Liu
AAML · 01 Mar 2021
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling
Chris Emmery, Ákos Kádár, Grzegorz Chrupała
AAML · 27 Jan 2021
Adversarial Machine Learning in Text Analysis and Generation
I. Alsmadi
AAML · 14 Jan 2021
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks
Steffen Eger, Yannik Benz
AAML · 12 Oct 2020
Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment
Sharan Raja, Rudraksh Tuwani
AAML · 29 Sep 2020
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations
Yuan Zang, Bairu Hou, Fanchao Qi, Zhiyuan Liu, Xiaojun Meng, Maosong Sun
19 Sep 2020
OpenAttack: An Open-source Textual Adversarial Attack Toolkit
Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Ting Zhang, Zixian Ma, Bairu Hou, Yuan Zang, Zhiyuan Liu, Maosong Sun
AAML · 19 Sep 2020
Visual Attack and Defense on Text
Shengjun Liu, Ningkang Jiang, Yuanbin Wu
AAML · 07 Aug 2020
It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations
Samson Tan, Shafiq R. Joty, Min-Yen Kan, R. Socher
09 May 2020
Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks
Winston Wu, Dustin L. Arendt, Svitlana Volkova
AAML · 01 May 2020
Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples
Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin
AAML · 13 Apr 2020
Directions in Abusive Language Training Data: Garbage In, Garbage Out
Bertie Vidgen, Leon Derczynski
03 Apr 2020
Gödel's Sentence Is An Adversarial Example But Unsolvable
Xiaodong Qi, Lansheng Han
AAML · 25 Feb 2020
FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications
Dou Goodman, Zhonghou Lv, Minghua Wang
AAML · 31 Jan 2020
A Visual Analytics Framework for Adversarial Text Generation
Brandon Laughlin, C. Collins, K. Sankaranarayanan, K. El-Khatib
AAML · 24 Sep 2019
MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance
Wei-Ye Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, Steffen Eger
05 Sep 2019
Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications
Wei-Ye Zhao, Haiyun Peng, Steffen Eger, Erik Cambria, Min Yang
GNN · 06 Jun 2019
Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018