Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
arXiv:1805.12316, 31 May 2018
Authors: Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan
Tags: AAML, SILM

Papers citing "Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data"

38 citing papers:
  • A Comprehensive Analysis of Adversarial Attacks against Spam Filters
    Authors: Esra Hotoğlu, Sevil Sen, Burcu Can
    Tags: AAML
    Date: 04 May 2025
  • On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
    Authors: Nitay Calderon, Roi Reichart
    Date: 27 Jul 2024
  • Revisiting character-level adversarial attacks
    Authors: Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios G. Chrysos, Volkan Cevher
    Tags: AAML
    Date: 07 May 2024
  • Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods
    Authors: Roopkatha Dey, Aivy Debnath, Sayak Kumar Dutta, Kaustav Ghosh, Arijit Mitra, Arghya Roy Chowdhury, Jaydip Sen
    Tags: AAML, SILM
    Date: 08 Apr 2024
  • Data Poisoning for In-context Learning
    Authors: Pengfei He, Han Xu, Yue Xing, Hui Liu, Makoto Yamada, Jiliang Tang
    Tags: SILM, AAML
    Date: 03 Feb 2024
  • The Best Defense is Attack: Repairing Semantics in Textual Adversarial Examples
    Authors: Heng Yang, Ke Li
    Tags: AAML
    Date: 06 May 2023
  • Towards Efficient and Domain-Agnostic Evasion Attack with High-dimensional Categorical Inputs
    Authors: Hongyan Bao, Yufei Han, Yujun Zhou, Xin Gao, Xiangliang Zhang
    Tags: AAML
    Date: 13 Dec 2022
  • AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical Applications with Categorical Inputs
    Authors: Helene Orsini, Hongyan Bao, Yujun Zhou, Xiangrui Xu, Yufei Han, Longyang Yi, Wei Wang, Xin Gao, Xiangliang Zhang
    Tags: AAML
    Date: 13 Dec 2022
  • Generating Textual Adversaries with Minimal Perturbation
    Authors: Xingyi Zhao, Lu Zhang, Depeng Xu, Shuhan Yuan
    Tags: DeLMO, AAML
    Date: 12 Nov 2022
  • Are AlphaZero-like Agents Robust to Adversarial Perturbations?
    Authors: Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh
    Tags: AAML
    Date: 07 Nov 2022
  • Towards Generating Adversarial Examples on Mixed-type Data
    Authors: Han Xu, Menghai Pan, Zhimeng Jiang, Huiyuan Chen, Xiaoting Li, Mahashweta Das, Hao Yang
    Tags: AAML, SILM
    Date: 17 Oct 2022
  • Probabilistic Categorical Adversarial Attack & Adversarial Training
    Authors: Han Xu, Pengfei He, Jie Ren, Yuxuan Wan, Zitao Liu, Hui Liu, Jiliang Tang
    Tags: AAML, SILM
    Date: 17 Oct 2022
  • Adversarial Robustness for Tabular Data through Cost and Utility Awareness
    Authors: Klim Kireev, B. Kulynych, Carmela Troncoso
    Tags: AAML
    Date: 27 Aug 2022
  • Fooling Explanations in Text Classifiers
    Authors: Adam Ivankay, Ivan Girardi, Chiara Marchiori, P. Frossard
    Tags: AAML
    Date: 07 Jun 2022
  • CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models
    Authors: Akshita Jha, Chandan K. Reddy
    Tags: SILM, ELM, AAML
    Date: 31 May 2022
  • A Review of Adversarial Attack and Defense for Classification Methods
    Authors: Yao Li, Minhao Cheng, Cho-Jui Hsieh, T. C. Lee
    Tags: AAML
    Date: 18 Nov 2021
  • Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions
    Authors: I. Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala I. Al-Fuqaha, Abdallah Khreishah, A. Algosaibi
    Tags: AAML
    Date: 26 Oct 2021
  • A Review of the Gumbel-max Trick and its Extensions for Discrete Stochasticity in Machine Learning
    Authors: Iris A. M. Huijben, W. Kool, Max B. Paulus, Ruud J. G. van Sloun
    Date: 04 Oct 2021
  • Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models
    Authors: Kun Zhou, Wayne Xin Zhao, Sirui Wang, Fuzheng Zhang, Wei Wu, Ji-Rong Wen
    Tags: AAML
    Date: 13 Sep 2021
  • Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
    Authors: Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh
    Tags: AAML
    Date: 29 Aug 2021
  • A Differentiable Language Model Adversarial Attack on Text Classifiers
    Authors: I. Fursov, Alexey Zaytsev, Pavel Burnyshev, Ekaterina Dmitrieva, Nikita Klyuchnikov, A. Kravchenko, Ekaterina Artemova, Evgeny Burnaev
    Tags: SILM
    Date: 23 Jul 2021
  • Robust Learning for Text Classification with Multi-source Noise Simulation and Hard Example Mining
    Authors: Guowei Xu, Wenbiao Ding, Weiping Fu, Zhongqin Wu, Zitao Liu
    Tags: OOD
    Date: 15 Jul 2021
  • Improving Model Robustness with Latent Distribution Locally and Globally
    Authors: Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi
    Tags: AAML
    Date: 08 Jul 2021
  • Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks
    Authors: Leo Schwinn, René Raab, A. Nguyen, Dario Zanca, Bjoern M. Eskofier
    Tags: AAML
    Date: 21 May 2021
  • Improved and Efficient Text Adversarial Attacks using Target Information
    Authors: M. Hossam, Trung Le, He Zhao, Viet Huynh, Dinh Q. Phung
    Tags: AAML
    Date: 27 Apr 2021
  • Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation
    Authors: Chong Zhang, Jieyu Zhao, Huan Zhang, Kai-Wei Chang, Cho-Jui Hsieh
    Tags: AAML
    Date: 12 Apr 2021
  • Adversarial Machine Learning in Text Analysis and Generation
    Authors: I. Alsmadi
    Tags: AAML
    Date: 14 Jan 2021
  • On the Transferability of Adversarial Attacks against Neural Text Classifier
    Authors: Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang
    Tags: SILM, AAML
    Date: 17 Nov 2020
  • Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
    Authors: Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
    Tags: AAML
    Date: 19 Oct 2020
  • Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability
    Authors: M. Hossam, Trung Le, He Zhao, Dinh Q. Phung
    Tags: SILM, AAML
    Date: 14 Oct 2020
  • Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples
    Authors: Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin
    Tags: AAML
    Date: 13 Apr 2020
  • Knowing When to Stop: Evaluation and Verification of Conformity to Output-size Specifications
    Authors: Chenglong Wang, Rudy Bunel, Krishnamurthy Dvijotham, Po-Sen Huang, Edward Grefenstette, Pushmeet Kohli
    Date: 26 Apr 2019
  • Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey
    Authors: W. Zhang, Quan Z. Sheng, A. Alhazmi, Chenliang Li
    Tags: AAML
    Date: 21 Jan 2019
  • Analysis Methods in Neural Language Processing: A Survey
    Authors: Yonatan Belinkov, James R. Glass
    Date: 21 Dec 2018
  • Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
    Authors: Qi Lei, Lingfei Wu, Pin-Yu Chen, A. Dimakis, Inderjit S. Dhillon, Michael Witbrock
    Tags: AAML
    Date: 01 Dec 2018
  • Adversarial Reprogramming of Text Classification Neural Networks
    Authors: Paarth Neekhara, Shehzeen Samarah Hussain, Shlomo Dubnov, F. Koushanfar
    Tags: AAML, SILM
    Date: 06 Sep 2018
  • Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
    Authors: Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
    Tags: SILM, AAML
    Date: 03 Mar 2018
  • Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
    Authors: Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
    Tags: MLT, FAtt
    Date: 21 Feb 2018