ResearchTrend.AI

arXiv: 2005.00190

Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks

1 May 2020
Winston Wu
Dustin L. Arendt
Svitlana Volkova
    AAML

Papers citing "Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks"

3 citing papers shown
Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer
AAML · GAN · 17 Apr 2018
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI · FaML · 28 Feb 2017