On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study

Annual Meeting of the Association for Computational Linguistics (ACL), 2021
2 June 2021
Divyansh Kaushik
Douwe Kiela
Zachary Chase Lipton
Anuj Kumar
AAML

Papers citing "On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study"

24 / 24 papers shown
MirrorCheck: Efficient Adversarial Defense for Vision-Language Models
Samar Fares
Klea Ziu
Toluwani Aremu
Nikita Durasov
Martin Takáč
Pascal Fua
Karthik Nandakumar
Ivan Laptev
VLM AAML
236
9
0
13 Jun 2024
An Image Is Worth 1000 Lies: Adversarial Transferability across Prompts on Vision-Language Models
Haochen Luo
Jindong Gu
Fengyuan Liu
Juil Sock
VLMVPVLMAAML
277
35
0
14 Mar 2024
Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yuval Reif
Roy Schwartz
272
8
0
30 May 2023
On Evaluating Adversarial Robustness of Large Vision-Language Models
Neural Information Processing Systems (NeurIPS), 2023
Yunqing Zhao
Tianyu Pang
Chao Du
Xiao Yang
Chongxuan Li
Ngai-Man Cheung
Min Lin
VLM AAML MLLM
478
264
0
26 May 2023
On Degrees of Freedom in Defining and Testing Natural Language Understanding
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Saku Sugawara
S. Tsugita
ELM
323
2
0
24 May 2023
Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Lukáš Mikula
Michal Štefánik
Marek Petrovič
Petr Sojka
198
6
0
11 May 2023
Supporting Human-AI Collaboration in Auditing LLMs with LLMs
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2023
Charvi Rastogi
Marco Tulio Ribeiro
Nicholas King
Harsha Nori
Saleema Amershi
ALM
250
87
0
19 Apr 2023
AGRO: Adversarial Discovery of Error-prone Groups for Robust Optimization
International Conference on Learning Representations (ICLR), 2022
Bhargavi Paranjape
Pradeep Dasigi
Vivek Srikumar
Luke Zettlemoyer
Hannaneh Hajishirzi
238
9
0
02 Dec 2022
Bridging the Training-Inference Gap for Dense Phrase Retrieval
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Gyuwan Kim
Jinhyuk Lee
Barlas Oğuz
Wenhan Xiong
Yizhe Zhang
Yashar Mehdad
William Yang Wang
143
2
0
25 Oct 2022
Benchmarking Long-tail Generalization with Likelihood Splits
Findings (Findings), 2022
Ameya Godbole
Robin Jia
ALM
203
10
0
13 Oct 2022
CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Tanay Dixit
Bhargavi Paranjape
Hannaneh Hajishirzi
Luke Zettlemoyer
SyDa
410
33
0
10 Oct 2022
Possible Stories: Evaluating Situated Commonsense Reasoning under Multiple Possible Scenarios
International Conference on Computational Linguistics (COLING), 2022
Mana Ashida
Saku Sugawara
193
6
0
16 Sep 2022
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks
Venelin Kovatchev
Trina Chatterjee
Venkata S Govindarajan
Jifan Chen
Eunsol Choi
...
K. Erk
Matthew Lease
Junyi Jessy Li
Yating Wu
Kyle Mahowald
AAML ELM
197
11
0
29 Jun 2022
Collecting high-quality adversarial data for machine reading comprehension tasks with humans and models in the loop
Damian Y. Romero Diaz
M. Aniol
John M. Culnan
138
0
0
28 Jun 2022
Resolving the Human Subjects Status of Machine Learning's Crowdworkers
Queue (ACM Queue), 2022
Divyansh Kaushik
Zachary Chase Lipton
A. London
211
4
0
08 Jun 2022
What Makes Reading Comprehension Questions Difficult?
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Saku Sugawara
Nikita Nangia
Alex Warstadt
Sam Bowman
ELM RALM
159
14
0
12 Mar 2022
WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Alisa Liu
Swabha Swayamdipta
Noah A. Smith
Yejin Choi
634
250
0
16 Jan 2022
Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants
Max Bartolo
Tristan Thrush
Sebastian Riedel
Pontus Stenetorp
Robin Jia
Douwe Kiela
336
35
0
16 Dec 2021
Combining Data-driven Supervision with Human-in-the-loop Feedback for Entity Resolution
Wenpeng Yin
Shelby Heinecke
Jia Li
N. Keskar
Michael J. Jones
Shouzhong Shi
Stanislav Georgiev
Kurt Milich
Joseph Esposito
Caiming Xiong
119
3
0
20 Nov 2021
Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair
Jason Phang
Angelica Chen
William Huang
Samuel R. Bowman
AAML
166
14
0
16 Nov 2021
Analyzing Dynamic Adversarial Training Data in the Limit
Eric Wallace
Adina Williams
Robin Jia
Douwe Kiela
496
31
0
16 Oct 2021
The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail
Sam Bowman
OffRL
362
48
0
15 Oct 2021
Retrieval-guided Counterfactual Generation for QA
Bhargavi Paranjape
Matthew Lamm
Ian Tenney
294
37
0
14 Oct 2021
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition
Transactions of the Association for Computational Linguistics (TACL), 2021
Mor Geva
Tomer Wolfson
Jonathan Berant
ReLM LRM
200
24
0
29 Jul 2021