Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training

16 April 2020 · arXiv:2004.07790
Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel
AI4CE

Papers citing "Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training" (4 of 4 papers shown)
Shortcut Learning of Large Language Models in Natural Language Understanding
Mengnan Du, Fengxiang He, Na Zou, Dacheng Tao, Xia Hu
KELM, OffRL
25 Aug 2022
Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
02 May 2018
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer
AAML, GAN
17 Apr 2018