Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema

16 April 2021
Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth
ReLM · LRM

Papers citing "Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema"

15 / 15 papers shown

MASS: Overcoming Language Bias in Image-Text Matching
Jiwan Chung, Seungwon Lim, Sangkyu Lee, Youngjae Yu
VLM · 20 Jan 2025

WinoPron: Revisiting English Winogender Schemas for Consistency, Coverage, and Grammatical Case
Vagrant Gautam, Julius Steuer, Eileen Bingert, Ray Johns, Anne Lauscher, Dietrich Klakow
09 Sep 2024

Picturing Ambiguity: A Visual Twist on the Winograd Schema Challenge
Brendan Park, Madeline Janecek, Naser Ezzati-Jivan, Yifeng Li, Ali Emami
25 May 2024

CompA: Addressing the Gap in Compositional Reasoning in Audio-Language Models
Sreyan Ghosh, Ashish Seth, Sonal Kumar, Utkarsh Tyagi, Chandra Kiran Reddy Evuru, S. Ramaneswaran, S. Sakshi, Oriol Nieto, R. Duraiswami, Dinesh Manocha
AuLLM · VLM · CoGe · 12 Oct 2023

Causal interventions expose implicit situation models for commonsense language understanding
Takateru Yamakoshi, James L. McClelland, A. Goldberg, Robert D. Hawkins
06 Jun 2023

Event knowledge in large language models: the gap between the impossible and the unlikely
Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, Alessandro Lenci
02 Dec 2022

Measuring Reliability of Large Language Models through Semantic Consistency
Harsh Raj, Domenic Rosati, S. Majumdar
HILM · 10 Nov 2022

CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm
Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, Dan Roth
LRM · 12 Oct 2022

On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations
Roy Schwartz, Gabriel Stanovsky
27 Apr 2022

Testing the Ability of Language Models to Interpret Figurative Language
Emmy Liu, Chenxuan Cui, Kenneth Zheng, Graham Neubig
ELM · LRM · 26 Apr 2022

Commonsense Knowledge in Word Associations and ConceptNet
Chunhua Liu, Trevor Cohn, Lea Frermann
20 Sep 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM · 01 Feb 2021

The Sensitivity of Language Models and Humans to Winograd Schema Perturbations
Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, Anders Søgaard
ReLM · LRM · 04 May 2020

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant
21 Aug 2019

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
02 May 2018