Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models

1 January 2021
Tongshuang Wu
Marco Tulio Ribeiro
Jeffrey Heer
Daniel S. Weld
arXiv:2101.00288

Papers citing "Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models"

Showing 28 of 178 citing papers.
• Counterfactually Evaluating Explanations in Recommender Systems
  Yuanshun Yao, Chong Wang, Hang Li · OffRL, LRM · 02 Mar 2022
• Automatically Generating Counterfactuals for Relation Classification
  Mi Zhang, T. Qian, Tingyu Zhang · CML · 22 Feb 2022
• Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers
  Krystal Maughan, Ivoline C. Ngong, Joseph P. Near · 09 Feb 2022
• Red Teaming Language Models with Language Models
  Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, G. Irving · AAML · 07 Feb 2022
• Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners
  K. Ramamurthy, Amit Dhurandhar, Dennis L. Wei, Zaid Bin Tariq · FAtt · 02 Feb 2022
• ROCK: Causal Inference Principles for Reasoning about Commonsense Causality
  Jiayao Zhang, Hongming Zhang, Weijie J. Su, Dan Roth · CML, LRM · 31 Jan 2022
• Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants
  Max Bartolo, Tristan Thrush, Sebastian Riedel, Pontus Stenetorp, Robin Jia, Douwe Kiela · 16 Dec 2021
• Measure and Improve Robustness in NLP Models: A Survey
  Xuezhi Wang, Haohan Wang, Diyi Yang · 15 Dec 2021
• NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
  Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, ..., Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang · 06 Dec 2021
• How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task
  Urja Khurana, Eric T. Nalisnick, Antske Fokkens · MoMe · 18 Nov 2021
• SynthBio: A Case Study in Human-AI Collaborative Curation of Text Datasets
  Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, Sebastian Gehrmann · SyDa · 11 Nov 2021
• Counterfactual Explanations for Models of Code
  Jürgen Cito, Işıl Dillig, V. Murali, S. Chandra · AAML, LRM · 10 Nov 2021
• Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
  Bonan Min, Hayley L. Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth · LM&MA, VLM, AI4CE · 01 Nov 2021
• Retrieval-guided Counterfactual Generation for QA
  Bhargavi Paranjape, Matthew Lamm, Ian Tenney · 14 Oct 2021
• AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts
  Tongshuang Wu, Michael Terry, Carrie J. Cai · LLMAG, AI4CE, LRM · 04 Oct 2021
• Enhancing Model Robustness and Fairness with Causality: A Regularization Approach
  Zhao Wang, Kai Shu, A. Culotta · OOD · 03 Oct 2021
• Let the CAT out of the bag: Contrastive Attributed explanations for Text
  Saneem A. Chemmengath, A. Azad, Ronny Luss, Amit Dhurandhar · FAtt · 16 Sep 2021
• Post-hoc Interpretability for Neural NLP: A Survey
  Andreas Madsen, Siva Reddy, A. Chandar · XAI · 10 Aug 2021
• Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition
  Mor Geva, Tomer Wolfson, Jonathan Berant · ReLM, LRM · 29 Jul 2021
• Tailor: Generating and Perturbing Text with Semantic Controls
  Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner · 15 Jul 2021
• An Investigation of the (In)effectiveness of Counterfactually Augmented Data
  Nitish Joshi, He He · OODD · 01 Jul 2021
• Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests
  Victor Veitch, Alexander D'Amour, Steve Yadlowsky, Jacob Eisenstein · OOD · 31 May 2021
• Local Interpretations for Explainable Natural Language Processing: A Survey
  Siwen Luo, Hamish Ivison, S. Han, Josiah Poon · MILM · 20 Mar 2021
• Contrastive Explanations for Model Interpretability
  Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg · 02 Mar 2021
• Benchmarking and Survey of Explanation Methods for Black Box Models
  F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo · XAI · 25 Feb 2021
• Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing
  Sarah Wiegreffe, Ana Marasović · XAI · 24 Feb 2021
• GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
  Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018
• Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
  Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer · AAML, GAN · 17 Apr 2018