ResearchTrend.AI
Mediators: Conversational Agents Explaining NLP Model Behavior (arXiv:2206.06029)
13 June 2022
Nils Feldhus, A. Ravichandran, Sebastian Möller
Papers citing "Mediators: Conversational Agents Explaining NLP Model Behavior"
13 / 13 papers shown
1. JailbreakLens: Visual Analysis of Jailbreak Attacks Against Large Language Models
   Yingchaojie Feng, Zhizhang Chen, Zhining Kang, Sijia Wang, Minfeng Zhu, Wei Zhang, Wei Chen
   12 Apr 2024

2. InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations
   Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller
   09 Oct 2023

3. May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
   Tong Zhang, X. J. Yang, Boyang Albert Li
   25 Sep 2023

4. Diagnosing Infeasible Optimization Problems Using Large Language Models
   Hao Chen, Gonzalo E. Constante-Flores, Canzhou Li
   Tags: AI4CE
   23 Aug 2023

5. CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models
   Xingbo Wang, Renfei Huang, Zhihua Jin, Tianqing Fang, Huamin Qu
   Tags: VLM, ReLM, LRM
   23 Jul 2023

6. Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
   Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
   Tags: FAtt
   13 Oct 2022

7. TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
   Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh
   08 Jul 2022

8. Human Interpretation of Saliency-based Explanation Over Text
   Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
   Tags: MILM, XAI, FAtt
   27 Jan 2022

9. Measuring Association Between Labels and Free-Text Rationales
   Sarah Wiegreffe, Ana Marasović, Noah A. Smith
   24 Oct 2020

10. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
    Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
    15 Oct 2020

11. Scaling Laws for Neural Language Models
    Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
    23 Jan 2020

12. e-SNLI: Natural Language Inference with Natural Language Explanations
    Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
    Tags: LRM
    04 Dec 2018

13. Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim
    Tags: XAI, FaML
    28 Feb 2017