Rethinking Attention-Model Explainability through Faithfulness Violation Test

28 January 2022
Y. Liu, Haoliang Li, Yangyang Guo, Chen Kong, Jing Li, Shiqi Wang
FAtt

Papers citing "Rethinking Attention-Model Explainability through Faithfulness Violation Test"

2 papers shown.

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
292 · 6,003 · 0
20 Apr 2018

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu Pham, Christopher D. Manning
206 · 7,687 · 0
17 Aug 2015