How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations

11 September 2019
Betty van Aken, B. Winter, Alexander Löser, Felix Alexander Gers
arXiv: 1909.04925
Papers citing "How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations"

13 papers

Block-wise Bit-Compression of Transformer-based Models
Gaochen Dong, W. Chen
16 Mar 2023

Towards Practical Few-shot Federated NLP
Dongqi Cai, Yaozong Wu, Haitao Yuan, Shangguang Wang, F. Lin, Mengwei Xu
FedML
01 Dec 2022

A context-aware knowledge transferring strategy for CTC-based ASR
Keda Lu, Kuan-Yu Chen
12 Oct 2022

Interactive Question Answering Systems: Literature Review
Giovanni Maria Biancofiore, Yashar Deldjoo, T. D. Noia, E. Sciascio, F. Narducci
04 Sep 2022

Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering?
Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi
01 Sep 2022

QRelScore: Better Evaluating Generated Questions with Deeper Understanding of Context-aware Relevance
Xiaoqiang Wang, Bang Liu, Siliang Tang, Lingfei Wu
29 Apr 2022

Identifying Introductions in Podcast Episodes from Automatically Generated Transcripts
Elise Jing, K. Schneck, Dennis Egan, Scott A. Waterman
14 Oct 2021

What's in your Head? Emergent Behaviour in Multi-Task Transformer Models
Mor Geva, Uri Katz, Aviv Ben-Arie, Jonathan Berant
LRM
13 Apr 2021

An Embarrassingly Simple Model for Dialogue Relation Extraction
Fuzhao Xue, Aixin Sun, Hao Zhang, Jinjie Ni, E. Chng
27 Dec 2020

Inserting Information Bottlenecks for Attribution in Transformers
Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy J. Lin
27 Dec 2020

Modifying Memories in Transformer Models
Chen Zhu, A. S. Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, Sanjiv Kumar
KELM
01 Dec 2020

The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks
Brihi Joshi, Neil Shah, Francesco Barbieri, Leonardo Neves
02 Nov 2020

Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models
Joseph F. DeRose, Jiayao Wang, M. Berger
03 Sep 2020