Have Attention Heads in BERT Learned Constituency Grammar? (arXiv:2102.07926)

Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021
16 February 2021
Ziyang Luo

Papers citing "Have Attention Heads in BERT Learned Constituency Grammar?"

4 citing papers:
Linguistic Interpretability of Transformer-based Language Models: a systematic review
Miguel López-Otal, Jorge Gracia, Jordi Bernad, Carlos Bobed, Lucía Pitarch-Ballesteros, Emma Anglés-Herrero
09 Apr 2025
Are there identifiable structural parts in the sentence embedding whole?
Vivi Nastase, Paola Merlo
24 Jun 2024
Improving word mover's distance by leveraging self-attention matrix
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Hiroaki Yamagiwa, Sho Yokoi, Hidetoshi Shimodaira
11 Nov 2022
Acceptability Judgements via Examining the Topology of Attention Maps
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
D. Cherniavskii, Eduard Tulchinskii, Vladislav Mikhailov, Irina Proskurina, Laida Kushnareva, Ekaterina Artemova, S. Barannikov, Irina Piontkovskaya, D. Piontkovski, Evgeny Burnaev
19 May 2022