ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Unsupervised Distillation of Syntactic Information from Contextualized Word Representations

11 October 2020 · Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg

Papers citing "Unsupervised Distillation of Syntactic Information from Contextualized Word Representations"

Exploiting Inductive Bias in Transformers for Unsupervised Disentanglement of Syntax and Semantics with VAEs
G. Felhi, Joseph Le Roux, Djamé Seddah
12 May 2022
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
01 Feb 2021
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
03 May 2018