Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

30 January 2020
Taeuk Kim, Jihun Choi, Daniel Edmiston, Sang-goo Lee

Papers citing "Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction"

17 / 17 papers shown

An Overview on Language Models: Recent Developments and Outlook
Chengwei Wei, Yun Cheng Wang, Bin Wang, C.-C. Jay Kuo
10 Mar 2023

Syntactic Substitutability as Unsupervised Dependency Syntax
Jasper Jian, Siva Reddy
29 Nov 2022

Emergent Linguistic Structures in Neural Networks are Fragile
Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska
31 Oct 2022

CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure
Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao
07 Oct 2022

Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models
Taeuk Kim
15 Sep 2022

An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen, Lijie Wang, Ying Chen, Xinyan Xiao, Jing Liu, Hua-Hong Wu
28 Jul 2022

Unsupervised Slot Schema Induction for Task-oriented Dialog
Dian Yu, Mingqiu Wang, Yuan Cao, Izhak Shafran, Laurent El Shafey, H. Soltau
09 May 2022

Bridging Pre-trained Language Models and Hand-crafted Features for Unsupervised POS Tagging
Houquan Zhou, Yang Li, Zhenghua Li, Min Zhang
19 Mar 2022

What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code
Yao Wan, Wei-Ye Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, Hairong Jin
14 Feb 2022

MDERank: A Masked Document Embedding Rank Approach for Unsupervised Keyphrase Extraction
Linhan Zhang, Qian Chen, Wen Wang, Chong Deng, Shiliang Zhang, Bing Li, Wei Wang, Xin Cao
13 Oct 2021

Co-training an Unsupervised Constituency Parser with Weak Supervision
Nickil Maveli, Shay B. Cohen
05 Oct 2021

Analysing the Effect of Masking Length Distribution of MLM: An Evaluation Framework and Case Study on Chinese MRC Datasets
Changchang Zeng, Shaobo Li
29 Sep 2021

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
Colin Wei, Sang Michael Xie, Tengyu Ma
17 Jun 2021

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
14 Jun 2021

UCPhrase: Unsupervised Context-aware Quality Phrase Tagging
Xiaotao Gu, Zihan Wang, Zhenyu Bi, Yu Meng, Liyuan Liu, Jiawei Han, Jingbo Shang
28 May 2021

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
18 Mar 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018