BERT Rediscovers the Classical NLP Pipeline

15 May 2019
Ian Tenney
Dipanjan Das
Ellie Pavlick
    MILM
    SSeg

Papers citing "BERT Rediscovers the Classical NLP Pipeline"

50 / 231 papers shown
Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning
Zhen-Ru Zhang
Chuanqi Tan
Haiyang Xu
Chengyu Wang
Jun Huang
Songfang Huang
25
29
0
24 May 2023
MuLER: Detailed and Scalable Reference-based Evaluation
Taelin Karidi
Leshem Choshen
Gal Patel
Omri Abend
25
0
0
24 May 2023
On Robustness of Finetuned Transformer-based NLP Models
Pavan Kalyan Reddy Neerudu
S. Oota
Mounika Marreddy
Venkateswara Rao Kagita
Manish Gupta
21
7
0
23 May 2023
A Trip Towards Fairness: Bias and De-Biasing in Large Language Models
Leonardo Ranaldi
Elena Sofia Ruzzetti
Davide Venditti
Dario Onorati
Fabio Massimo Zanzotto
27
33
0
23 May 2023
Automatic Readability Assessment for Closely Related Languages
Joseph Marvin Imperial
E. Kochmar
16
8
0
22 May 2023
Should We Attend More or Less? Modulating Attention for Fairness
A. Zayed
Gonçalo Mordido
Samira Shabanian
Sarath Chandar
35
10
0
22 May 2023
Explaining How Transformers Use Context to Build Predictions
Javier Ferrando
Gerard I. Gállego
Ioannis Tsiamas
Marta R. Costa-jussà
18
31
0
21 May 2023
PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models
Leonardo Ranaldi
Elena Sofia Ruzzetti
Fabio Massimo Zanzotto
31
6
0
08 May 2023
Redundancy and Concept Analysis for Code-trained Language Models
Arushi Sharma
Zefu Hu
Christopher Quinn
Ali Jannesari
70
1
0
01 May 2023
Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond
Ensheng Shi
Yanlin Wang
Hongyu Zhang
Lun Du
Shi Han
Dongmei Zhang
Hongbin Sun
28
42
0
11 Apr 2023
Low-Shot Learning for Fictional Claim Verification
Viswanath Chadalapaka
Derek Nguyen
Joonwon Choi
Shaunak Joshi
Mohammad Rostami
8
1
0
05 Apr 2023
Neural Architecture Search for Effective Teacher-Student Knowledge Transfer in Language Models
Aashka Trivedi
Takuma Udagawa
Michele Merler
Rameswar Panda
Yousef El-Kurdi
Bishwaranjan Bhattacharjee
22
6
0
16 Mar 2023
An Overview on Language Models: Recent Developments and Outlook
Chengwei Wei
Yun Cheng Wang
Bin Wang
C.-C. Jay Kuo
17
41
0
10 Mar 2023
Mask-guided BERT for Few Shot Text Classification
Wenxiong Liao
Zheng Liu
Haixing Dai
Zihao Wu
Yiyang Zhang
...
Dajiang Zhu
Tianming Liu
Sheng R. Li
Xiang Li
Hongmin Cai
VLM
36
39
0
21 Feb 2023
CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code
Shuyan Zhou
Uri Alon
Sumit Agarwal
Graham Neubig
ELM
ALM
27
98
0
10 Feb 2023
The geometry of hidden representations of large transformer models
L. Valeriani
Diego Doimo
F. Cuturello
A. Laio
A. Ansuini
Alberto Cazzaniga
MILM
21
48
0
01 Feb 2023
A Discerning Several Thousand Judgments: GPT-3 Rates the Article + Adjective + Numeral + Noun Construction
Kyle Mahowald
22
24
0
29 Jan 2023
Interpretability in Activation Space Analysis of Transformers: A Focused Survey
Soniya Vijayakumar
AI4CE
27
3
0
22 Jan 2023
Dissociating language and thought in large language models
Kyle Mahowald
Anna A. Ivanova
I. Blank
Nancy Kanwisher
J. Tenenbaum
Evelina Fedorenko
ELM
ReLM
23
209
0
16 Jan 2023
Deep Learning Models to Study Sentence Comprehension in the Human Brain
S. Arana
Jacques Pesnot Lerousseau
P. Hagoort
21
10
0
16 Jan 2023
SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings
Jan Engler
Sandipan Sikdar
Marlene Lutz
M. Strohmaier
24
7
0
11 Jan 2023
Examining Political Rhetoric with Epistemic Stance Detection
Ankita Gupta
Su Lin Blodgett
Justin H. Gross
Brendan T. O'Connor
20
0
0
29 Dec 2022
Intent Recognition in Conversational Recommender Systems
Sahar Moradizeyveh
35
5
0
06 Dec 2022
Event knowledge in large language models: the gap between the impossible and the unlikely
Carina Kauf
Anna A. Ivanova
Giulia Rambelli
Emmanuele Chersoni
Jingyuan Selena She
Zawad Chowdhury
Evelina Fedorenko
Alessandro Lenci
30
67
0
02 Dec 2022
Syntactic Substitutability as Unsupervised Dependency Syntax
Jasper Jian
Siva Reddy
16
3
0
29 Nov 2022
Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes
Yiqiao Jin
Xiting Wang
Y. Hao
Yizhou Sun
Xing Xie
28
11
0
24 Nov 2022
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
Hao Peng
Xiaozhi Wang
Shengding Hu
Hailong Jin
Lei Hou
Juanzi Li
Zhiyuan Liu
Qun Liu
10
22
0
08 Nov 2022
Logographic Information Aids Learning Better Representations for Natural Language Inference
Zijian Jin
Duygu Ataman
15
0
0
03 Nov 2022
A Law of Data Separation in Deep Learning
Hangfeng He
Weijie J. Su
OOD
21
36
0
31 Oct 2022
Controlled Text Reduction
Aviv Slobodkin
Paul Roit
Eran Hirsch
Ori Ernst
Ido Dagan
29
10
0
24 Oct 2022
Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs
Maarten Sap
Ronan Le Bras
Daniel Fried
Yejin Choi
22
205
0
24 Oct 2022
Structural generalization is hard for sequence-to-sequence models
Yuekun Yao
Alexander Koller
22
21
0
24 Oct 2022
On the Transformation of Latent Space in Fine-Tuned NLP Models
Nadir Durrani
Hassan Sajjad
Fahim Dalvi
Firoj Alam
29
18
0
23 Oct 2022
Probing with Noise: Unpicking the Warp and Weft of Embeddings
Filip Klubicka
John D. Kelleher
24
4
0
21 Oct 2022
Transformers Learn Shortcuts to Automata
Bingbin Liu
Jordan T. Ash
Surbhi Goel
A. Krishnamurthy
Cyril Zhang
OffRL
LRM
26
155
0
19 Oct 2022
Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning
Shuo Xie
Jiahao Qiu
Ankita Pasad
Li Du
Qing Qu
Hongyuan Mei
32
16
0
18 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
25
82
0
13 Oct 2022
Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu
Karel D'Oosterlinck
Atticus Geiger
Amir Zur
Christopher Potts
MILM
71
35
0
28 Sep 2022
Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models
Taeuk Kim
37
1
0
15 Sep 2022
Analyzing Transformers in Embedding Space
Guy Dar
Mor Geva
Ankit Gupta
Jonathan Berant
19
83
0
06 Sep 2022
A Syntax Aware BERT for Identifying Well-Formed Queries in a Curriculum Framework
Avinash Madasu
Anvesh Rao Vijjini
14
0
0
21 Aug 2022
An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen
Lijie Wang
Ying Chen
Xinyan Xiao
Jing Liu
Hua-Hong Wu
27
4
0
28 Jul 2022
BOSS: Bottom-up Cross-modal Semantic Composition with Hybrid Counterfactual Training for Robust Content-based Image Retrieval
Wenqiao Zhang
Jiannan Guo
Meng Li
Haochen Shi
Shengyu Zhang
Juncheng Li
Siliang Tang
Yueting Zhuang
47
6
0
09 Jul 2022
Probing via Prompting
Jiaoda Li
Ryan Cotterell
Mrinmaya Sachan
29
13
0
04 Jul 2022
Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
G. Felhi
Joseph Le Roux
Djamé Seddah
DRL
16
5
0
22 Jun 2022
Evaluating Self-Supervised Learning for Molecular Graph Embeddings
Hanchen Wang
Jean Kaddour
Shengchao Liu
Jian Tang
Joan Lasenby
Qi Liu
22
20
0
16 Jun 2022
Transition-based Abstract Meaning Representation Parsing with Contextual Embeddings
Yi Liang
47
0
0
13 Jun 2022
ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
Z. Yao
Reza Yazdani Aminabadi
Minjia Zhang
Xiaoxia Wu
Conglong Li
Yuxiong He
VLM
MQ
39
440
0
04 Jun 2022
On Building Spoken Language Understanding Systems for Low Resourced Languages
Akshat Gupta
17
8
0
25 May 2022
What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations
P. Piccirilli
Sabine Schulte im Walde
8
4
0
23 May 2022