RoBERTa: A Robustly Optimized BERT Pretraining Approach (arXiv:1907.11692)

26 July 2019
Yinhan Liu
Myle Ott
Naman Goyal
Jingfei Du
Mandar Joshi
Danqi Chen
Omer Levy
M. Lewis
Luke Zettlemoyer
Veselin Stoyanov
    AIMat

Papers citing "RoBERTa: A Robustly Optimized BERT Pretraining Approach"

50 / 2,937 papers shown
DirectProbe: Studying Representations without Classifiers
Yichu Zhou
Vivek Srikumar
17
27
0
13 Apr 2021
Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi
Danai Koutra
KELM
22
51
0
12 Apr 2021
Escaping the Big Data Paradigm with Compact Transformers
Ali Hassani
Steven Walton
Nikhil Shah
Abulikemu Abuduweili
Jiachen Li
Humphrey Shi
54
462
0
12 Apr 2021
NLI Data Sanity Check: Assessing the Effect of Data Corruption on Model Performance
Aarne Talman
Marianna Apidianaki
S. Chatzikyriakidis
Jörg Tiedemann
14
10
0
10 Apr 2021
COVID-19 Named Entity Recognition for Vietnamese
Thinh Hung Truong
M. Dao
Dat Quoc Nguyen
21
38
0
08 Apr 2021
Layer Reduction: Accelerating Conformer-Based Self-Supervised Model via Layer Consistency
Jinchuan Tian
Rongzhi Gu
Helin Wang
Yuexian Zou
21
0
0
08 Apr 2021
HumAID: Human-Annotated Disaster Incidents Data from Twitter with Deep Learning Benchmarks
Firoj Alam
U. Qazi
Muhammad Imran
Ferda Ofli
15
64
0
07 Apr 2021
Intent Detection and Slot Filling for Vietnamese
M. Dao
Thinh Hung Truong
Dat Quoc Nguyen
VLM
15
35
0
05 Apr 2021
A Heuristic-driven Uncertainty based Ensemble Framework for Fake News Detection in Tweets and News Articles
Sourya Dipta Das
Ayan Basak
S. Dutta
19
47
0
05 Apr 2021
Going deeper with Image Transformers
Hugo Touvron
Matthieu Cord
Alexandre Sablayrolles
Gabriel Synnaeve
Hervé Jégou
ViT
23
986
0
31 Mar 2021
Deep Neural Approaches to Relation Triplets Extraction: A Comprehensive Survey
Tapas Nayak
Navonil Majumder
Pawan Goyal
Soujanya Poria
ViT
12
49
0
31 Mar 2021
On the Adversarial Robustness of Vision Transformers
Rulin Shao
Zhouxing Shi
Jinfeng Yi
Pin-Yu Chen
Cho-Jui Hsieh
ViT
25
137
0
29 Mar 2021
Efficient Explanations from Empirical Explainers
Robert Schwarzenberg
Nils Feldhus
Sebastian Möller
FAtt
27
9
0
29 Mar 2021
Machine Learning Meets Natural Language Processing -- The story so far
N. Galanis
P. Vafiadis
K.-G. Mirzaev
G. Papakostas
30
6
0
27 Mar 2021
Bertinho: Galician BERT Representations
David Vilares
Marcos Garcia
Carlos Gómez-Rodríguez
43
22
0
25 Mar 2021
Vision Transformers for Dense Prediction
René Ranftl
Alexey Bochkovskiy
V. Koltun
ViT
MDE
19
1,659
0
24 Mar 2021
FastMoE: A Fast Mixture-of-Expert Training System
Jiaao He
J. Qiu
Aohan Zeng
Zhilin Yang
Jidong Zhai
Jie Tang
ALM
MoE
11
94
0
24 Mar 2021
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Gregor Betz
Kyle Richardson
Christian Voigt
ReLM
LRM
8
29
0
24 Mar 2021
Czert -- Czech BERT-like Model for Language Representation
Jakub Sido
O. Pražák
P. Přibáň
Jan Pasek
Michal Seják
Miloslav Konopík
8
43
0
24 Mar 2021
The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures
Sushant Singh
A. Mahmood
AI4TS
55
92
0
23 Mar 2021
Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection
Jan Philip Wahle
Terry Ruas
Norman Meuschke
Bela Gipp
17
34
0
23 Mar 2021
Instance-level Image Retrieval using Reranking Transformers
Fuwen Tan
Jiangbo Yuan
Vicente Ordonez
ViT
15
89
0
22 Mar 2021
BERT: A Review of Applications in Natural Language Processing and Understanding
M. V. Koroteev
VLM
17
194
0
22 Mar 2021
Identifying Machine-Paraphrased Plagiarism
Jan Philip Wahle
Terry Ruas
Tomáš Foltýnek
Norman Meuschke
Bela Gipp
11
30
0
22 Mar 2021
DeepViT: Towards Deeper Vision Transformer
Daquan Zhou
Bingyi Kang
Xiaojie Jin
Linjie Yang
Xiaochen Lian
Zihang Jiang
Qibin Hou
Jiashi Feng
ViT
19
510
0
22 Mar 2021
Exploiting Method Names to Improve Code Summarization: A Deliberation Multi-Task Learning Approach
Rui Xie
Wei Ye
Jinan Sun
Shikun Zhang
12
26
0
21 Mar 2021
API2Com: On the Improvement of Automatically Generated Code Comments Using API Documentations
Ramin Shahbazi
Rishab Sharma
Fatemeh H. Fard
11
25
0
19 Mar 2021
GPT Understands, Too
Xiao Liu
Yanan Zheng
Zhengxiao Du
Ming Ding
Yujie Qian
Zhilin Yang
Jie Tang
VLM
34
1,143
0
18 Mar 2021
On the Role of Images for Analyzing Claims in Social Media
Gullal Singh Cheema
Sherzod Hakimov
Eric Müller-Budack
Ralph Ewerth
11
10
0
17 Mar 2021
Towards Few-Shot Fact-Checking via Perplexity
Nayeon Lee
Yejin Bang
Andrea Madotto
Madian Khabsa
Pascale Fung
AAML
11
90
0
17 Mar 2021
Investigating Monolingual and Multilingual BERT Models for Vietnamese Aspect Category Detection
D. Thin
Lac Si Le
V. Hoang
N. Nguyen
23
10
0
17 Mar 2021
Structural Adapters in Pretrained Language Models for AMR-to-text Generation
Leonardo F. R. Ribeiro
Yue Zhang
Iryna Gurevych
33
69
0
16 Mar 2021
How Many Data Points is a Prompt Worth?
Teven Le Scao
Alexander M. Rush
VLM
32
295
0
15 Mar 2021
Deep Discourse Analysis for Generating Personalized Feedback in Intelligent Tutor Systems
Matt Grenander
Robert Belfer
E. Kochmar
Iulian Serban
François St-Hilaire
Jackie C.K. Cheung
AI4Ed
11
17
0
13 Mar 2021
Are NLP Models really able to Solve Simple Math Word Problems?
Arkil Patel
S. Bhattamishra
Navin Goyal
ReLM
LRM
27
763
0
12 Mar 2021
Inductive Relation Prediction by BERT
H. Zha
Zhiyu Zoey Chen
Xifeng Yan
21
54
0
12 Mar 2021
The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models
Go Inoue
Bashar Alhafni
Nurpeiis Baimukan
Houda Bouamor
Nizar Habash
30
223
0
11 Mar 2021
CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review
Dan Hendrycks
Collin Burns
Anya Chen
Spencer Ball
ELM
AILaw
13
179
0
10 Mar 2021
BERTese: Learning to Speak to BERT
Adi Haviv
Jonathan Berant
Amir Globerson
8
122
0
09 Mar 2021
Text Simplification by Tagging
Kostiantyn Omelianchuk
Vipul Raheja
Oleksandr Skurzhanskyi
8
45
0
08 Mar 2021
Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees
Jiangang Bai
Yujing Wang
Yiren Chen
Yaming Yang
Jing Bai
J. Yu
Yunhai Tong
37
104
0
07 Mar 2021
MalBERT: Using Transformers for Cybersecurity and Malicious Software Detection
Abir Rahali
M. Akhloufi
19
30
0
05 Mar 2021
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin
Eduard A. Gorbunov
Vsevolod Plokhotnyuk
Gennady Pekhimenko
19
31
0
04 Mar 2021
Natural Language Understanding for Argumentative Dialogue Systems in the Opinion Building Domain
W. A. Abro
Annalena Aicher
Niklas Rach
Stefan Ultes
Wolfgang Minker
Guilin Qi
23
32
0
03 Mar 2021
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina
Maxat Tezekbayev
Nuradil Kozhakhmet
Madina Babazhanova
Matthias Gallé
Z. Assylbekov
29
8
0
02 Mar 2021
Disentangling Syntax and Semantics in the Brain with Deep Networks
Charlotte Caucheteux
Alexandre Gramfort
J. King
26
69
0
02 Mar 2021
Contrastive Explanations for Model Interpretability
Alon Jacovi
Swabha Swayamdipta
Shauli Ravfogel
Yanai Elazar
Yejin Choi
Yoav Goldberg
30
95
0
02 Mar 2021
A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives
Nils Rethmeier
Isabelle Augenstein
SSL
VLM
79
90
0
25 Feb 2021
ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning
Xin Xie
Xiangnan Chen
Xiang Chen
Yong Wang
Ningyu Zhang
Shumin Deng
Huajun Chen
34
2
0
25 Feb 2021
LogME: Practical Assessment of Pre-trained Models for Transfer Learning
Kaichao You
Yong Liu
Jianmin Wang
Mingsheng Long
14
178
0
22 Feb 2021