COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
16 February 2021 (arXiv: 2102.08473)
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song
Papers citing "COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining" (8 of 8 papers shown):
Enhancing User Sequence Modeling through Barlow Twins-based Self-Supervised Learning
Yuhan Liu, Lin Ning, Neo Wu, Karan Singhal, Philip Mansfield, D. Berlowitz, Sushant Prakash, Bradley Green (02 May 2025)
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen (31 Dec 2020)
CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding
Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Jiawei Han, Weizhu Chen (16 Oct 2020)
Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks
Nandan Thakur, Nils Reimers, Johannes Daxenberger, Iryna Gurevych (16 Oct 2020)
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei (23 Jan 2020)
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze (21 Jan 2020)
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro (17 Sep 2019)
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman (20 Apr 2018)