Generate to Understand for Representation
Changshan Xue, Xiande Zhong, Xiaoqing Liu
arXiv: 2306.10056, 14 June 2023
Papers citing "Generate to Understand for Representation" (13 papers)
CSL: A Large-scale Chinese Scientific Literature Dataset
Yudong Li, Yuqing Zhang, Zhe Zhao, Lin-cheng Shen, Weijie Liu, Weiquan Mao, Hui Zhang
12 Sep 2022
Ranking-Enhanced Unsupervised Sentence Representation Learning
Yeon Seonwoo, Guoyin Wang, Changmin Seo, Sajal Choudhary, Jiwei Li, Xiang Li, Puyang Xu, Sunghyun Park, Alice H. Oh
09 Sep 2022
RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
Shitao Xiao, Zheng Liu, Yingxia Shao, Zhao Cao
24 May 2022
VLP: A Survey on Vision-Language Pre-training
Feilong Chen, Duzhen Zhang, Minglun Han, Xiuyi Chen, Jing Shi, Shuang Xu, Bo Xu
18 Feb 2022
Text and Code Embeddings by Contrastive Pre-Training
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, …, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng
24 Jan 2022
NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang
07 Nov 2021
EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models
Frederick Liu, T. Huang, Shihang Lyu, Siamak Shakeri, Hongkun Yu, Jing Li
16 Oct 2021
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, …, T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
15 Oct 2021
Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stanislav Peshterliev, Yashar Mehdad, Sonal Gupta, Wen-tau Yih
13 Oct 2021
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Hang Yan, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
13 Sep 2021
A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives
Nils Rethmeier, Isabelle Augenstein
25 Feb 2021
Pretrained Transformers for Text Ranking: BERT and Beyond
Jimmy J. Lin, Rodrigo Nogueira, Andrew Yates
13 Oct 2020
Efficient Estimation of Word Representations in Vector Space
Tomáš Mikolov, Kai Chen, G. Corrado, J. Dean
16 Jan 2013