arXiv: 2205.14912
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation
30 May 2022
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
Papers citing "E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation"
21 / 21 papers shown
- Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models. Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao, Min Zhang. 19 Sep 2024.
- Merging Experts into One: Improving Computational Efficiency of Mixture of Experts [MoE, MoMe]. Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao. 15 Oct 2023.
- Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer [MoE]. Boan Liu, Liang Ding, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng, Dacheng Tao. 15 Oct 2023.
- CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts. Rabindra Lamsal, M. Read, S. Karunasekera. 11 Sep 2023.
- Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining? Fei-Yue Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding. 24 Aug 2023.
- Revisiting Token Dropping Strategy in Efficient BERT Pretraining [VLM]. Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, Dacheng Tao. 24 May 2023.
- Prompt-Learning for Cross-Lingual Relation Extraction [LRM]. Chiaming Hsu, Changtong Zan, Liang Ding, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, Wenbin Hu. 20 Apr 2023.
- OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System. Chao Xue, W. Liu, Shunxing Xie, Zhenfang Wang, Jiaxing Li, ..., Shi-Yong Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao. 01 Mar 2023.
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT [AI4MH]. Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. 19 Feb 2023.
- Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE [VLM]. Qihuang Zhong, Liang Ding, Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, Dacheng Tao. 18 Feb 2023.
- Toward Human-Like Evaluation for Natural Language Generation with Error Analysis [ELM, ALM]. Qingyu Lu, Liang Ding, Liping Xie, Kanjian Zhang, Derek F. Wong, Dacheng Tao. 20 Dec 2022.
- Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE [VLM, ELM]. Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, ..., Yixin Chen, Xinbo Gao, Chun Miao, Xiaoou Tang, Dacheng Tao. 04 Dec 2022.
- Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models [AAML]. Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, Dacheng Tao. 11 Oct 2022.
- PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation [VLM, CLL]. Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. 22 Aug 2022.
- Dynamic Contrastive Distillation for Image-Text Retrieval [VLM]. Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Liqiong Shen, Dacheng Tao. 04 Jul 2022.
- Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation and Understanding [LRM]. Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao. 16 Apr 2022.
- The Power of Scale for Parameter-Efficient Prompt Tuning [VPVLM]. Brian Lester, Rami Al-Rfou, Noah Constant. 18 Apr 2021.
- Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. Timo Schick, Hinrich Schütze. 21 Jan 2020.
- Text Summarization with Pretrained Encoders [MILM]. Yang Liu, Mirella Lapata. 22 Aug 2019.
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [ELM]. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.
- Teaching Machines to Read and Comprehend. Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom. 10 Jun 2015.