GLGE: A New General Language Generation Evaluation Benchmark

24 November 2020 · arXiv:2011.11928

Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan
Abstract

Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress in pretraining and transfer learning for Natural Language Processing (NLP). These benchmarks mostly focus on a range of Natural Language Understanding (NLU) tasks and do not cover Natural Language Generation (NLG) models. In this paper, we present the General Language Generation Evaluation (GLGE) benchmark, a new multi-task benchmark for evaluating the generalization capabilities of NLG models across eight language generation tasks. For each task, we further design three subtasks of increasing difficulty (GLGE-Easy, GLGE-Medium, and GLGE-Hard), yielding 24 subtasks in total for a comprehensive comparison of model performance. To encourage research on pretraining and transfer learning for NLG models, we make GLGE publicly available and build a leaderboard with strong baselines including MASS, BART, and ProphetNet. The source code and dataset are publicly available at https://github.com/microsoft/glge.
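
The 8 tasks × 3 difficulty tiers = 24 subtasks structure described in the abstract can be made concrete with a short enumeration. The sketch below is illustrative only, not the official GLGE loader or its API; the eight task datasets follow the composition described in the GLGE paper, and the difficulty tiers are the GLGE-Easy/Medium/Hard splits named above.

    # Illustrative sketch: enumerating GLGE's 24 subtasks.
    # Not the official GLGE code; dataset names follow the paper's
    # task composition and are listed here for illustration.
    from itertools import product

    TASKS = [
        "CNN/DailyMail",  # abstractive summarization
        "Gigaword",       # headline generation
        "XSUM",           # extreme summarization
        "MSNews",         # news summarization
        "SQuAD 1.1",      # answer-aware question generation
        "MSQG",           # question generation
        "CoQA",           # conversational question answering
        "PersonaChat",    # personalized dialogue
    ]
    DIFFICULTIES = ["Easy", "Medium", "Hard"]

    subtasks = [f"{task} (GLGE-{level})"
                for task, level in product(TASKS, DIFFICULTIES)]
    assert len(subtasks) == 24  # matches the count reported in the abstract

    for name in subtasks:
        print(name)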
