
arXiv:2011.03877

Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data
8 November 2020
A. Arun
Soumya Batra
Vikas Bhardwaj
Ashwini Challa
Pinar E. Donmez
Peyman Heidari
Hakan Inan
Shashank Jain
Anuj Kumar
Shawn Mei
Karthika Mohan
Michael White

Papers citing "Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data"

3 citing papers shown.
TaTa: A Multilingual Table-to-Text Dataset for African Languages
Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur P. Parikh, Clara E. Rivera
LMTD · 31 Oct 2022
Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods
Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, Hua Wu
HILM · 10 Mar 2022
Revisiting Self-Training for Neural Sequence Generation
Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato
SSL, LRM · 30 Sep 2019