Improving Compositional Generalization with Self-Training for Data-to-Text Generation

16 October 2021
Sanket Vaibhav Mehta, J. Rao, Yi Tay, Mihir Kale, Ankur P. Parikh, Emma Strubell

Papers citing "Improving Compositional Generalization with Self-Training for Data-to-Text Generation" (7 papers)

Deciphering the Role of Representation Disentanglement: Investigating Compositional Generalization in CLIP Models
Reza Abbasi, M. Rohban, M. Baghshah. 08 Jul 2024.

Joint Dropout: Improving Generalizability in Low-Resource Neural Machine Translation through Phrase Pair Variables
Ali Araabi, Vlad Niculae, Christof Monz. 24 Jul 2023.

DSI++: Updating Transformer Memory with New Documents
Sanket Vaibhav Mehta, Jai Gupta, Yi Tay, Mostafa Dehghani, Vinh Q. Tran, J. Rao, Marc Najork, Emma Strubell, Donald Metzler. 19 Dec 2022.

Neural Pipeline for Zero-Shot Data-to-Text Generation
Zdeněk Kasner, Ondrej Dusek. 30 Mar 2022.

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers
Yi Tay, Mostafa Dehghani, J. Rao, W. Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler. 22 Sep 2021.

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei. 23 Jan 2020.

Revisiting Self-Training for Neural Sequence Generation
Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato. 30 Sep 2019.