ResearchTrend.AI

Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs


26 September 2022
Đorđe Miladinović
Kumar Shridhar
Kushal Kumar Jain
Max B. Paulus
J. M. Buhmann
Mrinmaya Sachan
Carl Allen
    DRL

Papers citing "Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs"

5 papers:

  • Dior-CVAE: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation (24 May 2023)
    Tianyu Yang, Thy Thy Tran, Iryna Gurevych · DiffM
  • A Unified View of Long-Sequence Models towards Modeling Million-Scale Dependencies (13 Feb 2023)
    Hongyu Hè, Marko Kabić
  • Automatic Generation of Socratic Subquestions for Teaching Math Word Problems (23 Nov 2022)
    Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, Mrinmaya Sachan · AIMat
  • Masked Autoencoders Are Scalable Vision Learners (11 Nov 2021)
    Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick · ViT, TPM
  • Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator (09 Oct 2020)
    Max B. Paulus, Chris J. Maddison, Andreas Krause · BDL