KERMIT: Generative Insertion-Based Modeling for Sequences

4 June 2019
William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, Jakob Uszkoreit
arXiv:1906.01604
Abstract

We present KERMIT, a simple insertion-based approach to generative modeling for sequences and sequence pairs. KERMIT models the joint distribution and its decompositions (i.e., marginals and conditionals) using a single neural network and, unlike much prior work, does not rely on a prespecified factorization of the data distribution. During training, one can feed KERMIT paired data (x, y) to learn the joint distribution p(x, y), and optionally mix in unpaired data x or y to refine the marginals p(x) or p(y). During inference, we have access to the conditionals p(x ∣ y) and p(y ∣ x) in both directions. We can also sample from the joint distribution or the marginals. The model supports both serial fully autoregressive decoding and parallel partially autoregressive decoding, with the latter exhibiting an empirically logarithmic runtime. We demonstrate through experiments in machine translation, representation learning, and zero-shot cloze question answering that our unified approach is capable of matching or exceeding the performance of dedicated state-of-the-art systems across a wide range of tasks without the need for problem-specific architectural adaptation.
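To make the parallel, partially autoregressive decoding concrete, here is a minimal Python sketch; it is illustrative only and not the authors' implementation. The function `predict_insertions` is a hypothetical oracle standing in for the trained insertion network, replaying a fixed target sentence in balanced-binary order. Because every gap in the current canvas can receive an insertion at each step, the canvas roughly doubles in length per step, which is where the empirically logarithmic runtime comes from.

```python
from typing import List, Optional

EOS = None  # sentinel prediction: leave this gap empty

def align(canvas: List[str], target: List[str]) -> List[int]:
    """Map each canvas token to its index in target via greedy
    subsequence matching (assumes distinct tokens for simplicity)."""
    idxs, j = [], 0
    for tok in canvas:
        while target[j] != tok:
            j += 1
        idxs.append(j)
        j += 1
    return idxs

def predict_insertions(canvas: List[str], target: List[str]) -> List[Optional[str]]:
    """Hypothetical oracle standing in for the trained network: for each
    of the len(canvas) + 1 gaps, propose the middle token of the target
    span that still belongs in that gap (a balanced-binary insertion
    order), or EOS if the gap is already complete."""
    bounds = [-1] + align(canvas, target) + [len(target)]
    preds = []
    for g in range(len(canvas) + 1):
        lo, hi = bounds[g] + 1, bounds[g + 1]  # uncovered target span [lo, hi)
        preds.append(target[(lo + hi) // 2] if lo < hi else EOS)
    return preds

def parallel_decode(sentence: str) -> List[str]:
    """Insert into every gap simultaneously until all gaps predict EOS."""
    target, canvas, steps = sentence.split(), [], 0
    while True:
        preds = predict_insertions(canvas, target)
        if all(p is EOS for p in preds):
            return canvas
        # Apply insertions right-to-left so earlier gap indices stay valid.
        for g in reversed(range(len(preds))):
            if preds[g] is not EOS:
                canvas.insert(g, preds[g])
        steps += 1
        print(f"step {steps}: {' '.join(canvas)}")

parallel_decode("a b c d e f g h")  # finishes in 4 steps rather than 8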
