
arXiv:2207.03777 (v2, latest)

Hidden Schema Networks

8 July 2022
Ramses J. Sanchez, L. Conrads, Pascal Welke, K. Cvejoski, C. Ojeda
Abstract

Most modern language models infer representations that, albeit powerful, lack both compositionality and semantic interpretability. Starting from the assumption that a large proportion of semantic content is necessarily relational, we introduce a neural language model that discovers networks of symbols (schemata) from text datasets. Using a variational autoencoder (VAE) framework, our model encodes sentences into sequences of symbols (composed representations), which correspond to the nodes visited by biased random walkers on a global latent graph. Sentences are then generated back, conditioned on the selected symbol sequences. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to train our model on language modeling tasks. Qualitatively, our results show that the model infers schema networks encoding different aspects of natural language. Quantitatively, the model achieves state-of-the-art scores on VAE language modeling benchmarks. Source code to reproduce our experiments is available at https://github.com/ramsesjsf/HiddenSchemaNetworks.
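To make the latent mechanism concrete: the abstract describes symbol sequences as the nodes visited by biased random walkers on a global latent graph, with the bias supplied per sentence by the encoder. The following is a minimal Python/NumPy sketch of that random-walk step only; it is not taken from the linked repository, and the function name, the per-node bias vector, and the toy ring graph are all illustrative assumptions. In the actual model the bias would come from the BERT encoder and the graph itself is learned.

import numpy as np

def biased_random_walk(adj, node_bias, length, rng=None):
    # adj:       (K, K) nonnegative adjacency matrix of the latent graph
    # node_bias: (K,) per-node bias, e.g. produced by a sentence encoder
    #            (here just a random vector; an assumption for illustration)
    # length:    number of nodes (symbols) to visit
    rng = np.random.default_rng() if rng is None else rng
    K = adj.shape[0]
    # Transition weight to neighbor j: adj[i, j] * exp(node_bias[j]).
    weights = adj * np.exp(node_bias)[None, :]
    # Start node sampled from the softmax of the bias.
    p0 = np.exp(node_bias) / np.exp(node_bias).sum()
    walk = [rng.choice(K, p=p0)]
    for _ in range(length - 1):
        w = weights[walk[-1]]
        walk.append(rng.choice(K, p=w / w.sum()))
    return walk

# Toy usage: a ring graph over K = 8 symbols (every node has neighbors,
# so each transition row is nonzero) and a random encoder bias.
K = 8
adj = np.zeros((K, K))
for i in range(K):
    adj[i, (i + 1) % K] = adj[i, (i - 1) % K] = 1.0
rng = np.random.default_rng(0)
bias = rng.normal(size=K)
print(biased_random_walk(adj, bias, length=5, rng=rng))

Each sampled walk is one "composed representation": an ordered symbol sequence on which the decoder (GPT-2 in the paper) would be conditioned to generate the sentence back.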
