Compositionality decomposed: how do neural networks generalise?

22 August 2019
Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni
CoGe

Papers citing "Compositionality decomposed: how do neural networks generalise?"

21 / 71 papers shown
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN
R. Thomas McCoy, P. Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz
18 Nov 2021 (SyDa)

Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks
Wang Zhu, Peter Shaw, Tal Linzen, Fei Sha
09 Nov 2021

The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization
Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber
14 Oct 2021 (AI4CE)

Survey of Low-Resource Machine Translation
Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, Alexandra Birch
01 Sep 2021 (AIMat)

The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers
Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber
26 Aug 2021 (ViT)

Making Transformers Solve Compositional Tasks
Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, Zachary Kenneth Fisher
09 Aug 2021

Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN
Rahma Chaabouni, Roberto Dessì, Eugene Kharitonov
03 Jul 2021

Improving Compositional Generalization in Classification Tasks via Structure Annotations
Juyong Kim, Pradeep Ravikumar, Joshua Ainslie, Santiago Ontañón
19 Jun 2021 (CoGe)

Grounding Spatio-Temporal Language with Transformers
Tristan Karch, Laetitia Teodorescu, Katja Hofmann, Clément Moulin-Frier, Pierre-Yves Oudeyer
16 Jun 2021 (LM&Ro)

Disentangling Syntax and Semantics in the Brain with Deep Networks
Charlotte Caucheteux, Alexandre Gramfort, J. King
02 Mar 2021

Probing Multimodal Embeddings for Linguistic Properties: the Visual-Semantic Case
Adam Dahlgren Lindström, Suna Bensch, Johanna Björklund, F. Drewes
22 Feb 2021

Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization
Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang
08 Dec 2020 (BDL)

Scaling Laws for Autoregressive Generative Modeling
T. Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, ..., Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, Sam McCandlish
28 Oct 2020

Neural Databases
James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, A. Halevy
14 Oct 2020 (NAI)

Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Róbert Csordás, Sjoerd van Steenkiste, Jürgen Schmidhuber
05 Oct 2020

Systematic Generalization on gSCAN with Language Conditioned Embedding
Tong Gao, Qi Huang, Raymond J. Mooney
11 Sep 2020

Compositional Generalization by Learning Analytical Expressions
Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang
18 Jun 2020 (CoGe, NAI)

A Study of Compositional Generalization in Neural Models
Tim Klinger, D. Adjodah, Vincent Marois, Joshua Joseph, Matthew D. Riemer, Alex Pentland, Murray Campbell
16 Jun 2020 (CoGe, NAI)

Discovering the Compositional Structure of Vector Representations with Role Learning Networks
Paul Soulos, R. Thomas McCoy, Tal Linzen, P. Smolensky
21 Oct 2019 (CoGe)

OpenNMT: Open-Source Toolkit for Neural Machine Translation
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander M. Rush
10 Jan 2017

From Frequency to Meaning: Vector Space Models of Semantics
Peter D. Turney, Patrick Pantel
04 Mar 2010