Universality and individuality in neural dynamics across large populations of recurrent networks

Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo
19 July 2019
arXiv:1907.08549

Papers citing "Universality and individuality in neural dynamics across large populations of recurrent networks"

22 papers shown:

1. Meta-Dynamical State Space Models for Integrative Neural Data Analysis
   Ayesha Vermani, Josue Nassar, Hyungju Jeon, Matthew Dowling, Il Memming Park. 07 Oct 2024

2. Formation of Representations in Neural Networks
   Liu Ziyin, Isaac Chuang, Tomer Galanti, T. Poggio. 03 Oct 2024

3. Back to the Continuous Attractor
   Ábel Ságodi, Guillermo Martín-Sánchez, Piotr Sokól, Il Memming Park. 31 Jul 2024

4. When predict can also explain: few-shot prediction to select better neural latents
   Kabir Dabholkar, Omri Barak. 23 May 2024

5. From Data-Fitting to Discovery: Interpreting the Neural Dynamics of Motor Control through Reinforcement Learning
   Eugene R. Rush, Kaushik Jayaram, J. Humbert. 18 May 2023

6. How good are variational autoencoders at transfer learning?
   Lisa Bonheme, M. Grzes. 21 Apr 2023

7. Analyzing Populations of Neural Networks via Dynamical Model Embedding
   Jordan S. Cotler, Kai Sheng Tai, Felipe Hernández, Blake Elias, David Sussillo. 27 Feb 2023

8. Sources of Richness and Ineffability for Phenomenally Conscious States
   Xu Ji, Eric Elmoznino, George Deane, Axel Constant, G. Dumas, Guillaume Lajoie, Jonathan Simon, Yoshua Bengio. 13 Feb 2023

9. Representational dissimilarity metric spaces for stochastic neural networks
   Lyndon Duong, Jingyang Zhou, Josue Nassar, Jules Berman, Jeroen Olieslagers, Alex H. Williams. 21 Nov 2022

10. Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems
    Manuela Brenner, Florian Hess, Jonas M. Mikhaeil, Leonard Bereska, Zahra Monfared, Po-Chen Kuo, Daniel Durstewitz. 06 Jul 2022

11. How do Variational Autoencoders Learn? Insights from Representational Similarity
    Lisa Bonheme, M. Grzes. 17 May 2022

12. On the Origins of the Block Structure Phenomenon in Neural Network Representations
    Thao Nguyen, M. Raghu, Simon Kornblith. 15 Feb 2022

13. Testing the Tools of Systems Neuroscience on Artificial Neural Networks
    Grace W. Lindsay. 14 Feb 2022

14. Gaussian RBF Centered Kernel Alignment (CKA) in the Large Bandwidth Limit
    S. A. Alvarez. 17 Dec 2021

15. Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems
    Jimmy T.H. Smith, Scott W. Linderman, David Sussillo. 01 Nov 2021

16. Do Vision Transformers See Like Convolutional Neural Networks?
    M. Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy. 19 Aug 2021

17. Reverse engineering learned optimizers reveals known and novel mechanisms
    Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Narain Sohl-Dickstein. 04 Nov 2020

18. Meta-trained agents implement Bayes-optimal agents
    Vladimir Mikulik, Grégoire Delétang, Tom McGrath, Tim Genewein, Miljan Martic, Shane Legg, Pedro A. Ortega. 21 Oct 2020

19. Unfolding recurrence by Green's functions for optimized reservoir computing
    Sandra Nestler, Christian Keup, David Dahmen, M. Gilson, Holger Rauhut, M. Helias. 13 Oct 2020

20. How recurrent networks implement contextual processing in sentiment analysis
    Niru Maheswaranathan, David Sussillo. 17 Apr 2020

21. From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction
    Hidenori Tanaka, Aran Nayebi, Niru Maheswaranathan, Lane T. McIntosh, S. Baccus, Surya Ganguli. 12 Dec 2019

22. Neural Architecture Search with Reinforcement Learning
    Barret Zoph, Quoc V. Le. 05 Nov 2016