ResearchTrend.AI

Insights on representational similarity in neural networks with canonical correlation (arXiv:1806.05759)
Ari S. Morcos, M. Raghu, Samy Bengio
14 June 2018 [DRL]
Papers citing "Insights on representational similarity in neural networks with canonical correlation" (34 of 84 papers shown)
  • Representation Topology Divergence: A Method for Comparing Neural Network Representations
    S. Barannikov, I. Trofimov, Nikita Balabin, Evgeny Burnaev (31 Dec 2021) [3DPC]
  • Does MAML Only Work via Feature Re-use? A Data Centric Perspective
    Brando Miranda, Yu-xiong Wang, Oluwasanmi Koyejo (24 Dec 2021)
  • When Neural Networks Using Different Sensors Create Similar Features
    Hugues Moreau, A. Vassilev, Liming Luke Chen (04 Nov 2021)
  • Context Meta-Reinforcement Learning via Neuromodulation
    Eseoghene Ben-Iwhiwhu, Jeffery Dick, Nicholas A. Ketz, Praveen K. Pilly, Andrea Soltoggio (30 Oct 2021) [OffRL]
  • Hyper-Representations: Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction
    Konstantin Schurholt, Dimche Kostadinov, Damian Borth (28 Oct 2021) [SSL]
  • Don't speak too fast: The impact of data bias on self-supervised speech models
    Yen Meng, Yi-Hui Chou, Andy T. Liu, Hung-yi Lee (15 Oct 2021)
  • Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
    Robert Wolfe, Aylin Caliskan (01 Oct 2021)
  • Do Vision Transformers See Like Convolutional Neural Networks?
    M. Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy (19 Aug 2021) [ViT]
  • Layer-wise Analysis of a Self-supervised Speech Representation Model
    Ankita Pasad, Ju-Chieh Chou, Karen Livescu (10 Jul 2021) [SSL]
  • Deep Learning Through the Lens of Example Difficulty
    R. Baldock, Hartmut Maennel, Behnam Neyshabur (17 Jun 2021)
  • Revisiting Model Stitching to Compare Neural Representations
    Yamini Bansal, Preetum Nakkiran, Boaz Barak (14 Jun 2021) [FedML]
  • An Online Riemannian PCA for Stochastic Canonical Correlation Analysis
    Zihang Meng, Rudrasis Chakraborty, Vikas Singh (08 Jun 2021)
  • Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
    Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, Qingjiang Shi (05 Jun 2021)
  • Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability
    Wei-Tsung Kao, Hung-yi Lee (12 Mar 2021)
  • Aggregative Self-Supervised Feature Learning from a Limited Sample
    Jiuwen Zhu, Yuexiang Li, S. Kevin Zhou (14 Dec 2020) [SSL]
  • Learn to Bind and Grow Neural Structures
    Azhar Shaikh, Nishant Sinha (21 Nov 2020) [CLL]
  • For self-supervised learning, Rationality implies generalization, provably
    Yamini Bansal, Gal Kaplun, Boaz Barak (16 Oct 2020) [OOD, SSL]
  • Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics
    V. Ramasesh, Ethan Dyer, M. Raghu (14 Jul 2020) [CLL]
  • An Investigation of the Weight Space to Monitor the Training Progress of Neural Networks
    Konstantin Schurholt, Damian Borth (18 Jun 2020)
  • High-contrast "gaudy" images improve the training of deep neural network models of visual cortex
    Benjamin R. Cowley, Jonathan W. Pillow (13 Jun 2020)
  • Critical Assessment of Transfer Learning for Medical Image Segmentation with Fully Convolutional Neural Networks
    Davood Karimi, Simon K. Warfield, Ali Gholipour (30 May 2020) [MedIm]
  • How Do You Act? An Empirical Study to Understand Behavior of Deep Reinforcement Learning Agents
    Richard Meyes, Moritz Schneider, Tobias Meisen (07 Apr 2020)
  • Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
    Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen (02 Apr 2020) [AI4CE]
  • Sequential Transfer Machine Learning in Networks: Measuring the Impact of Data and Neural Net Similarity on Transferability
    Robin Hirt, Akash Srivastava, Carlos Berg, Niklas Kühl (29 Mar 2020)
  • A Survey of Deep Learning for Scientific Discovery
    M. Raghu, Erica Schmidt (26 Mar 2020) [OOD, AI4CE]
  • Similarity of Neural Networks with Gradients
    Shuai Tang, Wesley J. Maddox, Charlie Dickens, Tom Diethe, Andreas C. Damianou (25 Mar 2020)
  • AL2: Progressive Activation Loss for Learning General Representations in Classification Neural Networks
    Majed El Helou, Frederike Dumbgen, Sabine Süsstrunk (07 Mar 2020) [CLL, AI4CE]
  • Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
    Matthew L. Leavitt, Ari S. Morcos (03 Mar 2020)
  • Robust Training with Ensemble Consensus
    Jisoo Lee, Sae-Young Chung (22 Oct 2019) [NoLa]
  • Modelling the influence of data structure on learning in neural networks: the hidden manifold model
    Sebastian Goldt, M. Mézard, Florent Krzakala, Lenka Zdeborová (25 Sep 2019) [BDL]
  • Investigating Multilingual NMT Representations at Scale
    Sneha Kudugunta, Ankur Bapna, Isaac Caswell, N. Arivazhagan, Orhan Firat (05 Sep 2019) [LRM]
  • Similarity of Neural Network Representations Revisited
    Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey E. Hinton (01 May 2019)
  • Shared Representational Geometry Across Neural Networks
    Qihong Lu, Po-Hsuan Chen, Jonathan W. Pillow, Peter J. Ramadge, K. A. Norman, Uri Hasson (28 Nov 2018) [OOD]
  • How deep is deep enough? -- Quantifying class separability in the hidden layers of deep neural networks
    Junhong Lin, C. Metzner, Andreas K. Maier, V. Cevher, Holger Schulze, Patrick Krauss (05 Nov 2018)