ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Neural Redshift: Random Networks are not Random Functions
arXiv:2403.02241 · 4 March 2024
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad

Papers citing "Neural Redshift: Random Networks are not Random Functions" (21 papers shown)

Do We Always Need the Simplicity Bias? Looking for Optimal Inductive Biases in the Wild
Damien Teney, Liangze Jiang, Florin Gogianu, Ehsan Abbasnejad
13 Mar 2025

Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization
Shunxin Wang, Raymond N. J. Veldhuis, N. Strisciuglio
05 Mar 2025 · VLM

From Language to Cognition: How LLMs Outgrow the Human Language Network
Badr AlKhamissi, Greta Tuckute, Yingtian Tang, Taha Binhuraib, Antoine Bosselut, Martin Schrimpf
03 Mar 2025 · ALM

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models
Yibo Zhong, Haoxiang Jiang, Lincan Li, Ryumei Nakada, Tianci Liu, Linjun Zhang, Huaxiu Yao, Haoyu Wang
24 Feb 2025

Exploring Kolmogorov-Arnold Networks for Interpretable Time Series Classification
Irina Barašin, Blaž Bertalanič, M. Mohorčič, Carolina Fortuna
22 Nov 2024 · AI4TS

Is network fragmentation a useful complexity measure?
Coenraad Mouton, Randle Rabe, Daniël G. Haasbroek, Marthinus W. Theunissen, Hermanus L. Potgieter, Marelie Hattingh Davel
07 Nov 2024

Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture
Sajad Movahedi, Antonio Orvieto, Seyed-Mohsen Moosavi-Dezfooli
15 Oct 2024 · AAML, AI4CE

SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, K. Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno
13 Oct 2024 · OffRL

Let the Quantum Creep In: Designing Quantum Neural Network Models by Gradually Swapping Out Classical Components
Peiyong Wang, Casey R. Myers, Lloyd C. L. Hollenberg, U. Parampalli
26 Sep 2024

Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network
Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, Martin Schrimpf
21 Jun 2024

Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs
Mustafa Shukor, Matthieu Cord
26 May 2024

How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers
G. Buzaglo, I. Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry
09 Feb 2024

From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural Networks with Affine Optimal Transport
Quentin Bouniot, I. Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski
17 Oct 2023

Model-agnostic Measure of Generalization Difficulty
Akhilan Boopathy, Kevin Liu, Jaedong Hwang, Shu Ge, Asaad Mohammedsaleh, Ila Fiete
01 May 2023

Why neural networks find simple solutions: the many regularizers of geometric complexity
Benoit Dherin, Michael Munn, M. Rosca, David Barrett
27 Sep 2022

Neural Networks and the Chomsky Hierarchy
Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, L. Wenliang, ..., Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, Pedro A. Ortega
05 Jul 2022 · UQCV

Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective
Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun
06 Oct 2021 · OOD

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
29 Sep 2021

Sensitivity as a Complexity Measure for Sequence Classification Tasks
Michael Hahn, Dan Jurafsky, Richard Futrell
21 Apr 2021

A Theoretical Analysis of the Repetition Problem in Text Generation
Z. Fu, Wai Lam, Anthony Man-Cho So, Bei Shi
29 Dec 2020

Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation
Elisa Oostwal, Michiel Straat, Michael Biehl
16 Oct 2019 · MLT
