ResearchTrend.AI

Principled Training of Neural Networks with Direct Feedback Alignment

11 June 2019
Julien Launay
Iacopo Poli
Florent Krzakala

Papers citing "Principled Training of Neural Networks with Direct Feedback Alignment"

22 papers shown
Can Local Representation Alignment RNNs Solve Temporal Tasks?
Nikolay Manchev, Luis C. Garcia-Peraza-Herrera
18 Apr 2025
Training Large Neural Networks With Low-Dimensional Error Feedback
Maher Hanut, Jonathan Kadmon
27 Feb 2025
Training Spiking Neural Networks via Augmented Direct Feedback Alignment
Yongbo Zhang, Katsuma Inoue, M. Nakajima, Toshikazu Hashimoto, Yasuo Kuniyoshi, Kohei Nakajima
12 Sep 2024
Forward Learning of Graph Neural Networks
Namyong Park, Xing Wang, Antoine Simoulin, Shuai Yang, Grey Yang, Ryan Rossi, Puja Trivedi, Nesreen K. Ahmed
16 Mar 2024
Random Feedback Alignment Algorithms to train Neural Networks: Why do they Align?
Dominique F. Chu, Florian Bacho
04 Jun 2023
Learning with augmented target information: An alternative theory of Feedback Alignment
Huzi Cheng, Joshua W. Brown
03 Apr 2023
Biologically Plausible Learning on Neuromorphic Hardware Architectures
Christopher Wolters, Brady Taylor, Edward Hanson, Xiaoxuan Yang, Ulf Schlichtmann, Yiran Chen
29 Dec 2022
Low-Variance Forward Gradients using Direct Feedback Alignment and Momentum
Florian Bacho, Dominique F. Chu
14 Dec 2022
Scaling Laws Beyond Backpropagation
Matthew J. Filipovich, Alessandro Cappelli, Daniel Hesslow, Julien Launay
26 Oct 2022
Towards Scaling Difference Target Propagation by Learning Backprop Targets
M. Ernoult, Fabrice Normandin, A. Moudgil, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake A. Richards, Yoshua Bengio
31 Jan 2022
Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks
M. Girotti, Ioannis Mitliagkas, Gauthier Gidel
20 Oct 2021
Applications of the Free Energy Principle to Machine Learning and Neuroscience
Beren Millidge
30 Jun 2021
Credit Assignment in Neural Networks through Deep Feedback Control
Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordónez, Pau Vilimelis Aceituno, João Sacramento, Benjamin Grewe
15 Jun 2021
Credit Assignment Through Broadcasting a Global Error Vector
David G. Clark, L. F. Abbott, SueYeon Chung
08 Jun 2021
Bottom-up and top-down approaches for the design of neuromorphic processing systems: Tradeoffs and synergies between natural and artificial intelligence
Charlotte Frenkel, D. Bol, Giacomo Indiveri
02 Jun 2021
Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
Shiyu Duan, José C. Príncipe
09 Jan 2021
Adversarial Robustness by Design through Analog Computing and Synthetic Gradients
Alessandro Cappelli, Ruben Ohana, Julien Launay, Laurent Meunier, Iacopo Poli, Florent Krzakala
06 Jan 2021
Align, then memorise: the dynamics of learning with feedback alignment
Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt
24 Nov 2020
Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm
Beren Millidge, Alexander Tschantz, A. Seth, Christopher L. Buckley
13 Oct 2020
A Theoretical Framework for Target Propagation
Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin Grewe
25 Jun 2020
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala
23 Jun 2020
Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks
Charlotte Frenkel, M. Lefebvre, D. Bol
03 Sep 2019