Direct Feedback Alignment Provides Learning in Deep Neural Networks
Arild Nøkland · arXiv:1609.01596 · 6 September 2016

Papers citing "Direct Feedback Alignment Provides Learning in Deep Neural Networks" (50 of 248 papers shown)
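For context, direct feedback alignment (DFA) trains each hidden layer with the output error projected through a fixed random feedback matrix, rather than backpropagating it through the transposed forward weights. A minimal NumPy sketch of this update rule; the network sizes, toy data, and learning rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of Direct Feedback Alignment (DFA) on a two-hidden-layer MLP.
rng = np.random.default_rng(0)

def dtanh(a):
    return 1.0 - np.tanh(a) ** 2

# Forward weights (trained).
W1 = rng.normal(0.0, 0.1, (20, 10))   # input -> hidden 1
W2 = rng.normal(0.0, 0.1, (20, 20))   # hidden 1 -> hidden 2
W3 = rng.normal(0.0, 0.1, (1, 20))    # hidden 2 -> output
# Fixed random feedback matrices (never updated) that carry the output
# error straight to each hidden layer -- the core idea of DFA.
B1 = rng.normal(0.0, 0.1, (20, 1))
B2 = rng.normal(0.0, 0.1, (20, 1))

x = rng.normal(size=(10, 32))              # toy batch of 32 inputs
y = np.sin(x.sum(axis=0, keepdims=True))   # toy regression targets

lr, losses = 0.05, []
for _ in range(200):
    # Forward pass.
    a1 = W1 @ x;  h1 = np.tanh(a1)
    a2 = W2 @ h1; h2 = np.tanh(a2)
    e = W3 @ h2 - y                        # output error
    losses.append(float(np.mean(e ** 2)))

    # DFA: hidden-layer deltas use fixed random projections of the output
    # error, not the transposed forward weights that backprop would use.
    d2 = (B2 @ e) * dtanh(a2)
    d1 = (B1 @ e) * dtanh(a1)

    n = x.shape[1]
    W3 -= lr * (e @ h2.T) / n
    W2 -= lr * (d2 @ h1.T) / n
    W1 -= lr * (d1 @ x.T) / n
```

Only the B matrices differ from standard backpropagation; the forward pass and the outer-product weight updates are unchanged, which is why DFA decouples the layer updates from one another.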
- 02 Jun 2021 · Bottom-up and top-down approaches for the design of neuromorphic processing systems: Tradeoffs and synergies between natural and artificial intelligence · Charlotte Frenkel, D. Bol, Giacomo Indiveri
- 08 May 2021 · In-Hardware Learning of Multilayer Spiking Neural Networks on a Neuromorphic Processor · Amar Shrestha, Haowen Fang, D. Rider, Zaidao Mei, Qinru Qiu
- 23 Apr 2021 · Learning in Deep Neural Networks Using a Biologically Inspired Optimizer · Giorgia Dellaferrera, Stanisław Woźniak, Giacomo Indiveri, A. Pantazi, E. Eleftheriou
- 19 Apr 2021 · Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference · Louis Annabi, Alexandre Pitti, M. Quoy
- 10 Apr 2021 · Meta-Learning Bidirectional Update Rules · Mark Sandler, Max Vladymyrov, A. Zhmoginov, Nolan Miller, Andrew Jackson, T. Madams, Blaise Agüera y Arcas
- 09 Mar 2021 · A Gradient Estimator for Time-Varying Electrical Networks with Non-Linear Dissipation · Jack D. Kendall
- 08 Mar 2021 · Reverse Differentiation via Predictive Coding · Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz, Zhenghua Xu [PINN]
- 04 Mar 2021 · Efficient Training Convolutional Neural Networks on Edge Devices with Gradient-pruned Sign-symmetric Feedback Alignment · Ziyang Hong, C. Yue
- 23 Feb 2021 · Gradient-adjusted Incremental Target Propagation Provides Effective Credit Assignment in Deep Neural Networks · Sander Dalm, Nasir Ahmad, L. Ambrogioni, Marcel van Gerven
- 26 Jan 2021 · Revisiting Locally Supervised Learning: an Alternative to End-to-end Training · Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang
- 09 Jan 2021 · Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods · Shiyu Duan, José C. Príncipe [MQ]
- 06 Jan 2021 · Adversarial Robustness by Design through Analog Computing and Synthetic Gradients · Alessandro Cappelli, Ruben Ohana, Julien Launay, Laurent Meunier, Iacopo Poli, Florent Krzakala [AAML]
- 21 Dec 2020 · Training DNNs in O(1) memory with MEM-DFA using Random Matrices · Tien Chu, Kamil Mykitiuk, Miron Szewczyk, Adam Wiktor, Z. Wojna
- 07 Dec 2020 · The Neural Coding Framework for Learning Generative Models · Alexander Ororbia, Daniel Kifer [GAN]
- 24 Nov 2020 · Align, then memorise: the dynamics of learning with feedback alignment · Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt
- 24 Nov 2020 · A More Biologically Plausible Local Learning Rule for ANNs · Shashi Kant Gupta [AAML]
- 21 Nov 2020 · On-Chip Error-triggered Learning of Multi-layer Memristive Spiking Neural Networks · Melika Payvand, M. Fouda, Fadi J. Kurdahi, A. Eltawil, Emre Neftci
- 22 Oct 2020 · Identifying Learning Rules From Neural Network Observables · Aran Nayebi, S. Srivastava, Surya Ganguli, Daniel L. K. Yamins
- 16 Oct 2020 · Local plasticity rules can learn deep representations using self-supervised contrastive predictions · Bernd Illing, Jean-Paul Ventura, G. Bellec, W. Gerstner [SSL, DRL]
- 13 Oct 2020 · Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm · Beren Millidge, Alexander Tschantz, A. Seth, Christopher L. Buckley
- 13 Oct 2020 · Deep Reservoir Networks with Learned Hidden Reservoir Weights using Direct Feedback Alignment · Matthew Evanusa, Cornelia Fermüller, Yiannis Aloimonos [AI4TS]
- 08 Oct 2020 · Differentially Private Deep Learning with Direct Feedback Alignment · Jaewoo Lee, Daniel Kifer [FedML]
- 02 Oct 2020 · Relaxing the Constraints on Predictive Coding Models · Beren Millidge, Alexander Tschantz, A. Seth, Christopher L. Buckley
- 21 Sep 2020 · Feed-Forward On-Edge Fine-tuning Using Static Synthetic Gradient Modules · R. Neven, Marian Verhelst, Tinne Tuytelaars, Toon Goedemé
- 11 Sep 2020 · Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain · Beren Millidge, Alexander Tschantz, A. Seth, Christopher L. Buckley [ODL]
- 04 Aug 2020 · LoCo: Local Contrastive Representation Learning · Yuwen Xiong, Mengye Ren, R. Urtasun [SSL, DRL]
- 24 Jul 2020 · Online Spatio-Temporal Learning in Deep Neural Networks · Thomas Bohnstingl, Stanisław Woźniak, Wolfgang Maass, A. Pantazi, E. Eleftheriou
- 10 Jul 2020 · Biological credit assignment through dynamic inversion of feedforward networks · William F. Podlaski, C. Machens
- 02 Jul 2020 · MPLP: Learning a Message Passing Learning Protocol · E. Randazzo, Eyvind Niklasson, A. Mordvintsev
- 25 Jun 2020 · A Theoretical Framework for Target Propagation · Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin Grewe [AAML]
- 23 Jun 2020 · Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures · Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala
- 23 Jun 2020 · Extension of Direct Feedback Alignment to Convolutional and Recurrent Neural Network for Bio-plausible Deep Learning · Donghyeon Han, Gwangtae Park, Junha Ryu, H. Yoo [3DV]
- 16 Jun 2020 · Learning to Learn with Feedback and Local Plasticity · Jack W. Lindsey, Ashok Litwin-Kumar [CLL]
- 12 Jun 2020 · Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks · Roman Pogodin, P. Latham
- 02 Jun 2020 · Light-in-the-loop: using a photonics co-processor for scalable training of neural networks · Julien Launay, Iacopo Poli, Kilian Muller, I. Carron, L. Daudet, Florent Krzakala, S. Gigan
- 13 May 2020 · A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning with Spike-Based Retinas · Charlotte Frenkel, J. Legat, D. Bol
- 05 May 2020 · Towards On-Chip Bayesian Neuromorphic Learning · Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia
- 27 Apr 2020 · Why should we add early exits to neural networks? · Simone Scardapane, M. Scarpiniti, E. Baccarelli, A. Uncini
- 09 Mar 2020 · Overcoming the Weight Transport Problem via Spike-Timing-Dependent Weight Inference · Nasir Ahmad, L. Ambrogioni, Marcel van Gerven
- 28 Feb 2020 · Two Routes to Scalable Credit Assignment without Weight Symmetry · D. Kunin, Aran Nayebi, Javier Sagastuy-Breña, Surya Ganguli, Jonathan M. Bloom, Daniel L. K. Yamins
- 24 Feb 2020 · Contrastive Similarity Matching for Supervised Learning · Shanshan Qin, N. Mudur, Cengiz Pehlevan [SSL, DRL]
- 10 Feb 2020 · Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment · Alexander Ororbia, A. Mali, Daniel Kifer, C. Lee Giles
- 17 Jan 2020 · Sideways: Depth-Parallel Training of Video Models · Mateusz Malinowski, G. Swirszcz, João Carreira, Viorica Patraucean [MDE]
- 08 Jan 2020 · A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design · Yusuke Sakemi, K. Morino, Takashi Morie, Kazuyuki Aihara
- 04 Dec 2019 · Are skip connections necessary for biologically plausible learning rules? · Daniel Jiwoong Im, Rutuja Patil, K. Branson
- 15 Nov 2019 · Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks · Thomas Mesnard, Gaetan Vignoud, João Sacramento, Walter Senn, Yoshua Bengio
- 11 Oct 2019 · Structured and Deep Similarity Matching via Structured and Deep Hebbian Networks · D. Obeid, Hugo Ramambason, Cengiz Pehlevan [FedML]
- 10 Sep 2019 · A Survey of Techniques All Classifiers Can Learn from Deep Networks: Models, Optimizations, and Regularization · Alireza Ghods, D. Cook
- 05 Sep 2019 · On the Acceleration of Deep Learning Model Parallelism with Staleness · An Xu, Zhouyuan Huo, Heng-Chiao Huang
- 03 Sep 2019 · Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks · Charlotte Frenkel, M. Lefebvre, D. Bol