Decoupled Neural Interfaces using Synthetic Gradients

International Conference on Machine Learning (ICML), 2016
18 August 2016
Max Jaderberg
Wojciech M. Czarnecki
Simon Osindero
Oriol Vinyals
Alex Graves
David Silver
Koray Kavukcuoglu

Papers citing "Decoupled Neural Interfaces using Synthetic Gradients"

Showing 50 of 224 citing papers.
Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
IEEE Computational Intelligence Magazine (IEEE CIM), 2021
Shiyu Duan
José C. Príncipe
MQ
420
5
0
09 Jan 2021
Advances in Electron Microscopy with Deep Learning
Jeffrey M. Ede
568
3
0
04 Jan 2021
Differentiable Programming à la Moreau
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Vincent Roulet
Zaïd Harchaoui
195
5
0
31 Dec 2020
Reservoir Transformers
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Sheng Shen
Alexei Baevski
Ari S. Morcos
Kurt Keutzer
Michael Auli
Douwe Kiela
365
22
0
30 Dec 2020
Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment
Julien Launay
Iacopo Poli
Kilian Muller
Gustave Pariente
I. Carron
L. Daudet
Florent Krzakala
S. Gigan
MoE
176
20
0
11 Dec 2020
Parallel Training of Deep Networks with Local Updates
Michael Laskin
Luke Metz
Seth Nabarro
Mark Saroufim
Badreddine Noune
Carlo Luschi
Jascha Narain Sohl-Dickstein
Pieter Abbeel
FedML
216
30
0
07 Dec 2020
The Neural Coding Framework for Learning Generative Models
Nature Communications (Nat Commun), 2020
Alexander Ororbia
Daniel Kifer
GAN
444
75
0
07 Dec 2020
Accumulated Decoupled Learning: Mitigating Gradient Staleness in Inter-Layer Model Parallelization
Huiping Zhuang
Zhiping Lin
Kar-Ann Toh
220
4
0
03 Dec 2020
On-Chip Error-triggered Learning of Multi-layer Memristive Spiking Neural Networks
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JESTCS), 2020
Melika Payvand
M. Fouda
Fadi J. Kurdahi
A. Eltawil
Emre Neftci
141
31
0
21 Nov 2020
Self Normalizing Flows
International Conference on Machine Learning (ICML), 2020
Thomas Anderson Keller
Jorn W. T. Peters
P. Jaini
Emiel Hoogeboom
Patrick Forré
Max Welling
220
14
0
14 Nov 2020
Fast & Slow Learning: Incorporating Synthetic Gradients in Neural Memory Controllers
Tharindu Fernando
Akila Pemasiri
Sridha Sridharan
Clinton Fookes
140
0
0
10 Nov 2020
From Eye-blinks to State Construction: Diagnostic Benchmarks for Online Representation Learning
Banafsheh Rafiee
Zaheer Abbas
Sina Ghiassian
Raksha Kumaraswamy
R. Sutton
Elliot A. Ludvig
Adam White
OffRL
272
18
0
09 Nov 2020
SplitEasy: A Practical Approach for Training ML models on Mobile Devices
Kamalesh Palanisamy
Vivek Khimani
Moin Hussain Moti
Dimitris Chatzopoulos
299
24
0
09 Nov 2020
Modular-Relatedness for Continual Learning
International Symposium on Intelligent Data Analysis (IDA), 2020
Ammar Shaker
Shujian Yu
Francesco Alesiani
KELM, CLL
150
2
0
02 Nov 2020
Why Layer-Wise Learning is Hard to Scale-up and a Possible Solution via Accelerated Downsampling
Wenchi Ma
Miao Yu
Kaidong Li
Guanghui Wang
215
6
0
15 Oct 2020
Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout
Zhao Chen
Jiquan Ngiam
Yanping Huang
Thang Luong
Henrik Kretzschmar
Yuning Chai
Dragomir Anguelov
152
261
0
14 Oct 2020
Interlocking Backpropagation: Improving depthwise model-parallelism
Journal of Machine Learning Research (JMLR), 2020
Aidan Gomez
Oscar Key
Kuba Perlin
Stephen Gou
Nick Frosst
J. Dean
Y. Gal
232
22
0
08 Oct 2020
Feed-Forward On-Edge Fine-tuning Using Static Synthetic Gradient Modules
R. Neven
Marian Verhelst
Tinne Tuytelaars
Toon Goedemé
159
1
0
21 Sep 2020
Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
788
88
0
17 Sep 2020
Learning Functors using Gradient Descent
Bruno Gavranovic
66
5
0
15 Sep 2020
A Practical Layer-Parallel Training Algorithm for Residual Networks
Qi Sun
Hexin Dong
Zewei Chen
Weizhen Dian
Jiacheng Sun
Yitong Sun
Zhenguo Li
Bin Dong
ODL
233
2
0
03 Sep 2020
LoCo: Local Contrastive Representation Learning
Yuwen Xiong
Mengye Ren
R. Urtasun
SSL, DRL
218
75
0
04 Aug 2020
Universality of Gradient Descent Neural Network Training
Neural Networks (NN), 2020
G. Welper
121
11
0
27 Jul 2020
Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner
European Conference on Computer Vision (ECCV), 2020
Eugene Lee
E. Chen
Chen-Yi Lee
152
189
0
14 Jul 2020
A Theoretical Framework for Target Propagation
Alexander Meulemans
Francesco S. Carzaniga
Johan A. K. Suykens
João Sacramento
Benjamin Grewe
AAML
236
91
0
25 Jun 2020
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Neural Information Processing Systems (NeurIPS), 2020
Julien Launay
Iacopo Poli
François Boniface
Florent Krzakala
228
72
0
23 Jun 2020
Extension of Direct Feedback Alignment to Convolutional and Recurrent Neural Network for Bio-plausible Deep Learning
Donghyeon Han
Gwangtae Park
Junha Ryu
H. Yoo
3DV
74
6
0
23 Jun 2020
Parameter-Based Value Functions
Francesco Faccio
Louis Kirsch
Jürgen Schmidhuber
OffRL
280
28
0
16 Jun 2020
Interaction Networks: Using a Reinforcement Learner to train other Machine Learning algorithms
Florian Dietz
56
1
0
15 Jun 2020
On the Impossibility of Global Convergence in Multi-Loss Optimization
International Conference on Learning Representations (ICLR), 2020
Alistair Letcher
212
33
0
26 May 2020
Modularizing Deep Learning via Pairwise Learning With Kernels
Shiyu Duan
Shujian Yu
José C. Príncipe
MoMe
151
21
0
12 May 2020
Deep Learning: Our Miraculous Year 1990-1991
J. Schmidhuber
3DGS, MedIm
181
6
0
12 May 2020
Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
International Conference on Learning Representations (ICLR), 2020
S. Hu
Pablo G. Moreno
Yanghua Xiao
Xin Shen
G. Obozinski
Neil D. Lawrence
Andreas C. Damianou
BDL
185
135
0
27 Apr 2020
Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis
Computer Vision and Pattern Recognition (CVPR), 2020
Jogendra Nath Kundu
Siddharth Seth
Varun Jampani
M. Rakesh
R. Venkatesh Babu
Anirban Chakraborty
SSL, 3DH
180
83
0
09 Apr 2020
Policy Evaluation Networks
J. Harb
Tom Schaul
Doina Precup
Pierre-Luc Bacon
OffRL
138
37
0
26 Feb 2020
Bounding the expected run-time of nonconvex optimization with early stopping
Conference on Uncertainty in Artificial Intelligence (UAI), 2020
Thomas Flynn
K. Yu
A. Malik
Nicolas D'Imperio
Shinjae Yoo
187
2
0
20 Feb 2020
Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
Neural Information Processing Systems (NeurIPS), 2020
Max Ryabinin
Anton I. Gusev
FedML
271
57
0
10 Feb 2020
Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment
Alexander Ororbia
A. Mali
Daniel Kifer
C. Lee Giles
251
2
0
10 Feb 2020
Sideways: Depth-Parallel Training of Video Models
Computer Vision and Pattern Recognition (CVPR), 2020
Mateusz Malinowski
G. Swirszcz
João Carreira
Viorica Patraucean
MDE
289
15
0
17 Jan 2020
Questions to Guide the Future of Artificial Intelligence Research
J. Ott
144
3
0
21 Dec 2019
Advances and Open Problems in Federated Learning
Peter Kairouz
H. B. McMahan
Brendan Avent
A. Bellet
M. Bennis
...
Zheng Xu
Qiang Yang
Felix X. Yu
Han Yu
Sen Zhao
FedML, AI4CE
519
7,324
0
10 Dec 2019
Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks
Thomas Mesnard
Gaetan Vignoud
João Sacramento
Walter Senn
Yoshua Bengio
115
7
0
15 Nov 2019
Label-Conditioned Next-Frame Video Generation with Neural Flows
Sergey Tarasenko
VGen
85
1
0
16 Oct 2019
Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses
Asier Mujika
Felix Weissenberger
Angelika Steger
133
0
0
11 Oct 2019
Learning to Remember from a Multi-Task Teacher
Yuwen Xiong
Mengye Ren
R. Urtasun
CLL, KELM, OOD
146
4
0
10 Oct 2019
Meta-Learning Deep Energy-Based Memory Models
International Conference on Learning Representations (ICLR), 2019
Sergey Bartunov
Jack W. Rae
Simon Osindero
Timothy Lillicrap
298
35
0
07 Oct 2019
Gated Linear Networks
AAAI Conference on Artificial Intelligence (AAAI), 2019
William H. Guss
Tor Lattimore
David Budden
Avishkar Bhoopchand
Christopher Mattern
...
Ruslan Salakhutdinov
Jianan Wang
Peter Toth
Simon Schmitt
Marcus Hutter
AI4CE
193
44
0
30 Sep 2019
Ouroboros: On Accelerating Training of Transformer-Based Language Models
Neural Information Processing Systems (NeurIPS), 2019
Qian Yang
Zhouyuan Huo
Wenlin Wang
Heng-Chiao Huang
Lawrence Carin
126
9
0
14 Sep 2019
On the Acceleration of Deep Learning Model Parallelism with Staleness
Computer Vision and Pattern Recognition (CVPR), 2019
An Xu
Zhouyuan Huo
Heng-Chiao Huang
133
40
0
05 Sep 2019
Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks
Charlotte Frenkel
M. Lefebvre
D. Bol
199
23
0
03 Sep 2019