ResearchTrend.AI
Understanding Synthetic Gradients and Decoupled Neural Interfaces

1 March 2017
Wojciech M. Czarnecki
G. Swirszcz
Max Jaderberg
Simon Osindero
Oriol Vinyals
Koray Kavukcuoglu
Papers citing "Understanding Synthetic Gradients and Decoupled Neural Interfaces"

15 / 15 papers shown
  1. Deep Incubation: Training Large Models by Divide-and-Conquering. Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang · VLM · 08 Dec 2022
  2. Block-wise Training of Residual Networks via the Minimizing Movement Scheme. Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari · 03 Oct 2022
  3. Hebbian Deep Learning Without Feedback. Adrien Journé, Hector Garcia Rodriguez, Qinghai Guo, Timoleon Moraitis · AAML · 23 Sep 2022
  4. Biologically Plausible Training of Deep Neural Networks Using a Top-down Credit Assignment Network. Jian-Hui Chen, Cheng-Lin Liu, Zuoren Wang · 01 Aug 2022
  5. A Taxonomy of Recurrent Learning Rules. Guillermo Martín-Sánchez, Sander M. Bohté, S. Otte · 23 Jul 2022
  6. Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass. Giorgia Dellaferrera, Gabriel Kreiman · 27 Jan 2022
  7. Target Propagation via Regularized Inversion. Vincent Roulet, Zaïd Harchaoui · BDL, AAML · 02 Dec 2021
  8. On Training Implicit Models. Zhengyang Geng, Xinyu Zhang, Shaojie Bai, Yisen Wang, Zhouchen Lin · 09 Nov 2021
  9. SoftHebb: Bayesian Inference in Unsupervised Hebbian Soft Winner-Take-All Networks. Timoleon Moraitis, Dmitry Toichkin, Adrien Journé, Yansong Chua, Qinghai Guo · AAML, BDL · 12 Jul 2021
  10. Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods. Shiyu Duan, José C. Príncipe · MQ · 09 Jan 2021
  11. Reservoir Transformers. Sheng Shen, Alexei Baevski, Ari S. Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela · 30 Dec 2020
  12. A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks. O. Marschall, Kyunghyun Cho, Cristina Savin · FedML · 05 Jul 2019
  13. Low-pass Recurrent Neural Networks: A memory architecture for longer-term correlation discovery. T. Stepleton, Razvan Pascanu, Will Dabney, Siddhant M. Jayakumar, Hubert Soyer, Rémi Munos · 13 May 2018
  14. Decoupled Parallel Backpropagation with Convergence Guarantee. Zhouyuan Huo, Bin Gu, Qian Yang, Heng-Chiao Huang · 27 Apr 2018
  15. Deep Reinforcement Learning: An Overview. Yuxi Li · OffRL, VLM · 25 Jan 2017