ResearchTrend.AI
A Theoretical Framework for Target Propagation

25 June 2020
Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin Grewe

Papers citing "A Theoretical Framework for Target Propagation"

21 / 21 papers shown
Tight Stability, Convergence, and Robustness Bounds for Predictive Coding Networks
A. Mali, Tommaso Salvatori, Alexander Ororbia
07 Oct 2024
Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule
Xinhao Fan, S. P. Mysore
23 May 2024
Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply
Chengting Yu, Fengzhao Zhang, Hanzhi Ma, Aili Wang, Er-ping Li
12 Dec 2023
Improving equilibrium propagation without weight symmetry through Jacobian homeostasis
Axel Laborieux, Friedemann Zenke
05 Sep 2023
Biologically-Motivated Learning Model for Instructed Visual Processing
R. Abel, S. Ullman
04 Jun 2023
Block-local learning with probabilistic latent representations
David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, Anand Subramoney
24 May 2023
Dual Propagation: Accelerating Contrastive Hebbian Learning with Dyadic Neurons
R. Høier, D. Staudt, Christopher Zach
02 Feb 2023
Predictive Coding beyond Gaussian Distributions
Luca Pinchetti, Tommaso Salvatori, Yordan Yordanov, Beren Millidge, Yuhang Song, Thomas Lukasiewicz
07 Nov 2022
ATLAS: Universal Function Approximator for Memory Retention
H. V. Deventer, Anna Sergeevna Bosman
10 Aug 2022
A Theoretical Framework for Inference and Learning in Predictive Coding Networks
Beren Millidge, Yuhang Song, Tommaso Salvatori, Thomas Lukasiewicz, Rafal Bogacz
21 Jul 2022
Gradients without Backpropagation
A. G. Baydin, Barak A. Pearlmutter, Don Syme, Frank D. Wood, Philip H. S. Torr
17 Feb 2022
Towards Scaling Difference Target Propagation by Learning Backprop Targets
M. Ernoult, Fabrice Normandin, A. Moudgil, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake A. Richards, Yoshua Bengio
31 Jan 2022
Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
Frederik Benzing
28 Jan 2022
Target Propagation via Regularized Inversion
Vincent Roulet, Zaïd Harchaoui
02 Dec 2021
On Training Implicit Models
Zhengyang Geng, Xinyu Zhang, Shaojie Bai, Yisen Wang, Zhouchen Lin
09 Nov 2021
Applications of the Free Energy Principle to Machine Learning and Neuroscience
Beren Millidge
30 Jun 2021
How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
Akhilan Boopathy, Ila Fiete
15 Jun 2021
Credit Assignment in Neural Networks through Deep Feedback Control
Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordónez, Pau Vilimelis Aceituno, João Sacramento, Benjamin Grewe
15 Jun 2021
Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
Shiyu Duan, José C. Príncipe
09 Jan 2021
Differentiable Programming à la Moreau
Vincent Roulet, Zaïd Harchaoui
31 Dec 2020
Self Normalizing Flows
Thomas Anderson Keller, Jorn W. T. Peters, P. Jaini, Emiel Hoogeboom, Patrick Forré, Max Welling
14 Nov 2020