
Gradient target propagation

19 October 2018
T. S. Farias
Jonas Maziero
arXiv:1810.09284 (abs) · PDF · HTML · GitHub (3★)
Abstract

We report a learning rule for neural networks that computes how much each neuron should contribute to minimizing a given cost function, via the estimation of its target value. Through theoretical analysis, we show that this learning rule contains backpropagation, Hebbian learning, and additional terms. We also give a general technique for weight initialization. Our results are at least as good as those obtained with backpropagation. The neural networks are trained and tested on three datasets: MNIST, Fashion-MNIST, and CIFAR-10. The associated code is available at https://github.com/tiago939/target.
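The abstract describes a rule that assigns each neuron a target value and adjusts weights locally toward that target. The listing below is a minimal, hypothetical NumPy sketch of this target-propagation idea on a toy two-layer network; it is not the paper's exact learning rule or the authors' released code, and the network sizes, the target-assignment step, and the learning rate are all assumptions made for illustration.

# Minimal, illustrative sketch of target-propagation-style training on a toy
# two-layer network. NOT the paper's exact rule: the target-assignment step,
# layer sizes, and learning rate are assumptions made for this example only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 8-dimensional inputs, 2-class one-hot labels.
X = rng.normal(size=(32, 8))
Y = np.eye(2)[rng.integers(0, 2, size=32)]

# Two-layer network with small random weights.
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))
lr = 0.1

for epoch in range(200):
    # Forward pass.
    h = sigmoid(X @ W1)   # hidden activations
    y = sigmoid(h @ W2)   # output activations

    # The output target is the label itself; the hidden target nudges the
    # hidden activation in the direction that reduces the output error
    # (a common target-propagation heuristic, assumed here).
    t_out = Y
    t_hidden = h + (t_out - y) @ W2.T

    # Each layer updates its weights locally so its activation moves
    # toward its own target (a layer-wise delta-rule-style step).
    W2 += lr * h.T @ ((t_out - y) * y * (1 - y)) / len(X)
    W1 += lr * X.T @ ((t_hidden - h) * h * (1 - h)) / len(X)

loss = np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2)
print(f"final mean squared error: {loss:.4f}")

Running the script prints the final mean squared error on the toy data; the point is only to show the shape of a target-based, layer-local update, not to reproduce the paper's results.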
