Learning by the F-adjoint

8 July 2024 · arXiv:2407.11049
Ahmed Boughammoura
Abstract

A recent paper by Boughammoura (2023) describes the back-propagation algorithm in terms of an alternative formulation called the F-adjoint method. In particular, with the F-adjoint algorithm the computation of the loss gradient with respect to each weight in the network is straightforward. In this work, we develop and investigate this theoretical framework to improve supervised learning algorithms for feed-forward neural networks. Our main result is that, by introducing a neural dynamical model combined with the gradient descent algorithm, we derive an equilibrium F-adjoint process that yields a local learning rule for deep feed-forward networks. Experimental results on the MNIST and Fashion-MNIST datasets demonstrate that the proposed approach provides a significant improvement over the standard back-propagation training procedure.
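For context, the baseline the abstract refers to is standard back-propagation, in which the loss gradient with respect to each weight is computed by propagating an adjoint (error) signal backwards through the layers. The sketch below is a minimal illustrative implementation of that baseline for a tiny two-layer ReLU network with squared loss, checked against a finite-difference approximation; it does not reproduce the paper's F-adjoint formulation, whose details are not given in the abstract, and all names and sizes here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 4 inputs -> 5 hidden units (ReLU) -> 3 outputs, squared loss.
W1 = rng.normal(size=(5, 4)) * 0.1
W2 = rng.normal(size=(3, 5)) * 0.1

def forward(x):
    z1 = W1 @ x             # pre-activation of hidden layer
    a1 = np.maximum(z1, 0)  # ReLU activation
    z2 = W2 @ a1            # linear output layer
    return z1, a1, z2

def backprop(x, y):
    """Return loss and gradients dL/dW1, dL/dW2 for one sample."""
    z1, a1, z2 = forward(x)
    loss = 0.5 * np.sum((z2 - y) ** 2)
    delta2 = z2 - y                       # adjoint at the output: dL/dz2
    dW2 = np.outer(delta2, a1)
    delta1 = (W2.T @ delta2) * (z1 > 0)   # chain rule through the ReLU
    dW1 = np.outer(delta1, x)
    return loss, dW1, dW2

# Sanity check: compare one analytic gradient entry with a
# one-sided finite-difference approximation.
x = rng.normal(size=4)
y = rng.normal(size=3)
loss, dW1, dW2 = backprop(x, y)
eps = 1e-6
W1[0, 0] += eps
loss_plus, _, _ = backprop(x, y)
W1[0, 0] -= eps
numeric = (loss_plus - loss) / eps
assert abs(numeric - dW1[0, 0]) < 1e-4
```

A gradient-descent step then updates each weight matrix in place, e.g. `W1 -= lr * dW1`; the paper's contribution, per the abstract, is replacing this global back-propagated signal with a local learning rule derived from an equilibrium F-adjoint process.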
