Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

24 April 2023
Shaoyi Huang, Haowen Fang, Kaleel Mahmood, Bowen Lei, Nuo Xu, Bin Lei, Yue Sun, Dongkuan Xu, Wujie Wen, Caiwen Ding

Papers citing "Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration"

8 citing papers shown:
Neuroplasticity in Artificial Intelligence -- An Overview and Inspirations on Drop In & Out Learning
Yupei Li, M. Milling, Björn Schuller (AI4CE)
27 Mar 2025
Evolved Developmental Artificial Neural Networks for Multitasking with Advanced Activity Dependence
Yintong Zhang, Jason A. Yoder
14 Jul 2024
Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick (UQCV)
29 Mar 2024
AutoReP: Automatic ReLU Replacement for Fast Private Network Inference
Hongwu Peng, Shaoyi Huang, Tong Zhou, Yukui Luo, Chenghong Wang, ..., Tony Geng, Kaleel Mahmood, Wujie Wen, Xiaolin Xu, Caiwen Ding (OffRL)
20 Aug 2023
Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs
Elias Najarro, Shyam Sudhakaran, S. Risi
17 Jul 2023
Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick
09 Jan 2023
Towards Sparsification of Graph Neural Networks
Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang, O. Khan, Caiwen Ding (GNN)
11 Sep 2022
Deep Residual Learning in Spiking Neural Networks
Wei Fang, Zhaofei Yu, Yanqing Chen, Tiejun Huang, T. Masquelier, Yonghong Tian
08 Feb 2021