ResearchTrend.AI
Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware

26 January 2019
Shiwei Liu, D. Mocanu, A. R. Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy
BDL

Papers citing "Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware"

16 / 16 papers shown

  1. Sparse-to-Sparse Training of Diffusion Models
     Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva
     DiffM · 86 · 0 · 0 · 30 Apr 2025

  2. E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation
     Boqian Wu, Q. Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, D. Mocanu, M. V. Keulen, Elena Mocanu
     MedIm · 53 · 4 · 0 · 20 Feb 2025

  3. Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
     Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, D. Mocanu, Elena Mocanu
     OOD, 3DH · 52 · 0 · 0 · 03 Oct 2024

  4. Learning a Sparse Representation of Barron Functions with the Inverse Scale Space Flow
     T. J. Heeringa, Tim Roith, Christoph Brune, Martin Burger
     11 · 0 · 0 · 05 Dec 2023

  5. Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
     Yu-xin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji
     SyDa · 37 · 40 · 0 · 13 Oct 2023

  6. You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
     Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, ..., Yulong Pei, D. Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
     GNN · 39 · 14 · 0 · 28 Nov 2022

  7. Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training
     Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger
     34 · 0 · 0 · 25 Oct 2022

  8. Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
     Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen
     36 · 8 · 0 · 05 Mar 2022

  9. Achieving Personalized Federated Learning with Sparse Local Models
     Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, Dacheng Tao
     FedML · 30 · 43 · 0 · 27 Jan 2022

  10. M-ar-K-Fast Independent Component Analysis
      Luca Parisi
      30 · 0 · 0 · 17 Aug 2021

  11. Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
      Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
      OOD · 28 · 49 · 0 · 28 Jun 2021

  12. Learning Gradual Argumentation Frameworks using Genetic Algorithms
      J. Spieler, Nico Potyka, Steffen Staab
      AI4CE · 34 · 4 · 0 · 25 Jun 2021

  13. Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
      Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
      34 · 111 · 0 · 19 Jun 2021

  14. Sparse Training Theory for Scalable and Efficient Agents
      D. Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale
      45 · 17 · 0 · 02 Mar 2021

  15. On improving deep learning generalization with adaptive sparse connectivity
      Shiwei Liu, D. Mocanu, Mykola Pechenizkiy
      ODL · 12 · 7 · 0 · 27 Jun 2019

  16. Improving neural networks by preventing co-adaptation of feature detectors
      Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
      VLM · 266 · 7,634 · 0 · 03 Jul 2012