ResearchTrend.AI

Improving Neural Network Training in Low Dimensional Random Bases (arXiv:2011.04720)

9 November 2020
Frithjof Gressmann, Zach Eaton-Rosen, Carlo Luschi
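The paper's core idea is to reparameterize the network weights as theta = theta0 + P z, where P is a fixed random low-dimensional basis and only the coordinates z are trained. A minimal toy sketch of that reparameterization (a hypothetical least-squares example, not the paper's implementation; all sizes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: least-squares regression with D full parameters.
D, d, n = 64, 8, 200
X = rng.normal(size=(n, D))
w_true = rng.normal(size=D)
y = X @ w_true

# Fixed random basis with roughly unit-norm columns; only z is trained.
theta0 = np.zeros(D)
P = rng.normal(size=(D, d)) / np.sqrt(D)
z = np.zeros(d)

def loss_and_grad(z):
    theta = theta0 + P @ z          # map low-dim coords to full parameters
    r = X @ theta - y
    loss = 0.5 * np.mean(r ** 2)
    grad_theta = X.T @ r / n        # gradient in the full parameter space
    return loss, P.T @ grad_theta   # chain rule: project onto the basis

init_loss, _ = loss_and_grad(z)
lr = 0.1
for _ in range(500):
    _, g = loss_and_grad(z)
    z -= lr * g
final_loss, _ = loss_and_grad(z)
```

With d much smaller than D, gradient descent over z reduces the loss only as far as the random subspace allows, which is exactly the trade-off the paper and several of the citing works below study.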

Papers citing "Improving Neural Network Training in Low Dimensional Random Bases"

18 papers shown
Accelerating Learned Image Compression Through Modeling Neural Training Dynamics
Yichi Zhang, Zhihao Duan, Yuning Huang, Fengqing Zhu
23 May 2025
Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
Zhijie Chen, Qiaobo Li, A. Banerjee
11 Nov 2024
PMSS: Pretrained Matrices Skeleton Selection for LLM Fine-tuning
International Conference on Computational Linguistics (COLING), 2024
Qibin Wang, Xiaolin Hu, Weikai Xu, Wei Liu, Jian Luan, Bin Wang
25 Sep 2024
Learning Scalable Model Soup on a Single GPU: An Efficient Subspace Training Strategy
Tao Li, Weisen Jiang, Fanghui Liu, Xiaolin Huang, James T. Kwok
04 Jul 2024
Interpretability of Language Models via Task Spaces
Lucas Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes
10 Jun 2024
Does SGD really happen in tiny subspaces?
Minhak Song, Kwangjun Ahn, Chulhee Yun
25 May 2024
Towards Green AI: Current status and future research
Christian Clemm, Lutz Stobbe, Kishan Wimalawarne, Jan Druschke
01 May 2024
Improving Model Fusion by Training-time Neuron Alignment with Fixed Neuron Anchors
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024
Zexi Li, Zhiqi Li, Jie Lin, Zhenyuan Zhang, Tao Lin, Chao Wu
02 Feb 2024
Identifying Policy Gradient Subspaces
International Conference on Learning Representations (ICLR), 2024
Jan Schneider-Barnes, Pierre Schumacher, Simon Guist, Tianyu Cui, Daniel Haeufle, Bernhard Schölkopf, Le Chen
12 Jan 2024
Enhancing Neural Training via a Correlated Dynamics Model
Jonathan Brokman, Roy Betser, Rotem Turjeman, Tom Berkov, I. Cohen, Guy Gilboa
20 Dec 2023
Deep Model Fusion: A Survey
Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen
27 Sep 2023
Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Zhong Zhang, Bang Liu, Junming Shao
27 May 2023
PGrad: Learning Principal Gradients For Domain Generalization
International Conference on Learning Representations (ICLR), 2023
Zhe Wang, J. E. Grigsby, Yanjun Qi
02 May 2023
Robust Federated Learning against both Data Heterogeneity and Poisoning Attack via Aggregation Optimization
Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim
10 Nov 2022
Trainable Weight Averaging: Accelerating Training and Improving Generalization
Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin
26 May 2022
Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks
International Conference on Pattern Recognition (ICPR), 2022
Yuhuang Hu, Shih-Chii Liu
29 Mar 2022
Subspace Adversarial Training
Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
24 Nov 2021
Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces
Tao Li, Lei Tan, Qinghua Tao, Yipeng Liu, Xiaolin Huang
20 Mar 2021