
ACDC: A Structured Efficient Linear Layer
arXiv:1511.05946 · 18 November 2015
Marcin Moczulski, Misha Denil, J. Appleyard, Nando de Freitas
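The cited layer is compact enough to sketch. The ACDC paper parameterizes a linear map as A·C·D·C⁻¹, where A and D are learned diagonal matrices and C is the discrete cosine transform, so a layer costs O(n) parameters rather than the O(n²) of a dense weight matrix. A minimal NumPy/SciPy sketch of one such layer (the function name and argument names are illustrative, not from the paper):

```python
import numpy as np
from scipy.fft import dct, idct

def acdc_layer(x, a, d):
    """Apply an ACDC-style structured linear map: y = A C D C^{-1} x.

    a, d -- the diagonals of A and D (length-n vectors), the only
    learned parameters: O(n) instead of O(n^2) for a dense layer.
    C is the orthonormal type-II DCT, so C^{-1} is the inverse DCT.
    """
    return a * dct(d * idct(x, norm="ortho"), norm="ortho")

# Sanity check: with A = D = I the layer is the identity, since C C^{-1} = I.
x = np.arange(8, dtype=float)
y = acdc_layer(x, np.ones(8), np.ones(8))
```

In practice several such layers are stacked with nonlinearities between them; the diagonals `a` and `d` are what a framework would register as trainable parameters.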

Papers citing "ACDC: A Structured Efficient Linear Layer" (17 papers shown)
  • Block Circulant Adapter for Large Language Models
    Xinyu Ding, Meiqi Wang, Siyu Liao, Zhongfeng Wang (01 May 2025)
  • Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling [VLM]
    Mahdi Karami, Ali Ghodsi (28 Feb 2024)
  • Monarch: Expressive Structured Matrices for Efficient and Accurate Training
    Tri Dao, Beidi Chen, N. Sohoni, Arjun D. Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré (01 Apr 2022)
  • Rethinking Neural Operations for Diverse Tasks [AI4CE]
    Nicholas Roberts, M. Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar (29 Mar 2021)
  • GST: Group-Sparse Training for Accelerating Deep Reinforcement Learning [OffRL]
    Juhyoung Lee, Sangyeob Kim, Sangjin Kim, Wooyoung Jo, H. Yoo (24 Jan 2021)
  • Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification
    Huanrui Yang, Minxue Tang, W. Wen, Feng Yan, Daniel Hu, Ang Li, H. Li, Yiran Chen (20 Apr 2020)
  • Generalisation error in learning with random features and the hidden manifold model
    Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová (21 Feb 2020)
  • Iteratively Training Look-Up Tables for Network Quantization [MQ]
    Fabien Cardinaux, Stefan Uhlich, K. Yoshiyama, Javier Alonso García, Lukas Mauch, Stephen Tiedemann, Thomas Kemp, Akira Nakamura (12 Nov 2019)
  • Principled Training of Neural Networks with Direct Feedback Alignment
    Julien Launay, Iacopo Poli, Florent Krzakala (11 Jun 2019)
  • Butterfly Transform: An Efficient FFT Based Neural Architecture Design
    Keivan Alizadeh-Vahid, Anish K. Prabhu, Ali Farhadi, Mohammad Rastegari (05 Jun 2019)
  • Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
    Hesham Mostafa, Xin Wang (15 Feb 2019)
  • Learning Compressed Transforms with Low Displacement Rank
    Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré (04 Oct 2018)
  • Entropy and mutual information in models of deep neural networks
    Marylou Gabrié, Andre Manoel, Clément Luneau, Jean Barbier, N. Macris, Florent Krzakala, Lenka Zdeborová (24 May 2018)
  • Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory [MQ]
    Ameya Prabhu, Vishal Batchu, Rohit Gajawada, Sri Aurobindo Munagala, A. Namboodiri (11 Apr 2018)
  • Small-footprint Highway Deep Neural Networks for Speech Recognition
    Liang Lu, Steve Renals (18 Oct 2016)
  • Structured Convolution Matrices for Energy-efficient Deep Learning
    R. Appuswamy, T. Nayak, John V. Arthur, S. K. Esser, P. Merolla, J. McKinstry, T. Melano, M. Flickner, D. Modha (08 Jun 2016)
  • Strongly-Typed Recurrent Neural Networks [PINN]
    David Balduzzi, Muhammad Ghifary (06 Feb 2016)