ResearchTrend.AI
Scalable MatMul-free Language Modeling (arXiv:2406.02528) · Cited By
4 June 2024
Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, P. Zhou, Jason Eshraghian

Papers citing "Scalable MatMul-free Language Modeling"

14 / 14 papers shown

 1. PROM: Prioritize Reduction of Multiplications Over Lower Bit-Widths for Efficient CNNs
    Lukas Meiner, Jens Mehnert, A. P. Condurache · MQ · 06 May 2025

 2. RandLoRA: Full-rank parameter-efficient fine-tuning of large models
    Paul Albert, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, Ehsan Abbasnejad · 03 Feb 2025

 3. Towards Resilient and Efficient LLMs: A Comparative Study of Efficiency, Performance, and Adversarial Robustness
    Xiaojing Fan, Chunliang Tao · AAML · 08 Aug 2024

 4. u-$\mu$P: The Unit-Scaled Maximal Update Parametrization
    Charlie Blake, C. Eichenberg, Josef Dean, Lukas Balles, Luke Y. Prince, Bjorn Deiseroth, Andres Felipe Cruz Salinas, Carlo Luschi, Samuel Weinbach, Douglas Orr · 24 Jul 2024

 5. SQUAT: Stateful Quantization-Aware Training in Recurrent Spiking Neural Networks
    Sreyes P. Venkatesh, Razvan Marinescu, Jason Eshraghian · MQ · 15 Apr 2024

 6. Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
    Soham De, Samuel L. Smith, Anushan Fernando, Aleksandar Botev, George-Christian Muraru, ..., David Budden, Yee Whye Teh, Razvan Pascanu, Nando de Freitas, Çağlar Gülçehre · Mamba · 29 Feb 2024

 7. BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
    Dayou Du, Yijia Zhang, Shijie Cao, Jiaqi Guo, Ting Cao, Xiaowen Chu, Ningyi Xu · MQ · 16 Feb 2024

 8. DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
    DeepSeek-AI: Xiao Bi, Deli Chen, Guanting Chen, ..., Yao Zhao, Shangyan Zhou, Shunfeng Zhou, Qihao Zhu, Yuheng Zou · LRM, ALM · 05 Jan 2024

 9. Neuromorphic Intermediate Representation: A Unified Instruction Set for Interoperable Brain-Inspired Computing
    Jens Egholm Pedersen, Steven Abreu, Matthias Jobst, Gregor Lenz, Vittorio Fra, ..., Gianvito Urgese, Sadasivan Shankar, Terrence C. Stewart, Jason Eshraghian, Sadique Sheik · 24 Nov 2023

10. Resurrecting Recurrent Neural Networks for Long Sequences
    Antonio Orvieto, Samuel L. Smith, Albert Gu, Anushan Fernando, Çağlar Gülçehre, Razvan Pascanu, Soham De · 11 Mar 2023

11. Spikformer: When Spiking Neural Network Meets Transformer
    Zhaokun Zhou, Yuesheng Zhu, Chao He, Yaowei Wang, Shuicheng Yan, Yonghong Tian, Liuliang Yuan · 29 Sep 2022

12. BiT: Robustly Binarized Multi-distilled Transformer
    Zechun Liu, Barlas Oğuz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad · MQ · 25 May 2022

13. BinaryBERT: Pushing the Limit of BERT Quantization
    Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King · MQ · 31 Dec 2020

14. Scaling Laws for Neural Language Models
    Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020