ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws
arXiv 2404.05405 · 8 April 2024
Zeyuan Allen-Zhu, Yuanzhi Li
Tags: KELM
Links: arXiv (abs) · PDF · HTML

Papers citing "Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws"

24 / 24 papers shown
Multidimensional Analysis of Specific Language Impairment Using Unsupervised Learning Through PCA and Clustering
Niruthiha Selvanayagam
05 Jun 2025

Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer
Yihe Dong, Lorenzo Noci, Mikhail Khodak, Mufan Li
01 Jun 2025

Toward a Theory of Agents as Tool-Use Decision-Makers
Hongru Wang, Cheng Qian, Manling Li, Jiahao Qiu, Boyang Xue, Mengdi Wang, Heng Ji, Kam-Fai Wong
01 Jun 2025

How much do language models memorize?
John X. Morris, Chawin Sitawarin, Chuan Guo, Narine Kokhlikyan, G. E. Suh, Alexander M. Rush, Kamalika Chaudhuri, Saeed Mahloujifar
Tags: KELM, ELM
30 May 2025

Pretraining Language Models to Ponder in Continuous Space
Boyi Zeng, Shixiang Song, Siyuan Huang, Yixuan Wang, He Li, Ziwei He, Xinbing Wang, Zhiyu Li, Zhouhan Lin
Tags: LRM
27 May 2025

How Is LLM Reasoning Distracted by Irrelevant Context? An Analysis Using a Controlled Benchmark
Minglai Yang, Ethan Huang, Liang Zhang, Mihai Surdeanu, William Yang Wang, Liangming Pan
Tags: LRM
24 May 2025

Data Mixing Can Induce Phase Transitions in Knowledge Acquisition
Xinran Gu, Kaifeng Lyu, Jiazheng Li, Jingzhao Zhang
23 May 2025

Enhancing LLMs via High-Knowledge Data Selection
Feiyu Duan, Xuemiao Zhang, Sirui Wang, Haoran Que, Yuqi Liu, Wenge Rong, Xunliang Cai
20 May 2025

When Does Metadata Conditioning (NOT) Work for Language Model Pre-Training? A Study with Context-Free Grammars
Rei Higuchi, Ryotaro Kawata, Naoki Nishikawa, Kazusato Oko, Shoichiro Yamaguchi, Sosuke Kobayashi, Seiya Tokui, K. Hayashi, Daisuke Okanohara, Taiji Suzuki
Tags: AI4CE
24 Apr 2025

Tina: Tiny Reasoning Models via LoRA
Shangshang Wang, Julian Asilis, Ömer Faruk Akgül, Enes Burak Bilgin, Ollie Liu, Willie Neiswanger
Tags: OffRL, LRM
22 Apr 2025

Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
Zhanke Zhou, Zhaocheng Zhu, Xuan Li, Mikhail Galkin, Xiao Feng, Sanmi Koyejo, Jian Tang, Bo Han
Tags: LRM
28 Mar 2025

Reasoning with Latent Thoughts: On the Power of Looped Transformers
Nikunj Saunshi, Nishanth Dikkala, Zhiyuan Li, Sanjiv Kumar, Sashank J. Reddi
Tags: OffRL, LRM, AI4CE
24 Feb 2025

Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy, Unified Evaluation, and Comparative Analysis
Jiaqi Zhao, Ming Wang, Miao Zhang, Yuzhang Shang, Xuebo Liu, Yaowei Wang, Min Zhang, Liqiang Nie
Tags: MQ
18 Feb 2025

How to Upscale Neural Networks with Scaling Law? A Survey and Practical Guidelines
Ayan Sengupta, Tanmoy Chakraborty
17 Feb 2025

How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
Yixin Ou, Yunzhi Yao, N. Zhang, Hui Jin, Jiacheng Sun, Shumin Deng, Zechao Li, Ningyu Zhang
Tags: KELM, CLL
16 Feb 2025

Typhoon T1: An Open Thai Reasoning Model
Pittawat Taveekitworachai, Potsawee Manakul, Kasima Tharnpipitchai, Kunat Pipatanakul
Tags: OffRL, LRM
13 Feb 2025

Do we really have to filter out random noise in pre-training data for language models?
Jinghan Ru, Yuxin Xie, Xianwei Zhuang, Yuguo Yin, Zhihui Guo, Zhiming Liu, Qianli Ren, Yuexian Zou
10 Feb 2025

SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, Yi-An Ma
Tags: OffRL
28 Jan 2025

Scaling Laws for Predicting Downstream Performance in LLMs
Yangyi Chen, Binxuan Huang, Yifan Gao, Zhengyang Wang, Jingfeng Yang, Heng Ji
Tags: LRM
11 Oct 2024

Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
Jiyeon Kim, Hyunji Lee, Hyowon Cho, Joel Jang, Hyeonbin Hwang, Seungpil Won, Youbin Ahn, Dohaeng Lee, Minjoon Seo
Tags: KELM
02 Oct 2024

State space models, emergence, and ergodicity: How many parameters are needed for stable predictions?
Ingvar M. Ziemann, Nikolai Matni, George J. Pappas
20 Sep 2024

Representing Rule-based Chatbots with Transformers
Dan Friedman, Abhishek Panigrahi, Danqi Chen
15 Jul 2024

Knowledge Circuits in Pretrained Transformers
Yunzhi Yao, Ningyu Zhang, Zekun Xi, Meng Wang, Ziwen Xu, Shumin Deng, Huajun Chen
Tags: KELM
28 May 2024

Physics of Language Models: Part 1, Learning Hierarchical Language Structures
Zeyuan Allen-Zhu, Yuanzhi Li
23 May 2023