arXiv: 2506.17871 (v2, latest)
LLM Probability Concentration: How Alignment Shrinks the Generative Horizon
22 June 2025
Chenghao Yang, Ari Holtzman
Links: ArXiv (abs) · PDF · HTML · HuggingFace (7 upvotes) · GitHub (25018★)
Papers citing "LLM Probability Concentration: How Alignment Shrinks the Generative Horizon" (2 of 2 papers shown)
Let it Calm: Exploratory Annealed Decoding for Verifiable Reinforcement Learning
Chenghao Yang, Lin Gui, Chenxiao Yang, Victor Veitch, Lizhu Zhang, Zhuokai Zhao
Tags: OffRL · 06 Oct 2025
Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards
Haoran He, Yuxiao Ye, Qingpeng Cai, Chen-Hao Hu, Binxing Jiao, Daxin Jiang, Ling Pan
Tags: OffRL, LRM · 29 Sep 2025