Accelerate Parallelizable Reasoning via Parallel Decoding within One Sequence

26 March 2025
Yijiong Yu
    LRM
    AIMat
arXiv: 2503.20533

Papers citing "Accelerate Parallelizable Reasoning via Parallel Decoding within One Sequence"

Hogwild! Inference: Parallel LLM Generation via Concurrent Attention
Gleb Rodionov
Roman Garipov
Alina Shutova
George Yakushev
Vage Egiazarian
Anton Sinitsin
Denis Kuznedelev
Dan Alistarh
LRM
08 Apr 2025