Flex Attention: A Programming Model for Generating Optimized Attention Kernels

7 December 2024
Juechu Dong, Boyuan Feng, Driss Guessous, Yanbo Liang, Horace He
ArXiv | PDF | HTML

Papers citing "Flex Attention: A Programming Model for Generating Optimized Attention Kernels"

Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
Marianne Arriola, Aaron Gokaslan, Justin T. Chiu, Zhihan Yang, Zhixuan Qi, Jiaqi Han, S. Sahoo, Volodymyr Kuleshov
12 Mar 2025 (DiffM)

AttentionEngine: A Versatile Framework for Efficient Attention Mechanisms on Diverse Hardware Platforms
Feiyang Chen, Yu Cheng, Lei Wang, Yuqing Xia, Ziming Miao, ..., Fan Yang, Jinbao Xue, Zhi Yang, M. Yang, H. Chen
24 Feb 2025