ResearchTrend.AI
Can Reasoning Models Reason about Hardware? An Agentic HLS Perspective

17 March 2025
Luca Collini
Andrew Hennessee
Ramesh Karri
Siddharth Garg
Abstract

Recent Large Language Models (LLMs) such as OpenAI o3-mini and DeepSeek-R1 use enhanced reasoning through Chain-of-Thought (CoT). Their potential in hardware design, which relies on expert-driven iterative optimization, remains unexplored. This paper investigates whether reasoning LLMs can address challenges in High-Level Synthesis (HLS) design space exploration and optimization. During HLS, engineers manually define pragmas/directives to balance performance and resource constraints. We propose an LLM-based agentic optimization framework that automatically restructures code, inserts pragmas, and identifies optimal design points via feedback from HLS tools and access to integer-linear programming (ILP) solvers. Experiments compare reasoning models against conventional LLMs on benchmarks using success rate, efficiency, and design quality (area/latency) metrics, and provide the first-ever glimpse into the CoTs produced by a powerful open-source reasoning model like DeepSeek-R1.
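To make the pragma-insertion step concrete, here is a toy C kernel annotated in the style an HLS agent might produce. The specific directives (PIPELINE, UNROLL, ARRAY_PARTITION) are Vitis-HLS-style examples assumed for illustration, not taken from the paper; a plain C compiler ignores unknown pragmas, so the code behaves identically with or without them.

```c
/* Toy dot-product kernel with illustrative HLS-style directives.
   Under an HLS tool, ARRAY_PARTITION exposes parallel memory ports,
   PIPELINE targets an initiation interval of 1, and UNROLL replicates
   the loop body; a standard C compiler simply skips these pragmas. */
int dot_product(const int a[8], const int b[8]) {
#pragma HLS ARRAY_PARTITION variable=a complete
#pragma HLS ARRAY_PARTITION variable=b complete
    int acc = 0;
    for (int i = 0; i < 8; i++) {
#pragma HLS PIPELINE II=1
#pragma HLS UNROLL factor=4
        acc += a[i] * b[i];
    }
    return acc;
}
```

Each pragma trades area for latency (e.g., full unrolling duplicates multipliers to cut cycles), which is exactly the performance/resource balance the abstract says engineers tune by hand and the proposed agent explores automatically.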

@article{collini2025_2503.12721,
  title={Can Reasoning Models Reason about Hardware? An Agentic HLS Perspective},
  author={Luca Collini and Andrew Hennessee and Ramesh Karri and Siddharth Garg},
  journal={arXiv preprint arXiv:2503.12721},
  year={2025}
}