ThreatLens: LLM-guided Threat Modeling and Test Plan Generation for Hardware Security Verification

11 May 2025
Dipayan Saha
Hasan Al Shaikh
Shams Tarek
Farimah Farahmandi
Abstract

Current hardware security verification processes rely predominantly on manual threat modeling and test plan generation, which are labor-intensive and error-prone and struggle to scale with increasing design complexity and evolving attack methodologies. To address these challenges, we propose ThreatLens, an LLM-driven multi-agent framework that automates security threat modeling and test plan generation for hardware security verification. ThreatLens integrates retrieval-augmented generation (RAG) to extract relevant security knowledge, LLM-powered reasoning for threat assessment, and interactive user feedback to ensure the generation of practical test plans. By automating these processes, the framework reduces manual verification effort, enhances coverage, and ensures a structured, adaptable approach to security verification. We evaluated our framework on the NEORV32 SoC, demonstrating its capability to automate security verification through structured test plans and validating its effectiveness in real-world scenarios.
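The abstract describes a three-stage flow: retrieve relevant security knowledge (RAG), reason over it with an LLM to assess threats, and refine the resulting test plan through user feedback. The Python sketch below is a hypothetical, heavily simplified illustration of that pipeline; it is not the authors' implementation, and every class and function name (e.g. `retrieve_security_knowledge`, `llm_threat_reasoning`, `refine_with_user_feedback`) is an assumed placeholder with stubbed logic standing in for the actual retrieval and LLM calls.

```python
"""Minimal sketch of a ThreatLens-style flow, assuming placeholder stubs
for the RAG, LLM-reasoning, and feedback stages named in the abstract."""

from dataclasses import dataclass, field
from typing import List


@dataclass
class ThreatAssessment:
    asset: str                      # design feature under analysis (e.g. a debug interface)
    threats: List[str]              # candidate threat scenarios
    test_plan: List[str] = field(default_factory=list)


def retrieve_security_knowledge(design_feature: str) -> List[str]:
    """Placeholder for the RAG step: look up security knowledge
    (CWEs, known attacks, guidelines) relevant to a design feature."""
    knowledge_base = {
        "debug interface": [
            "CWE-1244: unlocked debug access",
            "JTAG-based key extraction attack",
        ],
    }
    return knowledge_base.get(design_feature, [])


def llm_threat_reasoning(design_feature: str, knowledge: List[str]) -> ThreatAssessment:
    """Placeholder for the LLM reasoning step: map retrieved knowledge to
    concrete threats and draft directed security tests."""
    threats = [f"{design_feature}: {item}" for item in knowledge]
    tests = [f"Directed test checking mitigation of '{item}'" for item in knowledge]
    return ThreatAssessment(asset=design_feature, threats=threats, test_plan=tests)


def refine_with_user_feedback(assessment: ThreatAssessment,
                              accepted: List[int]) -> ThreatAssessment:
    """Placeholder for the interactive feedback step: keep only the
    test-plan items the verification engineer accepts."""
    assessment.test_plan = [t for i, t in enumerate(assessment.test_plan) if i in accepted]
    return assessment


if __name__ == "__main__":
    feature = "debug interface"
    knowledge = retrieve_security_knowledge(feature)
    draft = llm_threat_reasoning(feature, knowledge)
    final = refine_with_user_feedback(draft, accepted=[0, 1])
    for test in final.test_plan:
        print(test)
```

In a real system, the dictionary lookup would be replaced by retrieval over a security knowledge corpus, the string templating by actual LLM calls, and the accepted-index list by an interactive review loop; the sketch only shows how the three stages described in the abstract compose.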

@article{saha2025_2505.06821,
  title={ThreatLens: LLM-guided Threat Modeling and Test Plan Generation for Hardware Security Verification},
  author={Dipayan Saha and Hasan Al Shaikh and Shams Tarek and Farimah Farahmandi},
  journal={arXiv preprint arXiv:2505.06821},
  year={2025}
}