(Security) Assertions by Large Language Models

24 June 2023
Rahul Kande, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Shailja Thakur, Ramesh Karri, Jeyavijayan Rajendran (Texas A&M University)

Papers citing "(Security) Assertions by Large Language Models"

10 of 10 citing papers shown
ThreatLens: LLM-guided Threat Modeling and Test Plan Generation for Hardware Security Verification
Dipayan Saha, Hasan Al Shaikh, Shams Tarek, Farimah Farahmandi
11 May 2025

The Quest to Build Trust Earlier in Digital Design
Benjamin Tan
09 Sep 2024

AssertionBench: A Benchmark to Evaluate Large-Language Models for Assertion Generation
Vaishnavi Pulavarthi, Deeksha Nandal, Soham Dan, Debjit Pal
26 Jun 2024

LLMs and the Future of Chip Design: Unveiling Security Risks and Building Trust
Zeng Wang, Lilas Alrahis, Likhitha Mankali, J. Knechtel, Ozgur Sinanoglu
11 May 2024

LLM4SecHW: Leveraging Domain Specific Large Language Model for Hardware Debugging
Weimin Fu, Kaichen Yang, R. Dutta, Xiaolong Guo, Gang Qu
28 Jan 2024

DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection
Sudipta Paria, Aritra Dasgupta, S. Bhunia
14 Aug 2023

A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa
19 May 2023

TheHuzz: Instruction Fuzzing of Processors Using Golden-Reference Models for Finding Software-Exploitable Vulnerabilities
Aakash Tyagi, Addison Crump, A. Sadeghi, Garrett Persyn, Jeyavijayan Rajendran, Patrick Jauernig, Rahul Kande
24 Jan 2022

Fuzzing Hardware Like Software
Timothy Trippel, K. Shin, A. Chernyakhovsky, Garret Kelly, Dominic Rizzo, Matthew Hicks
03 Feb 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019