Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report

28 April 2025
Paul Kassianik, Baturay Saglam, Alexander Chen, Blaine Nelson, Anu Vellore, Massimo Aufiero, Fraser Burch, Dhruv Kedia, Avi Zohary, Sajana Weerawardhena, Aman Priyanshu, Adam Swanda, Amy Chang, Hyrum Anderson, Kojin Oshiba, Omar Santos, Yaron Singer, Amin Karbasi
Abstract

As transformer-based large language models (LLMs) increasingly permeate society, they have revolutionized domains such as software engineering, creative writing, and digital arts. However, their adoption in cybersecurity remains limited due to challenges such as the scarcity of specialized training data and the complexity of representing cybersecurity-specific knowledge. To address these gaps, we present Foundation-Sec-8B, a cybersecurity-focused LLM built on the Llama 3.1 architecture and enhanced through continued pretraining on a carefully curated cybersecurity corpus. We evaluate Foundation-Sec-8B on both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini on certain cybersecurity-specific tasks. By releasing our model to the public, we aim to accelerate the progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
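Since the model is publicly released, a natural first step is to load it with the Hugging Face transformers library. The sketch below is illustrative only: the repository id fdtn-ai/Foundation-Sec-8B and the example prompt are assumptions not confirmed by this page, so check the official release for the exact identifier.

# Minimal sketch: load a base (non-instruct) causal LM and sample a completion.
# NOTE: the repo id below is an assumption; confirm it on the official release page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fdtn-ai/Foundation-Sec-8B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A base model continues text rather than following chat turns, so phrase
# the input as a passage to be completed.
prompt = "CVE-2021-44228 is a remote code execution vulnerability in"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because this is a base model rather than an instruction-tuned one, completion-style prompts like the one above tend to work better than question-answer phrasing.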

@article{kassianik2025_2504.21039,
  title={Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report},
  author={Paul Kassianik and Baturay Saglam and Alexander Chen and Blaine Nelson and Anu Vellore and Massimo Aufiero and Fraser Burch and Dhruv Kedia and Avi Zohary and Sajana Weerawardhena and Aman Priyanshu and Adam Swanda and Amy Chang and Hyrum Anderson and Kojin Oshiba and Omar Santos and Yaron Singer and Amin Karbasi},
  journal={arXiv preprint arXiv:2504.21039},
  year={2025}
}