As transformer-based large language models (LLMs) increasingly permeate society, they have revolutionized domains such as software engineering, creative writing, and digital arts. However, their adoption in cybersecurity remains limited, owing to challenges such as the scarcity of specialized training data and the complexity of representing cybersecurity-specific knowledge. To address these gaps, we present Foundation-Sec-8B, a cybersecurity-focused LLM built on the Llama 3.1 architecture and enhanced through continued pretraining on a carefully curated cybersecurity corpus. We evaluate Foundation-Sec-8B across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini on certain cybersecurity-specific tasks. By releasing our model publicly, we aim to accelerate the progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
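Since the model is released publicly, a minimal loading-and-prompting sketch follows, using standard Hugging Face transformers APIs. The Hub identifier fdtn-ai/Foundation-Sec-8B and the example prompt are assumptions for illustration, not details confirmed by the report itself.

# Minimal sketch: load Foundation-Sec-8B and run a completion-style prompt.
# The model identifier below is an assumption; substitute the identifier
# from the official release if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fdtn-ai/Foundation-Sec-8B"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 8B weights fit on a single modern GPU in bf16
    device_map="auto",
)

# This is a base (non-instruct) model, so prompt with a prefix to complete
# rather than a chat-style instruction.
prompt = "CVE-2021-44228 is a remote code execution vulnerability in"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))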
@article{kassianik2025_2504.21039,
  title={Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report},
  author={Paul Kassianik and Baturay Saglam and Alexander Chen and Blaine Nelson and Anu Vellore and Massimo Aufiero and Fraser Burch and Dhruv Kedia and Avi Zohary and Sajana Weerawardhena and Aman Priyanshu and Adam Swanda and Amy Chang and Hyrum Anderson and Kojin Oshiba and Omar Santos and Yaron Singer and Amin Karbasi},
  journal={arXiv preprint arXiv:2504.21039},
  year={2025}
}