State of the Field
Recent advances in cybersecurity AI focus on adapting large language models (LLMs) to specific operational challenges. Models trained on domain-specific data are performing tasks such as malware detection and classification with greater accuracy and efficiency; recent work shows that fine-tuned LLMs can distinguish benign from malicious software, although ongoing adaptation is needed to keep pace with evolving threats. In parallel, frameworks for scalable feature selection aim to improve the interpretability and robustness of malware detection systems, while agentic AI architectures are being proposed to govern decision-making under uncertainty. These developments seek not only to bolster organizational defenses but also to counter increasingly sophisticated cybercriminal tactics that themselves exploit AI. The field increasingly recognizes that continuous learning and adaptation are required to remain effective against a dynamic threat landscape.
Papers
RedSage: A Cybersecurity Generalist LLM
Cybersecurity operations demand assistant LLMs that support diverse workflows without exposing sensitive data. Existing solutions either rely on proprietary APIs with privacy risks or on open models l...
Llama-3.1-FoundationAI-SecurityLLM-Reasoning-8B Technical Report
We present Foundation-Sec-8B-Reasoning, the first open-source native reasoning model for cybersecurity. Built upon our previously released Foundation-Sec-8B base model (derived from Llama-3.1-8B-Base)...
CAFE-GB: Scalable and Stable Feature Selection for Malware Detection via Chunk-wise Aggregated Gradient Boosting
High-dimensional malware datasets often exhibit feature redundancy, instability, and scalability limitations, which hinder the effectiveness and interpretability of machine learning-based malware dete...
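The abstract is cut off above, but the chunk-wise aggregation idea in the title can be sketched: split the feature columns into chunks, fit a gradient-boosting model per chunk, pool the per-chunk importance scores, and keep the top-ranked features. The snippet below is an illustrative reconstruction of that general pattern, not the authors' CAFE-GB implementation.

```python
# Illustrative sketch of chunk-wise feature selection with gradient boosting.
# Not the CAFE-GB algorithm itself; just the general pattern the title suggests.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def chunked_gb_feature_selection(X, y, chunk_size=500, top_k=100, random_state=0):
    """Fit one booster per feature chunk, aggregate importances, keep top_k."""
    n_features = X.shape[1]
    importances = np.zeros(n_features)
    for start in range(0, n_features, chunk_size):
        cols = np.arange(start, min(start + chunk_size, n_features))
        gb = GradientBoostingClassifier(n_estimators=50, random_state=random_state)
        gb.fit(X[:, cols], y)
        importances[cols] = gb.feature_importances_   # per-chunk importance scores
    return np.argsort(importances)[::-1][:top_k]      # indices of selected features

# Example on synthetic high-dimensional data
X = np.random.rand(200, 1000)
y = np.random.randint(0, 2, size=200)
selected = chunked_gb_feature_selection(X, y, chunk_size=250, top_k=50)
print(selected[:10])
```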
A Decompilation-Driven Framework for Malware Detection with Large Language Models
The parallel evolution of Large Language Models (LLMs) with advanced code-understanding capabilities and the increasing sophistication of malware presents a new frontier for cybersecurity research. Th...
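The general shape of a decompilation-driven pipeline can be sketched as: decompile the binary to pseudo-C, then ask a code-capable LLM for a verdict. In the sketch below, `decompile_binary` is a stub and "gpt2" is only a stand-in model id so the example runs; neither reflects the paper's actual framework.

```python
# Sketch of a decompilation-then-LLM triage pipeline. `decompile_binary` is a
# stub standing in for a real decompiler, and "gpt2" is only a stand-in model;
# a code-capable, security-tuned LLM would be substituted in practice.
from transformers import pipeline

def decompile_binary(path: str) -> str:
    """Stub: return pseudo-C for the binary at `path`.

    A real pipeline would invoke a decompiler (e.g. as an external tool)
    and return its output here.
    """
    return 'int main(void) { connect_to("203.0.113.7", 4444); spawn_shell(); return 0; }'

def classify_binary(path: str, generator) -> str:
    pseudo_c = decompile_binary(path)[:8000]   # truncate to fit the context window
    prompt = ("Decide whether this decompiled program is malware. "
              "Answer MALWARE or BENIGN.\n\n" + pseudo_c + "\n\nAnswer:")
    out = generator(prompt, max_new_tokens=8, return_full_text=False)
    answer = out[0]["generated_text"].upper()
    return "MALWARE" if "MALWARE" in answer else "BENIGN"

generator = pipeline("text-generation", model="gpt2")   # stand-in model id
print(classify_binary("sample.exe", generator))
```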
Malware Classification using Diluted Convolutional Neural Network with Fast Gradient Sign Method
Android malware has become an increasingly critical threat to organizations, society and individuals, posing significant risks to privacy, data security and infrastructure. As malware continues to evo...
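FGSM itself is a standard adversarial-example technique: perturb each input along the sign of the loss gradient, then reuse the perturbed samples for robustness evaluation or training. The small dilated-convolution classifier in the sketch below is an assumption standing in for the paper's model, not its actual architecture.

```python
# Sketch of FGSM adversarial example generation against a small CNN with
# dilated convolutions; illustrative only, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallDilatedCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, dilation=2, padding=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, dilation=2, padding=2)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: step along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

model = SmallDilatedCNN()
x = torch.rand(8, 1, 64, 64)          # e.g. 8 grayscale malware-image samples
y = torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, x, y)      # adversarial copies for robust training/eval
print((x_adv - x).abs().max())
```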
Agentic AI for Cybersecurity: A Meta-Cognitive Architecture for Governable Autonomy
Contemporary AI-driven cybersecurity systems are predominantly architected as model-centric detection and automation pipelines optimized for task-level performance metrics such as accuracy and respons...
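The abstract is truncated, so the following is only a generic illustration of governable autonomy: the agent acts on its own when its confidence clears a threshold and defers to a human analyst otherwise. Thresholds, field names, and actions are assumptions, not the paper's meta-cognitive architecture.

```python
# Generic sketch of a confidence-gated decision policy for an autonomous
# security agent; thresholds and action names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DetectionEvent:
    host: str
    verdict: str       # e.g. "ransomware", "benign"
    confidence: float  # model-reported probability in [0, 1]

def decide(event: DetectionEvent, act_threshold=0.95, review_threshold=0.70):
    """Act autonomously only above act_threshold; otherwise defer or just log."""
    if event.verdict == "benign":
        return "log_only"
    if event.confidence >= act_threshold:
        return f"isolate_host:{event.host}"         # autonomous containment
    if event.confidence >= review_threshold:
        return f"escalate_to_analyst:{event.host}"  # human-in-the-loop review
    return "log_only"                               # too uncertain to act

print(decide(DetectionEvent("ws-042", "ransomware", 0.97)))
print(decide(DetectionEvent("ws-042", "ransomware", 0.80)))
```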
What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation
The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistica...
Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders
Safety alignment in large language models (LLMs), particularly for cybersecurity tasks, primarily focuses on preventing misuse. While this approach reduces direct harm, it obscures a complementary fai...
From Threat Intelligence to Firewall Rules: Semantic Relations in Hybrid AI Agent and Expert System Architectures
Web security demands rapid response capabilities to evolving cyber threats. Agentic Artificial Intelligence (AI) promises automation, but the need for trustworthy security responses is of the utmost i...
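As a minimal illustration of the threat-intelligence-to-firewall-rules direction, the sketch below converts an IP indicator into an iptables DROP rule, with a simple validity check standing in for the expert-system layer. The indicator schema and guardrails are assumptions, not the paper's design.

```python
# Minimal sketch: translating an IP-based threat indicator into an iptables
# rule, with a basic validation step standing in for the expert-system layer.
# Field names are illustrative assumptions.
import ipaddress

def ioc_to_iptables_rule(ioc: dict) -> str | None:
    """Return a DROP rule for an IPv4 indicator, or None if it fails validation."""
    if ioc.get("type") != "ipv4":
        return None
    try:
        addr = ipaddress.ip_address(ioc["value"])
    except ValueError:
        return None                      # malformed indicator: reject
    if addr.is_private or addr.is_loopback:
        return None                      # guardrail: never block internal ranges
    return f"iptables -A INPUT -s {addr} -j DROP"

feed = [
    {"type": "ipv4", "value": "203.0.113.45"},
    {"type": "ipv4", "value": "10.0.0.8"},       # internal, should be rejected
    {"type": "domain", "value": "evil.example"}, # unsupported type, skipped
]
for ioc in feed:
    rule = ioc_to_iptables_rule(ioc)
    if rule:
        print(rule)
```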