Cybersecurity AI

9 papers · 5.2 viability · −20% (30d)

State of the Field

Recent work in cybersecurity AI focuses on adapting large language models (LLMs) to specific operational challenges. Models trained on domain-specific data now perform tasks such as malware detection and classification with greater accuracy and efficiency: fine-tuned LLMs have proven effective at distinguishing benign from malicious software, though continual adaptation is needed to keep pace with evolving threats. In parallel, scalable feature-selection frameworks aim to improve the interpretability and robustness of malware detection systems, and agentic AI architectures are being proposed to govern decision-making under uncertainty. These efforts seek both to strengthen organizational defenses and to counter the growing sophistication of cybercriminal tactics that themselves exploit AI. Across the field there is increasing recognition that continuous learning and adaptation are necessary to remain effective against a dynamic threat landscape.
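The chunk-wise feature-selection idea mentioned above can be sketched as follows. This is a minimal illustration of the general pattern, not the CAFE-GB authors' implementation: the chunking scheme, `chunk_size`, `top_k`, and the toy dataset are all assumptions made for the example.

```python
# Sketch of chunk-wise aggregated feature selection for high-dimensional
# malware feature matrices: split columns into chunks, score each chunk with
# a gradient-boosting model, aggregate importances, keep the top-k features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def chunkwise_select(X, y, chunk_size=50, top_k=20, random_state=0):
    """Return indices of the top_k features by per-chunk GB importance."""
    n_features = X.shape[1]
    importances = np.zeros(n_features)
    for start in range(0, n_features, chunk_size):
        cols = np.arange(start, min(start + chunk_size, n_features))
        gb = GradientBoostingClassifier(n_estimators=50,
                                        random_state=random_state)
        gb.fit(X[:, cols], y)          # fit on this column chunk only
        importances[cols] = gb.feature_importances_
    # Descending sort by aggregated importance; keep the top_k indices.
    return np.argsort(importances)[::-1][:top_k]

# Toy demo: 200 samples, 120 features, only features 3 and 57 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 120))
y = (X[:, 3] + X[:, 57] > 0).astype(int)
selected = chunkwise_select(X, y, chunk_size=40, top_k=10)
```

Training one model per column chunk keeps peak memory bounded regardless of total dimensionality, which is the scalability motivation; the aggregation step here is a simple concatenation of per-chunk importances.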

Last updated Feb 24, 2026

Papers

1–9 of 9
Research Paper · Jan 29, 2026

RedSage: A Cybersecurity Generalist LLM

Cybersecurity operations demand assistant LLMs that support diverse workflows without exposing sensitive data. Existing solutions either rely on proprietary APIs with privacy risks or on open models l...

8.0 viability
Research Paper · Jan 28, 2026

Llama-3.1-FoundationAI-SecurityLLM-Reasoning-8B Technical Report

We present Foundation-Sec-8B-Reasoning, the first open-source native reasoning model for cybersecurity. Built upon our previously released Foundation-Sec-8B base model (derived from Llama-3.1-8B-Base)...

7.0 viability
Research Paper · Jan 22, 2026

CAFE-GB: Scalable and Stable Feature Selection for Malware Detection via Chunk-wise Aggregated Gradient Boosting

High-dimensional malware datasets often exhibit feature redundancy, instability, and scalability limitations, which hinder the effectiveness and interpretability of machine learning-based malware dete...

7.0 viability
Research Paper · Jan 14, 2026

A Decompilation-Driven Framework for Malware Detection with Large Language Models

The parallel evolution of Large Language Models (LLMs) with advanced code-understanding capabilities and the increasing sophistication of malware presents a new frontier for cybersecurity research. Th...

7.0 viability
Research Paper · Jan 14, 2026

Malware Classification using Diluted Convolutional Neural Network with Fast Gradient Sign Method

Android malware has become an increasingly critical threat to organizations, society and individuals, posing significant risks to privacy, data security and infrastructure. As malware continues to evo...

5.0 viability
Research Paper · Feb 12, 2026

Agentic AI for Cybersecurity: A Meta-Cognitive Architecture for Governable Autonomy

Contemporary AI-driven cybersecurity systems are predominantly architected as model-centric detection and automation pipelines optimized for task-level performance metrics such as accuracy and respons...

4.0 viability
Research Paper · Feb 16, 2026

What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation

The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistica...

3.0 viability
Research Paper · Mar 1, 2026

Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders

Safety alignment in large language models (LLMs), particularly for cybersecurity tasks, primarily focuses on preventing misuse. While this approach reduces direct harm, it obscures a complementary fai...

3.0 viability
Research Paper · Mar 4, 2026

From Threat Intelligence to Firewall Rules: Semantic Relations in Hybrid AI Agent and Expert System Architectures

Web security demands rapid response capabilities to evolving cyber threats. Agentic Artificial Intelligence (AI) promises automation, but the need for trustworthy security responses is of the utmost i...

3.0 viability