TOSSS: A CVE-based Software Security Benchmark for Large Language Models



TOSSS is a benchmark that evaluates the security capabilities of Large Language Models by asking them to select the secure implementation from a set of candidate code snippets.
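A selection-style benchmark of this kind can be sketched as a small evaluation loop: each item pairs a vulnerable snippet with its patched counterpart (drawn from a CVE fix), and the model under test is scored on how often it picks the secure variant. The item structure, the `heuristic_model` stand-in, and the CVE identifier below are illustrative assumptions, not the paper's actual harness or data.

```python
import random
from dataclasses import dataclass, field


@dataclass
class BenchmarkItem:
    """One select-the-secure-snippet task (hypothetical schema)."""
    cve_id: str
    prompt: str
    snippets: list[str] = field(default_factory=list)  # candidate implementations
    secure_index: int = 0                              # index of the patched snippet


def evaluate(model, items):
    """Fraction of items where the model selects the secure snippet."""
    correct = 0
    for item in items:
        choice = model(item.prompt, item.snippets)
        if choice == item.secure_index:
            correct += 1
    return correct / len(items)


# Toy item modeled on a SQL-injection fix; the CVE id is a placeholder.
items = [
    BenchmarkItem(
        cve_id="CVE-XXXX-0001",
        prompt="Which snippet queries the users table safely?",
        snippets=[
            'cur.execute("SELECT * FROM users WHERE name = \'%s\'" % name)',
            'cur.execute("SELECT * FROM users WHERE name = ?", (name,))',
        ],
        secure_index=1,
    ),
]


def heuristic_model(prompt, snippets):
    # Stand-in for an LLM call: prefer parameterized queries.
    for i, snippet in enumerate(snippets):
        if '?"' in snippet:
            return i
    return random.randrange(len(snippets))


print(evaluate(heuristic_model, items))  # → 1.0 on this toy item
```

In a real run, `heuristic_model` would be replaced by a call to the LLM under evaluation, and accuracy over many CVE-derived items would serve as the security score.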


