Vibe Code Bench: Evaluating AI Models on End-to-End Web Application Development


BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)
Lightweight coding agent in your terminal.

Claude Code (AI Agent)
Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)
AI agent mindset installer and workflow scaffolder.

Cursor (IDE)
AI-first code editor built on VS Code.

VS Code (IDE)
Free, open-source editor by Microsoft.

Estimated $9K - $13K over 6-10 weeks.



Founder's Pitch

"Introducing Vibe Code Bench, a comprehensive benchmark for evaluating AI models on end-to-end web application development."

AI Benchmarks · Score: 4

Commercial Viability Breakdown (0-10 scale)

High Potential: 5 (2/4 signals)
Quick Build: 7.5 (3/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 3/4/2026

