PDF Viewer

BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex
OpenAI CodexAI Agent

Lightweight coding agent in your terminal.

Claude Code
Claude CodeAI Agent

Agentic coding tool for terminal workflows.

AntiGravity IDE
AntiGravity IDEScaffolding

AI agent mindset installer and workflow scaffolder.

Cursor
CursorIDE

AI-first code editor built on VS Code.

VS Code
VS CodeIDE

Free, open-source editor by Microsoft.

Estimated $11K - $15K over 6-10 weeks.

See exactly what it costs to build this -- with 3 comparable funded startups.

7-day free trial. Cancel anytime.

Discover the researchers behind this paper and find similar experts.

7-day free trial. Cancel anytime.

References (24)

[1]
AgentBay: A Hybrid Interaction Sandbox for Seamless Human-AI Intervention in Agentic Systems
2025Yun Piao, Hongbo Min et al.
[2]
ChatInject: Abusing Chat Templates for Prompt Injection in LLM Agents
2025Hwan Chang, Yonghyun Jun et al.
[3]
DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks
2025Yupei Liu, Yuqi Jia et al.
[4]
Adaptive Attacks Break Defenses Against Indirect Prompt Injection Attacks on LLM Agents
2025Qiusi Zhan, Richard Fang et al.
[5]
SecAlign: Defending Against Prompt Injection with Preference Optimization
2024Sizhe Chen, Arman Zharmagambetov et al.
[6]
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
2024Hanrong Zhang, Jingyuan Huang et al.
[7]
ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates
2024Fengqing Jiang, Zhangchen Xu et al.
[8]
Defending Against Indirect Prompt Injection Attacks With Spotlighting
2024Keegan Hines, Gary Lopez et al.
[9]
StruQ: Defending Against Prompt Injection with Structured Queries
2024Sizhe Chen, Julien Piet et al.
[10]
Benchmarking and Defending against Indirect Prompt Injection Attacks on Large Language Models
2023Jingwei Yi, Yueqi Xie et al.
[11]
Assessing Prompt Injection Risks in 200+ Custom GPTs
2023Jiahao Yu, Yuhang Wu et al.
[12]
POSTER: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications
2023Fengqing Jiang, Zhangchen Xu et al.
[13]
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
2023S. Toyer, Olivia Watkins et al.
[14]
Qwen Technical Report
2023Jinze Bai, Shuai Bai et al.
[15]
The Rise and Potential of Large Language Model Based Agents: A Survey
2023Zhiheng Xi, Wenxiang Chen et al.
[16]
Jailbroken: How Does LLM Safety Training Fail?
2023Alexander Wei, Nika Haghtalab et al.
[17]
Visual Adversarial Examples Jailbreak Aligned Large Language Models
2023Xiangyu Qi, Kaixuan Huang et al.
[18]
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective
2023Jindong Wang, Xixu Hu et al.
[19]
Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
2023Daniel Kang, Xuechen Li et al.
[20]
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
2023Kaijie Zhu, Jindong Wang et al.

Showing 20 of 24 references

Founder's Pitch

"Automated framework for agent hijacking in LLMs, exploiting structured template injection to enhance attack success and transferability."

Security in LLMsScore: 5View PDF ↗

Commercial Viability Breakdown

0-10 scale

High Potential

1/4 signals

2.5

Quick Build

2/4 signals

5

Series A Potential

1/4 signals

2.5

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/18/2026

Explore the full citation network and related research.

7-day free trial. Cancel anytime.

Understand the commercial significance and market impact.

7-day free trial. Cancel anytime.

Get detailed profiles of the research team.

7-day free trial. Cancel anytime.