Papers
1–3 of 3Research Paper·Mar 11, 2026
AttriGuard: Defeating Indirect Prompt Injection in LLM Agents via Causal Attribution of Tool Invocations
LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack execution. Most existing defenses treat IPI as an ...
7.0 viability
Research Paper·Mar 2, 2026
VidDoS: Universal Denial-of-Service Attack on Video-based Large Language Models
Video-LLMs are increasingly deployed in safety-critical applications but are vulnerable to Energy-Latency Attacks (ELAs) that exhaust computational resources. Current image-centric methods fail becaus...
5.0 viability
Research Paper·Feb 18, 2026
Automating Agent Hijacking via Structural Template Injection
Agent hijacking, highlighted by OWASP as a critical threat to the Large Language Model (LLM) ecosystem, enables adversaries to manipulate execution by injecting malicious instructions into retrieved c...
5.0 viability