Security in LLMs Comparison Hub
3 papers - avg viability 5.7
Top Papers
- AttriGuard: Defeating Indirect Prompt Injection in LLM Agents via Causal Attribution of Tool Invocations (7.0)
AttriGuard defends LLM agents against indirect prompt injection by causally attributing each tool invocation to its originating input.
- VidDoS: Universal Denial-of-Service Attack on Video-based Large Language Models (5.0)
VidDoS exposes energy-latency (denial-of-service) attacks on video-based large language models, highlighting significant vulnerabilities in safety-critical applications.
- Automating Agent Hijacking via Structural Template Injection (5.0)
An automated framework for hijacking LLM agents that exploits structural template injection to improve attack success rates and transferability.