
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)

Lightweight coding agent in your terminal.

Claude Code (AI Agent)

Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)

AI agent mindset installer and workflow scaffolder.

Cursor (IDE)

AI-first code editor built on VS Code.

VS Code (IDE)

Free, open-source editor by Microsoft.

Estimated build cost: $10K-$14K over 6-10 weeks.



Founder's Pitch

"Develop a tool for tracing and mitigating emergent behaviors in LLMs by identifying responsible training data using activation-based data attribution."

LLM Safety · Score: 5
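
The pitch turns on one technique, activation-based data attribution: compare the model's internal activations before and after finetuning, then score each training example by how strongly it pushes activations along that difference. A minimal sketch of the idea follows; the model pairing, layer index, mean-pooling, and cosine-alignment scoring are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of activation-based data attribution, per the pitch:
# score training examples by how strongly their activations align with
# the activation shift a finetune introduced. Model names, the layer
# index, mean-pooling, and cosine scoring are illustrative assumptions,
# not the paper's exact recipe.
import torch
from transformers import AutoModel, AutoTokenizer


def mean_hidden_state(model, tok, text: str, layer: int) -> torch.Tensor:
    """Mean-pooled hidden state of `text` at a given layer."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)


def attribution_ranking(base_name: str, tuned_name: str,
                        probe_prompts: list[str], train_texts: list[str],
                        layer: int = 6) -> list[tuple[float, str]]:
    tok = AutoTokenizer.from_pretrained(base_name)
    base = AutoModel.from_pretrained(base_name).eval()
    tuned = AutoModel.from_pretrained(tuned_name).eval()

    # Direction in activation space introduced by the finetune,
    # averaged over a handful of probe prompts.
    diffs = [mean_hidden_state(tuned, tok, p, layer)
             - mean_hidden_state(base, tok, p, layer)
             for p in probe_prompts]
    direction = torch.stack(diffs).mean(dim=0)
    direction = direction / direction.norm()

    # Rank training examples by alignment with that direction; the
    # top of the list is the candidate "responsible" training data.
    scores = []
    for text in train_texts:
        h = mean_hidden_state(tuned, tok, text, layer)
        scores.append(torch.dot(h / h.norm(), direction).item())
    return sorted(zip(scores, train_texts), reverse=True)
```

Examples at the top of the ranking would be the ones to inspect, remove, or down-weight when mitigating an unwanted emergent behavior.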

Commercial Viability Breakdown (0-10 scale)

Dimension             Signals   Score
High Potential        1/4       2.5
Quick Build           4/4       10
Series A Potential    1/4       2.5
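
The displayed scores are consistent with a simple linear scaling of signal counts onto the 0-10 scale; a minimal sketch of that assumed mapping (inferred from the values shown, not documented on the page):

```python
def viability_score(signals_met: int, total_signals: int = 4) -> float:
    """Map signals met to a 0-10 score, assuming linear scaling.

    The rule is inferred from the displayed values (1/4 -> 2.5,
    4/4 -> 10); the site's actual scoring method is not documented.
    """
    return 10.0 * signals_met / total_signals


assert viability_score(1) == 2.5   # High Potential, Series A Potential
assert viability_score(4) == 10.0  # Quick Build
```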

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/11/2026
