PDF Viewer

BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex
OpenAI CodexAI Agent

Lightweight coding agent in your terminal.

Claude Code
Claude CodeAI Agent

Agentic coding tool for terminal workflows.

AntiGravity IDE
AntiGravity IDEScaffolding

AI agent mindset installer and workflow scaffolder.

Cursor
CursorIDE

AI-first code editor built on VS Code.

VS Code
VS CodeIDE

Free, open-source editor by Microsoft.

Estimated $10K - $14K over 6-10 weeks.

See exactly what it costs to build this -- with 3 comparable funded startups.

7-day free trial. Cancel anytime.

Discover the researchers behind this paper and find similar experts.

7-day free trial. Cancel anytime.

References (46)

[1]
DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models
2025DeepSeek-AI, A. Liu et al.
[2]
ICAD-LLM: One-for-All Anomaly Detection via In-Context Learning with Large Language Models
2025Zhongyuan Wu, Jingyuan Wang et al.
[3]
EvilGenie: A Reward Hacking Benchmark
2025Jonathan Gabor, Jayson Lynch et al.
[4]
SetAD: Semi-Supervised Anomaly Learning in Contextual Sets
2025Jianling Gao, Chongyang Tao et al.
[5]
Natural Emergent Misalignment from Reward Hacking in Production RL
2025M. MacDiarmid, Benjamin Wright et al.
[6]
ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases
2025Ziqian Zhong, Aditi Raghunathan et al.
[7]
School of Reward Hacks: Hacking harmless tasks generalizes to misaligned behavior in LLMs
2025Mia Taylor, James Chua et al.
[8]
GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
2025GLM-4.5 Team Aohan Zeng, Xin Lv et al.
[9]
Kimi K2: Open Agentic Intelligence
2025Kimi Team Yifan Bai, Yiping Bao et al.
[10]
Specification Self-Correction: Mitigating In-Context Reward Hacking Through Test-Time Refinement
2025Víctor Gallego
[11]
Detecting Proxy Gaming in RL and LLM Alignment via Evaluator Stress Tests
2025Ibne Farabi Shihab, Sanjeda Akter et al.
[12]
Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning
2025Yinjie Wang, Ling Yang et al.
[13]
Writing-Zero: Bridge the Gap Between Non-verifiable Tasks and Verifiable Rewards
2025Ruipeng Jia, Yunyi Yang et al.
[14]
A Comprehensive Survey of Reward Models: Taxonomy, Applications, Challenges, and Future
2025Jialun Zhong, Wei Shen et al.
[15]
Crossing the Reward Bridge: Expanding RL with Verifiable Rewards Across Diverse Domains
2025Yi Su, Dian Yu et al.
[16]
Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach
2025Xuying Li, Zhuo Li et al.
[17]
AA-CLIP: Enhancing Zero-Shot Anomaly Detection via Anomaly-Aware CLIP
2025Wenxin Ma, Xu Zhang et al.
[18]
Towards Zero-Shot Anomaly Detection and Reasoning with Multimodal Large Language Models
2025Jiacong Xu, Shao-Yuan Lo et al.
[19]
Enhancing Code LLMs with Reinforcement Learning in Code Generation: A Survey
2024Junqiao Wang, Zeng Zhang et al.
[20]
A Survey On Enhancing Reinforcement Learning in Complex Environments: Insights from Human and LLM Feedback
2024Alireza Rashidi Laleh, M. N. Ahmadabadi

Showing 20 of 46 references

Founder's Pitch

"TRACE is a novel benchmark and evaluation tool for detecting reward hacking in code-based reinforcement learning environments."

RL Environment SecurityScore: 6View PDF ↗

Commercial Viability Breakdown

0-10 scale

High Potential

2/4 signals

5

Quick Build

0/4 signals

0

Series A Potential

4/4 signals

10

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 1/27/2026

Explore the full citation network and related research.

7-day free trial. Cancel anytime.

Understand the commercial significance and market impact.

7-day free trial. Cancel anytime.

Get detailed profiles of the research team.

7-day free trial. Cancel anytime.