
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

- OpenAI Codex (AI Agent): Lightweight coding agent in your terminal.
- Claude Code (AI Agent): Agentic coding tool for terminal workflows.
- AntiGravity IDE (Scaffolding): AI agent mindset installer and workflow scaffolder.
- Cursor (IDE): AI-first code editor built on VS Code.
- VS Code (IDE): Free, open-source editor by Microsoft.

Estimated $9K-$13K over 6-10 weeks.



Founder's Pitch

"Introducing HSAE, a scalable tool for building and analyzing hierarchical conceptual structures in LLMs using sparse autoencoders."
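The HSAE architecture itself is not detailed in this excerpt, but the pitch rests on the standard sparse-autoencoder idea: encode an LLM activation into a wide, sparse latent vector, then reconstruct the activation from it. A minimal top-k SAE forward pass might look like the sketch below (all names, shapes, and the random weights are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec, k=4):
    """Minimal top-k sparse autoencoder forward pass (illustrative sketch).

    x:     activation vector from an LLM layer, shape (d_model,)
    W_enc: encoder weights, shape (d_model, d_dict)
    W_dec: decoder weights, shape (d_dict, d_model)
    Keeps only the k largest latent activations to enforce sparsity.
    """
    pre = x @ W_enc + b_enc                  # encode into dictionary space
    acts = np.maximum(pre, 0.0)              # ReLU nonlinearity
    if k < acts.size:                        # zero out all but the top-k latents
        cutoff = np.partition(acts, -k)[-k]
        acts = np.where(acts >= cutoff, acts, 0.0)
    x_hat = acts @ W_dec + b_dec             # decode back to model space
    return acts, x_hat

# Tiny demo with random (untrained) weights.
rng = np.random.default_rng(0)
d_model, d_dict = 8, 32
x = rng.normal(size=d_model)
W_enc = rng.normal(size=(d_model, d_dict)) * 0.1
b_enc = np.zeros(d_dict)
W_dec = rng.normal(size=(d_dict, d_model)) * 0.1
b_dec = np.zeros(d_model)

acts, x_hat = sae_forward(x, W_enc, b_enc, W_dec, b_dec, k=4)
```

A hierarchical variant would additionally organize the `d_dict` latent features into parent/child groups; the sketch above shows only the flat building block.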

AI Model Analysis · Score: 4

Commercial Viability Breakdown (0-10 scale)

- High Potential: 2.5 (1/4 signals)
- Quick Build: 5 (2/4 signals)
- Series A Potential: 0 (0/4 signals)

Sources used for this analysis:

- arXiv Paper: Full-text PDF analysis of the research paper
- GitHub Repository: Code availability, stars, and contributor activity
- Citation Network: Semantic Scholar citations and co-citation patterns
- Community Predictions: Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/12/2026
