
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)
Lightweight coding agent in your terminal.

Claude Code (AI Agent)
Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)
AI agent mindset installer and workflow scaffolder.

Cursor (IDE)
AI-first code editor built on VS Code.

VS Code (IDE)
Free, open-source editor by Microsoft.

Estimated $10K to $14K over 6 to 10 weeks.

See exactly what it costs to build this, with 3 comparable funded startups.



Founder's Pitch

"Develop a tool that enhances reasoning in LLMs through composable depth growth and looping techniques."

LLM Training · Score: 3
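The pitch's two levers, depth growth and looping, both come down to reapplying transformer layers to deepen computation. As a rough illustration of the looping half, here is a minimal PyTorch sketch: one weight-shared block applied K times, so effective depth grows with no new parameters. The module name, dimensions, and loop count are illustrative assumptions, not the architecture from any of the cited papers.

# Minimal sketch of the "looping" technique named in the pitch: a single
# weight-tied transformer layer reused K times. All sizes are assumptions.
import torch
import torch.nn as nn

class LoopedBlock(nn.Module):
    """One transformer layer applied n_loops times (weight-tied recursive depth)."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, n_loops: int = 4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reapplying the same layer deepens computation at a constant
        # parameter count, which is the core appeal of looped models.
        for _ in range(self.n_loops):
            x = self.layer(x)
        return x

# Usage: (batch, sequence, d_model) in, same shape out.
tokens = torch.randn(2, 16, 256)
print(LoopedBlock()(tokens).shape)  # torch.Size([2, 16, 256])

Raising the loop count at inference is the usual way such models trade extra test-time compute for deeper reasoning, which is what makes the depth "composable".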

Commercial Viability Breakdown (0-10 scale)

High Potential: 0 (0/4 signals)
Quick Build: 0 (0/4 signals)
Series A Potential: 2.5 (1/4 signals)

Sources used for this analysis

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/18/2026
