
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)

Lightweight coding agent in your terminal.

Claude Code (AI Agent)

Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)

AI agent mindset installer and workflow scaffolder.

Cursor (IDE)

AI-first code editor built on VS Code.

VS Code (IDE)

Free, open-source editor by Microsoft.

Estimated $10K-$14K over 6-10 weeks.

See exactly what it costs to build this, benchmarked against 3 comparable funded startups.

7-day free trial. Cancel anytime.

Discover the researchers behind this paper and find similar experts.


References (30)

[1] Ruirui Chen, Weifeng Jiang et al. (2025). Theory of Mind in Large Language Models: Assessment and Enhancement.
[2] DeepSeek-AI, Daya Guo et al. (2025). DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning.
[3] Kazutoshi Shinoda, Nobukatsu Hojo et al. (2025). ToMATO: Verbalizing the Mental States of Role-Playing LLMs for Benchmarking Theory of Mind.
[4] Matthew Riemer, Zahra Ashktorab et al. (2024). Position: Theory of Mind Benchmarks are Broken for Large Language Models.
[5] Zayne Sprague, Fangcong Yin et al. (2024). To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning.
[6] Haojun Shi, Suyu Ye et al. (2024). MuMA-ToM: Multi-modal Multi-Agent Theory of Mind.
[7] Qiaosi Wang, S. Walsh et al. (2024). Theory of Mind in Human-AI Interaction.
[8] 01.AI: Alex Young, Bei Chen et al. (2024). Yi: Open Foundation Models by 01.AI.
[9] Zhuang Chen, Jincenzi Wu et al. (2024). ToMBench: Benchmarking Theory of Mind in Large Language Models.
[10] Hainiu Xu, Runcong Zhao et al. (2024). OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models.
[11] Chuanyang Jin, Yutong Wu et al. (2024). MMToM-QA: Multimodal Theory of Mind Question Answering.
[12] Hyunwoo Kim, Melanie Sclar et al. (2023). FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions.
[13] Tamera Lanham, Anna Chen et al. (2023). Measuring Faithfulness in Chain-of-Thought Reasoning.
[14] Miles Turpin, Julian Michael et al. (2023). Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting.
[15] T. Ullman (2023). Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks.
[16] Takeshi Kojima, S. Gu et al. (2022). Large Language Models are Zero-Shot Reasoners.
[17] C. Langley, B. Cirstea et al. (2022). Theory of Mind and Preference Learning at the Interface of Cognitive Science, Neuroscience, and AI: A Review.
[18] Jason Wei, Xuezhi Wang et al. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.
[19] Tom B. Brown, Benjamin Mann et al. (2020). Language Models are Few-Shot Learners.
[20] F. Cuzzolin, A. Morelli et al. (2020). Knowing me, knowing you: theory of mind in AI.

Showing 20 of 30 references

Founder's Pitch

"Develop an AI tool to evaluate and enhance Theory of Mind capabilities in language models using Chain-of-Thought prompting."

Category: Cognitive Science and Language Models · Score: 5
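The pitch centers on using zero-shot Chain-of-Thought prompting (Kojima et al., 2022, ref. [16]) to probe Theory-of-Mind reasoning. A minimal sketch of that prompting pattern applied to a classic false-belief question; the `build_cot_prompt` helper and the Sally-Anne scenario text are illustrative, not taken from the paper:

```python
def build_cot_prompt(scenario: str, question: str) -> str:
    """Wrap a Theory-of-Mind question in a zero-shot Chain-of-Thought
    prompt. The trailing cue asks the model to reason step by step
    before answering (zero-shot CoT, Kojima et al., 2022)."""
    return (
        f"{scenario}\n"
        f"Question: {question}\n"
        "Let's think step by step."
    )

# Classic Sally-Anne false-belief task (illustrative example):
prompt = build_cot_prompt(
    "Sally puts her marble in the basket and leaves. "
    "While she is away, Anne moves the marble to the box.",
    "When Sally returns, where will she look for her marble?",
)
print(prompt)
```

The returned string would then be sent to whichever language model is being evaluated; the API call itself is omitted here since it depends on the provider.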

Commercial Viability Breakdown

Scores on a 0-10 scale:

High Potential: 2/4 signals, score 5
Quick Build: 4/4 signals, score 10
Series A Potential: 1/4 signals, score 2.5
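The three scores are consistent with a simple linear mapping from signal count onto the 0-10 scale (2/4 gives 5, 4/4 gives 10, 1/4 gives 2.5). A minimal sketch of that mapping; the linearity is inferred from the numbers shown, not documented by the page:

```python
def signal_score(hits: int, total: int = 4) -> float:
    """Map a 'hits out of total' signal count onto a 0-10 scale.

    Assumption: scores scale linearly with signal count, as the
    three displayed values suggest. This is inferred, not official.
    """
    if not 0 <= hits <= total:
        raise ValueError("hits must be between 0 and total")
    return 10 * hits / total

print(signal_score(2))  # High Potential      -> 5.0
print(signal_score(4))  # Quick Build         -> 10.0
print(signal_score(1))  # Series A Potential  -> 2.5
```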

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/25/2026

Explore the full citation network and related research.

Understand the commercial significance and market impact.

Get detailed profiles of the research team.