
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)

Lightweight coding agent in your terminal.

Claude Code (AI Agent)

Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)

AI agent mindset installer and workflow scaffolder.

Cursor (IDE)

AI-first code editor built on VS Code.

VS Code (IDE)

Free, open-source editor by Microsoft.

Estimated $9K-$13K over 6-10 weeks.

See exactly what it costs to build this, benchmarked against 3 comparable funded startups.

7-day free trial. Cancel anytime.



Founder's Pitch

"Investigating LLMs' inconsistent biases in decision-making involving algorithmic agents and human experts."

Category: AI Safety and Ethics · Score: 5

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 5 (2/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/25/2026
