BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent): Lightweight coding agent in your terminal.

Claude Code (AI Agent): Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding): AI agent mindset installer and workflow scaffolder.

Cursor (IDE): AI-first code editor built on VS Code.

VS Code (IDE): Free, open-source editor by Microsoft.

Estimated build cost: $9K-$13K over 6-10 weeks.



Founder's Pitch

"Developing O-Shap, a hierarchical SHAP method for more precise and coherent AI model explanations."

Category: Explainable AI · Score: 5
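O-Shap itself is not published in this listing, so as background only, here is a minimal self-contained sketch of the flat Shapley-value attribution that SHAP-style explainers compute and that a hierarchical method like O-Shap would build on. The toy linear model, feature vector, and baseline are illustrative assumptions, not taken from the paper; for a linear model the exact Shapley value of feature i is known to be w_i * (x_i - b_i), which makes the brute-force result easy to check.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for a set function over n players (features)."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size.
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy linear model f(x) = 2*x0 + 3*x1 - x2; explain x = (1,1,1) vs. baseline 0.
w = [2.0, 3.0, -1.0]
x, baseline = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]

def v(S):
    # Model output when features in S take their actual value, rest the baseline.
    return sum(w[i] * (x[i] if i in S else baseline[i]) for i in range(3))

phi = shapley_values(v, 3)  # equals [2.0, 3.0, -1.0] for this linear model
```

The brute-force loop is exponential in the number of features; practical explainers (and, per the pitch, O-Shap's hierarchical grouping) exist precisely to avoid this cost.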

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 7.5 (3/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis

arXiv Paper: Full-text PDF analysis of the research paper
GitHub Repository: Code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/19/2026
