UNBOX: Unveiling Black-box Visual Models with Natural Language



Founder's Pitch

"UNBOX provides a framework for understanding black-box visual models using natural language, enabling auditing and bias detection without requiring internal access."
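The pitch claims auditing and bias detection using only the model's input/output behavior. As a toy illustration of that idea (not the paper's actual method, and with entirely hypothetical names and data), here is a sketch of black-box slice auditing: group predictions by a natural-language attribute tag and compare per-group accuracy, which surfaces spurious correlations without any access to model internals.

```python
def audit_black_box(predict, samples):
    """Measure per-slice accuracy of a black-box classifier.

    Uses only the model's predict() behavior (no internal access).
    samples: iterable of (input, true_label, attribute_tag) triples,
    where attribute_tag is a natural-language slice description.
    Returns {attribute_tag: accuracy}.
    """
    groups = {}
    for x, label, tag in samples:
        correct = predict(x) == label
        hits, total = groups.get(tag, (0, 0))
        groups[tag] = (hits + int(correct), total + 1)
    return {tag: hits / total for tag, (hits, total) in groups.items()}


# Hypothetical black-box model: it latches onto a spurious background
# feature x[1] ("sky-ness") instead of the true object cue x[0].
def biased_model(x):
    return "bird" if x[1] > 0.5 else "plane"


# Tiny hypothetical dataset, tagged with natural-language slices.
samples = [
    ((0.9, 0.9), "bird", "sky background"),     # bird in the sky: cue helps
    ((0.8, 0.9), "bird", "sky background"),
    ((0.9, 0.1), "bird", "ground background"),  # bird on the ground: cue hurts
    ((0.1, 0.1), "plane", "ground background"),
]

report = audit_black_box(biased_model, samples)
# The accuracy gap between slices flags the spurious correlation:
# "sky background" scores 1.0 while "ground background" scores 0.5.
```

The gap between slice accuracies is the audit signal: a model that truly used the object cue would score equally well on both slices.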

Explainable AI · Score: 7

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 5 (2/4 signals)
Series A Potential: 5 (2/4 signals)

Sources used for this analysis:

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 3/9/2026

