
BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent)
Lightweight coding agent in your terminal.

Claude Code (AI Agent)
Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding)
AI agent mindset installer and workflow scaffolder.

Cursor (IDE)
AI-first code editor built on VS Code.

VS Code (IDE)
Free, open-source editor by Microsoft.

MVP Investment

$9K - $12K · 6-10 weeks

Engineering: $8,000
Cloud Hosting: $240
SaaS Stack: $300
Domain & Legal: $100

6mo ROI: 2-4x
3yr ROI: 10-20x

Lightweight AI tools can reach profitability quickly. At a $500/mo average contract, 20 customers yield $10K MRR by month 6; 200+ customers by year 3 yield $100K+ MRR.
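The revenue figures above reduce to simple arithmetic; a minimal sketch, with the contract size and customer counts taken from the estimate above:

```python
AVG_CONTRACT = 500  # $/month average contract, per the estimate above

def mrr(customers: int) -> int:
    """Monthly recurring revenue for a given customer count."""
    return customers * AVG_CONTRACT

print(mrr(20))   # 6-month target: 20 customers -> 10000
print(mrr(200))  # 3-year target: 200+ customers -> 100000
```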

Talent Scout

Adam Dejl
Imperial College London

Deniz Gorur
Imperial College London

Francesca Toni
Imperial College London



Founder's Pitch

"ArgLLM-App is an interactive web tool enabling explainable decision-making with argumentative reasoning over large language models."

Interactive Argumentative Systems · Score: 8

Commercial Viability Breakdown

0-10 scale

High Potential: 2.5 (1/4 signals)
Quick Build: 7.5 (3/4 signals)
Series A Potential: 10 (4/4 signals)

Sources used for this analysis

arXiv Paper

Full-text PDF analysis of the research paper

GitHub Repository

Code availability, stars, and contributor activity

Citation Network

Semantic Scholar citations and co-citation patterns

Community Predictions

Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/27/2026


Why It Matters

This research provides a platform to make AI decision-making transparent and contestable, catering to industries where decision explainability and user trust are paramount, such as legal tech, finance, or policy-making.

Product Angle

To productize ArgLLM-App, target decision-heavy sectors such as legal and insurance, offering a service that assesses and improves the transparency and validity of decisions. Develop APIs that integrate with existing workflow management tools.

Disruption

This solution could replace traditional decision-making support tools by providing a more interactive and transparent approach to understanding AI-driven decisions, which are often seen as black-box models today.

Product Opportunity

The market for AI explainability solutions is growing, as industries like law, finance, and compliance need transparent decision-making tools. Companies in these fields value verified AI outputs and will invest in systems that enhance this.

Use Case Idea

LegalTech companies could use ArgLLM-App to automate the evaluation of legal cases by generating defensible argument trees, enabling lawyers to explore case strengths and weaknesses interactively.

Science

The system uses Large Language Models (LLMs) to construct quantitative bipolar argumentation frameworks (QBAFs) that visually display interconnected arguments, providing explanations for AI decisions. Users can interact with this framework, adjust confidence scores, and add new arguments, enabling the system to refine its decisions based on human input.
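The confidence-propagation step can be made concrete with a small sketch. This assumes a tree-shaped QBAF evaluated under the DF-QuAD gradual semantics, one common choice in the gradual-argumentation literature (the system's exact semantics may differ), with base scores standing in for LLM-assigned confidences:

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    name: str
    base_score: float              # initial confidence in [0, 1], e.g. from an LLM
    attackers: list = field(default_factory=list)
    supporters: list = field(default_factory=list)

def aggregate(strengths):
    # Probabilistic sum: 1 - prod(1 - s_i); 0 if there are no children.
    acc = 0.0
    for s in strengths:
        acc = acc + s - acc * s
    return acc

def strength(arg):
    # Recursive DF-QuAD evaluation (assumes the QBAF is acyclic).
    va = aggregate(strength(a) for a in arg.attackers)   # combined attack
    vs = aggregate(strength(s) for s in arg.supporters)  # combined support
    v0 = arg.base_score
    if va >= vs:
        return v0 - v0 * (va - vs)          # attacks dominate: pull toward 0
    return v0 + (1.0 - v0) * (vs - va)      # supports dominate: pull toward 1

# Tiny example: a claim with one attacker and one supporter.
claim = Argument("claim", 0.6)
claim.attackers.append(Argument("counter", 0.4))
claim.supporters.append(Argument("evidence", 0.8))
print(round(strength(claim), 3))  # -> 0.76
```

Adjusting a base score (as a user would in the interface) and re-running `strength` shows how a single confidence change propagates to the final decision.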

Method & Eval

The system was tested as a web-based application demonstrating the creation and modification of argumentation frameworks. Users interact with it through a chat interface and by directly editing the visualised framework.

Caveats

The system's usability might still be limited by the complexity of QBAFs for average users. Also, reliance on LLMs from a single provider presents possible limitations in adaptability and transparency.

Author Intelligence

Adam Dejl

Imperial College London
adam.dejl18@imperial.ac.uk

Deniz Gorur

Imperial College London
d.gorur22@imperial.ac.uk

Francesca Toni

Imperial College London
ft@imperial.ac.uk