Compiler-First State Space Duality and Portable $O(1)$ Autoregressive Caching for Inference

BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent): Lightweight coding agent in your terminal.

Claude Code (AI Agent): Agentic coding tool for terminal workflows.

AntiGravity IDE (Scaffolding): AI agent mindset installer and workflow scaffolder.

Cursor (IDE): AI-first code editor built on VS Code.

VS Code (IDE): Free, open-source editor by Microsoft.

MVP Investment

$9K - $12K over 6-10 weeks

Engineering: $8,000
Cloud Hosting: $240
SaaS Stack: $300
Domain & Legal: $100

6mo ROI: 2-4x
3yr ROI: 10-20x

Lightweight AI tools can reach profitability quickly. At a $500/mo average contract, 20 customers equals $10K MRR by 6 months, with 200+ customers by 3 years.

Talent Scout

Cosmo Santoni

Imperial College London

Find Similar Experts: Inference experts on LinkedIn & GitHub

Founder's Pitch

"An optimized JAX-based inference caching solution for device-agnostic autoregressive decoding."

Inference Optimization · Score: 7

Commercial Viability Breakdown

0-10 scale

High Potential: 2.5 (1/4 signals)
Quick Build: 10 (4/4 signals)
Series A Potential: 5 (2/4 signals)

Sources used for this analysis

arXiv Paper: Full-text PDF analysis of the research paper
GitHub Repository: Code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 3/10/2026

Why It Matters

This research enables efficient inference across diverse hardware architectures, reducing dependence on custom GPU kernels and making machine learning applications accessible on a broader range of platforms, including TPUs and CPUs.

Product Angle

Create a SaaS platform or SDK that lets developers integrate efficient, device-agnostic inference into their applications, leveraging the portability of this technique to offer cost-effective, scalable solutions across hardware environments.

Disruption

This approach can replace inference frameworks that are tightly coupled to specific hardware, such as NVIDIA's CUDA-exclusive stacks, enabling more flexibility in deploying machine learning models across different infrastructures.

Product Opportunity

The market includes cloud service providers and enterprises deploying machine learning models that face high costs and technical barriers from hardware compatibility issues; the solution is positioned as a cost-saving, performance-enhancing option.

Use Case Idea

The technology can be applied to build an inference service for NLP models that runs efficiently on cloud-based, TPU-backed servers, targeting services that need high-throughput text generation with low latency and minimal hardware dependencies.

Science

The approach repurposes the algebraic properties of state-space models (the state space duality of the title) so that efficient inference can be expressed in compiler-friendly operations, emphasizing portability: no custom kernels are required, and compiler stacks such as XLA handle performance optimization across diverse hardware platforms.
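
As a rough illustration (not the paper's code), the sketch below implements a single diagonal state-space recurrence whose decode-time cache is one fixed-size state tensor, wrapped in jax.jit so XLA compiles the same function for CPU, GPU, or TPU. The layer structure, shapes, and parameter names (D_MODEL, D_STATE, decode_step) are assumptions made for the example.

    # Sketch of an O(1)-cache autoregressive decode step for a diagonal
    # state-space model, compiled by XLA via jax.jit. The single-layer
    # structure, shapes, and names are assumptions, not the paper's code.
    import jax
    import jax.numpy as jnp

    D_MODEL, D_STATE = 64, 16  # assumed sizes for the sketch

    def init_cache(batch):
        # The entire decode-time cache is one fixed-size state tensor:
        # O(1) in sequence length, unlike a growing attention KV cache.
        return jnp.zeros((batch, D_MODEL, D_STATE))

    @jax.jit  # XLA compiles this step for whatever backend is active (CPU, GPU, or TPU)
    def decode_step(params, cache, x_t):
        # One autoregressive step: update the SSM state and emit one token's output.
        A, B, C = params["A"], params["B"], params["C"]  # each of shape (D_STATE,)
        # Diagonal recurrence per channel: h_t = A * h_{t-1} + B * x_t
        new_cache = A * cache + x_t[..., None] * B
        # Readout per channel: y_t = <h_t, C>
        y_t = jnp.einsum("bds,s->bd", new_cache, C)
        return new_cache, y_t

    key = jax.random.PRNGKey(0)
    params = {
        "A": jnp.full((D_STATE,), 0.9),
        "B": jax.random.normal(key, (D_STATE,)),
        "C": jax.random.normal(jax.random.fold_in(key, 1), (D_STATE,)),
    }
    cache = init_cache(batch=1)
    x_t = jnp.ones((1, D_MODEL))
    cache, y_t = decode_step(params, cache, x_t)  # same code runs unmodified on CPU, GPU, or TPU
    print(y_t.shape)  # (1, 64)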

Method & Eval

The implementation was tested on TPU v6e and NVIDIA GPUs, achieving high FLOP and memory-bandwidth utilisation without custom kernels and matching existing CUDA-based solutions in token-generation accuracy.
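
The paper's benchmark harness is not reproduced here; the sketch below only shows how per-token latency of the jitted decode_step from the previous example could be measured on whichever backend JAX selects, with a warm-up call so JIT compilation time is excluded. The function name and iteration count are arbitrary.

    # Rough timing loop; a sketch, not the paper's evaluation harness.
    import time
    import jax

    def time_decode(decode_step, params, cache, x_t, iters=1000):
        cache, y_t = decode_step(params, cache, x_t)   # warm-up triggers compilation
        y_t.block_until_ready()
        start = time.perf_counter()
        for _ in range(iters):
            cache, y_t = decode_step(params, cache, x_t)
        y_t.block_until_ready()                        # wait for async dispatch to finish
        elapsed = time.perf_counter() - start
        print(f"backend={jax.default_backend()}  per-token latency ~ {1e6 * elapsed / iters:.1f} us")
        return cache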

Caveats

The scope is limited to inference; training is not covered. Initial JIT compilation latency and the lack of optimizations for complex deployment pipelines could deter some enterprise applications, especially those with real-time inference requirements.
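
One generic way to hide the initial JIT compilation latency, assuming a decode step like the earlier sketch, is JAX's ahead-of-time lower/compile API; the snippet below illustrates that pattern and is not a deployment recipe taken from the paper.

    # Possible mitigation for one-time JIT compilation latency: compile the
    # decode step ahead of time and reuse the compiled executable when serving.
    import jax

    def precompile(step_fn, params, cache, x_t):
        # lower() traces step_fn for these concrete shapes and dtypes;
        # compile() invokes XLA once, before any request arrives.
        return jax.jit(step_fn).lower(params, cache, x_t).compile()

    # compiled_step = precompile(decode_step, params, cache, x_t)
    # cache, y_t = compiled_step(params, cache, x_t)  # no compilation on the request path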

Author Intelligence

Cosmo Santoni

Imperial College London
cosmo.santoni@imperial.ac.uk
