Differentially Private and Communication Efficient Large Language Model Split Inference via Stochastic Quantization and Soft Prompt
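
The title names stochastic quantization as the mechanism behind both the privacy and the communication savings. As background, here is a minimal sketch of unbiased stochastic quantization applied to a split-inference activation before upload; the function names, the 4-bit width, and the min-max scaling are illustrative assumptions, not the paper's actual scheme (which would also calibrate differential-privacy noise).

```python
import numpy as np

def stochastic_quantize(x, num_bits=4):
    """Unbiased stochastic quantization of a float array.

    Hypothetical sketch: the paper's actual quantizer, clipping bounds,
    and DP noise calibration are not shown on this page.
    """
    levels = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (x - lo) / scale                  # values in [0, levels]
    floor = np.floor(normalized)
    frac = normalized - floor
    # Round up with probability equal to the fractional part, so that
    # E[q] == normalized and dequantization is unbiased in expectation.
    q = floor + (np.random.rand(*x.shape) < frac)
    return q.astype(np.uint8), lo, scale

def dequantize(q, lo, scale):
    return lo + q.astype(np.float32) * scale

# Example: a client quantizes an intermediate activation before sending
# it to the server half of the split model.
activation = np.random.randn(768).astype(np.float32)
q, lo, scale = stochastic_quantize(activation, num_bits=4)
recovered = dequantize(q, lo, scale)
print(np.abs(recovered - activation).mean())  # small, zero-mean error
```

Because rounding up happens with probability equal to the fractional remainder, the quantized value is correct in expectation, which is what lets a split-inference client cut upload bandwidth to 4 bits of information per value (stored in a uint8 here for simplicity) without systematically biasing the server-side computation.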

Founder's Pitch

"Develop a differentially private and communication efficient LLM inference framework for resource-constrained devices."

LLM Inference · Score: 5

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 0 (0/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis:

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 2/12/2026
