HiAP: A Multi-Granular Stochastic Auto-Pruning Framework for Vision Transformers

Founder's Pitch

"HiAP is an innovative framework that optimizes Vision Transformers for efficient deployment on edge devices through multi-granular stochastic pruning."

Vision Transformers · Score: 6
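
The excerpt gives only this one-line summary of the method. As a loose illustration of what stochastic structured pruning of a Vision Transformer can look like, below is a hypothetical PyTorch sketch of a sampled keep/drop gate at a single granularity (attention heads). The module, its names, and the straight-through estimator are assumptions made for illustration; they are not HiAP's actual algorithm, which this excerpt does not describe.

```python
# Hypothetical illustration only: stochastic structured pruning of attention
# heads, one of the granularities a multi-granular pruner might target.
# This is NOT the HiAP algorithm; the excerpt does not describe it.
import torch
import torch.nn as nn

class StochasticHeadGate(nn.Module):
    """Samples a binary keep/drop mask per attention head during training."""

    def __init__(self, num_heads: int):
        super().__init__()
        # Learnable logits of per-head keep probabilities (illustrative choice).
        self.logits = nn.Parameter(torch.zeros(num_heads))

    def forward(self, attn_out: torch.Tensor) -> torch.Tensor:
        # attn_out: (batch, num_heads, tokens, head_dim)
        probs = torch.sigmoid(self.logits)
        if self.training:
            # Straight-through Bernoulli: hard 0/1 mask in the forward pass,
            # gradient flows through the soft probabilities.
            hard = torch.bernoulli(probs)
            gate = hard + probs - probs.detach()
        else:
            # Deterministic pruning at inference: keep heads with prob >= 0.5.
            gate = (probs >= 0.5).float()
        return attn_out * gate.view(1, -1, 1, 1)

# Toy usage on a ViT-Base-shaped tensor (12 heads, 197 tokens, head_dim 64).
gate = StochasticHeadGate(num_heads=12)
x = torch.randn(2, 12, 197, 64)
y = gate(x)  # training mode by default, so heads are stochastically masked
```

In a genuinely multi-granular setup, analogous gates would presumably sit at other granularities as well (embedding channels, MLP neurons, whole blocks); confirming how HiAP combines them would require the full paper.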

Commercial Viability Breakdown (0-10 scale)

High Potential: 2.5 (1/4 signals)
Quick Build: 0 (0/4 signals)
Series A Potential: 0 (0/4 signals)

Sources used for this analysis

arXiv Paper: full-text PDF analysis of the research paper
GitHub Repository: code availability, stars, and contributor activity
Citation Network: Semantic Scholar citations and co-citation patterns
Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 3/12/2026
