Semi-Supervised Masked Autoencoders: Unlocking Vision Transformer Potential with Limited Data



Founder's Pitch

"A framework for Vision Transformers offering superior performance in limited-label scenarios by combining semi-supervised masked autoencoding with pseudo-labels."
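The pitch combines two ideas: MAE-style masked reconstruction applied to all images, and confidence-filtered pseudo-labels applied to the unlabeled ones. Below is a minimal NumPy sketch of those two loss terms; the 75% mask ratio, the 0.95 confidence threshold, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(num_patches, mask_ratio=0.75, rng=rng):
    """Boolean mask over patch tokens: True = masked (MAE uses ~75%)."""
    num_masked = int(num_patches * mask_ratio)
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.choice(num_patches, num_masked, replace=False)] = True
    return mask

def reconstruction_loss(pred, target, mask):
    """Mean squared error computed only on the masked patches, as in MAE."""
    return ((pred - target) ** 2)[mask].mean()

def pseudo_label_loss(probs, threshold=0.95):
    """Cross-entropy against argmax pseudo-labels, keeping only
    predictions whose max probability clears the confidence threshold."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    if not keep.any():
        return 0.0  # no unlabeled sample is confident enough this step
    labels = probs.argmax(axis=1)
    return -np.log(probs[keep, labels[keep]] + 1e-12).mean()
```

In training, the two terms would typically be summed with a weighting coefficient: reconstruction on every image, and the pseudo-label term only on unlabeled batches that pass the confidence filter.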


Commercial Viability Breakdown (0-10 scale)

High Potential: 5 (2/4 signals)
Quick Build: 5 (2/4 signals)
Series A Potential: 7.5 (3/4 signals)

Sources used for this analysis

arXiv Paper — Full-text PDF analysis of the research paper
GitHub Repository — Code availability, stars, and contributor activity
Citation Network — Semantic Scholar citations and co-citation patterns
Community Predictions — Crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 1/27/2026
