GazeMoE: Perception of Gaze Target with Mixture-of-Experts

BUILDER'S SANDBOX

Build This Paper

Use an AI coding agent to implement this research.

OpenAI Codex (AI Agent): lightweight coding agent in your terminal.
Claude Code (AI Agent): agentic coding tool for terminal workflows.
AntiGravity IDE (Scaffolding): AI agent mindset installer and workflow scaffolder.
Cursor (IDE): AI-first code editor built on VS Code.
VS Code (IDE): free, open-source editor by Microsoft.

MVP Investment

$9K-$12K estimated build cost over 6-10 weeks:

Engineering: $8,000
Cloud Hosting: $240
SaaS Stack: $300
Domain & Legal: $100

6mo ROI: 2-4x
3yr ROI: 10-20x

Lightweight AI tools can reach profitability quickly: at a $500/mo average contract, 20 customers mean $10K MRR by month 6, and 200+ customers by year 3 (see the worked check below).
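A quick worked check of the revenue math above, using only the illustrative figures from this breakdown (not a forecast):

```python
# Worked check of the MVP economics quoted above (illustrative only).
avg_contract = 500                     # $/month per customer
mrr_6mo = avg_contract * 20            # 20 customers by month 6
mrr_3yr = avg_contract * 200           # 200+ customers by year 3
build_cost = 8_000 + 240 + 300 + 100   # itemized MVP budget above

print(mrr_6mo)     # 10000 -> matches the $10K MRR claim
print(mrr_3yr)     # 100000
print(build_cost)  # 8640 itemized; the $9K-$12K range presumably adds contingency
```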

Founder's Pitch

"GazeMoE leverages a Mixture-of-Experts model to provide state-of-the-art gaze target estimation for robotics and HCI applications."

Gaze Estimation · Score: 8

Commercial Viability Breakdown (0-10 scale)

High Potential: 5 (2/4 signals)
Quick Build: 10 (4/4 signals)
Series A Potential: 10 (4/4 signals)

Sources used for this analysis

arXiv Paper: full-text PDF analysis of the research paper.
GitHub Repository: code availability, stars, and contributor activity.
Citation Network: Semantic Scholar citations and co-citation patterns.
Community Predictions: crowd-sourced unicorn probability assessments.

Analysis model: GPT-4o · Last scored: 3/6/2026

Why It Matters

This research addresses the need for accurate gaze target estimation in real-world scenarios, enabling improved human-computer interaction and understanding of human cognition through non-invasive means.

Product Angle

The solution can be packaged as a SaaS tool for companies needing to integrate gaze tracking in their robotics, augmented reality, or customer analytics platforms.
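To make the SaaS angle concrete, here is a sketch of what a client integration could look like. The endpoint, field names, and response shape are hypothetical, invented purely for illustration; the paper describes no such service.

```python
# Hypothetical client call to an imagined GazeMoE-as-a-service endpoint.
# Everything here (URL, auth scheme, payload, response) is an assumption.
import requests

resp = requests.post(
    "https://api.example.com/v1/gaze-target",       # hypothetical endpoint
    headers={"Authorization": "Bearer <API_KEY>"},  # placeholder credential
    files={"image": open("frame.jpg", "rb")},       # one RGB frame
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"target": {"x": 0.42, "y": 0.61}, "in_frame": true}
```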

Disruption

It could replace less accurate gaze tracking solutions that do not leverage multi-modal cues or advanced Mixture-of-Experts architectures, offering higher performance and versatility across various deployment scenarios.

Product Opportunity

The market is substantial, involving sectors like robotics, automotive (for driver monitoring), retail (consumer analytics), and healthcare (autism research), where accurate gaze tracking is crucial. Companies in these sectors would likely pay for such a technology to enhance their products and services.

Use Case Idea

Deploy GazeMoE in retail environments to analyze customer attention to shelves and products in real time, supporting consumer behavior analytics and shelf management.

Science

The paper proposes GazeMoE, a model that uses Mixture-of-Experts layers to dynamically route and analyze visual cues, such as eye landmarks, head pose, gestures, and scene context, and thereby estimate the gaze target from images. A frozen DINOv2 foundation model provides the underlying feature extraction.
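To make the routing idea concrete, here is a minimal PyTorch sketch under stated assumptions: a frozen DINOv2 backbone (the public facebookresearch/dinov2 torch.hub release) supplies patch tokens, a gating network softly weights cue-specific expert heads per token, and a small head regresses a normalized gaze point. The expert count, layer sizes, pooling, and head are illustrative guesses, not the authors' implementation.

```python
# Minimal MoE-over-frozen-DINOv2 sketch; sizes and expert count are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CueExpert(nn.Module):
    """One expert head, nominally specializing in a cue (eyes, head pose, ...)."""
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class GazeMoELayer(nn.Module):
    """Soft Mixture-of-Experts: a gate routes each patch token to experts."""
    def __init__(self, dim: int = 768, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([CueExpert(dim) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B, N, dim)
        weights = F.softmax(self.gate(tokens), dim=-1)         # (B, N, E)
        outputs = torch.stack([e(tokens) for e in self.experts], dim=-1)
        return torch.einsum("bnde,bne->bnd", outputs, weights)

# Frozen DINOv2 backbone for features, as the paper describes
# (downloads weights on first use).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").eval()
for p in backbone.parameters():
    p.requires_grad = False

moe = GazeMoELayer(dim=768, num_experts=4)
head = nn.Linear(768, 2)  # regress a normalized (x, y) gaze target

img = torch.randn(1, 3, 224, 224)  # dummy image; dims must divide by 14
with torch.no_grad():
    tokens = backbone.get_intermediate_layers(img, n=1)[0]  # (1, 256, 768)
gaze_xy = head(moe(tokens).mean(dim=1))  # mean-pool tokens -> (1, 2)
```

This sketch uses soft (dense) routing for simplicity; production MoE layers often use sparse top-k routing to reduce compute.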

Method & Eval

The model was evaluated on several benchmark datasets and outperformed existing methods in prediction accuracy and in robustness to diverse, out-of-distribution visual environments.
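For reference, gaze target benchmarks in this literature are typically scored with AUC over a thresholded heatmap and with the L2 distance between predicted and annotated gaze points in normalized image coordinates. The snippet below computes the average L2 distance; the paper's exact protocol may differ.

```python
# Average L2 distance between predicted and ground-truth gaze targets,
# in normalized [0, 1] image coordinates (a standard benchmark metric).
import numpy as np

def avg_l2_distance(pred_xy: np.ndarray, gt_xy: np.ndarray) -> float:
    """pred_xy, gt_xy: (N, 2) arrays of normalized (x, y) gaze points."""
    return float(np.linalg.norm(pred_xy - gt_xy, axis=1).mean())

# Toy example: two frames, each prediction off by 0.1 along x.
pred = np.array([[0.50, 0.50], [0.30, 0.70]])
gt   = np.array([[0.40, 0.50], [0.20, 0.70]])
print(avg_l2_distance(pred, gt))  # 0.1
```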

Caveats

The model requires fine-tuning and may be less effective on low-quality input data. In addition, reliance on large pre-trained backbones such as DINOv2 ties the system to their availability and update cycles.

Author Intelligence

Zhuangzhuang Dai (Lead), Aston University, z.dai1@aston.ac.uk
Zhongxi Lu, University of Leicester
Vincent G. Zakka, Aston University
Luis J. Manso, Aston University
Jose M Alcaraz Calero, Aston University
Chen Li, Aalborg University
