Papers
Research Paper·Jan 21, 2026
V-CAGE: Context-Aware Generation and Verification for Scalable Long-Horizon Embodied Tasks
Learning long-horizon embodied behaviors from synthetic data remains challenging because generated scenes are often physically implausible, language-driven programs frequently "succeed" without satisf...
7.0 viability
Research Paper·Feb 9, 2026
Self-Supervised Bootstrapping of Action-Predictive Embodied Reasoning
Embodied Chain-of-Thought (CoT) reasoning has significantly enhanced Vision-Language-Action (VLA) models, yet current methods rely on rigid templates to specify reasoning primitives (e.g., objects in ...
6.0 viability
Research Paper·Feb 16, 2026
pFedNavi: Structure-Aware Personalized Federated Vision-Language Navigation for Embodied AI
Vision-Language Navigation (VLN) requires large-scale trajectory instruction data from private indoor environments, raising significant privacy concerns. Federated Learning (FL) mitigates this by keeping ...
6.0 viability