Papers
RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation
The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical-setup constraints, collecting large-scale real-world...
3PoinTr: 3D Point Tracks for Robot Manipulation Pretraining from Casual Videos
Data-efficient training of robust robot policies is key to unlocking automation across a wide array of novel tasks. Current systems require large volumes of demonstrations to achieve robustness, which...
CABTO: Context-Aware Behavior Tree Grounding for Robot Manipulation
Behavior Trees (BTs) offer a powerful paradigm for designing modular and reactive robot controllers. BT planning, an emerging field, provides theoretical guarantees for the automated generation of rel...
RoboPCA: Pose-centered Affordance Learning from Human Demonstrations for Robot Manipulation
Understanding spatial affordances -- comprising the contact regions of object interaction and the corresponding contact poses -- is essential for robots to effectively manipulate objects and accomplis...