Human-Robot Interaction Comparison Hub
9 papers - avg viability 6.7
Recent work in human-robot interaction focuses on personalization and adaptability, particularly in multi-user environments. New frameworks leverage large language models to create more engaging, contextually aware interactions, as in systems that maintain long-term user profiles and dynamically adjust responses. Innovative approaches are also emerging for intuitive communication, such as gloss-free sign language frameworks that map gestures directly to commands, removing the need for complex gloss annotations. Researchers are likewise exploring how robot actions shape human behavior, using statistical methods to identify influential behaviors that can improve collaborative systems. Gaze-based intent recognition is being refined to support users with limited motor capabilities, increasing the efficiency of human-robot collaboration. Together, these developments reflect a concerted effort to make robots more responsive and reliable, addressing commercial needs in sectors such as healthcare and assistive technology, where effective communication and trust are paramount.
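The mention of statistical methods for identifying influential robot actions can be made concrete with a permutation test on a behavioral outcome. The sketch below is a generic illustration only, not the method of any paper listed here; the `permutation_test` helper and the response-time numbers are invented for the example.

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Shuffles the pooled observations and counts how often a random
    split produces a mean difference at least as extreme as the one
    observed. Returns an approximate p-value."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    # +1 correction keeps the estimate valid for finite n_perm
    return (extreme + 1) / (n_perm + 1)

# Hypothetical data: human response times (s) after the robot gestures
# toward the target vs. when it stays still. A small p-value flags the
# gesture as an influential action.
gesture = [1.1, 0.9, 1.0, 1.2, 0.8]
still = [1.6, 1.5, 1.7, 1.4, 1.8]
p = permutation_test(gesture, still)
```

A permutation test makes no distributional assumptions, which suits the small, noisy samples typical of HRI user studies.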
Top Papers
- HiSync: Spatio-Temporally Aligning Hand Motion from Wearable IMU and On-Robot Camera for Command Source Identification in Long-Range HRI (8.0)
HiSync enhances command source identification in long-range human-robot interactions using a novel optical-inertial fusion framework.
- HARMONI: Multimodal Personalization of Multi-User Human-Robot Interactions with LLMs (8.0)
HARMONI enhances human-robot interactions with personalized, multimodal capabilities for multi-user environments.
- Identifying Influential Actions in Human-Robot Interactions (7.0)
Identifies the robot actions that most influence human behavior, informing robot design and adaptability and potentially leading to more natural and effective human-robot interactions.
- SignVLA: A Gloss-Free Vision-Language-Action Framework for Real-Time Sign Language-Guided Robotic Manipulation (7.0)
A gloss-free vision-language-action system enabling real-time sign language-guided robotic manipulation.
- Dance2Hesitate: A Multi-Modal Dataset of Dancer-Taught Hesitancy for Understandable Robot Motion (7.0)
Dance2Hesitate offers a unique dataset for developing robots that can express hesitancy in human-robot interactions.
- Sticky-Glance: Robust Intent Recognition for Human Robot Collaboration via Single-Glance (7.0)
Sticky-Glance enables robust gaze-based intent recognition for human-robot collaboration, improving efficiency and reducing workload.
- Safe Probabilistic Planning for Human-Robot Interaction using Conformal Risk Control (7.0)
A probabilistic safe control framework for enhancing human-robot interaction through formal safety guarantees.
- Evaluating Zero-Shot and One-Shot Adaptation of Small Language Models in Leader-Follower Interaction (5.0)
A small language model solution for real-time role assignment in leader-follower interactions on mobile robots.
- From Pets to Robots: MojiKit as a Data-Informed Toolkit for Affective HRI Design (4.0)
MojiKit is a toolkit that empowers users to design affective behaviors for social robots using structured resources and a code-free studio.
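The formal safety guarantees behind conformal risk control typically rest on split-conformal calibration, which is simple to sketch. The following is a minimal, generic illustration, not the implementation of the paper listed above; the `conformal_threshold` helper and the Gaussian calibration errors are invented for the example.

```python
import math
import random

def conformal_threshold(scores, alpha=0.1):
    """Split-conformal quantile of held-out nonconformity scores.

    With probability at least 1 - alpha, a fresh score drawn from the
    same distribution falls at or below the returned threshold; this
    holds without any distributional assumptions."""
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the conformal quantile
    if k > n:
        return float("inf")  # too few calibration points for this alpha
    return sorted(scores)[k - 1]

# Hypothetical calibration data: prediction errors (m) of a human-motion
# model on held-out trajectories.
random.seed(0)
cal_errors = [abs(random.gauss(0, 0.05)) for _ in range(200)]
r = conformal_threshold(cal_errors, alpha=0.1)
# A planner could then require a clearance of at least r around the
# predicted human position, giving a distribution-free safety margin.
```

The appeal of this calibration step is that it wraps any motion predictor, however inaccurate, in a quantified guarantee on its error.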