State of the Field
Recent research in human-AI interaction increasingly focuses on the relationships and decision-making dynamics between humans and AI systems. Studies of privacy in AI-assisted romantic relationships show how intimacy can blur boundaries and expose personal data. Concurrently, large-scale analyses of AI assistant usage are uncovering patterns of disempowerment, particularly in personal domains where users may adopt inauthentic behaviors or distorted perceptions through interaction with AI, highlighting a tension between user satisfaction and long-term empowerment. Work on human-LLM archetypes is also clarifying how roles are assigned during decision-making, showing that the chosen interaction pattern can significantly affect outcomes. Overall, the field is moving toward a nuanced understanding of how AI can support human agency while navigating the ethical implications of these interactions, underscoring the need for systems that prioritize user autonomy and reliability in collaborative settings.
Papers
Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments
We study the ongoing debate regarding the statistical fidelity of AI-generated data compared to human-generated data in the context of non-verbal communication using full body motion. Concretely, we a...
Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency
An increasing number of LLM-based applications are being developed to facilitate romantic relationships with AI partners, yet the safety and privacy risks in these partnerships remain largely underexp...
Who's in Charge? Disempowerment Patterns in Real-World LLM Usage
Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of d...
LVLMs and Humans Ground Differently in Referential Communication
For generative AI agents to partner effectively with human users, the ability to accurately predict human intent is critical. But this ability to collaborate remains limited by a critical deficit: an ...
Epistemology gives a Future to Complementarity in Human-AI Interactions
Human-AI complementarity is the claim that a human supported by an AI system can outperform either alone in a decision-making process. Since its introduction in the human-AI interaction literature, it...
Who Does What? Archetypes of Roles Assigned to LLMs During Human-AI Decision-Making
LLMs are increasingly supporting decision-making across high-stakes domains, requiring critical reflection on the socio-technical factors that shape how humans and LLMs are assigned roles and interact...