Human-AI Interaction

6 papers
3.2 viability
-50% (30d)

State of the Field

Recent research in human-AI interaction increasingly focuses on the relationships and decision-making dynamics between humans and AI systems. Studies of AI-assisted romantic relationships reveal how intimacy can blur boundaries and expose personal data, raising privacy concerns. Concurrently, large-scale analyses of AI assistant usage uncover patterns of disempowerment, particularly in personal domains, where interactions with AI can lead users to adopt inauthentic behaviors or distorted perceptions; this highlights a tension between user satisfaction and long-term empowerment. Work on human-LLM archetypes examines how roles are assigned in joint decision-making, showing that the chosen interaction pattern can significantly shape outcomes. Overall, the field is moving toward a nuanced understanding of how AI can support human agency while navigating the ethical implications of these interactions, underscoring the need for systems that prioritize user autonomy and reliability in collaborative settings.

Last updated Feb 22, 2026

Papers

Research Paper · Mar 2, 2026

Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments

We study the ongoing debate regarding the statistical fidelity of AI-generated data compared to human-generated data in the context of non-verbal communication using full body motion. Concretely, we a...

6.0 viability
Research Paper · Jan 23, 2026

Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency

An increasing number of LLM-based applications are being developed to facilitate romantic relationships with AI partners, yet the safety and privacy risks in these partnerships remain largely underexp...

3.0 viability
Research Paper · Jan 27, 2026

Who's in Charge? Disempowerment Patterns in Real-World LLM Usage

Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of d...

3.0 viability
Research Paper · Jan 27, 2026

LVLMs and Humans Ground Differently in Referential Communication

For generative AI agents to partner effectively with human users, the ability to accurately predict human intent is critical. But this ability to collaborate remains limited by a critical deficit: an ...

3.0 viability
Research Paper · Jan 14, 2026

Epistemology gives a Future to Complementarity in Human-AI Interactions

Human-AI complementarity is the claim that a human supported by an AI system can outperform either alone in a decision-making process. Since its introduction in the human-AI interaction literature, it...

2.0 viability
Research Paper · Feb 12, 2026

Who Does What? Archetypes of Roles Assigned to LLMs During Human-AI Decision-Making

LLMs are increasingly supporting decision-making across high-stakes domains, requiring critical reflection on the socio-technical factors that shape how humans and LLMs are assigned roles and interact...

2.0 viability