State of the Field
Recent advances in conversational AI focus on improving user interactions through a better understanding of human behavior and decision-making. Research is exploring how large language models (LLMs) can predict cognitive biases and adapt to user needs in real time, addressing issues such as error recovery and topic continuity. For instance, new frameworks are being developed to let LLMs recover from conversational errors without altering their core parameters, while models are being trained to maintain topic relevance over extended dialogues. Evaluation methods are also evolving to better assess user satisfaction and align AI outputs with human expectations. Together, these developments aim to create conversational agents that not only respond accurately but also engage users in a more intuitive, context-aware manner, with potential applications in customer service, education, and decision support systems. As the field matures, the emphasis is shifting toward AI that complements human cognitive processes rather than merely replicating them.
Papers
SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems
Current LLM-based conversational recommender systems (CRS) primarily optimize recommendation accuracy and user satisfaction. We identify an underexplored vulnerability in which recommendation outputs ...
ReIn: Conversational Error Recovery with Reasoning Inception
Conversational agents powered by large language models (LLMs) with tool integration achieve strong performance on fixed task-oriented dialogue datasets but remain vulnerable to unanticipated, user-ind...
BoRP: Bootstrapped Regression Probing for Scalable and Human-Aligned LLM Evaluation
Accurate evaluation of user satisfaction is critical for iterative development of conversational AI. However, for open-ended assistants, traditional A/B testing lacks reliable metrics: explicit feedba...
Predicting Biased Human Decision-Making with Large Language Models in Conversational Settings
We examine whether large language models (LLMs) can predict biased decision-making in conversational settings, and whether their predictions capture not only human cognitive biases but also how those ...
Conversational Behavior Modeling Foundation Model With Multi-Level Perception
Human conversation is organized by an implicit chain of thoughts that manifests as timed speech acts. Capturing this perceptual pathway is key to building natural full-duplex interactive systems. We i...
Emulating Aggregate Human Choice Behavior and Biases with GPT Conversational Agents
Cognitive biases often shape human decisions. While large language models (LLMs) have been shown to reproduce well-known biases, a more critical question is whether LLMs can predict biases at the indi...
Retrieval Challenges in Low-Resource Public Service Information: A Case Study on Food Pantry Access
Public service information systems are often fragmented, inconsistently formatted, and outdated. These characteristics create low-resource retrieval environments that hinder timely access to critical ...
Human or Machine? A Preliminary Turing Test for Speech-to-Speech Interaction
The pursuit of human-like conversational agents has long been guided by the Turing test. For modern speech-to-speech (S2S) systems, a critical yet unanswered question is whether they can converse like...
Bounded Minds, Generative Machines: Envisioning Conversational AI that Works with Human Heuristics and Reduces Bias Risk
Conversational AI is rapidly becoming a primary interface for information seeking and decision making, yet most systems still assume idealized users. In practice, human reasoning is bounded by limited...
GameTalk: Training LLMs for Strategic Conversation
Strategic decision-making in multi-agent settings is a key challenge for large language models (LLMs), particularly when coordination and negotiation must unfold over extended conversations. While rec...