Mental Health AI

5 papers
4.2 viability
-75% (30d)

State of the Field

Recent research in mental health AI is increasingly focused on enhancing the effectiveness and safety of text-based therapeutic interactions. New frameworks like PsyFIRE and RECAP are addressing the challenge of detecting client resistance, which is crucial for effective counseling, by providing detailed insights into resistance behaviors and improving counselors' intervention strategies. Concurrently, evaluations of large language models (LLMs) reveal a cognitive-affective gap, highlighting the need for better alignment between AI-generated responses and human emotional needs. This gap underscores the importance of developing robust evaluation methodologies that prioritize therapeutic sensitivity. Additionally, methodologies such as TherapyProbe are exploring the dynamics of chatbot interactions over time, identifying patterns that can lead to relational safety failures. The field is moving toward creating comprehensive guidelines and checklists for the responsible design of mental health chatbots, aiming to bridge the gap between technological capabilities and the nuanced needs of users in mental health contexts.

Last updated Mar 1, 2026

Papers

1–5 of 5
Research Paper · Jan 21, 2026

RECAP: Resistance Capture in Text-based Mental Health Counseling with Large Language Models

Recognizing and navigating client resistance is critical for effective mental health counseling, yet detecting such behaviors is particularly challenging in text-based interactions. Existing NLP appro...

8.0 viability
Research Paper · Jan 26, 2026

Assessing the Quality of Mental Health Support in LLM Responses through Multi-Attribute Human Evaluation

The escalating global mental health crisis, marked by persistent treatment gaps, limited availability of care, and a shortage of qualified therapists, positions Large Language Models (LLMs) as a promising avenue for ...

5.0 viability
Research Paper · Jan 26, 2026

Expert Evaluation and the Limits of Human Feedback in Mental Health AI Safety Testing

Learning from human feedback (LHF) assumes that expert judgments, appropriately aggregated, yield valid ground truth for training and evaluating AI systems. We tested this assumption in mental health,...

3.0 viability
Research Paper · Feb 26, 2026

TherapyProbe: Generating Design Knowledge for Relational Safety in Mental Health Chatbots Through Adversarial Simulation

As mental health chatbots proliferate to address the global treatment gap, a critical question emerges: How do we design for relational safety, the quality of interaction patterns that unfold across co...

3.0 viability
Research Paper · Jan 21, 2026

A Checklist for Trustworthy, Safe, and User-Friendly Mental Health Chatbots

Mental health concerns are rising globally, prompting increased reliance on technology to address the demand-supply gap in mental health services. In particular, mental health chatbots are emerging as...

2.0 viability