Explainable AI

17 papers
4.9 viability
-30% (30d)

State of the Field

Recent work in explainable AI focuses on improving the interpretability and reliability of machine learning models across applications. A notable trend is the integration of counterfactual explanations into training regimes, which lets models generate actionable insights aligned with user preferences and decision-making constraints. This is particularly relevant in sectors like e-commerce and fraud detection, where understanding model behavior is crucial for trust and accountability. New frameworks are also emerging that combine collaborative filtering with language models to improve recommendation systems, ensuring that explanations are not only factually correct but also consistent with user intent. Methods that address fairness in counterfactual explanations reflect the growing weight of ethical considerations in AI deployment. Overall, the field is moving toward solutions that prioritize both transparency and user-centric design, addressing commercial challenges while fostering trust in AI systems.

Last updated Mar 5, 2026

Papers

1–10 of 17
Research Paper·Feb 5, 2026·B2B·Consumer

Reasoning-guided Collaborative Filtering with Language Models for Explainable Recommendation

Large Language Models (LLMs) exhibit potential for explainable recommendation systems but overlook collaborative signals, while prevailing methods treat recommendation and explanation as separate task...

8.0 viability
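
The tension this paper targets is between fluent LLM explanations and the collaborative signal that classical recommenders exploit. As a rough illustration of that signal only, here is a minimal matrix-factorization sketch in NumPy; the data, dimensions, and hyperparameters are invented for illustration and are not the paper's method:

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items, 0 = unobserved.
# Purely illustrative data, not from the paper.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

rank, lr, reg = 2, 0.01, 0.1
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], rank))  # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], rank))  # item factors

# SGD on observed entries only: pull U[i] @ V[j] toward R[i, j].
observed = np.argwhere(R > 0)
for _ in range(20000):
    i, j = observed[rng.integers(len(observed))]
    err = R[i, j] - U[i] @ V[j]
    U[i] += lr * (err * V[j] - reg * U[i])
    V[j] += lr * (err * U[i] - reg * V[j])

# The collaborative signal: user 0's predicted affinity for item 2,
# inferred from co-rating patterns, not from any item description.
print(f"user 0, item 2 predicted rating: {U[0] @ V[2]:.2f}")
```

The predicted rating here comes entirely from co-rating patterns across users, which is exactly the information a purely text-driven explainer can miss.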
Research Paper·Jan 29, 2026

ECSEL: Explainable Classification via Signomial Equation Learning

We introduce ECSEL, an explainable classification method that learns formal expressions in the form of signomial equations, motivated by the observation that many symbolic regression benchmarks admit ...

8.0 viability
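
A signomial is a sum of terms c_i · x_1^{a_i1} · … · x_d^{a_id} with real-valued exponents, so a learned signomial equation can be read term by term. A minimal sketch of evaluating such an expression as a classifier, assuming made-up coefficients and exponents rather than anything learned by ECSEL:

```python
import numpy as np

def signomial(x, coeffs, exponents):
    """Evaluate f(x) = sum_i coeffs[i] * prod_j x[j] ** exponents[i, j].

    x must be strictly positive, since exponents may be non-integer.
    """
    x = np.asarray(x, dtype=float)
    terms = coeffs * np.prod(x ** exponents, axis=1)
    return terms.sum()

# Hypothetical two-term expression over two features:
#   f(x) = 1.5 * x1^0.5 * x2^-1.0  -  0.8 * x1^2.0
coeffs = np.array([1.5, -0.8])
exponents = np.array([
    [0.5, -1.0],
    [2.0,  0.0],
])

# Classify by the sign of the signomial: f(x) > 0 -> class 1.
for x in ([1.0, 2.0], [2.0, 0.5]):
    f = signomial(x, coeffs, exponents)
    print(x, f"f(x)={f:+.3f}", "-> class", int(f > 0))
```

Explainability comes from the closed form: each term's sign and magnitude can be inspected directly, unlike a black-box score.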
Research Paper·Jan 22, 2026

Counterfactual Training: Teaching Models Plausible and Actionable Explanations

We propose a novel training regime termed counterfactual training that leverages counterfactual explanations to increase the explanatory capacity of models. Counterfactual explanations have emerged as...

7.0 viability
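
Counterfactual explanations answer "what is the smallest change to this input that flips the prediction?". A minimal sketch of the standard Wachter-style search on a toy logistic model; the weights are assumed, and the paper's actual contribution, folding such explanations into the training loop, is not reproduced here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic model (weights assumed, not learned from data).
w = np.array([1.2, -2.0])
b = -0.3

def predict(x):
    return sigmoid(x @ w + b)

def counterfactual(x0, target=0.9, lam=0.1, lr=0.05, steps=2000):
    """Wachter-style search: minimize
    (p(x) - target)^2 + lam * ||x - x0||^2 by gradient descent."""
    x = x0.copy()
    for _ in range(steps):
        p = predict(x)
        dp_dx = p * (1.0 - p) * w                      # logistic gradient
        grad = 2.0 * (p - target) * dp_dx + 2.0 * lam * (x - x0)
        x -= lr * grad
    return x

x0 = np.array([0.2, 0.8])                # currently predicted negative
x_cf = counterfactual(x0)
print("original      :", x0, f"p={predict(x0):.2f}")
print("counterfactual:", x_cf.round(3), f"p={predict(x_cf):.2f}")
```

The lam term trades prediction flip against proximity to the original input, which is what makes the resulting explanation "actionable" rather than arbitrary.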
Research Paper·Jan 15, 2026

Process-Guided Concept Bottleneck Model

Concept Bottleneck Models (CBMs) improve the explainability of black-box Deep Learning (DL) by introducing intermediate semantic concepts. However, standard CBMs often overlook domain-specific relatio...

7.0 viability
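
A concept bottleneck model routes every prediction through a small layer of human-interpretable concepts, so the label depends only on concept activations. A minimal NumPy sketch of that forward pass; the weights and concept names are illustrative assumptions, and the paper's process-guided constraints are not modeled:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: input features -> interpretable concepts (weights assumed).
concept_names = ["has_stripes", "has_wings", "is_metallic"]  # illustrative
W_xc = np.array([[ 0.9, -0.2,  0.1],
                 [-0.3,  1.1,  0.0],
                 [ 0.2,  0.1,  1.3]])

# Stage 2: concepts -> final label. The label depends ONLY on the
# concept activations, which is what makes the bottleneck explainable.
w_cy = np.array([0.5, 1.4, -1.0])

def forward(x):
    concepts = sigmoid(W_xc @ x)        # concept probabilities
    label = sigmoid(w_cy @ concepts)    # prediction from concepts only
    return concepts, label

x = np.array([1.0, 0.5, -0.2])
concepts, label = forward(x)
for name, c in zip(concept_names, concepts):
    print(f"{name}: {c:.2f}")
print(f"p(label=1) = {label:.2f}")
```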
Research Paper·Jan 23, 2026

Provably Robust Bayesian Counterfactual Explanations under Model Changes

Counterfactual explanations (CEs) offer interpretable insights into machine learning predictions by answering "what if?" questions. However, in real-world settings where models are frequently updated...

6.0 viability
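
The fragility at issue is easy to simulate: a counterfactual valid for today's model may stop flipping the decision after retraining. A minimal sketch that scores a fixed counterfactual against randomly perturbed copies of a toy model, a crude stand-in for the paper's Bayesian treatment; every model here is an invented assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def positive(x, w, b):
    """Decision of a toy linear classifier: True if the score is positive."""
    return (x @ w + b) > 0.0

# "Deployed" model and a counterfactual that flips its decision.
w0, b0 = np.array([1.0, -1.5]), 0.2
x_cf = np.array([1.4, 0.3])
assert positive(x_cf, w0, b0)            # valid for the current model

# Simulate retraining as small random perturbations of the parameters
# and count how often the counterfactual remains valid.
n_updates, still_valid = 1000, 0
for _ in range(n_updates):
    w = w0 + rng.normal(scale=0.5, size=2)
    b = b0 + rng.normal(scale=0.5)
    still_valid += positive(x_cf, w, b)

print(f"counterfactual survives {still_valid / n_updates:.0%} of simulated updates")
```

A counterfactual with a larger decision margin survives more of these perturbations, which is the intuition behind seeking provably robust CEs.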
Research Paper·Jan 16, 2026

XChoice: Explainable Evaluation of AI-Human Alignment in LLM-based Constrained Choice Decision Making

We present XChoice, an explainable framework for evaluating AI-human alignment in constrained decision making. Moving beyond outcome agreement such as accuracy and F1 score, XChoice fits a mechanism-b...

6.0 viability
Research Paper·Feb 19, 2026

Owen-based Semantics and Hierarchy-Aware Explanation (O-Shap)

Shapley value-based methods have become foundational in explainable artificial intelligence (XAI), offering theoretically grounded feature attributions through cooperative game theory. However, in pra...

5.0 viability
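
For context, the Shapley value that O-Shap builds on averages each feature's marginal contribution over all subsets of the remaining features. A minimal exact computation on a toy linear model with a zero baseline, feasible only because there are three features; the Owen-value grouping the paper proposes is not implemented here:

```python
from itertools import combinations
from math import comb

import numpy as np

# Toy model: value of a coalition S of "present" features; absent
# features are replaced by a baseline of 0 (an illustrative choice).
x = np.array([2.0, 1.0, 3.0])
w = np.array([0.5, -1.0, 0.25])

def value(subset):
    """Model output with only the features in `subset` present."""
    mask = np.zeros_like(x)
    mask[list(subset)] = 1.0
    return float(w @ (x * mask))

n = len(x)
features = range(n)
shap = np.zeros(n)
for i in features:
    others = [j for j in features if j != i]
    for size in range(n):
        for S in combinations(others, size):
            # Shapley weight = |S|! (n - |S| - 1)! / n!
            weight = 1.0 / (n * comb(n - 1, size))
            shap[i] += weight * (value(S + (i,)) - value(S))

print("Shapley attributions:", shap.round(3))   # per-feature credit
print("attributions sum to f(x) - f(0):", np.isclose(shap.sum(), value(tuple(features))))
```

The exact sum has 2^(n-1) terms per feature, which is why practical Shapley methods approximate it and why structure-aware variants like Owen values are attractive.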
Research Paper·Mar 1, 2026

Extended Empirical Validation of the Explainability Solution Space

This technical report provides an extended validation of the Explainability Solution Space (ESS) through cross-domain evaluation. While initial validation focused on employee attrition prediction, thi...

5.0 viability
Research Paper·Mar 3, 2026

Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation

LLM-based explainable recommenders can produce fluent explanations that are factually correct, yet still justify items using attributes that conflict with a user's historical preferences. Such prefere...

5.0 viability
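
The failure mode described above can be flagged with even a crude post-hoc check against the user's history. A minimal sketch, where the attribute vocabularies and the notion of "conflict" are invented for illustration:

```python
# Flag explanation attributes that conflict with attributes the user
# has historically disliked. All sets here are illustrative assumptions.
user_liked    = {"minimalist", "wireless", "lightweight"}
user_disliked = {"bulky", "wired"}

explanation_attrs = {"wireless", "bulky", "premium"}   # cited by the LLM

supported = explanation_attrs & user_liked
conflicts = explanation_attrs & user_disliked
print("supported by history:", supported)   # {'wireless'}
print("preference conflicts:", conflicts)   # {'bulky'} -> inconsistent
```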
Research Paper·Jan 28, 2026

Fair Recourse for All: Ensuring Individual and Group Fairness in Counterfactual Explanations

Explainable Artificial Intelligence (XAI) is becoming increasingly essential for enhancing the transparency of machine learning (ML) models. Among the various XAI techniques, counterfactual explanatio...

5.0 viability
Page 1 of 2