Recent work in explainable AI focuses on making machine learning models both more interpretable and more reliable across applications. One notable trend is integrating counterfactual explanations directly into training, so that models produce actionable suggestions consistent with user preferences and decision-making constraints. This matters most in domains such as e-commerce and fraud detection, where understanding model behavior is a precondition for trust and accountability. In parallel, new frameworks combine collaborative filtering with language models to improve recommendation systems, aiming for explanations that are not only factually correct but also consistent with user intent. Work on fairness in counterfactual explanations, at both the individual and group level, reflects growing attention to ethical considerations in deployment. Overall, the field is converging on methods that pair transparency with user-centric design, addressing commercial constraints while building trust in AI systems.
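To make the counterfactual-explanation thread concrete, here is a minimal sketch of classic gradient-based counterfactual search in the spirit of Wachter et al. (2017), not the method of any paper listed below; the linear "credit-scoring" model, its weights, and all hyperparameters are illustrative assumptions:

```python
# Minimal sketch of gradient-based counterfactual search (Wachter et al. style).
# Everything here (model, weights, data, hyperparameters) is an illustrative
# assumption, not taken from any of the papers in this digest.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear scoring model: p(approve | x) = sigmoid(w . x + b)
w = np.array([1.5, -2.0, 0.5])  # assumed feature weights
b = -0.25

def predict(x):
    return sigmoid(x @ w + b)

def counterfactual(x, target=0.75, lam=0.1, lr=0.05, steps=500):
    """Find x' near x with predict(x') close to `target` by gradient descent
    on the loss (predict(x') - target)^2 + lam * ||x' - x||^2."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict(x_cf)
        # d/dx_cf of (p - target)^2 is 2*(p - target) * p*(1 - p) * w;
        # the second term is the gradient of the proximity penalty.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([0.2, 0.9, 0.1])        # instance currently scored low
print("original score:", predict(x))
x_cf = counterfactual(x)
print("counterfactual:", x_cf, "score:", predict(x_cf))
print("suggested change:", x_cf - x)  # the 'actionable' part of the explanation
```

The proximity weight `lam` trades off how far the counterfactual may move from the original instance; the training-time approaches mentioned above fold objectives of this shape into model training rather than applying them post hoc.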
Top papers
- Reasoning-guided Collaborative Filtering with Language Models for Explainable Recommendation (8.0)
- ECSEL: Explainable Classification via Signomial Equation Learning (8.0)
- Counterfactual Training: Teaching Models Plausible and Actionable Explanations (7.0)
- Process-Guided Concept Bottleneck Model (7.0)
- Provably Robust Bayesian Counterfactual Explanations under Model Changes (6.0)
- XChoice: Explainable Evaluation of AI-Human Alignment in LLM-based Constrained Choice Decision Making (6.0)
- Axiomatic On-Manifold Shapley via Optimal Generative Flows (5.0)
- Extended Empirical Validation of the Explainability Solution Space (5.0)
- Owen-based Semantics and Hierarchy-Aware Explanation (O-Shap) (5.0)
- Fair Recourse for All: Ensuring Individual and Group Fairness in Counterfactual Explanations (5.0)
- Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation (5.0)
- PolySHAP: Extending KernelSHAP with Interaction-Informed Polynomial Regression (4.0)
- Causal Discovery for Explainable AI: A Dual-Encoding Approach (4.0)
- Rules or Weights? Comparing User Understanding of Explainable AI Techniques with the Cognitive XAI-Adaptive Model (4.0)
- Explainable AI: Context-Aware Layer-Wise Integrated Gradients for Explaining Transformer Models (3.0)
- Emergent, not Immanent: A Baradian Reading of Explainable AI (2.0)
- Circuit Tracing in Vision-Language Models: Understanding the Internal Mechanisms of Multimodal Thinking (2.0)
- Position: Explaining Behavioral Shifts in Large Language Models Requires a Comparative Approach (2.0)
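Several of the papers above (PolySHAP, O-Shap, and the on-manifold Shapley work) build on Shapley-value feature attribution. As background, here is a minimal Monte Carlo sketch of permutation-based Shapley estimation; the toy model, baseline, and sample count are illustrative assumptions, and this is not the algorithm of any listed paper:

```python
# Minimal Monte Carlo sketch of Shapley-value feature attribution.
# The model, baseline, and sample counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical 3-feature model with an interaction term,
    # so the attributions are non-trivial.
    return 2.0 * x[0] + x[1] * x[2]

def shapley(x, baseline, n_perm=2000):
    """Estimate phi_i by averaging marginal contributions over random
    feature orderings; 'absent' features take their baseline value."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = model(z)
        for i in order:
            z[i] = x[i]           # reveal feature i
            cur = model(z)
            phi[i] += cur - prev  # marginal contribution of i in this order
            prev = cur
    return phi / n_perm

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley(x, baseline)
print("attributions:", phi)
# Efficiency axiom: attributions should sum to model(x) - model(baseline).
print("efficiency check:", phi.sum(), "~", model(x) - model(baseline))
```

The efficiency check at the end illustrates the axiom these methods preserve while they reduce sampling cost (PolySHAP), respect feature hierarchies (O-Shap), or keep perturbations on the data manifold.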