4 papers - avg viability 5.3
SHAPCA provides consistent and interpretable explanations for machine learning models on spectroscopy data, enabling trust and adoption in critical applications.
A method that transforms random forest classifiers into efficient circuits for enhanced explainability and decision analysis.
A new framework for evaluating the reliability of AI explanations in poultry disease detection, ensuring that explanations reflect actual disease indicators rather than environmental noise.
A new framework for evaluating AI uncertainty attribution methods, enabling more reliable and comparable XAI development.