Papers
Insight: Interpretable Semantic Hierarchies in Vision-Language Encoders
Language-aligned vision foundation models perform strongly across diverse downstream tasks. Yet, their learned representations remain opaque, making their decision-making hard to interpret. Recent wor...
Causal Neural Probabilistic Circuits
Concept Bottleneck Models (CBMs) enhance the interpretability of end-to-end neural networks by introducing a layer of concepts and predicting the class label from the concept predictions. A key proper...
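The teaser above describes the standard concept-bottleneck pattern: inputs are first mapped to interpretable concept predictions, and the class label is predicted only from those concepts. Below is a minimal, hedged sketch of that generic pattern in PyTorch; it is not this paper's implementation, and the layer widths, activation choices, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Generic concept bottleneck: inputs -> concept predictions -> class label."""

    def __init__(self, input_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Maps raw features to interpretable concept scores (the "bottleneck").
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_concepts),
        )
        # Predicts the class label from the concept predictions alone.
        self.label_predictor = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_predictor(x))  # concept probabilities
        logits = self.label_predictor(concepts)              # class logits
        return concepts, logits

# Illustrative usage: 10 binary concepts mediating a 5-way classification.
model = ConceptBottleneckModel(input_dim=64, num_concepts=10, num_classes=5)
concepts, logits = model(torch.randn(4, 64))
```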
Hierarchical Concept-based Interpretable Models
Modern deep neural networks remain challenging to interpret due to the opacity of their latent representations, impeding model understanding, debugging, and debiasing. Concept Embedding Models (CEMs) ...
From Basis to Basis: Gaussian Particle Representation for Interpretable PDE Operators
Learning PDE dynamics for fluids increasingly relies on neural operators and Transformer-based models, yet these approaches often lack interpretability and struggle with localized, high-frequency stru...