Figure: Schematic representation of KMEx. Left: the black-box classifier is removed and replaced by a nearest-neighbor classifier based on prototypes learned with k-means in the embedding space; the UMAP (McInnes et al., 2018) plot shows the projection of the learned embedding space for STL-10, with prototypes depicted as squares. Right: the prototypes are visualized in the input space via their closest training images.


Publication

Prototypical Self-Explainable Models Without Re-training

December 13, 2023

Gautam, Srishti; Boubekki, Ahcene; Höhne, Marina Marie-Claire; Kampffmeyer, Michael Christian.

Paper abstract

Explainable AI (XAI) has unfolded in two distinct research directions: on the one hand, post-hoc methods that explain the predictions of a pre-trained black-box model and, on the other hand, self-explainable models (SEMs), which are trained directly to provide explanations alongside their predictions. While the latter are preferred in safety-critical scenarios, post-hoc approaches have received the majority of attention until now, owing to their simplicity and their ability to explain base models without retraining. Current SEMs, instead, require complex architectures and heavily regularized loss functions, thus necessitating specific and costly training. To address this shortcoming and facilitate wider use of SEMs, we propose a simple yet efficient universal method called KMEx (K-Means Explainer), which can convert any existing pre-trained model into a prototypical SEM. The motivation behind KMEx is to enhance transparency in deep learning-based decision-making via class-prototype-based explanations that are diverse and trustworthy, without retraining the base model. We compare models obtained with KMEx to state-of-the-art SEMs using an extensive qualitative evaluation to highlight the strengths and weaknesses of each model, further paving the way toward a more reliable and objective evaluation of SEMs.
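The abstract and figure together outline the recipe: embed the data with the frozen base model, cluster in the embedding space with k-means to obtain class prototypes, and replace the classifier head by nearest-prototype assignment. Below is a minimal Python sketch of that pipeline, not the authors' implementation: it assumes a pre-computed embedding matrix, runs scikit-learn k-means independently per class (one plausible reading of the class-prototype-based explanations), and the helper names `learn_prototypes`, `predict_nearest_prototype`, and `closest_training_images` are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_prototypes(embeddings, labels, k_per_class=5, seed=0):
    """Run k-means independently within each class and return the
    cluster centers as class prototypes in the embedding space."""
    prototypes, proto_labels = [], []
    for c in np.unique(labels):
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=seed)
        km.fit(embeddings[labels == c])
        prototypes.append(km.cluster_centers_)
        proto_labels.extend([c] * k_per_class)
    return np.vstack(prototypes), np.asarray(proto_labels)

def predict_nearest_prototype(embeddings, prototypes, proto_labels):
    """Stand-in for the black-box classifier head: each sample inherits
    the label of its nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(
        embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
    return proto_labels[dists.argmin(axis=1)]

def closest_training_images(train_embeddings, prototypes):
    """Indices of the training samples closest to each prototype, so
    prototypes can be shown in input space, as in the figure."""
    dists = np.linalg.norm(
        train_embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=0)
```

Since only the classifier head changes and the encoder stays frozen, no retraining is required; the prototypes can then be explained to a user by displaying the training images returned by `closest_training_images`, matching the right panel of the figure.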