Explainability and reliability

Visual Intelligence is developing deep learning methods which provide explainable and reliable predictions, opening the “black box” of deep learning.

Motivation

A key limitation of deep learning models is that there is no generally accepted way to open the “black box” of a deep network and provide explainable decisions that can be trusted. There is therefore a need for explainability: models should be able to summarize the reasons for their predictions, both to gain the trust of users and to produce insights about the causes of their decisions.

Solving research challenges through new deep learning methodology

Visual Intelligence researchers have proposed new methods designed to provide explainable and transparent predictions, including methods for:

• content-based CT image retrieval, equipped with a novel representation-learning explainability network.

• explainable marine image analysis, providing clearer insights into the decision-making of models designed for marine species detection and classification.

• tackling distribution shifts and adversarial attacks in various image-based federated learning settings.

• discovering features to spot counterfeit images.

Developing explainable and reliable models is a step towards deep learning that is transparent, trustworthy, and accountable. Our proposed methods are therefore critical for bridging the gap between technical performance and real-world use in an ethical and responsible manner.
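
None of the methods above is reproduced here, but a small generic example can make the idea of an explainable prediction concrete. The sketch below computes a vanilla gradient saliency map, one of the simplest ways to inspect which input pixels drive a classifier's decision. It assumes PyTorch; the untrained ResNet-18 and the random input are placeholders, and the technique is a textbook baseline rather than any of the methods listed above.

    # Minimal sketch: vanilla gradient saliency for an image classifier.
    # Placeholders only: real use would load trained weights and a real image.
    import torch
    from torchvision import models

    model = models.resnet18(weights=None)  # untrained placeholder classifier
    model.eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy RGB image

    scores = model(image)             # class scores, shape (1, 1000)
    top_class = scores.argmax(dim=1)  # predicted class index
    scores[0, top_class].backward()   # backpropagate the top class score

    # Gradient magnitude w.r.t. the input: large values mark pixels whose
    # perturbation would most change the prediction.
    saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)

More refined attribution methods, such as the multiscale directional representations of Kolek et al. listed below, build on the same principle of tracing a prediction back to its inputs.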

Highlighted publications

Visual Data Diagnosis and Debiasing with Concept Graphs
By: Chakraborty, Rwiddhi; Wang, Yinong; Gao, Jialu; Zheng, Runkai; Zhang, Cheng; De la Torre, Fernando
September 26, 2024

Interrogating Sea Ice Predictability With Gradients
By: Joakimsen, H. L.; Martinsen, I.; Luppino, L. T.; McDonald, A.; Hosking, S.; Jenssen, R.
February 14, 2024

Other publications

Hubs and Hyperspheres: Reducing Hubness and Improving Transductive Few-shot Learning with Hyperspherical Embeddings
By: Trosten, Daniel Johansen; Chakraborty, Rwiddhi; Løkse, Sigurd Eivindson; Wickstrøm, Kristoffer; Jenssen, Robert; Kampffmeyer, Michael
Published in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023, pp. 7527-7536
August 22, 2023

Explaining Image Classifiers with Multiscale Directional Image Representation
By: Stefan Kolek, Robert Windesheim, Hector Andrade-Loarca, Gitta Kutyniok, Ron Levie
Published in: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, pp. 18600-18609
June 1, 2023

Learning Fair Representations through Uniformly Distributed Sensitive Attributes
By: Kenfack, Patrik; Ramírez Rivera, Adín; Khan, Adil; Mazzara, Manuel
Published in: 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), Raleigh, NC, USA, pp. 58-67
June 1, 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
By: Anna Hedström, Philine Lou Bommer, Kristoffer Knutsen Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina MC Höhne
Published in: Transactions on Machine Learning Research (06/2023)
June 1, 2023

A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
By: Wickstrøm, Kristoffer; Østmo, Eirik Agnalt; Radiya, Keyur; Mikalsen, Karl Øyvind; Kampffmeyer, Michael; Jenssen, Robert
Published in: Computerized Medical Imaging and Graphics 2023, Volume 107, pp. 1-12
May 9, 2023