Confidence and uncertainty

Background

Deep neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong, or whether an input lies outside the range in which the system can be expected to perform safely. For safety-critical or automated applications, knowledge of the confidence of a prediction is essential.

Challenges

For safety-critical applications, e.g. in health, a limitation of current deep learning systems is that they are generally not designed to recognize when their predictions may be wrong, or to establish with some certainty that an input lies inside the range in which the system is expected to perform safely. The simple regularization technique "Dropout" provides a measure of variability, but not a statistically sound quantification of the uncertainty propagating from input to output. Bayesian deep models are emerging, but they have so far been challenging to develop for complex image data, owing to the high dimensionality of the input and the nonlinear nature of the processing.
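To illustrate the point about Dropout, the sketch below shows the common Monte Carlo Dropout heuristic: dropout is kept active at test time and repeated stochastic forward passes yield a spread over predictions. This is a minimal, hypothetical example (the network, the mc_predict helper, and all parameter values are illustrative, not from any project codebase), and the resulting standard deviation is a rough variability measure rather than a calibrated, statistically sound uncertainty.

```python
# Minimal sketch of Monte Carlo Dropout (hypothetical example, not project code).
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, in_dim=16, n_classes=3, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),      # kept active at test time for MC sampling
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_predict(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout enabled and
    return the mean class probabilities and their per-class standard deviation."""
    model.train()  # keeps dropout active; BatchNorm layers would need extra care
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

if __name__ == "__main__":
    model = SmallNet()
    x = torch.randn(4, 16)           # four dummy inputs
    mean_p, std_p = mc_predict(model, x)
    print(mean_p)   # averaged predictions over MC samples
    print(std_p)    # spread across passes: a heuristic, not calibrated uncertainty
```

The spread across samples can flag inputs on which the model is unstable, but it does not, by itself, quantify how uncertainty in the input propagates to the output, which is what Bayesian deep models aim to provide.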

Main objective

To develop deep learning models that can estimate confidence and quantify uncertainty of their predictions.

Highlighted publications

Interrogating Sea Ice Predictability With Gradients
March 22, 2024
The paper interrogates the effect of IceNet's input features using a gradient-based analysis.
On the Effects of Self-supervision and Contrastive Alignment in Deep Multi-view Clustering
December 19, 2023
We propose DeepMVC – a unified framework which includes many recent methods as instances.