Medicine and health

This innovation area focuses on developing more efficient deep learning methods for diagnosis support and decision support for diseases such as cardiovascular diseases and cancer.

Motivation

Medical images captured from inside the body using various scanning and imaging techniques have traditionally been challenging and time-consuming for trained experts to analyze. Well-performing deep learning models in the medical domain have the potential to assist healthcare professionals by increasing diagnostic certainty and streamlining the analysis of medical images.

Our innovations

Visual Intelligence researchers have developed several innovations that aim to assist healthcare professionals in the clinical workflow. For instance, our research efforts have resulted in novel deep learning methods for:

  • automatically measuring the left ventricle in 2D echocardiography, in collaboration with GE Vingmed Ultrasound.
  • detecting cancer in mammography images, together with the Cancer Registry of Norway.
  • estimating the arterial input function in dynamic PET scans, in collaboration with the University Hospital of Northern Norway (UNN).
  • improving cancer diagnostic accuracy via digitalized pathology, together with the Cancer Registry of Norway.
  • augmenting CT images more effectively by clipping intensity values to ranges tailored to the characteristics of specific organs, such as the liver, in collaboration with UNN.
  • content-based image retrieval of CT liver images using self-supervised learning, together with UNN.

Addressing research challenges

Major obstacles to developing deep learning methods in medicine and health include the availability of training data, the estimation of confidence and uncertainty in model predictions, and a lack of explainability and reliability. The innovations mentioned above address these research challenges in different ways, enabling progress within this innovation area.

For instance, research on the challenge of learning from limited data is at the core of the clinically inspired data augmentation technique for CT images mentioned above. This method also leverages context and dependencies by exploiting knowledge about the signal-generating process.  
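The clipping-based augmentation described above can be illustrated with a short sketch of CT intensity windowing. This is a hedged illustration only: the function names and window ranges below are assumptions chosen for demonstration, not the values used in the published method.

```python
import numpy as np

def window_ct(image_hu, center, width):
    """Clip CT intensities (in Hounsfield units) to an intensity window
    defined by its center and width, then rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    clipped = np.clip(image_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

def augment_with_random_window(image_hu, rng,
                               center_range=(30.0, 70.0),
                               width_range=(150.0, 400.0)):
    """Data augmentation: sample a random window roughly around
    soft-tissue settings and apply it. These ranges are illustrative
    placeholders, not the organ-specific values from the paper."""
    center = rng.uniform(*center_range)
    width = rng.uniform(*width_range)
    return window_ct(image_hu, center, width)

# Example: apply a random window to a synthetic CT slice.
rng = np.random.default_rng(0)
slice_hu = rng.uniform(-1000.0, 1000.0, size=(8, 8))
augmented = augment_with_random_window(slice_hu, rng)
```

Randomizing the window during training exposes the model to the same organ under different intensity mappings, which is one way knowledge about the signal-generating process (the CT scanner's Hounsfield scale) can be exploited for augmentation.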

Research on explainable and reliable AI constitutes a significant part of our method for detecting cancer in mammography images. This is also the case for our novel content-based CT image retrieval method.
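In a content-based retrieval setup like the one mentioned above, images are mapped to embedding vectors by an encoder (in our method, one trained with self-supervision), and a query is answered by nearest-neighbour search over those embeddings. A minimal sketch of the retrieval step, with precomputed vectors standing in for encoder outputs (all names here are illustrative, not from the published system):

```python
import numpy as np

def cosine_retrieve(query_emb, gallery_embs, k=5):
    """Return the indices of the k gallery embeddings most similar to
    the query under cosine similarity, best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarities, shape (n,)
    return np.argsort(-sims)[:k]

# Example: a 3-image gallery of 2-D embeddings and one query.
gallery = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.7, 0.7]])
top = cosine_retrieve(np.array([1.0, 0.1]), gallery, k=2)
```

Because the embeddings come from self-supervised training, the same retrieval machinery transfers readily to other domains, which is part of why such methods synergize across innovation areas.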

Synergies within the innovation area and across other areas

When developing deep learning solutions for concrete medical and health challenges that our user partners face, it is important to transfer knowledge and methodologies across innovation areas. Our proposed methodologies within medicine and health synergize well with other work within this innovation area, as well as our other three innovation areas.

For instance, our semi-automatic landmark prediction method for cardiac ultrasound depends on context provided in the form of a scan line in the echocardiography. This is inspired by other solutions we have developed that leverage context in the form of anatomical knowledge, e.g. for cancer detection in mammography.

Self-supervised deep learning, which several of our medical innovations are based on, has proven useful not only within medicine and health, but also in “Marine science”, “Energy” and “Earth observation”. For example, the framework for CT image retrieval shares similarities with a content-based image retrieval system for seismic data.

Highlighted publications

Using Machine Learning to Quantify Tumor Infiltrating Lymphocytes in Whole Slide Images

By Nikita Shvetsov, Morten Grønnesby, Edvard Pedersen, Kajsa Møllersen, Lill-Tove Rasmussen Busund, Ruth Schwienbacher, Lars Ailo Bongo and Thomas K. Kilvaer. February 14, 2022.

The Risk of Imbalanced Datasets in Chest X-ray Image-based Diagnostics

By Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen and Michael Kampffmeyer. February 1, 2022.

Other publications

Motion compensated interpolation in echocardiography: A Lie-advection based approach

By H. N. Mirar, S. R. Snare and A. H. S. Solberg. Published in IEEE Transactions on Biomedical Engineering, vol. 72, no. 1, pp. 123-136, Jan. 2025, on January 16, 2025.

Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology

By Dhananjay Tomar, Alexander Binder and Andreas Kleppe. Published in Advances in Neural Information Processing Systems 2024, on September 24, 2024.

BrainIB: Interpretable brain network-based psychiatric diagnosis with graph information bottleneck

By Kaizhong Zheng, Shujian Yu, Baojuan Li, Robert Jenssen and Badong Chen. Published in IEEE Transactions on Neural Networks and Learning Systems, on September 13, 2024.

An exploratory study of self-supervised pre-training on partially supervised multi-label classification on chest X-ray images

By Nanqing Dong, Michael Kampffmeyer, Haoyang Su and Eric Xing. Published in Applied Soft Computing, Volume 163, 111855, on September 1, 2024.

View it like a radiologist: Shifted windows for deep learning augmentation of CT images

By Eirik Agnalt Østmo, Kristoffer Wickstrøm, Keyur Radiya, Michael Kampffmeyer and Robert Jenssen. Published in 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP), Rome, Italy, 2023, pp. 1-6, on October 23, 2023.