Abstract
The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large numbers of fully annotated images. Such datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that require only limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning-based organ segmentation. When working in the limited-supervision paradigm, exploiting the information available in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images.

The work on unsupervised tumor segmentation explores clustering at the population level in order to provide the algorithm with as much information as possible. To facilitate this population-level, across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost.

In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed. Results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision.
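To make the anomaly detection-inspired classification idea concrete, the sketch below illustrates how a single foreground prototype and an anomaly-score threshold could replace explicit background modelling in a few-shot setting, as described in Paper II. The function name, tensor shapes, the cosine-similarity scaling factor and the soft-thresholding form are illustrative assumptions for this sketch, not the exact implementation used in the thesis.

```python
# Minimal sketch (assumptions noted above): classify query pixels as foreground
# if their anomaly score w.r.t. a single foreground prototype is below a threshold.
import torch
import torch.nn.functional as F


def anomaly_classifier(query_feats, support_feats, support_mask, threshold, alpha=20.0):
    """query_feats:   (B, C, H, W) embedded query features
    support_feats: (B, C, H, W) embedded support features
    support_mask:  (B, H, W)    binary foreground mask of the support image
    threshold:     scalar anomaly-score threshold (could be learned)
    alpha:         assumed scaling factor for the cosine similarity
    """
    # Masked average pooling: one prototype for the foreground class only,
    # so the heterogeneous background never has to be modelled explicitly.
    mask = support_mask.unsqueeze(1).float()                                    # (B, 1, H, W)
    prototype = (support_feats * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)  # (B, C)

    # Anomaly score: negative scaled cosine similarity to the foreground prototype.
    proto = prototype[:, :, None, None].expand_as(query_feats)                 # (B, C, H, W)
    anomaly_score = -alpha * F.cosine_similarity(query_feats, proto, dim=1)    # (B, H, W)

    # Soft thresholding: low anomaly score -> foreground, high -> background.
    fg_prob = torch.sigmoid(-(anomaly_score - threshold))
    return fg_prob


# Toy usage with random features, just to show the expected shapes.
q = torch.randn(1, 64, 32, 32)
s = torch.randn(1, 64, 32, 32)
m = (torch.rand(1, 32, 32) > 0.8).long()
print(anomaly_classifier(q, s, m, threshold=-10.0).shape)  # torch.Size([1, 32, 32])
```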
Has part(s)
Paper I: Hansen, S., Kuttner, S., Kampffmeyer, M., Markussen, T.V., Sundset, R., Øen, S.K., Eikenes, L. & Jenssen, R. (2021). Unsupervised supervoxel-based lung tumor segmentation across patient scans in hybrid PET/MRI. Expert Systems with Applications, 167, 114244. Also available in Munin at https://hdl.handle.net/10037/20049.
Paper II: Hansen, S., Gautam, S., Jenssen, R. & Kampffmeyer, M. (2022). Anomaly detection-inspired few-shot medical image segmentation through self-supervision with supervoxels. Medical Image Analysis, 78, 102385. Also available in Munin at https://hdl.handle.net/10037/26143.
Paper III: Hansen, S., Gautam, S., Salahuddin, S.A., Kampffmeyer, M. & Jenssen, R. ADNet++: A few-shot learning framework for multi-class medical image volume segmentation with uncertainty-guided feature refinement. (Submitted manuscript).