Show simple item record

dc.contributor.advisor: Jenssen, Robert
dc.contributor.author: Hansen, Stine
dc.date.accessioned: 2022-11-30T09:53:11Z
dc.date.available: 2022-11-30T09:53:11Z
dc.date.issued: 2022-12-16
dc.description.abstract: The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large amounts of fully annotated images. Such datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that require only limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning-based organ segmentation. When working in the limited-supervision paradigm, exploiting the available information in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images. The work on unsupervised tumor segmentation explores the opportunity of performing clustering at the population level in order to provide the algorithm with as much information as possible. To facilitate this population-level, across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost. In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed. Results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision. [en_US]
dc.description.doctoraltype: ph.d. [en_US]
dc.description.popularabstract: Accurate image segmentation is an essential prerequisite in various clinical applications, such as radiotherapy treatment planning, tissue quantification, and diagnostics. A great amount of effort has therefore been put into the investigation of machine learning approaches that learn to segment images by exploiting patterns in collected data. However, the majority of existing methods for machine learning-based medical image segmentation require large amounts of manually annotated images. These are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that reduce the number of manually annotated images needed. To address these challenges, this thesis presents new machine learning methodology that either does not rely on any annotations or requires only a few labeled images. In particular, the methods exploit information in unlabeled images by leveraging so-called supervoxels that define small, consistent sub-regions of the images. The applications considered are lung tumor segmentation and organ segmentation, where promising results are obtained. [en_US]
dc.description.sponsorship: The work was supported by The Research Council of Norway (RCN), through its Centre for Research-based Innovation funding scheme [grant number 309439] and Consortium Partners; RCN FRIPRO [grant number 315029]; RCN IKTPLUSS [grant number 303514]; the UiT Thematic Initiative; Northern Norway Regional Health Authority [grant number HNF1349-17]; Central Norway Regional Health Authority [grant number 46056912]; and the Norwegian Research Council [grant number 303514]. [en_US]
dc.identifier.isbn: 978-82-8236-505-5 (electronic/PDF version)
dc.identifier.isbn: 978-82-8236-504-8 (printed version)
dc.identifier.uri: https://hdl.handle.net/10037/27613
dc.language.iso: eng [en_US]
dc.publisher: UiT Norges arktiske universitet [en_US]
dc.publisher: UiT The Arctic University of Norway [en_US]
dc.relation.haspart:
Paper I: Hansen, S., Kuttner, S., Kampffmeyer, M., Markussen, T.V., Sundset, R., Øen, S.K., Eikenes, L. & Jenssen, R. (2021). Unsupervised supervoxel-based lung tumor segmentation across patient scans in hybrid PET/MRI. Expert Systems with Applications, 167, 114244. Also available in Munin at https://hdl.handle.net/10037/20049.
Paper II: Hansen, S., Gautam, S., Jenssen, R. & Kampffmeyer, M. (2022). Anomaly detection-inspired few-shot medical image segmentation through self-supervision with supervoxels. Medical Image Analysis, 78, 102385. Also available in Munin at https://hdl.handle.net/10037/26143.
Paper III: Hansen, S., Gautam, S., Salahuddin, S.A., Kampffmeyer, M. & Jenssen, R. ADNet++: A few-shot learning framework for multi-class medical image volume segmentation with uncertainty-guided feature refinement. (Submitted manuscript). [en_US]
dc.rights.accessRights: openAccess [en_US]
dc.rights.holder: Copyright 2022 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0 [en_US]
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) [en_US]
dc.subject: VDP::Technology: 500::Information and communication technology: 550 [en_US]
dc.subject: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550 [en_US]
dc.title: Leveraging Supervoxels for Medical Image Volume Segmentation With Limited Supervision [en_US]
dc.type: Doctoral thesis [en_US]
dc.type: Doktorgradsavhandling [en_US]
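The abstracts above describe supervoxels as small, consistent sub-regions of an image volume that carry structural information from unlabeled data. For illustration only, the following is a minimal sketch of how such a supervoxel partition could be computed for a 3D volume using scikit-image's SLIC algorithm; the supervoxel algorithm and parameters actually used in the thesis may differ.

```python
# A minimal sketch of supervoxel generation on a 3D volume (illustration only;
# the thesis may use a different supervoxel algorithm and parameter settings).
import numpy as np
from skimage.segmentation import slic

def compute_supervoxels(volume: np.ndarray, n_segments: int = 500, compactness: float = 0.1):
    """Partition a 3D grayscale volume (D, H, W) into supervoxels with SLIC.

    Returns an integer label map of the same shape, where each label indexes
    one supervoxel (a small, spatially coherent sub-region of the volume).
    """
    # Normalize intensities to [0, 1] so the compactness parameter behaves consistently.
    vol = volume.astype(np.float32)
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
    # channel_axis=None tells SLIC to treat the array as a single-channel 3D image.
    labels = slic(vol, n_segments=n_segments, compactness=compactness,
                  start_label=0, channel_axis=None)
    return labels

# Example: a random volume stands in for a CT/MR scan.
volume = np.random.rand(32, 128, 128).astype(np.float32)
supervoxels = compute_supervoxels(volume)
print(supervoxels.shape, supervoxels.max() + 1, "supervoxels")
```

The resulting label map can then serve, for example, as pseudo-labels for self-supervised training or as regions over which features are pooled.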
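The abstract also mentions an anomaly detection-inspired classifier that avoids explicitly modelling the large, inhomogeneous background class in few-shot segmentation. The sketch below illustrates the general idea under simplifying assumptions: a single foreground prototype pooled from the support features, cosine-similarity anomaly scores for the query features, and a learned threshold in place of a background model. The class name, scale constant, and initialization are hypothetical and not taken from the thesis.

```python
# A minimal PyTorch sketch of an anomaly detection-inspired few-shot classifier:
# one foreground prototype is pooled from the support features, query positions
# are scored by (negative) cosine similarity to it, and a learned threshold
# separates foreground from the unmodelled background. Names and constants are
# illustrative simplifications, not the thesis implementation.
import torch
import torch.nn.functional as F

class AnomalyFewShotHead(torch.nn.Module):
    def __init__(self, scale: float = 20.0):
        super().__init__()
        self.scale = scale                                          # sharpens similarity scores
        self.threshold = torch.nn.Parameter(torch.tensor(-10.0))   # learned anomaly threshold

    def forward(self, support_feat, support_mask, query_feat):
        # support_feat: (C, H, W), support_mask: (H, W) binary, query_feat: (C, H, W)
        # Masked average pooling -> one foreground prototype of shape (C,).
        fg = support_mask.unsqueeze(0)                              # (1, H, W)
        prototype = (support_feat * fg).sum(dim=(1, 2)) / (fg.sum() + 1e-8)

        # Anomaly score: negative scaled cosine similarity to the prototype.
        sim = F.cosine_similarity(query_feat, prototype[:, None, None], dim=0)  # (H, W)
        anomaly_score = -self.scale * sim

        # Soft thresholding: low anomaly score (high similarity) -> foreground.
        fg_prob = torch.sigmoid(-(anomaly_score - self.threshold))
        return fg_prob                                              # (H, W) foreground probability

# Toy usage with random tensors in place of a trained encoder's features.
C, H, W = 64, 32, 32
head = AnomalyFewShotHead()
fg_prob = head(torch.randn(C, H, W), (torch.rand(H, W) > 0.7).float(), torch.randn(C, H, W))
print(fg_prob.shape)
```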


Associated file(s)


This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)