Show simple item record

dc.contributor.advisor: Kampffmeyer, Michael
dc.contributor.advisor: Wickstrøm, Kristoffer Knutsen
dc.contributor.author: Joakimsen, Harald Lykke
dc.description.abstract: The predictive power of modern deep learning approaches is poised to revolutionize the medical imaging field; however, their usefulness and applicability are severely limited by the lack of well-annotated data. Liver segmentation in CT images is an application that could benefit particularly from less data-hungry methods, potentially leading to better liver volume estimation and tumor detection. To this end, we propose a new semantic segmentation model called ConvMixerSeg and show experimentally that it outperforms an FCN with a ResNet-50 backbone when trained to segment livers on a subset of the Liver Tumor Segmentation Benchmark (LiTS) data set. We have further developed a novel Class Activation Map (CAM) based method to train semantic segmentation models with image-level labels without adding parameters. The proposed CAM method includes a Neighborhood Correlation Enforcement module using Gaussian smoothing that reduces part domination and prediction noise. Additionally, our experiments show that the proposed CAM method outperforms the original CAM method for both classification and segmentation with high statistical significance, given the same ConvMixerSeg backbone. [en_US]
dc.publisher: UiT Norges arktiske universitet [no]
dc.publisher: UiT The Arctic University of Norway [en]
dc.rights.holder: Copyright 2021 The Author(s)
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) [en_US]
dc.subject: Master's Thesis in Applied Physics and Mathematics [en_US]
dc.title: ConvMixerSeg: Weakly Supervised Semantic Segmentation for CT Liver Images [en_US]
dc.type: Master thesis [en]
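The abstract describes a CAM-based weakly supervised approach in which a Neighborhood Correlation Enforcement module applies Gaussian smoothing to reduce part domination and prediction noise. As an illustration of the general idea only (not the thesis's actual implementation; the function names, array shapes, and the `sigma` value are assumptions), a minimal sketch of a class activation map followed by Gaussian smoothing might look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def class_activation_map(features, class_weights):
    """Standard CAM: weight the final conv feature maps by the linear
    classifier's weights for one class and sum over channels.

    features: (C, H, W) feature maps; class_weights: (C,) weights.
    Returns a (H, W) map normalized to [0, 1].
    """
    cam = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)          # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()           # normalize to [0, 1]
    return cam


def smoothed_cam(features, class_weights, sigma=1.5):
    """CAM followed by Gaussian smoothing, in the spirit of enforcing
    neighborhood correlation: each pixel's activation is averaged with
    its neighbours, damping isolated noisy responses and spreading
    activation away from a single dominating part. sigma is an
    illustrative choice, not a value from the thesis.
    """
    return gaussian_filter(class_activation_map(features, class_weights),
                           sigma=sigma)
```

In this toy form, the smoothing step is a plain low-pass filter over the activation map; the smoothed map could then be thresholded to obtain pseudo segmentation masks from image-level labels.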
