Inference Guided Few-Shot Segmentation
Permanent link: https://hdl.handle.net/10037/26200
Date: 2022-06-22
Type: Master thesis
Author: Burman, Joel
Abstract
Few-shot segmentation has received a lot of attention in recent years. The reason is its ability to segment images from new classes based on only a handful of labeled support images. This removes the need for a large dataset and thereby opens up many possibilities.
To achieve this, a few-shot segmentation network needs to extract as much high-quality information from each support image as possible.
In this thesis we explore whether an existing few-shot segmentation network can be improved by making the inference phase more target-class specific. To do this we introduce our Inference Guided Few-Shot Segmentation (IGFSS) method, which can be applied to an existing few-shot segmentation network. It changes the inference phase from a static network to one that adapts certain class-specific parts of the network to each new target class. We tested our method with the Self-Guided Cross-Guided (SGCG) network as backbone, optimizing either the prototypes or the decoder. We used the Pascal dataset to compare the results of both variants, evaluating on a fixed list of images from the dataset to enable a fair comparison.
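The general idea behind such inference-time adaptation can be illustrated with a toy sketch, under stated assumptions: this is not the thesis's SGCG-based implementation, but a minimal prototype-based variant in NumPy, where the class prototype is initialised as the masked average of the support features and then refined by gradient descent on a binary cross-entropy loss over the support masks. All names (`adapt_prototype`, `predict`) and the feature shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adapt_prototype(support_feats, support_masks, steps=100, lr=0.5):
    """Toy inference-time adaptation (illustrative, not the thesis code).

    support_feats: (N, H, W, C) per-pixel features of N support images.
    support_masks: (N, H, W) binary foreground masks.
    Returns a class prototype (C,) refined on the support set.
    """
    feats = support_feats.reshape(-1, support_feats.shape[-1])  # (P, C)
    masks = support_masks.reshape(-1)                           # (P,)
    # Standard few-shot initialisation: mean foreground feature.
    proto = (feats * masks[:, None]).sum(0) / max(masks.sum(), 1)
    # Refine the prototype on the support set: gradient descent on
    # per-pixel binary cross-entropy between sigmoid scores and masks.
    for _ in range(steps):
        probs = sigmoid(feats @ proto)          # per-pixel foreground score
        grad = feats.T @ (probs - masks) / len(masks)
        proto -= lr * grad
    return proto

def predict(query_feats, proto):
    """Segment a query image by thresholding the prototype score."""
    return (sigmoid(query_feats @ proto) > 0.5).astype(np.uint8)
```

In this sketch only the prototype is updated at inference; adapting the decoder instead would correspond to taking the same gradient steps on the decoder's weights while keeping the rest of the network frozen.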
In the 5-shot setup, where new classes are segmented based on 5 support images, we get a solid improvement when our method is applied to either the prototypes or the decoder: the mean IoU score increased by 3.7% and 7.5%, respectively.
The dataset was analysed with regard to its image and object distributions, which gives a better understanding of the results of our IGFSS method.
While our IGFSS method benefits all classes, it could be a first step towards a Class-Adaptive Inference Guided Few-Shot Segmentation method.
Publisher
UiT The Arctic University of Norway
Copyright 2022 The Author(s)