Show simple item record

dc.contributor.advisor: Sharma, Puneet
dc.contributor.author: Røkenes, Sigurd
dc.date.accessioned: 2022-09-16T05:49:05Z
dc.date.available: 2022-09-16T05:49:05Z
dc.date.issued: 2022-07-11 [en]
dc.description.abstract: In marine science, there is a need for tools for population counting of species. Through this thesis we aim to achieve the following three objectives: first, briefly discuss the state-of-the-art object detectors that can be used for the detection of porpoises in drone images/videos; second, test and compare a few state-of-the-art object detectors in both a quantitative and a qualitative manner; third, based on our results, propose a set of suggestions that can be used for future studies associated with population counting. To address the second objective we compared three state-of-the-art object detection models: two single-stage detectors and one two-stage detector. The models chosen were Faster R-CNN, YOLOv4 and EfficientDet, and they were trained and tested on a custom dataset consisting of 7300 labeled images of porpoises, of which 2300 were included in the test set. Through our experiments, we have discovered that YOLOv4 outperforms Faster R-CNN and EfficientDet D1 in detection, where YOLOv4 achieves a recall of 97%, compared to 80% recall with EfficientDet D1 and 75% recall with Faster R-CNN. We also find the average precision AP@50 value of YOLOv4 to be 0.778, which is greater than EfficientDet D1 with 0.695 and Faster R-CNN with 0.686. Through both qualitative and quantitative methods we discover that both EfficientDet D1 and Faster R-CNN suffer from poor recall, especially when porpoises overlap in the images. Faster R-CNN misses nearly all detections when the porpoises overlap, but rarely misses non-overlapping porpoises. EfficientDet misses a significant portion of the overlapping detections, but also misses a few of the singular ones. Through examination of the COCO detection metrics, which favor bounding-box accuracy, we also show that Faster R-CNN produces more precise bounding boxes than YOLOv4 and EfficientDet D1 by comparing the less strict AP@50 values with the stricter AP@75 and AP[.50:.05:.95] values. These results imply that a one-stage detection model such as YOLOv4 could be used for object detection of porpoises in drone images. Based on the results, a few important areas for further investigation are outlined in the discussion, and a framework was developed which allows marine researchers to easily perform porpoise detection on images and videos. [en_US]
dc.identifier.uri: https://hdl.handle.net/10037/26820
dc.language.iso: eng [en_US]
dc.publisher: UiT Norges arktiske universitet [no]
dc.publisher: UiT The Arctic University of Norway [en]
dc.rights.holder: Copyright 2022 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0 [en_US]
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) [en_US]
dc.subject.courseID: FYS-3941
dc.subject: Machine Learning [en_US]
dc.subject: Object Detection [en_US]
dc.title: Towards population counting of marine mammals based on drone images [en_US]
dc.type: Mastergradsoppgave [no]
dc.type: Master thesis [en]
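
The abstract above reports recall and COCO-style average precision values (AP@50, AP@75, AP[.50:.05:.95]). As a rough illustration of what those quantities measure, the following is a minimal sketch, not code from the thesis: the box format (x1, y1, x2, y2), all function names, the toy data, and the simplified (non-interpolated) AP integration are assumptions for illustration only.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_detections(pred_boxes, pred_scores, gt_boxes, iou_thr=0.5):
    """Greedily match score-sorted predictions to unmatched ground-truth boxes.
    Returns (score, is_true_positive) pairs and the recall at this IoU threshold."""
    order = np.argsort(pred_scores)[::-1]
    matched_gt, results = set(), []
    for i in order:
        best_j, best_iou = -1, iou_thr
        for j, gt_box in enumerate(gt_boxes):
            if j in matched_gt:
                continue
            overlap = iou(pred_boxes[i], gt_box)
            if overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j >= 0:
            matched_gt.add(best_j)
            results.append((pred_scores[i], True))   # true positive
        else:
            results.append((pred_scores[i], False))  # false positive
    recall = len(matched_gt) / max(len(gt_boxes), 1)
    return results, recall

def average_precision(results, num_gt):
    """Area under the precision-recall curve at one IoU threshold
    (AP@50 when the matching used iou_thr=0.5). Simplified: no
    101-point interpolation as in the official COCO evaluation."""
    results = sorted(results, key=lambda r: -r[0])
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in results:
        tp, fp = tp + is_tp, fp + (not is_tp)
        precision = tp / (tp + fp)
        recall = tp / max(num_gt, 1)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Toy example: two ground-truth porpoises, three detections.
gt = [(10, 10, 50, 50), (60, 60, 100, 100)]
preds = [(12, 11, 52, 49), (61, 58, 98, 102), (200, 200, 220, 220)]
scores = [0.9, 0.8, 0.6]
results, recall_50 = match_detections(preds, scores, gt, iou_thr=0.5)
print("recall@0.5 IoU:", recall_50, " AP@50:", average_precision(results, len(gt)))
```

In the same spirit, AP[.50:.05:.95] corresponds to averaging average_precision over IoU thresholds 0.50, 0.55, ..., 0.95; because this sketch omits the interpolation step used by the official COCO evaluation, its values would differ slightly from the ones reported in the thesis.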


Associated file(s)


This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)