
dc.contributor.author: Celis, Gerardo
dc.contributor.author: Ungar, Peter
dc.contributor.author: Sokolov, Aleksandr
dc.contributor.author: Soininen, Eeva M
dc.contributor.author: Böhner, Hanna
dc.contributor.author: Liu, Desheng
dc.contributor.author: Gilg, Olivier
dc.contributor.author: Fufachev, Ivan
dc.contributor.author: Pokrovskaya, Olga
dc.contributor.author: Ims, Rolf Anker
dc.contributor.author: Zhou, Wenbo
dc.contributor.author: Morris, Dan
dc.contributor.author: Ehrich, Dorothee
dc.date.accessioned: 2024-09-05T09:12:01Z
dc.date.available: 2024-09-05T09:12:01Z
dc.date.issued: 2024-03-26
dc.description.abstract: Camera traps are a powerful, practical, and non-invasive method used widely to monitor animal communities and evaluate management actions. However, camera trap arrays can generate thousands to millions of images that require significant time and effort to review. Computer vision has emerged as a tool to accelerate this image review process. We propose a multi-step, semi-automated workflow which takes advantage of site-specific and generalizable models to improve detections and consists of (1) automatically identifying and removing low-quality images in parallel with classification into animals, humans, vehicles, and empty, (2) automatically cropping objects from images and classifying them (rock, bait, empty, and species), and (3) manually inspecting a subset of images. We trained and evaluated this approach using 548,627 images from 46 cameras in two regions of the Arctic: “Finnmark” (Finnmark County, Norway) and “Yamal” (Yamalo-Nenets Autonomous District, Russia). The automated steps yield image classification accuracies of 92% and 90% for the Finnmark and Yamal sets, respectively, reducing the number of images that required manual inspection to 9.2% of the Finnmark set and 3.9% of the Yamal set. The time invested in developing the models would be offset by the time saved through automation after 960,000 images have been processed. Researchers can modify this multi-step process to develop their own site-specific models and meet other needs for monitoring and surveying wildlife, balancing the acceptable levels of false negatives and positives. (en_US)
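The routing logic of the three-step workflow described in the abstract can be sketched in a few lines of Python. This is a minimal illustration only: the `detect` and `classify_crop` stubs, the 0.8 confidence floor, and the directory name are hypothetical stand-ins for the generalizable detector (step 1) and site-specific crop classifier (step 2), not the authors' published implementation.

```python
"""Sketch of the multi-step, semi-automated camera trap workflow."""
from dataclasses import dataclass
from pathlib import Path

from PIL import Image

CONFIDENCE_FLOOR = 0.8  # assumed threshold; tuned per site in practice


@dataclass
class Detection:
    label: str                       # "animal", "human", or "vehicle"
    confidence: float
    box: tuple[int, int, int, int]   # left, upper, right, lower (pixels)


def detect(image: Image.Image) -> list[Detection]:
    """Step 1 stand-in: a generalizable detector that filters
    low-quality/empty frames and localizes animals, humans, and
    vehicles. Replace with a real model; returns nothing here."""
    return []


def classify_crop(crop: Image.Image) -> tuple[str, float]:
    """Step 2 stand-in: a site-specific classifier over cropped
    objects (rock, bait, empty, or species). Replace with a
    trained model; returns a dummy label here."""
    return "empty", 0.0


def process(image_path: Path) -> str:
    """Route one image: auto-accept confident classifications and
    flag the rest for step 3 (manual inspection)."""
    image = Image.open(image_path)
    detections = detect(image)
    if not detections:
        return "empty"                     # discarded automatically
    labels = []
    for det in detections:
        if det.label != "animal":
            labels.append(det.label)       # humans/vehicles kept as-is
            continue
        species, conf = classify_crop(image.crop(det.box))
        if conf < CONFIDENCE_FLOOR:
            return "manual_review"         # step 3: a human inspects
        labels.append(species)
    return ",".join(labels)


if __name__ == "__main__":
    for path in Path("camera_trap_images").glob("*.jpg"):
        print(path.name, "->", process(path))
```

Raising or lowering the confidence floor is how such a pipeline trades false negatives against false positives, which is the balancing the abstract leaves to each study's needs.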
dc.identifier.citation: Celis, Ungar, Sokolov, Soininen, Böhner, Liu, Gilg, Fufachev, Pokrovskaya, Ims, Zhou, Morris, Ehrich. A versatile, semi-automated image analysis workflow for time-lapse camera trap image classification. Ecological Informatics. 2024;81 (en_US)
dc.identifier.cristinID: FRIDAID 2269837
dc.identifier.doi: 10.1016/j.ecoinf.2024.102578
dc.identifier.issn: 1574-9541
dc.identifier.issn: 1878-0512
dc.identifier.uri: https://hdl.handle.net/10037/34526
dc.language.iso: eng (en_US)
dc.publisher: Elsevier (en_US)
dc.relation.journal: Ecological Informatics
dc.rights.accessRights: openAccess (en_US)
dc.rights.holder: Copyright 2024 The Author(s) (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0 (en_US)
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) (en_US)
dc.title: A versatile, semi-automated image analysis workflow for time-lapse camera trap image classification (en_US)
dc.type.version: publishedVersion (en_US)
dc.type: Journal article (en_US)
dc.type: Tidsskriftartikkel [Norwegian: "Journal article"] (en_US)
dc.type: Peer reviewed (en_US)

