dc.contributor.author | Celis, Gerardo | |
dc.contributor.author | Ungar, Peter | |
dc.contributor.author | Sokolov, Aleksandr | |
dc.contributor.author | Soininen, Eeva M | |
dc.contributor.author | Böhner, Hanna | |
dc.contributor.author | Liu, Desheng | |
dc.contributor.author | Gilg, Olivier | |
dc.contributor.author | Fufachev, Ivan | |
dc.contributor.author | Pokrovskaya, Olga | |
dc.contributor.author | Ims, Rolf Anker | |
dc.contributor.author | Zhou, Wenbo | |
dc.contributor.author | Morris, Dan | |
dc.contributor.author | Ehrich, Dorothee | |
dc.date.accessioned | 2024-09-05T09:12:01Z | |
dc.date.available | 2024-09-05T09:12:01Z | |
dc.date.issued | 2024-03-26 | |
dc.description.abstract | Camera traps are a powerful, practical, and non-invasive method used widely to monitor animal communities and evaluate management actions. However, camera trap arrays can generate thousands to millions of images that require significant time and effort to review. Computer vision has emerged as a tool to accelerate this image review process. We propose a multi-step, semi-automated workflow which takes advantage of site-specific and generalizable models to improve detections and consists of (1) automatically identifying and removing low-quality images in parallel with classification into animals, humans, vehicles, and empty, (2) automatically cropping objects from images and classifying them (rock, bait, empty, and species), and (3) manually inspecting a subset of images. We trained and evaluated this approach using 548,627 images from 46 cameras in two regions of the Arctic: “Finnmark” (Finnmark County, Norway) and “Yamal” (Yamalo-Nenets Autonomous District, Russia). The automated steps yield image classification accuracies of 92% and 90% for the Finnmark and Yamal sets, respectively, reducing the number of images that required manual inspection to 9.2% of the Finnmark set and 3.9% of the Yamal set. The time invested in developing the models would be offset by the time saved through automation after roughly 960,000 images have been processed. Researchers can modify this multi-step process to develop their own site-specific models and meet other needs for monitoring and surveying wildlife, balancing acceptable levels of false negatives and false positives. | en_US |
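The three-stage routing logic described in the abstract (quality filtering with coarse classification, fine classification of cropped detections, and manual review of uncertain cases) can be sketched as follows. This is a minimal illustration only: the thresholds, stub scores, and class labels are assumptions for demonstration, not the authors' actual models or parameters.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Stub record standing in for one camera-trap image and model outputs."""
    name: str
    quality: float      # hypothetical 0-1 image-quality score (blur/exposure)
    coarse: str         # stub stage-1 label: "animal", "human", "vehicle", "empty"
    fine: str           # stub stage-2 label: species name, "rock", "bait", "empty"
    confidence: float   # stub stage-2 classifier confidence

def run_workflow(frames, quality_min=0.3, conf_min=0.8):
    """Route each frame through the three stages; thresholds are illustrative."""
    discarded, auto_labeled, manual_review = [], [], []
    for f in frames:
        # Stage 1: drop low-quality frames; coarse-classify the rest.
        if f.quality < quality_min:
            discarded.append(f.name)
            continue
        if f.coarse == "empty":
            auto_labeled.append((f.name, "empty"))
            continue
        # Stage 2: crop detected objects and classify them with the fine model
        # (here represented by the precomputed stub label and confidence).
        # Stage 3: send low-confidence classifications to manual inspection.
        if f.confidence >= conf_min:
            auto_labeled.append((f.name, f.fine))
        else:
            manual_review.append(f.name)
    return discarded, auto_labeled, manual_review

frames = [
    Frame("a.jpg", 0.1, "empty", "empty", 0.9),      # too blurry: discarded
    Frame("b.jpg", 0.9, "animal", "reindeer", 0.95), # confident: auto-labeled
    Frame("c.jpg", 0.8, "animal", "rock", 0.5),      # uncertain: manual review
]
discarded, auto_labeled, manual_review = run_workflow(frames)
```

Raising `conf_min` shifts more images into manual review (fewer false positives, more reviewer effort), while lowering it does the reverse, which is the false-negative/false-positive balance the abstract refers to.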
dc.identifier.citation | Celis, Ungar, Sokolov, Soininen, Böhner, Liu, Gilg, Fufachev, Pokrovskaya, Ims, Zhou, Morris, Ehrich. A versatile, semi-automated image analysis workflow for time-lapse camera trap image classification. Ecological Informatics. 2024;81 | en_US |
dc.identifier.cristinID | FRIDAID 2269837 | |
dc.identifier.doi | 10.1016/j.ecoinf.2024.102578 | |
dc.identifier.issn | 1574-9541 | |
dc.identifier.issn | 1878-0512 | |
dc.identifier.uri | https://hdl.handle.net/10037/34526 | |
dc.language.iso | eng | en_US |
dc.publisher | Elsevier | en_US |
dc.relation.journal | Ecological Informatics | |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2024 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0 | en_US |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) | en_US |
dc.title | A versatile, semi-automated image analysis workflow for time-lapse camera trap image classification | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |