dc.contributor.author | Torpmann-Hagen, Birk Sebastian Frostelid | |
dc.contributor.author | Riegler, Michael | |
dc.contributor.author | Halvorsen, Pål | |
dc.contributor.author | Johansen, Dag | |
dc.date.accessioned | 2024-09-26T08:39:10Z | |
dc.date.available | 2024-09-26T08:39:10Z | |
dc.date.issued | 2024-04-24 | |
dc.description.abstract | Deep Neural Networks have been shown to perform poorly or even fail altogether when
deployed in real-world settings, despite exhibiting excellent performance on initial benchmarks. This
typically occurs due to relative changes in the nature of the production data, often referred to as distributional
shifts. In an attempt to increase the transparency, trustworthiness, and overall utility of deep learning
systems, a growing body of work has been dedicated to developing distributional shift detectors. As part
of our work, we investigate distributional shift detectors that utilize statistical tests of neural-network-based
representations of data. We show that these methods are prone to failure under sample-bias, which we
argue is unavoidable in most practical machine learning systems. To mitigate this, we implement a novel
distributional shift detection framework that explicitly accounts for sample-bias via a simple sample-selection procedure. In particular, we show that the effect of sample-bias can be significantly reduced by
performing statistical tests against the most similar data in the training set, as opposed to the training set as
a whole. We find that this improves the stability and accuracy of a variety of distributional shift detection
methods on both covariate and semantic shifts, with improvements to balanced accuracy typically ranging
between 0.1 and 0.2, and false-positive rates often being eliminated altogether under bias. | en_US |
dc.identifier.citation | Torpmann-Hagen B, Riegler M, Halvorsen P, Johansen D. A Robust Framework for Distributional Shift Detection Under Sample-Bias. IEEE Access. 2024;12:59598-59611 | en_US |
dc.identifier.cristinID | FRIDAID 2271569 | |
dc.identifier.doi | 10.1109/ACCESS.2024.3393296 | |
dc.identifier.issn | 2169-3536 | |
dc.identifier.uri | https://hdl.handle.net/10037/34876 | |
dc.language.iso | eng | en_US |
dc.publisher | IEEE | en_US |
dc.relation.journal | IEEE Access | |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2024 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0 | en_US |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) | en_US |
dc.title | A Robust Framework for Distributional Shift Detection Under Sample-Bias | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |