Visual explanations for polyp detection: How medical doctors assess intrinsic versus extrinsic explanations
Permanent link
https://hdl.handle.net/10037/34480
Date
2024-05-31
Type
Journal article
Peer reviewed
Authors
Hicks, Steven; Storås, Andrea; Riegler, Michael; Midoglu, Cise; Hammou, Malek; Lange, Thomas de; Parasa, Sravanthi; Halvorsen, Pål; Strumke, Inga
Abstract
Deep learning has achieved immense success in computer vision and has the potential to help physicians analyze visual content for disease and other abnormalities. However, current deep learning models are very much black boxes, making medical professionals skeptical about integrating these methods into clinical practice. Several methods have been proposed to shed some light on these black boxes, but there is no consensus among the medical doctors who will consume these explanations. This paper presents a study asking medical professionals for their opinion of current state-of-the-art explainable artificial intelligence methods when applied to a gastrointestinal disease detection use case. We compare two categories of explanation methods, intrinsic and extrinsic, and gauge the physicians' assessment of the current value of these explanations. The results indicate that intrinsic explanations are preferred and that physicians see value in the explanations. Based on the feedback collected in our study, future explanations of medical deep neural networks can be tailored to the needs and expectations of doctors. Hopefully, this will contribute to solving the issue of black box medical systems and lead to the successful implementation of this powerful technology in the clinic.
Publisher
PLOS
Citation
Hicks, Storås, Riegler, Midoglu, Hammou, Lange, Parasa, Halvorsen, Strumke. Visual explanations for polyp detection: How medical doctors assess intrinsic versus extrinsic explanations. PLOS ONE. 2024;19(5)
Copyright 2024 The Author(s)