Show simple item record

dc.contributor.advisor: Mahler, Tobias
dc.contributor.author: Hauglid, Mathias Karlsen
dc.date.accessioned: 2024-04-22T09:40:23Z
dc.date.available: 2024-04-22T09:40:23Z
dc.date.embargoEndDate: 2028-04-26
dc.date.issued: 2024-04-26
dc.description.abstract: Among the most promising utilisations of Artificial Intelligence in healthcare are Clinical Decision Support (AI-CDS) systems: AI systems that support clinical assessments and decision-making by producing relevant classifications and predictions that may be relied on by clinicians and patients. However, an oft-cited concern is that AI systems can be ‘biased’ – a nebulous term often used to describe certain undesirable side-effects of AI. There is widespread apprehension that the use of AI to aid decision-making might, due to the presence of ‘bias’, produce results that amount to discrimination. In response to various risks associated with AI systems, the EU legislature has proposed a common European regulatory framework for AI systems – the Artificial Intelligence Act. While facilitating innovation and trade, the AI Act aims to ensure the effective protection of the safety and fundamental rights of EU citizens, including the right to non-discrimination. In particular, the proposed AI Act will require that certain preventive measures be taken to ensure compliance with applicable requirements before AI systems can lawfully be deployed in the EU. These preventive measures may, in one form or another, require that discrimination in AI systems be assessed before deployment. Assessing discrimination in an AI-CDS system before its deployment calls for the development of appropriate assessment methodologies. The objective of the thesis is to develop certain methodological elements for assessing discrimination in these systems in a pre-deployment setting. More precisely, the thesis sets out to develop considerations, principles, criteria, and methods that ought to be included in a pre-deployment discrimination assessment based on the non-discrimination principle in EU law. As a foundation for the development of these methodological elements, the thesis explores the issue of ‘bias’ in AI-CDS systems and the mechanisms through which equality-related biases may occur in these systems. (en_US)
dc.description.doctoraltype: ph.d. (en_US)
dc.description.popularabstract: There are widespread expectations that Artificial Intelligence may revolutionise healthcare. The use of AI as clinical decision support may contribute to faster, more efficient, more accessible, and more accurate medical care. However, AI technologies are often biased to the detriment of certain patient groups. Could biases in AI systems cause discrimination in healthcare? And how may one assess discrimination in an AI system before it is deployed? (A minimal illustrative sketch of one such quantitative check follows the record below.) (en_US)
dc.identifier.isbn: 978-82-93021-46-9 (en_US)
dc.identifier.uri: https://hdl.handle.net/10037/33423
dc.language.iso: eng (en_US)
dc.publisher: UiT The Arctic University of Norway (en_US)
dc.publisher: UiT Norges arktiske universitet (en_US)
dc.rights.holder: Copyright 2024 The Author(s)
dc.subject.courseID: DOKTOR-005
dc.subject: artificial intelligence (en_US)
dc.subject: bias (en_US)
dc.subject: discrimination (en_US)
dc.subject: health technology (en_US)
dc.subject: EU law (en_US)
dc.title: Bias and Discrimination in Clinical Decision Support Systems Based on Artificial Intelligence (en_US)
dc.type: Doctoral thesis (en_US)
dc.type: Doktorgradsavhandling (en_US)
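
Illustrative aside: the thesis develops legal assessment methodology, not software, but the popular abstract asks how discrimination in an AI system may be assessed before deployment. The minimal Python sketch below shows one kind of quantitative group-fairness check that such a pre-deployment assessment of a binary AI-CDS classifier might include. All function names, variable names, and data here are hypothetical and are not taken from the thesis.

# Minimal sketch of a pre-deployment group-fairness check for a binary
# clinical classifier. Purely illustrative; not the thesis's method.
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Compare selection rates and true-positive rates across patient groups.

    y_true : ground-truth outcomes (0/1)
    y_pred : classifier predictions (0/1)
    group  : protected-attribute label (e.g. a patient group) per sample
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    per_group = {}
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()          # P(pred = 1 | group = g)
        positives = mask & (y_true == 1)
        # True-positive rate; NaN if the group has no positive cases.
        tpr = y_pred[positives].mean() if positives.any() else float("nan")
        per_group[g] = {"selection_rate": selection_rate, "tpr": tpr}
    rates = [v["selection_rate"] for v in per_group.values()]
    tprs = [v["tpr"] for v in per_group.values()]
    return {
        "per_group": per_group,
        # 'Demographic parity' gap: spread in selection rates across groups.
        "demographic_parity_gap": max(rates) - min(rates),
        # 'Equal opportunity' gap: spread in true-positive rates across groups.
        "equal_opportunity_gap": max(tprs) - min(tprs),
    }

# Hypothetical toy usage:
if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 1]               # clinician-confirmed outcomes
    y_pred = [1, 0, 0, 1, 0, 1]               # classifier predictions
    group = ["A", "A", "B", "B", "A", "B"]    # patient group labels
    print(fairness_report(y_true, y_pred, group))

A real assessment under the non-discrimination principle would of course go well beyond such summary statistics; the sketch only indicates where quantitative evidence could enter a pre-deployment review.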

