dc.contributor.advisor | Kozyri, Elisavet | |
dc.contributor.advisor | Gjerdrum, Anders Tungeland | |
dc.contributor.author | Ingebrigtsen, Marius Johan | |
dc.date.accessioned | 2024-06-18T05:33:44Z | |
dc.date.available | 2024-06-18T05:33:44Z | |
dc.date.issued | 2024-05-15 | en |
dc.description.abstract | Artificial Intelligence (AI) and the underlying Machine Learning (ML) technology are seeing increased application in various areas. Training ML models requires significant amounts of data, and that data may carry restrictions on its permitted usage. High-performing models are often called black boxes because of their complex decision-making processes. The lack of explainability in this technology therefore threatens ML applications' compliance with data restrictions.
Data labels can enforce data restrictions in a system's computational pipeline by being propagated from inputs to procedure outputs. A Label Propagation Mechanism (LPM) can employ an influence-based policy, propagating only the labels of input data that contribute to the computation of the output. However, applying influence-based label propagation to ML is challenging because information is completely cross-tainted inside these models. This thesis proposes an influence-based LPM that employs explanations from Explainable Artificial Intelligence (XAI) to propagate input labels to ML outputs.
This thesis presents a proof of concept for using XAI to propagate to the output of a black-box ML model only the labels of inputs that have a high influence on that output. We first bridge the gap between conventional label propagation and its problematic application in ML. We then detail how LPMs can use XAI explanations to inform their label propagation. Next, we design and execute experiments with different XAI methods, models, and data. We evaluate the results based on the propagated labels and the faithfulness of the explanations to the model output. | en_US |
dc.identifier.uri | https://hdl.handle.net/10037/33828 | |
dc.language.iso | eng | en_US |
dc.publisher | UiT Norges arktiske universitet | no |
dc.publisher | UiT The Arctic University of Norway | en |
dc.rights.holder | Copyright 2024 The Author(s) | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/4.0 | en_US |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) | en_US |
dc.subject.courseID | INF-3990 | |
dc.subject | Traceability | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Artificial Intelligence | en_US |
dc.subject | Explainable Artificial Intelligence | en_US |
dc.title | Label Propagation in Machine Learning Systems: Providing End-to-End Traceability with Explainable Artificial Intelligence | en_US |
dc.type | Mastergradsoppgave | no |
dc.type | Master thesis | en |