Instruction-guided deidentification with synthetic test cases for Norwegian clinical text
Permanent link
https://hdl.handle.net/10037/35695
Date
2024
Type
Journal article
Peer reviewed
Author
Lund, Jørgen Aarmo; Burman, Per Joel; Woldaregay, Ashenafi Zebene; Jenssen, Robert; Mikalsen, Karl Øyvind
Abstract
Deidentification methods, which remove directly identifying information, can be useful tools to mitigate the privacy risks associated with sharing healthcare data. However, benchmarks to evaluate deidentification methods are themselves often derived from real clinical data, making them sensitive and therefore harder to share and apply. Given the rapid advances in generative language modelling, we would like to leverage large language models to construct freely available deidentification benchmarks, and to assist in the deidentification process. We apply the GPT-4 language model to construct, for the first time, a publicly available dataset of synthetic Norwegian discharge summaries with annotated identifying details, consisting of 1200 summaries averaging 100 words each. In our sample of documents, we find that the generated annotations agree closely with human annotations, with an F1 score of 0.983. We then examine whether large language models can be applied directly to perform deidentification themselves, proposing methods where an instruction-tuned language model is prompted to either annotate or redact identifying details. Comparing the methods on our synthetic dataset and the NorSynthClinical-PHI dataset, we find that GPT-4 underperforms the baseline method proposed by Bråthen et al. [1], suggesting that named entity recognition problems are still challenging for instruction-tuned language models.
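
The redaction variant described in the abstract amounts to a single instruction-following call per document. The sketch below is only an illustrative assumption, not the authors' implementation: the system prompt, model choice, and example note are hypothetical, and it assumes the OpenAI Python client with an API key available in the environment.

# Minimal sketch (not the paper's actual prompts or pipeline): prompting an
# instruction-tuned model to redact identifying details in a clinical note.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt; the paper's own prompts may differ.
SYSTEM_PROMPT = (
    "You are a deidentification assistant for Norwegian clinical text. "
    "Replace every directly identifying detail (names, dates, addresses, "
    "phone numbers, ID numbers, institutions) with the placeholder "
    "[REDACTED] and return only the redacted note."
)

def redact(note: str, model: str = "gpt-4") -> str:
    """Ask the model to return a redacted copy of the note."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical synthetic note, not taken from the published dataset.
    note = (
        "Pasient Ola Nordmann, født 01.02.1960, ble innlagt ved UNN Tromsø "
        "15. mars 2023 etter henvisning fra fastlege Kari Hansen."
    )
    print(redact(note))

The annotation variant described in the abstract would instead ask the model to mark identifying spans rather than replace them, so the output can be scored against human annotations with entity-level F1.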
Publisher
PMLR
Citation
Lund, Burman, Woldaregay, Jenssen, Mikalsen. Instruction-guided deidentification with synthetic test cases for Norwegian clinical text. Proceedings of Machine Learning Research (PMLR). 2024;233:145-152.
Copyright 2024 The Author(s)