This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Demystifying the Risk of Reidentification in Neuroimaging Data – A technical and regulatory analysis
2 Citations · 3 Authors · 2023
Abstract
Sharing research data has been widely promoted in the field of neuroimaging and has enhanced the rigor and reproducibility of neuroimaging studies. Yet the emergence of novel software tools and algorithms, such as face recognition, has raised concerns due to their potential to reidentify neuroimaging data that are thought to have been deidentified. Despite the surge of privacy concerns, however, the risk of reidentification via these tools and algorithms has not yet been examined outside limited demonstration settings. There is also a pressing need to carefully analyze the regulatory implications of this new reidentification attack, because it has been argued that conventional methods of deidentifying neuroimaging data may no longer comply with regulatory requirements. This study aims to address these gaps through rigorous technical and regulatory analyses. We first tested the generalizability of the matching accuracies reported in recent face recognition studies through a simulation analysis. The results showed that the real-world likelihood of reidentification via face recognition would be substantially lower than that reported in previous studies. Next, taking a US jurisdiction as a case study, we analyzed whether the novel reidentification risk posed by face recognition would affect achieving data deidentification under the current regulatory regime. Our analysis suggests that neuroimaging data defaced with existing tools would still meet the regulatory requirements for data deidentification. We also provide a brief comparison with the EU's General Data Protection Regulation (GDPR). We then examined the implications of NIH's new Data Management and Sharing Policy for the current practice of neuroimaging data sharing, based on the results of our simulation and regulatory analyses. Finally, we discuss future directions for open data sharing in neuroimaging.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 cit.