This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
On Patient Safety: The Lure of Artificial Intelligence—Are We Jeopardizing Our Patients’ Privacy?
19 citations · 1 author · 2020
Abstract
The use of artificial intelligence (AI) in health care has generated tremendous interest within orthopaedics [10, 11, 13, 18, 23]. The promise of machine learning or so-called adaptive algorithms that can, for instance, predict the need for (and longevity of) joint replacement in specific patients by analyzing enormous datasets [5, 17] has captured the imagination of software developers, device makers, and healthcare providers, who are racing to design such systems. Promising as these systems are, we should familiarize ourselves with the many serious safety risks associated with AI [14]. Foremost among these are patient-privacy concerns. Technology companies need vast quantities of patient data to build and train functioning machine-learning AI systems. Google Inc, for example, is training its systems using millions of patient records from Ascension, the country’s second-largest hospital system [21]. These alliances between technology companies and medical providers can put patient privacy at risk in multiple ways. The most obvious concern with widely shared electronic data is security breaches. Data breaches resulting in the release of millions of patients’ private medical information have already become all too common [8]. Furthermore, the deidentification of patient data prior to sharing, as required by the Health Insurance Portability and Accountability Act (HIPAA), is not proof against subsequent reidentification [15]. When HIPAA was passed into law, large-scale data sharing between technology companies and healthcare providers could not have been imagined. And so, while the law does require deidentification of patient data prior to sharing, there is no legal injunction barring technology firms (or advertisers or any other noncovered entities) from reidentifying patient records. Reidentification becomes possible once sufficient quantities of overlapping data become available across multiple datasets, via a method called “data triangulation”.
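To make "data triangulation" concrete, the sketch below joins a deidentified medical dataset against a public dataset (such as a voter roll) on shared quasi-identifiers. All names, records, and field choices here are invented for illustration; the general approach mirrors well-known reidentification demonstrations using ZIP code, birth date, and sex.

```python
# Hypothetical illustration of "data triangulation": a deidentified medical
# dataset (names removed) is joined against a public dataset that includes
# names, using overlapping quasi-identifiers. All records are invented.

deidentified_medical = [
    {"zip": "02138", "dob": "1954-07-31", "sex": "F", "diagnosis": "hip osteoarthritis"},
    {"zip": "90210", "dob": "1980-01-15", "sex": "M", "diagnosis": "ACL tear"},
]

public_records = [  # e.g., a voter roll or marketing list with names attached
    {"name": "A. Example", "zip": "02138", "dob": "1954-07-31", "sex": "F"},
    {"name": "B. Sample", "zip": "10001", "dob": "1975-03-02", "sex": "M"},
]

def triangulate(medical, public):
    """Link 'anonymous' medical records to named people by matching
    the quasi-identifiers (ZIP, birth date, sex) across both datasets."""
    matches = []
    for m in medical:
        for p in public:
            if (m["zip"], m["dob"], m["sex"]) == (p["zip"], p["dob"], p["sex"]):
                matches.append({"name": p["name"], "diagnosis": m["diagnosis"]})
    return matches

print(triangulate(deidentified_medical, public_records))
# One match: the "anonymous" diagnosis is now tied to a named individual.
```

Neither dataset is identifying on its own; it is the overlap between them that unmasks the patient, which is why deidentification alone is not proof against reidentification.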
A well-known example of this technique is digital “fingerprinting”—the collection of seemingly innocuous data bits gleaned from our smartphone applications and other online devices [1, 3]. This digital fingerprint then allows technology firms to track us—both across the web and by physical location. Once enough information about any device becomes known, hiding that device becomes extremely difficult [3]. As more medical data is entered into datasets, the risk of using data triangulation to match specific patients to their health records becomes more real [14]. For example, even prior to the Cambridge Analytica data scandal [5], Facebook sought to collect anonymized patient data from major medical providers to match up with the user data Facebook itself had collected, using a fingerprinting technique known as “hashing” [6]. This would have allowed Facebook—and any entity to which they sold the data—to obtain patients’ most sensitive information, including sexual histories, disease states, and use of medications. This reidentified patient data can then move freely—and legally—across the web just like any other dataset. In addition to all the unsavory actors who might hope to use this data, an entire industry of data aggregators pools and packages consumer information like this for analysis and resale across the internet [4]. I believe that current safeguards and laws are inadequate for protecting our patients’ privacy as AI moves into health care, and I also believe patient privacy will be difficult to ensure as more data are shared across platforms and more personal information, including genomic data, is transmitted. Our patients should have a voice in the use of their own medical data. Every healthcare provider—from physician offices to national healthcare systems—must prioritize patient privacy in any collaborative effort with technology companies to develop AI systems.
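Hash-based matching of the kind described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a description of any company's actual system: each party hashes a shared identifier (here, an email address) and compares digests, so records can be linked without either side ever exchanging the raw identifier.

```python
import hashlib

# Hypothetical sketch of hash-based record linking ("hashing"): two parties
# hash a normalized shared identifier and compare the digests. The email
# addresses and record contents below are invented for this example.

def fingerprint(identifier: str) -> str:
    """Normalize an identifier and return its SHA-256 hex digest."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

# A provider's records, keyed by hashed email (no plaintext email shared).
hospital_records = {fingerprint("jane@example.com"): {"condition": "knee replacement"}}

# A platform's user records, keyed the same way.
platform_users = {fingerprint("Jane@Example.com"): {"profile": "user_42"}}

# Because normalization makes the digests identical, the platform can
# attach the medical record to its own user profile without ever seeing
# the hospital's plaintext identifier.
linked = {}
for digest, profile in platform_users.items():
    if digest in hospital_records:
        linked = {**profile, **hospital_records[digest]}
print(linked)  # {'profile': 'user_42', 'condition': 'knee replacement'}
```

The hash hides the identifier in transit, but once both parties hold the same digest it functions as a join key, which is exactly what makes the technique useful for reidentification.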
HIPAA is in dire need of reform and modernization to match the capabilities of our technological age. When HIPAA was adopted in 1996, only 20 million Americans used the internet, and they spent an average of half an hour per month on the web [14]; now 92% of Americans spend an average of nearly 24 hours a week on the internet [12]. Regulation must be modernized to match this change in our use of the digital world. All medical records across all platforms should be legally protected from reidentification, ideally by new federal statute, and a revised HIPAA should grant individual patients the right to sue for personal damages from negligent providers (currently, providers are subject to federal fines for HIPAA violations, but injured patients must seek redress via a patchwork of state laws [9]—in a way similar to how consumers could respond to any other personal data breach). Furthermore, we should recognize our patients’ health records for what they are—our patients’ records. All health providers—including orthopaedic surgeons—should obtain explicit patient consent (and offer reasonable financial compensation [19]) prior to sharing health records with technology firms working to monetize patient data by building AI systems. Patients should first be made aware of the risks of data reidentification and the number of data breaches involving medical information that have already occurred. As of this writing, I do not believe patients generally have any idea that their medical data is being widely shared and subject to the risks enumerated above; this is highly unfair to patients and must end. Individual orthopaedic providers and our state professional societies also can and should engage with our state government representatives to convey the urgency of passing legislation that augments patient privacy protections.
The California Consumer Privacy Act [22] could be used as a starting point for reasonable reform, as it grants consumers many rights over the use of their data, including monetary damages for unauthorized data disclosure [2]. In 1984, George Orwell’s protagonist muses, “Nothing was your own except the few cubic centimeters inside your skull” [16], as he ponders his complete lack of privacy. Orwell’s vision in the novel is a technologically enabled dystopian world; we must actively guard against it becoming our reality. As you read this, Facebook and Neuralink are building algorithms to push things a step beyond Orwell by reading brain waves [20]. The goal may be admirable—to help patients with paralysis control assistive devices—but such an algorithm also highlights how intrusive AI in medicine can be. We must enact both legal and provider-based safeguards to ensure our patients’ privacy as technology companies mine millions of our patients’ records to build new, ever-more-powerful AI systems.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations