This is an overview page with metadata for this scientific work. The full article is available from the publisher.
From understanding to justifying: Computational reliabilism for AI-based forensic evidence evaluation
7
Citations
4
Authors
2024
Year
Abstract
Techniques from artificial intelligence (AI) can be used in forensic evidence evaluation and are currently applied in biometric fields. However, it is generally not possible to fully understand how and why these algorithms reach their conclusions. Whether and how we should include such 'black box' algorithms in this crucial part of the criminal law system is an open question that has not only scientific but also ethical, legal, and philosophical angles. Ideally, the question should be debated by people with diverse backgrounds. Here, we present a view on the question from the philosophy of science angle: computational reliabilism (CR). CR posits that we are justified in believing the output of an AI system if we have grounds for believing its reliability. Under CR, these grounds are classified into 'reliability indicators' of three types: technical, scientific, and societal. This framework enables debates on the suitability of AI methods for forensic evidence evaluation that take a wider view than explainability and validation. We argue that we are justified in believing the AI's output for forensic comparison of voices and forensic comparison of faces. Technical indicators include the validation of the AI algorithm in itself, validation of its application in the forensic setting, and case-based validation. Scientific indicators include the simple notion that we know faces and voices contain identifying information, along with operationalizing well-established metrics and forensic practices. Societal indicators are the emerging scientific consensus on the use of these methods, as well as their application and interpretation by well-educated and certified practitioners. We expect expert witnesses to rely more on technical indicators to be justified in believing AI systems, and triers-of-fact to rely more on societal indicators to believe the expert witness supported by the AI system.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,620 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,435 citations
Fairness through awareness
2012 · 3,293 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations