Artificial intelligence-generated informed patient consent in various ophthalmological procedures: A comparative study of correctness, completeness, readability, and real-world application between Deepseek and ChatGPT 4o
0 citations · 2 authors · Year: 2026
Abstract
Dear Editor,

We read with considerable interest the article "Artificial intelligence-generated informed patient consent in various ophthalmological procedures: A comparative study of correctness, completeness, readability, and real-world application between Deepseek and ChatGPT 4o" by Das et al.,[1] published in the Indian Journal of Ophthalmology (IJO) on September 25, 2025. As artificial intelligence (AI) rapidly permeates clinical medicine, including ophthalmology, evidence evaluating the quality and legal robustness of AI-generated informed consent documents offers immense value for future practice and policymaking.

Das et al. conducted a cross-sectional observational study at a tertiary eye hospital in India.[1] They selected ten common ophthalmological procedures for which standardized scenarios were fed to two public AI chatbots, ChatGPT 4o and Deepseek. The responses were scrutinized for correctness, completeness, language/readability, presence of additional or irrelevant information, and real-world applicability in Indian clinical contexts. The study found that ChatGPT-generated consents were shorter (fewer words and sentences), while Deepseek produced longer, more readable documents but required more prompt attempts. Deepseek's outputs scored better on the Flesch-Kincaid and Gunning Fog readability indices, indicating easier comprehension. Nonetheless, 40% of the consents created by both AI models were considered unfit for Indian medical scenarios: they were incomplete, inaccurate, or lacked contextualized references to Indian legal requirements.

Several key limitations should be highlighted for further research and clinical translation:

1. Despite substantial effort, 40% of the AI-generated consents were not considered suitable for use in India, underscoring the current limitations of these models in localizing content for Indian legal, ethical, and cultural contexts. Future iterations should integrate region-specific normative requirements.[1]
2. The AI models used are trained on global databases and potentially lack Indian biomedical, legal, and linguistic content. The resulting consents may omit critical local statutory language, legal waivers, or documentation requirements (e.g., inclusion of witness signatures).[1]
3. The expert raters were all ophthalmologists from a single tertiary centre, potentially limiting generalizability. Expanding the assessment to legal experts, ethicists, and clinicians from varied Indian regions could offer richer perspectives for future validation.
4. The study neither stratified consents by subspecialty or procedural risk nor evaluated AI chatbot performance in high-risk, multi-step, or emergent procedures, where accuracy and legal thoroughness are even more critical. This gap should be addressed in future research.
5. While readability was assessed using standard indices, direct patient or surrogate comprehension and acceptance were not measured. Incorporating lay patient feedback may help optimize consent construction for real-world uptake.

Implications and Future Directions

The findings from Das et al. reinforce the need for caution in adopting AI-generated consent without rigorous local adaptation. As AI models improve and begin to integrate region-specific legal and cultural content, their use could enhance efficiency and personalization and potentially reduce clinician workload. However, the shortcomings highlighted by this pilot study must be rectified before widespread clinical implementation, including regulatory review, legal endorsement, and multi-stakeholder vetting.[2,3] Furthermore, the study points toward future applications in which AI might not merely generate static consent documents but actively support the physician-patient dialogue, providing dynamic, interactive, and personalized information.

Conclusion

Das et al. provide a timely, well-conceived pilot comparison of AI-based informed consent generation in ophthalmology that sets the stage for further improvement and validation.[1] While Deepseek demonstrates more detailed and readable outputs, both models presently fall short on critical medico-legal metrics for the Indian scenario. Concerted efforts must direct future AI development toward training with locally sourced datasets and legal guidelines. A multidisciplinary approach involving clinicians, legal experts, and patient representatives will be central to ensuring that AI-generated consent fulfills both its legal and ethical obligations.

Financial support and sponsorship: Nil.
Conflicts of interest: There are no conflicts of interest.