This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial intelligence technology in the clinical analysis of a patient with a mental disorder (case report)
Citations: 0 · Authors: 4 · Year: 2025
Abstract
The rapid growth of artificial intelligence (AI) technology has led to its integration into various fields, including medicine and, in particular, psychiatry. Modern AI systems, such as neural networks and large language models (LLMs), claim to be able to diagnose, prescribe treatments, and predict the course of mental disorders based on historical and clinical data. The ability of neural networks to identify hidden patterns makes them an essential component of scientific research, and these advances are expected to be implemented in clinical practice in the near future. AI technology also shows great potential in medical education. This study was conducted to evaluate the capabilities of a ChatGPT-4-based system in the clinical analysis of a patient with a mental disorder. The analysis involved comparing the results of the LLM's analysis of clinical data with psychiatrists' clinical analysis. The findings showed that the LLM demonstrated high potential in formulating psychiatric diagnoses within an operational framework and in supporting the algorithm for therapy and prognosis based on an evidence-based approach. However, at the current stage of development, neural networks are unable to fully implement dynamic and phenomenological analysis of a clinical case, which significantly limits diagnostic accuracy. This clinical case emphasizes the importance of considering patients' cultural and psychological background when conceptualizing their phenomenological experiences. Assessing subjective experience in a biopsychosocial context and considering temporal dynamics allows for the most accurate diagnosis. Work on "neuromorphizing" neural networks and adapting their analytical apparatus to human thinking patterns seems promising.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations