This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring the efficacy of ChatGPT in understanding and identifying intimate partner violence
6
Citations
9
Authors
2025
Year
Abstract
Objective: This study examines the efficiency and consistency of ChatGPT in identifying intimate partner violence (IPV), as well as the frequency of emotional and informational support ChatGPT provided.

Background: The integration of artificial intelligence-based conversational large language models, such as ChatGPT, into understanding relationship dynamics has sparked both interest and debate within the scientific community. This tool could be valuable in offering immediate, personalized responses to questions about relationships, including those involving conflict and violence.

Method: We extracted 500 posts involving IPV and 80 posts involving nonviolent family tension from online IPV help-seeking forums and used them as prompts for ChatGPT (Version 3.5). We coded ChatGPT's responses and examined their congruence and consistency in identifying IPV compared with human experts. We also examined incidents in which ChatGPT misjudged. Lastly, we assessed the presence of informational and emotional support in ChatGPT's responses to prompts involving IPV.

Results: ChatGPT-3.5 correctly identified cases involving IPV (physical violence, psychological violence, and controlling behavior) in 91.2% of cases. Misjudgments mostly occurred due to community policies or nuanced contextual information. ChatGPT consistently provided emotional and informational support to users who presented IPV-related inquiries.

Conclusion: ChatGPT-3.5 achieved relatively high accuracy and consistency in identifying IPV and can provide supportive responses.

Implications: ChatGPT can serve as an initial resource for individuals and family members seeking help with IPV, offering immediate, empathetic, and informational support. However, improvements are needed to address its limitations in handling nuanced cases and to ensure ethical use and user safety.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations