This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Large Language Models for Efficient Mental Health Parity Oversight
Citations: 1
Authors: 1
Year: 2025
Abstract
OBJECTIVE: The author examined whether a large language model (LLM) can help identify noncompliance with the Mental Health Parity and Addiction Equity Act (MHPAEA) in health insurance plan documents. METHODS: Using Anthropic's Claude 3.5 Sonnet between December 1, 2024, and January 31, 2025, the author analyzed primary documentation for the Essential Health Benefits benchmark plans for 2026. An LLM prompt was first validated, and the author assessed the LLM's positive predictive value (PPV) in applying that prompt to identify areas of potential MHPAEA noncompliance. The LLM then prioritized the top 10 areas of noncompliance among those accurately identified. RESULTS: The LLM identified on average 3.8 areas of potential noncompliance per document, with an average PPV of 49%. CONCLUSIONS: The findings indicate that LLMs currently have a relatively poor PPV in regulatory oversight tasks but may help improve efficiency by enabling rapid identification of potential MHPAEA noncompliance to prioritize areas for further review.
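For context, positive predictive value is the fraction of flagged items that turn out to be true positives. A minimal sketch of the calculation, using hypothetical counts for illustration (not data from the study):

```python
def ppv(true_positives: int, false_positives: int) -> float:
    """Positive predictive value: share of flagged items that are correct."""
    flagged = true_positives + false_positives
    return true_positives / flagged if flagged else 0.0

# Hypothetical illustration: if an LLM flagged 100 areas of potential
# noncompliance and reviewers confirmed 49 of them, the PPV would be
# 0.49, matching the ~49% average reported in the abstract.
print(round(ppv(49, 51), 2))  # 0.49
```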
Related Works
The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods
2009 · 5,714 citations
The Stress Process
1981 · 4,480 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,793 citations
Cross-national prevalence and risk factors for suicidal ideation, plans and attempts
2008 · 2,633 citations
Psychological Aspects of Natural Language Use: Our Words, Our Selves
2002 · 2,557 citations