This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
CHiLL: Zero-shot Custom Interpretable Feature Extraction from Clinical Notes with Large Language Models
Citations: 20
Authors: 4
Year: 2023
Abstract
We propose <b>CHiLL (Crafting High-Level Latents)</b>, an approach for <i>natural-language specification of features for linear models</i>. CHiLL prompts LLMs with expert-crafted queries to generate interpretable features from health records. The resulting noisy labels are then used to train a simple linear classifier. Generating features based on queries to an LLM can empower physicians to use their domain expertise to craft features that are clinically meaningful for a downstream task of interest, without having to manually extract these from raw EHR. We are motivated by a real-world risk prediction task, but as a reproducible proxy, we use MIMIC-III and MIMIC-CXR data and standard predictive tasks (e.g., 30-day readmission) to evaluate this approach. We find that linear models using automatically extracted features are comparably performant to models using reference features, and provide greater interpretability than linear models using "Bag-of-Words" features. We verify that learned feature weights align well with clinical expectations.
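The abstract describes a pipeline in which an LLM answers expert-written yes/no queries about each clinical note, and the resulting binary answers serve as features for a simple linear classifier whose weights remain human-readable. A minimal sketch of that idea is below; the `ask_llm` function is a keyword-matching stand-in for a real zero-shot LLM call, and the queries, notes, and labels are toy illustrations, not data from the paper.

```python
import math

# Expert-crafted yes/no queries (hypothetical examples).
QUERIES = [
    "Does the patient have a history of heart failure?",
    "Was the patient admitted from the emergency department?",
    "Does the note mention medication non-adherence?",
]

def ask_llm(note: str, query: str) -> int:
    """Stand-in for a zero-shot LLM answer: crude keyword match on the
    query's last word. A real system would prompt an LLM here."""
    keyword = query.split()[-1].rstrip("?").lower()
    return int(keyword in note.lower())

def featurize(note: str) -> list:
    """One binary feature per expert query."""
    return [ask_llm(note, q) for q in QUERIES]

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain-Python logistic regression (SGD), so that each learned
    weight maps directly back to a human-readable query."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

# Toy notes and labels (e.g., 30-day readmission), for illustration only.
notes = [
    "Pt with chronic heart failure, poor adherence to meds.",
    "Routine follow-up, stable, no acute issues.",
]
labels = [1, 0]

X = [featurize(n) for n in notes]
w, b = train_logreg(X, labels)
for q, wj in zip(QUERIES, w):
    print(f"{wj:+.2f}  {q}")
```

Because each coefficient is tied to a natural-language query rather than a word-count dimension, clinicians can inspect whether the learned weights match clinical expectations, which is the interpretability advantage the abstract claims over Bag-of-Words baselines.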
Related Works
"Why Should I Trust You?"
2016 · 14,564 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,840 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,407 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,882 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,484 citations