OpenAlex · Updated hourly · Last updated: 2026-04-24, 09:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

CHiLL: Zero-shot Custom Interpretable Feature Extraction from Clinical Notes with Large Language Models

2023 · 20 citations · 4 authors · Open Access

Abstract

We propose CHiLL (Crafting High-Level Latents), an approach for natural-language specification of features for linear models. CHiLL prompts LLMs with expert-crafted queries to generate interpretable features from health records. The resulting noisy labels are then used to train a simple linear classifier. Generating features based on queries to an LLM can empower physicians to use their domain expertise to craft features that are clinically meaningful for a downstream task of interest, without having to manually extract these from raw EHR. We are motivated by a real-world risk prediction task, but as a reproducible proxy, we use MIMIC-III and MIMIC-CXR data and standard predictive tasks (e.g., 30-day readmission) to evaluate this approach. We find that linear models using automatically extracted features are comparably performant to models using reference features, and provide greater interpretability than linear models using "Bag-of-Words" features. We verify that learned feature weights align well with clinical expectations.
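The pipeline the abstract describes can be sketched in a few lines: expert-written yes/no queries are posed to an LLM for each clinical note, the binary answers become interpretable features, and a simple logistic classifier is trained on them. This is a minimal illustrative sketch, not the authors' implementation: `ask_llm` is a hypothetical stand-in for a real LLM call (here a keyword check), and the notes, queries, and labels are toy data.

```python
import math

# Expert-crafted yes/no queries (toy examples, not from the paper).
QUERIES = [
    "Does the note mention heart failure?",
    "Does the note mention prior readmission?",
]

def ask_llm(note: str, query: str) -> int:
    """Placeholder for an LLM yes/no answer; simulated via a keyword check."""
    keyword = query.split("mention ")[1].rstrip("?")
    return int(keyword in note.lower())

def featurize(note: str) -> list:
    """One binary, human-readable feature per expert query."""
    return [ask_llm(note, q) for q in QUERIES]

def train_logistic(X, y, lr=0.5, epochs=200):
    """Minimal logistic regression via stochastic gradient descent."""
    w = [0.0] * (len(X[0]) + 1)  # last weight is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi + [1.0]))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log loss w.r.t. z
            for j, xj in enumerate(xi + [1.0]):
                w[j] -= lr * g * xj
    return w

# Toy notes and outcomes (e.g. 30-day readmission).
notes = [
    "Patient with heart failure and prior readmission.",
    "Routine checkup, no issues.",
    "History of heart failure.",
    "Healthy follow-up visit.",
]
labels = [1, 0, 1, 0]

X = [featurize(n) for n in notes]
w = train_logistic(X, labels)

# Each learned weight pairs with one human-readable query, which is
# what makes the linear model directly inspectable by clinicians.
for q, wj in zip(QUERIES, w):
    print(f"{wj:+.2f}  {q}")
```

The interpretability claim in the abstract corresponds to the final loop: each coefficient can be read off against the natural-language query that produced its feature, unlike a Bag-of-Words model with thousands of token weights.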
