This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Exploring Language Patterns in a Medical Licensure Exam Item Bank
Citations: 0 · Authors: 3 · Year: 2021
Abstract
This study examines the use of natural language processing (NLP) models to evaluate whether language patterns used by item writers in a medical licensure exam might contain evidence of biased or stereotypical language. This type of bias in item language choices can be particularly impactful for items in a medical licensure assessment, as it could pose a threat to content validity and defensibility of test score validity evidence. To the best of our knowledge, this is the first attempt using machine learning (ML) and NLP to explore language bias on a large item bank. Using a prediction algorithm trained on clusters of similar item stems, we demonstrate that our approach can be used to review large item banks for potential biased language or stereotypical patient characteristics in clinical science vignettes. The findings may guide the development of methods to address stereotypical language patterns found in test items and enable an efficient updating of those items, if needed, to reflect contemporary norms, thereby improving the evidence to support the validity of the test scores.
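The abstract describes clustering similar item stems and then checking whether patient characteristics in those clusters follow stereotypical patterns. The paper's actual models are not given here, so the following is only a minimal illustrative sketch, assuming hypothetical vignette stems, hand-assigned topic clusters, and a simple frequency skew check in place of the authors' trained NLP prediction algorithm.

```python
from collections import Counter, defaultdict

# Hypothetical mini item bank: each clinical vignette stem paired with a
# topic cluster label. Real stems and clusters are not in the source;
# these examples are purely illustrative.
STEMS = [
    ("A 45-year-old woman presents with fatigue and weight gain", "thyroid"),
    ("A 50-year-old woman reports cold intolerance and hair loss", "thyroid"),
    ("A 60-year-old man presents with crushing chest pain", "cardiac"),
    ("A 55-year-old man reports exertional chest pressure", "cardiac"),
    ("A 48-year-old woman presents with palpitations", "cardiac"),
]

# Simple lexicon mapping surface tokens to a patient-gender attribute.
GENDER_TERMS = {"woman": "female", "man": "male"}

def gender_counts_by_topic(stems):
    """Count gendered patient descriptors within each topic cluster."""
    counts = defaultdict(Counter)
    for text, topic in stems:
        for token in text.lower().split():
            if token in GENDER_TERMS:
                counts[topic][GENDER_TERMS[token]] += 1
    return counts

def flag_skewed(counts, threshold=0.75):
    """Flag clusters where one gender dominates beyond the threshold,
    suggesting the items may warrant human review for stereotyping."""
    flagged = {}
    for topic, c in counts.items():
        total = sum(c.values())
        top_gender, top_n = c.most_common(1)[0]
        if total and top_n / total >= threshold:
            flagged[topic] = top_gender
    return flagged

counts = gender_counts_by_topic(STEMS)
print(flag_skewed(counts))  # here, all "thyroid" items use female patients
```

In practice, the clustering itself would come from an NLP pipeline (e.g., vectorizing stems and grouping them by similarity), and a prediction model rather than a raw frequency cutoff would decide which patterns are flagged; the skew check above stands in for that step only to make the review idea concrete.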
Similar works
Fundamental Considerations in Language Testing
1991 · 4,401 citations
Interpretative Phenomenological Analysis
2020 · 4,054 citations
Implicit memory: History and current status.
1987 · 2,905 citations
Recognizing: The judgment of previous occurrence.
1980 · 2,677 citations
Category Interference in Translation and Picture Naming: Evidence for Asymmetric Connections Between Bilingual Memory Representations
1994 · 2,563 citations