OpenAlex · Updated hourly · Last updated: 12 March 2026, 07:31

This is an overview page with metadata for this scholarly article. The full article is available from the publisher.

More Than Just a Pretty Face? Nudging and Bias in Chatbots

2023 · 5 citations · Annals of Internal Medicine
Open full text at the publisher

Citations: 5
Authors: 4
Year: 2023

Abstract

Ideas and Opinions · Published 6 June 2023 · https://doi.org/10.7326/M23-0877

Marlee Akerson, BA; Matt Andazola, MPH; Annie Moore, MD, MBA; Matthew DeCamp, MD, PhD

Affiliations:
Center for Bioethics and Humanities, University of Colorado Anschutz Medical Campus, Aurora, Colorado (M. Akerson)
UCHealth, Denver, Colorado (M. Andazola)
Division of General Internal Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado (A. Moore)
Center for Bioethics and Humanities and Division of General Internal Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado (M. DeCamp)

The sudden and shocking appearance of ChatGPT (OpenAI)—able to write scientific articles, pass medical licensing examinations, fetch CPT (Current Procedural Terminology) codes, and develop differential diagnoses (1, 2)—raises immediate questions about how health systems will use conversational artificial intelligence, or chatbots, in patient-facing contexts. ChatGPT may catalyze expansion of this technology's uses in patient communication. Chatbots are already using other natural language processing methods to check COVID-19 symptoms, manage chronic diseases, support mental health treatment, and deliver genetic test results (3). Chatbots promise to support medical education, research, and practice but not without peril. They raise ethical issues around safety, ...

References

1. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2:e0000198. [PMID: 36812645] doi:10.1371/journal.pdig.0000198
2. Schinkel M, Paranjape K, Nanayakkara P. Written by humans or artificial intelligence? That is the question [Editorial]. Ann Intern Med. 2023;176:572-573. [PMID: 36913691] doi:10.7326/M23-0154
3. McGreevey JD, Hanson CW, Koppel R. Clinical, legal, and ethical aspects of artificial intelligence-assisted conversational agents in health care. JAMA. 2020;324:552-553. [PMID: 32706386] doi:10.1001/jama.2020.2724
4. Liao Y, He J. Racial mirroring effects on human-agent interaction in psychotherapeutic conversations. In: Paternò F, Oliver N, Conati C, et al., eds. IUI '20: Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17-20 March 2020. Association for Computing Machinery; 2020:430-442. doi:10.1145/3377325.3377488
5. Takeshita J, Wang S, Loren AW, et al. Association of racial/ethnic and gender concordance between patients and physicians with patient experience ratings. JAMA Netw Open. 2020;3:e2024583. [PMID: 33165609] doi:10.1001/jamanetworkopen.2020.24583
6. Thaler RH, Sunstein CR. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press; 2008.
7. Blumenthal-Barby JS. Good Ethics and Bad Choices: The Relevance of Behavioral Economics for Medical Ethics. MIT Press; 2021.
8. Eyssel F, Hegel F. (S)he's got the look: gender stereotyping of robots. J Appl Soc Psychol. 2012;42:2213-2230. doi:10.1111/j.1559-1816.2012.00937.x
9. Darcy A, Daniels J, Salinger D, et al. Evidence of human-level bonds established with a digital conversational agent: cross-sectional, retrospective observational study. JMIR Form Res. 2021;5:e27868. [PMID: 33973854] doi:10.2196/27868
10. Rajkomar A, Hardt M, Howell MD, et al. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169:866-872. [PMID: 30508424] doi:10.7326/M18-1990

Author, Article, and Disclosure Information

Grant Support: By a Making a Difference Grant from the Greenwall Foundation ("The Chatbot is In: Ethics and Conversational AI in Health Care").
Disclosures: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M23-0877.
Corresponding Author: Matthew DeCamp, MD, PhD, Center for Bioethics and Humanities and Division of General Internal Medicine, University of Colorado Anschutz Medical Campus, Mail Stop B137, 13080 East 19th Avenue, Aurora, CO 80045; e-mail, matthew.decamp@cuanschutz.edu.
Author Contributions: Conception and design: M. Akerson, M. DeCamp, A. Moore. Analysis and interpretation of the data: M. Akerson, M. DeCamp, A. Moore. Drafting of the article: M. Akerson, M. Andazola, M. DeCamp. Critical revision of the article for important intellectual content: M. DeCamp, A. Moore. Final approval of the article: M. Akerson, M. Andazola, M. DeCamp, A. Moore. Provision of study materials or patients: M. Akerson. Obtaining of funding: M. DeCamp, A. Moore. Administrative, technical, or logistic support: M. Akerson, M. Andazola, M. DeCamp, A. Moore. Collection and assembly of data: M. Akerson.
This article was published at Annals.org on 6 June 2023.

Issue: July 2023, Volume 176, Issue 7, Pages 997-998
ePublished: 6 June 2023
Keywords: Artificial intelligence; Bioethics; Health care; Health equity
Copyright © 2023 by American College of Physicians. All Rights Reserved.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · COVID-19 Diagnosis Using AI · AI in Service Interactions