OpenAlex · Updated hourly · Last updated: 22.03.2026, 10:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Toward Trustworthy Pediatric AI: A Call to Action From the National Academy of Medicine

2025 · 2 citations · PEDIATRICS

Citations: 2 · Authors: 4 · Year: 2025

Abstract

Artificial intelligence (AI) is increasingly prevalent in pediatrics. A steady stream of AI-enabled tools is emerging to enhance preventive care, improve diagnostic precision, and expand access to care in underserved communities. Examples include expanding translation services, assessing gross motor performance with wearables,1 supporting autism screening,2 identifying adolescents at risk for suicide,3 and conducting tuberculosis screening by local health workers in rural Indian clinics.4

AI must support the unique needs of children, including their rapidly changing development and dependence on adult decision-makers. Systems trained predominantly on adult data can misinterpret pediatric signs and symptoms, underestimate risk, or mislead, as recently reported when adult models misinterpreted normal pediatric images as pathological.5 Moreover, children’s limited capacity to be informed about risks, their vulnerability to bias, and the long-term implications of algorithmic decisions shape the development and use of AI differently than for adults. For instance, a child may not realize that an AI-generated video discovered through a chatbot promotes unhealthy behavior or does not have their best interests at heart. These concerns underscore the urgent need for pediatricians to take a leading role in shaping child-focused AI and in guiding families on its use.

To promote responsible AI in health care, the National Academy of Medicine (NAM), a nongovernmental institution that advises the nation on health and medical issues, developed the Artificial Intelligence Code of Conduct for Health and Medicine (AICC).6 This report distills the NAM’s decades of ethical guidance, cross-sector collaboration, and systems thinking into core commitments aimed at guiding responsible AI development and deployment.
Unlike standalone AI frameworks from the World Health Organization or the US Food and Drug Administration,7 the AICC is integrated within the NAM’s vision of learning health systems: systems in which science, informatics, incentives, and culture are aligned for continuous improvement and innovation. This vision, now in place in numerous academic medical centers, supports seamless integration of best practices and new knowledge into care processes, generation of new knowledge as a byproduct of care delivery, and active participation of patients and families.

While the AICC does not specifically address children, it is a blueprint to help child health advocates ensure that AI respects the complexity and dignity of childhood. Below, we explore the implications of the six “Code Commitments” for pediatrics and offer specific actions for child health advocates.

Clinician empathy, shared decision-making, and cultural sensitivity must remain central. For example, an AI system that identifies children at risk for developmental delays based on electronic health record data should incorporate family and societal values, not just clinical thresholds. Pediatricians and developers should work to ensure these tools align with what families and communities consider “healthy,” beyond just the absence of disease. AI in child health care must also nurture connections between parents and their children, and the connection of parents and children to clinical teams.

The AICC calls for professional organizations (eg, American Academy of Pediatrics, American Academy of Family Physicians) to develop standards for assessing AI tool alignment with societal and cultural goals related to children’s health and to support caregivers in adopting AI that integrates into the child and family’s values, beliefs, and needs.
Such a standard could, for example, require that pediatric AI tools generate caregiver-friendly explanations in multiple languages and account for cultural, social, and economic contexts when recommending care.

Pediatric AI must help close health inequities. For example, if an asthma prediction model works better in white children than in Black children because of biased training data, it worsens inequities. One of the best ways to close such gaps is to require datasets that accurately reflect the diverse realities of all children. To date, this requirement has been lacking.3,8,9 Without comprehensive pediatric data, AI risks perpetuating inequities related to gender, race, disability, and socioeconomic status. Pediatricians and professional organizations should advocate for data equity as a scientific and ethical imperative, for example, by collaborating with underrepresented communities to enrich pediatric data resources and AI tool access and by evaluating AI’s fitness for use in a range of pediatric settings.

When possible, children and families should be included in decisions about how pediatric AI is designed and used, as they have varying levels of acceptance and trust in the use of AI in pediatrics.10 For example, parents could be included in the design process of an AI tool for neonatal intensive care unit discharges to ensure it respects their preferences, cultural values, and caregiving capacity. Although children often cannot participate alone in traditional governance, families can be engaged. Local governance efforts must include parent and youth voices and ensure transparency in how AI influences care decisions. One approach is creating pediatric AI advisory councils composed of caregivers, clinicians, and adolescents.11

AI offers tools to reduce clinician burnout, especially by automating documentation, supporting decision-making, and providing analysis of patient-contributed data such as headache logs.
Pediatricians should work with health systems and vendors to shape AI adoption in ways that genuinely improve professional satisfaction, so that clinicians feel supported rather than surveilled. For instance, while there is evidence that ambient scribing tools used in adult care can reduce after-hours note completion and improve note quality by capturing details the clinician might otherwise have forgotten, the benefits in pediatrics still appear mixed.12

Tools and metrics built for evaluating AI use in adults cannot be assumed to be safe or effective in pediatric populations. Recent experiences with tools such as AI sepsis warnings and patient-facing tools such as character.ai remind us of the myriad potential threats to safety and equity.13 Performance metrics must advance humane care, and evaluation should learn from the cybersecurity practice known as “red teaming,” in which testing proactively uncovers, investigates, and fixes risks before they can be exploited. In the case of AI performance, a “pediatric red team” would identify safety and ethical vulnerabilities that AI might inadvertently exploit in child health care. For example, a team could test a planned AI pediatric emergency department triage tool to detect whether it under-triages children with limited English proficiency or with neurodevelopmental disorders; both groups may present atypically or be poorly represented in training data. “Red team” participants could include pediatric clinicians, parents, adolescents, AI engineers, human factors experts, educators, ethicists, patient safety officers, legal advisors, and communication experts, all with an equal opportunity to share their perspectives.

AI offers the opportunity to dramatically increase the data captured in each encounter with a child. By using these data, clinicians participating in learning health systems can track the real-world impacts of AI and contribute to iterative improvement.
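One quantitative check a “pediatric red team” might run, as in the triage example above, is a simple subgroup audit: comparing how often the model under-triages truly high-acuity children in each subgroup. The sketch below is purely illustrative, not part of the AICC; the data, group labels, and function names are invented assumptions.

```python
# Hypothetical "pediatric red team" audit: does a triage model under-triage
# high-acuity children in a particular subgroup? All data here are invented
# for illustration only.
from collections import defaultdict

def under_triage_rate(true_acuity, predicted_acuity, groups):
    """Fraction of truly high-acuity (1) cases the model marked low-acuity (0), per group."""
    high = defaultdict(int)    # high-acuity cases seen per group
    missed = defaultdict(int)  # of those, cases predicted low-acuity
    for truth, pred, grp in zip(true_acuity, predicted_acuity, groups):
        if truth == 1:
            high[grp] += 1
            if pred == 0:
                missed[grp] += 1
    return {g: missed[g] / high[g] for g in high}

# Invented audit data: "EP" = English proficient, "LEP" = limited English proficiency.
truth = [1, 1, 1, 1, 1, 1, 1, 1]
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
group = ["EP", "EP", "EP", "EP", "LEP", "LEP", "LEP", "LEP"]

rates = under_triage_rate(truth, pred, group)
print(rates)  # → {'EP': 0.25, 'LEP': 0.75}: a large gap flags a safety and equity risk
```

The same per-group comparison applies to the earlier asthma-model example: a model whose miss rate differs sharply across groups should not be deployed without remediation, whatever its overall accuracy.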
We should champion research that includes AI-related pediatric outcomes and ensure findings are shared across institutions through registries, data collaboratives, or multisite research networks.

This report is a call to action for pediatricians. Readers of the AICC will recognize the need to demand pediatric-specific data and testing in all AI tools used in child health, whether developed by vendors or institutions. Table 1 summarizes specific recommendations from the report through the lens of what we, as child health advocates, should prioritize to support the care and safety of children. We must educate ourselves and our trainees in AI literacy and digital ethics by integrating these topics into medical and continuing education. It is equally essential to ensure that families understand and consent to AI applications affecting their child’s care, which may involve codeveloping family-facing materials that clearly explain how AI tools function and are used in practice. We must participate in institutional and policy-level governance of AI, such as serving on technology oversight or ethics committees. National leadership is also critical, including engagement by pediatric professional societies with vendors and developers to shape innovation. Finally, we must ensure the safety and confidentiality of data collected by AI tools, safeguarding against unauthorized access and maintaining trust in these evolving systems.

AI in health care is here, and it demands that we take responsibility for molding it to achieve our aims for high-quality, equitable, and potentially transformative pediatric care, which is the intent of the AICC. The AICC also underscores the importance of AI proficiency among all health care leaders and practitioners, including those who care for children, to ensure safe and effective care as well as ongoing professional relevance.
If we recognize and advance this new priority and become leaders of this new world, it can do wonders for our children, our profession, and our future. If we do not, we, along with the children who trust us to provide safe and effective care, will be left behind.

We thank Elaine Fontaine, Special Advisor, NAM, for her critical review and editing of this manuscript. This manuscript included assistance from OpenAI’s ChatGPT-4, a large language model (GPT-4o, July 2025 version), for creating a short title and reviewing the manuscript to create a draft of the table. The authors reviewed and verified all AI-assisted content for accuracy and integrity.
