This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AI and the Public Health Enterprise: From Interest to Action
1
Citations
3
Authors
2025
Year
Abstract
Background

In a recent Management Moment column entitled, “Artificial Intelligence and the Practice of Public Health,” we issued a call to action for public health leaders to begin exploring artificial intelligence (AI) not as a futuristic experiment but as a practical tool for enhancing their daily practice.1 As a follow-up, this column is intended to further equip leaders with the knowledge and confidence to begin taking tangible steps. The age of AI is upon us—and public health must not lag behind.2

From Concept to Capability: Understanding the Context

When we talk about AI in public health, we are not referring to its well-publicized applications in clinical diagnostics, drug development, or personalized medicine. While these areas are garnering significant investment and attention, this column focuses squarely on AI as it relates to the practice of public health—addressing such areas of work as surveillance, administration, communication, and community engagement at the local, state, tribal, and territorial levels.

Yet despite its potential, AI use in public health is still nascent. According to a 2024 National Association of County and City Health Officials (NACCHO) survey, only 5% of local health departments are currently using AI tools. Another 39% report interest but have not yet taken the first step. Given the speed with which AI is becoming a part of daily life, these figures are likely an undercount; even so, indications are that AI is not yet a routinely utilized tool in most public health agencies. This hesitancy is often rooted in understandable constraints—limited budgets, under-resourced staff, and regulatory uncertainty.

Further complicating acceptance are state governmental restrictions. As of mid-2025, over 30 states have enacted laws or issued executive guidance limiting or governing AI use in state agencies. Sixteen states have adopted substantive controls, sometimes including outright prohibitions. Yet there is no denying that we are at a turning point.
In April 2025, the US Office of Management and Budget issued a directive calling on all federal agencies to expand their use of AI with a focus on innovation, governance, and public trust. This directive is already shaping expectations for Centers for Disease Control and Prevention (CDC) and Health Resources and Services Administration grantees.3 Local and state public health agencies are next in line. The takeaway: regardless of a department’s size or funding, the time has come for every public health agency to develop a basic plan for integrating AI into its practice.

A New Public Health Imperative: The AI Plan

What does an “AI plan” look like? We propose that public health departments begin by identifying practical, value-driven opportunities in 3 domains: surveillance and assessment, administration, and communications.

Surveillance and assessment

One of AI’s most immediate contributions to public health practice is in strengthening surveillance. AI can be used to enhance traditional monitoring systems such as BioSense and syndromic surveillance, as well as newer tools like wastewater monitoring. It can knit together disparate, emerging epidemiological indicators to provide a more sensitive early warning system. For example, CDC—with contract support from ICF3—built an AI system that scans over 44 000 Facebook pages daily to detect school closures—a proxy signal for disease outbreaks—saving over 200 hours of manual effort per day.

AI can also help direct limited resources more effectively to enforce regulations. In Chicago, for example, health officials faced the logistical challenge of inspecting over 15 000 food establishments. The city used historical inspection and 311 call data to create an AI-informed risk scoring system. Inspectors prioritized higher-risk sites and found more violations in less time.
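The Chicago approach can be illustrated with a minimal sketch. Note that the feature names and weights below are invented for illustration only; the city's actual system was trained on historical inspection and 311 data, not hand-set weights.

```python
# Toy sketch of an AI-informed inspection risk score, loosely inspired by
# Chicago's food-inspection prioritization. Weights and features are
# illustrative placeholders, not the city's trained model.

def risk_score(past_critical_violations, days_since_last_inspection,
               recent_311_complaints):
    """Combine simple signals into a 0-1 priority score."""
    return round(
        0.5 * min(past_critical_violations, 5) / 5
        + 0.3 * min(days_since_last_inspection, 365) / 365
        + 0.2 * min(recent_311_complaints, 10) / 10,
        3,
    )

def prioritize(establishments):
    """Sort establishments so inspectors visit the riskiest sites first."""
    return sorted(establishments,
                  key=lambda e: risk_score(*e[1:]),
                  reverse=True)

sites = [
    ("Cafe A", 0, 30, 0),    # clean history, recently inspected
    ("Diner B", 4, 300, 6),  # many past violations, overdue, complaints
    ("Deli C", 1, 120, 2),
]
print([name for name, *_ in prioritize(sites)])
# -> ['Diner B', 'Deli C', 'Cafe A']
```

A production system would learn these weights from labeled outcomes (past violations found per inspection), but the payoff is the same: a ranked worklist instead of an arbitrary inspection order.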
These real-world use cases suggest a compelling future: AI-driven surveillance systems that integrate EMS call data, weather conditions, and social media signals to predict outbreaks days before the first lab-confirmed case. AI can also help by examining risk groups in ways that feed more targeted risk prevention strategies into AI communications tools.

Administration

Many health departments are beginning to use AI in administrative functions, often as a time-saving assistant. Grant writing, for example, is one of the most time-consuming but critical tasks. AI-powered language tools can generate first drafts of grant applications, community health assessments, and technical reports, giving overburdened staff a much-needed head start. A particularly promising use case is CommentWorks—an AI-enabled tool that helps summarize thousands of public comments on proposed health regulations. In one recent public health regulation discussion, AI helped compile and analyze stakeholder feedback from a wastewater surveillance meeting and enabled the release of a white paper in days instead of weeks. While there should always be a human in the loop to limit erroneous administrative decisions, AI’s ability to streamline repetitive tasks, auto-generate summaries, and identify patterns in stakeholder feedback can be transformative.

Communications

Public health communication has always been about reaching the right people with the right message in the right way. In this realm, AI is emerging as a powerful ally. In Onondaga County, NY, the health department partnered with Black and Latino youth to develop an AI-powered chatbot on contraception. The result: a tailored educational experience, amplified by AI-generated videos that adapted messages to different audiences across platforms.4 In other settings, AI has helped generate public health materials in multiple languages, tone-customized for different communities.
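The CommentWorks example above turns on one core idea: surfacing recurring themes in large volumes of free-text feedback. The sketch below is emphatically not that tool; it is a toy keyword tally showing the kind of pattern-finding an AI-enabled summarizer automates at scale.

```python
# Toy illustration of theme-surfacing in public comments. Real tools use
# language models; this keyword tally just shows the underlying idea.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "we", "our",
             "in", "about", "should"}

def top_themes(comments, k=3):
    """Return the k most frequent non-stopword terms across comments."""
    words = Counter()
    for comment in comments:
        words.update(w for w in comment.lower().split()
                     if w not in STOPWORDS)
    return [word for word, _count in words.most_common(k)]

comments = [
    "Wastewater sampling frequency should increase",
    "Concerned about privacy of wastewater data",
    "Wastewater surveillance helps detect outbreaks early",
]
print(top_themes(comments))  # 'wastewater' ranks first
```

An analyst reviewing thousands of comments would still make the final call on what the feedback means, in keeping with the human-in-the-loop principle discussed here.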
Looking ahead, we can imagine multilingual, always-on outreach platforms that use AI to answer health questions in plain language—customized for age, literacy level, and culture. These systems could explain the risks of a new disease or promote vaccination 24/7, in over 30 languages. Simple communications tasks can be a good place to begin for uncertain health departments: generative large language models such as ChatGPT, Claude, or Copilot can be helpful in drafting an email, a memo, or a speech.

Building for Tomorrow: Bold Ideas Worth Exploring

We are just at the beginning of a rapidly developing technology. Ideas that seem far-fetched today could be routine tomorrow. Here are some possible use cases to consider:

Hyperlocal risk prediction models: These models could use real-time, multimodal data—911 calls, weather trends, social media, and wastewater—to predict health risks with granularity down to the neighborhood level. They could ideally involve community members in priority setting, planning, and implementation.

Digital twins of communities: Inspired by urban planning, public health AI could create “digital twins”—virtual representations of real-world populations, environments, and systems—to allow public health professionals to simulate the impact on community health of potential actions like increased Supplemental Nutrition Assistance Program benefits or mobile clinic deployment.

Navigating Resistance: Concerns and Mitigation

Despite these exciting possibilities, public health leaders must acknowledge and address the concerns that have made many departments hesitant to engage with AI:

Workforce limitations: Nearly 60% of local health departments do not have a full-time data or informatics specialist. Without trained personnel, AI tools may feel inaccessible.

Accuracy and reliability: AI systems are known to “hallucinate”—producing content that sounds correct but may be false.
A recent case in which a federal report cited nonexistent journal articles generated by AI only adds to the concern.

Privacy and data security: AI’s power depends on data—and in public health, such data are often highly sensitive. Improper handling or use of unsecured AI systems risks both breaches and bias.

Job displacement fears: In some industries, AI has led to job losses. Understandably, public health staff may fear the same, especially at a time of budget cutbacks.

We recommend a few best practices for addressing these very real concerns:

Human in the loop: AI should not replace human judgment. Instead, treat AI like a capable intern or companion—excellent at drafting and summarizing but not responsible for the final product.

Privacy controls: When sensitive or personal data are involved, use AI tools only behind strict firewalls. Prevent access to nonpublic data using techniques like encryption and data masking that are already familiar to public health IT teams.

Transparency: Engage your agency and the communities you serve in discussions about how AI might be used and how misuse could be prevented. Build trust through clear, two-way communication.

Practical First Steps: A Guide for Health Leaders

As leaders embark on a journey of exploration in the use of AI in day-to-day practice, we encourage consideration of these first steps:

Start with principles: Use frameworks from trusted sources such as the National Association of Counties,5 Health Resources in Action, or ICF3 to build guidelines around transparency, privacy, and equity.

Talk to your team and to community partners: Ask staff and community partners what they know about AI. What worries them? What would help them? Involve staff and partners in building buy-in and surfacing useful ideas.

Pilot small projects: Use AI to summarize meeting notes, draft a press release, or prepare a first draft of your community health assessment (CHA). Start small. The important thing is to begin.
Experiment informally: Get familiar with generative AI platforms like ChatGPT, Gemini, or Copilot. Ask them to write a policy memo or summarize an article. Think of them as brainstorming partners.

Learn from peers: Join NACCHO, Association of State and Territorial Health Officials, and Big Cities Health Coalition webinars; follow case studies; and borrow what works. Health departments across the country are innovating—and are often willing to share. Participate in the Public Health Communications Collaborative and other nationally recognized coalitions of public health specialists who are crafting the messages that can be deployed with the assistance of AI agents.

Conclusion: The Future Is Now

AI will not solve all the challenges facing public health. It is not magic. But it is a tool—a powerful one—that can help us serve our communities more effectively, more efficiently, and more equitably. As public health leaders, we have a responsibility not only to respond to today’s threats but to anticipate tomorrow’s needs. AI, if approached with humility, transparency, and strategy, can become one of the most valuable tools in our evolving public health toolbox. Let’s not wait until we’re left behind. The future is already knocking. Let’s answer—with wisdom, courage, and compassion.