OpenAlex · Updated hourly · Last updated: 03.05.2026, 10:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A structured framework for effective and responsible generative artificial intelligence chatbot prompt engineering throughout the scientific process: a comprehensive guide for the health and medical researcher

2026 · 0 citations · Frontiers in Artificial Intelligence · Open Access

Citations: 0 · Authors: 1 · Year: 2026

Abstract

Generative artificial intelligence (GenAI) chatbots powered by large language models (LLMs) are becoming increasingly integrated into health and medical research workflows, offering researchers new tools to enhance efficiency, support innovation, and assist with knowledge translation. Although their use in health and medical research is expanding rapidly, the practical application of these tools across the broader health and medical research landscape remains complex and evolving. Health and medical researchers often engage with complex study designs, theoretical frameworks, and population needs, all of which require thoughtful, effective, and responsible use of AI tools. This 10-chapter guide serves as a practical, evidence-informed resource for health and medical researchers to engage effectively and responsibly with GenAI chatbots through the practice of prompt engineering: the design of clear, structured, and purposeful prompts that guide GenAI chatbot outputs. It presents strategies to improve prompt quality and adapt GenAI chatbot interactions to the varied methodological and disciplinary contexts found across health and medical research. The article outlines a structured framework for how GenAI chatbots can be applied throughout the research cycle, including research question development, study design, literature searching, querying for appropriate reporting guidelines and appraisal tools, quantitative and qualitative data analysis, writing and dissemination, and implementation. AI-generated content should be treated as a preliminary draft and must always be reviewed, verified against credible sources, and aligned with disciplinary standards. Risks such as hallucinated content, embedded biases, and ethical challenges are addressed, particularly in sensitive or high-stakes settings. Transparency in AI use and researcher accountability are essential.
While GenAI chatbots have the potential to expand access to research support and foster innovation, they cannot replace critical thinking, methodological rigour, or contextual understanding. Instead, they should augment, not replace, human expertise. This guide encourages effective and responsible use of GenAI chatbots and supports their thoughtful integration into the health and medical research process.
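The structured-prompt practice the abstract describes can be illustrated with a minimal template sketch. The component names used here (role, task, context, constraints) are common prompt-engineering building blocks and are assumptions for illustration, not the article's exact framework:

```python
# Illustrative sketch of a structured prompt for a GenAI chatbot.
# The fields (role, task, context, constraints) are assumed, generic
# prompt-engineering components, not the article's specific framework.

def build_prompt(role: str, task: str, context: str, constraints: list[str]) -> str:
    """Assemble a clear, structured prompt from labelled components."""
    lines = [
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    # Echo the abstract's guidance: outputs are drafts requiring verification.
    lines.append("Note: treat the output as a preliminary draft; "
                 "verify it against credible sources.")
    return "\n".join(lines)

prompt = build_prompt(
    role="methodologist experienced in qualitative health research",
    task="suggest three candidate research questions on clinician burnout",
    context="a mixed-methods study in a rural primary-care setting",
    constraints=["cite no sources you cannot verify", "flag any assumptions"],
)
print(prompt)
```

Labelling each component keeps the prompt auditable: a reviewer can check that the stated role, context, and constraints match the study's actual design before any AI output is used.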
