This is an overview page with metadata for this scientific work. An external link to the full text is currently unavailable.
ChatGPT: Capabilities, Uses, Risks, and Research Directions — A comprehensive review
Citations: 0
Authors: 2
Year: 2026
Abstract
ChatGPT is a widely deployed family of conversational large-language-model (LLM) systems built on transformer architectures and scaled pretraining. It can generate fluent natural language, summarize, translate, answer questions, draft code, and assist with creative and analytic tasks — enabling broad applications in education, healthcare, research, creative industries, customer service, and more. At the same time, LLMs exhibit important limitations: hallucinations (confident but incorrect statements), sensitivity to prompt phrasing, social and cultural bias, privacy concerns, and potential for misuse (e.g., academic cheating, disinformation). Field studies and systematic reviews show promising utility in domains such as healthcare education and drafting, but they caution about reliability, ethical risks, and the need for human oversight (Sallam, 2023; Huang et al., 2025). Responsible deployment requires rigorous evaluation, grounding/retrieval techniques, transparency, policy safeguards, and user training. This review synthesizes technical background, current capabilities, major use cases, empirical evidence, limitations and harms, mitigation strategies, governance issues, and a research agenda for ChatGPT and similar LLMs.

Keywords: ChatGPT; large language models (LLMs); transformer architecture; artificial intelligence applications; natural language processing; hallucinations and bias; ethical and governance issues; human–AI collaboration; responsible AI; evaluation and mitigation strategies
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,578 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,470 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,984 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations