This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Generative AI and Large Language Models in Conversational Systems: Trends and Future Directions
Citations: 0
Authors: 5
Year: 2025
Abstract
The rapid advancement of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) has significantly transformed Conversational AI, enabling more natural and human-like interactions through cutting-edge deep learning frameworks. This research offers a comprehensive evaluation of leading LLMs, including GPT-4, BERT, T5, ChatGPT, and Claude, assessing their performance based on key metrics such as accuracy, loss, AUC-ROC, F1-score, and computational complexity. The study examines the role of Reinforcement Learning from Human Feedback (RLHF), self-supervised learning, and transfer learning in improving model efficiency across various natural language processing tasks. A thorough experimental assessment was performed using a diverse dataset of AI models to evaluate their effectiveness in Conversational AI applications. The RandomForestClassifier was utilized to predict LLM performance, achieving approximately 92% accuracy, an AUC-ROC score of 0.94, and an F1-score of 0.89. The confusion matrix demonstrates strong predictive capabilities, while an analysis of computational trade-offs highlights the influence of data scale and training complexity. Findings indicate that while larger datasets and more advanced architectures significantly improve model accuracy and adaptability, they also demand higher computational resources. Additionally, the study addresses key challenges, including model bias, hallucinations, and ethical considerations, while exploring potential strategies for optimization in real-world scenarios. These insights contribute to shaping the future of GenAI-powered Conversational AI, underscoring the importance of scalability, efficiency, and ethical AI practices. By proposing innovative approaches to enhance language comprehension, user interaction, and AI deployment, this research advances the field and supports the development of more robust and responsible AI systems.
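The abstract describes evaluating a RandomForestClassifier with accuracy, AUC-ROC, and F1-score. The paper's dataset and features are not available here, so the following is only an illustrative sketch of how such an evaluation is typically set up with scikit-learn, using a synthetic stand-in dataset; the specific parameters and data are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: training a RandomForestClassifier and reporting
# the metrics named in the abstract (accuracy, AUC-ROC, F1-score).
# The synthetic dataset stands in for the study's (unavailable) data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (assumed shape, for illustration only).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)            # hard labels for accuracy and F1
y_prob = clf.predict_proba(X_test)[:, 1]  # class-1 probabilities for AUC-ROC

print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}")
print(f"AUC-ROC:  {roc_auc_score(y_test, y_prob):.2f}")
print(f"F1-score: {f1_score(y_test, y_pred):.2f}")
```

On real data, the reported figures (≈92% accuracy, 0.94 AUC-ROC, 0.89 F1) would of course depend on the actual features and labels used in the study.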
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations