This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Overcoming Challenges in AI-Powered NLP Models: Enhancing the Capabilities of ChatGPT and DeepSeek
Citations: 0
Authors: 5
Year: 2025
Abstract
AI-driven Natural Language Processing (NLP) models have advanced substantially, propelled by transformer architectures and pretraining methodologies. Notable examples include ChatGPT and DeepSeek, which have demonstrated exceptional performance in tasks such as text generation, comprehension, and reasoning. Nevertheless, both models exhibit persistent deficiencies in factual consistency, contextual awareness, and interpretability. This research critically examines the architectures, operation, and limitations of these models, and presents a novel hybrid architecture that combines symbolic reasoning, quantum-based attention, and domain-adaptive fine-tuning to address these flaws. We then conducted a comparative analysis using ROUGE-L and BERTScore to assess the quality of AI-generated content. DeepSeek slightly outperformed ChatGPT, achieving ROUGE-L and BERTScore values of 0.91 and 0.47 respectively, compared to ChatGPT's 0.89 and 0.43. DeepSeek also demonstrated higher factual consistency (81.3%) and a lower hallucination rate (18.7%) than ChatGPT (78.6% factual consistency and 21.4% hallucination). These findings confirm the effectiveness of our cross-comparison approach and its ability to identify errors in AI-generated content. The study provides a viable method for validating AI outputs against real data sets and can support applications such as audience profiling, influencer analysis, and sentiment-based content development for businesses, media analysts, and researchers.
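The ROUGE-L metric used in the comparison above scores the longest common subsequence (LCS) shared by a generated text and a reference. A minimal pure-Python sketch of the ROUGE-L F-measure (illustrative only, not the paper's own evaluation code, which may also apply stemming or use a library implementation):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    # Classic dynamic-programming table: dp[i][j] holds the LCS length
    # of the first i tokens of a and the first j tokens of b.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F-measure over whitespace-tokenized strings."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)   # fraction of candidate tokens in the LCS
    recall = lcs / len(ref)       # fraction of reference tokens in the LCS
    return 2 * precision * recall / (precision + recall)
```

An identical candidate and reference score 1.0; disjoint token sets score 0.0. BERTScore, by contrast, matches tokens by contextual-embedding similarity rather than exact overlap, which is why the two metrics can rank outputs differently.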
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations