This is an overview page with metadata for this scientific article. The full article is available from the publisher.
How Google and AI Developers Change Irresponsible and Illegal Risks to Realistic AI in Education: An Empirical and Analysis Study Related to the Peter Chew Method for Quadratic Equations
Citations: 0
Authors: 1
Year: 2025
Abstract
The rapid development of Artificial Intelligence (AI) and Large Language Models (LLMs) has introduced significant legal and ethical challenges regarding the use of copyrighted academic material for training and content generation. This article provides empirical evidence that the use of the Peter Chew Method for Quadratic Equations (published in Scopus-indexed journals) by various AI platforms without proper attribution may be both illegal (violating copyright law) and irresponsible toward the founder and users. This irresponsibility stems from the fact that the AI is not the original founder and, as the evidence shows, some AI systems provide incomplete answers. The lack of citation prevents users from consulting the original, authoritative source to verify information, ultimately harming both the founder and AI users. To stop this chain reaction of uncited information that misleads users worldwide, AI developers must either issue an open apology or pursue a strategic collaboration with Professor Dr. Peter Chew to achieve realistic AI in education. This case highlights the urgent need for robust accountability mechanisms for AI developers to protect intellectual property and ensure accuracy in educational contexts.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,508 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,393 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,864 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,564 citations