This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Ethical recommendations for Artificial Intelligence technology in the Geological Sciences - with a focus on Language Models
Citations: 0
Authors: 1
Year: 2024
Abstract
Artificial Intelligence (AI) offers many opportunities for the geosciences to improve productivity, reduce uncertainty in models and stimulate discovery of new knowledge. There are also risks to geoscience, from the spread of obsolete or inaccurate information and misinformation, to threats to fundamental human rights. Whilst ethical AI frameworks exist from numerous institutions such as UNESCO, they are high level and lack practical detail for the geosciences, particularly for Large Language Models (LLMs). This is evidenced by the misalignment between the way current geoscience AI/LLMs are being designed, trained and deployed, and core ethical principles. Using principles and frameworks from UNESCO and the International Science Council (ISC), a set of ten recommendations is proposed to bridge the gap between practice and these ethical frameworks. Critical Realism is used as an underlying philosophy, which allows the potential to provide justifiable answers to ethical and moral questions using judgemental rationality. These recommendations may help stakeholders in the international community reach conclusions on what ‘good looks like’ for ethical AI in the geological sciences, focusing on Language Models and their applications. This may inform developers, regulators, policy advisors, journal editors, geological surveys, societies, institutions and unions, publishers, funding bodies, geoscientists and decision makers. This is believed to be the first research paper on AI ethics in the geological sciences with a focus on Generative AI. Understanding the nuances of our ethical choices for both the development and use of LLMs and other AI tools in the geosciences has the potential to positively impact science integrity and, critically, ensure that fairness, personal privacy, democratic norms and human rights are safeguarded.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations