Authors’ reply to Sathe et al., Cherulil et al., Vaishya et al., and Gupta et al.
1 citation · 3 authors · 2023
Abstract
We appreciate the interest shown by four groups of authors[1-4] in our ChatGPT survey publication[5] and the accompanying editorial.[6] It is gratifying to see such engagement and recognition of our work; clearly, our study has generated considerable interest. It is very satisfying to note that conducting (and rapidly publishing) this survey on ChatGPT,[5] within one month of its launch, was much appreciated. Readers are accustomed to critically evaluating clinical trial publications, particularly those aimed at obtaining regulatory approval for new medications, whose results apply only to a small subset of patients selected by stringent inclusion and exclusion criteria. Such studies are often outdated by the time they are published because the standard of care has evolved. Our survey was the exact opposite: it focused on preliminary general perceptions only, and therefore its simplicity and “limitations” were its strengths.[5] When an easily accessible tool gains 100 million users within a month of its launch, we consider that sufficient proof that it is disruptive and has garnered attention among people of all ages and backgrounds who are active on social media.[7] We should not underestimate their ability to comprehend what ChatGPT stands for, even if they have not used it personally. Our questions were broad and encompassed different dimensions: expecting a change in one’s life during 2023 (short term) is quite different from the possible impact on a professional career (long term), and different again from the potential impact on the human race during one’s lifetime (an indefinitely long period). For these reasons, we thought it appropriate to repeat the same survey five months after the launch of ChatGPT; we shall publish those results shortly.
We believe that our survey indicated that the participants were too optimistic in certain aspects (such as the influence on their professional careers) and too pessimistic in others (the threat to the human race).[5] We have already seen more than 270,416 job losses in the United States of America alone in the first five months of 2023.[8] Although not all of them can be attributed to ChatGPT alone, the major impact has been on industries related to information technology (IT) and the big four consulting firms. When the daily cost of running ChatGPT is $700,000, it has a ripple effect.[9] Although ChatGPT and artificial intelligence (AI) may not replace human jobs right now, humans proficient in the use of ChatGPT and other large language models will certainly replace other humans without that skillset for various jobs. ChatGPT does have several limitations. Its tendency to “hallucinate” and give fake information is well documented.[10] We recently discovered that ChatGPT’s response to one of our inquiries included a reference to an article purportedly published in a PubMed-indexed journal in 2019; upon further investigation, we found that the article does not exist. GPT-4 is now easily accessible to most of us, provides real-time data with corresponding citations, and is no longer limited to information available up to 2021. At the other end of the spectrum are tools designed to identify AI-generated text (such as ZeroGPT), which have suggested that the major portion of the United States Constitution was a product of AI (which is impossible). In fact, a professor at a Texas university failed his entire class after he perceived (falsely) that the essays submitted to him were computer-generated; he later had to apologize for jumping to the wrong conclusion.[11] No wonder there is a growing clamor for restrictions and regulations to be imposed on AI.
The fact that this movement includes the CEO of OpenAI, Sam Altman; ex-Google Turing Award recipient Geoffrey Hinton (known as the “godfather of AI”); and business magnate Elon Musk (CEO of Twitter, Tesla, and SpaceX) indicates the gravity of its “clear and present danger.”[12-14] We believe that there is no turning the clock back on AI.[15] The financial ramifications are overwhelming, and greed will steamroll all other considerations. As with the proliferating nuclear arms race and the failure of climate control, the trillions of dollars controlled by the likes of BlackRock and Vanguard will dictate our future; the FAANG companies (Facebook, Apple, Amazon, Netflix, and Google) are mere midgets in comparison. Remember the infamous 1998 George Soros interview on the Columbia Broadcasting System (CBS) program 60 Minutes:[16] “They are here to make money and do not care about morality or the consequences of their actions on human society.”
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.