This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Captive markets and medical artificial intelligence
6
Citations
8
Authors
2024
Year
Abstract
Artificial intelligence (AI), including large language models (LLMs), is likely to have an increasing influence on medical systems. There have been growing discussions about the best approaches to trialling and regulating such algorithms.[1] An important aspect of these discussions involves the implications of economic structures for AI deployment,[2] including their potential long-term consequences for healthcare systems. Valuable insights can be obtained by examining the historical trajectory of large technology companies in diverse domains.

A captive market is defined by limited consumer choice of service providers within a specific domain.[3] The field of AI has unique potential to create captive markets. LLMs derived from large or high-quality datasets offer significant competitive advantages in model performance.[4] In many instances, companies or groups possessing large, curated datasets will likely generate the most accurate models. Accordingly, strategies to obtain these datasets will become important. Accurate models may be made accessible at a reduced charge, or free of charge, if an agreement can be established whereby AI providers use the available data to enhance and further refine their algorithms. Such approaches may create a positive feedback loop in which a small number of groups accumulate unassailably large datasets and, accordingly, the most accurate algorithms. Given the relative scarcity of site-specific healthcare data, this issue is of particular relevance to healthcare systems.

The concept of the captive market has additional relevance to healthcare in the context of institutional dependence. Should a healthcare institution or system become dependent on a given AI application, that institution would become a captive market. Healthcare system dependency on a given AI provider may develop in multiple ways. Mechanisms of dependency include shifting healthcare expectations based upon AI reliance (e.g. reliance upon outpatient reviews at a speed only achievable with AI transcription, without the capacity to supply medical scribes in the event of AI unavailability)[5] and loss of ability (e.g. when given tasks become machine tasks, humans no longer retain the means or ability to perform them).[6] Dependency could also arise through the development of bespoke data engineering pipelines (i.e. the means by which data are obtained and provided to a given AI) without clear means by which such pipelines could be used or adapted for alternative AI providers. A lack of interoperability[7] (e.g. an inability to swap between AI providers because information cannot be exchanged) can similarly lead to a captive market.

Engagement in captive markets can provide a high-quality service but also has the potential to result in adverse outcomes for consumers (in this instance, healthcare systems and their patients). In certain instances, having a sole effective service provider can benefit users: for example, the large social networks available through Facebook and the connectivity of multiple devices within the Apple ecosystem. Conversely, adverse consumer outcomes have been described in multiple technological fields in which a provider achieves supremacy through a period of loss-leading service provision and subsequently changes its service structure to maximise the value delivered to shareholders per unit of service provided to consumers.
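To make the feedback-loop dynamic described above concrete, the following is a minimal toy simulation, not from the original article. Every quantity is an illustrative assumption: accuracy is modelled as a saturating function of dataset size, and institutions are assumed to contract with whichever provider is currently more accurate.

```python
# Toy simulation of the data-accuracy feedback loop described above.
# All figures are illustrative assumptions, not estimates from the article.

def accuracy(n_records: float) -> float:
    """Hypothetical saturating accuracy curve: more data improves the
    model, with diminishing returns."""
    return n_records / (n_records + 50_000)

def simulate(rounds: int = 10, new_records_per_round: float = 10_000) -> None:
    # Provider A starts with a modest data advantage over provider B.
    data = {"A": 20_000.0, "B": 10_000.0}
    for r in range(1, rounds + 1):
        # The more accurate provider wins the new contracts, and with
        # them all newly generated records -- the feedback loop.
        leader = max(data, key=lambda p: accuracy(data[p]))
        data[leader] += new_records_per_round
        print(f"round {r:2d}: leader={leader}  "
              + "  ".join(f"{p}: acc={accuracy(n):.3f}, n={n:,.0f}"
                          for p, n in data.items()))

if __name__ == "__main__":
    simulate()
```

Under these assumptions, provider B never catches up: the leader's data advantage compounds each round while the trailing provider's dataset stagnates, which is the "unassailably large dataset" dynamic sketched above.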
Commentators, such as Cory Doctorow, have described this phenomenon in multiple areas, including Uber, Facebook and Amazon.[8] For example, if an online platform provides a free or low-cost service (such as videos) in exchange for advertising revenue, then, once a user base is established, a profit-maximising strategy is to increase the number of advertisements to the maximum tolerable number per user. The potential implications of such a situation arising in healthcare are clear. If a company attains a position in which a healthcare system has no alternative AI provider, this could come at a cost to patients. Hypothetically, the AI provider could subsequently modify its business structure to maximise profit, leaving the healthcare system with no option but to continue paying for a service upon which it is dependent. One might reasonably argue that it is not the responsibility of the AI developer to consider this dependence, as it is in their best interest to create and retain a large user base. It could equally be argued that it is the duty of healthcare systems and governments to foresee the impacts of captive markets and to avoid this scenario. While these arguments remain largely theoretical at this stage, given the potential implications for AI in healthcare, these factors should be considered proactively.

One strategy to mitigate these potential issues is to consider, and require, interoperability for any AI application on deployment. This approach would enable healthcare systems to seek competitive alternatives to a given AI application without being 'locked in' to a given provider (a minimal code sketch of such provider-agnostic integration follows at the end of this passage). With mandatory interoperability, the risk of captive markets is substantially reduced, assuming that other providers could deliver a similar service. Other strategies include the development of open-source AI algorithms and the fostering of local AI expertise. Regulation and financial structures may also play a role in mitigating the risks of captive markets.

Open-source technology makes the source code and architecture of programs publicly available for modification.[9, 10] Open-source algorithms have been applied successfully to AI in healthcare for the stratification of cancer therapy selection, the triaging of outpatient referrals and the application of clinical codes.[11-13] Open-source programs are common across technology industries (e.g. the Apache HTTP Server and Mozilla Firefox) and offer users multiple advantages.[10] For example, the performance of open-source AI models can be assessed by all researchers, facilitating robust validation experiments.[14] Once curated, open-source datasets reduce the investment required to develop models and hence may promote the sharing of subsequently developed models. However, it is noteworthy that open-source code may be validated on test datasets that are not publicly available, producing potentially unreliable results; neutral third-party validation datasets may help to mitigate this issue. Open-source programming can also pose a challenge for regulatory bodies.[9, 14] The performance of open-source models can depend on the quality and availability of training datasets, and very high classification performance has been demonstrated.[13] If these limitations are addressed, open-source technology has the potential to develop rapidly for real-time application in clinical practice, improving the delivery of healthcare to patients.
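As a hypothetical illustration of what the interoperability requirement discussed above could look like at the software level, the sketch below places every vendor behind a common interface and a vendor-neutral export format. The interface, class names and vendors are assumptions invented for illustration, not an existing standard.

```python
# A minimal sketch of provider-agnostic integration. The interface and
# vendor classes are hypothetical, not an existing interoperability standard.
from abc import ABC, abstractmethod

class TranscriptionProvider(ABC):
    """Common contract every vendor adapter must satisfy, so a hospital
    can switch providers without rebuilding its data pipelines."""

    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...

    @abstractmethod
    def export_records(self) -> list[dict]:
        """Return all stored records in a shared, vendor-neutral format,
        enabling migration to a competing provider."""

class VendorA(TranscriptionProvider):
    def __init__(self) -> None:
        self._records: list[dict] = []

    def transcribe(self, audio: bytes) -> str:
        text = f"<transcript of {len(audio)} bytes>"  # placeholder model call
        self._records.append({"schema": "neutral-v1", "text": text})
        return text

    def export_records(self) -> list[dict]:
        return self._records

def clinic_workflow(provider: TranscriptionProvider, audio: bytes) -> str:
    # The workflow depends only on the interface, never on a vendor class,
    # so replacing VendorA with a competitor is a one-line change.
    return provider.transcribe(audio)

if __name__ == "__main__":
    print(clinic_workflow(VendorA(), b"\x00" * 1024))
```

The design point is the adapter pattern: because the clinical workflow and the data pipeline touch only the shared contract, the 'lock-in' described above is reduced to the cost of writing one new adapter.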
Promoting the cultivation and training of local AI expertise would also reduce dependence on external providers.[9] Additionally, local production and augmentation of AI programs may facilitate the development of site-specific models. Inclusion of in-house digital training in medical and fellowship curricula could support such a solution.[15] In addition to offsetting the risk of captive markets through the development of AI, this investment would create a local body of experts able to critically appraise future vendor-proposed AI applications.

Legislation and regulation of medical AI applications can also play a role in preventing the creation of captive markets. The Therapeutic Goods Administration has existing regulations for software-based medical devices in Australia.[16] Similarly, the United Kingdom and the United States have introduced regulation of technological services, with up to 692 AI tools approved.[17, 18] Regulators already face multiple challenges in this area, including the dissemination of erroneous medical information and usability standards, such as variable population digital literacy and ease of interaction with the equipment.[16, 19] Accordingly, responsibility for such regulation should be shared by multiple bodies, including those involved in creating site-specific healthcare institution policies. For example, explicit agreement as to whether data necessary to provide a service may be used in subsequent AI model development is an important consideration in every case. Conflict of interest disclosures for those working in medical AI procurement for healthcare systems must also be robust. Interoperability standards could also be mandated, and additional legislation on such issues may be useful.

Various financial models may be implemented when healthcare systems engage with AI services, and these have relevance to the potential impacts of captive markets. One structure could be conceptualised as a 'fee for service' model, in which individual 'tokens' are purchased and then exchanged for specific AI tasks. Subscription models may also be offered, requiring a fixed fee at regular intervals to gain access to AI services. Additionally, flat-fee purchases of a given service and profit-sharing models could be implemented. An important consideration is that the costs involved in establishing and maintaining data engineering pipelines may be entirely separate from model acquisition. Consideration of fixed or variable rates for AI activities must also be given at the outset of engaging with healthcare AI applications.[20] The most effective financial structure for any given application will likely be context-dependent; however, a model that aligns the incentives of AI providers and healthcare institutions and avoids hidden maintenance costs is a reasonable starting place (a hypothetical worked comparison follows at the end of this section).

Captive markets relating to medical AI carry significant risk for healthcare institutions. However, these risks must be balanced against the potential benefits of comprehensive single-provider services. A key strategy to reduce this risk is a pre-emptive requirement for interoperability of AI systems, enabling future transitions between providers. Other approaches include the development of local capabilities and software, the promotion of open-source programs, proactive legislation and regulation, and the use of financial structures that avoid perverse incentives and hidden ongoing costs.

The authors declare that there is no conflict of interest.
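The following sketch compares the annual cost of the financial models named above under purely illustrative assumptions; the task volume, prices and pipeline figure are invented for the example, not taken from the article.

```python
# Hypothetical comparison of the financial models discussed above.
# Every figure is an illustrative assumption, not data from the article.

def annual_cost_tokens(tasks_per_year: int, price_per_task: float) -> float:
    """'Fee for service': tokens purchased and exchanged per AI task."""
    return tasks_per_year * price_per_task

def annual_cost_subscription(monthly_fee: float) -> float:
    """Fixed fee at regular intervals for access to the AI service."""
    return 12 * monthly_fee

def annual_cost_flat(purchase_price: float, years_of_use: int) -> float:
    """Flat-fee purchase amortised over its expected useful life."""
    return purchase_price / years_of_use

if __name__ == "__main__":
    tasks = 120_000  # assumed annual AI task volume for one institution
    pipeline_maintenance = 40_000.0  # data-pipeline upkeep, billed separately
    options = {
        "tokens ($0.50/task)": annual_cost_tokens(tasks, 0.50),
        "subscription ($6k/month)": annual_cost_subscription(6_000.0),
        "flat fee ($250k over 5y)": annual_cost_flat(250_000.0, 5),
    }
    for name, cost in options.items():
        # Pipeline costs apply whichever model is chosen, so they should
        # be compared alongside, not folded into, the service fee.
        print(f"{name:26s} service={cost:10,.0f}  "
              f"total={cost + pipeline_maintenance:10,.0f}")
```

The point is not the ranking, which flips with usage volume, but the structural one made above: pipeline establishment and maintenance costs sit outside the service fee, so any comparison that omits them understates the true, and potentially hidden, ongoing cost.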
Open access publishing facilitated by The University of Adelaide, as part of the Wiley - The University of Adelaide agreement via the Council of Australian University Librarians. This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations