Healthcare providers worldwide need to be cautious about adopting Artificial Intelligence (AI) technologies, particularly in low- and middle-income countries, warns the World Health Organization (WHO).
While Large Multi-Modal Models (LMMs), a subset of generative AI, have the potential to revolutionize healthcare, the WHO underlines the need for awareness and responsible implementation.
AI in healthcare: A game-changer
The WHO acknowledges the significant potential of Large Multi-Modal Models (LMMs) in healthcare. These AI systems, such as the ChatGPT, Bard, and Bert platforms, have rapidly gained prominence. LMMs can process diverse data inputs, including text, videos, and images, and generate outputs in multiple formats.
Their applications in healthcare encompass diagnostics, scientific research, drug development, medical training, administration, and even patient self-assessment of symptoms. By analyzing vast amounts of medical data, such as images, scans, and electronic health records, LMMs can enhance diagnostics, improve treatments, predict patient outcomes, and increase efficiency.
One of the most significant advantages of AI in healthcare is the potential to save lives by providing accurate diagnoses and personalized treatment plans. Moreover, it can alleviate the burden on healthcare professionals, allowing them to focus on more critical tasks than routine paperwork. In regions with a shortage of medical practitioners, LMMs can play a pivotal role in improving healthcare accessibility, ensuring a broader and more equitable reach of medical care.
Risks and challenges
Despite the promising outlook, the WHO cautions against overlooking the associated risks. Misdiagnoses and inappropriate treatment decisions may result from overestimating the capabilities of LMMs, particularly if their limitations are not adequately acknowledged.
Furthermore, healthcare systems are likely to become overly dependent on LMMs, especially in low- and middle-income countries where maintenance and updates may be inadequate. This reliance could also lead to job losses and necessitate significant retraining for healthcare workers.
Moreover, the environmental cost of training and utilizing these AI models is a concern. AI models are known to contribute to carbon emissions and water consumption. Additionally, the development and deployment of LMMs are primarily concentrated in the hands of large tech companies due to the high financial costs involved, potentially reinforcing their power and dominance in the field.
Inequalities in access
The WHO raises concerns about equitable access to AI in healthcare. The digital divide and high subscription fees could limit access to these models, exacerbating health inequalities between developed and developing countries. Furthermore, if LMMs are trained on biased data, they could perpetuate those biases within healthcare systems.
Addressing these challenges requires building the necessary infrastructure and regulating the use of AI across both public and private sectors. Transparency, robust data governance, and ethical considerations are paramount.
Initiatives such as providing grants, access to shared cloud computing resources, and open datasets could significantly benefit low- and middle-income countries, leveling the playing field.
International organizations can facilitate knowledge transfer and support countries in obtaining local data, ensuring that these AI models accurately reflect regional needs. Involving stakeholders from nations with fewer resources in the development and governance of new LMM technologies is crucial to championing inclusive development.
Ultimately, the WHO recognizes that some harm from AI in healthcare is inevitable. Its guidance therefore includes recommendations on liability schemes and calls for compensation mechanisms for patients harmed by AI. Establishing clear liability norms and robust regulatory oversight is essential to ensuring that individuals adversely affected by LMMs receive adequate compensation and legal recourse.