
WHO Releases Ethical Guidelines for AI in Healthcare


TL;DR

  • WHO issues guidelines for ethical AI use in healthcare.
  • Guidelines target risks and challenges of AI in health systems.
  • Emphasis on stakeholder engagement and governmental responsibility.

In response to the rapid advancement of generative artificial intelligence (AI) in healthcare, the World Health Organization (WHO) has issued a comprehensive set of guidelines. These guidelines aim to steer the ethical use of large multi-modal models (LMMs) in the healthcare sector.

Guiding responsible AI use in healthcare

The new WHO guidance, consisting of over 40 recommendations, targets a broad range of stakeholders, including governments, technology companies, and healthcare providers. The primary objective is to ensure that LMMs, which can process diverse data types such as text, images, and videos, are used responsibly in healthcare to promote and protect public health.

Dr. Jeremy Farrar, WHO Chief Scientist, emphasized the transformative potential of generative AI technologies in healthcare. However, he also highlighted the critical need for transparent information and policies to effectively manage associated risks.

LMMs are particularly noted for their ability to mimic human communication and perform tasks beyond their explicit programming. The WHO has identified five key healthcare applications for these models: diagnosis and clinical care, patient-guided symptom and treatment investigation, administrative tasks in electronic health records, medical and nursing education through simulated patient encounters, and scientific research and drug development.

Addressing risks and challenges

The WHO guidelines underscore the risks associated with LMMs, such as generating false, inaccurate, or biased information. This misinformation can lead to harmful health decisions. The quality and bias in training data, reflecting factors like race, ethnicity, and gender identity, are crucial concerns that could impact the integrity of LMM outputs.

Further, the guidelines acknowledge broader challenges to health systems posed by LMMs. These challenges include the accessibility and affordability of advanced LMMs, the potential for ‘automation bias’ in healthcare professionals and patients, and cybersecurity vulnerabilities that could compromise patient information and the trustworthiness of AI algorithms in healthcare.

Stakeholder engagement and governmental responsibility

The WHO stresses the importance of stakeholder engagement in the development and deployment of LMMs. Active participation from governments, technology companies, healthcare providers, patients, and civil society is essential for responsible AI use.

Governments bear the primary responsibility for setting standards for LMM development, deployment, and integration into health and medical practices. The guidelines urge governments to invest in or provide infrastructure like computing power and public data sets, contingent on adherence to ethical principles. Additionally, the establishment of laws, policies, and regulations is crucial to ensure that LMMs in healthcare meet ethical obligations and human rights standards.

The guidelines recommend that governments assign regulatory agencies to assess and approve LMMs for healthcare use. Mandatory post-release auditing and impact assessments by independent parties are also advised for large-scale LMM deployments. These assessments should focus on data protection and human rights, with outcomes disaggregated by user characteristics such as age, race, or disability.

Developer responsibilities and ethical AI design

Developers of LMMs are tasked with ensuring stakeholder engagement from early AI development stages. The design process should be transparent, inclusive, and structured, allowing stakeholders to raise ethical issues and provide input.

LMMs should be designed to perform well-defined tasks with the necessary accuracy and reliability to enhance health systems and benefit patients. Developers must also anticipate and understand potential secondary outcomes of their AI applications.

The WHO’s guidelines represent a significant step towards ensuring that the integration of AI in healthcare is governed by ethical principles. By addressing risks and setting standards for stakeholder engagement and government responsibility, the WHO aims to harness the benefits of AI in healthcare while mitigating its potential harms.



John Palmer

John Palmer is an enthusiastic crypto writer with an interest in Bitcoin, Blockchain, and technical analysis. With a focus on daily market analysis, his research helps traders and investors alike. His particular interest in digital wallets and blockchain aids his audience.

