The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of LMMs for health and in developing policies and practices for appropriate development, provision and use. The guidance includes recommendations for governance within companies, by governments, and through international collaboration, aligned with the guiding principles. The principles and recommendations, which account for the unique ways in which humans can use generative AI for health, are the basis of this guidance.”

The guidance focuses on one type of generative AI, large multi-modal models (LMMs), “which can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.” According to the report, LMMs have “been adopted faster than any consumer application in history.” The report outlines the benefits and risks of LMMs, particularly the risks of using LMMs in the health care sector.

The report proposes solutions to address the risks of using LMMs in health care during their development, provision, and deployment, and addresses the ethics and governance of LMMs: “what can be done, and by who.”

In the ever-changing world of AI, this report is timely and provides concrete steps and solutions for tackling the risks of using LMMs.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law.  Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.