The health care industry, like all industries, is experimenting with AI tools. As we have commented before, the legal issues presented by the use of AI tools apply to all industries, and consideration should be given to mitigating those risks.

Another consideration for the health care industry was recently thoughtfully outlined by Carrie Pallardy of InformationWeek in her post entitled “How AI Ethics Are Being Shaped in Health Care Today.” She posits that as AI is used in health care decisions, there is a “clear potential for harm.” Although a study in JAMA Internal Medicine found that ChatGPT outperformed physicians in answering patients’ questions and could “ease the burden on clinicians” and improve patient care, her interviews with providers led her to conclude that the use of AI tools may harm patients. One of her interviewees concluded: “Will patient harm be inevitable? Yes, the question is how much.”

Those in the health care industry who are contemplating the use of AI tools in the clinical setting should be aware of a number of resources Pallardy lists, including guidelines from the European Union, the American Medical Association, the World Medical Association, the World Health Organization, and the Coalition for Health AI. All of these publications should be considered when determining how to govern the use of AI tools in a clinical setting. Pallardy concludes, and I wholeheartedly agree, that the development of AI tools is far outpacing the ability of organizations and regulators to monitor them, put guardrails around them, evaluate them, and implement appropriate regulation. This leaves the governance and ethical considerations of the use of AI tools in the health care industry largely to health care organizations themselves. That is all the more reason for health care organizations to lead the effort now to determine the appropriate strategy, ethical constraints, and governance for the use of AI tools in patient care, for the well-being of patients.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.