On October 30, 2023, President Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI EO), which has specific impacts on the healthcare industry. We detailed general aspects of the AI EO in a previous blog post.

Some impacts on the healthcare industry have been outlined in a Forbes article written by David Chou. Chou synthesizes the AI EO into four areas of impact for the healthcare industry:

  • HHS AI Task Force—the AI EO directs the creation of a task force within the Department of Health and Human Services, which will “develop a strategic plan with appropriate guidance,” including policies, frameworks, and regulatory requirements on “responsibly deploying and using AI and AI-enabled technologies in the health and human services sector, spanning research and discovery, drug and device safety, healthcare delivery and financing, and public health.”
  • AI Equity—AI-enabled technologies will be required to incorporate equity principles, including “an active monitoring of the performance of algorithms to check for discrimination and bias in existing models” and efforts to “identify and mitigate any discrimination and bias in current systems.”
  • AI Security and Privacy—The AI EO requires “integrating safety, privacy, and security standards throughout the software development lifecycle, with a specific aim to protect personally identifiable information.”
  • AI Oversight—The AI EO “directs the development, maintenance, and utilization of predictive and generative AI-enabled technologies in healthcare delivery and financing. This encompasses quality measurement, performance improvement, program integrity, benefits administration, and patient experience.” These are obvious use cases where AI-enabled technology can increase efficiency and decrease costs. That said, the AI EO requires that these activities include human oversight of any output.

Although these four considerations are but a start, I would add that healthcare organizations (including companies supporting healthcare organizations) should look beyond these basic principles when developing an AI Governance Program and strategy. Numerous entities regulating different parts of the healthcare industry have provided insight into the use of AI tools, including the World Health Organization, the American Medical Association, the Food & Drug Administration, the Office of the National Coordinator, the White House, and the National Institute of Standards and Technology. All of these entities have issued guidance and proposed regulations on the use of AI tools in the healthcare space to address the associated risks, including bias, unauthorized disclosure of personal information or protected health information, unauthorized disclosure of intellectual property, unreliable or inaccurate output (also known as hallucinations), unauthorized practice of medicine, and medical malpractice.

Assessing and mitigating the risks to your organization starts with developing an AI Governance Program. The Program should encompass both the risks of your employees using AI tools and how you are using or developing AI tools in your environment, and it should provide guidance to anyone in the organization who is using or developing AI-enabled tools. Centralizing governance of AI will enhance your ability to follow the rapidly changing regulations and guidance issued by both state and federal regulators and to implement a compliance program that responds to the changing landscape.

The healthcare industry is heavily regulated, and compliance is no stranger to it. Healthcare organizations must be prepared to include AI development and use in their enterprise-wide compliance programs.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law.  Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.

Jennifer Driscoll

Jennifer Driscoll focuses her practice on investigations, litigation, arbitration, mergers, and counseling. Jen has extensive experience in the medical devices, pharmaceutical, electronic components and automotive industries, with a particular knowledge of industries in Japan and Taiwan. She is a member of the firm’s Business Litigation Group.

An experienced commercial litigator, Jen defends corporations and individuals against alleged antitrust and anti-corruption claims, both civil and criminal. Her recent cases, which include cartel matters, safety audits and agency inquiries, reflect her skills with government investigations and compliance issues. Jen has represented clients in international cartel investigations, merger investigations, and Sherman Act Section Two class action lawsuits in federal courts. She has also counseled international clients about antitrust laws relating to mergers and acquisitions, represented both corporations and individuals in the Antitrust Division’s investigation of the auto parts industry, and defended clients in federal and multi-state investigations involving the False Claims Act and consumer product issues. Jen has been a member of panels discussing antitrust issues, international cartels and unilateral conduct, both in the U.S. and abroad, and has written articles and papers on these topics. View her bio on rc.com