This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs in order to comply with the myriad, evolving laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited at this time and in the early stages of development, current law and pending legislation offer a forecast of standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs in order to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) builds on the key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force charged with regulating AI in accordance with the EO's principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user's ability to understand the technology's uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization's control over deployed AI tools. Internal controls, such as evaluations, policies, and oversight bodies, can help ensure continuous control throughout the AI's life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to ensure that internal and external stakeholders have equitable opportunities to participate in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles to assist organizations in proactively mitigating legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.

Privacy and Security Laws

The design, deployment, and use of AI will have implications for a health care entity’s specific obligations under key federal privacy and security laws. The Health Insurance Portability and Accountability Act (HIPAA) and 15 U.S.C. Sec. 45(a)(1) (Section 5) generally govern data practices by health care entities.

  • HIPAA Privacy Rule: HIPAA generally limits a covered entity’s use and disclosure of electronically transmitted or maintained protected health information (PHI) to certain discrete, permitted purposes, such as treatment, payment, or health care operations, or other purposes authorized by the patient. Effective HIPAA compliance requires an organization to map, monitor, and control the flow of PHI in accordance with permitted access, use, and disclosure. AI will likely strain existing privacy controls because AI tools often require access to and processing of large amounts of data, which may exceed existing HIPAA privacy standards such as the minimum necessary standard. Organizations will need to consider methods to segment AI’s access to PHI, ensure lawful processing, and avoid inappropriate use by, and disclosure to, third-party developers.
  • HIPAA Security Rule: HIPAA also requires health care entities to safeguard PHI with administrative, physical, and technical safeguards. AI has already begun to significantly disrupt existing security standards in a number of ways, including the creation of software vulnerabilities. Bad actors also leverage their own AI to bypass cybersecurity protections. Effective compliance programs will likely need to adapt to the reality of AI as an emerging cybersecurity risk.
  • Section 5: The regulation of AI extends beyond PHI to personal information. Under Section 5, the Federal Trade Commission (FTC) may consider and pursue as illegal “unfair or deceptive acts or practices in or affecting commerce” that cause or are likely to cause reasonably foreseeable injury. AI access to personally identifiable data, such as personal and/or health information maintained by health care applications, could trigger Section 5 liability under theories of deception and unfairness, as well as liability under the FTC’s data breach rule where personal information is impermissibly used to train the model. Notably, this information cannot be easily removed once the model has been trained. AI’s potential to compromise existing cybersecurity could also result in liability. Finally, companies developing AI tools must comply with Section 5 with respect to the claims they make regarding their tools’ capabilities.

Pending Legislation

Congressional activity has exploded with AI’s increasing prevalence. Legislative proposals are highly varied, ranging from the establishment of an AI legislative committee to required studies on AI’s environmental impact. Several bills, including the following, may substantively impact AI compliance efforts if signed into law.

Key Takeaways

As AI becomes further intertwined with health care, health care entities will need to strategize and plan for AI in their compliance infrastructure. Like any other tool, AI has benefits and drawbacks that need to be taken into account in compliance efforts. As a practical matter, health care entities would be well-advised to include the following in their compliance efforts:

  1. Inventory Existing and Upcoming AI Use: AI functions will likely intersect with various internal and external systems when supporting a health care entity’s services. To effectively integrate and monitor AI’s impact on existing systems, health care entities would be well-advised to begin to inventory the existing and upcoming use of AI in their organizations, and conduct data mapping, risk assessments, and audits to understand and prepare to strengthen organizational compliance before taking on AI risk.
  2. Education: Although current guidance on AI is unsettled, health care entities need to be on the lookout for key updates. Under the EO’s direction, HHS intends to implement health care-centric AI regulations and industry guidance through the AI Task Force and Office of the Chief Artificial Intelligence Officer (OCAIO). Industry segment-specific compliance program guidance from the Office of the Inspector General (OIG) may also focus on AI’s use in health care. Meanwhile, NIST’s National Artificial Intelligence Advisory Committee will provide general recommendations for AI built upon the RMF.
  3. Adaptation: Compliance plans that do not take into account the risk of AI are likely to become outdated with the rise of an evolving body of laws applicable to AI. It is important for health care entities to prepare for organizational change and consult with legal counsel prior to implementing new AI tools and navigating the numerous compliance requirements that currently exist, as well as those that may be implemented in the near future. Health care entities’ ability to adapt to emerging laws and regulations applicable to AI will enable them to manage and mitigate the inevitable risks associated with AI tools and systems.
Kathleen Healy


Kathleen Healy advises health care entities on transactional and complex health care regulatory matters. She represents a wide range of clients, including hospital systems, behavioral health providers, physician groups, accountable care organizations, clinically integrated networks, and data collaboratives. She is a member of the firm’s Health Law Group and the Health Care Industry, Artificial Intelligence, and Data Privacy + Cybersecurity Teams.