On October 30, 2023, the Biden Administration issued its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The EO outlines how Artificial Intelligence (AI) “holds extraordinary potential for both promise and peril.” Because the Administration “places the highest urgency on governing the development and use of AI safely and responsibly,” the EO is designed to advance “a coordinated, Federal Government-wide approach” to doing so.

The EO outlines eight guiding principles and priorities:

  1. Artificial Intelligence must be safe and secure, which includes understanding and mitigating the risks of AI systems before they are used.
  2. Promoting responsible innovation, competition, and collaboration around AI’s use, including investments in AI-related education, training, development, research, and capacity to promote a fair, open, and competitive ecosystem and marketplace.
  3. A commitment to supporting American workers in the responsible development and use of AI.
  4. AI policies that are consistent with advancing equity and civil rights.
  5. Protecting Americans using AI and AI-enabled products in daily activities from harm.
  6. Protecting Americans’ privacy and civil liberties as AI advancements continue.
  7. Managing the risks from the Federal Government’s own use of AI and increasing its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
  8. Enabling the Federal Government to lead the way to global societal, economic, and technological progress.

The EO tasks the Secretary of Commerce with leading the effort on a number of fronts. For instance, the EO requires that the Department of Commerce’s National Institute of Standards and Technology (NIST), in coordination with the Secretaries of Energy and Homeland Security “and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate,” within 270 days of the Order:

  • Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.
  • Establish appropriate guidelines, procedures, and processes to enable developers of AI to conduct red-teaming tests.

The EO authorizes the Secretary of Commerce to “take such actions, including the promulgation of rules and regulations, and to employ all powers granted to the President by the International Emergency Economic Powers Act…, as may be necessary to carry out the purposes” of certain sections of the EO.

The EO also requires that, within 90 days of the EO and at least annually thereafter, the head of each agency with regulatory authority over critical infrastructure, in coordination with the Cybersecurity and Infrastructure Security Agency, consider cross-sector risks and evaluate potential risks related to the use of AI in critical infrastructure sectors.

The EO establishes the White House Artificial Intelligence Council (the White House AI Council) with representatives from 28 federal agencies and departments, tasked with “coordinating the activities of agencies across the Federal Government to ensure the effective formation, development, communication, industry engagement related to, and timely implementation of AI-related policies.” As we continue to wade through the 63-page EO and consider its implications for different industries, we will update you on the provisions most relevant to your business.



Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law.  Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.