There are many factors to consider when assisting clients with assessing the use of artificial intelligence (AI) tools in an organization and developing and implementing an AI Governance Program. Although adopting an AI Governance Program is a no-brainer, there is no one-size-fits-all form of governance program. Each organization has to evaluate how it will use AI tools, whether (and how) it will develop its own, whether it will allow third-party tools to be used with its data, the associated risks, and what guardrails and guidance to provide to employees about their use.

Many organizations don’t know where to start when thinking about an AI Governance Program. I came across a guide that I thought might be helpful in kickstarting your thinking about the process: Syncari’s “The Ultimate AI Governance Guide: Best Practices for Enterprise Success.”

Although the article only scratches the surface of how to develop and implement an AI Governance Program, it is a good start to the internal conversation regarding some basic questions to ask and risks that may be present with AI tools. Although the article mentions AI regulations, including the EU AI Act and GDPR, it is important to also consider the state AI regulations being introduced and passed daily in the U.S. In addition, when considering third-party AI tools, it is important to question the third party about how it collects, uses, and discloses company data, and whether company data is being used to train the AI tool.

Now is the time to start discussing how you will develop and implement your AI Governance Program. Your employees are probably already using AI tools, so assess the risk and get some guardrails around their use.

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.