Many companies are exploring the use of generative artificial intelligence technology (“AI”) in day-to-day operations. Some companies prohibit the use of AI until they get their heads around the risks. Others are allowing the use of AI technology and waiting to see how it all shakes out before determining a company stance on its use. And then there are the companies that are doing a bit of both and beta testing its use.

No matter which camp you are in, it is important to set a strategy for the organization now, before users adopt AI and the horse is out of the barn, much as we are seeing with the issues around TikTok. Once users become accustomed to using the technology in day-to-day operations, it will be harder to pull them back. Users don’t necessarily understand the risk posed to organizations when they use AI while performing their work.

Hence, the need to evaluate the risks, set a corporate strategy around the use of AI in the organization, and disseminate the strategy in a clear and meaningful way to employees.

We have learned much over the last few decades from the explosion of technologies, applications, and tools: social media, tracking technology, disinformation, malicious code, ransomware, security breaches, and data compromise. As an industry, we responded to each of those risks in a haphazard way. It would be prudent to learn from those lessons and get ahead of the use of AI technology to reduce the risk it poses.

One suggestion is to form a group of stakeholders from across the organization to evaluate the risks posed by the use of AI, determine how the organization may reduce those risks, set a strategy around the use of AI within the organization, and put controls in place to educate and train users on its use. Setting a strategy around AI is no different from addressing any other risk to the organization, and similar processes can be used to develop a plan and program.

There are myriad resources to consult when evaluating the risk of using AI. One I found helpful is A CISO’s Guide to Generative AI and ChatGPT Enterprise Risks, published this month by the Team8 CISO Village.

The report outlines risks to consider, categorizes them as High, Medium, and Low, and then explains how to make risk decisions. It is spot on and a great resource if you are just starting the conversation within your organization.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.