According to a new LayerX report, most users are logging into GenAI tools through personal accounts that are not supported or tracked by an organization’s single sign-on policy. These logins to AI SaaS applications are unknown to the organization and are “not subject to organizational privacy and data controls by the LLM tool.” This is because most GenAI users are “casual, and may not be fully aware of the risks of GenAI data exposure.” As a result, a small number of users can expose large volumes of data. LayerX concludes that “[a]pproximately 18% of users paste data to GenAI tools, and about 50% of that is company information.” LayerX also finds that 77% of users rely on ChatGPT when accessing online LLM tools.

We have outlined on several occasions the risk of data leakage with GenAI tools, and this report confirms that risk.

In addition, the report notes that “most organizations do not have visibility as to which tools are used in their organizations, by whom, or where they need to place controls.” Further, “AI-enabled browser extensions often represent an overlooked ‘side door’ through which data can leak to GenAI tools without going through inspected web channels, and without the organization being aware of this data transfer.”

LayerX provides solid recommendations to CISOs, including:

  • Audit all GenAI activity by users in the organization
  • Proactively educate employees and alert them to the risks of GenAI tools
  • Apply risk-based restrictions “to enable employees to use AI securely”

Employees must do their part as well. CISOs can implement operational measures to attempt to mitigate the risk of data leakage, but employees should follow organizational policies around the use of GenAI tools, collaborate with employers on the appropriate and authorized use of GenAI tools within the organization, and take responsibility for securing company data.

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law.  Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.