I hang out with a lot of Chief Information Security Officers (CISOs), so this piece is for them. Of course, it will also be of interest to any security professional struggling to assess the risks of large language models (LLMs).

According to DarkReading, the Berryville Institute of Machine Learning (BIML) recently issued a report entitled “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” which is designed “to provide CISOs and other security practitioners with a way of thinking about the risks posed by machine learning and artificial intelligence (AI) models, especially LLMs and the next-generation large multimodal models so they can identify those risks in their own applications.”

The core issue addressed in the report is that users of LLMs do not know how developers collected and validated the data used to train the models. BIML found that the “lack of visibility into how artificial intelligence (AI) makes decisions is the root cause of more than a quarter of risks posed by LLMs….”

According to BIML, risk decisions are being made by large LLM developers “on your behalf without you even knowing what the risks are…We think that it would be very helpful to open up the black box and answer some questions.”

The report concludes that “[s]ecuring a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This architectural risk analysis is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.”

CISOs and security professionals may wish to dive into the report by requesting a download from BIML. The 28-pager is full of ideas.

Linn Foster Freedman


Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.