The Office of the Comptroller of the Currency (OCC) issues a Semiannual Risk Perspective report that “addresses key issues facing banks, focusing on those that pose threats to the safety and soundness of banks and their compliance with applicable laws and regulations.” The most recent report “presents data in five main areas: the operating environment, bank performance, special topics in emerging risks, trends in key risks, and supervisory actions.”

One of the special topics in emerging risks is artificial intelligence (AI). Although the OCC acknowledges the potential benefits of using AI in the banking industry, it also acknowledges the risks associated with its use, particularly those posed by generative AI tools.

The OCC states: “Consistent with existing supervisory guidance, it is important that banks manage AI use in a safe, sound, and fair manner, commensurate with the materiality and complexity of the particular risk of the activity or business process(es) supported by AI usage. It is important for banks to identify, measure, monitor, and control risks arising from AI use as they would for the use of any other technology.” Although this general statement is a no-brainer, banks need better guidance on how to manage the risks associated with AI. Telling the banking industry that the OCC is “monitoring” the use of AI is not particularly helpful.

As a former regulator, I believe it would be helpful for regulators to provide solid guidance to regulated industries on how the use of AI will be regulated. The risks associated with AI are documented and well known, and we are already behind in mitigating them. Regulators must take an active role in shaping appropriate uses of AI and mitigating its risks, not wait until bad things happen to consumers. Regulations always lag reality, and this is no exception.

One risk that is obvious and concerning to me is the use of voice recognition technology by banks and financial institutions to authenticate customers. With AI-generated tools now able to replicate voices with astonishing accuracy, threat actors are and will be using deepfakes against financial institutions to perpetrate fraud. Why don’t we just start there? Let’s figure out how financial institutions can identify customers without using Social Security numbers or voice recognition.
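To make that alternative concrete, below is a minimal sketch of one way a customer could be verified without a voiceprint or a Social Security number: a time-based one-time passcode (TOTP) from an authenticator app. This is purely illustrative and not drawn from the OCC report; it assumes the Python pyotp library, and the enrollment flow, customer email, and bank name are hypothetical placeholders.

# Illustrative sketch only: authenticating a customer with a time-based
# one-time passcode (TOTP) instead of a voiceprint. Assumes the pyotp
# library (pip install pyotp); the flow below is hypothetical and not
# drawn from the OCC report.
import pyotp

# Enrollment: the bank generates a per-customer secret and shares it once,
# typically as a QR code the customer scans into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Enroll via:", totp.provisioning_uri(name="customer@example.com",
                                           issuer_name="Example Bank"))

# Login: the customer submits the current six-digit code from the app, and
# the bank verifies it against the shared secret. Unlike a voiceprint, the
# code cannot be cloned from a recording of the customer's voice.
submitted_code = totp.now()  # stand-in for the code the customer types in
if totp.verify(submitted_code, valid_window=1):  # tolerate one 30-second step of clock drift
    print("Customer authenticated")
else:
    print("Authentication failed")

The point of the sketch is simply that authentication can rest on a secret the customer holds rather than a biometric that AI tools can now convincingly imitate.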

Linn Foster Freedman


Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.