Colorado is poised to become one of the first states to regulate how insurers can use big data and AI-powered predictive models to determine risk for underwriting. The Department of Insurance recently proposed new rules that would require insurance companies to establish strict governance principles for how they deploy algorithms and to submit to significant oversight and reporting demands.

The draft rules are enabled by Senate Bill (SB) 21-169, which protects Colorado consumers from insurance practices that result in unfair discrimination on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. SB 21-169 holds insurers accountable for testing their big data systems – including external consumer data and information sources, algorithms, and predictive models – to ensure they are not unfairly discriminating against consumers on the basis of a protected class.

The draft rules regulate the use of predictive models based on nontraditional factors – including credit scores, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, and occupation – that do not have a direct relationship to mortality, morbidity, or longevity risk for insurance underwriting. Insurers that use this sort of nontraditional information, or algorithms based on it, will need to implement an extensive governance and risk management framework and submit documentation to the Colorado Division of Insurance. New York City recently postponed enforcement of its AI bias law amid criticism of vagueness and impracticability, as we recently reported. In contrast, Colorado’s draft insurance rule is among the most detailed AI bias regulations to come out yet. AI regulation is a rapidly growing landscape, and these draft rules may be a sign of what’s to come.


Kathryn Rattigan is a member of the Business Litigation Group and the Data Privacy+ Cybersecurity Team. She concentrates her practice on privacy and security compliance under both state and federal regulations and advising clients on website and mobile app privacy and security compliance. Kathryn helps clients review, revise and implement necessary policies and procedures under the Health Insurance Portability and Accountability Act (HIPAA). She also provides clients with the information needed to effectively and efficiently handle potential and confirmed data breaches while providing insight into federal regulations and requirements for notification and an assessment under state breach notification laws. Prior to joining the firm, Kathryn was an associate at Nixon Peabody. She earned her J.D., cum laude, from Roger Williams University School of Law and her B.A., magna cum laude, from Stonehill College. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.