A growing number of states have enacted laws this year to study artificial intelligence (AI) ahead of possible legislative action to address the emerging technology's expected threats to jobs, civil liberties, and property rights. The specific goals of these study committees vary. Minnesota, for instance, is examining how AI-enabled intelligence sharing by law enforcement might lead to civil liberties violations, while North Dakota is considering how the technology could affect matters ranging from the job market to the 2024 elections. Perhaps leading the pack, Vermont has released a detailed inventory of the AI currently deployed in its state government. The state plans to use this information to develop a robust AI ethics board and audit procedures to protect the rights of Vermont citizens amid future AI developments.
Industry-specific guidance has begun to emerge as well. Many state insurance regulators, for instance, are weighing in on “novel data sources,” or non-traditional data points that insurers may use to inform underwriting decisions, ranging from educational attainment to social media presence. Regulators are reaching different conclusions on the matter, though. Proposed guidance from the Colorado Insurance Commissioner treats motor vehicle reports and criminal history as novel external data sources, while guidance from New York’s Department of Financial Services does not.
Businesses seeking to leverage AI’s transformative power will need to keep a close eye on these developments, and companies would be wise to consider proactively forming an AI governance committee.