As artificial intelligence (AI) becomes more of a household word, it is worth pointing out not only how powerful it can be, but also how some uses raise privacy concerns.

The rapid growth of technological capabilities often outpaces our ability to understand the long-term implications for society. Decades later, we find ourselves looking back and wishing that the development of certain technologies had been more measured and controlled to mitigate risk. The explosion of smartphones and social media is a prime example: studies today show clear negative consequences from the proliferation of these technologies.

AI is still in its early stages, even though it has been under development for years. It is not yet widely used by individuals, though it is clear that we are on the cusp of widespread adoption.

The privacy risks of AI have been outlined in an article published in The Digital Speaker, Privacy in the Age of AI: Risks, Challenges and Solutions. The author succinctly summarizes the concerns about privacy in the use of AI:

Privacy is crucial for a variety of reasons. For one, it protects individuals from harm, such as identity theft or fraud. It also helps to maintain individual autonomy and control over personal information, which is essential for personal dignity and respect. Furthermore, privacy allows individuals to maintain their personal and professional relationships without fear of surveillance or interference. Last, but not least, it protects our free will; if all our data is publicly available, toxic recommendation engines will be able to analyse our data and use it to manipulate individuals into making certain (buying) decisions.

In the context of AI, privacy is essential to ensure that AI systems are not used to manipulate individuals or discriminate against them based on their personal data. AI systems that rely on personal data to make decisions must be transparent and accountable to ensure that they are not making unfair or biased decisions.

The article lists the privacy concerns raised by the use of AI, including violations of individual privacy; bias, discrimination, and job displacement; data abuse; the power of big tech over data; the collection and use of data by AI companies; and the use of AI in surveillance by private companies and law enforcement. The examples used by the author are eye-opening and worth a read. The article sets forth a cogent path forward for the development and use of AI that is broad and thoughtful.

The World Economic Forum published a paper last year (before ChatGPT was in most people’s vocabulary) also outlining some of the privacy concerns raised by the use of AI and explaining why privacy must be included in the design of AI products. The article posits:

Massive databases might encompass a wide range of data, and one of the most pressing problems is that this data could be personally identifiable and sensitive. In reality, teaching algorithms to make decisions does not rely on knowing who the data relates to. Therefore, companies behind such products should focus on making their datasets private, with few, if any, ways to identify users in the source data, as well as creating measures to remove edge cases from their algorithms to avoid reverse-engineering and identification….

We have talked about the issue of reverse engineering, where bad actors discover vulnerabilities in AI models and discern potentially critical information from the model’s outputs. Reverse engineering is why changing and improving databases and learning data is vital for AI use in cases facing this challenge….

As for the overall design of AI products and algorithms, de-coupling data from users via anonymization and aggregation is key for any business using user data to train their AI models….

AI systems need lots of data, and some top-rated online services and products could not work without personal data used to train their AI algorithms. Nevertheless, there are many ways to improve the acquisition, management, and use of data, including the algorithms themselves and the overall data management. Privacy-respecting AI depends on privacy-respecting companies.
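The de-coupling the article describes can be illustrated with a minimal sketch (the data, field names, and salt below are entirely hypothetical, not from the article): direct identifiers are replaced with salted one-way hashes, and records are then aggregated to group level, with small groups suppressed so that edge cases cannot be traced back to individuals.

```python
import hashlib

# Hypothetical user-level records of the kind an AI pipeline might train on.
records = [
    {"user": "alice@example.com", "zip": "02903", "spend": 120},
    {"user": "bob@example.com",   "zip": "02903", "spend": 80},
    {"user": "carol@example.com", "zip": "02906", "spend": 200},
]

def pseudonymize(record, salt="rotate-me-regularly"):
    """Replace the direct identifier with a salted one-way hash."""
    out = dict(record)
    out["user"] = hashlib.sha256((salt + record["user"]).encode()).hexdigest()[:12]
    return out

def aggregate_by_zip(records, k=2):
    """Aggregate to group level; suppress groups smaller than k
    (a k-anonymity-style guard against re-identifying edge cases)."""
    groups = {}
    for r in records:
        groups.setdefault(r["zip"], []).append(r)
    summary = {}
    for zip_code, members in groups.items():
        if len(members) < k:
            continue  # drop small groups rather than expose near-unique records
        summary[zip_code] = {
            "count": len(members),
            "avg_spend": sum(m["spend"] for m in members) / len(members),
        }
    return summary

pseudo = [pseudonymize(r) for r in records]
print(aggregate_by_zip(pseudo, k=2))
# ZIP 02906 has only one member, so it is suppressed from the output
```

A real system would go further (rotating salts, differential privacy, access controls), but even this simple pattern shows how training data can be useful in aggregate without carrying identifiable individuals.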

Both articles provide good background on the privacy concerns posed by the use of AI, along with proposed solutions worth considering for a more comprehensive approach to the future collection, use, and disclosure of big data. Hopefully, we will learn from past mistakes, encourage the use of AI for good purposes, and minimize its use for nefarious ones. Now is the time to develop a comprehensive strategy and work together to implement it. One way we can help is to stay abreast of the issues and use our voices to advocate for a comprehensive approach.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full bio here.