Researchers at Meta, the owner of Facebook, released a report this week indicating that since March 2023, Meta “has blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies.” These links are unique ChatGPT-themed web addresses designed to deliver malicious software to users’ devices.

According to Meta’s report, “to target businesses, malicious groups often first go after the personal accounts of people who manage or are connected to business pages and advertising accounts. Threat actors may design their malware to target a particular online platform, including building in more sophisticated forms of account compromise than what you’d typically expect from run-of-the-mill malware.”

In one recent campaign, the threat actor “leveraged people’s interest in OpenAI’s ChatGPT to lure them into installing malware…we’ve seen bad actors quickly pivot to other themes, including posing as Google Bard, TikTok marketing tools, pirated software and movies, and Windows utilities.”

The Meta report offers useful guidance for guarding against these attacks and for responding in the event a device is compromised.

Bad actors will use the newest technology as weapons. According to Cyberscoop, Meta researchers have said “hackers are using the skyrocketing interest in artificial intelligence chatbots such as ChatGPT to convince people to click on phishing emails, to register malicious domains that contain ChatGPT information and develop bogus apps that resemble the generative AI software.”

With any new technology comes new risk. Staying abreast of these risks and understanding how threat actors can pivot from personal accounts to business accounts may help prevent attacks against individuals and their companies.

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law.  Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.