ChatGPT is amazing! It is all the rage. Its capabilities are awe-inspiring (except to educators who are concerned their students will never write a term paper again). It has reportedly passed a bar exam and a physician board exam, and it has written sermons, research papers, and more.

But all amazing technology has its ups and downs. Without taking anything away from ChatGPT, it is important to understand that precisely because it is so capable, the bad guys want to use it just as much as we do.

Without getting into a much longer discussion of the ethical considerations of using AI (a topic worth researching on your own), there are some concerns being raised about the use of AI products, including ChatGPT, that are worth keeping an eye on.

According to Axios, researchers at Check Point Research recently discovered that hackers were using ChatGPT to “write malware, create data encryption tools and write code creating new dark web marketplaces.” It is also being used to generate phishing emails.

Similarly concerning, some software developers are using AI to write code and are "creating more vulnerable code." Those using AI were "also more likely to believe they wrote secure code than those without access."

ChatGPT and other AI assistants are extremely helpful when used for everyday purposes, but they can also be wielded maliciously by threat actors as just another tool in the toolbox for attacking victims. Being aware of how new technology can be misused is an important way to stay vigilant and avoid becoming a victim.

Linn Foster Freedman


Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm's Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General's Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.