Chinese authorities have arrested a group of alleged hackers in what appears to be the first reported case of attackers using AI to develop ransomware. The suspects reportedly used ChatGPT to refine the code of their home-grown ransomware encryption tool. ChatGPT is banned in China in favor of domestic tools such as Baidu’s Ernie Bot, but many residents bypass such website restrictions using virtual private networks (VPNs).

Legitimate developers have been singing AI’s praises as a way to automate repetitive coding tasks, and many startups now offer AI-powered coding suites. That attackers would use AI to automate and improve their own code makes complete sense: ransomware itself is simply the malicious application of legitimate encryption techniques, so it is hardly surprising that other legitimate tools are finding criminal uses as well.

While AI has legitimate uses, it can also automate and amplify malicious activity. Experts in this nascent industry need to develop strategies to prevent and mitigate the misuse of AI, ensuring it benefits society rather than causing harm.