I am not a huge fan of chatbots, as I never end up getting my questions fully answered. I understand the efficiency of using a chatbot for simple questions, but my questions are usually not so easily resolved, so I end up completely frustrated with the process and searching for a human being to help. This happens a lot with my internet service provider. I start with the chatbot, don’t get very far, and then yell, “Can’t you just let me talk to someone who can fix my problem?”

At any rate, it seems that lots of people use chatbots and are quite comfortable giving them all sorts of information. After reading a summary of research done by Trustwave, I’d say that’s probably not a great idea.

Bleeping Computer obtained research from Trustwave before publication showing that threat actors are deploying phishing attacks “using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors.” Using a chatbot “gives a sense of legitimacy to visitors of the malicious sites, as chatbots are commonly found on websites for legitimate brands.”

According to Bleeping Computer, the process begins with a phishing email claiming to have information about the delivery of a package (it’s an old trick that still works) from a well-known delivery company. After the victim clicks “Please follow our instructions” to figure out why the package can’t be delivered, they are directed to a PDF file that contains links to a malicious phishing site. When the page loads, a chatbot appears to explain why the package couldn’t be delivered (the explanation usually being that the label was damaged) and shows the victim a picture of the parcel. The chatbot then requests the victim’s personal information and confirms the scheduled delivery of the package.

The victim is then directed to a phishing page where the victim enters account credentials to pay for the shipping, including credit card information. The threat actors lend legitimacy to the process by sending a one-time password via SMS to the victim’s mobile phone number (which the victim gave the chatbot), so the victim believes the transaction is legitimate.

The moral of this story: continue to be suspicious of any emails, texts, or telephone calls (phishing, smishing, and vishing), and now chatbots, asking for your personal or financial information.


Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security and Artificial Intelligence Teams. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.