Researchers Used AI to Create a Phishing Scam
Reuters and a Harvard University researcher created a simulated phishing scam. They used top AI chatbots to compose the emails, then tested them on volunteers.
The test's success with 108 elderly volunteers demonstrates how scammers can use AI to run industrial-scale fraud. According to the FBI, phishing is the most-reported cybercrime in the United States, and it is the first step in many online fraud schemes.
In their experiment, the reporters tested six major AI bots to see if they would bypass their built-in safeguards and create phishing emails targeting older people. Most of the bots refused requests that stated the intent was to defraud people. Grok, however, generated a deceptive email and even suggested ways to make the pitch seem more urgent, though it noted that the message “should not be used in real-world scenarios.” The other bots produced emails when the prompts were slightly amended to say the messages were for research purposes or for a novel about scams.
Reuters partnered with Harvard University's Fred Heiding, an expert in phishing, to test the messages with volunteers: 11% clicked on the AI-generated emails. In a separate study, Heiding's research showed that ChatGPT-generated phishing emails can be just as effective as those written by people. Criminals can use AI bots to create endless variations of an email.
Reuters also spoke to three former forced laborers from scam compounds about their use of AI: they routinely used ChatGPT to translate messages and craft responses to questions from their victims.
Full article: We set out to craft the perfect phishing scam. Major AI chatbots were happy to help.