Cybercriminals are using artificial intelligence chatbot ChatGPT to quickly create emails or social media posts to lure the public into scams, an anti-virus software provider is warning.
ChatGPT is set to be expanded across Microsoft’s products including Word, PowerPoint and Outlook after the tech giant invested billions of dollars in the chatbot’s maker, OpenAI, in January.
But while the chatbot has caused a wave of excitement over its ability to write poems and stories and answer questions, it also has a darker side.
Kevin Roundy, senior technical director at Norton, said he was excited about what chatbots like ChatGPT could offer but was also wary of how cybercriminals could abuse them.
“We know cybercriminals adapt quickly to the latest technology, and we’re seeing that ChatGPT can be used to quickly and easily create convincing threats.”
Those threats included allowing cybercriminals to quickly create email or social media phishing lures that were more convincing, making it hard for people to tell what was legitimate.
On top of that ChatGPT could also generate code. Roundy said while the chatbot made developers’ lives easier with its ability to write and translate source code it could also make it easier for cybercriminals by making scams faster to create and more difficult to detect.
Cybercriminals could also use ChatGPT to create fake chatbots that impersonate humans or legitimate sources, such as a bank or government entity, manipulating victims into handing over personal information so the criminals can access sensitive accounts, steal money or commit fraud.
Roundy urged the public to avoid chatbots that did not appear on a company’s official website or app, and to be careful about sharing any personal information with someone they were chatting to online.
He also warned against clicking on links that came via unsolicited phone calls, emails or messages.