
How cyber criminals can corrupt ChatGPT

By Chyna Blackmon

A new artificial intelligence tool that has been used in classrooms, online forums and social media posts is now being used to steal your private information and money.

ChatGPT has gained a lot of attention for its ability to generate realistic human responses to text-based input, particularly in academia.

So far, it’s been used for multiple legitimate purposes.

Malicious purposes

But the Better Business Bureau explained recently that cyber criminals have also taken advantage of the program’s capabilities for malicious purposes, like phishing, impersonation and even romance scams.

“Scammers have historically been on the cutting edge of technology and I don’t see this being any different,” Tom Bartholomy, CEO of the Better Business Bureau of Southern Piedmont and Western North Carolina, said. “As they see that work, as they see people engaging with it, they’re just going to continue to refine it and continue to find other scams that they can feed that same technology into.”

What to look out for

Bartholomy said most of the ChatGPT scams so far have involved phishing and impersonation. For example, scammers posing as Amazon send out emails notifying customers that their accounts have been deactivated and then requesting personal information.

“One of the tells that we’ve always cautioned people on when you get an email or you get a text is that if there’s any misspellings or if the grammar’s poor or if the sentence structure is just off … that that can be a pretty good sign that you’re dealing with a scammer. ChatGPT takes all that away,” Bartholomy said.

Chatbots have been around for years, especially for business customer service assistance. Bartholomy explained that ChatGPT’s advanced conversational model has made it harder for consumers to pick up on red flags.
