A new report has warned that ChatGPT, an Artificial Intelligence (AI)-driven platform capable of giving human-like answers to questions, is being used by hackers to write malicious code designed to steal data.
A team of researchers at Check Point Research (CPR) has spotted cases of cyber crooks misusing ChatGPT to write malicious code. According to the Check Point researchers, cybercriminals are sharing 'infostealers' and encryption tools on underground hacking forums to carry out fraudulent activities.
"Cybercriminals are finding ChatGPT attractive. In recent weeks, we're seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point," said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
The report noted that a thread named "ChatGPT - Benefits of Malware" appeared on an underground hacking forum on December 29, in which the thread's publisher revealed that he was trying to use ChatGPT to recreate malware strains.
"While this individual could be a tech-oriented threat actor, these posts seemed to be demonstrating to less technically capable cybercriminals how to utilise ChatGPT for malicious purposes, with real examples they can immediately use," the report mentioned. In another case, a threat actor posted a Python script on December 21.
When another cybercriminal pointed out that the code's style resembled OpenAI code, the hacker responded that OpenAI gave him a "nice (helping) hand to finish the script with a nice scope."
"Although the tools that we analyse are pretty basic, it's only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools," Shykevich said.
ChatGPT was developed by OpenAI, which is reportedly seeking to raise capital at a valuation of around USD 30 billion.