Scammers are using generative AI tools, such as ChatGPT and ElevenLabs, to impersonate voices, produce more sophisticated phishing emails, and develop malware. To keep pace with constantly evolving threats, cybersecurity organisations are adopting AI themselves.
Generative artificial intelligence (AI) creates a host of cybersecurity threats, notably in the social engineering scams prevalent in Hong Kong, according to a cybersecurity firm that is itself using AI to combat the danger.
According to Kim-Hock Leow, Asia CEO of Wizlynx Group, a Switzerland-based cybersecurity services firm, the rise of powerful generative AI tools such as ChatGPT will make certain types of fraud more widespread and effective.
“We can see that AI voice and video mimicking continues to seem more genuine, and we know that it can be used by actors looking to gain footholds in a company’s information and cybersecurity systems,” he said.
“Creative” uses of AI tools by scammers
Social engineering techniques, such as those used over the phone or through phishing emails, are intended to trick victims into thinking they are speaking with a real person on the other end of the line.
Scams conducted through internet chats, phone calls and text messages have cheated individuals in Hong Kong out of HK$4.8 billion (US$611.5 million). AI-generated voice, video and text make such frauds even harder to detect.
In one case from 2020, a Hong Kong-based manager at a Japanese bank was duped into authorising a US$35 million transfer request by deepfake audio that mimicked his director’s voice, according to a court document first reported by Forbes.
The crooks ultimately made off with US$400,000, as the manager believed he had emails in his inbox confirming the director’s request.
A similar scam a year earlier had persuaded a British energy firm to send US$240,000 to an account in Hungary.
Governments are beginning to recognise the new threat. In a post on WeChat in February, Beijing’s municipal public security bureau warned that “villains” may use generative AI to “commit crimes and spread rumours”.
AI voice generation the most widely used tool
The US Federal Trade Commission warned in March about scammers using AI-cloned voices to impersonate individuals, saying all they need is a brief audio clip of the person’s voice taken from the internet.
According to Leow, AI-generated text used in phishing emails is a significantly more likely scenario than AI audio or video.
“Everyone gets phishing attacks, but they can be easily detected due to length, typos, or a lack of relevant context to you and your job,” he explained. “However, cybercriminals can now use new AI language models to make their phishing emails more sophisticated.”
Cybersecurity needs to step up
“Based on the knowledge and data that AI can gather and generate over time, cybersecurity professionals can use it to get a more accurate identification of a security system’s risk and vulnerability areas,” said Leow.
“We need to encourage cybersecurity professionals and other industries to use ChatGPT to strengthen defences,” he added. “It’s a double-edged sword that will be used for both cybersecurity and cybercrime.”
Some of the threats identified by cybersecurity firms remain hypothetical for the time being. Digitpol, a global provider of digital risk solutions, has cautioned that AI models could be trained to rapidly generate malware and malicious code that evades security filters and detection signatures.
“We must hope that the owners of ChatGPT and other generative AI models will do everything possible to reduce the likelihood of abuse by bad actors,” Leow said.
How AI is developed also has a role to play
OpenAI’s terms of service for ChatGPT ban the use of its technology for illegal purposes. According to Leow, the firm has technical safeguards in place, but there is a chance that bad actors could evade ChatGPT’s filters.
Cybercrime is expected to cost US$8 trillion globally this year in damages including stolen funds, property loss and reduced productivity, according to a report from Cybersecurity Ventures. That figure would be larger than the gross domestic product of every country except the US and China.
In the face of this danger, cybersecurity specialists will continue to adapt in what is becoming an AI arms race, according to David Fairman, Asia-Pacific chief information officer at Netskope, who previously held senior security roles at major institutions including Royal Bank of Canada and JPMorgan Chase.
“We will see security teams effectively embracing AI in the coming months and years to improve threat identification and automate much of the defence process,” he said. “AI is commonly used in many of the latest cybersecurity products used by security teams today, and we will see this continue to evolve.”