Imminent threat: AI-driven cyber attacks hard to uncover

Artificial intelligence is not a revolution available just for cyber-security defenders.

Published: 05th January 2019 08:41 AM  |  Last Updated: 05th January 2019 08:41 AM

Express News Service

HYDERABAD: Artificial intelligence is not a revolution available just for cyber-security defenders. The future will see the deployment of AI-powered cyber attacks that could dethrone the malware operated today by highly skilled, malicious actors, a study has found.

A research paper, “Can Artificial Intelligence Power Future Malware?”, published by ESET Security, observed that AI-powered cyber attacks would be even more difficult to uncover, track and mitigate than current ones. Apart from malware that uses automation, there are various other malicious applications of machine-learning algorithms, it noted.

AI could easily be used as a tool by attackers, who might deploy it to detect cybersecurity defenders, generate new content such as phishing emails and high-quality spam, and spread disinformation by combining legitimate information with fake news tailored to the victim’s preferences, it said.

In what would make things even harder for cybersecurity defenders, artificial intelligence would also help malicious actors identify patterns and mistakes in their generated content, giving them the chance to “enhance” it.

AI could also help find the most effective attack technique. “Attack techniques can be abstracted and combined to identify the most effective approaches. These can be prioritised for future exploitation. In case defenders render one of the vectors ineffective, the attacker only needs to restart the algorithm,” the research said.

When choosing targets, AI could be utilised to decide whether a visitor is worth attacking before serving them malware. Other applications include injecting its own modifications into apps and attacking weakly secured internet-connected security cameras and other Internet of Things (IoT) devices.

However, despite the varied nature of the dangers involved with AI, there are limitations too. “To use machine learning effectively a lot of input samples are needed, every one of which must be correctly labelled. This takes a lot of time, and even with the input, results are not guaranteed.”
