
The dark side of AI

Artificial intelligence is helping to protect organisations more effectively, but there is also the potential for it to be used by cybercriminals.

By Tim Ferguson

Fri 25 Jan 2019 @ 16:44

The world of artificial intelligence (AI) is evolving rapidly, from personal assistants on smartphones and voice-activated devices, to thermostats that adjust the temperature based on anticipated needs, to wearable devices whose data helps diagnose medical conditions.

In cybersecurity, AI is helping to deal with the volume of alarms that security teams receive every day. Covering all of these alarms is difficult, and alarm fatigue means some threats can be missed as analysts struggle to filter out false positives.

AI offers real-time analytics for threat detection and prevention, using algorithms and statistical rules to model normal behaviour, and can also help automate security processes. It allows systems to build a picture of normal behaviour from log data, user behaviour and data flows, and then react instantly to any activity that strays too far from that norm.
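
To illustrate the principle in the simplest possible terms (this is a generic sketch, not a description of any particular product), the example below builds a per-user baseline from hourly event counts and flags activity that deviates sharply from it. The user names, counts and threshold are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical hourly event counts per user, derived from log data.
baseline = {
    "alice": [12, 9, 14, 11, 10, 13, 12, 11],
    "bob":   [3, 4, 2, 5, 3, 4, 3, 4],
}

def is_anomalous(user, observed_count, threshold=3.0):
    """Flag activity that strays too far from the user's normal behaviour.

    Uses a simple z-score against the user's historical hourly counts;
    real products use far richer features and models.
    """
    history = baseline.get(user)
    if not history or len(history) < 2:
        return True  # no baseline yet: treat as worth reviewing
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed_count != mu
    return abs(observed_count - mu) / sigma > threshold

# Example: a sudden burst of activity from "bob" is flagged instantly.
print(is_anomalous("bob", 40))    # True
print(is_anomalous("alice", 12))  # False
```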

Artificial intelligence can help detect and prevent threats in other ways too, for example by spotting malware that is lurking on the network, waiting to spring into action. The issue can then be dealt with before it becomes a problem, sparing the organisation many hours of work restoring systems and mitigating the damage resulting from a breach.
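
As a toy illustration of what that spotting might involve under the hood (the features, values and model choice here are assumptions for the sake of example, not drawn from the article or any product), a classifier can be trained on labelled examples of benign and malicious activity and then used to flag new activity for investigation:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features extracted from network and host telemetry:
# [bytes_out_per_min, distinct_destinations, night_time_activity (0/1), payload_entropy]
X_train = [
    [120, 2, 0, 3.1],    # benign examples
    [300, 4, 0, 3.4],
    [80, 1, 0, 2.9],
    [5000, 40, 1, 7.6],  # known-malicious examples (e.g. beaconing, exfiltration)
    [4200, 35, 1, 7.2],
    [6100, 50, 1, 7.9],
]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# New, unlabelled activity observed on the network.
suspicious = [[4800, 38, 1, 7.5]]
print(model.predict(suspicious))  # [1] -> flag for investigation
```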

The weaponisation of AI

But there is another side to AI that could cause problems: the potential for it to be weaponised to evade the systems and tools that defend networks and mitigate compromises.

There is talk of an AI-driven arms race, with attackers looking to get a greater return on their investment. According to a report by the National Natural Science Foundation of China, “hackers weaponised by AI will create more sophisticated and increasingly stealthy automated attacks that will demand effective detection and mitigation techniques”.

One example of how AI could help cybercriminals is spear phishing. AI tools could generate large numbers of convincing fake messages, allowing far more people to be targeted. Or AI could be used to take over chat conversations between customers and service desk operatives, steering the conversation so that personal data is shared or access is granted to a corporate system.

There is also evidence that AI is already being employed by cybercriminals to work out people’s answers to common security questions, such as the name of their favourite pet or their mother’s maiden name. AI could also be used to learn about the defences and tools it encounters, allowing it to create malware that is better at fooling security software designed to spot rogue code.

In this context, it’s easy to understand why senior IT decision makers are concerned that AI technology falling into the wrong hands poses new risks.

But there is also the argument that there is plenty of low-hanging fruit for cybercriminals to target without the capabilities that AI could provide. In addition, developing AI capabilities may well be beyond the reach of many cybercriminals, given the resources and effort required.

Still, there is clearly potential for AI to help well-funded threat actors – particularly those backed by nation states – to come up with new and sophisticated ways to achieve their goals. As a result, there is likely to be ongoing competition between the cybersecurity industry and criminals to outwit each other using AI.

AI is already becoming a feature of cybersecurity, but this may just be the beginning, as the industry strives to keep up with the threats that AI enables.

Learn more

Discover how LogRhythm CloudAI enables faster threat detection through the use of artificial intelligence and machine learning.