Criminals are exploiting machine learning. Beware of these top vulnerabilities

Tim Olsen, Intelligent Automation Director at Hays Technology

Criminals are early adopters. They are frequently amongst the first to use new technologies for their own nefarious ends, and industries struggle to stay one step ahead and plug the security gaps. As the rollout of machine learning accelerates, it is becoming apparent that the technology is also opening the door to a whole new raft of security vulnerabilities.

We are seeing this emerging at two levels:

1. The direct use of machine learning to circumvent common security tools, and

2. The exploitation of machine learning systems themselves.

Let’s take a deeper look into both aspects.

The use of machine learning for breaking and entering

Criminals are using machine learning as a tool to evade or circumvent security systems, exploiting its ability to recognise images, gather data and learn from outcomes.

CAPTCHA has become a very common means of preventing bots from emulating humans and interacting with applications for criminal purposes. A machine learning model can now be trained to recognise a bike, a bus or traffic lights, for example. As image recognition improves, we are starting to see machine learning achieve very high success rates even against the CAPTCHAs of leading platforms. Once the image is identified, the first level of security is neutralised.
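
To illustrate how low the barrier has become, here is a minimal sketch (assuming PyTorch and torchvision are installed; the image file is a hypothetical local photo) showing an off-the-shelf pretrained classifier labelling exactly the kind of street scene a CAPTCHA presents:

```python
# Minimal sketch: labelling a street-scene photo with a stock pretrained model.
# "street_scene.jpg" is a placeholder for any local image of the kind a
# CAPTCHA might show (bus, bike, traffic lights, etc.).
import torch
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

img = read_image("street_scene.jpg")             # load the image as a tensor
batch = weights.transforms()(img).unsqueeze(0)   # resize/normalise as the model expects

with torch.no_grad():
    probs = model(batch).squeeze(0).softmax(0)

class_id = probs.argmax().item()
print(f"{weights.meta['categories'][class_id]}: {100 * probs[class_id]:.1f}%")
```

No training, no expertise: a few lines of freely available code can already answer "which of these squares contains a bus?".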

Even passwords are vulnerable to machine learning. By using machine learning to study a user’s social media and other public accounts, a dataset of likely password ingredients can be inferred, and targeted password-guessing attempts can be made with far greater success than standard brute-force methods.
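
Some rough back-of-envelope arithmetic shows why this matters; every figure below is an assumption for the sake of illustration, not a measured statistic:

```python
# Illustrative arithmetic only: why a personalised guess list beats blind
# brute force. All figures here are assumptions, not real measurements.
alphabet = 26 + 26 + 10 + 10        # lower, upper, digits, common symbols
blind_space = alphabet ** 8         # every possible 8-character password

# Suppose scraped public profiles yield ~50 personal tokens (names, pets,
# teams, birth years) and the attacker tries simple combinations/suffixes.
tokens, variants = 50, 200
targeted_list = tokens * variants   # ~10,000 tailored guesses

print(f"blind 8-char space: {blind_space:.2e} guesses")
print(f"targeted list:      {targeted_list:,} guesses")
print(f"search space ratio: {blind_space / targeted_list:.1e}x smaller")
```

If even one of those tailored guesses is right, the attacker has skipped a search space billions of times larger.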

Malware is a huge problem for software and OS suppliers, who depend on being able to identify malware and then provide updates to defeat it. Machine learning is being touted as a means of creating ‘chameleon’ malware that morphs its profile to avoid detection, leaving security software unable to identify the trojan horse and terminate it.

Finding the cracks

Rather than using machine learning as a tool in itself, criminals are increasingly exploiting new vulnerabilities within machine learning models themselves, tampering with training datasets to create false biases.

Hackers are able to introduce almost imperceptible changes to source data in order to create false patterns. For example, a hacker might introduce a single white pixel into a variety of images; the model learns to associate that pixel with a chosen outcome, and the hacker can then exploit that association to trigger false behaviour on demand.
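
A toy sketch of that trigger-pixel idea, using invented shapes and class labels rather than any real dataset, might look like this:

```python
# Toy numpy illustration of the white-pixel 'backdoor' described above:
# a small fraction of training images get one corner pixel set to white
# and their labels flipped to the attacker's chosen class. All shapes,
# sizes and class ids are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))      # pretend greyscale training set
labels = rng.integers(0, 10, size=1000)  # pretend true labels

TARGET_CLASS = 7
poison_idx = rng.choice(1000, size=20, replace=False)  # tamper with just 2%

images[poison_idx, 0, 0] = 1.0           # the near-imperceptible trigger pixel
labels[poison_idx] = TARGET_CLASS        # ...now correlated with class 7

# A model trained on this data can learn the shortcut "corner pixel is
# white => class 7"; stamping that pixel on any input then hijacks the
# prediction, while clean inputs behave normally and hide the tampering.
```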

Similarly, if the hacker is able to tamper with the original source data and inject enough cases to create a bias, the model becomes ‘poisoned’ and ineffective. Consider a model that reads news articles to determine market sentiment, then is targeted with fake articles in sufficient volume to skew the outcome.
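
The mechanism can be sketched with a deliberately naive word-count sentiment score and an invented corpus; once injected articles outnumber the genuine ones, the verdict flips:

```python
# Toy illustration of poisoning-by-volume: a naive word-count sentiment
# score flips once mass-produced fake articles outweigh genuine coverage.
# The corpora and sentiment lexicon are invented for this sketch.
from collections import Counter

POSITIVE, NEGATIVE = {"strong", "growth"}, {"weak", "decline"}

def sentiment(corpus):
    counts = Counter(word for doc in corpus for word in doc.split())
    score = sum(counts[w] for w in POSITIVE) - sum(counts[w] for w in NEGATIVE)
    return "positive" if score > 0 else "negative"

genuine = ["strong growth reported", "growth remains strong"] * 10
fakes = ["weak decline everywhere"] * 50   # cheap to generate in bulk

print(sentiment(genuine))           # positive
print(sentiment(genuine + fakes))   # negative: sheer volume wins
```

Real sentiment models are far more sophisticated, but the underlying weakness is the same: they trust that the distribution of their input data is honest.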

Training data for machine learning is commonly available at scale online and is frequently used by developers. If these sets are maliciously tampered with, the models trained on them are compromised. At this scale of data, it would be very difficult to spot a deliberate attempt to introduce bias.
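
One basic defence is to verify a downloaded dataset against a checksum published by its maintainers before training on it. A minimal sketch (the filename and digest below are placeholders, not real values):

```python
# Defensive sketch: refuse to train on a dataset whose contents don't
# match the checksum its maintainers published. Filename and expected
# digest are placeholders for this illustration.
import hashlib

EXPECTED_SHA256 = "<digest published by the dataset maintainers>"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MB chunks so large datasets don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("training_data.csv") != EXPECTED_SHA256:
    raise RuntimeError("Dataset does not match its published checksum - do not train on it.")
```

A checksum only proves the file is the one its maintainers published, of course; it cannot catch a bias planted at the source, which is why dataset provenance matters too.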

In some models built on limited-scale datasets, such as small medical trials, it may even be possible to reverse engineer the machine learning model and identify the original data from its outputs, which could have huge confidentiality implications.
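
Full model inversion is an active research area, but a simpler relative, membership inference, can be sketched in a few lines: a model overfitted to a tiny dataset is measurably more confident on the records it was trained on, and that gap alone can reveal who took part. The sketch below assumes scikit-learn is installed and uses synthetic random data in place of a real trial:

```python
# Toy sketch of membership inference, a simpler relative of the model
# inversion attack mentioned above. The 'trial' data is synthetic random
# noise, not real medical records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_members = rng.random((30, 5))               # 30 'trial participants'
y_members = rng.integers(0, 2, size=30)
X_outsiders = rng.random((30, 5))             # people never in the trial

# An unconstrained forest on 30 records overfits, memorising its inputs.
model = RandomForestClassifier(random_state=0).fit(X_members, y_members)

member_conf = model.predict_proba(X_members).max(axis=1).mean()
outsider_conf = model.predict_proba(X_outsiders).max(axis=1).mean()
print(f"avg confidence on trial members: {member_conf:.2f}")   # noticeably higher
print(f"avg confidence on outsiders:     {outsider_conf:.2f}")

# The confidence gap leaks membership: querying the model can reveal
# whether a given person's record was in the training set at all.
```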

Machine learning can bring huge benefits when harnessed correctly, but as with all technologies, it will be exploited by undesirable elements wherever the opportunity exists. We must employ due diligence and the latest technologies when designing solutions, to combat the criminals and stay ahead of their game.

If you are an organisation or tech enthusiast looking to find out more about security, you can find more blogs and interviews here.

 

Author

Tim Olsen
Intelligent Automation Director, Hays Technology

Tim has worked in digital transformation for 20 years, developing solutions to improve user journeys and experience for blue-chip clients. More recently he grew the UK’s largest RPA CoE and went on to specialise in helping organisations overcome their barriers to scaling automation. He is a thought leader and evangelist for Intelligent Automation, and leads the IA Consulting specialism for Hays.
