Toronto Star

Artificial intelligence versus the hackers

New tech could tilt the balance in favour of improved security

DINA BASS, BLOOMBERG

Last year, Microsoft Corp.’s Azure security team detected suspicious activity in the cloud computing usage of a large retailer: One of the company’s administrators, who usually logs on from New York, was trying to gain entry from Romania. And no, the admin wasn’t on vacation. A hacker had broken in.

Microsoft quickly alerted its customer, and the attack was foiled before the intruder got too far.

Chalk one up to a new generation of artificially intelligent software that adapts to hackers’ constantly evolving tactics. Microsoft, Alphabet Inc.’s Google, Amazon.com Inc. and various startups are moving away from solely using older “rules-based” technology designed to respond to specific kinds of intrusion and deploying machine-learning algorithms that crunch massive amounts of data on logins, behaviour and previous attacks to ferret out and stop hackers.

“Machine learning is a very powerful technique for security — it’s dynamic, while rules-based systems are very rigid,” says Dawn Song, a professor at the University of California at Berkeley’s Artificial Intelligence Research Lab. “It’s a very manual intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily.”

Hackers are themselves famously adaptable, of course, so they too could harness machine learning to create fresh mischief and overwhelm the new defences. For example, they could figure out how companies train their systems and use the data to evade or corrupt the algorithms. The big cloud services companies are painfully aware that the foe is a moving target, but argue that the new technology will help tilt the balance in favour of the good guys.

“We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state,” says Amazon chief information security officer Stephen Schmidt. He acknowledges that it’s impossible to stop all intrusions, but says his industry will “get incrementally better at protecting systems and make it incrementally harder for attackers.”

Before machine learning, security teams used blunter instruments. For example, if someone based at headquarters tried to log in from an unfamiliar locale, they were barred entry. Or spam emails featuring various misspellings of the word “Viagra” were blocked. Such systems often work.

But they also flag lots of legitimate users — as anyone prevented from using their credit card while on vacation knows. A Microsoft system designed to protect customers from fake logins had a 2.8 per cent rate of false positives, according to Azure chief technology officer Mark Russinovich. That might not sound like much, but was deemed unacceptable since Microsoft’s larger customers can generate billions of logins.

To do a better job of figuring out who is legit and who isn’t, Microsoft technology learns from the data of each company using it, customizing security to that client’s typical online behaviour and history. Since rolling out the service, the company has managed to bring down the false positive rate to 0.001 per cent. This is the system that outed the intruder in Romania.
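In miniature, per-account anomaly detection of this kind looks something like the toy sketch below. This is purely illustrative — the account name, the signals and the logic are invented for this article, not Microsoft’s actual system, which learns far richer behavioural profiles.

```python
# Hypothetical toy detector: learn each account's typical login
# countries from history, then flag logins that deviate from it.
# (Invented for illustration; not Microsoft's actual model.)
from collections import defaultdict

class LoginAnomalyDetector:
    def __init__(self):
        # Per-account history of countries seen in legitimate logins.
        self.countries = defaultdict(set)

    def train(self, user, country):
        """Record one legitimate historical login."""
        self.countries[user].add(country)

    def is_suspicious(self, user, country):
        """Flag a login from a country this account has never used."""
        if user not in self.countries:
            return True  # no history at all: treat as suspicious
        return country not in self.countries[user]

detector = LoginAnomalyDetector()
for _ in range(4):
    detector.train("admin", "US")   # the admin always logs in from New York

print(detector.is_suspicious("admin", "US"))  # False: matches history
print(detector.is_suspicious("admin", "RO"))  # True: Romania, never seen
```

Because the history is learned per account, the same login pattern can be routine for one customer and anomalous for another — which is what drives the false-positive rate down.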

Training these security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager who goes by the title of data cowboy. Siva Kumar joined Microsoft six years ago from Carnegie Mellon after accepting a second-round interview because his sister was a fan of Grey’s Anatomy, the medical drama set in Seattle. He manages a team of about 18 engineers who develop the machine learning algorithms and then make sure they’re smart and fast enough to thwart hackers and work seamlessly with the software systems of companies paying big bucks for Microsoft cloud services.

Siva Kumar is one of the people who gets the call when the algorithms detect an attack. He has been woken in the middle of the night, only to discover that Microsoft’s in-house “red team” of hackers was responsible. (They bought him cake to compensate for lost sleep.)

The challenge is daunting. Millions of people log into Google’s Gmail each day alone. “The amount of data we need to look at to make sure whether this is you or an impostor keeps growing at a rate that is too large for humans to write rules one by one,” says Mark Risher, a product management director who helps prevent attacks on Google’s customers.

Google now checks for security breaches even after a user has logged in, which comes in handy to nab hackers who initially look like real users. With machine learning able to analyze many different pieces of data, catching unauthorized logins is no longer a matter of a single yes or no. Rather, Google monitors various aspects of behaviour throughout a user’s session. Someone who looks legit initially may later exhibit signs they are not who they say they are, letting Google’s software boot them out with enough time to prevent further damage.
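The shift from a single yes/no at login to session-long monitoring can be sketched as a running risk score. The signal names and weights below are invented for illustration — Google has not published its scoring — but they capture the idea: each action nudges the score, and the user is ejected the moment it crosses a threshold.

```python
# Hypothetical session-long risk scoring (signals and weights invented).
SIGNAL_WEIGHTS = {
    "new_device": 0.3,                # signing in from unfamiliar hardware
    "unusual_download_volume": 0.4,   # bulk-exporting data
    "settings_changed": 0.2,          # e.g. adding a forwarding address
    "normal_activity": -0.1,          # routine behaviour lowers the score
}
EJECT_THRESHOLD = 0.6

def session_risk(events):
    """Accumulate risk across a session; return (score, ejected?)."""
    score = 0.0
    for event in events:
        score = max(0.0, score + SIGNAL_WEIGHTS.get(event, 0.0))
        if score >= EJECT_THRESHOLD:
            return score, True  # boot the user mid-session
    return score, False

# A session that looks fine at login but turns risky later:
score, ejected = session_risk(
    ["normal_activity", "new_device", "unusual_download_volume"])
print(ejected)  # True: flagged only after behaviour turned suspicious
```

The point is that a hacker who passes the login check still has to behave like the real user for the rest of the session.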

Amazon’s Macie service uses machine learning to find sensitive data amid corporate info from customers like Netflix and then watches who is accessing it and when.

Besides using machine learning to secure their own networks and cloud services, Amazon and Microsoft are providing the technology to customers. Amazon’s GuardDuty monitors customers’ systems for malicious or unauthorized activity. Many times the service discovers employees doing things they shouldn’t — such as putting bitcoin mining software on their work PCs.

Machine learning security systems don’t work in all instances, particularly when there is insufficient data to train them. And researchers and companies worry constantly that these systems can be exploited by hackers. For example, attackers could mimic users’ activity to foil algorithms that screen for typical behaviour. Or they could tamper with the data used to train the algorithms. That’s why it’s so important for companies to keep their algorithmic criteria secret, says Battista Biggio, a professor at the University of Cagliari’s pattern recognition and applications lab in Sardinia, Italy.
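A toy example shows how data tampering works against a naive statistical detector. Everything here — the detector, the numbers, the scenario — is invented for illustration: a system that flags activity far above a learned baseline can be blinded if an attacker slips a few inflated samples into its training data.

```python
# Invented illustration of training-data poisoning against a naive
# "flag anything k standard deviations above the mean" detector.
def train_threshold(samples, k=3.0):
    """Learn an alert threshold: mean + k standard deviations."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean + k * var ** 0.5

clean = [100, 110, 95, 105, 90]        # normal daily download counts
threshold = train_threshold(clean)
print(5000 > threshold)                # True: a 5,000-file grab is flagged

# An attacker poisons the training data with a few huge values,
# dragging the learned baseline upward:
poisoned = clean + [3000, 4000, 5000]
threshold = train_threshold(poisoned)
print(5000 > threshold)                # False: the same grab now slips through
```

This is why Biggio argues the training pipeline and the algorithmic criteria themselves need protecting, not just the network.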

“Security is an arms race, and the security of machine learning and pattern recognition systems is not an exception,” Biggio says.
