Linux Format


Jonni Bidwell interviews Laura Bell on the security risk that puny humans present.


Laura Bell on protecting humans from themselves

On new areas of security: “My research area is Human-centric Security, which is sort of made-up.”

Laura Bell is CEO of SafeStack, a software security company based in Wellington, New Zealand. Current security discussions tend towards catchily named and beautifully branded vulnerabilities, but SafeStack wants to talk about the elephant in the room: humans. We caught up with her at OSCON ’15 to talk about how to address the plenitude of our species’ security failures.

Linux Format: So we’ve seen an awful lot in the news about high-profile software vulnerabilities with logos, people talking about ‘cyber armageddon’, and a lot of people generally worried about the state of software security. But by far the most common attack vector is people giving away information. Humans are still fallible, maybe even more fallible than Adobe Flash?

Laura Bell: Ha. Some humans maybe, but we can’t decommission them so easily from our systems. We’re at an interesting time for the security world: there’s a lot of money going around, and if you believe the statistics from Gartner and the like, tens of billions of dollars are being spent on security devices and software. So if you want to make money writing cyber and threat software, it’s a good space to come and play.

But the reality of the situation is that we’re not looking at one of the most fundamental issues we have, which is that people (the way we’ve evolved, how our communities work and how we have adapted to the world we live in) work by connections and by trust. These are traits that are built into us; they’re in our foundations. But we do nothing to try and understand how that trust and that connectivity relate to the threatening world we live in now. There has always been a threat to us as a species, but this is a very different kind of threat: there’s no easy way to spot things coming, like the predators of old.

LXF: So how does SafeStack help with detecting this?

LB: SafeStack has two sides. One thing we do is work with software developers, getting them to see security not as this big scary thing that’s separate from them, but as something they should be thinking about all of the time. Not in a scary, overwhelming way, but in little changes they can make all the way through. Some of this is actually teaching developers that it’s OK to say “I don’t know”, that it’s OK to say “if I was thinking in a slightly malicious way I’d totally be able to rob this system using…”. It’s making it acceptable for us to have that conversation, and to find tooling and ways of communicating all through our process so that we get security built in all the way through. We’re not about trying to get rid of border devices and pentesting, we just want to make them work for it. The other side is research: my research area is Human-centric Security, which is sort of a made-up term that I’ve been using; there’s probably a real term for it somewhere.

LXF: I dunno, HcS sounds pretty good.

LB: Yeah, why not; if you’re going to coin a term it may as well have a good acronym. This started with the idea that I’d been penetration testing networks for a very long time, most of my career in fact. In network pentesting we’re interested in the connections between items: if you compromise one system you don’t just sit there and think “I’m done now”, you say “Ooh, where can I go next?” and it’s like a crazy treasure hunt. When I started looking at the people side of things, I realised that when we do our testing and education we treat our people as isolated, autonomous units. We never treat them as a collection of systems. So I was pondering “does this make a difference?” And what I’ve been finding in my research and in my development of AVA is that it does.

That connection between us, and between our people and our systems (who’s connected to what system and in what way) is really crucial to knowing who in your organisation needs the most protection. Because some people are more interesting and more valuable to an attacker than others. And it’s not always the people we expect. People think you’d go after the sysadmin and the CIO. But I wouldn’t; I’d go after the front-end tellers in a bank, I’d go after people who’d been in the organisation eight years, who’d moved jobs three times and are really well liked. Because those people, over the course of gathering those connections and making their career, have amassed a massive amount of power in that system: power we don’t see, because they’re just the helpful people that get the job done.

LXF: So they are kind of like high-degree nodes on a graph?

LB: Exactly. Graph Theory [the study of dots joined by lines, sometimes with arrows or colours – Tech Ed, who’s a failed group theorist] comes up a lot in this line of work. There’s a really good book I read way back when I was studying that really got me interested in this connectivity thing. I knew about it from marketing; we’ve been using this model there for a long time, but never for human security. For me it does two things: it measures the security of human systems, which has never been done before, and it gives us a vehicle for communicating this with everyone else. Because unfortunately in security we have a really bad habit of being completely awful at communicating with human beings. So looking at a graph, looking visually at the connections between people, is actually something that we can all understand; it gives us a language, if you like.
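[To make the graph idea concrete, here’s a minimal Python sketch, nothing from AVA itself, that models staff and the systems they touch with the networkx library and ranks people by degree centrality. Every name and connection below is invented.]

# Minimal illustration (not AVA's code): model staff and the systems
# they touch as a graph, then rank people by how connected they are.
import networkx as nx

G = nx.Graph()
# Hypothetical edges: person <-> system, or person <-> person.
G.add_edges_from([
    ("alice", "payroll-db"), ("alice", "crm"), ("alice", "bob"),
    ("bob", "crm"), ("carol", "payroll-db"), ("carol", "alice"),
    ("dave", "build-server"),
])

# Degree centrality: the fraction of other nodes each node touches.
centrality = nx.degree_centrality(G)
people = {"alice", "bob", "carol", "dave"}
for name in sorted(people, key=centrality.get, reverse=True):
    print(f"{name}: {centrality[name]:.2f}")

The well-liked, well-connected alice comes out on top, which is exactly the “high-degree node” the question was getting at.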

LXF: You’ve mentioned AVA; how does that work then?

LB: AVA is a research project that I released at Kiwicon last year, and it was supposed to be just a small thing. It’s a three-stage system that looks at the connections between people and how they react to risk. Stage one is the Know stage, and that’s about mapping those connections. At the moment we pull from easy sources within organisational networks and technologies, for example LDAP or Active Directory: when does a password expire, how many groups and which groups is someone a member of, when did they log in last, how many times did they log in? All that kind of stuff.

We also pull from Google Apps, and we pull all of these things together: we map out the group memberships and connect everyone together. The aim of Know is that you can see the graph of your people and you can actually start weighting them, based on who has the most attributes in their technological profile that make them high-value. So ‘password never expires’ is never really a great thing to see; or they’re a member of lots of groups, or they’re a member of groups that are known to be high-powered groups. Those are all interesting. So in the Know stage you can see this picture, which is a great start. The second phase, the Test phase, is about injecting threats into this environment.
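[Before she gets to Test, a rough sketch of the kind of weighting the Know stage describes. The attribute names and scores here are invented for illustration, not AVA’s actual scheme.]

# Hypothetical scoring of directory attributes, in the spirit of the
# Know stage: flag the profiles that look most valuable to an attacker.
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    password_never_expires: bool
    groups: list[str] = field(default_factory=list)

# Invented list of groups we treat as high-powered.
HIGH_POWERED = {"domain-admins", "finance-approvers"}

def risk_weight(p: Profile) -> int:
    score = 0
    if p.password_never_expires:
        score += 3                  # "never really a great thing to see"
    score += len(p.groups)          # member of lots of groups
    score += 5 * len(HIGH_POWERED & set(p.groups))
    return score

staff = [
    Profile("alice", True, ["crm-users", "finance-approvers"]),
    Profile("bob", False, ["crm-users"]),
]
for p in sorted(staff, key=risk_weight, reverse=True):
    print(p.name, risk_weight(p))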

LB: Now this has already been done to some extent, with things like Phish5 and PhishMe and all these kinds of spearphishing tools; there’s a lot of red teaming that goes on. The idea of Test isn’t just to do it as a one-off thing against individuals, but to do it against groups, and to do it in different communication media. I believe that, while we still use email, for many communications between groups email isn’t the first choice anymore. You’ll have people with a private Facebook group where they exchange messages, they’re sharing files on Dropbox… We are communicating in so many different ways now, ways that none of our tools take into account, so as we grow the tool we’re actually trying to run our tests through all of these communication mechanisms. There are many challenges to that which I won’t bore you with. The idea is that we inject a threat, so different types of attacks, and see who passes and fails. The important bit here is not to shame people, so we don’t say in the results that Bob from Accounting failed, because we don’t want Bob to get fired. It’s not about Bob.

LXF: So you’re saying Bob’s mistake is really just a symptom of a bigger problem with company staff and practice generally?

LB: Yes, it’s about the people and it’s about the environmen­t that you’ve put Bob or whoever in, all those permission­s you gave them. So we’ve injected the threat and seen what happened and now we want to share that in some way.

LB: Phase 3 is Analyse. If you combine everything you know from the Know phase (the graph of people and all those attributes) with your test results, you now have a very visual way to go “Right, these parts of my organisation probably need a little bit more defence or education than others”. The beauty of that is that, while at the moment we’re really rubbish at education in this space, the idea is that we can get better at it, run tests before and after, and then see if there’s change. If there’s a change over time, then we know something is working; we may not always be able to correlate it with what has changed, but it’s good to be able to see over time where things are headed. Because we want to see improvement. And we don’t just want to see an improvement in people spotting phishing attacks; we want to see more communication about security issues, we want people raising more internal bugs. So we’re making human security something that we can do continuously rather than as a special one-off event.
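[And a hedged sketch of that before-and-after comparison, with made-up pass rates per group over successive test runs.]

# Invented numbers: track each group's pass rate across test runs and
# flag whether things are headed in the right direction.
history = {
    "accounting": [0.40, 0.55, 0.65],
    "engineering": [0.75, 0.72, 0.80],
}

for group, rates in sorted(history.items()):
    trend = "improving" if rates[-1] > rates[0] else "needs attention"
    path = " -> ".join(f"{r:.0%}" for r in rates)
    print(f"{group}: {path} ({trend})")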

LXF: When you talk about injecting attacks, how do you craft these things?

LB: Well, luckily lots of businesses have been keeping successful phishing emails sent against their companies for years. It turns out that all I had to do was go and ask them. There are several universities globally, as well as banks, that have corpuses of thousands of emails they didn’t really know what to do with; they were just sat on a server somewhere. What we’ve done is pull that corpus together, particularly on the email side, and say “OK, we can use these examples and just change the links”. That’s one thing, though it may not be so effective. So what we’re trying to do is ask “Is there a way we can use this whole corpus to generate new ones?”, and the hope is that over time we’ll be able to publish this corpus so that people can use it in other security tools, so they can tune for the types of attacks that they’re going to see.
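[The “just change the links” step is easy to picture. A minimal sketch, with an invented tracking URL, that rewrites every link in a sample to a harmless test endpoint:]

# Invented illustration: point the links in a real phishing sample at
# a harmless tracking endpoint instead of the attacker's server.
import re

TRACKER = "https://ava-test.example.com/click?id={id}"  # hypothetical URL

def rewrite_links(body: str, campaign_id: str) -> str:
    url = TRACKER.format(id=campaign_id)
    # Crude URL matcher; a real tool would parse the message properly.
    return re.sub(r"https?://\S+", url, body)

sample = "Your invoice is ready: http://evil.example.net/invoice.pdf"
print(rewrite_links(sample, "campaign-42"))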

LB: AVA is, for me, an important tool, and it’s an important conversation we need to have about protecting our people. But it wouldn’t take a whole lot of imagination to change AVA from a defensive tool into something more offensive, to use it to do bad things instead. We’re big advocates of open source at SafeStack; we think that security tools shouldn’t be magic and shouldn’t be secret. We should be open to scrutiny and improvement, and we should have someone to challenge us and say “This is wrong, get better”. But here we’ve got a tool which people could just download and do bad things with, bad things against other people. I don’t mind if it’s against computer systems, the law can deal with that, but when it’s against people it touches empathy, it starts to feel creepy.

LXF: Yes, it becomes a moral issue as well as a legal one.

LB: Right, and I’ve found since releasing this tool that reactions have been very polarised: there are people who understand the benefits and there are people who want it dead in the water. My talk isn’t saying we know the answer to this but, much like Paul Fenwick said in his keynote this morning, there are questions we need to ask and discussions we need to have. One of them is “Is it safe to open source weaponisable code?” We’re using this against large numbers of human beings, and that makes everyone feel uncomfortable, and that includes us.

LXF: How does AVA fit with the legal framework? I mean, we’ve heard a lot about the Wassenaar Arrangement lately; it seems like lawmakers could really hurt the industry here?

LB: Um, I don’t know? The entire body of law around security tooling, exploit funding and vulnerability development is an absolute mess. I was very pleased that there was an opportunity to comment on Wassenaar, but the people that are making these decisions don’t understand the decisions that they are making. From my perspective I know that, at least at this point, AVA is illegal in Germany, just because German law is like that, and I have a feeling that in general it’s the kind of use case that current law barely tolerates. This is a big mess of an area; I have not found a lawyer yet who can give me a definitive answer about whether I can go to prison if somebody uses this, which is very troubling. But that’s the world we live in. When we write these tools we want to do good things, we want to help people, but this is complex, very complex. Privacy law-wise, we work very closely with privacy commissioners in New Zealand and we’ve talked to their counterparts in Australia. Basically we’re trying to be very proactive in that space, but even then they will quite openly say “This is not a use case we imagined for a computer system”, so we can do our best, but we have to be ready to adapt as they start to clarify their positions and think about these challenges.

LXF: So they’re sort of struggling to catch up with the trade, and in the process moving the goalposts as they develop these laws?

LB: Yes, and that’s not just for SafeStack; it’s across the entire security industry. The whole industry is watching to see how it can do its job safely, because the work that security and vulnerability researchers do is very important, and we don’t have a safe environment in which to operate.

When we write policies we do try to make sure that they don’t tie directly to specific technologies, but there’s a really fine line between making those laws actionable, having them protect against known issues, and having laws that can grow with us. I don’t think our legal frameworks, the way they develop today, can move at the speed the tools do.

On power & responsibility: “There are questions … one of them is ‘Is it safe to open source weaponisable code?’”

