The Fiji Times

Do it the right way

- IT professionals who specialise in cybersecurity are in heavy demand. Picture: https://www.ign.com/articles ■ ILAITIA B. TUISAWAU is a private cybersecurity consultant. The views expressed in this article are his and not necessarily shared by this newspaper.

MOST information technology (IT) and cybersecurity experts will advise you that penetration testing (pentesting) is essential for assured network security, and that you are vulnerable unless you do it regularly – preferably annually, or after any major change in systems or network infrastructure.

Then there are other security experts who tell you pentesting is a waste of time and money.

Unfortunately, neither of these views is quite correct. The reality of pentesting is more complicated and subtle.

Penetration testing has a fairly broad definition. The most obvious one involves breaking into a network to demonstrate that you can.

It might also mean trying to break into a network just to document vulnerabilities. It may involve a remote attack, physical penetration of a data centre, or social engineering attacks on staff and/or facilities.

It probably requires the use of commercial or open-source vulnerability scanning tools and software packages, or relies on skilled white-hat or grey-hat hackers.

Or it may just evaluate software version numbers and patch levels and make inferences about vulnerabilities – although this falls more under IT risk assessment, audit and assurance.

It’s going to be costly, and you’re guaranteed a thick report when the pentesting is done. For some vulnerability and risk assessments I have done for financial institutions in Fiji, I submitted a 10-page summary report, but the accompanying appendices, which I earmarked for the IT department to follow up, ran to between 150 and 200 pages!

And therein lies the real problem. You don’t want a thick report documenting the ways your network is insecure or vulnerable.

You don’t have the funds to remediate them all, so the document will probably be filed away.

Or, even worse, it’ll be discovered during a cyberattack breach and any ensuing lawsuit.

Do you really want an opposing lawyer or judge to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them?

Probably the safest thing you can do with the pentest report, after you read it, is file or burn it!

Given enough time, money and computing power, a pentest will find vulnerabil­ities; there’s no two ways about it.

And if you’re not going to fix all the uncovered vulnerabil­ities, there’s no point uncovering them.

But there is a way to do penetration testing usefully. For years I’ve been advising clients that good security consists of protection, detection and response – and, like a three-legged stool, you need all three to have good security.

However, before you can do a good job with any of these, you have to assess your security.

And done right, penetration testing is a key component of a security assessment, usually following up on a vulnerability assessment.

Pentests can then be used to improve your cybersecurity maturity and overall security posture – all components of complying with international cybersecurity standards such as ISO 27001.

I like to recommend that clients restrict penetration testing to the most commonly exploited critical vulnerabilities, such as those found on the OWASP Top 10 list.

If you have any of those vulnerabilities, you really need to sort them out! Be mindful that pentests can also irrevocably damage your IT systems, especially if not conducted by properly certified experts.

If you think about it, penetration testing is an odd sort of business. The only place I can think of that does something similar is the military – they run these exercises (red team versus blue team) all the time – but how about in business?

Do we hire security professionals to try to break into our warehouses or offices? Do we attempt to defraud ourselves?

Penetration testing has become big business recently because computer systems have grown so complicated and are so poorly understood.

We know about thieves and kidnappers and fraud, but we don’t know much about cybercriminals outside of TV or movies.

We don’t know what’s dangerous today, and what will be dangerous tomorrow, especially in cyberspace. So we hire pentesters in the mistaken belief that they can explain it after finding vulnerabilities in our networks and systems.

There are two simple reasons why you might want to consider conducting a penetration test. One, you want to know whether major vulnerabilities are present because you intend to fix them.

And two, you need a very big, scary report to persuade your boss to allocate more budget to IT. If neither is true, I’ll save you money by giving you this free pentest result: yes, you’re vulnerable!

There is a standard security study, replicated every so often, in which researchers litter USB sticks around an organisation’s compound and wait to see how many people pick them up and plug them in, letting the autorun function install innocuous malware on their computers.

These studies are great for making cybersecurity-savvy staff feel superior. The researchers get to demonstrate their cybersecurity expertise and use the results as “a-ha moments” for others.

“If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Unfortunately, I suggest the problem isn’t the users: it’s that we’ve designed the security of our computer networks and systems so badly that we demand users do all of these counterintuitive things.

Why can’t users choose easy-to-remember passwords?

Why can’t they click on links in emails with peace of mind?

Why can’t they plug a USB stick into a computer without second-guessing themselves?

Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a tradeoff: a more secure system is less functional and definitely more annoying, while a more capable, flexible and powerful system is often less secure.

This “either/or” thinking results in systems that are neither usable nor secure. We treat it as a zero-sum game, and the end user suffers.

Our industry is littered with examples.

Firstly: web browser security warnings. Despite the very best of intentions, these warnings just numb people. Nothing works because users know the warnings are invariably meaningless. They don’t see “the certificate for this site has expired; are you sure you want to continue?” They see, “I’m an annoying message preventing you from accessing this website. Click here to get rid of me.”

Secondly: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users eventually realise this: so they store those passwords in their web browsers, or they never even bother trying to remember them, clicking the “I forgot my password” link to bypass the system completely – effectively falling back on the security of their email account or smartphone.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing or compromised website. Everyone wants to know how to train users not to immediately click on suspicious links. But you can’t train users not to click on links when you’ve spent the past two and a half decades teaching them that links are there to be clicked! That’s the whole point of the World Wide Web revolution which began in the mid-1990s!

We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just diverts us from the real problems.

Usable cybersecur­ity does not mean “getting people to do what we want”. It means creating security that works all the time despite what people do.

It means security solutions that deliver on all users’ cybersecurity goals without – as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it – “stress of mind, or knowledge of a long series of rules”.

I’ve been harping on this for years. Only recently have many security updates begun to happen automatically, so users don’t have to remember to update their systems manually.

Nowadays, opening a Word or Excel file inside Google Docs isolates it from the user’s computer, so they don’t have to worry about embedded malware.

Programs can also run in sandboxes or virtual machines that don’t compromise the entire computer. We’ve come a long way in alleviating this problem, but we still have a lot further to go.

An interesting quote applies to both cybersecurity and the COVID-19 pandemic: “we have only two common modes – complacency and panic”.

As always, God bless you all and stay safe in both digital and physical worlds.