Is your bank storing passwords in plain text? Davey salves the concerns of worried readers, and explains how spear-phishers are exploiting Word.
Passwords are the perennial topic of the security consultant, continuing to provide food for thought and money in the bank year after year. Even with the inevitable progress of biometric alternatives, passwords and PINs form the basis of almost every login and authentication system out there.
Keeper Security is the latest vendor to send me password-related research data. Having analysed some ten million passwords from 2016’s biggest data breaches, it reckons that 17% of folk still use 123456 as a password. That’s bad, but not as bad as the sites and services that let them.
I continue to despair when faced with an outfit that not only restricts me to a string of no more than eight characters, but also dictates the characters I can’t use (no special characters; alphanumeric only). And if you think I exaggerate, hear this: seven of the top 15 passwords on the most-breached list were of six characters or fewer. This is evidence that business still doesn’t truly understand the need for better password construction.
There’s also a need for better password hygiene: not storing passwords in plain text, for example. I mention this because a number of PC Pro readers have been in touch to ask if their banks are doing just this. You wouldn’t think so, would you? But, the reasoning goes, if I’m being asked to enter three specific characters from a much longer password as part of the online banking login process, then surely the longer password must be stored in plain text for this to work?
Needless to say, the banks aren’t going to reveal their security methodology in any great detail, but it’s a genuine concern and one I can address to some degree.
Banks will argue that this is the second stage of a user authentication process, the first being entering your login username and password. The three-character selection is from a second password or “memorable” word. So any attacker must have already compromised the first step to get to the second. Having knowledge of that first pairing doesn’t imply knowledge of the second, and keyloggers/malware are unlikely to have access to both sets of data. None of which answers the plain text storage question, so I’ll do that now: I bloody well hope not. So how can they match the three characters, then?
There are two main options as I see it: reversible encryption with a hardware security module (HSM), or pre-stored hashes for all permutations. I’d bank (pardon the pun) on the former, even though some would argue that banks might just as well use plain text: a malicious actor with control of the systems could recover it easily enough.
The thing is, with suitably strong security in place for key management, this is an unlikely scenario. The password itself is encrypted using something like AES and stored in the database. The user enters the three requested characters and both those characters and the encrypted password are fed into the HSM, which does the decryption and confirms – or denies – whether the characters match. The password would only be in plain text within the secure HSM environment.
As I say, I don’t know for sure that this is how it’s done – some sources I’ve spoken to suggest that reversible encryption without the HSM element is used by some banks, and that’s much less secure. What I know for sure is that I’d much rather banks dropped this legacy option and instead rolled out more secure two-factor authentication systems using one-time codes...
The human factor, again
Regular readers will know that I’m a great believer in user awareness when it comes to matters of security. I don’t think it’s fair to blame users for breaches, but it’s important that every business understands that they remain a major weak spot in any security strategy unless threat awareness is part of the plan.
This can mean everything from ensuring that employees understand they’re seen as part of the solution, rather than part of the problem, through to engaging them in recognising what a threat actually looks like. With social engineering right at the top of the list for both attack intelligence and execution, this is perhaps the most important area for user awareness to focus upon right now.
One of the pioneers in this sector is PhishMe, which rather annoyingly refers to itself as a “provider of human phishing defence solutions”. Nonetheless, it does a good job of making awareness training around the phishing threat both engaging and effective. As I write this month’s column, PhishMe has just published the results of its UK Phishing Response Trends Report, covering phishing response strategies across a variety of business sectors.
Some of the numbers revealed in this report were surprising. For example, I’m gobsmacked that 75% of the IT professionals surveyed said that they have dealt with security incidents originating from a phishing email. Not because the figure is so high, but rather that it’s so low. In my experience, the percentage would be much nearer the full 100.
Of course, it rather depends upon how you define “incident”, but if you include employees (or, indeed, business owners) clicking on a link they shouldn’t have, or opening an unsolicited attachment, then I’d be amazed if any business could genuinely claim that this has never happened. It may not have led to a breach, but that’s a different thing altogether. More often than not, a phishing email – just like social media quiz app scams – isn’t the fuse that lights a breach explosion, but rather the first step in gathering intelligence that can lead to such a scenario.
“Businesses need to understand that they remain a weak spot unless threat awareness is part of the agenda”
I was less surprised by other findings in the report, such as email-related threats being at the top of the worry list, and almost all those asked having multiple security solutions in place to deal with email-borne and phishing threats.
Nor was I shocked by the figure of “more than 500” suspicious emails being picked up every week. A quick check of my own spam bucket, which also drags phishing and malware stuff into its big black hole, reveals that I receive anywhere between 50 and 100 such messages every day.
Sure, my email addresses are pretty well publicised, plus I work within the cybersecurity space so can expect to be something of a magnet for such stuff. However, I’m also a very small business consisting of just myself and one employee. That employee, I should add, is pretty clued up about the dangers that can come our way; she is also my life partner and is subjected to my rather frontline security world view at work and home.
Rohyt Belani, co-founder and CEO of PhishMe, is absolutely right when he says that technology alone “will not solve the problem” and that “human-assisted technologies that stack up grey matter against hackers” are vital tools in our security strategy arsenals. “Businesses need conditioned, vigilant employees to recognise email-related threats and report them in a timely manner,” Rohyt insists, adding that his goal through PhishMe is to “rapidly triage this barrage of employee-reported emails”.
Wombat Security Technologies also sent me its 2017 “Beyond the Phish” report. Data from more than 70 million users of Wombat products revealed just where user awareness is struggling to succeed. The use of shared login credentials, the implications of mobile device usage, and working safely outside of the office environment all featured strongly. All things that I regularly advise clients to engage with employees about.
Interestingly, this research suggests that the social-engineering message is getting across, with end-users performing well when it comes to recognising scam patterns in this category. The trouble is, the bad guys are on to this increasingly aware audience and are getting better at developing new methodologies to achieve their aims. New methodologies such as weaponising Word documents, for example. Hold on a moment, I hear every single one of you say right now: this is hardly new. And you’re right, if we’re simply talking about the old “open a Word doc and a malicious macro executes” trick. But that isn’t the methodology in the case of a threat intelligence gathering operation discovered by researchers at Kaspersky Lab.
This method uses a document with no VBA macros, no embedded Flash objects and no portable executable (PE) files either. In fact, nothing to flag to security systems that this is anything but a harmless document. That it’s anything but harmless is the problem, though; it’s a spear-phishing reconnaissance tool that uses an obscure feature in Word to literally get intelligence for the bad guys.
It does this by sending a GET request to an embedded link that returns relevant information about the device from which the document is opened. The researchers found that the Word documents examined were in OLE2 format, which enables the creator to embed objects that are linked to other resources within the same document, as well as to multiple external resources. The trick here is that the INCLUDEPICTURE field, combined with some Unicode sleight of hand, lets the threat actors trigger a GET request to obfuscated URLs embedded within the same document.
All of this is achieved with no user intervention beyond opening the document, which is where the awareness training stuff comes to the fore once more. Not opening unsolicited attachments should be a golden rule, no matter how convincing the spiel of the sender or, indeed, who the sender is. Accounts are compromised to validate the authority of the sender and leverage the trust factor. Social network postings are also used to create believable scenarios for document subject matters and their intended recipient. Spear phishing is a league apart from the blunderbuss blasting of ordinary phishing lures.
The information found to be sent to the threat actors in this particular spear-phishing campaign may not appear that valuable at first glance. It’s just data regarding the software being used by a device, after all. If you think that then you aren’t thinking like an attacker.
Data such as this is an enabler, which grants the threat actor an understanding of which exploit type will work best to successfully hack into the targeted device. Kaspersky Lab has already seen this precise method being used by the FreakyShelly phishing campaign, for example.
“Not opening unsolicited attachments should be a golden rule, no matter how convincing the spiel of the sender”
Is Kaspersky a threat to national security? (tl;dr nope)
On the subject of Kaspersky itself, you’ll probably have spotted that the US Senate has banned the company’s products from being used by the federal government. If I’m being honest, not much that happens in Trumpsville surprises me anymore. However, this move is a rather silly one in my never humble opinion.
The argument put forward by the Department of Homeland Security, and pushed with such patriotic fervour in certain political quarters, is that a Kaspersky product in a federal government environment has elevated privileges and could therefore access files that would be of use to the Kremlin. But it goes further in suggesting that Kaspersky as a company, and certain executives in that company, could somehow be pawns of Putin and a Russian government that might compel assistance in intelligence gathering.
You may think this makes sense – and I’ve received emails from PC Pro readers across the years asking me if it’s safe to install a Russian security product. My response to those readers is the same as to the Department of Homeland Security: at what point does the paranoia stop? What about all the hardware – everything from routers to smartphones – you use that’s made in China; are you concerned about that? Are you going