DAVEY WINDER
Davey explains the difference between 2FA and 2SV, and reveals why your wireless mouse – and the attached computer – could be at risk.
I start this month’s column with a question: which type of user authentication system is the most secure? Is it a hardware token, a code-generating application, a code delivered by text message, or a system using push notifications?
I guess that most readers will be in a similar situation to me, being faced with all sorts of additional verification being sought to access one service or another. My business bank makes me input a passkey on my smartphone, which then spits out a time-limited code; my personal bank relies on me recalling a lucky dip of characters from a passkey, having already entered a PIN; and PayPal likes to send me one of those time-restricted codes by way of an SMS text message. Then there are other online services that ask for a code generated from the authenticator app of my choice, or request that I insert a hardware token into a USB port to finalise my login.
My favourite example of additional verification requirements is the Dashlane password manager, which I currently have configured to require a long and complex passkey that’s committed to muscle memory – my fingers literally go into automatic typing mode when it’s needed. Oh, plus a time-limited code generated by an authenticator app. Following all that there’s a successful fingerprint scan required to gain access to my password vault.
You can use a resource such as Two Factor Auth (twofactorauth.org) to find out what methods any given online service offers.
The thing is, with so many methods of user verification out there, which should we be using when given the choice? The simple answer is any of them – because any is better than none. However, let’s assume we accept that as a given; so which of these additional layers of security is the most secure?
You’ll have noticed that most involve your phone in some way, which raises the question: does this introduce an unnecessary weak link into the secure login chain? Let me repeat: using any form of Two-Factor Authentication (2FA) or Two-Step Verification (2SV) makes the account you’re trying to access far more secure than just sticking with a bog-standard username and password.
You’ll have spotted that I’ve mentioned 2FA and 2SV; in most articles about login security you’ll see only the former. The truth is, there’s a difference between the two, and true 2FA is a more secure method of identifying the owner of an account than 2SV. However, once again, using either is better than using neither.
In brief, there are three kinds of authenticating factor: something you know, something you have, and something you are. The knowledge factor is most often your username and password combo used to initiate the login process. A common technical error is describing a “Time-based One-Time Password” (TOTP) generated by an app on your smartphone, or sent via SMS to it, as being a second factor, and systems that combine a login with such a TOTP as employing 2FA. This is actually 2SV, because the TOTP is also something you know, and malware is capable of intercepting it.
True 2FA requires distinct factors, so a good example would be a login followed by a fingerprint or hardware-token (such as a YubiKey) scan. It’s also true that two distinctly separate authenticating factors are more difficult to compromise than a single one with multiple verification requirements. However, in most scenarios, I feel it’s just a game of security semantics.
Introducing multiple acronyms into the “more secure” equation only serves to confuse the would-be user, rather than clarify distinctions that mean little to most people outside of the security industry. I’d much rather people were using the wrong acronym than not using any form of additional login requirement, and I’d rather that was easy to deploy and use than adding layers of complexity into the security concept.
Which brings me nicely back to “which is more secure?”. The answer, as you probably now realise, is the hardware token, for all those 2FA versus 2SV reasons. Yet hardware tokens also tend to be the most complex to deploy and expensive, be it for the small business or home user. SMS text messaging of codes would be last on my list since it demands a phone signal, and for the determined hacker, there are numerous methods of intercepting these codes.
New to the scene last year was Google prompt. This makes the process of authentication as simple as a popup on your smartphone asking if you just tried to access something, and giving you Yes/No buttons to press. It’s easy to configure (see right), but there remains the potential for malware to intercept the notifications.
The same is true of authentication code-generating apps, which have been shown to be vulnerable to privilege escalation exploits, plus the encryption used to seed the codes could be broken by a determined attacker.
I’m happiest with services that let me pick how the authentication code is generated, so I can use the authenticator app of my choosing and then ensure that I make accessing it as difficult as possible for anyone who isn’t me. As it turns out, that’s pretty easy as my phone is encrypted, the lockscreen requires a fingerprint to dismiss, and my authenticator app also requires a fingerprint to open it. None of this would prevent a determined actor in possession of my smartphone from being able to circumvent my security; but if such a clued-up actor had possession of my phone, then it’s game-over already.
MouseJacking!
While the rampage of ransomware was the attention-grabbing story of last year, the subtitle for 2016 consisted of just three words: Internet of Things.
Four words if you precede “things” with the almost obligatory “insecure”. Many of the headlines concerning IoT technology focused on the fantastical. “Pisspoor IoT security means it’d be really easy to bump off pensioners” and “Can your kettle take down the internet?” were both real 2016 stories.
The first, in The Register – which had a subtitle of “Killing pensioners, two keyboard taps at a time” – was a light-hearted look at DoS attacks against internet-connected central-heating thermostats. The second, in the Daily Mail, warned of the dangers of not changing default passwords in devices such as kettles. Interestingly, it spoke about these everyday devices being overlooked when it comes to the basic security smarts people have begun to understand with regards to laptops and smartphones.
That connected devices are of interest to hackers – and connectivity is all we’re really talking about when we drop the IoT bomb into conversation – should come as no surprise. The devices themselves are usually a conduit to the real target, rather than being the target themselves. And so it’s another set of everyday objects that have, perhaps understandably, skipped the attention of all but the most security-minded on both sides of the legal fence: mice and keyboards.
As unlikely as it may seem, my favourite bit of technical hacking from 2016 was a class of vulnerabilities dubbed MouseJacking. A MouseJack is, per the Bastille Threat Research Team (BTRT) that discovered it, an exploit that can inject unencrypted keystrokes into a target machine from up to 225m away using cheap, non-Bluetooth radio transceiver USB dongles. Dismiss it as being just another theoretical code-injection exploit if you like, but for me this has a touch of evil genius about it. Sure, it’s a bit low-rent when compared to the state-sponsored style of a Stuxnet attack, with multiple high-value zero-day exploits sacrificed to the gods of espionage; ten quid’s worth of dongle and a total of nine vulnerabilities from brands that really ought to have known better is all it took.
Wireless keyboard manufacturers have pretty much wised up to the eavesdropping threat, and so keystrokes are sent encrypted when no wires are involved these days. A couple of years ago, writing on these very pages, I mentioned how weak wireless keyboard signals tend to be, and thus the risk of remote capture was, well, remote. I also mentioned how signal encryption was a key part of the wireless keyboard spec (every pun intended), and that even specialist devices such as the KeySweeper – which captured and decrypted keystrokes – weren’t a threat to modern kit. Microsoft started using AES encryption after 2011, and KeySweeper couldn’t hack these.

So what’s changed with MouseJack? The clue is in the name. It makes the most of proprietary wireless protocols operating in the 2.4GHz “Industrial, Scientific and Medical” (ISM) band, which don’t bother themselves with all that encryption nonsense. Instead, they happily use unencrypted communications between the mouse and the USB dongle attached to the computer. Non-Bluetooth wireless mice from Amazon (Basics), Dell, Gigabyte, HP, Lenovo, Logitech and Microsoft were all found to be vulnerable to an attacker spoofing mouse movements and generating keystrokes. All the while, the target dongle thinks it’s communicating with the wireless mouse or keyboard, but is actually getting the code the malicious actor is sending from a replacement dongle costing around £25.
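To see why an unencrypted protocol is so easily spoofed, here’s a deliberately simplified toy model in Python of a vulnerable receiver. To be clear, this is not Bastille’s code or real transceiver firmware – the classes, the five-byte address and the payload are all invented for illustration – but it captures the core flaw: the dongle trusts any packet bearing the paired address, which is broadcast in the clear and can be sniffed.

```python
from dataclasses import dataclass


@dataclass
class Packet:
    address: bytes   # pipe address, transmitted in the clear over the air
    payload: bytes   # HID report; unencrypted on many non-Bluetooth mice


class VulnerableDongle:
    """Toy model of a receiver with no encryption and no message authentication."""

    def __init__(self, paired_address: bytes):
        self.paired_address = paired_address
        self.typed: list[str] = []

    def on_packet(self, pkt: Packet) -> None:
        # The only "authentication" is an address match - and the address
        # is observable by anyone with a compatible radio in range.
        if pkt.address == self.paired_address:
            self.typed.append(pkt.payload.decode())


# The mouse's address leaks over the air, so an attacker learns it...
dongle = VulnerableDongle(paired_address=b"\xa1\xb2\xc3\xd4\xe5")
# ...and injects keystrokes as though they came from the real peripheral.
dongle.on_packet(Packet(b"\xa1\xb2\xc3\xd4\xe5", b"malicious command"))
print(dongle.typed)  # the spoofed input was accepted as genuinely typed
```

A real fix would require the dongle to verify a cryptographic MAC over each packet using a key established at pairing – which is exactly the step these proprietary protocols skipped.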
The attack mode could be used to send malware to the target machine, or extract credentials data from it. That Bastille managed to link a series of vulnerabilities to circumvent the keyboard encryption is impressive; that it even managed to work with dongles that required encrypted comms (by targeting the mouse instead) even more so. The clever bit is that the exploit can spoof a wireless mouse that tricks the target PC into thinking it’s talking to a keyboard.
This spoofing element could become even more interesting over time, and as IoT grows ever bigger. Why is that, I hear you ask? Well, if it can trick a computer into thinking a spoofed mouse is a real keyboard, then what other cross-device treachery can RF-based protocol hacking come up with?
In typical IoT fashion, most of the vulnerable devices will need to be binned if security matters to the users. Firmware patches are thin on the ground in IoT territory, and that’s also the case with wireless dongles and RF mice; the transceiver chips are designed to be programmable only once and so can’t be updated. A decent list of at-risk devices can be found at pcpro.link/271bastille, along with vendor responses and links to firmware patches where available.
Back in 2015 when I wrote about KeySweeper, I concluded that there were too many caveats to make it a real-world threat: distance, model of keyboard being used, the Heath-Robinson home-built hacking device requirement, the fact that Bluetooth mitigated the risk – albeit at the cost of introducing some of its own. The best risk mitigation back then was to suggest not using a wireless keyboard unless you had no other choice. Wired keyboards tended to be more reliable and were a lot cheaper.
That’s not true anymore, and MouseJack plugs the real-world gap in terms of distance, cost of the attack dongle, and choice of likely target devices. My advice about not going wireless isn’t going to stick with many folk now, but I’d suggest you stick with Bluetooth if you’re going to snip the wires from your working life.