PC Pro

DAVEY WINDER

Davey explains the difference between 2FA and 2SV, and reveals why your wireless mouse – and the attached computer – could be at risk

DAVEY WINDER is an award-winning journalist and consultant specialising in privacy and security issues. @happygeek

I start this month’s column with a question: which type of user authentication system is the most secure? Is it a hardware token, a code-generating application, a code delivered by text message, or a system using push notifications?

I guess that most readers will be in a similar situation to me, being faced with all sorts of additional verification being sought to access one service or another. My business bank makes me input a passkey on my smartphone, which then spits out a time-limited code; my personal bank relies on me recalling a lucky dip of characters from a passkey, having already entered a PIN; and PayPal likes to send me one of those time-restricted codes by way of an SMS text message. Then there are other online services that ask for a code generated from the authenticator app of my choice, or request that I insert a hardware token into a USB port to finalise my login.

My favourite example of additional verification requirements is the Dashlane password manager, which I currently have configured to require a long and complex passkey that’s committed to muscle memory – my fingers literally go into automatic typing mode when it’s needed. Oh, plus a time-limited code generated by an authenticator app. Following all that, there’s a successful fingerprint scan required to gain access to my password vault.

You can use a resource such as Two Factor Auth ( twofactorauth.org) to find out what methods any given online service offers.

The thing is, with so many methods of user verification out there, which should we be using when given the choice? The simple answer is any of them – because any is better than none. But accepting that as a given, which of these additional layers of security is the most secure?

You’ll have noticed that most involve your phone in some way, which raises the question: does this introduce an unnecessary weak link into the secure login chain? Let me repeat: using any form of Two-Factor Authentication (2FA) or Two-Step Verification (2SV) makes the account you’re trying to access far more secure than just sticking with a bog-standard username and password.

You’ll have spotted that I’ve mentioned 2FA and 2SV; in most articles about login security you’ll see only the former. The truth is, there’s a difference between the two, and true 2FA is a more secure method of identifying the owner of an account than 2SV. However, once again, using either is better than using neither.

In brief, there are three kinds of authenticating factor: something you know, something you have, and something you are. The knowledge factor is most often your username and password combo used to initiate the login process. A common technical error is describing a “Time-based One-Time Password” (TOTP) generated by an app on your smartphone, or sent via SMS to it, as being a second factor, and systems that combine a login with such a TOTP as employing 2FA. This is actually 2SV, because the TOTP is also something you know, and malware is capable of intercepting it.
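For the curious, here’s a minimal sketch of how those authenticator apps typically derive a TOTP under the RFC 6238 scheme – the secret and parameters below are illustrative test values, not those of any service mentioned in this column:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, period=30, digits=6, now=None):
    """Derive an RFC 6238-style time-based one-time password.

    The current time is divided into 30-second steps to form a counter,
    which is HMAC-SHA1'd with the shared secret; a few bytes of the
    digest are then truncated down to a short decimal code.
    """
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", digits=8, now=59))  # prints "94287082"
```

Note that both app and server can compute the same code independently because they share the secret – which is exactly why the code counts as “something you know” rather than a genuinely separate factor.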

True 2FA requires distinct factors, so a good example would be a login followed by a fingerprint scan or a hardware token (such as a YubiKey). It’s also true that two distinctly separate authenticating factors are more difficult to compromise than a single one with multiple verification requirements. However, in most scenarios, I feel it’s just a game of security semantics.

Introducing multiple acronyms into the “more secure” equation only serves to confuse the would-be user, rather than clarify distinctions that mean little to most people outside of the security industry. I’d much rather people were using the wrong acronym than not using any form of additional login requirement, and I’d rather that was easy to deploy and use than adding layers of complexity into the security concept.

Which brings me nicely back to “which is more secure?”. The answer, as you probably now realise, is the hardware token, for all those 2FA versus 2SV reasons. Yet hardware tokens also tend to be the most complex and expensive to deploy, be it for the small business or home user. SMS text messaging of codes would be last on my list, since it demands a phone signal and, for the determined hacker, there are numerous methods of intercepting these codes.

New to the scene last year was Google prompt. This makes the process of authentication as simple as a popup on your smartphone asking if you just tried to access something, and giving you Yes/No buttons to press. It’s easy to configure (see right), but there remains the potential for malware to intercept the notifications.

The same is true of authentication code-generating apps, which have been shown to be vulnerable to privilege escalation exploits; plus, the encryption used to seed the codes could be broken by a determined attacker.

I’m happiest with services that let me pick how the authentication code is generated, so I can use the authenticator app of my choosing and then ensure that I make accessing it as difficult as possible for anyone who isn’t me. As it turns out, that’s pretty easy: my phone is encrypted, the lockscreen requires a fingerprint to dismiss, and my authenticator app also requires a fingerprint to open. None of this would prevent a determined actor in possession of my smartphone from circumventing my security; but if such a clued-up actor had possession of my phone, then it’s game over already.

MouseJacking!

While the rampage of ransomware was the attention-grabbing story of last year, the subtitle for 2016 comprised just three words: Internet of Things. Four words if you precede “things” with the almost obligatory “insecure”. Many of the headlines concerning IoT technology focused on the fantastical: “Pisspoor IoT security means it’d be really easy to bump off pensioners” and “Can your kettle take down the internet?” were both real 2016 stories.

The first, in The Register – which had a subtitle of “Killing pensioners, two keyboard taps at a time” – was a light-hearted look at DoS attacks against internet-connected central-heating thermostats. The second, in the Daily Mail, warned of the dangers of not changing default passwords in devices such as kettles. Interestingly, it spoke about these everyday devices being overlooked when it comes to the basic security smarts people have begun to understand with regards to laptops and smartphones.

That connected devices are of interest to hackers – and connectivity is all we’re really talking about when we drop the IoT bomb into conversation – should come as no surprise. The devices themselves are usually a conduit to the real target, rather than being the target themselves. And so it’s another set of everyday objects that have, perhaps understandably, escaped the attention of all but the most security-minded on both sides of the legal fence: mice and keyboards.

As unlikely as it may seem, my favourite bit of technical hacking from 2016 was a class of vulnerabilities dubbed MouseJacking. A MouseJack is, per the Bastille Threat Research Team (BTRT) that discovered it, an exploit that can inject unencrypted keystrokes into a target machine from up to 225m away using cheap, non-Bluetooth radio transceiver USB dongles. Dismiss it as being just another theoretical code-injection exploit if you like, but for me this has a touch of evil genius about it. Sure, it’s a bit low-rent when compared to the state-sponsored style of a Stuxnet attack, with multiple high-value zero-day exploits sacrificed to the gods of espionage. But ten quid’s worth of dongle and a total of nine vulnerabilities from brands that really ought to have known better is all it took.

Wireless keyboard manufacturers have pretty much wised up to the eavesdropping threat, and so keystrokes are sent encrypted when no wires are involved these days. A couple of years ago, writing on these very pages, I mentioned how weak wireless keyboard signals tend to be, and thus the risk of remote capture was, well, remote. I also mentioned how signal encryption was a key part of the wireless keyboard spec (every pun intended), and even specialist devices such as the KeySweeper – which captured and decrypted keystrokes – weren’t a threat to modern kit. Microsoft started using AES encryption after 2011, and KeySweeper couldn’t hack these.

So what’s changed with MouseJack? The clue is in the name. It makes the most of proprietary wireless protocols operating in the 2.4GHz “Industrial, Scientific and Medical” (ISM) band, which don’t bother themselves with all that encryption nonsense. Instead, they happily use unencrypted communications between the mouse and the USB dongle attached to the computer. Non-Bluetooth wireless mice from Amazon (Basics), Dell, Gigabyte, HP, Lenovo, Logitech and Microsoft were all found to be vulnerable to an attacker spoofing mouse movements and generating keystrokes. All the time, the target dongle thinks it’s communicating with the wireless mouse or keyboard, but is instead getting the code the malicious actor is sending from a replacement dongle costing around £25.
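To see why the lack of encryption and authentication matters, here’s a toy simulation – not the actual nRF24 radio protocol, just an illustration of the principle. A naive dongle accepts any frame bearing its address, so an attacker who has sniffed that address can inject “keystrokes”; a design that authenticates frames with a shared key rejects the forgery. All names and values here are invented for the example:

```python
import hashlib
import hmac

DONGLE_ADDR = b"\xa1\xb2\xc3\xd4\xe5"  # hypothetical over-the-air pairing address
SHARED_KEY = b"secret-pairing-key"      # known only to the mouse and its dongle

def make_frame(addr, payload, key=None):
    """Build a frame; an authenticated design appends a truncated HMAC tag."""
    frame = addr + payload
    if key is not None:
        frame += hmac.new(key, frame, hashlib.sha256).digest()[:8]
    return frame

def naive_dongle_accepts(frame):
    """Unauthenticated dongle: matching the address is the only check."""
    return frame.startswith(DONGLE_ADDR)

def authed_dongle_accepts(frame):
    """Authenticated dongle: verify the tag before acting on the payload."""
    body, tag = frame[:-8], frame[-8:]
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]
    return body.startswith(DONGLE_ADDR) and hmac.compare_digest(tag, expected)

# An attacker who has sniffed the address forges a keystroke frame...
forged = make_frame(DONGLE_ADDR, b"KEYSTROKES:run evil.exe")
print(naive_dongle_accepts(forged))   # True - injection succeeds
print(authed_dongle_accepts(forged))  # False - forgery rejected
```

The real MouseJack attacks are messier than this sketch, of course – some exploit quirks in how dongles parse frame types rather than a simple missing check – but the underlying failure is the same: the dongle has no cryptographic way to tell its own mouse from an impostor.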

The attack mode could be used to send malware to the target machine, or extract credentials data from it. That Bastille managed to link a series of vulnerabilities to circumvent the keyboard encryption is impressive; that it even managed to work with dongles that required encrypted comms (by targeting the mouse instead) even more so. The clever bit is that the exploit can spoof a wireless mouse that tricks the target PC into thinking it’s talking to a keyboard.

This spoofing element could become even more interestin­g over time, and as IoT grows ever bigger. Why is that, I hear you ask? Well, if it can trick a computer into thinking a spoofed mouse is a real keyboard, then what other cross-device treachery can RF-based protocol hacking come up with?

In typical IoT fashion, most of the vulnerable devices will need to be binned if security matters to the users. Firmware patches are thin on the ground in IoT territory, and that’s also the case with wireless dongles and RF mice; the transceiver chips are designed to be programmable only once and so can’t be updated. A decent list of at-risk devices can be found at pcpro.link/271bastille, along with vendor responses and links to firmware patches where available.

Back in 2015 when I wrote about KeySweeper, I concluded that there were too many caveats to make it a real-world threat: distance, model of keyboard being used, the Heath Robinson home-built hacking device requirement, the fact that Bluetooth mitigated the risk – albeit at the cost of introducing some of its own. The best risk mitigation back then was to suggest not using a wireless keyboard unless you had no other choice. Wired keyboards tended to be more reliable and were a lot cheaper.

That’s not true anymore, and MouseJack plugs the real-world gap in terms of distance, cost of the attack dongle, and choice of likely target devices. My advice about not going wireless isn’t going to stick with many folk now, but I’d suggest you stick with Bluetooth if you’re going to snip the wires from your working life.

BELOW Google certainly doesn’t skimp on user verification options

BELOW Many makes of wireless mice are still vulnerable, and it’s all due to the RF dongle
