FOOLING FINGERPRINTS
Print recognition proves to be fallible
A RESEARCH TEAM from New York University and Michigan State University has devised a method to break into fingerprint recognition systems. The remarkable, and alarming, thing is that the feat requires no access to an original fingerprint and no hacking of code or hardware: the attack simply presents a fake fingerprint, and the system grants access.
These “DeepMasterPrints” are created using machine learning. A generative adversarial network was trained on thousands of real prints until it could synthesize realistic human fingerprints from scratch. The synthetic prints combine the most common traits found across many fingerprints, producing a generic archetype that works like a skeleton key.
Biometric security is convenient, and we’ve been told that our fingerprints and irises are unique. In practice, though, the systems work on only a subset of the possible data: a sensor captures a partial impression, and the matcher requires only that enough data points agree with a stored template. Close enough is good enough. A fake can therefore pass inspection, as long as enough of its features match some stored sample somewhere.
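To see why “close enough is good enough” opens the door to a skeleton key, here is a toy sketch, not the researchers’ actual matcher. It assumes hypothetical fingerprints reduced to points on a small grid, with some grid cells (a stand-in for common fingerprint features) appearing far more often than others. A probe built purely from those common cells then clears the partial-match threshold against many stored templates.

```python
import random

random.seed(0)

GRID = 20          # toy fingerprint: feature points live on a 20x20 grid
POINTS = 12        # feature points per stored template
THRESHOLD = 0.4    # fraction of a template's points a probe must match

# Hypothetical population bias: cells in this region are far more common.
common = [(r, c) for r in range(5, 12) for c in range(5, 12)]

def random_print():
    """A stored template: a few common-region points plus random ones."""
    pts = set(random.sample(common, 4))
    while len(pts) < POINTS:
        pts.add((random.randrange(GRID), random.randrange(GRID)))
    return pts

def matches(probe, template, threshold=THRESHOLD):
    """Partial matching: accept if enough template points appear in the probe."""
    hits = len(probe & template)
    return hits / len(template) >= threshold

templates = [random_print() for _ in range(200)]

# "Master print": nothing but the most common cells - a generic archetype.
master = set(common)

accept_rate = sum(matches(master, t) for t in templates) / len(templates)
print(f"master print accepted by {accept_rate:.0%} of enrolled templates")
```

A genuine print still matches its own template perfectly, but the point of the sketch is that a probe stitched together from statistically common features beats the threshold against a sizeable fraction of templates it has never seen. Raising the threshold, or matching more points, shrinks that fraction, which is the trade-off device makers tune.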
The DeepMasterPrint’s success rate depends on the security level of the device, ranging from under 1 percent on highly secure settings to a worrying 76 percent on the lowest. On commercial smartphones, it works on around one in five attempts. The work is still at the proof-of-concept stage (the researchers didn’t actually break into any devices with conductive printouts), but it has raised a few eyebrows in an industry that now has another problem to worry about. Tweaking the matching algorithms and increasing the resolution and size of the readers will help, but it’s worth remembering that a decent password remains the best way to secure your device.