The Guardian (USA)

Hey Siri! Stop recording and sharing my private conversations

- Roisin Kiberd

A news story in the Guardian last week confirmed what many Apple users likely already suspected: Siri, Apple's voice assistant, has the power to record private conversations, and these audio clips aren't always just stored on a server – a number of samples are passed along to third-party human contractors, who are paid to listen to them.

This isn't as simple as a voice assistant "spying" on its users: the report revealed that Apple's contractors listen to the clips as part of the company's quality control measures, working out whether Siri was triggered accidentally or on purpose, and whether its response was entirely correct. This practice is not explicit in Apple's customer-facing privacy documentation, and due to errors in triggering Siri – "the sound of a zip", the whistleblower said, can often set Siri off – contractors end up overhearing private conversations including drug deals, business meetings, sex and private medical appointments.

In one way this news is far from shocking – while Apple trades on the assertion that high-level security comes included with its products' high prices, it has always been clear that by using Siri, or any voice assistant, the user must allow their phone to record and analyse their voice. It's also worth comparing Apple's approach with that of similar products. With Google Assistant, the software powering Google Home, audio is recorded and stored, but you can access your history and delete past recordings, and there's an option to automatically delete your data every couple of months. Amazon's Alexa stores queries until the user manually deletes them, and both Amazon and Google employ contractors to review a small number of their recordings (Google has stated in interviews that it "generally" provides third-party contractors with a text transcript rather than the original voice recording). Microsoft's Cortana collects voice data in order to improve its service, while Samsung's Bixby does the same, involving a third-party service for speech-to-text conversion.

Voice assistants are recording and listening to their users – what's new? But there's a subtler truth here worth considering: AI-powered "intelligent assistants", lauded as efficient and effortless to use, are failing at answering even basic questions, and often activate accidentally at inappropriate times (well-known incidents include Siri interrupting the UK defence secretary during a speech on Syria in the House of Commons, gatecrashing a White House press briefing and contributing to a TV news broadcast). These products aren't even 100% automated – behind the gleaming, smooth-voiced interfaces are underpaid, overworked and resolutely human contractors. These are people who are precariously employed, often denied full employment rights and with little allegiance to the companies they work for, but hired to fill in the gaps in artificial intelligence. This is by far the most dystopian element of the story: in exchange for giving away our privacy to tech multinationals, we get a service that runs on the labour of stressed-out humans hidden behind the machines. Technology is produced no differently from fast fashion or fast food – much of the heavy lifting is done in "sweatshops" out of sight, staffed by people.

Voice is hailed as the future of computing, including voice assistants, voice-recognition technology, ambient computing and the widespread use of smart speakers in the home. But voice is also the future of surveillance: earlier this year the Intercept revealed a nationwide database of voice samples collected in US prisons, while another story detailed the National Security Agency's voice-recognition systems, including a project called Voice RT ("Voice in Real Time") that aimed to identify the "voiceprint" of any living person. Human rights activists have criticised the establishment of a voice biometric database in China, while the invention of "deep voice" software, a deepfake for voices, augurs ill for the future of voice-based privacy.

We live in a time of constant technological change and it's likely that soon these services really will improve, and be fully automated. We can also take some solace in the fact that the Siri voice clips are at least anonymised, and generally last no more than several seconds. But this leak reveals that the qualities Apple uses to differentiate itself from its competitors are little more than hollow marketing, and that as Apple's software is proprietary we have no choice but to either engage with it on its own terms, or avoid using its platform entirely.

We're told that with AI, the more we allow it to watch us, the more sophisticated the service will become, but it's worth remembering that the first duty of the companies developing it is to their shareholders. At the moment, we tolerate limitless surveillance in exchange for an extremely limited service. While there's still time – if there's still time – we need to consider what we gain and what we lose when we live with machines that mine us for information.

• Roisin Kiberd writes about technology, culture and the intersection between the two

'Voice is hailed as the future of computing, including voice assistants and the use of smart speakers in the home. But voice is also the future of surveillance.' Photograph: Alamy
