Uber's plans to identify drunk passengers could endanger women

The Guardian Australia – Technology – Emily Reynolds

It’s impossible to say exactly how much money Uber makes from drunk people, but if the number of bleary-eyed people wandering around on Friday and Saturday nights trying to find their summoned cars is anything to go by, it’s probably quite a lot. The company clearly knows its audience: this week, it applied for a patent for an AI that could spot drunk or high passengers simply by the way they walked, typed or held their phone.

According to the patent, the AI could measure a user’s walking speed, watch for unusual typos or sense whether a phone is swaying or being held at an unusual angle. This, it suggests, could “predict user state using machine learning” and recognise “uncharacteristic user states”. In short, it could tell whether you’re pissed or not.

The company almost certainly believes that this information would be used for good, and it’s undeniable that the option to avoid intoxicated passengers would come as a blessed relief to many drivers. For passengers, however, the AI might not be such good news, and allowing drivers to identify vulnerable, drunk and potentially lone passengers could in fact be incredibly dangerous.

For a company with such a poor track record when it comes to sexual assault, the prospect is worrying. A recent CNN investigation found that 103 Uber drivers in the US alone had been accused of “sexually assaulting or abusing” their passengers in the past four years. Thirty-one had been convicted of crimes ranging from “forcible touching and false imprisonment to rape”; further civil and criminal cases were also pending. In a recent statement, an Uber spokesperson said safety was the company’s top priority this year and cited recent protocol updates such as rerunning driver background checks on an annual basis. Reports have also revealed the company’s decision to require women to settle cases of assault and rape by drivers through arbitration rather than in public courts – protecting company interests and potentially silencing victims.

The figures speak volumes when it comes to Uber’s approach to sexual assault, and don’t provide much hope that its new technology would come with the kind of protections it would desperately need – especially when you consider the number of vulnerable (sometimes drunk) young women who rely on Uber to safely get them home.

This isn’t the only limitation of the technology, either. As the patent explains, sensors could be triggered when a phone shakes or sways, when typing is slow, or when it’s held at a particular angle. But by what metric is this being assessed? As many people with disabilities pointed out on Twitter, someone with cerebral palsy or Alzheimer’s is unlikely to type or hold a phone in the same way as the majority of users, and thus may trigger the AI’s intoxication sensor.

People with disabilities are already far less likely to be able to access Uber’s service: in fact, the company has been sued several times for discriminating against disabled passengers. In 2017, several lawsuits were filed in Washington DC, Mississippi and New York by groups angry at the company’s failure to provide them with adequate service. And in 2018, a group of activists mounted a lawsuit against the company for failing to provide enough wheelchair-accessible vehicles. The group described Uber’s “continued resistance to following the laws that keep transportation services open to everyone”, citing California’s anti-discrimination laws to argue that wheelchair users simply cannot rely on the company for transportation. For its part, Uber says it does a lot to support disabled passengers. Besides UberWAV, which includes vehicles with ramps or hydraulic lifts for wheelchairs, it also offers UberAssist, which lets passengers request a driver trained to accommodate disabled people.

With all of this in mind, it’s not hugely surprising that Uber might have once again designed a piece of technology that sees non-disabled passengers as default and disabled ones as an afterthought.

It’s hardly as if we can trust Uber with our personal data, either. In 2014, the company came under fire for the use of its so-called “God View” technology – a tool that allowed employees to track the journeys of individual users. Josh Mohrer, general manager of Uber in New York, was forced to apologise after he used the technology to track a BuzzFeed reporter who had interviewed him, with the company hastily scrambling to clarify that his access to the data was against the rules. But who can be sure that such rules won’t be broken again?

An Uber spokesperson has noted that the AI is merely a work in progress, and points out that many patented ideas never make it into production. For our sake – and for the sake of Uber’s incredibly overworked PR team – let’s hope that this one doesn’t get off the drawing board.

Emily Reynolds is a freelance journalist

Photograph: Alamy

‘Allowing drivers to identify vulnerable, drunk and potentially lone passengers could be incredibly dangerous.’
