AI being created by ‘young men with no problem-solving ability’
Big tech firms are entrusting some of the most profound problems in history “to a bunch of very young men who have never solved a problem in their lives”, a leading Silicon Valley technologist has said.
Vivienne Ming will join a panel of leading thinkers on artificial intelligence in London next week for a Royal Society event, chaired by physicist and TV presenter Brian Cox. It will take questions on the impact AI will have on jobs, the risks it poses to society, and its ability to make moral and ethical decisions.
While Ming believes AI will become an ever more powerful tool, she thinks there is a problem with the training that computer engineers receive and their uncritical faith in AI. “These are very smart men. They are not malicious. But we are asking them who should I hire, how should we deal with mental illness, who should go to prison and for how long, and they have no idea how to solve these problems,” she said.
“AI is a genuinely powerful tool for solving problems, but if you can’t work out the solution to a problem yourself, an AI will not work it out for you.”
Amazon is a case in point. The tech firm once tried to recruit her as a chief scientist, telling her it would be her job to make employees’ lives better. “It became clear that Jeff Bezos’s idea of better was very different to mine,” she says. Amazon’s invention of a wristband that buzzed when factory staff reached for the wrong package did not meet with her approval.
Ming heard about the firm’s hopes to build an algorithm that could automate the hiring process, an idea she says she criticised at the time. In October, news broke that the firm had scrapped the project because it was biased against women.
The algorithm scoured CVs to rank them. Because it was trained on Amazon's own historical hiring data, it learned that male applicants had fared best in the workplace. It penalised CVs containing the word "women's", as in "women's rowing champion", and downgraded graduates of women's colleges. "If a company doesn't know how to solve the problem of bias, an AI will not solve the problem for them," said Ming.
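The mechanism is easy to reproduce in miniature. The sketch below is hypothetical and uses entirely synthetic data, not Amazon's system: a toy logistic-regression ranker is trained on simulated past hiring decisions in which CVs containing the token "womens" were systematically less likely to advance, and the model dutifully learns a negative weight for that token.

```python
# Hypothetical illustration of a ranking model absorbing historical bias.
# Synthetic data only; this is not Amazon's actual system or dataset.
import math
import random

random.seed(0)

VOCAB = ["python", "leadership", "womens", "rowing"]

def make_cv():
    # Each CV is a bag of binary features over a toy vocabulary.
    return {w: random.random() < 0.5 for w in VOCAB}

def historical_label(cv):
    # Simulated past decisions: skills help, but CVs mentioning
    # "womens" were systematically less likely to be advanced.
    score = 2.0 * cv["python"] + 1.0 * cv["leadership"] - 3.0 * cv["womens"]
    return 1 if score + random.gauss(0, 0.5) > 1.0 else 0

data = [(cv, historical_label(cv)) for cv in (make_cv() for _ in range(2000))]

# Train a logistic-regression ranker on the biased history.
weights = {w: 0.0 for w in VOCAB}
bias, lr = 0.0, 0.1
for _ in range(200):
    for cv, y in data:
        z = bias + sum(weights[w] for w in VOCAB if cv[w])
        p = 1.0 / (1.0 + math.exp(-z))
        grad = p - y
        bias -= lr * grad
        for w in VOCAB:
            if cv[w]:
                weights[w] -= lr * grad

# The model reproduces the bias it was shown: "womens" ends up with a
# strongly negative weight, so exactly those CVs are penalised.
print({w: round(v, 2) for w, v in weights.items()})
```

No fairness fix is applied here, which is the point Ming makes: the model faithfully optimises against its training data, so if the data encode discrimination, so does the ranker.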
Life experience is perhaps why Ming’s take is different. Vivienne Ming was once Evan Smith, a student at the University of California, who dropped out, became homeless, and then clawed his way back to glittering success. Battling demons he struggled to understand, he went to Pittsburgh to study neuroscience. There he met his wife, Norma Chang, who stuck with him when he told her of his wish to be a woman. The couple have two children.
Ming says she turned down offers from Uber and Netflix, taking a job at a startup called Gild. The firm found that traits such as resilience and what Ming calls a “growth mindset” – the flexibility to learn from one’s failures – predicted better software engineers, as rated by human coders.
So the firm built small AIs to crawl blogs and social media feeds for the best candidates, whether they were job hunting or not. Sometimes, a tweet carried huge weight. One read: "Celery is awesome." Out of context it sounds "like someone who is wrong about a gross food," says Ming. But "Celery" was a reference to a distributed task-queue library for the Python programming language. The tweet, and the passion it contained, was a "huge predictor" of the candidate's coding skills.
In Ming's view, PhD graduates join tech firms without the faintest idea of how to solve real-world problems. A thorough grounding in ethics will help, but she believes it takes more than learning the rules from a book. "Ethics is like resilience, you get good at it by failing."
She said: “I think it’s incredibly valuable for people who have suffered in some way to have a voice in this. If you come from a background like mine, you are sceptical. You realise technology increases inequality and it only gets better if we take active steps to avoid that.”