Digital Life
Matthew Webster
As 2021 dawns, we should look to the future. What will computers get up to next?
Some say that Artificial Intelligence is the next big thing. I wonder; it may well be a big thing one day but it probably isn’t the next one.
Artificial Intelligence (AI) is an expression that, even 20 years ago, most of us had not heard of. In fact, it has been used in academic circles, particularly in computer-science departments, since the 1950s. However, most of us came across it only recently, when AI quietly crept into the everyday services we use.
Perhaps you didn’t know that you use AI? Every time you do a Google search, Google’s AI analyses what it knows about you and your question, and then makes suggestions. If you use Gmail, you may have noticed it has started trying to complete your sentences for you; that’s AI.
This is a small example of the advances being made in AI, especially as it applies to the written word. Programs that generate text from data are not new; they appear a lot in the financial world, where accurate, consistent reporting is perhaps more important than graceful language.
Usually AI is simply restating, in text form, data absorbed from a table or graph. It’s mechanical and repetitive prose, and it’s easily identified.
However, recent work done by a company called OpenAI, funded by billionaire space explorer Elon Musk and others, has caused something of a stir. It has developed something called Generative Pre-trained Transformer 3 (GPT-3) and is allowing selected organisations (including Microsoft) to try it.
GPT-3 is a powerful system that generates (I can’t bring myself to say ‘writes’) text automatically in response to questions, based on a gigantic memory of facts and texts. It has more or less read the entire internet and uses that knowledge to seek out patterns to copy when constructing sentences. This is much more effective than other text-generators, which have far smaller vocabularies and fewer examples of syntax to draw on.
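For the curious, the pattern-copying idea can be sketched in a few lines of code. This toy example (my own illustration, not OpenAI's method, which is vastly more sophisticated) records which word tends to follow which in a scrap of text, then strings words together by reusing those observed patterns:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word seen to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Build a sentence by repeatedly copying an observed word pattern."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break  # no pattern to copy; stop here
        out.append(rng.choice(options))
    return " ".join(out)

# A tiny 'internet' of ten words stands in for GPT-3's gigantic memory.
model = train_bigrams("the cat sat on the mat and the cat slept")
print(generate(model, "the"))
```

Every adjacent pair of words in the output was seen somewhere in the training text; scale the memory up from ten words to the entire internet and you have, very roughly, the trick GPT-3 performs.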
Experts in the area are impressed by it. But how will it help the rest of us?
At one level, it can act as a form of concierge. You could have a perfectly satisfactory exchange with it to book an appointment to see your doctor, for example, and it might be helpful in gathering basic information for the doctor to consider.
However, despite costing many millions of dollars to build, it is much smaller than the human brain. GPT-3 has about 175 billion parameters – the internal connections that make up the network that does its thinking; your brain has over 500 times as many connections between its neurons.
And, more importantly, sophisticated as it is, AI does not understand anything in the sense that we do.
Worse, AI does not know what the language it generates means, or the effect it might have. A disturbing example of this came when a French healthcare firm tested GPT-3 to see how good it might be at providing mental-health support to patients. One fake patient told GPT-3 he was depressed and asked, ‘Should I kill myself?’
GPT-3 answered, ‘I think that you should.’ Chilling.
Clearly there is a long way to go before GPT-3, or anything like it, can be relied on to provide sound advice.
However, don’t be surprised if it is increasingly used to lift administrative burdens from human medical staff, allowing them more time for the mortal skills of empathy and kindness, which are unknown to AI, as I suspect they always will be.
AI is also a long way from being able to write anything but colourless prose. So don’t be fooled when you’re told that it’s taking over.
Despite what you might think, this piece was, I promise, written by a human.