Digital Life Matthew Webster
I hear ‘artificial intelligence’ (usually ‘AI’) mentioned frequently these days – so it’s about time I addressed it here.
It’s a thorny issue, and I can’t decide if AI is, in the words of Sellar and Yeatman, a good thing. Even the mighty Stephen Hawking has nailed his colours firmly to the fence by declaring, ‘The rise of powerful AI will be either the best thing or the worst ever to happen to humanity.’
So what is it? Unfortunately, AI is an incredibly vague term but, in essence, it describes the process by which computers make decisions on our behalf, based on a set of instructions, precedents and rules that we give them. It might be reading your numberplate at an airport car park, setting the price of an airline ticket, or providing you with the results of a Google search. All these are managed by AI of one sort or another.
Computers don’t think as we do – yet – but they can process numbers incredibly quickly. AI is the framework we provide them with to process a huge amount of data (numbers) in a flash and come up with the answer we would have reached ourselves, if only we had the time and energy to do it.
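To make that idea concrete, here is a purely illustrative sketch of the sort of rule-based decision-making described above – a toy airline ticket pricer. The rules and numbers are invented for this example; real pricing systems are vastly more complicated.

```python
# Illustrative only: a toy rule-based pricer of the kind described
# above. All rules and figures here are invented for the example.

def ticket_price(days_before_flight, seats_left):
    """Apply simple hand-written rules, just as the text describes:
    instructions and precedents we give the computer in advance."""
    base = 100.0
    if days_before_flight < 7:   # late bookers pay more
        base *= 1.5
    if seats_left < 10:          # scarcity pushes the price up
        base *= 1.25
    return round(base, 2)

print(ticket_price(days_before_flight=30, seats_left=100))  # → 100.0
print(ticket_price(days_before_flight=3, seats_left=5))     # → 187.5
```

The computer isn’t thinking; it is simply applying, very quickly, the rules a human wrote down.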
So far, so good. But, as it becomes more sophisticated, AI is using increasingly complex sets of rules and calculations (these are the ‘algorithms’ you also hear about) to go a step further and try to mimic what we do when we apply intuition, experience or common sense, rather than pure logic. This is much closer to actual thinking, in human terms, with more scope for problems.
We need to keep this in proportion. AI is in its infancy. Our brains, or rather our minds, remain far more complex than the most highly developed AI system – but only in general terms. At a highly focused task – like playing chess – AI is now often better than we are; so, make no mistake, AI is on the move.
It is already useful in taking on boring, repetitive or dangerous jobs. Mass manufacturing or mining are good examples but, to the alarm of some, we are also seeing AI starting to do some of the jobs we thought were reserved for sentient humans. The professions are increasingly nervous, as AI is threatening to disrupt them all.
Lawyers, for example, are especially worried that repetitive and predictable jobs such as property conveyancing or writing wills may soon largely be undertaken by AI, and it won’t stop there. It’s not difficult to imagine routine court procedures or contracts being handled, at least in part, by robo-lawyers.
Accountants, too, are seeing a huge increase in the automation of their profession, as are architects; there is even much talk at present of AI being used by doctors to diagnose disease. It is already very evident in mass medical research.
One significant benefit to mankind that AI might achieve, therefore, is to lower professional costs, and not before time. Generally, when labour costs rise, those who pay the wages increase mechanisation, and hence employ fewer people. Legal costs in particular are now so absurdly high that, if we clients were offered even a semi-automatic option at a lower price, I suspect we’d grab it with both hands.
So, is AI a good thing? While it may reduce your legal fees, would you be happy if AI decided whether you were promoted or received a pay rise?
My own instinct is that we need some sort of convention determining the way we use AI. Something like the Highway Code – cars are dangerous but, because we all tend to obey the code, we don’t usually get hurt using them. So it must be with AI. Wishful thinking, I suspect.