Are machines getting smarter than humans?
Yes, they are, and it is another cue for us to answer the ethical questions that AI poses
In 2005, Ray Kurzweil wrote his seminal book, ‘The Singularity is Near’. Singularity was the term he used to describe humanity combining with Artificial Intelligence (AI) to multiply its own intelligence “billion-fold”. The term was first used by John von Neumann (yes, the von Neumann) to describe tremendous change caused by technological progress. Mr Kurzweil has always maintained that we will achieve singularity by 2045. There’s another important milestone Mr Kurzweil often talks about: the ability of an AI programme to achieve human levels of intelligence (as measured by the Turing test). Mr Kurzweil’s sense is that this will happen by 2029.
While 2029 is still some years away, the ability of machines to out-think and outsmart humans in some areas was once again demonstrated this week. On December 5, researchers at DeepMind, a company now owned by Google’s parent Alphabet Inc, released a paper about AlphaZero, an AI programme that taught itself all there is to know about chess in four hours and then wiped the floor with the world’s most powerful open-source chess engine. Repeatedly. DeepMind was the company that moved AI from the realm of sci-fi to reality with its use of deep learning: essentially, programmes that teach themselves things. Since then, AI has made rapid progress, enabled by what is called Big Data (lots and lots of data), which helps such programmes learn. AlphaZero shows how far we have come. It uses a technique called reinforcement learning, which is based on the principle of rewards and punishments.
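The rewards-and-punishments idea behind reinforcement learning can be sketched in a few lines of code. What follows is a minimal, purely illustrative example (tabular Q-learning, a much simpler relative of the methods AlphaZero actually uses, applied to a made-up five-square corridor rather than chess); every name and number in it is an assumption chosen for illustration.

```python
# Toy reinforcement learning: an agent on a corridor of 5 squares (0..4)
# learns, by trial and error, to walk right. Reaching square 4 earns a
# reward (+1); every other step incurs a small punishment (-0.01).
import random

N_STATES = 5          # squares 0..4; square 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise pick the action valued highest so far
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01   # reward or punishment
        # Q-learning update: nudge the estimate toward reward + best future value
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, "step right" is valued above "step left" on every square
print(all(q[(s, +1)] > q[(s, -1)] for s in range(N_STATES - 1)))
```

No code was published in the newspaper piece itself; the point of the sketch is only that the programme is never told the answer, it simply learns which actions lead to rewards.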
AlphaZero is also a sign that the 2029 deadline may be met much earlier. There are still a lot of questions, mainly ethical ones, that remain unanswered about AI, and the march of programmes such as AlphaZero is perhaps a cue for humankind to get down to answering them quickly.