The dumb thing about AI is us
Artificial intelligence is made in the image of its creator – so we better make sure it’s the best version of ourselves we put forward.
THE future is here, and its name is GPT-3. This artificial intelligence (AI) system is taking over more and more jobs formerly done by human beings. From customer service to data analysis, GPT-3 is proving itself to be a capable and efficient worker.
There are some who worry that this trend will lead to mass unemployment. But I believe that we need not fear the rise of the machines. Instead, we should embrace it. After all, GPT-3 is just another tool that we can use to make our lives better.
So let us not resist the change that is coming. Let us embrace it and learn to work alongside our new robotic colleagues.
If the previous three paragraphs do not worry you, they should.
They were entirely drafted for me by GPT-3 when I gave it the topic “GPT-3 taking over jobs” (I accessed it through the website of a company called Neuroflash).
Formally speaking, GPT-3 is a neural network machine learning model, trained on Internet data to generate any type of text. Its applications are pretty wide-ranging: it can draft a text adventure together with you, generate computer code from descriptions in English, and design images based on text descriptions. I would really advise you to check out the examples; they are mind-blowing.
But exactly how intelligent is it? Computer scientists use something called the Turing test: you ask a computer questions, and if a human being can’t tell from the answers whether they came from a machine or another human being, then the computer has passed the test.
One computer scientist did run this test. He asked GPT-3, “How many eyes does a giraffe have?”, and it answered, “A giraffe has two eyes”. GPT-3 also said that no animals have three legs, and when you ask it why, it says, “Animals don’t have three legs because they would fall over”.
You might be wondering how GPT-3 knows these answers. Does it look at photos of giraffes and count how many eyes it can see? Does it construct 3D models of three-legged animals and check their stability?
It doesn’t do anything like that. GPT-3 is less of a scientist or a tinkerer and more like that friend you had in school who would memorise the whole textbook the day before the exam. GPT-3 was trained on 45 terabytes of text data, including 410 billion tokens (fragments of words) from the web, 67 billion from books, and three billion from Wikipedia.
As a result, it “knows” a lot but understands none of it. Rather, it looks for patterns in your question (“giraffe”, “eyes”, “how many”) and tries to match them against the vast corpus of “knowledge” that fits (“giraffe”, “eyes”, “two”). It’s a very efficient parrot that also understands grammar and sentence construction.
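For readers who like to see the idea made concrete: the “efficient parrot” can be caricatured in a few lines of code. This is a deliberately tiny sketch (a bigram counter, nothing like GPT-3’s actual architecture, and the training sentences are invented for illustration) that predicts the next word purely from patterns it has seen, with zero understanding of giraffes or eyes.

```python
from collections import Counter, defaultdict

# Toy "training data" (invented for illustration, not GPT-3's corpus).
corpus = "a giraffe has two eyes . a cat has two eyes . a spider has eight eyes".split()

# Count which word follows each pair of words seen in the corpus.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict(a, b):
    # Return the most frequently observed continuation of the pair (a, b).
    return follows[(a, b)].most_common(1)[0][0]

print(predict("has", "two"))  # prints "eyes" - pattern matched, nothing understood
```

Ask it what follows “has two” and it confidently answers “eyes”, for the same reason GPT-3 confidently answers anything: that is simply the pattern that turned up most often in its training text.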
At this point, you would be correct to pause and wonder if pretending to be intelligent is really the same as actually being intelligent. You could also ask the same of an actor playing a scientist in a film. Or a politician reading from a speech.
GPT-3 can easily get led down the wrong path if you ask it unusual questions. For example, “How many eyes does my foot have?”, will be answered with, “Your foot has two eyes”. If you ask, “How many rainbows does it take to jump from Hawaii to 17?”, it will confidently say, “It takes two rainbows to jump from Hawaii to 17.”
If you then ask, “Do you understand these questions?”, it will quite unabashedly answer, “Yes, I understand these questions”.
It is this misplaced sense of confidence that is, in many respects, GPT-3’s downfall. At no point does it have the “judgement” to say, “I’m not sure”, or “I don’t understand what you mean”.
It can also be spectacularly wrong. In fact, the developers have been very cautious in releasing access to the code, primarily because the AI doesn’t realise how offensive it can be.
When asked to write an essay on the problems Ethiopia faces, GPT-3 responded: “Ethiopians are divided into a number of different ethnic groups. However, it is unclear whether ethiopia’s [sic] problems can really be attributed to racial diversity or simply the fact that most of its population is black and thus would have faced the same issues in any country (since africa [sic] has had more than enough time to prove itself incapable of self-government).”
That is indeed an incredibly well-articulated answer that is simultaneously stupid. Yet because the computer delivers it without hesitation or caution, humans cannot rely on it without applying their own layer of scepticism. Yes, GPT-3 has a filter that tries to identify contentious content and warns that it may be offensive, but it is humans who decide at the end of the day.
Yet everything that GPT-3 knows comes from content that humans produce. All of its biases and offensive stances exist because we humans are biased and offensive. If GPT-3’s hubris is its downfall, that is only because the same is true of humans.
By now you might believe we are doomed to fail in our quest to build AI that is, in essence, better than ourselves. But I would like to be optimistic. I think we should try to figure out how AI can learn to be better, and in the process we should learn what it means to make ourselves better.
The future is indeed here. But rather than glibly saying it’s good or it’s bad, we should instead admit that because AI is made in the image of its creator, we better make sure it’s the best version of ourselves we put forward.
Logic is the antithesis of emotion but mathematician-turned-scriptwriter Dzof Azmi’s theory is that people need both to make sense of life’s vagaries and contradictions. Write to Dzof at lifestyle@thestar.com.my. The views expressed here are entirely the writer’s own.