Business World

Vaccinating against the AI chatbot hype

- BENITO L. TEEHANKEE benito.teehankee@dlsu.edu.ph

If we want to harness artificial intelligence or AI to enhance productivity in the workplace and accelerate national development, we need to eradicate the prevailing nonsense about AI. To be clear, AI includes many mathematically based computer technologies mimicking human intelligence that we already use every day. Voice recognition, computer vision, video recommendation systems, internet search, and GPS navigation are among many examples of useful AI. The main problem is the hype and resulting nonsense around the most popular AI chatbots based on large language models such as GPT-4 and its contenders. For simplicity, I will refer to these as “chatbots.”

As the AI arms race led by Microsoft and Google continues to heat up, the market capitalization of Alphabet (the parent company of Google) recently dropped by several billion dollars. The stock price drop was triggered when Google’s Gemini chatbot, the recently released successor of Bard, generated images and statements that social media users found objectionable for one reason or another.

I was not surprised by the Gemini fiasco, since it is just the latest in a string of chatbot scandals since the release of ChatGPT by OpenAI in November 2022. The rush by the top technology firms to market AI products guarantees that corners will be cut and adequate testing will not be done. What is disappointing, however, is how people persist in their misconceptions about chatbots and how the technology companies keep promoting these misconceptions through mindless, misleading, and exploitative hype. This leaves people with flawed mental models of chatbots, fueling the repeated cycles of hyped expectations and scandalous disappointments.

At De La Salle University, we aim to teach critical thinking, defined as “examining information to bring to light assumptions and evidence behind them before accepting or acting on them.” Critical thinking is the vaccine we need to stop the spread of chatbot nonsense. We badly need critical thinking and discussion in order to deeply understand how chatbots work and what they can and cannot do.

The challenge is that discussions around this topic often trigger more emotion than clarity because, as humans, we are deeply invested in our mental models. However, we need to continue such discussions and be less sensitive about them because they will reveal our assumptions about AI and challenge us to present evidence to support these assumptions. As a result, we will have a genuine, not artificial, understanding of chatbots.

Taking the critical thinking vaccine against AI chatbot nonsense simply means keeping two basic things in mind:

A chatbot is programmed to be fluent, but not necessarily factual. People who are disappointed by the mistakes of chatbots (technically referred to as “hallucinations”) assume that chatbots are supposed to give factual answers. This is simply not true. The programming and training of chatbots aim to produce fluent, human-like answers to questions based on statistical patterns derived from huge amounts of digital text. Since the texts used to train chatbots have not been checked for factual accuracy, why do we expect these chatbots to produce factually accurate output? The fluency and seeming confidence of their outputs lead our minds to assume that the chatbot is sticking to the facts. In reality, any factual statement produced by a chatbot is a statistical accident.
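For readers who want to see the mechanics, here is a deliberately crude Python sketch of the idea. It is a toy, not how GPT-4 or any real chatbot is built (those use vast neural networks rather than simple word-pair counts), and its tiny training “corpus” is hypothetical. But it shows how a program can assemble fluent-sounding statements from statistical patterns alone, with nothing anywhere checking whether the result is true.

import random
from collections import defaultdict

# A hypothetical miniature training corpus: unverified text, like the web.
corpus = (
    "the professor taught at oxford university . "
    "the professor taught at manila university . "
    "the consultant worked at the world bank . "
    "the professor worked at the bank ."
).split()

# Count which word tends to follow which (a simple bigram model).
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# Generate text by repeatedly sampling a statistically likely next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])  # plausible, not necessarily true
    output.append(word)

print(" ".join(output))

Run a few times, the sketch can output sentences like “the professor worked at oxford university,” which never appear in its training text. At the scale of a real chatbot, that is a hallucination.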

A chatbot is a statistical statement generator, but not a search engine. Because chatbots are trained using internet data, people assume that their outputs must contain statements that actually exist on the internet. This is not the case. A moderator for a conference where I was to give a talk used ChatGPT and introduced me as a doctoral graduate from Oxford University, a consultant to the World Bank, and the Chairman of the Asian Institute of Management. None of these are true. A Google search will not produce a single web page that claims these as facts. So, where did these claims come from? The chatbot generated them from statistical patterns. Simply put, the chatbot made them up!

In conclusion, chatbots are powerful tools for language processing and generation, but they are not truly intelligent. Users must approach chatbot content critically and verify information using other sources. For their part, chatbot developers should make accurate, transparent, and verifiable claims about the capabilities and limitations of their products and services. As the field of AI progresses, ongoing critical thinking and dialogue among developers and users, accompanied by continuing education for all stakeholders, are essential to bridge the gap between human expectations and the true capabilities of chatbots.

Meanwhile, let’s stop the nonsense.

DR. BENITO L. TEEHANKEE is a full professor at De La Salle University and co-chair of the Shared Prosperity Committee of the Management Association of the Philippines.


Photo: Google DeepMind / Unsplash
