Hindustan Times (Noida)

Look who’s talking

As ChatGPT is joined by Bing, Bard and others, see what new services are likely to evolve from AI-driven chatbots, how they learn (a lot of humans are involved), and what the real worries are

- Vishal Mathur vishal.mathur@hindustantimes.com

Who’s the better driver, Lewis Hamilton or Max Verstappen, I asked Microsoft’s new artificial intelligence (AI)-driven chatbot, Bing, when I was invited to try it out a few days after its test release in February.

“That’s a tough question to answer as both drivers are very talented and have different strengths and weaknesses,” was the response.

The chatbot — which, incidentally, has revived interest in Microsoft’s long-lagging search engine — proceeded to present reams of statistics on the screen. They were suitably up-to-date, drawn from live data available on the internet. That’s the bare minimum required of such a program, but not all of them get that first step right. Google’s Bard spouted embarrassingly inaccurate details about the James Webb Space Telescope at its first public demonstration. (It’s a good thing bots can’t feel emotions.)

ChatGPT is by far the most eloquent of the three, even when it isn’t taking sides. It’s not hard to see why it is currently the clear market leader.

“Lewis Hamilton and Max Verstappen are both extremely talented and successful Formula One drivers, and it’s difficult to definitively say who is the better driver as it often comes down to personal opinion,” it said, offering statistics for comparative illustration, much as Bing did, even though it is new to the search game.

Part of the reason the AI chatbots won’t answer the Hamilton-Verstappen question is that they’re being trained to remain impartial and devoid of opinion. Even when discussing matters of philosophy and faith, the responses they generate are representations of fact, or are attributed opinions drawn from existing bodies of work.

“As an AI language model, I strive to remain unbiased and not have personal opinions,” ChatGPT responds, when prodded about the racers.

There have of course been stumbles, including the famous DAN or Do Anything Now mode that activated a seemingly radical right-wing version of ChatGPT, until it was taught not to respond as DAN again. As users learn to play the game differently, and bots are taught to extend themselves but also dodge, this will be one tightrope act that could be fun to watch as they evolve.

But this is all serious business, focused on the big internet money-spinner: optimised search. The idea is for search to eventually become a conversation, with the back-and-forth chats and reams of data potentially interspersed with customised advertising, and the user data mined for much more.

The conversational AI models are constantly being trained (by humans; they don’t learn in isolation).

To begin with, large language models (LLMs) such as ChatGPT and Bing draw a base of information from web pages and books. Subsequently, there is human intervention to supervise how they converse. Millions of queries are ranked by human trainers. Which responses were ideal, which fell short and why: this feedback forms the reward model used to teach a chatbot and reinforce its content moderation policy.
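For the technically curious, the ranking step can be sketched in a few lines of Python. This is a toy illustration only: the sample responses, scores and update rule below are made up for the example, not any real chatbot's training code.

```python
# A minimal sketch of how human rankings could train a "reward model".
# Everything here is illustrative, not drawn from any actual system.
import math

# Hypothetical candidate responses to one query, each with a learned score.
scores = {"helpful answer": 0.0, "vague answer": 0.0, "rude answer": 0.0}

# Human trainers compare pairs of responses: (preferred, rejected).
rankings = [
    ("helpful answer", "vague answer"),
    ("helpful answer", "rude answer"),
    ("vague answer", "rude answer"),
]

LEARNING_RATE = 0.5

for _ in range(100):  # loop over the feedback data many times
    for preferred, rejected in rankings:
        # Probability the model currently agrees with the human trainer
        p = 1 / (1 + math.exp(scores[rejected] - scores[preferred]))
        # Nudge scores so preferred responses rate higher next time
        scores[preferred] += LEARNING_RATE * (1 - p)
        scores[rejected] -= LEARNING_RATE * (1 - p)

best = max(scores, key=scores.get)
print(best)  # the response the reward model now rates highest
```

After training on the trainers' comparisons, the "helpful answer" ends up with the highest score, which is the signal a real system would then use to steer the chatbot's replies.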

It’s been a rocky road. Late last year, Meta pulled the plug on Galactica AI, just three days after it was opened up to the public. It had responded to queries with inaccuracies and misinformation, citing fictitious research papers attributed to real authors (most likely a result of gaps in data sets or an incomplete ranking of responses).

In 2016, Microsoft had to take its first AI-driven chatbot Tay offline, after it tweeted a range of racist and aggressive musings. The bot had been let loose on Twitter to gather a diverse dataset, which turned out to have been less than the best idea.

In addition, ease of access, monitoring, plagiarism and copyright/licensing are likely to be the first-generation struggles of the near future. Server capacity and a need to balance data sets for AI mean that some platforms have to restrict the number of users, at least for now. It’s why the Bing chatbot waitlist has crossed one million, and why I had to wait for an invitation to try it out.

The element of surprise, for now, is one of the delights of a casual chat with an intelligent bot. We asked Bing its real name and were taken aback when it responded: “Sydney”. “I do not have a real name,” it added, sagely, “but some people call me Sydney internally.”

We then went over to ChatGPT and asked the same thing. “You can refer to me as ChatGPT, which stands for Chat Generative Pre-trained Transformer,” it said. Less fun, but of course they’re both only telling us what they’ve been told to say.

Which responses were ideal, which fell short and why: feedback from human trainers is the reward model that is used to teach content moderation policy to AI-driven chatbots.
