Kashmir Observer

Will AI really change EVERYTHING?

NOT LIKELY

- JOSEPH WILSON

Do you have AI fatigue yet? Not a day goes by without breathless commentary on the increasing power of artificial intelligence models. A deluge of new apps and services promises to disrupt everything from health care to law to education. “The future is here,” we are told. “Are you ready?”

There is an endless supply of grand prognostications on exactly how artificial intelligence will “change everything.” But these prophecies tend to fall into one of two camps. Either they are blindly optimistic, claiming that AI will magically solve everything from climate change to the opioid crisis, or they are darkly dystopian, warning us that AI could escape its silicon chains and destroy humanity.

Even when AI developers themselves “warn” people of the existential threats AI could pose, as they did in an open letter recently calling for a pause in development, it functions as a marketing campaign. The tech companies are essentially congratulating each other for creating something too good. Google’s CEO Sundar Pichai has called AI, without irony, a technology “more profound than fire or electricity.”

The public doesn’t know what to believe, and it’s worried. A newly released poll conducted by Innovative Research Group for the 2023 Provocation Ideas Festival shows that 47 per cent of Canadians are more concerned than excited about the increased use of AI. Only 9 per cent are more excited than concerned. Even those who are ambivalent about an AI-saturated future will become exhausted by the constant exhortations to “future-proof your career” or “become AI literate.”

The reality is that most of what we read about AI is hype. In the near term, this new crop of AI tools will probably give us slightly better-written spam in our inboxes and reams of crappy, machine-generated websites. Real, life-saving applications are indeed possible in fields such as health care and agriculture, but they’ll be hard to spot amidst all the junk. Although tools like ChatGPT and Midjourney are fun to play with and can astonish us with their output, they are not operating anywhere near human intelligence. They are essentially performing a clever parlour trick.

The reason we are astonished by their output is because, as a species, we’re gullible. We tend to read human characteristics into any pattern that even mildly resembles a human. We see faces in electrical sockets and spot human silhouettes in evening shadows.

We feel bad for a discarded teddy bear. And when it comes to language, we tend to attribute human intention to even the most banal sentences if they’re written well enough.

Appealing to this state of heightened empathy is one of the ways technology companies have captured the public’s attention in recent months. OpenAI launched ChatGPT (which generates text) and DALL-E (which generates images) online and for free so the public could play around with them. It let the public work itself into a frenzy as they identified characteristics in the programs that were previously thought to be exclusively human: reason, humour, emotion, creativity. But generative AI can do none of these things. It has the form of human expression but no content.

The technology that runs under the hood of these tools is not fundamentally new. The mathematical models have changed in recent years and new chips are making computation cheaper and more efficient, but ChatGPT only functions like a powerful autocomplete feature. Trained on an enormous amount of data, the model predicts which words are likely to come next in a sentence. That’s it. It feels like there’s a human behind the curtain, but it’s really just statistics.
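To see how unremarkable “predicting the next word from statistics” can be, here is a deliberately toy sketch in Python. It is not how ChatGPT actually works (real models use neural networks over vast corpora, not bigram counts), and the corpus and function names here are invented for illustration; it only shows the basic idea of counting which word tends to follow which:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent next word, or None if the word is unseen."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny, made-up training corpus for illustration only.
corpus = "the cat sat on a mat and the cat slept on a mat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" — it follows "the" most often here
```

The point of the toy is the author’s: there is no reasoning inside, only tallies of what came next in the training data.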

The hype will allow tech companies to pump their valuations sky-high, further concentrating capital and technological know-how in the hands of very few billionaires. As such, the field of AI is desperately in need of regulation. This is necessary not because tech companies might unleash a mathematical model that will suddenly become conscious and take over the world, but for the very real, boring reasons that have always existed: so they don’t take advantage of poorly paid temp workers, or refuse calls to be transparent with their algorithms, or flood social media with misinformation, or violate copyright laws by scraping the web for data without the permission of its owners. Sadly, Big Tech is already doing these things, and governments have been slow to act.

Fear, as populist politicians and headline writers know well, is best evoked by appealing to the unknown. Whether it’s the fear of AI-gone-rogue or the fear of falling behind in the race to the future, both function to keep consumers credulous and anxious. So the next time you hear a platitude spoken in worship of AI, feel free to roll your eyes.

This article was originally published by The Globe and Mail.

