Lethbridge Herald

U of L professor weighs in on the possibilities of AI


A University of Lethbridge professor is weighing in on the possibilities of artificial intelligence.

AI is a hot topic: some people tout it as the next big thing for increasing productivity and reducing costs, while others are concerned robots will replace workers. Every day brings a new headline, and the average person may be left to wonder whether AI is a good or bad thing.

It’s both, says Sidney Shapiro, a Dhillon School of Business assistant professor with expertise in data analysis and AI.

“Because there are a lot of unknowns, people are very worried about what it can do in the future, but they’re not looking at what it can do right now,” says Shapiro.

Right now, Shapiro says, AI is in the midst of a big transition. Companies are looking at how to innovate and deliver more value for shareholders, that is, make more money by automating everything. But that can backfire, as has happened with self-checkouts.

Overall, AI is just a tool with benefits and drawbacks, and the legal system hasn't caught up with its implications. Is AI poised to take over the world? Shapiro says that's not likely to happen anytime soon. A house builder, for example, may find some AI tools helpful with certain aspects of the work, but humans are still needed to build the home.

“Until computers get much more powerful, it’s going to be difficult to have the vision of AI of what people want as something that completely transforms our lives,” Shapiro says in a press release.

“The reality is that there’s a lot of hype in AI right now. And that hype overestimates what you can do with AI. It’s a useful tool, but it doesn’t replace what we can do as people, which is come up with original ideas.”

He says AI has been around for more than 40 years in various forms, and it has typically been used as a sorting tool. Analyzing data by looking for patterns in numbers can help businesses better target their advertising. For example, people in a certain demographic, like young parents, will likely be more receptive to ads about diapers.

What’s happened with AI more recently is the building of large language models (LLMs) like OpenAI’s ChatGPT. By analyzing patterns in words, LLMs can predict the next word in a sentence or generate the next paragraph in an essay.
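The next-word idea can be illustrated, far more crudely than a real LLM, with a toy bigram model in Python. The tiny corpus and the function name here are invented for illustration; real models use neural networks trained on vastly more text, but the core task of predicting a likely next word is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word most often
# follows each word in a small corpus, then "predict" the next word.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Pick the follower seen most often after this word in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

As Shapiro notes, such a model can only recombine patterns it has already seen; it cannot produce a word that never appeared in its training data.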

“We’re looking at the underlying patterns and a great example of this is a resumé,” says Shapiro. “There are zillions of resumés online. An LLM can generate new information based on all the different possibilities it has in its database.”

While AI may have a large database to draw from, the data is limited to information that’s already known.

“What we do at a university is usually try to find new knowledge that doesn’t already exist and find new connections between data that nobody has thought of before,” he says. “Although it might be possible to calculate all the possibilities using AI, AI is generally not very good at coming up with new things it doesn’t already know about.”

Another issue with AI is that it takes a huge amount of power, so it’s not very sustainable. Every time a new question is asked, an LLM goes through a large number of possibilities to pick the answer that’s most likely. If you ask ChatGPT to write a poem, it will generate one. However, Shapiro says it’s not possible to know why that particular poem was the one generated.

“The philosophical question is ‘Could you take every great novel, remix them, press a button and another great novel, a new one, comes out?’” Shapiro says. “Theoretically, it’s possible but that hasn’t happened yet. So, you have to ask what makes a novel great and not just a collection of words. If you’re using something like ChatGPT to write an essay, it can regurgitate knowledge but not very well and not within the context of what you know.”

One of the controversies surrounding AI is where the data to train LLMs came from in the first place: the internet, newspapers or entire books?

In December, CBC News reported that an investigation it conducted found at least 2,500 copyrighted books written by more than 1,200 Canadian authors were part of a dataset (now defunct) that was used to train AI.

Authors have also launched lawsuits against OpenAI and Microsoft, alleging their work was used to train AI systems. The training of LLMs requires large amounts of content, so data — whether from the internet or social media — is like the new oil, Shapiro says.

While ChatGPT can help generate ideas for a student essay, educators are concerned about its effects on student learning and its potential for cheating. Shapiro says some professors have changed their assignments in response. They may ask students to write an essay themselves, then generate the essay using ChatGPT and critique the two.

“We’ve gotten into a pattern of having students write essays and we’ve gotten away from oral exams and asking students to do live presentations,” he says. “Whether we are using AI in our classes or not, students will be using AI in their jobs when they graduate. The question is how we prepare students for the future so they understand the tools and can leverage them in a way that works.”
