San Antonio Express-News (Sunday)

AI growth poses test for colleges

Amid fears of ChatGPT cheating, professors hope it can help with learning

- Eric Killelea and Danya Perez

Faculty in the mechanical engineering department at the University of Texas at San Antonio have been chewing on a new discussion topic: ChatGPT, an app that can attempt to solve math problems and imitate conversational English to explain concepts and write simple yet often nuanced sentences.

Students might already be using the increasingly popular chatbot to complete assignments, but no campuswide policy defines how that might be considered cheating. Some instructors are looking for ways they could introduce the technology in their classes as an educational tool, said Chris Combs, an assistant professor who joined the conversations.

“As academics, we are obligated to make sure students are trained in the tools of the modern age as best as possible. It’s out there and people are going to use it,” Combs said in an interview last week. “In the distant future, we would look back on that and say, ‘It’s like looking at a calculator like it’s cheating in math.’ ”

Professors at other San Antonio-area institutions of higher learning — also working without formal policies on the use of AI text and image generators — have launched their own research to familiarize themselves with ChatGPT, knowing it is gaining traction with their students.

Some are putting together workshops open to all faculty. Others are holding department-focused meetings on how to respond. Amid the obvious concern over how students will choose to employ the app, some see the technology as an opportunity to teach in a different way.

“Understandably, there’s a little bit of panic happening because of ChatGPT. But I think that perspective limits us in what we can do with our students,” said Scott Gage, who directs the First Year Composition Program at Texas A&M University-San Antonio.

“I come to it from the perspective of, ‘OK, so we have this new technology, let’s learn about it. What can it do? What can’t it do? And how do we work, not against the technology, but how do we work with the technology?’ ” Gage added.

Across the United States, teachers and administrators are grappling with how to handle the widening use of AI-based programs in and out of the classroom.

Some school systems have cut off access to ChatGPT over worries about cheating and are weighing what the easy availability of advanced tech means for learning. New York City public schools this month blocked ChatGPT on school computers and networks. Schools in Los Angeles, Seattle and Baltimore have restricted access.

But schools in San Antonio and across Texas have not yet banned ChatGPT or defined how students can use AI, leaving professors, for now, to adapt their teaching methods to the most advanced tech available to the common student.

“We clearly have policies around academic integrity and to hold students accountable for any types of academic dishonesty,” said Melissa Vito, vice provost for academic innovation at UTSA. “We don’t have a specific policy addressing AI and ChatGPT. Whether we will in a year or two years, or six months, is yet to be determined.”

A UTSA professor said his colleagues suspect students used ChatGPT during final exams in December, though none have been able to prove instances of plagiarism. Vito said she hasn't yet received reports of potential cheating but “wasn't shocked” by the idea that students are using the app to complete their coursework.

For now, Vito doesn't want to rush a policy and supports faculty attempts to learn about ChatGPT and look at ways to use it.

UTSA last week launched a website featuring ChatGPT-specific “instructional strategies,” with recorded presentations and workshops, guides and articles. The university is also bringing in tech experts to speak on ChatGPT and developing a “faculty learning community” of local professors to gauge the effects of AI apps in classrooms.

Good but still limited

ChatGPT, released in November by the artificial intelligence lab OpenAI, is a large language model that uses algorithms to analyze information pulled from the internet to generate text in response to user prompts. The company has been fine-tuning the app’s ability to predict words in sentences and find patterns.

While a growing number of users have prompted ChatGPT to write poetry, fan fiction, raps and computer code, researchers found that the app can just as easily generate propaganda and disinformation.

In recent interviews, area professors said the app performs well when fed simple prompts. It can generate text on the history of San Antonio, for example, but it struggles when asked detailed technical questions or prompted to provide opinions about academic topics.

“It’s sort of like Wikipedia,” Combs said, referring to the free, internet-based encyclopedia. “It’s usually right, but be wary — you need to do some self-evaluation. Sometimes it’s wrong.”

For all its limitations, ChatGPT is creating a stir among academics.

University of Minnesota law professors recently published a study showing that the app could get passing grades on graduate-level exams. ChatGPT also passed a business management exam at the Wharton School of Business, a professor there found. The researchers were impressed but noted that the app struggled to handle advanced questions and that its grades were in the B- to C+ range.

Professors in San Antonio said there’s a race to understand ChatGPT and similar AI-fueled apps — because they’re going to improve. Businesses are pouring money into San Francisco-based OpenAI, which began as a nonprofit research company in 2015.

Last month, Microsoft said it was making a “multiyear, multibillion-dollar investment” in the company and its tools.

In the classroom

Ronni Gura Sadovsky, an assistant professor in Trinity University’s philosophy department, has been playing with ChatGPT, asking it questions she would normally ask her students for writing assignments.

Like other professors, she was somewhat impressed, up to a point.

“Although ChatGPT was not doing a great job at getting the right answer, it was doing a great job of demonstrating that it would give it the ‘old college try,’ ” Sadovsky said. “It does a very good job at using the terminology, structuring an essay according to a formula that works very well for short, college essay-type writing.”

Two things immediately came to mind for Sadovsky. First, she might be able to use these AI-generated essays as a lesson on writing structure for her undergraduate students. Second, there’s an obvious concern that students might not learn anything when relying on the technology, she said.

Sadovsky and her colleagues are putting together a workshop through The Collaborative for Learning and Teaching at Trinity, where any faculty member can find out more about these tools.

“What competencies do we worry our students will miss out on if they use ChatGPT to complete their work?” the workshop description asks.

“In a world where ChatGPT is available, how can we find a different route to build these competencies? And if we’re feeling optimistic, what competencies might we help them build by incorporating generative chatbots like ChatGPT into our teaching?”

Abe Gibson, an assistant professor of history at UTSA, has introduced earlier text-generating programs to his students. Last semester, he tasked students in his History of Technology course with experimenting with GPT-3, a less sophisticated version of ChatGPT.

Now, with many of his students aspiring to become teachers, Gibson said, there's an immediate need to think through the balancing act of using ChatGPT in the classroom. This semester, he's using the app in a master's degree course called Historical Methods.

“It’s very important that they know about synthetic media and text generators,” he said. “Is it a harmless, potentially good accelerant? Or is it an insidious tool for misinformation, or something in between? That’s what we’ll try to figure out.”

In the rapidly accelerating AI sphere, professors like Gibson said they’re just trying to keep up with tech advances — and they know more are coming. Last month, Sam Altman, the CEO of OpenAI, told StrictlyVC, a tech newsletter, that the company is planning to release the next version of its chatbot, called GPT-4.

“We need to meet this AI challenge head on,” Gibson said. “We need to demystify it so that we know exactly what we’re dealing with and what it can accomplish and what is the best, most responsible and ethical way to use this new, emerging technology.”

How to police?

Some professors said they have plenty of experience getting in front of tech that can be used to cheat.

For years, UTSA’s Combs has plugged exam questions into the website of the tech company Chegg to see if its database of 46 million textbook and exam problems can provide the answers. If it can, he changes his questions.

By comparison, he said, ChatGPT “is like a calculator which struggles to make things personal and give opinions.” Combs believes he can craft questions to beat the app.

“It’s out in the wild now,” he said, adding that it’s the responsibility of professors to fine-tune their assignments to challenge the app. “If a student can just use ChatGPT to do the assignment, maybe it’s not an assignment you should be giving right now.”

Gage of A&M-San Antonio agreed. His most immediate concern is the obvious risk of plagiarism, but focusing on that would mean starting from a place of distrust, and good teaching makes a better safety net, he said.

“There's a difference between assigning writing and teaching writing,” Gage said. “If we are teaching writing, we are engaging with our students' voices, we are engaging with their identities as writers and where they are in that moment as writers, we are working with them as they develop.

“Through that type of engagement of student writers, it can become apparent and it can be detected if a student is suddenly using ChatGPT to write.”

Sadovsky and Gage said they would be interested in helping their universities shape policy on how to use or restrict chatbots — and policies should emphasize responsible use, they said.

“I would be very disappointed if our response was just policing,” Sadovsky said. “If what we try to do is just to get better at catching whether a student’s answer was crafted using ChatGPT, then I think we are not doing our job well.”

Yet administrators and professors also noted the need for technology that can identify plagiarism in both text and images. Some said they have used ChatGPT enough to recognize when a student is relying on it but fear the technology’s improvements eventually will make that impossible.

In a blog post Tuesday, OpenAI said it had launched its new AI Text Classifier tool to help educators detect whether a student or an app wrote an assignment. The company stressed that the new tool “is not fully reliable” but is better than what was previously available.

Combs said Wednesday that he tried the new tool on text he wrote for an academic proposal and that it correctly identified the text as not written by AI.

But then Combs used ChatGPT to generate AI text and pasted it into the tool.

“It wasn't sure if it was AI,” he said.

Kin Man Hui/Staff file photo: The University of Texas at San Antonio has launched a website featuring ChatGPT-specific “instructional strategies.”
