The Atlanta Journal-Constitution
Chatbots teaching children: Is it a boon or just hype?
‘It’s still a little bit science fiction, but it’s much less science fiction than it used to be.’
Sal Khan, the CEO of Khan Academy, gave a rousing TED Talk last spring in which he predicted that AI chatbots soon would revolutionize education. “We’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen,” said Khan, whose nonprofit education group has provided online lessons for millions of students. “And the way we’re going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor.” Videos of Khan’s tutoring bot talk amassed millions of views. Soon, prominent tech executives, including Sundar Pichai, Google’s CEO, began issuing similar education predictions.
“I think over time, we can give every child in the world and every person in the world — regardless of where they are and where they come from — access to the most powerful AI tutor,” Pichai said on a Harvard Business Review podcast a few weeks after Khan’s talk. (Google introduced an artificial intelligence chatbot called Bard last year. It also has donated more than $10 million to Khan Academy.)
Khan’s vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could.
In pursuit of such ideals, tech companies and philanthropists over the years have urged schools to purchase a laptop for each child, championed video tutorial platforms and financed learning apps that customize students’ lessons. Some online math and literacy interventions have reported positive effects. But many education technology efforts have not proved to significantly close academic achievement gaps or improve student outcomes such as high school graduation rates.
Now the spread of generative AI tools such as ChatGPT, which can give answers to biology questions and manufacture human-sounding book reports, is renewing enthusiasm for automated instruction — even as critics warn that there is not yet evidence to support the notion that tutoring bots will transform education for the better.
Online learning platforms such as Khan Academy and Duolingo have introduced AI chatbot tutors based on GPT-4, a large language model developed by OpenAI that is trained on huge databases of text and can generate answers in response to user prompts.
Some tech executives envision that, over time, bot teachers will be able to respond to and inspire individual students just like human teachers.
“Imagine if you could give that kind of teacher to every student 24/7 whenever they want for free,” Greg Brockman, the president of OpenAI, said last summer on an episode of the “Possible” podcast. (The podcast is co-hosted by Reid Hoffman, an early investor in OpenAI.) “It’s still a little bit science fiction, but it’s much less science fiction than it used to be.”
The White House seems sold. In a recent executive order on artificial intelligence, President Joe Biden directed the government to “shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools,” a White House fact sheet said.
AI-assisted instruction
Even so, some education researchers say schools should be wary of the hype around AI-assisted instruction.
For one thing, they point out, AI chatbots liberally make stuff up and could feed students false information. Making AI tools a mainstay of education could elevate unreliable sources as classroom authorities. Critics also say AI systems can be biased and often are opaque, preventing teachers and students from understanding exactly how chatbots devise their answers.
In fact, generative AI tools may turn out to have harmful or “degenerative” effects on student learning, said Ben Williamson, a chancellor’s fellow at the Centre for Research in Digital Education at the University of Edinburgh.
“There’s a rush to proclaim the authority and the usefulness of these kinds of chatbot interfaces and the underlying language models that power them,” he said. “But the evidence that AI chatbots can deliver those effects does not yet exist.”
Another concern: The hype over unproven AI chatbot tutors could detract from more traditional, human-centered interventions — like universal access to preschool — that have proved to increase student graduation rates and college attendance.
There also are issues of privacy and intellectual property. Many large language models are trained on vast databases of texts that have been scraped from the internet, without compensating creators.
There also are concerns that some AI companies may use the materials that educators input, or the comments that students make, for their own business purposes, such as improving their chatbots.
Randi Weingarten, president of the American Federation of Teachers, which has more than 1.7 million members, said her union was working with Congress on regulation to help ensure that AI tools were fair and safe.
“Educators use education technology every day, and they want more say over how the tech is deployed in classrooms,” Weingarten said.
Automated teaching tools
This is hardly the first time that education reformers have championed automated teaching tools. In the 1960s, proponents predicted that mechanical and electronic devices called “teaching machines” — which were programmed to ask students questions on topics like spelling or math — would revolutionize education.
Popular Mechanics captured the zeitgeist in an article in October 1961 headlined: “Will Robots Teach Your Children?” It described “a rash of experimental machine teaching” sweeping schools across the United States in which students worked independently, inputting answers into the devices at their own pace.
The article also warned that the newfangled machines raised some “profound” questions for educators and children. Would the teacher, the article asked, become “simply a glorified babysitter”? And: “What does machine teaching do to critical thinking on the part of the students?”
Cumbersome and didactic, the teaching machines turned out to be a short-term classroom sensation, both overhyped and overfeared. The rollout of new AI teaching bots has followed a similar narrative of potential education transformation and harm.
Unlike the old 20th century teaching machines, though, AI chatbots seem improvisational. They generate instant responses to individual students in conversational language. That means they can be fun, compelling and engaging.
Chatbot tutor
Some enthusiasts envision AI tutoring bots becoming study buddies that students could quietly consult without embarrassment. If schools broadly adopted such tools, they could deeply alter how children learn.
That has inspired some former Big Tech executives to move into education. Jerome Pesenti, a former vice president of AI at Meta, recently founded a tutoring service called Sizzle AI. The app’s AI chatbot uses a multiple-choice format to help students solve math and science questions. Jared Grusd, a former chief strategy officer at social media company Snap, co-founded a writing startup called Ethiqly. The app’s AI chatbot can help students organize and structure essays as well as give them feedback on their writing.
Khan is one of the most visible proponents of tutoring bots. Khan Academy introduced an AI chatbot named Khanmigo last year specifically for school use. It is designed to help students think through problems in math and other subjects — not do their schoolwork for them.
The system also stores conversations that students have with Khanmigo so that teachers may review them. And the site clearly warns users: “Khanmigo makes mistakes sometimes.” Schools in Indiana, New Jersey and other states are now pilot-testing the chatbot tutor.
Khan’s vision for tutoring bots can be traced back in part to popular science fiction books such as “The Diamond Age,” a cyberpunk novel by Neal Stephenson. In that novel, an imaginary tablet-like device is able to teach a young orphan exactly what she needs to know at exactly the right moment — in part because it can instantly analyze her voice, facial expression and surroundings.
Khan predicted that within five years or so, tutoring bots like Khanmigo would be able to do something similar, with privacy and safety guardrails in place.