Times-Herald

AI learns to outsmart humans in video games


Speed around a French village in the video game Gran Turismo and you might spot a Corvette behind you trying to catch your slipstream.

The technique of using the draft of an opponent's racecar to speed up and overtake them is one favored by skilled players of PlayStation's realistic racing game.

But this Corvette driver is not being controlled by a human — it's GT Sophy, a powerful artificial intelligence agent built by PlayStation-maker Sony.

Gran Turismo players have been competing against computer-generated racecars since the franchise launched in the 1990s, but the new AI driver that was unleashed last week on Gran Turismo 7 is smarter and faster because it's been trained using the latest AI methods.

"Gran Turismo had a built-in AI existing from the beginning of the game, but it has a very narrow band of performanc­e and it isn't very good," said Michael Spranger, chief operating officer of Sony AI. "It's very predictabl­e. Once you get past a certain level, it doesn't really entice you anymore."

But now, he said, "this AI is going to put up a fight."

Visit an artificial intelligence laboratory at universities and companies like Sony, Google, Meta, Microsoft and ChatGPT-maker OpenAI and it's not unusual to find AI agents like Sophy racing cars, slinging angry birds at pigs, fighting epic interstellar battles or helping human gamers build new Minecraft worlds -- all part of the job description for computer systems trying to learn how to get smarter in games.

But in some instances, they are also trying to learn how to get smarter in the real world. In a January paper, a University of Cambridge researcher who built an AI agent to control Pokémon characters argued it could "inspire all sorts of applications that require team management under conditions of extreme uncertainty, including managing a team of doctors, robots or employees in an ever-changing environment, like a pandemic-stricken region or a war zone."

And while that might sound like a kid making a case for playing three more hours of Pokémon Violet, the study of games has been used to advance AI research — and train computers to solve complex problems — since the mid-20th century.
