AI learns to outsmart humans in video games
Speed around a French village in the video game Gran Turismo and you might spot a Corvette behind you trying to catch your slipstream.
The technique of using the draft of an opponent's racecar to speed up and overtake them is one favored by skilled players of PlayStation's realistic racing game.
But this Corvette driver is not being controlled by a human — it's GT Sophy, a powerful artificial intelligence agent built by PlayStation-maker Sony.
Gran Turismo players have been competing against computer-generated racecars since the franchise launched in the 1990s, but the new AI driver that was unleashed last week on Gran Turismo 7 is smarter and faster because it's been trained using the latest AI methods.
"Gran Turismo had a built-in AI existing from the beginning of the game, but it has a very narrow band of performance and it isn't very good," said Michael Spranger, chief operating officer of Sony AI. "It's very predictable. Once you get past a certain level, it doesn't really entice you anymore."
But now, he said, "this AI is going to put up a fight."
Visit an artificial intelligence laboratory at universities and companies like Sony, Google, Meta, Microsoft and ChatGPT-maker OpenAI and it's not unusual to find AI agents like Sophy racing cars, slinging angry birds at pigs, fighting epic interstellar battles or helping human gamers build new Minecraft worlds, all part of the job description for computer systems trying to learn how to get smarter in games.
But in some instances, they are also trying to learn how to get smarter in the real world. In a January paper, a University of Cambridge researcher who built an AI agent to control Pokémon characters argued it could "inspire all sorts of applications that require team management under conditions of extreme uncertainty, including managing a team of doctors, robots or employees in an ever-changing environment, like a pandemic-stricken region or a war zone."
And while that might sound like a kid making a case for playing three more hours of Pokémon Violet, the study of games has been used to advance AI research — and train computers to solve complex problems — since the mid-20th century.