EDGE

Intelligent design

How an R&D team at Google is using world-leading AI research to create and support the game devs of the future


The launch of Stadia at the tail end of last year may have been – and let’s not mince words here – an unmitigated disaster. With only one exclusive title to tempt players who hadn’t already been turned off by reports of the technology’s myriad performance issues, there was scepticism about whether Google’s game-streaming service really did represent the future of play. But perhaps we haven’t been thinking broadly enough about the possibilities. With the muscle of Google’s world-leading infrastructure behind it, Stadia is capable of much more: not just making games more accessible to play, but easier to develop.

Google has invested resources in setting up a specialised research and development team within its Stadia arm, composed of ex-game-industry employees interested in figuring out how some of Google’s most powerful technology – most notably, machine learning – can be applied to games. “The mission of our team is discovering what the data centre as your platform means for games, because it means so many different things,” lead prototype and game designer Erin Hoffman-John tells us. “And we have to start carving away and taking the risks for developers, and then giving them the best of what our risk-taking results in.”

Hoffman-John, a game developer with 17 years’ experience, started Star Lab at the end of 2017: at that time, Google had been working on the first prototype for Stadia’s technology for about two and a half years. “It was sort of like, ‘We know there’s all this potential, and we need to have actual game developers experimenting with that potential,’” she explains. “There was a sense that the platform needed to have some game developers inside of it, very authentically trying to solve game-development problems.” With industry vets on side, the goal for Stadia was always, in essence, about accessibility – converting Google’s idea of “the next billion users” to “the next billion gamers” via technology that could beam games not just to those with PCs or consoles, but to any screen in the world. “It seems like that’s the kind of thing you should do if you’ve got the resources of Google behind you,” Hoffman-John says. “And if you work backwards from the next billion gamers, you’re going to need a lot more game developers. It’s got to be easier to develop games, and more people have to be able to develop games. So that’s what our goal with machine learning is: how do we get very small teams who aren’t as expert in games to be able to do really cool things with them?”

Star Lab functions as an experimentation space in which multiple game-tech prototypes, built from magpied pieces of some of Google’s most advanced tech, are made to answer such questions. Once the R&D team has a demo that it feels is indicative of how Google’s technology could help a developer make games, it’ll present that demo – even a tech sample to work with – and discuss how they might collaborate. Each prototype is almost a little laboratory for a concept: Hoffman-John shows us stills from a collectible-card-game demo. “These games [involve] a high volume of repetitive content work with very little mechanical work underneath them. So we wanted a game that was very strategic, but also a game that allowed you a lot of different possibilities, and also where the content was very expressive. In collectible card games, you have a very high expectation from the fantasy of the art. So we thought, ‘What could a small team making a card game do that would make use of that content amplification in an interesting way?’”

The answer was Chimera, a demo for a game that allows players not only to battle creatures against an opponent, but to merge them together to form powerful new hybrids. The millions of possibilities produced by so many different datasets crossing over – the visuals of the creatures, their resultant abilities – quickly set the problem at machine-learning scale. One of the Star Lab engineers had been playing around with generative adversarial networks (if you’ve ever come across the website This Person Does Not Exist, which generates hyper-realistic human faces that don’t belong to anybody, you’ll have a head start here). These machine-learning systems are trained on a huge number of data samples, the result being that they can generate endless believable-looking alternatives based on the patterns they’ve learned.
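For the technically curious, the adversarial loop itself is easy to sketch – though nothing here resembles Chimera’s production pipeline. A minimal, hypothetical PyTorch toy, with a one-dimensional Gaussian standing in for wildlife photos, shows the two-network tug-of-war:

```python
# A minimal sketch of the generative-adversarial idea, not Star Lab's model:
# a generator learns to mimic a data distribution (a 1D Gaussian standing in
# for "photos of wildlife") by trying to fool a discriminator.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples, mean 3.0
    fake = generator(torch.randn(64, 8))     # generated samples

    # Discriminator: learn to label real samples 1 and fakes 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to make the discriminator call fakes real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach())  # samples drift towards 3.0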

Chimera’s system, trained on photos of wildlife, is able to produce animalistic creatures. “But the average of all wildlife photos kind of devolves into a cow in a field,” Hoffman-John laughs. “And so we thought, ‘Okay, how do we create the dataset that we could train our model on that would make the kind of animal we want?’ So we had to actually recognise for ourselves the patterns in collectible-card-game representations.” A low camera angle that makes the model look imposing; top-down lighting for drama; particular poses where the creature is prancing or hulking: Star Lab’s artists created 3D models according to these criteria, then used them to generate thousands of data possibilities. Stitch them together, train the machine-learning system to generate from them, and you’ve got something that can create believable art for a CCG.
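Those “thousands of data possibilities” fall out of simple combinatorics. A hedged sketch – the criteria names and render_model are our inventions, not Star Lab’s pipeline – of how a few axes of variation multiply up:

```python
# Hypothetical sketch: enumerating render settings in the spirit of
# Star Lab's approach. A handful of criteria per axis quickly yields
# thousands of training images. render_model() is a stand-in, not a real API.
from itertools import product

camera_angles = ["low_imposing", "low_three_quarter", "eye_level"]
lighting = ["top_down_dramatic", "rim_lit", "backlit"]
poses = ["prancing", "hulking", "rearing", "crouched"]
creatures = [f"model_{i:03d}" for i in range(50)]  # 50 base 3D models

configs = list(product(creatures, camera_angles, lighting, poses))
print(len(configs))  # 50 * 3 * 3 * 4 = 1,800 renders before any variation

def render_model(creature, angle, light, pose):
    """Stand-in for a render call that would write one training image."""
    return f"{creature}_{angle}_{light}_{pose}.png"

dataset = [render_model(*cfg) for cfg in configs]
```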

As we study a fantastical bat-like animal produced by the system, Hoffman-John tells us how her artists were able to produce the ‘style transfer’ layer (how the model would artistically compose the overall images it generated) by simply feeding reference material into Google’s DeepDream computer-vision program and seeing what approaches it spat out. We’ve written in Edge before about professional StarCraft II players going up against Google’s DeepMind AI – how both the human and AI participants learn new optimal techniques from each other and continually evolve the meta in this way. Here, Star Lab is seeing the same thing happening with game development: when the AI started offering up “nightmare fuel” animal fusions, Star Lab created a tool that would allow the artists to paint a colour-coded outline for the computer to follow, ensuring that certain parts of birds or fish would at least end up in semi-realistic spots. “The ability to collaborate with the machine, taking advantage of what it’s good at, creates a result that’s better than either of the two by themselves,” Hoffman-John says. To say nothing of using reinforcement-learning agents to balance the game: having bots play endless matches against themselves to sniff out bugs is something we’ve seen in Ubisoft’s in-house experiments, but Star Lab is using Google tech to show other developers how to save themselves a post-release headache.
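The measurement side of automated balancing is simple to illustrate. A toy sketch – using random bots rather than trained reinforcement-learning agents, with game rules invented purely for the example – shows how thousands of simulated matches surface an overpowered card:

```python
# Hedged illustration of automated balance testing: two random bots play a
# toy card game many times, and cards with outlier win rates are flagged.
# The game logic here is invented for the sketch.
import random
from collections import defaultdict

CARDS = {f"card_{i}": random.randint(1, 10) for i in range(20)}  # name -> power
CARDS["card_7"] = 30  # deliberately overpowered, to be "sniffed out"

wins = defaultdict(int)
plays = defaultdict(int)

for _ in range(100_000):
    hand_a, hand_b = (random.sample(list(CARDS), 3) for _ in range(2))
    score_a = sum(CARDS[c] for c in hand_a)
    score_b = sum(CARDS[c] for c in hand_b)
    winner = hand_a if score_a >= score_b else hand_b
    for c in hand_a + hand_b:
        plays[c] += 1
    for c in winner:
        wins[c] += 1

# The three cards with the highest win rates; card_7 floats to the top
for card in sorted(CARDS, key=lambda c: wins[c] / plays[c], reverse=True)[:3]:
    print(card, round(wins[card] / plays[card], 2))
```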

Something we haven’t yet seen at any other game-development company, however, is what senior interaction designer Anna Kipnis (previously of Double Fine) is working on. “My job was primarily to bring characters to life,” she says, as we watch a cartoon fox on screen sitting in a living room. “Over the years, we’ve seen games go through these incredible digital revolutions: few colours to many colours, 2D to 3D, to extremely high-fidelity 3D and so on. But I think the interactivity with characters has not really seen the same kind of exponential improvement.” At Star Lab, Kipnis is using semantic machine learning to create more believable AI – an advanced field in which systems come to understand many of the nuances of language via word association. Think of a diagram with the word ‘flower’ at the centre, and all of the other words that might spring to mind. Some words are more closely associated than others: when hearing ‘flower’, you’re probably more likely to think of ‘tulip’ before ‘funeral’, for instance. “What semantic ML can do,” Kipnis explains, “is give us these word distances – or word vectors. And if you look closely, you’ll see that these word vectors, they’re signals of context.”
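Those distances are straightforward to compute once words become vectors. A toy illustration – the three-dimensional vectors below are invented for the example; real embeddings are learned and run to hundreds of dimensions:

```python
# Toy illustration of 'word distance': cosine similarity over hand-made
# vectors. The numbers are invented; real systems use learned embeddings.
import math

vectors = {
    "flower":  [0.9, 0.8, 0.1],
    "tulip":   [0.85, 0.75, 0.15],
    "funeral": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(vectors["flower"], vectors["tulip"]))    # ~0.99: close
print(cosine(vectors["flower"], vectors["funeral"]))  # ~0.31: distant
```

As expected, ‘tulip’ sits far closer to ‘flower’ than ‘funeral’ does – and that gap is the “signal of context” Kipnis describes.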

When Kipnis types out “Hi!” to the fox, it cheerfully raises a paw and waves to her: the AI has detected that Kipnis has greeted it, and has responded in one of several ways it deems contextually appropriate. When she asks it “Can we have some coffee?”, it trots over to a nearby table and picks up a mug in its mouth, bringing it over. Kipnis has programmed what she calls a “complete expression space” using a simple grammar of “I [verb] [noun]”, meaning that the fox can readily interact with all the “nouns” she’s labelled in the room via modular actions. “So the main thing here is that I have not actually programmed the fox how to answer questions – and even more importantly, I haven’t told it what coffee is,” Kipnis says. “What I have is this cup in the scene, and I put a label on it that just says ‘small mug’, and the rest the semantic ML did for me.”
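One plausible way to wire up such an expression space – our speculation, not Kipnis’ actual implementation – is to embed the player’s utterance and every “I [verb] [noun]” sentence, then pick the closest match. Here embed() is a crude stand-in for a real sentence encoder, and the vocabulary vectors are invented:

```python
# Hypothetical sketch of ranking an "I [verb] [noun]" expression space
# against a player utterance. The vectors are toys; in practice a learned
# semantic-ML model would supply them.
import math

VOCAB = {
    "bring": [1, 0, 0, 0], "coffee": [0.9, 0.1, 0, 0], "mug": [0.8, 0.2, 0, 0],
    "small": [0.3, 0.1, 0, 0.1], "wave": [0, 1, 0, 0], "hi": [0, 0.9, 0.1, 0],
    "look": [0, 0, 1, 0], "window": [0, 0, 0.9, 0.1], "weather": [0, 0, 0.8, 0.2],
}

def embed(sentence):
    """Average the toy vectors of known words - a crude sentence embedding.
    Assumes at least one word in the sentence is in the vocabulary."""
    words = [w.strip("?!.,") for w in sentence.lower().split()]
    vecs = [VOCAB[w] for w in words if w in VOCAB]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Every verb the fox knows, crossed with every labelled noun in the scene
actions = ["I bring small mug", "I wave", "I look window"]

def choose(utterance):
    return max(actions, key=lambda a: cosine(embed(utterance), embed(a)))

print(choose("Hi!"))                       # -> "I wave"
print(choose("Can we have some coffee?"))  # -> "I bring small mug"
print(choose("What is the weather?"))      # -> "I look window"
```

Note that, as in Kipnis’ demo, nothing here defines what coffee is: the ‘small mug’ label lands nearest the request purely through word distance.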

There’s even room for character personalities: for the second, blue-coloured fox, Kipnis has boosted the ranking scores of certain actions. Unlike its happier sibling, when we throw an object for this fox, it isn’t in the mood to fetch it for us, instead dumping it somewhere else. And it can handle very imprecise, even strange requests, too. Kipnis tells the fox to check the weather, and it wanders over to look out of the window; then, she mistypes “make some money” as ”make some monet”, and it summons a painting from thin air, because the semantic ML can infer what is meant by a Monet. There’s no game-specific training involved: the foxes simply use Google’s general AI model. “It’s trained on billions of lines of human conversation that are publicly available all over the internet. So this is kind of bringing the best of Google to games.”
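Building on the sketch above (and reusing its embed, cosine and actions), a ‘personality’ might be nothing more than a bias added to each action’s ranking score – the article confirms only that scores were boosted; the numbers here are invented:

```python
# Continues the previous sketch: per-character biases nudge the ranking,
# so the grumpy blue fox is less inclined to fetch. Values are invented.
def choose_with_personality(utterance, bias):
    return max(actions,
               key=lambda a: cosine(embed(utterance), embed(a)) + bias.get(a, 0.0))

grumpy_fox = {"I bring small mug": -1.0}  # strongly disinclined to fetch
print(choose_with_personality("Can we have some coffee?", grumpy_fox))
# -> "I wave": the grumpy fox acknowledges you but won't bring the mug
```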

With this kind of technology, it becomes far simpler to give characters an “inner life”, Kipnis says. “That was impossible before without a tonne of work from game developers, where they would have to anticipate every player idea.” Semantic ML has the potential to free up hours of developer time, letting teams spend less of it on the tedious parts of AI work and more on exploring new creative ideas that make AI characters feel even more alive. “I want to say, ‘Yes, we have the magical technological solution [to crunch],’” Hoffman-John says, “but as you give developers more power, they want to do more. I do think that machine learning, in particular, does allow you to experiment with ideas. And often, a lot of crunch comes from the friction between the intention of the design and the reality of the implementation. So if you can experiment against that much more quickly and cheaply, it does allow you to make that throwaway work cheaper, so that we can prototype better.”

More than that, the ability for would-be game-makers to program complex AI behaviours into characters without any knowledge of scripting languages could be revolutionary. Indeed, the work Star Lab is doing now suggests that Google sees Stadia developing beyond a game service and into a development platform in the future. “Eventually, I think it inevitably goes there, in the same way that every console eventually becomes specialised enough that it has its own development platform,” Hoffman-John says. “I think for us, we want to solve one problem at a time. So it may be a ways before we get to that, but I do think that it builds in that direction – especially because my team in particular focuses on stuff that’s only possible on Stadia. And so I think that you’ll see these periods for Stadia where the service itself is so large, and potentially touches so many people, that just getting the games that people are already familiar with to work on the streaming platform is the first phase. And then the next phase is games with special features that are still cross-platform – but the feature only works on Stadia.

“And then the third phase is, ‘This game is only possible on Stadia.’ That’s probably quite a way out, just because we’re in a really interesting time and place in game development, where the developers themselves have a lot of power, which is great. From a business standpoint, it doesn’t make a heck of a lot of sense for them to not be cross-platform if they can be. But if we can discover the value proposition of, like, ‘You really want to go all in on Stadia because of this thing’ – that’s the kind of stuff that we’re excited about.”

Erin Hoffman-John, lead prototype and game designer at Star Lab (top); Anna Kipnis, senior interaction designer

Semantic AI can specify idle animations for our foxes; write an expression list in plain English with terms such as ‘look at sofa’, and the fox will use it

Kipnis’ fox demo offers a text-entry method to the player, but semantic AI also works without freeform inputs. Devs can use it behind the scenes so that when you press a button to perform an action, the game acknowledges the input – then turns it into an English-language sentence from which to process and present the most appropriate response to the player
