EDGE

Analyse This

Inside the growing development network aimed at identifying and refining the hits of tomorrow

- BY SIMON PARKIN

Geoffrey Zatkin’s degree in psychology first proved useful when he joined the original EverQuest team. It’s a type of game that, perhaps more overtly than any that came before it, employs psychological hooks to inspire people to take up residence in its world. Since 2006, Zatkin has taken a less visible role in the industry, but one that’s no less influential. His company, EEDAR, is unknown to most players, yet it works with more than 90 per cent of videogame publishers, evaluating and guiding big-budget videogames.

The nature of its scrutiny can be hard to define. EEDAR owns numerous videogame-related patents bearing arcane descriptions such as Characteristics Of Players Systems & Methods For Analysing Electronically Embodied Games, and Systems And Methods For Evaluating, Classifying And Predicting Game Trends Using Clustered Pattern Recognition. Simply put, however, a publisher will often come to EEDAR with a dozen or so videogames it’s interested in making. EEDAR will then analyse the designs, scrutinise the market and advise which of the hypothetical games seem likely to secure the highest review scores and the greatest profits.
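
EEDAR’s own models are proprietary, but the patent language points at familiar techniques. As a loose, hypothetical illustration of ‘clustered pattern recognition’ (a sketch, not EEDAR’s method): group previously released games by their design features, then estimate a new pitch’s prospects from the historical scores of its nearest cluster. Every feature, weight and score below is invented.

```python
# Toy "clustered pattern recognition": cluster released games by design
# features, then predict a pitch's review score from the mean score of
# its nearest cluster. All data here is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Each row: [open-world scale, multiplayer focus, narrative focus], 0..1
released = rng.random((60, 3))
scores = 60 + 30 * released @ np.array([0.5, 0.2, 0.3]) + rng.normal(0, 3, 60)

def kmeans(X, k, iters=50):
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(released, k=4)

pitch = np.array([0.9, 0.1, 0.7])  # a hypothetical open-world story game
cluster = np.argmin(((pitch - centroids) ** 2).sum(-1))
print(f"nearest cluster average review score: {scores[labels == cluster].mean():.0f}")
```

A real model would fold in market data, genre timing and competitor schedules; the point is only that ‘seems likely to secure the highest review scores’ can be an empirical estimate rather than a hunch.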

“Game development is expensive, and publishers always have more games they’d like to make than they have the resources to,” says Zatkin. “A successful game can be financially rewarding, but release enough unsuccessful games, or even a single expensive [failure], and you could put your company in financial jeopardy. It is our job to supply a publisher with both the quantitative data and the qualitative analysis to make those difficult decisions intelligently.” EEDAR, in other words, is a modern-day seer, one that looks not to the stars but to cold, hard data, and then advises publishers where to place their bets.

This is just the first in a quiver of newly minted data-driven services designed to help developers and publishers make more informed choices and more successful games. At almost every stage of the modern videogame’s gestation, there is now a group offering to lend their refining expertise. Story, systems, user interface, even the colour of box art: all have been tested and re-tested, and iterated upon to offer the best possible chance of success. Today, when a videogame can represent a multimillion-dollar investment, it will often go through multiple iterations before it’s announced. One anonymous insider revealed the version of Far Cry 4 shown at this year’s E3 was, in fact, the fourth attempt at the game. Who knows how many versions of The Last Guardian exist in the development hell multiverse?

“Many factors influence the success of a game, and companies should be examining these factors before and during the game’s development lifecycle,” says Zatkin. “Is the game conceptually new? Does it innovate? Do consumers like the idea? How does the game measure up against competitor titles? Will consumers have fun playing it? Is there a core value proposition that the game offers to the customer? Can marketing communicate that value proposition? Will consumers buy the game? How is game quality tracking throughout development?”

For Zatkin, finding answers to these questions in data and analysis brings a form of scientific rigour and care to the creative process that, with so many hands involved, can quickly become something of a chaotic wager.

Once the viability of a game has been established, it’s the user research company’s turn to shape it. Player Research is an organisation that aims to help developers make better games by applying its staff’s knowledge of psychology, neuroscience and human-computer interaction. It boasts that all the iOS titles it has contributed to have earned a coveted position on the front page of the App Store, while its console and non-mobile games include number-one-selling and BAFTA-nominated titles.

“The field of game user research is relatively new,” says founder Graham McAllister. “But for a game to stand the best chance of becoming a success, these techniques should be applied at all stages of development. It should be an incorporated segment working across an entire project, from day one to release, or even beyond, if it’s a game as a service.”

This user testing work takes multiple forms, from expert reviews, in which staff assess a game using internal frameworks, through to iterative playtests, where real players (volunteers drawn from a database, who range in age from three to 70) are recruited to play the game and provide feedback. “Finding the right players is essential,” says McAllister. “If you get the wrong players, then you’ll get the wrong findings and take your game in potentially the wrong direction. We recommend developers never use friends or family for playtesting. It’s wasting everyone’s time.”

For McAllister, it’s essential that developers apply scientific rigour to their designs. “In terms of general mistakes that developers make, top of the list is allowing assumptions to remain untested until very late in development. Gathering objective evidence as early as possible is crucial. Devs become very close to a game, having worked on it for months or years. This means during playtests they may only see what they want to see, using their previous experience and knowledge to bias what they find.”

While the value of a fresh outsider’s perspective on a game may be crucial to correct wrong turns in the development process, many of the world’s larger game studios have recently formed their own internal teams designed to apply this scientific framework to development. Audrey Laurent-André and Sébastien Odasso run Ubisoft’s Parisian editorial user research group, AKA the lab team, a collection of almost 20 designers who organise and watch live playtests on any of the company’s various projects around its global network of studios. “For instance, someone on the Assassin’s Creed team might pose the question: ‘Will the new game be clearly understood by newcomers to the series?’” says Laurent-André. “It’s then a case of selecting a potential tester, finding out what kind of games they play, the platforms they play on most regularly and so on. We have a diverse pool for all types of players around the world. It takes us less than two days to organise a test with any group of specific types of player.”

While user testing was at one point a somewhat loose part of the Ubisoft process, Odasso says, the lab team was founded to ensure it forms part of “the very DNA of a project”. In fact, usability tests occur every two to three weeks. As well as finding out whether a player connects with a game’s story, setting or systems, the lab team also gathers more granular data. “Let’s say one of the development teams wants to check that a specific boss fight isn’t too hard,” says Laurent-André. “We record various statistics during a player’s session, such as the amount of time to completion, the number of deaths incurred and so on. These things help us quantify challenge. For example, the designer may have intended the player to defeat the boss in 20 minutes. If it takes an experienced player an hour, during which he dies 30 times and leaves reporting a feeling of frustration or misunderstanding, then we know we have a problem.”
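
To make that concrete, here is a minimal sketch of the kind of telemetry summary such a lab might produce. The field names, thresholds and session records are invented around Laurent-André’s example (a 20-minute target; an hour-long, 30-death outlier); this is not Ubisoft’s actual tooling.

```python
# Quantifying boss-fight difficulty from playtest telemetry and flagging
# sessions that blow past the designer's targets. Illustrative data only.
from dataclasses import dataclass
from statistics import mean, median

@dataclass
class Session:
    player: str
    minutes_to_win: float  # time taken to defeat the boss
    deaths: int
    frustrated: bool       # self-reported in the exit interview

TARGET_MINUTES = 20        # the designer's intended completion time
TARGET_DEATHS = 10

sessions = [
    Session("p01", 18, 4, False),
    Session("p02", 62, 31, True),  # the problem case from the article
    Session("p03", 25, 9, False),
    Session("p04", 48, 22, True),
]

print(f"median time: {median(s.minutes_to_win for s in sessions):.0f} min")
print(f"mean deaths: {mean(s.deaths for s in sessions):.1f}")

# Flag anyone taking more than twice the intended time, or dying too often
for s in sessions:
    if s.minutes_to_win > 2 * TARGET_MINUTES or s.deaths > TARGET_DEATHS:
        print(f"flag {s.player}: {s.minutes_to_win:.0f} min, "
              f"{s.deaths} deaths, frustrated={s.frustrated}")
```

A report built on numbers like these tells the designer not just that the fight is too hard, but by how much and for whom.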

To improve consistency across so many tests, Ubisoft now assigns a tester from the lab to each title, someone who follows the game across its gestation. “For instance, we have one coordinator who runs all of the testing for The Division,” Odasso explains. “He is able to follow the differences between versions of the game and, through the process, build up a good understanding of the kind of things that the team wants to know. The deeper the knowledge on the project, the more efficient the methodology.”

Odasso himself has a background in neuroscience and neurophysiology. He was trained in perception-based user tests – or, in other words, versions of the Pepsi challenge. “The food industry didn’t excite me,” he says. “I saw an opportunity to come to Ubisoft to do the kind of work that I was interested in.”

Odasso has been at Ubisoft for six years now and, during that time, has seen a huge amount of change in the manner and rigour with which games are tested. “Usability tests were first carried out at Ubisoft in 2001,” he explains. “Since then, we’ve been constantly improving our methodology. When I arrived, there were five of us in the lab. We’ve quadrupled in size since then. Our development process has become increasingly orientated around the player’s response to a game during its development.”

Laurent-André, by contrast, trained as a designer and programmer. But she saw a rare opportunity in analytics. “I don’t think I’m doing design per se,” she says. “I’m not designing things as such, but an understanding of game design is crucial to the job. You have to understand why players interact with gameplay loops, whether loops are behaving in the way they should, why players aren’t satisfied, why they don’t understand something, and why they don’t feel rewarded. It’s important to have people who are experienced in user research, but we also need people who have a deeper understanding of all the design stuff. It’s a component of game design, even if it’s not game design in the traditional sense.”

Some of the challenge for any team involved in user testing is separating the objective data from users’ evaluations. “If a player repeatedly fails one section of a game, then that is hard data,” says Odasso. “We usually see a great deal of correlation in players’ performance on this hard data. For example, on Assassin’s Creed IV: Black Flag, 80 per cent of usability issues were common among all players during tests. But when we ask players for subjective evaluation of what they like or dislike about a game, there can be a great deal of variation.”
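
That split between hard data and opinion is straightforward to operationalise: count how many logged usability issues recur across every tester, and treat the rest as candidates for subjective variation. A hypothetical sketch, with invented issue logs:

```python
# Separating signal from opinion: usability issues hit by every tester
# are treated as hard findings; the rest may be individual taste.
# Issue IDs and players are invented for illustration.
issues_by_player = {
    "p01": {"map-icon-unclear", "parry-window-missed", "menu-back-button"},
    "p02": {"map-icon-unclear", "parry-window-missed", "camera-in-combat"},
    "p03": {"map-icon-unclear", "parry-window-missed", "menu-back-button"},
}

universal = set.intersection(*issues_by_player.values())
observed = set.union(*issues_by_player.values())
print(f"hit by everyone: {sorted(universal)}")
print(f"{len(universal) / len(observed):.0%} of observed issues were universal")
```

On a real project the issues would be coded by researchers from session video rather than typed up as sets, but the arithmetic behind Odasso’s 80 per cent figure is of this shape.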

Such varying feedback can be confusing for a team. For this reason, another type of freelance service in the game-improving economy has sprung up during the past decade: the consultant critic. It’s a profession that attracts many ex-journalists. They may not bring pure objectivity, but they do offer expertise in the area of professional reviewing. Former Edge columnist N’Gai Croal left his journalism career to found Hit Detection in 2009, where he offers a critic’s eye on projects during development. Zatkin’s EEDAR also offers a ‘mock review’ service.

“Mock reviews, at a base level, give you a heads-up on how the product will be reviewed when it finally hits the press,” says Zatkin. “They independently point out specific strengths and weaknesses in the title, which might be different from what is internally perceived to be the game’s strengths and weaknesses.”

As well as giving the game’s creators a different perspective, larger publishers use this information to highlight the game’s strengths in marketing materials. Zatkin: “Mock reviews can also provide last-minute polish suggestions, give an extra heads-up on any technical issues going into launch, and point out any areas of the title that might be focused on for DLC or even a possible sequel. There’s a lot you can learn by letting an experienced third party take a look at the game before it launches.”

Some believe the ‘mock review’ comes much too late in the process to be of genuine use. “Usually, the reviewer is brought in when it’s too late to make meaningful changes, and often reviewers can be quite articulate about their likes and dislikes and how things feel, but don’t provide design criticism,” says Leigh Alexander, another erstwhile Edge columnist who, in 2014, founded Agency with one-time Edge staffer Ste Curran. “They can tell a developer something’s not working, but it’s unusual for a mock review to shed light on why.”

Agency was formed to close the gap between the game a team wants to make and the game that’s actually being made, and, to date, has primarily worked with smaller studios, such as Tale Of Tales.

“Traditional commercial development has a ton of systemic problems,” says Alexander. “Most commonly, a game’s disparate components don’t cohere in that they don’t serve one another as well as they could. This tends to be because everyone is working closely on their individual area, and the person in charge mainly wants to make sure everyone finishes their bit on time and within budget. There is no all-seeing eye to see this lack of cohesiveness from a good distance. Likewise, often people working on a game start to feel unsure about whether it will work, but they don’t have the ability to raise their concerns, either because they have milestones to meet or because development inherently involves compromises. Getting everybody on the same page, helping ensure there’s a vision in place that everyone can see clearly and feels passionately about, means that projects will be well-scoped and goals clearly identified before the investment in full development is underway.”

Like Player Research, Agency’s work is broadly systematic and logical. The company delivers diagnoses that can be kept alongside other design documents. But Agency tailors the nature of its consultancy to the needs of the developer. “With Tale Of Tales, we were hired to help them meet their goal of making a game that could reach a bigger audience,” says Alexander. Agency then worked with the team to create a relatively mainstream design vocabulary for the forthcoming Sunset, as well as to define the language used when discussing the game in public.

It can be difficult to test quantitatively if a game’s goal is simply to provide ‘fun’, though. “Games are unique as an industry in terms of software development in this respect,” says Ben Wibberley, a director at VMC, a company that offers quality assurance testing both before and after release. “It’s extremely difficult to quantify the fun factor in a game. You cannot automate experience. That’s where QA (quality assurance) can step in with calculable feedback on things such as balance, flow and progression.”

The business of QA, the final component in the modern testing machinery, is crucial to ensure that a game functions as it should, aside from its artistic merit or intent. During the past few years, the job has expanded. In many cases, particularly with games that exist as an ongoing proposition, there is work to be done post-release. Games as diverse as Brink, Titanfall, Grand Theft Auto V and Final Fantasy XIV: A Realm Reborn have all fallen at the final hurdle when it comes to online functionality, with issues such as disconnections and, in some cases, debilitating crashes.

“The industry has seen some huge titles fail on public launch,” Wibberley says. “Simply put, this is because there are issues that cannot be identified in a QA lab. The only way they can be seen is by testing in a live environment. Many game companies try to do this with public beta tests, but unfortunately that does not get to the root of what will cause games to fail at launch, since primarily these public betas are marketing exercises and are not geared to testing.”

To help companies stress-test their big-budget online titles in a way that provides useful foresight, VMC has built a private global beta test network, a community of thousands of beta testers that enables it to test in a live situation, including checks on stress, usability and matchmaking.

Today, the work of these videogame scientists is perhaps better described as midwifery, a series of roles and systems built to rein in the chaos of ballooning developmen­t teams and safely deliver a game in its healthiest form. Videogames are artistic projects, but they’re also functional products that need to work.

Still, there is a danger that endless market analysis and player testing can lead to homogeneity in the big-budget space. It’s a problem that’s echoed in Hollywood, where the financial risks to those funding the most expensive work also encourage creative conservatism. And, of course, there can be no pre-existing market data for games that pioneer brand-new spaces. When it comes to putting the next Minecraft or Dwarf Fortress under the microscope, the videogame scientists begin to look more like fortune tellers, and their guess might be as good as anyone’s.

1.1 MMOG design taps into human psychology to ensure player retention and thus subscriptions or item sales
1.2 Far Cry 4’s design, like many of its peers, has been iterative. Given the costs involved, it has to appeal
1.3 Development costs can spiral when a game stalls, a scenario that early advice is in part meant to prevent
2.1 Geoffrey Zatkin, CPO of EEDAR
2.2 Player Research founder Graham McAllister
2.3 Audrey Laurent-André, Ubisoft Editorial lab team
2.4 Sébastien Odasso, Ubisoft Editorial lab team
3.1 EEDAR has worked with the likes of Rockstar, Microsoft, SOE, 2K, Crystal Dynamics and Activision
3.2 Player Research claims that 100 per cent of the iOS games for which the company has provided user testing data have made it to the front page of the App Store
3.3 Agency bills itself as offering an external eye on projects, its insights born from experience
3.4 VMC offers quality assurance testing, ensuring that games and software perform as they should
4.1 Assassin’s Creed IV included feedback systems within the game to gather players’ subjective opinions
4.2 Agency is working with Tale Of Tales on Sunset, a first-person narrative about a maid and a revolution
4.3 Online features are tough to test in a way that reflects real-world conditions, and thus present unique issues
5.1 N’Gai Croal, founder and CEO of Hit Detection
5.2 Leigh Alexander, Agency co-founder
5.3 Ste Curran, Agency co-founder
5.4 Ben Wibberley, games director, VMC
Both DmC and Remember Me were tested by Player Research, proving that assessment is no guarantee of commercial success
