National Post

Parsing the numbers

What’s the difference between 3 stars and 3.5? A critic explains the science behind review scores

Calum Marsh


On Letterboxd, a social media network for cinephiles, users can maintain a record of what they watch and log reviews and star ratings for every film they see. My profile page, which I maintain diligently, boasts a perfect little emblem of obsessive-compulsive rigour in the form of an isosceles triangle: it’s a chart that shows every one of the 549 ratings I’ve awarded in the roughly two years I’ve been using the platform, between half a star and five, distributed evenly on a meticulous curve.

A professional critic, of course, is obliged to take ratings seriously: they function as a digestible, short-form snapshot of an official verdict on whatever happens to be under review. Insofar as it is a critic’s job to advise readers on the merit of a given entertainment — insofar as a review is really a recommendation — star ratings are as much the critic’s responsibility as analysis or wit. Now, many critics have lamented the need for ratings on top of their reviews, complaining that they distract from the writing and reduce the complex act of criticism to an arbitrary number that can be aggregated by Rotten Tomatoes and appended to a pithy one-sentence blurb. That may be a risk. But ratings are not the enemy of serious criticism. One simply needs, as a critic, to appreciate their power and apprehend what they mean.

Alfred Hitchcock once described the difference between surprise and suspense by asking us to picture a conversation between two strangers with a bomb underneath the table between them: suspense is when the audience knows about the bomb and watches the men benignly chatting, fascinated as they wait for the explosion. A star rating can work a bit like that bomb. Picture a thousand-word review of a hotly anticipated movie on the front page of the arts section of this newspaper. The review begins with a rather abstract lede involving some obscure aspect of cinematic history. If you start reading without context, you might be confused or bored, but if you see straight away that the critic has bestowed this particular film five exceptional stars out of five — or better still, zero stars out of five — you are much more likely to pay rapt attention. The stars have tipped you off that something special is afoot, and you will read eagerly to see the prose explode.

A less distinguished rating is admittedly less striking as an introduction to a review. Still, the great big numerical middle is important, and a keen critic will navigate this murky expanse with care. Pitchfork, an online music magazine based in New York and run by Condé Nast, has prevailed over the last two decades as the definitive authority on music criticism in large part because of the fastidiousness with which it assigns scores. It rates albums out of ten. Anything higher than a mid-6 is considered enviably high, and anything higher than a mid-7 is incredibly rare — so rare, in fact, that an 8.0 is often enough to launch an unknown band to indie stardom single-handedly. Only a handful of 8s are bestowed every year; 9s are reserved for best-of-the-decade or even best-of-the-genre material; one can count the number of perfect Pitchfork 10s ever accorded on two hands.

Pitchfork understands the power of a number if that number is properly withheld. More specifically, it understands that preserving a high number’s value means more than merely rationing the 9s. As a critic, it can be difficult to suppress enthusiasm in that initial period when good music really thrills with the sparkle of the new; but you have to be steadfast, and indeed accept that a 6.8 is a perfectly respectable endorsement. The big numbers only matter, after all, if all the numbers do. We talk about “average” a lot but forget that most things are average by definition.

Negative review scores, too, are an art — in some ways more art than science. We tend to think of anything below fifty on a hundred-point scale as a failure, which has the inadvertent effect of making the wide range of sub-fifty numbers seem like one indistinct morass. We can maybe appreciate the difference between a 6.2 album and a 7.4 album. But a 3.9 and a 4.5? Degrees of awful seem somehow harder to quantify. IGN is an online magazine that, like Pitchfork, is respected for the rigour of its ratings. It sometimes reviews video games that are terrible — but how terrible precisely might be difficult to say. The magazine, though, has an exhaustive rubric that defines exactly how to tell two bad games apart. A score of 4.0-4.9 is “bad”; 3.0-3.9 is “awful”. The difference? “While even a Bad game generally has some bright spots, an Awful one is consistently unenjoyable.”

These distinctions may seem pedantic. And perhaps, confronted with an album deemed 7.1 or a movie deemed 3.5 stars out of five, a reader will often assume the number represents an impression either favourable or unfavourable and neglect to think about the specifics. Nevertheless, it behooves the professional critic to respect the authority and potency of numbers. The indiscriminate critic, lavishing five perfect stars on every entertaining blockbuster that passes through the multiplex, will diminish the capacity of an attentive audience to trust their praise — much as a friend who heaps affection on every acquaintance will seem a less intimate confidant than the friend who bestows their love only sparingly.

Think hard, and stay thrifty. Make it matter when you give a five or a ten.
