
Except, well, this is the most widely circulated video game magazine in the country, and the only one that still gets scoops with regularity. GI is going to be a part of the conversation as long as things stay the way they are. And as long as that's the case, it will continue to be necessary to point out the numerous ways in which this magazine fails to serve its readers.
Case in point, in the September 2009 issue: an unsigned "feature," which is really more of an op-ed, about the influence of Metacritic scores on game development. This is fertile ground for debate. I've said before that I think Metacritic is generally a useful tool, because it does a good job of providing a snapshot of the critical landscape. It's still incumbent upon gamers to dig deeper, of course, and in many cases I think a game that creates less of a critical consensus is likely to be more interesting than one that is universally beloved or condemned. But that's my opinion as a player.
GI's feature, titled "Critical Mass," looks at the issue from the developers' perspective. It extensively quotes Glen Schofield, the executive producer of Dead Space, who is open about his company's relationship with Metacritic. As Schofield tells it, one outlying negative review was the difference between his game receiving an aggregate score of 90 and its eventual score of 89. (Oddly, the accompanying screenshot of Dead Space's Metacritic page shows it as an 88 -- turns out that the PS3 version got an 88, while the Xbox 360 version earned an 89.) We don't get any details about what the brass said, but Schofield says that the psychological difference between an 89 and a 90 makes getting the lower score "a big ass deal."
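To put the 89-versus-90 arithmetic in perspective, consider how much pull a single low outlier has on a rounded average. Here's a minimal sketch assuming a plain unweighted mean and a hypothetical field of 29 reviews -- Metacritic actually weights outlets by undisclosed factors, so this is an illustration of the effect, not their formula:

```python
# Sketch only: assumes a simple unweighted mean over a hypothetical
# field of 29 reviews. Metacritic's real formula weights outlets by
# undisclosed factors, so treat this as an illustration.
scores = [90] * 29                       # 29 hypothetical reviews, all 90
print(round(sum(scores) / len(scores)))  # -> 90

scores.append(65)                        # one outlier: a 6.5/10 on a 100 scale
print(round(sum(scores) / len(scores)))  # -> 89 (89.2 before rounding)
```

One dissenting voice out of thirty is enough to tip a rounded average across that psychologically loaded threshold.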
Schofield seems to have been referring to the 6.5 assigned to Dead Space by Official Xbox Magazine, under the byline of one Meghan Watt. According to the visceral comments on the review (pardon the pun), it seems Watt was not a freelancer but an intern. How this invalidates her review, nobody can quite say. Apparently, one intern can single-handedly ravage the fortunes of a well-funded software developer. That this is seen as an indictment of her work, and not Metacritic's, is beyond reason.
To be clear, I'm not condemning Schofield for being upset about the way the system works. He has every incentive to try to inflate his game's Metacritic score. But besides giving him space to dismiss Watt's work, this article is frustratingly light on detail about how the scores actually shape business decisions. "Some believe there is a tight relationship between [Metacritic scores and sales]," says the copy, "but that isn't always the case."
Er, some data might have been nice there. What are some games that sold well despite poor scores? What are some games that scored highly and tanked at retail? They don't say. But that data is critical to determining the real-world impact of a Metacritic score. Either there's a causative relationship or there isn't. If there isn't, as the article implies, then all Game Informer has done is smear a competitor by proxy, while providing no actual insight into the gears of the Metacritic machine.
For all his complaints, Schofield also acknowledges how satisfying it is to receive accolades. "You've been working two years or whatever on the game, and you want someone to tell you that you did a good job." I can understand this. It's why we're all in this business. We want good video games to be rewarded. Frankly, the fact that one aberrant review can sink a score from the 90s into the 80s ought to give that much more weight to the games that do score in the 90s. The point is to separate the wheat from the chaff. It's a good thing that not every game is scoring that highly.
That's not how Game Informer sees it.
Having conducted an interview with their buddy Schofield, and mindful of the need to ensure editorial access to future Visceral Games projects, they close with this tut-tutting:
With the importance of aggregate scoring a constant for the foreseeable future, perhaps all that can be done is for companies to get smarter about reading the Metacritic tea leaves, and media outlets to publish quality reviews so that the hard work of developers like Schofield is not in vain.
I had to read that twice to make sure I didn't hallucinate it. It cast everything that had come before it in a new light. The thought of not feeding the beast, and discarding scores altogether, has apparently not crossed anybody's mind. A challenge to myopic executives is clearly out of the question, so the mild rebuke about "reading the Metacritic tea leaves" is immediately followed by a condemnation of writers who don't see the world the way Game Informer does, and who obviously haven't spent enough time going out for drinks with developers.
"Quality reviews" is such a loaded term in this context, especially since it is so transparently directed at OXM's review. It was not, apparently, up to Game Informer's standards. But why? Because it was (mildly) negative? Nothing is factually incorrect, and all of Watt's points are fully supported by the gameplay. I happened to like the game more than she did. Still, she's absolutely right that the mission objectives are garden variety fetch quests, in which your character blindly obeys the orders given by characters over a radio. Dead Space can rightly be praised for its execution, and criticized for a lack of imagination. Balancing these two is what critics do, and they won't always agree on where the fulcrum is. That's what makes different critical voices valuable.
And so the question is: How are we defining a quality review? Is Game Informer advocating independent-minded criticism? Obviously not. This is a magazine whose ownership is in the retail business. They would prefer that critics march in lockstep, assigning top scores to the games most likely to draw customers into their stores. (Coincidentally, in this same issue, GI reviewers assign two separate 9.5 scores to Batman: Arkham Asylum, more than a week before most other outlets are allowed to post theirs.* There's still time to pre-order your copy!)
Readers and gamers -- and, yes, developers and publishers -- are all going to be better off with honest and tough reviews. Nobody is well served when we elevate every decent game to instant-classic status. Putting too much stock in Metacritic scores is a surefire way to keep game development looking backward, and not forward. Games need room to experiment, and even to fail, if they are to progress. Gamers need to look harder for quirky, idiosyncratic games that may not please everybody. And reviewers need to be the ones who make all of this happen. If we decide that our job is to praise every game just because somebody worked hard on it, then we may as well give up now.
*Full disclosure here: I want this game to be a 9.5 so bad.