Tuesday, August 25, 2009

Game Informer works for Gamestop, not for you

I keep telling myself to quit criticizing Game Informer. There's not one person alive who's confused about this magazine's mission. For me to sit here beating the dead horse probably says more about me than it does about the publication.

Except, well, this is the most widely published video game magazine in the country, and the only one that still gets scoops with regularity. GI is going to be a part of the conversation as long as things stay the way they are. And as long as that's the case, it will continue to be necessary to point out the numerous ways in which this magazine fails to serve its readers.

Case in point, in the September 2009 issue: an unsigned "feature," which is really more of an op-ed, about the influence of Metacritic scores on game development. This is fertile ground for debate. I've said before that I think Metacritic is generally a useful tool, because it does a good job of providing a snapshot of the critical landscape. It's still incumbent upon gamers to dig deeper, of course, and in many cases I think a game that creates less of a critical consensus is likely to be more interesting than one that is universally beloved or condemned. But that's my opinion as a player.

GI's feature, titled "Critical Mass," is a look at it from the developers' perspective. It extensively quotes Glen Schofield, the executive producer of Dead Space, who is open about his company's relationship with Metacritic. As Schofield tells it, one outlying negative review was the difference between his game receiving an aggregate score of 90, and its eventual score of 89. (Oddly, the accompanying screenshot of Dead Space's Metacritic page shows it as an 88 -- turns out that the PS3 version got an 88, while the Xbox 360 version earned an 89.) We don't get any details about what the brass said, but Schofield says that the psychological difference between an 89 and a 90 makes getting the lower score "a big ass deal."
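Metacritic's actual formula is an undisclosed weighted average, so the numbers below are invented purely for illustration, but a simple mean is enough to show how one outlying review can tip a rounded score across the 89/90 threshold Schofield describes:

```python
def metascore(scores):
    """Plain arithmetic mean rounded to the nearest integer.
    (Metacritic really uses an undisclosed *weighted* average;
    this simplified version just illustrates the outlier effect.)"""
    return round(sum(scores) / len(scores))

# 30 hypothetical reviews sitting at a consensus of 90
consensus = [90] * 30
print(metascore(consensus))           # 90

# add one outlying 65 (a 6.5 mapped onto Metacritic's 100-point scale)
print(metascore(consensus + [65]))    # 89
```

With made-up numbers like these, a single 65 among thirty 90s drags the mean to about 89.2, which rounds down to 89. The smaller the pool of reviews, the more one outlier moves the needle.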

Schofield seems to have been referring to the 6.5 assigned to Dead Space by Official Xbox Magazine, under the byline of one Meghan Watt. According to the visceral comments on the review (pardon the pun), it seems Watt was not a freelancer but an intern. How this invalidates her review, nobody can quite say. Apparently, one intern can single-handedly ravage the fortunes of a well-funded software developer. That this is seen as an indictment of her work, and not Metacritic's, is beyond reason.

To be clear, I'm not condemning Schofield for being upset about the way the system works. He has every incentive to try to inflate his game's Metacritic score. But besides giving him space to dismiss Watt's work, this article is frustratingly light on how the scores impact business decisions. "Some believe there is a tight relationship between [Metacritic scores and sales]," says the copy, "but that isn't always the case."

Er, some data might have been nice there. What are some games that sold well despite poor scores? What are some games that scored highly and tanked at retail? They don't say. But that data is critically important in determining the real-world impact of the Metacritic score. Either there's a causative relationship or there isn't. If there isn't, as the article implies, then all Game Informer has done is smear a competitor by proxy, while providing no actual insight into the gears of the Metacritic machine.

For all his complaints, Schofield also acknowledges how satisfying it is to receive accolades. "You've been working two years or whatever on the game, and you want someone to tell you that you did a good job." I can understand this. It's why we're all in this business. We want good video games to be rewarded. Frankly, the fact that one aberrant review can sink a score from the 90s into the 80s ought to give that much more weight to the games that do score in the 90s. The point is to separate the wheat from the chaff. It's a good thing that not every game is scoring that highly.

That's not how Game Informer sees it.

Having conducted an interview with their buddy Schofield, and mindful of the need to ensure editorial access to future Visceral Games projects, they close with this tut-tutting:
With the importance of aggregate scoring a constant for the foreseeable future, perhaps all that can be done is for companies to get smarter about reading the Metacritic tea leaves, and media outlets to publish quality reviews so that the hard work of developers like Schofield is not in vain.

I had to read that twice to make sure I didn't hallucinate it. It threw everything that had come before it into new light. The thought of not feeding the beast, and discarding scores altogether, has apparently not crossed anybody's mind. A challenge to myopic executives is clearly out of the question, so the mild rebuke about "reading the Metacritic tea leaves" is immediately followed up with a condemnation of writers who don't see the world the way Game Informer does, and who obviously haven't spent enough time going out for drinks with developers.

"Quality reviews" is such a loaded term in this context, especially since it is so transparently directed at OXM's review. It was not, apparently, up to Game Informer's standards. But why? Because it was (mildly) negative? Nothing is factually incorrect, and all of Watt's points are fully supported by the gameplay. I happened to like the game more than she did. Still, she's absolutely right that the mission objectives are garden variety fetch quests, in which your character blindly obeys the orders given by characters over a radio. Dead Space can rightly be praised for its execution, and criticized for a lack of imagination. Balancing these two is what critics do, and they won't always agree on where the fulcrum is. That's what makes different critical voices valuable.

And so the question is: How are we defining a quality review? Is Game Informer advocating independent-minded criticism? Obviously not. This is a magazine whose ownership is in the retail business. They would prefer that critics march in lockstep, assigning top scores to the games most likely to draw customers into their stores. (Coincidentally, in this same issue, GI reviewers assign two separate 9.5 scores to Batman: Arkham Asylum, more than a week before most other places are allowed to post their reviews.* There's still time to pre-order your copy!)

Readers and gamers -- and, yes, developers and publishers -- are all going to be better off with honest and tough reviews. Nobody is well served when we elevate every decent game to instant-classic status. Putting too much stock in Metacritic scores is a surefire way to keep game development looking backward, and not forward. Games need room to experiment, and even to fail, if they are to progress. Gamers need to look harder for quirky, idiosyncratic games that may not please everybody. And reviewers need to be the ones who make all of this happen. If we decide that our job is to praise every game just because somebody worked hard on it, then we may as well give up now.

*Full disclosure here: I want this game to be a 9.5 so bad.


Tim Mackie said...

In the case of Arkham Asylum, there might be an alternative explanation, but I can't really independently verify this story from RAM Raider a month and a half ago. It's the only source I can find in my half-assed research on his claims; everything else leads back to it. And I can't find any Game Informer covers dedicated to Arkham Asylum from this year, so that would seem to be evidence against RAM Raider's story.

Of course, in the case of Game Informer, bribery is rather unnecessary, as Arkham Asylum is bound to bring people into GameStops everywhere and it's their job to make sure it does by giving it an excellent score.

Tim Mackie said...

Well, I just read that article more closely... the embargo apparently expired at the end of July. So I guess that really did have nothing to do with the Game Informer thing. More to the point, as always, is that it's Game Informer.

Mitch Krpata said...

That is hilarious, though. I've never heard of Games Master magazine before.

There's no reason to believe that the score GI gave to Batman is knowingly inflated, either. Reviews seem just as positive across the board, and GI does run some pretty negative reviews in their pages (although it must be said that they're usually for games that wouldn't be expected to sell much anyway).

The issue is that the whole point of running glowing reviews in advance of a game's release is aimed at getting people into Gamestop to reserve it. I'd be happy to be proved wrong, but I have a hard time envisioning that they would have run the Batman review in this issue if they were scoring it a 6.

Gary said...

I enjoyed this very much, Mitch. Great reading.

Gary A. Lucero said...

Magazines and reviews are entertainment.

While I think that gaming reviews back in the 1980s and 1990s (British magazines like Amiga Format and Zap, and American mags like Info and CGW are good examples) strove to be professional and well informed, most of what we find today is speculation and fanboyism.

I think that the move away from PC-centric games and toward consoles is partly responsible. Hell, maybe even wholly responsible.

How often do you read something completely unsubstantiated in a gaming magazine or web site, or hear it in a podcast? Often, I'd say.

Facts are fine for other media outlets, but rumor, speculation, and outright fanboy love and hate seem to rule our world.

And it's not just Game Informer or Official Xbox Magazine. It's everywhere.

Michael Miller said...

I find Metacritic is really useful for finding the most negative reviews of games I'm thinking of buying. Apart from a few bloggers/reviewers I trust (Insult Swordfighting - take a bow) I find negative reviews are more informative. Caveat: 7/10 being "negative" in the bizarro world of game reviews.

Anonymous said...

Batman most definitely had a GI cover this year.


Anonymous said...

Wow, I had a similar reaction when I read that article. I read the title, and was (naively) excited to read about an interesting topic. And then I got an "article" with virtually no content, and only one source (this Schofield dude). The article should have been called: "Schofield got an 89 instead of a 90 because of one review, and this troubles him." That's the entire content of the article.

@Michael Miller: I totally agree. I always do my research by reading one or two reviewers that I trust, and reading one or two of the lower-scoring Metacritic reviews. The negative reviews really are a great help.

Anonymous said...

So wait, you're just assuming that they were talking about the OXM review? This whole thing is based on an assumption? The issue is the weight that Metacritic gives one-off blog sites versus large, established editorial outlets. I think you should reread the article. You seem to just use this as an excuse to rip Game Informer as a magazine rather than analyze the article.

And why would any magazine seek out a cover for a game they think would suck? That seems like a bad business plan and a terrible way to sell magazines.

As Gary says earlier: "Magazines and reviews are entertainment."

What about Gamespot? They are owned by CBS, so by that math they must be trying to sell games like Star Trek and CSI. Or IGN must be pushing anything News Corp and Fox is involved with.

Mitch Krpata said...

It's not an assumption. They specifically mention a Metacritic score of 65 in regards to the Xbox 360 version. There is only one review that could be. Take a look.

Mitch Krpata said...

To address your larger point, the conflict between content that serves readers or viewers and content that makes a profit is a problem in all types of media. It's why I'd be suspicious of any reporting by NBC news about something that might impact General Electric's bottom line, for example.

Additionally, Gamespot does have its own history of dubious actions regarding its advertisers' games. Do they give suspiciously high scores to games based on CBS properties? I have no idea. It's outside the scope of what I wrote about here. Might be interesting to check, though.

Anonymous said...

Personally I found Dead Space to have received a greater benefit of the doubt because it was a new IP with a couple marginally fresh ideas. However, I've not seen so many standard video game cheap thrill "monster closet" moments since Doom 3.

In aggregate I would consider the 89 to be much more representative of the flawed gem than a 90, so metacritic did its limited job of informing. That one point is perceived as the difference between success and failure by the developer is likely only their own psychological constraint, not really one of the consumer.

The prejudged positive reviews are often balanced by the outlier negatives. The hyperbolic extent of praising big franchises in pre-release reviews can be rather ill-inducing. I'm at the point where I just bin the pre-release reviews along with previews as meaningless hype generation.

Anonymous said...

If some of these "mainstream" developers want to know how "reviews" can really hurt, try submitting your games to Kongregate, Newgrounds, or any other site with massive player reviews (and very little professional editorial) and see how long you can stomach it. You will find out fairly quickly that something as little as the length of a title sequence or the lack of an audio control can take your 6-12 months of work and render it a dud within hours.

Lyndon said...

One of the things I always find funny about metacritic is how quickly it stops being a reflection of critical opinion.

For example GTA IV is currently sitting on a 98% score but when most people talk about it now, there's this sense of it not living up to expectations. Surely some of that is a knee jerk reaction against what's popular but I still feel metacritic has a hard time staying "with it" so to speak.

Anonymous said...

This is as much about gamers and the media as it is about magazine reviews and publishers. Firstly, in modern games on Metacritic (those released in the last couple years), you see 10's and 1's. Gamers today think that if they liked the game it's a 10, and if they didn't, it's a 1. It's not literally like this, but I say it this way to get my point across!

Look at older games and you see a much wider range of scores. This confirms the dumbing down of gamers.

The kicker, though, is that until very recently, once Metacritic listed a title on its site you could vote on it! Many games had a score of 9.5 based on 80 votes when the game hadn't even been released yet! Whether it was publisher staff doing this or regular gamers, who knows, but the fact is many scores have been unduly affected (mostly in the positive direction) by scores given prior to the release of the title!

This leads me to discount the user review scores, when they should be the most important. How Metacritic could have allowed this for years is anyone's guess! I tend to see this as just part of the fact that we still have very juvenile and amateurish gaming media that have never given gamers the respect they deserve, but have instead mostly supported the industry.

Anonymous said...

Interesting article, though it's a few years old. It definitely raises the question: how much work qualifies as hard work? A lot of hard work goes into even mediocre or poor games, so perhaps Game Informer should cease negative reviews altogether? That would of course go against their editorial policy concerning basically all Wii titles (games like No More Heroes, Fire Emblem, Silent Hill: Shattered Memories, and numerous others receive negative notices despite myriad innovations and interesting mechanics), so in the end their statement in that article is so empty and senseless that they can't even live up to it themselves.

Gary A. Lucero said...

Mitch, you definitely spend a lot more time thinking about this stuff than I do, but then again, I just play the games but you play them AND write about them professionally.

Anyway, well said. There's definitely an incestuous relationship between the gaming press and the publishers. That doesn't mean the truth isn't told, but it does mean it's not told 100% of the time.

What I personally find interesting is that reviewers often give better scores to games from beloved publishers that the reviewers have played since they were children. These aren't bad games, but they are games that are not necessarily better than other games that always receive lesser scores.

This won't be popular, but I'm talking about Japanese publishers like Nintendo and Sony. Many reviewers grew up playing these companies' games and are loath to be too critical as long as the game is a quality production.

But as someone who grew up playing Western games on the Commodore 64 and Amiga, I swing the opposite way. I am much more likely to forgive the sins of Bethesda, for example, while having no tolerance for JRPGs or platformers.

Is there a place for this sort of bias? It's one thing to review a game well so a publisher gets a higher Metacritic score, but how about a score that is higher just because the reviewer prefers Mario Bros to Fallout?