I'll be the first one to say that computers are only as smart as the people who design and write the software. That said, the New York Times recently posted an article about the ESRB's ratings system and how it will move from human-based grading to computer-based grading. It isn't that the computers have some sort of A.I. that plays the entire game through and assigns a rating (wouldn't that be grand?) but rather that the games will move toward a questionnaire-based rating system.
After a developer fills out a questionnaire, the answers are plugged into the computer. The computer evaluates the answers and produces a rating.
It doesn't seem like it was that long ago that I did a podcast segment on violence in videogames, ratings, Supreme Court hearings, etc. I think that the ratings system needs to evolve. Is this a step in the right direction? http://www.nytimes.com/2011/04/18/arts/video-games/video-games-rating-bo...
I think that we oftentimes focus on the simple answer rather than the complex one. The simple answer is to draw a hard line that divides what belongs in a game from what doesn't, decide who controls it, and, regarding content, follow the rule "when in doubt...leave it out."
Parents can ultimately choose whether they want their kid to play a game - and many of these parents won't take a second to even consider a rating. That's an entirely different issue, though.
So let's keep it simple -
1: Does the ESRB provide adequate ratings for the games available on the market?
2: Does this questionnaire-based system sound like something that will help the current ratings?
I can't really comment much on the ESRB ratings themselves because I honestly do not pay much attention to them as they aren't a part of my life at the moment.
As for the questionnaire-based system - is it good enough? Can a rating really be determined from a developer's own evaluation? They said there would be testing to verify that the assigned ratings were accurate, but I can't help but wonder if this just makes the ESRB's job easier by shifting the work onto the developer. It seems like some of this might make things worse. Care to comment?
This sounds like a boneheaded system to me. I guess the idea is to catch dishonest developers who hide or lie about obscene content in their games. I think at the heart of it, though, is the lingering idiocy that all games are intended for small children and anything adult-oriented should be left out. Then there are the problems with online content; obviously developers can't be held responsible for everybody who chooses to cuss or do whatever else they want in multiplayer.
That said, it is a lot easier to watch a 2-hour movie and assign it a fair rating than it is to play through a 40+ hour videogame. It'd be so easy to slip something past a censor, such as a blowjob in a small town on some obscure corner of a map with 300 locations. I mean, who is really going to play through every possibility just to see if something like that slipped in? Nobody.
So, I guess maybe there is a need for something like this, since arguably only a computer has the patience to really comb through a game like that. I'd just hope that a human being would be on hand to evaluate whatever the computer flags as questionable rather than having it automatically assign a higher rating.