So far in 2015 Resident Advisor has reviewed 570 releases they categorize as “Singles.” As part of their review system, RA employs a rating scale of 0.0 to 5.0 in increments of 0.1. This increment is much finer than in their previous rating system, a change they announced with a bit of fanfare and which was meant to give readers a more detailed understanding (numerically, at least) of where a given EP sits in relation to other releases. However, that doesn’t seem to be the case when one examines the numbers thus far in 2015.
Out of those 570 reviews written in 2015, Resident Advisor has rated 21 of them under 3.0. That works out to roughly 3.7% of all reviews. The lowest rating, given only twice so far this year, was a 2.0. The ratings under 3.0 average out to 2.88, and only 3 releases managed to net a score under 2.5. Which brings us to the question that came into my mind as I initially scanned a month of reviews, then two months, then the whole damn year: why is anyone still using rating systems in 2015, especially if your rating system appears to be meaningless?
Many websites reviewing music have, over the years, employed rating systems. More often than not these rating systems didn’t work. What are the metrics taken into consideration when rating music? Are ratings shorthand in case someone can’t be bothered to read a paragraph? And are they actually helpful? I’d argue they’re not. As it stands now, any casual or occasional reader of RA would perceive the 0–5 rating scale to mean that a 2.5 is average. That’s a pretty standard assumption that I think we can all agree on. In the case of RA, this means they’ve reviewed only above-average content, barring a couple of releases, this entire year. But that’s just if you were to examine the numbers. The actual reviews, the informative written words that address the music, consider the artist’s work, and make a case for what is and isn’t successful, often do not match the ratings given, especially when considered in relation to other reviews with similar ratings.
I began to notice, as I was scrolling through a year of reviews, that a release that got panned would often get the same rating as a release that was favorably written about. There’s a general lack of correlation between the written word and the rating, and oftentimes the rating undermines the actual work being done by the writers at RA. For readers, all this does is create a system that is meaningless but which endlessly says to them at first glance that everything being released and covered by RA is some top notch business. Which obviously isn’t the case.
Are ratings really just a method for generating comments? Is that it? Readers have loved nothing more over the years on RA than gloating over a bit of shite music getting a terrible score, or ranting endlessly about how unfair it was that “X” release got a low rating while “Y” release got a high one. I’ve certainly been guilty of both things in the past. But I don’t think that’s really the reason they exist, even though it does seem to be at least part of the equation. Back when RA was employing a larger increment range in their rating system, before the website redesign, they would post the rating so that it was visible without clicking through to the actual article. Obviously, this design had some drawbacks, one of them being that people could theoretically just scan the ratings and never actually click through to the reviews. That’s bad for business and it creates uneducated readers. If the whole point of a music review is to transmit information about music, to share something you believe to be beautiful, or awful, or ugly, or remarkable, then a rating does none of that under the current methodology. If it’s about making a compelling argument for why such and such a release does or doesn’t matter, then readers actually have to read the reviews.

For me, music reviews should be about sharing. They should be about giving people ideas to work with. Sometimes it’s about providing context or history or narrative. Sometimes it’s about making a case for something that might be unpopular or that gets lost in the hype machine. Whatever the motivators are for someone to write and to read music reviews, and there are many, one thing is constant: reviews should be about engaging with music in a meaningful way. You know what isn’t meaningful? A numerical rating.
But this gets us back to the core question I’ve been grappling with. What is the point? What benefit do numerical ratings offer readers, artists, or labels? Especially if they’re meaningless because you only ever use a small percentage of your rating range, thus neutering any potential usefulness. If your rating system is so profoundly off, with half your rating scale unused, and with favorable ratings given to nearly every release you’ve covered this year (and we’re talking 570 releases, that’s not a handful or a few, that’s a ton, essentially eighty a month), do the numbers matter? People often accuse RA and other sites of being soft on an artist/label/genre/hyped-whatever, and sure, that may be true in some cases. But if you look at many of these favorable ratings above the 3.0 cutoff, you’ll discover written reviews that are, in fact, not all that favorable. And that is a major problem. It compromises the legitimacy of the system. It’s time to retire the numbers. Hang them in the rafters and move on. Support the work of reviewers by letting your readership come to their own conclusions through the ideas and arguments of the people you’ve hired to write about dance music. Undermining their work by endlessly handing out the same rating for every release that gets written about, regardless of how good or bad it actually is, does no one any favors.