
What’s in a Number?


So far in 2015 Resident Advisor has reviewed 570 releases they categorize as “Singles.” As part of their review system for music, RA employs a rating scale of 0.0 to 5.0 in increments of 0.1, a much finer-grained scale than their previous rating system offered. The change was announced with a bit of fanfare and was meant to provide readers with a more detailed understanding (numerically, at least) of where a given EP sits in relation to other releases. However, that doesn’t actually seem to be the case when one examines the numbers thus far in 2015.

Out of those 570 reviews written in 2015, Resident Advisor has rated 21 releases under 3.0. That works out to roughly 3.7% of all reviews. The lowest rating, given only twice so far this year, was a 2.0. Averaging all the ratings under 3.0 for the year yields a score of 2.88, and only 3 releases managed to net a score under 2.5. Which brings us to the question that came to mind as I scanned a month of reviews, then two months, then the whole damn year: why is anyone still using rating systems in 2015, especially if your rating system appears to be meaningless?
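For anyone who wants to check that math themselves, here’s a minimal sketch of the tally (Python, with a short placeholder list standing in for the 570 scraped scores, which RA doesn’t publish as a dataset):

```python
# Back-of-the-envelope tally of a year of ratings. The list below is
# placeholder data; the real input would be the 570 scores from 2015.
ratings = [3.5, 4.0, 2.0, 3.0, 2.9, 3.7, 2.4, 3.1]

under_3 = [r for r in ratings if r < 3.0]

print(f"total reviews:       {len(ratings)}")
print(f"rated under 3.0:     {len(under_3)} ({len(under_3) / len(ratings):.1%})")
print(f"lowest rating:       {min(ratings):.1f}")
print(f"average of sub-3.0s: {sum(under_3) / len(under_3):.2f}")
print(f"rated under 2.5:     {sum(1 for r in under_3 if r < 2.5)}")
```

Fed the real 2015 scores, that tally produces the figures above: 21 of 570 under 3.0, a 2.88 average among them, and only 3 under 2.5.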

Many websites reviewing music have, over the years, employed rating systems. More often than not these rating systems didn’t work. What metrics are taken into consideration when rating music? Are ratings shorthand in case someone can’t be bothered to read a paragraph? And are they actually helpful? I’d argue they’re not. As it stands now, any casual or occasional reader of RA would perceive the 0–5 rating scale to mean that a 2.5 is average. That’s a pretty standard assumption that I think we can all agree on. In the case of RA, this means they’ve reviewed only above-average content, barring a couple of releases, this entire year. But that’s just if you examine the numbers. The actual reviews themselves, the written words that address the music, consider the artist’s work, and argue for what is and isn’t successful, often don’t match the rating given, especially when compared with other reviews carrying similar ratings.

I began to notice, as I was scrolling through a year of reviews, that a release that got panned would often get the same rating as a release that was favorably written about. There’s a general lack of correlation between the written word and the rating, and the rating often undermines the actual work being done by the writers at RA. For readers, all this creates is a system that is meaningless but which, at first glance, endlessly tells them that everything being released and covered by RA is some top-notch business. Which obviously isn’t the case.

Are ratings really just a method for generating comments? Is that it? Readers have loved nothing more over the years on RA than gloating over a bit of shite music getting a terrible score, or ranting endlessly about how unfair it was that “X” release got a low rating while “Y” release got a high one. I’ve certainly been guilty of both in the past. But I don’t think that’s really why ratings exist, even though it does seem to be at least part of the equation. Back when RA employed larger increments in its rating system, before the website redo, the rating was posted so that it was visible without clicking through to the actual article. Obviously, this design had some drawbacks, one of them being that people could theoretically just scan the ratings and never actually click through to the reviews. That’s bad for business and it creates uneducated readers.

If the whole point of a music review is to transmit information about music, to share something you believe to be beautiful, or awful, or ugly, or remarkable, then a rating does none of that under the current methodology. If it’s about making a compelling argument for why such and such a release does or doesn’t matter, then readers actually have to read the reviews. For me, music reviews should be about sharing. They should be about giving people ideas to work with. Sometimes it’s about providing context or history or narrative. Sometimes it’s about making a case for something that might be unpopular or that gets lost in the hype machine. Whatever the motivators are for writing and reading music reviews, and there are many, one thing is constant: reviews should be about engaging with music in a meaningful way. You know what isn’t meaningful? A numerical rating.

But this gets us back to the core question I’ve been grappling with. What is the point? What benefit do numerical ratings offer readers, artists, or labels, especially if they’re meaningless because you only ever use a small percentage of your rating range, neutering any potential usefulness? If your rating system is so profoundly skewed, with half the scale unused and favorable ratings handed to nearly every release you’ve covered this year (and we’re talking 570 releases, not a handful, essentially eighty a month), do the numbers matter? People often accuse RA and other sites of being soft on an artist/label/genre/hyped-whatever, and sure, that may be true in some cases. But if you look at many of these favorable ratings above the 3.0 cutoff, you’ll discover written reviews that are, in fact, not all that favorable. And that is a major problem. It compromises the legitimacy of the system.

It’s time to retire the numbers. Hang them in the rafters and move on. Support the work of reviewers by letting your readership come to their own conclusions through the ideas and arguments of the people you’ve hired to write about dance music. Undermining that work by endlessly handing out the same rating for every release, regardless of how good or bad it actually is, does no one any favors.

8 Comments

  1. NoNonsenseHomo says:

    The things you’re bringing up, Tonka, I can’t even begin to tell you how many times I’ve heard these exact things in both the world of dance music and the art world. People pretending they’re getting paid, that their work is successful, that they’re able to make a career of it, that nothing is ever wrong. And that’s unfortunate, because 99% of the time their revelation to me is made in private, with the caveat that it never, ever be retold to anyone and that the public can never know. If no one ever steps up to say that this is happening, that it isn’t acceptable, and that a business should be run like one, paying the people it hires appropriately and on time, then nothing will ever change. It’s time to stop with the illusions of success. Be honest about where things are. The article is strong. More people should follow suit.

  2. Toby says:

    It’s simple really: in most cases, the editor chooses the score, not the reviewer.

    Tonka – your column’s on the money as per, but sadly I think it comes down to a general problem with writing about electronic music. There just isn’t any money in it.

  3. NoNonsenseHomo says:

    Toby, I was going back through my mind while writing this, and for the life of me I could not think of a single instance where that wasn’t the case. But I didn’t want to make a blanket statement about the entire world of dance music journalism, past and present, in case there are some outlets that aren’t that way. Personal experience and anecdotal evidence, though, would indicate you are completely right: writers are not choosing, and generally don’t have a say in, the rating assigned to the reviews they’ve written.

  4. Tonka says:

    Thanks NoNonsenseHomo, I mostly wrote it as an excuse to call out a couple of people I really don’t like in the strongest possible terms.

    Toby – Cheers!

  5. richard says:

    Back when I wrote reviews for RA, the editor, Todd Burns, used to score the reviews, not the reviewer. There was never an explanation given for this.

    It looks like the new RA scoring was an attempt to emulate the Pitchfork approach, e.g. 7.8, 6.9, but because it’s out of five it doesn’t really work.

  6. Clark P says:

    How will I know which tank top house record to buy if the two I want are only 0.2 points apart? Life is so hard. #vibeforyourlove

  7. frank says:

    nine words would have sufficed: resident advisor is great… for me to poop on
