I guess this could be a good general thread for discussing ratings, as it seems of particular interest to many members of the blog.
More specifically, Comdukakis and I were discussing ratings in the "2.7.. ouch!" thread and ran into some confusion over the value of a ratings point in, say, 2000 versus today. This is a common topic on the blog, as you know -- people are eager to compare RAW's ratings in 2012 to RAW's ratings in 2000. The conclusions typically reached are either that RAW is in big trouble because they are doing ratings *numbers* similar to what WCW was doing as it wound down, or that things aren't so bad because a 2.7 rating in 2012 is equivalent to a much higher rating in 2000.
We were stuck on one particular part of the issue -- and that's the topic of my post. Comparisons between yesteryear and today can be made more soundly once the numbers are squared away.
The issue was that Comdukakis produced some links for Nielsen ratings from 2000 that estimated a cable audience size of over 100 million (with 1% or one cable rating point equal to a little over one million homes). Now, all the numbers I have seen place it just under 80 million for 2000. I have a few different sources with figures, but my main numbers come from a Nielsen Media Research article entitled "U.S. Households With Cable Television, 1977-2008". It's not available on the public internet, but if you have a library subscription or a university account, you can access it here at this link.
At any rate, I sent him a message and here is the text of it and my findings:
So I got a chance to sit down and look at some numbers and finally remembered why our ratings audiences weren't matching up -- it's because, when cable ratings were reported, they would often normalize the cable rating so that it matched the broadcast audience household base.
For example, looking at the September 11, 2000 RAW:
Here is a link for the ratings report for that week, although I've summarized the numbers below.
So they've got RAW doing a 4.3 (4.38 million homes) for the 9PM hour and a 4.6 (4.66 million homes) for the 10PM hour -- a 4.45 composite rating and roughly 4.55 million households, calculated from the stated figure of 1,022,000 households per ratings point.
As I said, that was a red flag to me because my numbers for cable have the total number of households for 2000 at close to eighty million, and certainly not over 100 million.
Then I remembered that they normalize them to a common scale for these reports -- in actuality, RAW did a composite 5.8 rating for that week. They adjust the cable rating downward to standardize the audience base at the number of TV households, rather than publishing cable ratings against the number of cable households.
I've reprinted it below, but it is available on the Monday Night Wars Chart in the Wikipedia article of the same name (which by the way, does actually have some errors in it) at this link.
Just to confirm, I pulled up an old Dave Scherer ratings report I had saved on my computer from September 11, 2000, and it matches the Wikipedia article -- RAW scored a 5.8 off hours of 5.3 and 6.3, while Nitro pulled a 3.2 off hours of 3.4 and 2.9.
Now my numbers have the cable audience for 2000 at 78.6 million. That puts a single ratings point at about 786,000 households. Multiplying the 5.8 rating by 786k gives you the 4.55 million households.
So there we go!
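For anyone who wants to play with the conversion, here's a minimal Python sketch of the arithmetic above. The 78.6M cable-household figure and the 1,022,000-homes-per-point figure (which implies a ~102.2M TV-household universe) are the numbers quoted in this thread, not official constants:

```python
# Convert RAW's actual cable rating into households, then into the
# "normalized" broadcast-scale rating the summarized reports print.
# Universe figures are the ones quoted in this thread for 2000.
CABLE_HH_2000 = 78.6e6   # cable households, 2000 (per the Nielsen article cited)
TV_HH_2000 = 102.2e6     # total TV households (1 point = 1,022,000 homes)

def cable_rating_to_households(rating, cable_hh=CABLE_HH_2000):
    """One cable ratings point = 1% of cable households."""
    return rating / 100 * cable_hh

def normalize_to_broadcast(rating, cable_hh=CABLE_HH_2000, tv_hh=TV_HH_2000):
    """Re-express a cable rating against the larger TV-household universe."""
    return rating * cable_hh / tv_hh

households = cable_rating_to_households(5.8)   # ~4.56M homes, the ~4.55M quoted above
broadcast = normalize_to_broadcast(5.8)        # ~4.46, vs the 4.45 in the report

print(f"{households/1e6:.2f}M households, {broadcast:.2f} broadcast-scale rating")
```

Run it and you land within rounding of the report's figures: roughly 4.56 million households and a 4.46 broadcast-scale rating against the 4.45 published.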
I meant to add -- I suppose the take-home message here is to know your source and your data.
If you are using the Wikipedia RAW numbers from that chart (or from many of the other sites on the internet) to make comparisons to 2012 numbers, you have to account for the difference between the cable audience and the broadcast audience at the time. If you pull them from summarized reports, on the other hand, they have likely already been adjusted to the broadcast audience size of the day. I believe modern cable reports are all standardized to the number of TV households (though I'd have to double-check), so make sure your comparisons are 'apples to apples' and not 'apples to oranges'.
At some point I'd like to put together a website for this along with buyrates as sort of a central repository of information to make comparisons between modern shows and old shows, since it's so hard to pull all of this data together in a standardized form. There are a few other people interested in similar projects I believe, so hopefully the time can be found to pull it all together, in order to be as nerdy as we possibly can be!
Every time I want to find out a buyrate of an older or obscure PPV, I spend 2 hours on google going through shitty websites trying to find it. So you got my vote.
Here is some data for comparison purposes:
So RAW last week did a 2.72 composite rating on hours of 2.9, 2.8 and 2.5. Averaged *viewers* (not households) were 3.92 million -- with 4.15, 4.13, and 3.48 million viewers for each respective hour.
For 2012, Nielsen puts the number of TV households at 114.7 million, down from 115.9 million in 2011. If the 2.72 is indeed a broadcast-standardized rating and not the actual cable number (which it appears to be), then about 3.1 million households were tuned into RAW this week, with about 3.92 million viewers (a number which coincides with published reports).
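As a quick sanity check, here's the 2012 arithmetic in Python, using the universe and viewer figures quoted just above:

```python
# Sanity check on the 2012 figures: a broadcast-scale rating is a
# percentage of all TV households. Figures are the ones quoted above.
TV_HH_2012 = 114.7e6   # Nielsen TV-household universe, 2012

rating = 2.72          # last week's composite RAW rating
viewers = 3.92e6       # averaged viewers reported for the week

households = rating / 100 * TV_HH_2012   # ~3.12 million homes
viewers_per_hh = viewers / households    # ~1.26 viewers per household

print(f"{households/1e6:.2f}M households, {viewers_per_hh:.2f} viewers/household")
```

That ~1.26 viewers-per-household figure comes up again below when comparing against the late 1990s.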
So if you're looking at comparing a 2.72 cable rating in 1995 through 2001 versus a 2.72 rating today, it'd look something like this:
The first column gives the cable rating for the given year, the second column the equivalent broadcast rating for that year, and the third the total number of households at a 2.72 rating.
So how is RAW doing today compared to, say, WCW in 2001? Nitro averaged a 2.24 cable rating across the 2001 shows, excluding the final show, which did a much higher than average rating. That works out to about 1.83 million households on average, while WWE today is right around 3 million households an episode, so they still have a way to drop before reaching those levels. Keep in mind, though, that the prospective audience for WWE is much larger today than it was for WCW in 2001, when only about 81.5 million people had access to cable and satellite TV. Going the other way, you'd need about a 3.82 cable rating in 2001 (2.95 broadcast) to reach the same number of households as a 2.72 broadcast rating reaches today.
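The "going the other way" calculation can be done directly. Here's a sketch using the universe figures from this thread; the small difference from the ~3.82 quoted above is just rounding of the inputs:

```python
# What cable rating in 2001 reaches the same households as a 2.72
# broadcast-scale rating in 2012? Universe figures are from this thread.
TV_HH_2012 = 114.7e6     # total TV households, 2012
CABLE_HH_2001 = 81.5e6   # cable/satellite households, 2001

target_hh = 2.72 / 100 * TV_HH_2012              # ~3.12M households
equiv_cable_2001 = target_hh / CABLE_HH_2001 * 100

print(f"~{equiv_cable_2001:.2f} cable rating needed in 2001")
```

The same two-line pattern (rating to households, households back to a rating against a different universe) is all the year-to-year tables in this thread boil down to.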
In terms of raw numbers, you could say RAW is doing about as well ratings-wise today as it was in parts of 1998, in those weeks where it averaged a 4.6/4.7 rating. As I mentioned before, though, that's an overly simplistic assessment given the difference in audience sizes -- RAW was doing better in 1998 because it drew those numbers from a smaller pool of available television viewers. RAW was also usually the #1 program on cable TV for the week in 1998 -- now it is usually the #1 program for the night, and somewhere between 5th and 15th for the week. Remember, too, that RAW was averaging those numbers against direct and extremely popular wrestling competition each week, with both shows vying for similar demographics, so who knows how big the ratings for 1997 Nitro or 1999 RAW would have been without Raw and Nitro competing with each other.
The other wildcard is the number of actual viewers -- I'm not aware of what data they use to determine how many viewers per household are watching a particular show. For example, this last week's RAW had about 1.26 viewers per household. Looking at various television ratings over the years, this number appears to have shrunk on average over time. I'm not sure if it's based on census data or some combination of ratings or what, but it appears to have been a lot closer to 1.6 or 1.7 viewers per household in the late 1990s, so some of those shows may have had substantially more viewers than shows with a similar number of households in 2012.
Two more tables for comparing households and viewers between older shows and modern shows:
So in 2012, a 2.72 rating is equivalent to about 3.1 million households / 3.9 million viewers. To achieve that same number of households and viewers in other years, you'd need the following cable/broadcast ratings (assuming 1.26 viewers per household for every year):
Notice the big drop in 1999 -- this is because the satellite industry took off and brought many more eyeballs to USA/Spike after cable subscriptions stagnated.
The same caveat of audience size and the potential for previous years to have more viewers per household applies of course, so consider this a work in progress and not hard facts.
For example, a Raw or Nitro with a 4.72 rating in 1998 may have reached the same number of households as a RAW today, but that show likely had between 5.1 and 5.3 million viewers in 1998, whereas today the same number of households represents slightly under 3.95 million viewers.
If that is the case, then in terms of actual viewers, a 3.52 rating in 1998 would have had the same number of viewers as RAW did last week. This chart shows the other years in the event that there were 1.7 viewers per household, compared to 1.26 viewers today.
Equivalent ratings to match 2012 households of 3.1 million households/3.9 million viewers, with 1.7 viewers per household for each year (except 2012, which is 1.26):
A big difference! More than a whole ratings point in some cases. Either way, it's just not something that can be known until someone figures out how actual viewers are calculated from households on a year-to-year basis.
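To make the viewers-per-household adjustment concrete, here's the scaling in Python. The 4.72 household-equivalent rating and the 1.26 figure are from the discussion above; the 1.7 viewers-per-household for 1998 is itself an estimate, as noted:

```python
# If 1998 households averaged ~1.7 viewers each vs ~1.26 today, the
# rating needed to match *viewers* is lower than the one matching
# *households*. All inputs are the figures discussed in this thread;
# the 1.7 is an estimate, not a published number.
hh_equiv_rating_1998 = 4.72   # 1998 rating matching 2012's ~3.1M households
viewers_per_hh_1998 = 1.7     # assumed late-1990s viewers per household
viewers_per_hh_2012 = 1.26

viewer_equiv_rating = hh_equiv_rating_1998 * viewers_per_hh_2012 / viewers_per_hh_1998
print(f"~{viewer_equiv_rating:.2f} rating in 1998 to match 2012 viewers")
```

This lands at roughly 3.50, within a couple hundredths of the 3.52 quoted, with the gap down to rounding of the inputs.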
Great work on all this.
My only thought on why the number of people watching per household has dropped is the increase in internet time, texting on phones, Facebook, etc. I know that in general younger people are watching less TV than in years past, with more options such as the internet, smartphones, and video games (god forbid they go outside). That may influence how many people in each household are watching a program together.
Thanks! Eventually I'd like to turn it into some kind of web application where you can just put in a rating and a year and pop out some numbers.
Yeah, those all strike me as excellent reasons for the change. It just seems odd to me that that number doesn't seem to be readily available from year to year. I guess it's just not "need to know" information, although it'd make all these calculations much handier.
Great post. Amazingly informative
Incredible, detailed work nwa88. You are a man after my own statistical heart.
I'll add this tidbit to the discussion: I think the rating, and not the number of viewers, is totally applicable. Market share is very important, and even though the rating doesn't reflect as big a drop-off in total viewers from 2000, the lower rating does show a drop-off in the percentage of the available audience they're able to capture.
We use a market share application at my hotel (called Star, or STR) that is used by pretty much all hotels in the US. You basically create a competitive set from the hotels you fight for business with, and it compares your weekly and monthly numbers (as well as a 28-day running tally) against the total number of available rooms in the set. Then, based on your room revenue, rooms sold, etc., it produces a market share index. You can exceed your market share (over 100) or fall below it based on performance.
With this said, a 2.7 rating in 2012 is not a good sign of performance, and neither is the declining rating since 2009: even though total viewers haven't dropped drastically, the available viewers have increased and their share hasn't.
Yup, I do agree with that too, Flair -- as long as the market is mature, which cable TV obviously is these days. I think it's an important measure for exactly the reasons you mention, as it's basically a snapshot relative to the environment the show took place in. For some reason it is often used to prove the "wrong" point or just used haphazardly.
The only time I think it really gets dicey is when the market isn't mature -- the earliest PPV buyrates are a good example. The main reason we saw 7% and 8% buyrates (if those numbers are anything but completely worked) is that the audience was so small (in the millions rather than tens of millions) and concentrated in the largest metropolitan areas, where wrestling was a hotter commodity. Back then, buyrates were computed as an average of buys against the customer base of each cable market, with the final buyrate just being an 'average of averages', so as softer wrestling markets got PPV access, they drove the overall buyrate down until the market matured and went national in 1993 or so.
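To show how an 'average of averages' can inflate a buyrate, here's a toy example -- the market figures are entirely hypothetical, made up just to illustrate the mechanism:

```python
# Hedged sketch of the 'average of averages' buyrate described above,
# with entirely hypothetical market data: a hot metro market with a
# small PPV universe can inflate the headline number.
markets = [
    # (PPV-capable homes, buys) -- hypothetical figures
    (200_000, 14_000),   # big metro, hot wrestling market: 7.0%
    (500_000, 10_000),   # softer market: 2.0%
    (300_000,  3_000),   # softer market: 1.0%
]

per_market = [buys / homes * 100 for homes, buys in markets]
avg_of_avgs = sum(per_market) / len(per_market)   # simple mean of market buyrates

# The "true" rate: total buys against the total addressable universe.
total_rate = sum(b for _, b in markets) / sum(h for h, _ in markets) * 100

print(f"average of averages: {avg_of_avgs:.2f}%  vs  overall: {total_rate:.2f}%")
```

One small hot market at 7% drags the simple average of market buyrates well above the true buys-to-universe ratio, which is the dynamic described above: as softer markets came online, the headline average fell.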
What's funny is that we don't get "buyrates" anymore -- they just give us the number of buys. But if you were to compute a buyrate based on today's audience size, WWE's shows this past year would be doing 0.45% - 0.50% buyrates, save for WrestleMania -- basically 1995 numbers, or late-1999 WCW PPV numbers.