Though the name of this blog points to cinema, we allow ourselves to wander into other territories, such as TV, especially when we are in the midst of a “golden age”, so to speak. It is fitting that the first series we look into is a very popular and probably the most controversial one: Game of Thrones.
Because we are interested in data analysis and not content, you can continue reading without being afraid of encountering the demon of all narrative media, AKA the spoiler (unless ratings ruin your experience).
Why Episode 9?
Besides being the number of my favorite football player, 9 is also a pivotal episode in GOT. After five seasons, fans have learned to anticipate the unexpected in this episode (which is the last one before the season finale). It’s not only the element of surprise that makes these episodes stand out: the climax is crafted slowly, with superb narrative technique and outstanding cinematic language. In this post, we want to use statistics to find out not whether they are the best episodes (the five “nines” are among the top 8 scores of the series on IMDB, out of 50 episodes), but rather how they perform internally, within the “nines”.
In order to compare between the different seasons, we will use two measurements: Nielsen ratings and the IMDB user score. But we will not use the absolute number of viewers*; instead, we derive two calculations from it, creating a more standardized measurement. This adjustment is required because of the constant growth of the show’s fan base: GOT almost quadrupled its audience, from 2.22 million viewers to 8.11 million. These new measurements let us compare the ratings even though the absolute numbers are on different scales. Without these adjusted measurements, a chart describing the change across episodes would simply be a positively sloping line, to take one example.
The first is an episode’s rank within its season, determining how it performed with respect to the other episodes in that season. The second is a score based on the season’s average rating, giving each episode a score relative to the season’s overall performance. This number is more variable than a rank, which is confined to a value between 1 and 10.
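To make the two measurements concrete, here is a minimal sketch of how they could be computed. The viewer numbers below are made up for illustration; they are not the real GOT figures.

```python
# Made-up viewership (millions) for one hypothetical ten-episode season.
viewers = [2.2, 2.1, 2.3, 2.5, 2.4, 2.6, 2.5, 2.7, 3.0, 2.9]

# Measurement 1: rank within the season (1 = most-watched episode).
order = sorted(range(len(viewers)), key=lambda i: viewers[i], reverse=True)
rank = [0] * len(viewers)
for position, i in enumerate(order, start=1):
    rank[i] = position

# Measurement 2: score relative to the season's average rating.
season_avg = sum(viewers) / len(viewers)
relative = [v / season_avg for v in viewers]

# Episode 9 of this fake season: its rank and its relative score.
print(rank[8], round(relative[8], 3))
```

With these fake numbers, episode 9 is the season’s most-watched (rank 1) and draws about 19% more viewers than the season average; the relative score is the more fine-grained of the two, since it is not confined to whole-number ranks.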
Do viewers like what they see?
This chart depicts the change in score and rating (adjusted) from season to season. We see that for seasons 3-5 the rating and score follow the same trend (though the magnitude of the change differs). But season 2 is an exception: while its ratings decreased, its score improved. This can have two explanations. The first stems from the difference between when each measurement is recorded: the Nielsen rating is a real-time measurement that cannot change over time, while the IMDB user rating can change years after an episode aired. It would have been interesting to compare episode scores closer to the time they aired. The second reason is not related to the data but to the nature of art: as the series progressed, the people involved became more skilled (writers, directors, actors, and even special effects teams). Season 4 is the exception (which also happens to artists; it is hard to maintain the same level of creativity all the time). Also, season 5 is where the show stopped following the books devoutly. It may be that by season 4 the books were more of a constraint and less of an inspiration.
A seasonal success
1 is the highest rank and 10 the lowest
Here, by comparing an episode’s rating rank and score rank within the same season, we again see that the hindsight of the scoring affects an episode’s score rank: in season 2, the gap between the rank of the rating and that of the score is the largest. For the other seasons, none show a perfect correlation, but the gaps all stay within 1 to 3 levels.
A bad influence
Being a good episode that receives good reviews or hype can’t change its own ratings. What it can do is lure more people to watch the next episode; buzz can do that. The reason may be that viewers want to see the outcome of the previous episode (and we all come back like addicts, even though we know the closure may only arrive somewhere in the following season), or simply that they want more of that good stuff. It is interesting and insightful to compare the relative change in rating between episodes 9 and 10 in each season. And because it is the percentage change between two episodes, it is already standardized.
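The calculation behind this comparison can be sketched in a few lines. The (episode 9, episode 10) viewer pairs below are invented placeholders, not the show’s real numbers:

```python
def pct_change(ep9: float, ep10: float) -> float:
    """Relative change in viewership from episode 9 to episode 10, in percent."""
    return (ep10 - ep9) / ep9 * 100

# Hypothetical (ep9, ep10) viewership pairs (millions), one per season.
seasons = {1: (3.0, 3.3), 2: (4.0, 3.8)}
changes = {s: round(pct_change(e9, e10), 1) for s, (e9, e10) in seasons.items()}
print(changes)  # season 1 gains 10%, season 2 loses 5% in this fake data
```

Because the result is a ratio between two consecutive episodes of the same season, it needs no further adjustment for the show’s overall audience growth.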
5 is the highest rank and 1 the lowest
This chart compares two ranks within the episode-nine group: score and change percentage. We would expect a perfect correlation between the two, meaning the line would follow the bars’ changes. But as we can see, this is not the case. An explanation can be found in the shocking nature of these episodes and the reaction to them (season 3 being a good example, as the data suggest). People needed time apart from the show, and the week between episodes might simply not be enough.
All of these analyses are partial, since the show is still running. It will be interesting to revisit these figures at the end of every season.
* Those who are familiar with the Nielsen rating know that the 18-49 demographic is considered more important than the general audience. But because there isn’t a major difference between the two (in how they relate to one another), and because we aren’t targeting any particular audience, we decided to use the general population.
** There is no dispute that the following episode being the last of the season has an effect on ratings. But this is a common attribute across all seasons, hence its impact is insignificant.