Deciding which films to watch: Are critics' ratings useful?
Should we go by film critics and their ratings?
The only true measure of a film’s worth to a viewer is the utility it produces for her. This is a deeply personal and subjective measure, and it will necessarily vary between individuals. Remember that friend who is a sucker for rom-coms, however cheesy? And those others, bright folks by all other accounts, who repeatedly fall for neuralyzers and infinity stones? By extension, the same holds for film critics. Whatever some of them may believe, they are not higher-order beings, and they are likely to succumb to the same subjective influences as the rest of us. I am sure many of us have unwittingly subjected ourselves to the torture of watching films from a critic’s “top 10” selection. Still, there is a pervasive belief that critics are somehow able to identify the films you will like, as against viewer ratings, which are produced by “ordinary”, “uninformed” folk and are therefore dismissed as useless.
So, I wanted to know if critics’ ratings of films were any different from viewer ratings. To have a representative yet manageable sample, I chose all English-language films which had a wide theatrical release in the US in 2016 and grossed at least a million dollars at the US box office (data from https://en.wikipedia.org/wiki/2016_in_film, The Numbers https://www.the-numbers.com/, and IMDB). I excluded foreign-language films and English films which had only a limited theatrical release. For viewer ratings I chose the IMDB ratings (https://help.imdb.com/article/imdb/track-movies-tv/ratings-faq/G67Y87TFYYP6TWAV#), because they are based on by far the largest number of raters. For critics’ ratings I used the iconic Tomatometer (https://www.rottentomatoes.com/about), converted to a score out of 10 (the “adjusted Tomatometer rating”) to make it comparable with the IMDB ratings. I chose 2016 (and not a more recent year) because the IMDB ratings of the most anticipated movies take a while to stabilize from their initial high, owing to what we might call the “fanboy effect”.
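For the curious, the conversion is a one-liner. Here is a minimal Python sketch of how such a dataset could be put together; the file name and column names are hypothetical, and the original analysis may well have been assembled differently.

```python
import pandas as pd

# Hypothetical input file and column names -- the original analysis may have
# assembled the data differently.
films = pd.read_csv("films_2016.csv")   # assumed columns: title, imdb_rating,
                                        # tomatometer, us_gross

# Convert the Tomatometer percentage (0-100) to a score out of 10,
# the "adjusted Tomatometer rating" used throughout this post.
films["adj_tomatometer"] = films["tomatometer"] / 10.0

# Keep films grossing at least a million dollars at the US box office.
films = films[films["us_gross"] >= 1_000_000]

print(films[["title", "imdb_rating", "adj_tomatometer"]].head())
```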
Ordinary viewers are moderate, but critics go overboard!
There were 182 films included in this analysis. Twenty sixteen was a good year for movies, with both popular favourites (Captain America: Civil War, Suicide Squad, Zootopia) and critically acclaimed films (Arrival, Moonlight, and of course La La Land!). The average IMDB rating was 6.4, consistent with the average rating of 6.3 for all movies in the 2010s (https://www.economist.com/graphic-detail/2018/11/24/tvs-golden-age-is-real). The average adjusted Tomatometer rating was 5.5.
Figure 1: Distribution of IMDB ratings of films in 2016

Figure 2: Distribution of the adjusted Tomatometer ratings of films in 2016
The figures above suggest that viewers were more measured in their assessment, while critics tended to be more scathing in their criticism and more lavish in their praise, rating some films as much as 5 points lower and others as much as 3.5 points higher. Critics and viewers tended to agree only about the “average” movies (rated around 6-7). This is better depicted in the plot below.
Figure 3: Agreement between viewer and critics’ ratings
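If you want to reproduce a plot of this kind, here is a minimal matplotlib sketch, reusing the hypothetical films table from the earlier snippet; the actual Figure 3 may have been drawn differently.

```python
import matplotlib.pyplot as plt

# Sketch of an agreement plot: adjusted Tomatometer vs. IMDB rating for each
# film, with a dashed 45-degree line marking perfect agreement.
fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(films["imdb_rating"], films["adj_tomatometer"], alpha=0.6)
ax.plot([0, 10], [0, 10], linestyle="--", color="grey")  # perfect agreement
ax.set_xlabel("IMDB rating")
ax.set_ylabel("Adjusted Tomatometer rating")
ax.set_title("Agreement between viewer and critics' ratings")
plt.show()
```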
Critics tended to rate “bad” movies lower and “good” movies higher than viewers
This disagreement in ratings could be explained by at least two factors. First, the IMDB ratings are a “weighted” average of all viewer ratings, with the weights allocated by a proprietary algorithm whose stated objective is to prevent “vote stuffing” (https://help.imdb.com/article/imdb/track-movies-tv/ratings-faq/G67Y87TFYYP6TWAV#). As a result, ratings below 3 are uncommon, and the highest-rated movie ever has an IMDB score of only 9.2 (The Shawshank Redemption); the highest rating in 2016 was a paltry 8.1 (Hacksaw Ridge and Lion). In contrast, the Tomatometer is a simpler measure, the percentage of reviews that are positive, and can therefore reach 0% or 100%. Second, critics have to be opinionated to set themselves apart from each other; they would not last very long if they consistently produced middling ratings, even though a middling rating is, on average, the most likely worth of a film in the real world.
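A toy example makes the first point concrete: averaging twenty middling critic scores lands somewhere in the middle, while a Tomatometer-style “share of positive reviews” pushes the same film towards the extremes. The 6/10 threshold below is an assumption for illustration only; Rotten Tomatoes uses its own fresh/rotten classification of each review.

```python
# Toy numbers illustrating the difference between the two scales.
critic_scores = [5.5, 5.8, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7,
                 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7]

mean_score = sum(critic_scores) / len(critic_scores)                  # ~6.7 out of 10
positive_share = sum(s >= 6.0 for s in critic_scores) / len(critic_scores)

print(f"Averaged score:          {mean_score:.1f}/10")
print(f"Tomatometer-style score: {positive_share:.0%}")               # 90% -> 9.0/10 adjusted
```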
Ratings don’t explain success
In the absence of an objective metric to rate the quality of films, I plumped for the only available alternative: box-office takings. Ratings do not appear to predict a film’s performance at the box office to any great degree. Audience ratings explain about 10% of the variance in gross box-office revenues; critics’ ratings do worse, explaining only about 5% of the variance. Not surprisingly, the only metric which had anything at all to do with success at the box office was the number of viewer ratings.
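For reference, “variance explained” here is just the R-squared of a simple linear regression. A minimal sketch follows, again using the hypothetical films table from earlier; the original analysis may have used a different method or tool.

```python
import numpy as np

# R-squared of a simple linear regression of US gross revenue on each rating.
def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)        # ordinary least-squares fit
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print("Viewer ratings:  ", r_squared(films["imdb_rating"], films["us_gross"]))      # ~0.10 reported above
print("Critics' ratings:", r_squared(films["adj_tomatometer"], films["us_gross"]))  # ~0.05 reported above
```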
Figure 4: Number of viewers rating a film and box-office revenues
So, it appears that film critics tend to be “unfiltered” in their criticism or praise of films, and you are less likely to agree with their ratings, especially when they have a very strong opinion either way. Aggregated audience ratings may be a better guide. If you wish to watch movies which were successful at the box office, films with over 20,000 viewer ratings may be your best bet. Having said all this, films, like all art, evoke deeply subjective responses. So, if you really want to watch a particular film, you should by all means go ahead and watch it, whatever the critics (or other viewers) may have to say about it.


