by Marshall Flores
Hello friends, and welcome back for another edition of Awards Daily’s Statsgasm! On this final day of 2013, we’re going to be taking a one-episode detour away from the art and science of AD’s prediction models. Instead, our focus today will be on the instant-runoff voting (IRV, also known as ranked choice voting) system AMPAS uses when determining Oscar nominations. In an effort to better understand the mechanics of this long-winded, complicated process, we will be running our own simulation of the Best Picture nominating process. This will also allow us to test a hypothesis that has been floating around the blogosphere recently – that a stronger film year with more potential BP contenders will result in fewer BP nominees.
As we all know, the Academy moved to a preferential ballot beginning in 2009 with regards to Best Picture voting, where voters would now rank multiple films on their lists. These ballots would then be tabulated under the IRV framework. AMPAS also initially doubled the Best Picture field from 5 to 10 nominees for a couple of years, before settling on a rule that requires every BP nominee to have #1 support from at least 5% of voters. Consequently, the number of BP nominees can now also vary from 5 to 10 films.
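To make those rules concrete, here is a minimal Python sketch of this kind of preferential count. To be clear, this is my own simplified illustration, not the Academy’s actual procedure: it ignores AMPAS’s surplus-redistribution details and exact round ordering, and the `nominate` function and ballot format are invented for the example.

```python
from collections import Counter

def nominate(ballots, min_noms=5, max_noms=10):
    """Simplified preferential (IRV-style) nominating count.

    ballots: list of ranked film lists (index 0 = the voter's #1 pick).
    Films with under 1% first-place support are eliminated round by
    round, their ballots passing to the next surviving choice; films
    holding at least 5% first-place support at the end are nominated.
    """
    total = len(ballots)
    ballots = [list(b) for b in ballots]
    while True:
        # Current #1 pick on every ballot that still lists a film.
        firsts = Counter(b[0] for b in ballots if b)
        losers = {f for f, n in firsts.items() if n / total < 0.01}
        # Stop when nothing is eliminable or we'd drop below 5 films.
        if not losers or len(firsts) - len(losers) < min_noms:
            break
        # Strike eliminated films; each ballot falls to its next choice.
        ballots = [[f for f in b if f not in losers] for b in ballots]
    qualified = [f for f, n in firsts.most_common() if n / total >= 0.05]
    return qualified[:max_noms]
```

So a film with a small but devoted base of #1 votes survives, while broad #2/#3 support only matters once a voter’s higher choices are eliminated — which is why the number of nominees can float between 5 and 10.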
Now, I’ve read a number of articles attempting to explain AMPAS’s version of IRV, but frankly I think the nuances are lost on a general audience when IRV is discussed abstractly. As with regression analysis and the other statistical concepts I’ve introduced during Statsgasm’s run, I feel the best way to explain the BP nominating process is to actually see it in action. Longtime ADer (and fellow numbers nerd) Rob Y has been running voting simulations for the past few years using input directly from ADers, and they’ve been an excellent demonstration of the IRV system. I hope that our simulation over the next two days will be as effective and clear a primer.
In order to simulate the BP nomination process, we will be using end-of-the-year top 10 lists from critics and bloggers that I’ve been compiling for the past few weeks as proxy ballots. I have collected and parsed 505 of these lists in total. Admittedly, it’s been an exhausting and tedious process that has certainly tested my resistance to carpal tunnel and being cross-eyed, but it has also been an enlightening one. 🙂 And yes, I know that critics and bloggers aren’t representative of AMPAS voters in many respects. Again, this is just an exercise to give us an idea of the nuts and bolts of formulating a BP lineup.
But before we jump into the simulation, let’s see how the critics and bloggers are ranking 2013’s films on their top 10s. The following is a table displaying the overall top 20 films of these lists, ranked using a weighted score (where a #1 placement = 10 points, #2 = 9 points, #3 = 8, etc.)
Prospective BP frontrunners 12 Years a Slave, Gravity, and American Hustle are all ranked in the top 10, and as we can see, 12 Years a Slave and Gravity are *way* ahead of the others when it comes to #1 votes. Other contenders such as Inside Llewyn Davis, Her, and Nebraska are represented here as well. But the one thing to always remember is that preferential balloting primarily rewards passion, i.e. #1 placements. We’ll soon see how much of this top 20 ends up making the BP lineup when our simulation is finished.
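For the curious, the weighted score behind the table above is simple to compute; here is a quick sketch (the `weighted_scores` function is my own naming for the illustration):

```python
def weighted_scores(ballots):
    # Each ballot is a ranked top-10 list; a #1 placement is worth
    # 10 points, #2 is worth 9, and so on down to 1 point for #10.
    scores = {}
    for ballot in ballots:
        for rank, film in enumerate(ballot, start=1):
            scores[film] = scores.get(film, 0) + (11 - rank)
    return scores
```

Note how different this is from the preferential count: here every placement on a list earns points, whereas the nominating ballot cares first and foremost about where a film sits at #1.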
====
[NOTE: Since today, tonight and tomorrow are a holiday worldwide, we’ve decided to split this week’s episode of Statsgasm into two parts. We’ll ring out the old by looking back at Marshall’s summary of the Academy’s preferential balloting process. Then we’ll ring in the new tomorrow, looking forward past the herculean task Marshall has undertaken in compiling 505 separate Top 10 Lists, to the tabulation of those ‘ballots’ and the results delivered in the Mother of All Top 20 Charts we see above. A few hours from now Part 2 will thrust us straight into the simulated belly of the balloting beast. – Ryan]
Thanks Ryan and Marshall. I tried the direct links and the front page and none of the links seem to work; they just spin searching until timing out. I finally decided to try on a laptop and it works there, so it seems to be a problem with mobile only?
Exciting to see Before Midnight so high.
Hi guys, Part 2 should be near the top on the AD front page. If it’s still not showing up for you, try clearing your browser’s cache.
Here’s a direct link to Part 2.
Same here, edkargir. I’ve been eager to read part 2 but the link hasn’t been working
Part 2 is not working.
Hi Jack, I would invite you to Part 2 of this episode in which I properly run the simulation https://www.awardsdaily.com/blog/statsgasm-episode-4-best-picture-nomination-voting-simulation-pt-2/
This is inaccurate. The AMPAS system is NOT a points system, where a first choice is worth 10, a second choice is worth 9 and so on. See some articles about the system at http://oscarvotes123.blogspot.com/2011/06/academy-of-motion-picture-arts-and.html
All other major categories nominate five using a more straightforward version of this system. See: http://oscarvotes123.blogspot.com/2011/01/for-your-consideration-how-oscar.html
For the Best Picture Oscar, it’s “one winner instant runoff”, as spelled out at http://www.instantrunoff.com
If this theory is proven, and in a weaker year we get more nominees while in a stronger year we get fewer, then it will turn out that this whole nomination process is a bad idea. They should either go back to 5 nominees or go to 10.
Hi Rob! Yes, the table above just indicates the Top 20 films using a weighted score. It’s not indicative of the full results, which will be released in Part 2 tomorrow. Stay tuned! 🙂
Oooops, I just read that you only included the top 20 titles.
Marshall,
Based on what you obtained, if the critics’ top 10 are used as if they are nominating AMPAS ballots, this is what will happen:
To obtain a nomination after round 1, a title would need at least 9.1% of the ballots or 46 votes. 12 Years a Slave and Gravity both meet this.
Then the titles receiving less than 1% (5 number ones or less) would have their ballots redistributed to their number twos. That would mean Blue Jasmine, All Is Lost, and Captain Phillips would not be nominated and their 8 votes would be redistributed to the next title on their lists. Any title receiving 5% or more (or 26 votes) would receive a nomination. Inside Llewyn Davis, Her, and Before Midnight would instantly receive a nomination.
Either Leviathan or The Act of Killing could cross the 5% rule, as each needs fewer than 8 votes to reach 26, but it would be unlikely: the distribution of number 2 titles probably does not vary much from the number 1 titles, and either title would need the majority of the number 2’s from those 8 ballots.
So based on your work, these would be the nominees (from the critics’ top 10 lists):
12 Years a Slave
Gravity
Her
Inside Llewyn Davis
Before Midnight
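The thresholds in this worked example all fall out of the 505-ballot total; a quick check of the arithmetic, assuming the usual "magic number" of one more than the floor of one-eleventh of the ballots for a ten-slot field:

```python
import math

total = 505  # number of parsed top-10 lists

# Round-1 guaranteed nomination: more than 1/11 of all ballots.
magic_number = math.floor(total / 11) + 1

# 5% rule: a nominee needs first-place support on at least 5% of ballots.
five_percent = math.floor(total * 0.05) + 1

# 1% elimination cutoff: films at or below this many #1 votes are dropped.
one_percent = math.floor(total * 0.01)

print(magic_number, five_percent, one_percent)  # 46 26 5
```

These reproduce Rob’s figures: 46 #1 votes (about 9.1%) for an immediate nomination, 26 votes to clear the 5% rule, and 5 or fewer #1 votes triggering elimination and redistribution.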
Good to see Short Term 12 make the top 20, hope it makes the cut
Thanks for the comments, Rufus. But I do think you’re being a little presumptuous about my intentions in running this simulation.
As I explicitly state, the purpose of this two-part episode is simply to demonstrate the variant of the instant-runoff system that AMPAS uses in BP voting. We don’t have any polling data on AMPAS voter preferences, just a bunch of anecdotes, so I am using the critics’/bloggers’ top 10s as proxies, much as Steve Pond uses BFCA polling data in his simulations.
The simulation is not meant to be 100% representative of how AMPAS will ultimately behave. That being said, I still firmly believe it’s a decent first approximation of what could possibly happen. And although I am certainly not a political scientist by trade, I think I still have a decent understanding of the eccentricities of IRV.
All the BP voting simulations I’ve seen have arrived at an 8 film BP lineup, but AMPAS so far has nominated 9 films. Last year was considered a strong year in its own right and they still nominated 9.
We’ll see what happens, then. 🙂
Now, as for your proposed hypothesis that more contenders will lead to fewer films: I think this is only true if a small group of films still garners a sizable amount of support over the others. Since some voting share is lost during the reapportionment of excess support, the more films that are overwhelmingly favored, the harder it will be for second-tier films to get included; and if the pool of second-tier films is quite large (as it appears to be this year), that will dilute the overall support any single second-tier film can get.
I believed this last year, and I expected it would result in fewer than the nine films that were eventually nominated. But last year there wasn’t a trio of films that towered over the others; there were just a lot of films in the mix. This year, though, I think Gravity, American Hustle and 12 Years will garner the lion’s share of support, and after that there is definitely a wide array of second-tier films to split the remaining voters.
I think there will only be 5 or 6 nominees this year. Gut instinct mixed with a little thought. But, mind you, just a little.
You properly point out that AMPAS voters are not the same pool of voters as bloggers and critics, but you sidestep the issue of whether this is relevant. I think the differences are critical to the experiment you appear to be setting up.
While we have no firm knowledge of any numerical specifics of the AMPAS results, one thing that has more or less held true is that many AMPAS voters have not seen a wide array of films. In fact, judging from the inclusion of certain nominees over the last couple of years, it appears that many of the voters only see films that are “buzzed about” in the awards blogosphere.
In other words, we’re dealing with a markedly smaller field of films to choose from. As Ryan Adams has pointed out numerous times, there are really only 20 to 30 films that AMPAS truly considers in any given year.
I haven’t run any statistical models but intuitively this kind of spoils your basic thesis. Since bloggers and critics include a wider array of films than AMPAS, your results are virtually incomparable.