
Statsgasm: Final Predictions for the 86th Academy Awards, Pt 1

(NOTE: Updated and Expanded)

by Marshall Flores

Welcome to the final episode of Awards Daily’s Statsgasm for the 2013 Oscar season. We are now in the home stretch, and it’s time for everyone to fill out their ballots and enter their pools. If you haven’t done so already, do enter AD’s Predict the Winner Contest – as always, there will be prizes and bragging rights for the best predictor!

Before we unveil the final slate of predictions made by our Statsgasm models, we would like everyone to keep in mind three things:

1. Correlation is *not* causation. AD’s forecasting models are first and foremost built to predict outcomes as accurately as possible, not explain them. We really can’t make strong assumptions or inferences on how AMPAS voters behave because, outside of a few anecdotes and polls from voters self-selecting to talk under anonymity, we simply don’t have any good data on them (what they like, what they know, what they consider when voting, etc.). And we’ll probably never know any of that.

That being said, the strongest predictors in our models tend to be guild/BAFTA outcome-related precursors, and the reasons for this do make sense intuitively: these groups, with their overlapping membership with the Academy, reflect AMPAS to a limited degree while also acting as its official “gatekeepers.” So we could apply a certain amount of causative analysis in some cases.

2. We *don’t* expect AD’s models to bat 21 for 21, and neither should you. Trends are often broken, history gets made, statistics is never having to say you’re certain. These models make forecasts that may be correct over the long run, with a large enough sample. But in the long run we’re all dead, and the Oscars are a yearly one-and-done deal. I know I’m making predictions in a few categories that are contrary to what the models are indicating.

But I’m regularly checking and validating the models, researching new methods that could help build better mousetraps, considering other predictors which could be significant. All while trying to keep it as simple as possible. As Billy Beane says to his Oakland Athletics in Moneyball, “It’s a process, it’s a process, it’s a process…”

Similar prediction models (devised by smarter guys than I) have batted 75-80% – on par with the typical Oscar pundit. I’ll be more than happy if our models can match that overall performance in their rookie season. Anything above par is just sweetener.

3. With limited data, math can only do so much – intuition, creativity, and a strong passion for film and Oscars history are the keys to making this type of applied statistical analysis enjoy any success in the long term.

As far as I’m concerned, I had the best teacher one could ever have, a teacher who absolutely taught me everything I know about the Oscars and more. That teacher is none other than our patron saint of lost causes here at Awards Daily, the Dr. Frankenstein who more or less invented the Internet Oscar-watching industry when she founded this site nearly 15 years ago. Hopefully enough of her insights and perception have been imbued into these otherwise soulless constructs. This work simply wouldn’t exist without her.

Tribute aside, let’s take a look at our final batch of predicted winners, starting with the major and feature categories. I will provide some commentary as well:

Best Picture:


The BP model reflects the narrative of this season being a hotly contested race between Gravity and 12 Years a Slave, with American Hustle lurking around as a possible spoiler. Gravity does have a lead in the model, a result of its DGA win (which remains the strongest BP predictor historically). However, its lead is by no means insurmountable – if I were a betting man, I’d definitely take the approximately 3:1 odds against 12 Years a Slave winning.
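For anyone unfamiliar with betting-odds notation, here is a minimal sketch of the probability-to-odds conversion behind a "3:1 against" quote. The 25% win probability used below is an assumed round number consistent with that phrasing, not the model's exact output:

```python
def odds_against(p):
    """Betting odds against an event with win probability p, as X:1."""
    return (1 - p) / p

# An assumed ~25% win probability corresponds to 3:1 odds against.
print(f"{odds_against(0.25):.0f}:1 against")  # -> 3:1 against
```

The conversion also runs in reverse: odds of X:1 against imply a win probability of 1 / (X + 1).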

On a more personal note, either 12 Years a Slave or Gravity would be an incredibly worthy, excellent BP winner that should survive the test of time. Both are ground-breaking masterpieces. At the same time, *not* winning BP will absolutely not affect either film’s ultimate legacy in my eyes. The Oscars may be a zero-sum popularity contest, but our own love of cinema should not have to be.

Best Director:


Statsgasm is predicting that there’s an 83.73% chance that Ang Lee will no longer be the only non-Caucasian Best Director winner after Sunday night.

As with Best Picture, either Alfonso Cuaron or Steve McQueen winning will be a momentous, history-making event for incredible, masterful work – something that we should all be celebrating. There is still a lot left to be desired regarding the Oscars and diversity, but progress, slow and frustrating as it may be (especially with an insular old boys club like AMPAS), is progress.

Best Actress:


All hail the great Cate Blanchett – Statsgasm’s biggest lock of the night among the major categories.

Best Actor:


What was initially supposed to be a very competitive category at the start of the season shifted rather quickly in Matthew McConaughey’s favor over the span of a couple of weeks with his Globe, BFCA, and SAG wins. As such, he’s the odds-on favorite. Not a lock by any means, but he does have a solid lead, and a McConaughey win would cap off quite the career revival he’s enjoyed over the past two years. Dallas Buyers Club being MIA at BAFTA shouldn’t really affect his chances, as BAFTA hasn’t had all that great a track record in Best Actor since 2000: it was a total non-factor in Denzel Washington’s and Adrien Brody’s upsets, missed on Sean Penn twice, and missed on Colin Firth in 2009.

In fact, I’ve run into multicollinearity issues when I attempt to include the BAFTA in this particular model: the model ends up estimating BAFTA with a negative coefficient, which would mean that winning the BAFTA somehow *reduces* the odds of winning the Oscar. So I’ve elected to leave it out completely. I suspect many of you will be aghast at this revelation, but this is yet another example of how statistical analysis has its limitations. In the end, quite a bit depends on the abilities and judgment of the model builder.
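To make that negative-coefficient point concrete, here is a minimal sketch with invented counts (not real awards data) showing how a logistic regression can assign a *negative* coefficient to a precursor that is positively correlated with the outcome overall – exactly the pathology described above, when two precursors overlap heavily and one precursor's solo picks tend to miss:

```python
import numpy as np

# Invented toy data, for illustration only. Columns: [Globe win, BAFTA win].
# The two precursors usually agree; when BAFTA goes its own way, it misses.
X = np.array([[1, 1]] * 9 + [[1, 0]] * 4 + [[0, 1]] * 3 + [[0, 0]] * 9,
             dtype=float)
y = np.array([1] * 6 + [0] * 3      # Globe + BAFTA: 6 Oscar wins, 3 losses
             + [1] * 3 + [0] * 1    # Globe only:    3 wins, 1 loss
             + [0] * 3              # BAFTA only:    0 wins, 3 losses
             + [1] * 1 + [0] * 8,   # neither:       1 win,  8 losses
             dtype=float)

# Plain (unregularized) logistic regression fit by gradient ascent.
Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend an intercept column
w = np.zeros(3)                             # [intercept, globe, bafta]
for _ in range(30000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (y - p) / len(y)

# Marginally, BAFTA winners take the Oscar more often than non-winners
# (6/12 vs 4/13 here), yet once the Globe is in the model the fitted
# BAFTA coefficient comes out negative.
print(f"globe coef: {w[1]:+.2f}, bafta coef: {w[2]:+.2f}")
```

The marginal correlation is positive, but conditional on the Globe column the BAFTA column only adds misses – the model dutifully reports that as a negative coefficient, which is why dropping the collinear predictor can be the sensible call.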

Statsgasm’s Best Actor model does factor in previous acting nominations and wins, which gives Bruce Dern and Leo DiCaprio a slight boost while also penalizing Christian Bale’s chances. As for Chiwetel Ejiofor, I do believe he has an outside chance of “pulling a Brody”, and it would be an equally amazing moment. But there are some key differences between then and now; most importantly, Brody was against 4 previous winners and The Pianist was riding a serious jolt of momentum with its surprise BAFTA wins in Best Film and Director.

Supporting Actress:


Although the model is giving Jennifer Lawrence a sizable lead over Lupita Nyong’o on account of her Globe + BAFTA wins (a very potent combo that has batted 1.000 for Oscar in this category since 2000), this is the first category in which my prediction (Nyong’o) will diverge from the model’s. But I also wouldn’t completely rule out the possibility that Lawrence and Nyong’o end up splitting support from younger voters, giving Squibb an avenue to the statuette if older voters unite behind her.

Look for this to be a *huge* early inflection point with regards to the BP race on March 2nd – if Nyong’o wins, 12 Years a Slave will be on the traditional path to victory.

Supporting Actor:


Jared Leto has a very sizable lead on BAFTA surprise winner Barkhad Abdi. Much of this lead is a result of his Globe win, which is the strongest predictor in this category and is estimated to improve the odds of winning the Oscar by a factor of 43.
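That "factor of 43" is an odds ratio: in a logistic regression, a binary predictor multiplies the odds of the outcome by exp(beta), so a factor of 43 corresponds to a fitted coefficient of about ln(43). A short sketch, using an assumed illustrative baseline probability rather than the model's actual numbers:

```python
import math

# An odds ratio of 43 corresponds to a logistic coefficient of ln(43).
beta = math.log(43)
print(f"beta = {beta:.2f}")  # about 3.76

# Illustration with an assumed baseline: a contender at a 20% win
# probability has odds of 0.20 / 0.80 = 0.25; a Globe win multiplies
# those odds by 43, then we convert back to a probability.
odds = (0.20 / 0.80) * 43
p = odds / (1 + odds)
print(f"p = {p:.2f}")  # about 0.91
```

Note that odds ratios compound multiplicatively on the odds scale, not additively on the probability scale, which is why a single strong precursor can swing a race this hard.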

Adapted Screenplay:


Chalk this one up as a near-guaranteed miss by the model – 12 Years a Slave should have this in the bag despite not winning either the WGA (which it was ineligible for) or the BAFTA. But then again, most were predicting 12 Years to win the BAFTA too, only to see Philomena ride a wave of popularity and end up winning.

In terms of pure adaptation, as someone who read Solomon Northup’s harrowing memoir in my AP US history class in high school, 12 Years gets my vote as the winner, hands down. That being said, it’s a fine set of nominees.

Original Screenplay:


The model accurately reflects the dead heat between American Hustle and Her among the experts and pundits. Her has beaten Hustle head-to-head twice this season at the Globes and the WGA, but AMPAS is a far larger body with a sizable acting bloc that has adored Hustle by all accounts. That reason alone nudges me into predicting Hustle, despite my (strong) preference for Her.

If you’re stuck on making a decision here, save yourself a headache and just flip a coin. 🙂

Animated Feature:


Documentary Feature:


A very tight race through and through. All 5 are excellent nominees, and there were plenty more that were unfortunately left out of the lineup, especially Stories We Tell and Blackfish. My preference is for the absolutely horrifying and surreal Act of Killing, but I do feel Act, The Square, and Dirty Wars will all split votes from one another, enabling the more populist and undeniably rousing 20 Feet from Stardom to win.

Foreign Language Film:


I will be perfectly honest: the precursors in this category have been very poor overall at predicting the Oscar since 2000. As such, this is the one model out of the 21 Statsgasm models whose construction I am not all that confident in.

Still, The Great Beauty did win the Globe and the BAFTA, defying the consensus picks to boot. It’s a marvelous, dazzling film, and I’m comfortable sticking with it. But The Hunt and (especially) The Broken Circle Breakdown have excellent shots at winning as well.

That’s all for now. Return tomorrow for Part 2 of our final Statsgasm predictions in the tech categories.

Happy predicting!

Email: marshall(dot)flores(at)gmail(dot)com
Twitter: IPreferPi314