By Dave Guthmann
OK, it’s time to fill out your Oscar picks, and this year you want to win that pool with your friends. Most years, you’ve picked the movies you liked the most, but that approach never worked. So you decided to look online to see what more knowledgeable folks have to say. If you can just find a “cheatsheet” listing each of the likely winners, you’ll finally win a few bucks and, more importantly, the esteem of your friends.
Or maybe you’ve been looking online for years, but you’ve never found the right cheatsheet. There’s some blowhard in your pool who almost always wins, and you just KNOW he or she has landed on a really good online source.
Well, I’ve scoured the Worldwide Web and looked in every nook and cranny. I found over 4,500 Oscar “ballots” (i.e., sets of predictions) from the last 18 years. I put them in a database, did some behind-the-scenes work to make sure they’re comparable, and can now tell you which sorts of predictions are most likely to help you win your contest.
First, though, we need an explanation of how I grouped these prediction sources. Here are the categories:
Betting Odds Sites: Betting on the Oscars has been legal in England for quite a while, and Las Vegas often posts odds just for fun. Now betting on the Oscars is legal in New Jersey as well. So there are many online sources for the latest odds on the Oscar candidates.
Bloggers: I had a few tough calls to make, but bloggers are generally non-professional writers, i.e., not associated with a media company. More often than not, they offer their picks just for fun.
Editors: This is a fairly specialized category: folks on staff at Awards Daily or Gold Derby. Gold Derby annually publishes the picks of its editors, and the picks of Awards Daily’s editors can be found on that site’s annual Big Bad Prediction Chart.
Experts: These folks have somehow gained a reputation as being among the best at Oscar forecasting. They may fall into one of these other categories (e.g., a journalist); however, if they are a member of Gold Derby’s expert group, one of the Gurus o’ Gold, or included in Awards Daily’s Big Bad Prediction Chart, I considered them an expert.
Journalists: Often film critics, these are the news media folks who offer their annual predictions. Most write for newspapers or magazines, but some appear on TV or radio.
Staff Consensus: Some websites, newspapers, and magazines publish what they describe as a consensus of their staff. I wondered if this aggregated knowledge would be more helpful in predicting winners. So it’s a separate category.
Stat Geeks: For lack of a better name, these are predictions compiled through computer algorithms. Using data from past years, e.g., how many Golden Globe winners went on to win an Oscar, these folks let their computers figure out which known facts best predict the Oscar winners (a toy sketch of this idea follows the list).
Surveys: There aren’t a whole lot of these around, but you’ll occasionally find a website’s user survey of who folks think will win the Oscars. Folks who respond to the survey are probably not experts, but the thought is that the majority of respondents will favor the more likely winners.
Top Users: This is a category invented by the folks at Gold Derby. Each year they select 24 Gold Derby users for each of two groups: All Stars and Top 24. Those 48 users have a strong Oscar prediction track record, especially in the previous year.
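To give you a feel for what those Stat Geek algorithms do, here’s a toy sketch in Python. The precursor awards, hit rates, and nominees below are all made up, and real models are far more sophisticated, but the basic recipe of scoring nominees by how predictive past awards have been looks roughly like this:

```python
# Made-up historical record: for each precursor award, how often its winner
# went on to win the matching Oscar (these numbers are purely illustrative).
precursor_hit_rate = {
    "golden_globe_drama": 0.78,
    "sag_award": 0.85,
    "bafta": 0.72,
}

# Hypothetical current-year nominees and which precursor awards each one won.
nominees = {
    "Nominee A": ["sag_award", "bafta"],
    "Nominee B": ["golden_globe_drama"],
    "Nominee C": [],
}

# Score each nominee by summing the historical hit rates of the precursor
# awards they collected; the highest score becomes the computer's prediction.
scores = {
    name: sum(precursor_hit_rate[award] for award in awards)
    for name, awards in nominees.items()
}
prediction = max(scores, key=scores.get)

print({name: round(score, 2) for name, score in scores.items()})
print("Predicted winner:", prediction)
```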
Here’s how many Oscar ballots I found in each group, and the percentage of Oscar picks each group correctly predicted:
Source of Predictions | Ballots | % Correct |
--- | --- | --- |
Betting Odds Sites | 88 | 75.8% |
Stat Geeks | 98 | 73.3% |
Staff Consensus | 116 | 71.2% |
Editors | 97 | 70.8% |
Experts | 618 | 69.8% |
Top Users | 173 | 69.7% |
Journalists | 883 | 68.0% |
Surveys | 36 | 67.9% |
Bloggers | 2131 | 65.6% |
Yes, the professional odds makers are good at what they do. Otherwise, they would lose the house’s money. There’s a lot of incentive to be accurate in their business.
The stat geeks also seem to do very well. It’s quite possible, given strong statistical tools, to take the “guess” out of Oscar predicting. While many forecasters sometimes give in to a little personal bias (“That actress is due”), computers just look at the cold hard facts.
The next four groups were pretty close, though the staffers and editors seem to do a bit better than the experts and top users.
Finally, down at the bottom of the list, you can see that journalists, surveys, and bloggers lag behind. Following one of those sources will likely not win your Oscar contest.
Ah, but those percentages can be a bit unfair. Not all forecasters predict all 24 categories. It’s fairly common for folks to ignore the three short subject categories and predict just 21. Others pick only 8 or so categories, ignoring both the shorts and all the technical categories.
Let’s say Priscilla correctly picked 18 of 24 categories. That’s 75%. But Elvis decided to pick only ten categories and got 8 right. That’s 80%. Is Elvis a better predictor than Priscilla? Probably not. In most years, those short categories and the sound categories are especially difficult. And, in general, the technical awards are tougher than the acting categories. (There’s also a problem comparing across years—some years are just a lot easier to predict than others.)
So, without going into a lot of detail, I found a way of adjusting prediction percentages to avoid those problems. Basically, I adjust each correct or incorrect pick based on how difficult that category was for all forecasters. In the year Olivia Colman won Best Actress, picking Glenn Close for Best Actress is penalized a lot less than incorrectly picking Lady Gaga. And picking Parasite to win over 1917 is rewarded much more than picking the heavy favorite Joaquin Phoenix to win Best Actor.
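To give a flavor of what that kind of adjustment can look like, here’s a rough sketch in Python. To be clear, this is just one plausible weighting for illustration, not my exact formula, and the ballots in it are made up: a correct pick earns more when few forecasters got that category right, and a wrong pick costs less when most of the field made the same mistake. The real adjustment then rescales everything back into a percentage.

```python
from collections import Counter, defaultdict

# Hypothetical ballots: (forecaster, category, pick). Names and picks are
# made up for illustration; this is NOT the actual data set or formula.
picks = [
    ("Priscilla", "Best Actress", "Olivia Colman"),
    ("Elvis",     "Best Actress", "Glenn Close"),
    ("Lisa",      "Best Actress", "Glenn Close"),
    ("Vernon",    "Best Actress", "Lady Gaga"),
    ("Priscilla", "Best Picture", "1917"),
    ("Elvis",     "Best Picture", "Parasite"),
    ("Lisa",      "Best Picture", "1917"),
    ("Vernon",    "Best Picture", "1917"),
]
winners = {"Best Actress": "Olivia Colman", "Best Picture": "Parasite"}

# How many forecasters made each pick in each category.
tallies = defaultdict(Counter)
for _, category, pick in picks:
    tallies[category][pick] += 1

def pick_weight(category, pick):
    """One plausible difficulty weighting (an illustration, not the real formula).

    A correct pick earns more when few forecasters also got that category right;
    a wrong pick costs less when most of the field made the same mistake.
    """
    field_size = sum(tallies[category].values())
    share_of_field = tallies[category][pick] / field_size
    if pick == winners[category]:
        return 1.0 - share_of_field       # rarer correct pick -> bigger reward
    return -(1.0 - share_of_field)        # lonely bad hunch -> bigger penalty

for forecaster in ("Priscilla", "Elvis", "Vernon"):
    score = sum(pick_weight(c, p) for f, c, p in picks if f == forecaster)
    print(forecaster, round(score, 2))    # Priscilla 0.5, Elvis 0.25, Vernon -1.0
```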
I made one more addition before showing you these adjusted percentages. As mentioned above regarding the Staff Consensus group, a lot has been written about how groups of people often forecast better than individuals (e.g., Superforecasting: The Art and Science of Prediction). So, I also looked at the adjusted percentage of correct predictions for these four sources:
Contest Consensus: Over the last 18 years, I’ve compiled data for almost 10,000 participants in 164 Oscar prediction contests. The size of the groups ranged from 5 to 1,000 participants. For each contest, I took the most common prediction in each category and used those picks as that contest’s consensus ballot (a quick sketch of this tallying follows the list).
Editor Consensus: Gold Derby posts the rankings created by a consensus of its editors. For Awards Daily, I had to compile those rankings by comparing the predictions of the editors that appear in the Big Bad Prediction Chart.
Expert Consensus: Gold Derby and Gurus o’ Gold provide a ranking of nominees in each category based on the consensus of the experts they polled. Awards Daily also lists the consensus pick among its experts. Over the years, other groups of experts were polled (e.g., Oscar Central), and I compiled a consensus winner (i.e., the nominee that the most experts predicted) if the original website didn’t already do it.
Top User Consensus: Gold Derby also posts the rankings created by a consensus of their Top Users.
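The tallying behind a consensus ballot is simple: count the picks in each category and keep the most common one. Here’s a minimal sketch with made-up ballots (the real compilation obviously covers far more categories and forecasters):

```python
from collections import Counter, defaultdict

# Hypothetical ballots from one small contest: forecaster -> {category: pick}.
ballots = {
    "Priscilla": {"Best Picture": "Parasite", "Best Actor": "Joaquin Phoenix"},
    "Elvis":     {"Best Picture": "1917",     "Best Actor": "Joaquin Phoenix"},
    "Lisa":      {"Best Picture": "Parasite", "Best Actor": "Antonio Banderas"},
    "Vernon":    {"Best Picture": "Parasite", "Best Actor": "Joaquin Phoenix"},
}

# Tally every pick per category, then keep the most common one as the consensus.
tallies = defaultdict(Counter)
for picks in ballots.values():
    for category, pick in picks.items():
        tallies[category][pick] += 1

consensus_ballot = {
    category: counts.most_common(1)[0][0] for category, counts in tallies.items()
}
print(consensus_ballot)
# {'Best Picture': 'Parasite', 'Best Actor': 'Joaquin Phoenix'}
```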
So with my new groups, and using the adjusted percentage of correct predictions, the following chart takes a fairer look at the available online resources for assembling your Oscar ballot:
Source of Predictions | Adjusted % Correct |
--- | --- |
Expert Consensus | 75.9% |
Editor Consensus | 75.3% |
Top User Consensus | 74.6% |
Contest Consensus | 73.7% |
Betting Odds Sites | 73.2% |
Experts | 70.8% |
Stat Geeks | 70.8% |
Editors | 70.6% |
Top Users | 70.2% |
Staff Consensus | 69.8% |
Surveys | 68.3% |
Journalists | 66.7% |
Bloggers | 65.7% |
After adjusting the percentages, the Betting Odds sites dropped from the top spot. The reason is simple: many oddsmakers do not take bets on the tougher, more obscure categories. The unadjusted percentage (75.8%) was inflated because it didn’t include tough categories like Documentary Short. They’re still pretty good at what they do, but we now know you can find better predictions.
The Stat Geeks also didn’t do nearly as well after adjusting the percentages. Most of these computer algorithms don’t predict all categories—especially those tough short subject categories. In fact, one computer wizard annually predicts only four categories: Best Picture, Best Director, Best Actor, and Best Actress. You might notice that Stat Geeks tend to brag a bit about their outstanding percentages, but they clearly benefit by avoiding the tougher categories.
Your best bet: take a look at the consensus of experts published at Gold Derby, Gurus o’ Gold, or Awards Daily. Yes, the typical expert will give you a solid accuracy rate of 70%. There are some smart people making Oscar predictions. But putting all your eggs in the basket of just one of those experts is not the best way to win your Oscar contest.
The other consensus predictions also did pretty darn well—still better than a typical expert. Even in the 164 Oscar “amateur” contests, where the 10,000 users averaged only about 55% correct, using a consensus, i.e., the most popular predictions in each category, is a fairly strong approach. In fact, an online Oscar contest called “Poor Stuart’s Can You Beat the Crowd?” is based on the assumption that you’re a winner if you get more right than the consensus of the contest’s participants. In 2014, NO participant was able to do that when the consensus picked 23 out of 24 winners!
Why does following the consensus work so well? Even the best forecasters often like to go out on a limb in a category or two. Say Priscilla had a pretty solid set of predictions in 2020, but she had a “hunch” that Antonio Banderas would win Best Actor. Fortunately, most folks didn’t have the same whim, and the heavy favorite Joaquin Phoenix was the choice of the consensus. In general, the consensus systematically ignores the riskier picks of individual forecasters.
What’s wrong with making risky choices? Any forecaster would like to be the one who predicted Ex Machina would win Best Visual Effects in 2016. (Actually, NO forecaster predicted that winner!) But folks who tend to pick underdogs like Ex Machina in 2016 are also likely to make high risk picks in other categories, e.g., perhaps Bryan Cranston for Best Actor in that same year. In the long run, luck won’t save your bacon, and you’ll miss many more risky picks than you’ll get right.
The chart with adjusted percentages, again, shows journalists and bloggers trailing the field. Taking the advice of your local newspaper, or of a blog you like to follow, is probably better than picking your personal favorite films, but not really by all that much.
If you want to win, take the advice of the experts. And I don’t mean one expert, but a consensus of several of them. It might not win a large contest with thousands of participants, since some incredibly lucky risk-taker will randomly pick several upset winners. But in your small pool of friends, using this approach is definitely your best shot to win the big prize: your competitors’ admiration.