By Dave Guthmann
I recently posted an article ranking the best online Oscar predictions to help you win your Oscar pool. The conclusion: use some sort of consensus of Oscar experts.
That implies that playing the favorites is the best possible strategy.
But will copying every expert consensus pick across the ballot win your pool? Remember that the expert consensus averages 75.9% correct. Is that high enough to finish first in your Oscar pool?
In addition to all those predictions by experts, journalists, bloggers, etc., I have also collected almost 9,000 entries from 92 Oscar contests/pools that took place since 2007. (I decided not to use some earlier contests because, well, the contestants did fairly poorly. I'm guessing the influx of online information, e.g., expert predictions, has helped improve the accuracy of Oscar contest picks.)
The winners of these contests averaged 79.9% correct. Now, of course, that's just an average; the winning percentage varied from a low of 58.9% (in a tiny little pool of 6 people) to 97.3% (in a large contest with almost 700 contestants). Using an expert consensus would have easily won that tiny pool, but it wouldn't have come close in the large contest.
As you can imagine, you typically need higher scores as the size of the pool grows. A larger field of contestants means more smart folks in the ranks. The winners of small pools (20 or fewer participants) averaged 77.1% correct. Winners of medium-size pools (21 to 75 participants) averaged 79.4%. But the winners of large pools (76 or more participants) averaged 83.0% correct. Even a really good expert consensus cheatsheet is probably not enough to win the big contests.
In large contests, you rarely see winners who didn't make several risky picks. For example, the big college basketball March Madness contests get literally millions of entries, and the winner typically got there by fortuitously calling a large number of the many upsets that occur every year. A few years back, a youngster had a nearly perfect entry. Given all the upsets that year, our young winner apparently ignored what the experts said and relied on old-fashioned luck, and he got it in droves! Meanwhile, millions of other upset-minded predictors finished with horrible scores.
This is where the prediction business gets screwy. If you pick any upsets (as opposed to picking all the favorites), you mathematically decrease your chance of a high score. The more upsets you pick, the more likely you are to score poorly. And though a perfect ballot is exceptionally unlikely either way, an all-favorites ballot always has a better mathematical chance of going 100% correct than ANY ballot with one or more risky picks.
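To make that arithmetic concrete, here's a minimal sketch in Python. The per-category probabilities (0.9, 0.6, 0.4) are made-up numbers for illustration, not figures from my data:

```python
# Illustrative only: hypothetical per-category win probabilities for a 24-pick ballot.
from math import prod

favorite_probs = [0.9] * 12 + [0.6] * 12   # assumed chance that each favorite wins

# Chance an all-favorites ballot goes 24 for 24.
p_all_favorites_perfect = prod(favorite_probs)

# Swap one 60% favorite for a 40% underdog and recompute.
risky_probs = favorite_probs[:-1] + [0.4]
p_one_upset_perfect = prod(risky_probs)

print(f"All favorites perfect:   {p_all_favorites_perfect:.4%}")
print(f"One risky pick, perfect: {p_one_upset_perfect:.4%}")
# Every risky pick multiplies your perfect-ballot odds by (underdog prob / favorite prob) < 1,
# so the all-favorites ballot always has the best shot at a perfect score.
```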
So, by now, poor reader, you're no doubt confused. I'm telling you that picking all favorites gives you the best mathematical chance of a high score, yet I'm also saying it's extremely unlikely, particularly in a larger contest, that an all-favorites entry will win. (Winning one of the big March Madness pools is a lot like winning the lottery.)
I think we can solve this issue with a little semantics magic. Let's define a "risky pick" as any pick based on very little supporting information. It's a risk because it's based on a hunch or a whim: maybe you liked the movie a whole lot, a friend told you it was really good, that actress is due, etc. Meanwhile, the experts aren't picking it because, well, there's really no evidence suggesting it will win.
A “hidden gem,” on the other hand, is a non-favorite that has some valid data/information suggesting the nominee is a logical choice, and, importantly, some experts have reached the same conclusion.
So here’s an easy rule: never make “risky picks,” but do look for “hidden gems.”
Now let’s figure out how to know the difference.
For each Oscar category over the last 14 years, I looked at every pick made by 8,792 Oscar Pool participants in 92 contests. That’s 178,080 Oscar predictions. For each of those picks, I determined what portion of “experts” (including all the experts, editors, consensus groups, etc. referred to in my previous article) predicted that nominee. The percentages ranged from 0% (none of the experts thought that particular nominee would win) to 100% (they all picked that nominee).
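If you want to build the same metric yourself, the computation is straightforward. Here's a minimal sketch assuming one table of expert predictions and one of contestant picks; the column names and toy data are hypothetical, not my actual dataset:

```python
import pandas as pd

# Toy data with hypothetical column names; the real dataset has ~178,000 picks.
experts = pd.DataFrame({
    "expert":   ["A", "B", "C", "D"],
    "category": ["Best Picture"] * 4,
    "nominee":  ["Film X", "Film X", "Film X", "Film Y"],
})
picks = pd.DataFrame({
    "contest":  [1, 1],
    "entrant":  ["Pat", "Sam"],
    "category": ["Best Picture", "Best Picture"],
    "nominee":  ["Film X", "Film Z"],
})

# Share of experts backing each nominee within each category.
counts = experts.groupby(["category", "nominee"]).size()
n_experts = experts.groupby("category")["expert"].nunique()
expert_share = counts.div(n_experts, level="category").mul(100).rename("expert_pct").reset_index()

# Attach that share to every contestant pick; a nominee no expert chose gets 0%.
picks = picks.merge(expert_share, on=["category", "nominee"], how="left")
picks["expert_pct"] = picks["expert_pct"].fillna(0)
print(picks)
```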
The table below shows that contest winners tend to pick more favorites (i.e., nominees where 71% or more of the experts agree it's the right pick) than the folks who finish out of the money: 16.3 such picks compared to 12.5. The non-winners are also much more likely to pick a nominee that 10% or fewer of the experts picked: 5.6 compared to 1.3.
Average Number of Picks (On a 24-Category Ballot) Made by:

| % of Experts Who Made This Pick | Contest Winners | Everyone Else |
|---|---|---|
| 91-100 | 10.7 | 8.7 |
| 81-90 | 2.7 | 1.9 |
| 71-80 | 2.9 | 1.9 |
| 61-70 | 1.6 | 1.0 |
| 51-60 | 1.0 | 0.7 |
| 41-50 | 1.1 | 0.8 |
| 31-40 | 0.8 | 1.0 |
| 21-30 | 1.2 | 1.0 |
| 11-20 | 0.6 | 1.0 |
| 1-10 | 0.8 | 3.2 |
| 0 | 0.5 | 2.4 |
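Here's a rough sketch of how a table like this could be assembled, continuing the toy example above. The bucket edges mirror the rows of the table; the "winner" flag (did this ballot win its contest?) is an assumed column, not something shown earlier:

```python
# Continuing the sketch above: assume picks also has a boolean "winner" column
# marking whether that entrant won their contest.
picks["winner"] = [True, False]   # hypothetical flags for the two toy ballots

bins = [-0.5, 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
labels = ["0", "1-10", "11-20", "21-30", "31-40", "41-50",
          "51-60", "61-70", "71-80", "81-90", "91-100"]
picks["bucket"] = pd.cut(picks["expert_pct"], bins=bins, labels=labels)

# Average picks per ballot in each bucket = picks in that bucket / number of ballots, by group.
n_winner_ballots = picks.loc[picks["winner"], ["contest", "entrant"]].drop_duplicates().shape[0]
n_other_ballots  = picks.loc[~picks["winner"], ["contest", "entrant"]].drop_duplicates().shape[0]

bucket_counts = picks.groupby(["winner", "bucket"], observed=False).size().unstack("winner")
bucket_counts["Contest Winners"] = bucket_counts[True] / n_winner_ballots
bucket_counts["Everyone Else"]   = bucket_counts[False] / n_other_ballots
print(bucket_counts[["Contest Winners", "Everyone Else"]].round(1))
```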
While contest winners do make picks that aren't heavy favorites (7.6 of their picks go to nominees supported by 70% or fewer of the experts), they tend to stay away from picks that virtually no expert is backing. (There are rare exceptions. One contest winner picked The Golden Compass for Best Visual Effects when no expert thought it would win. And no one else in her 60-person pool thought it would win. Now that's something to brag about!)
Overall, contest winners make picks experts agree with 68.9% of the time. The non-winners make picks experts agree with 52.8% of the time.
The preceding suggests that lining up with the experts is very important to picking a winning ballot, but it also leaves room for picking some nominees that aren’t strong favorites, i.e., hidden gems.
Let’s look at how well contestants did on their picks, depending on what portion of experts sided with each pick.
The table below shows how well contestants’ picks match up with the expert predictions. The first column represents the range of expert agreement for each of the 178,080 picks. The second column is the number of picks contestants made in each range. And the third column shows how often those picks were correct.
| % of Experts Who Made This Pick | Number (and %) of Picks of This Type Made by Contestants | % the Pick Was Correct |
|---|---|---|
| 91-100 | 68,678 (38.6%) | 94.1 |
| 81-90 | 15,369 (8.6%) | 81.7 |
| 71-80 | 13,411 (7.5%) | 71.5 |
| 61-70 | 7,317 (4.1%) | 46.6 |
| 51-60 | 5,270 (3.0%) | 41.7 |
| 41-50 | 6,422 (3.6%) | 46.3 |
| 31-40 | 7,193 (4.0%) | 43.6 |
| 21-30 | 6,931 (3.9%) | 37.0 |
| 11-20 | 6,397 (3.6%) | 23.6 |
| 1-10 | 22,712 (12.8%) | 6.0 |
| 0 | 18,370 (10.3%) | 0.8 |
| Total | 178,070 (100.0%) | 58.5 |
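For anyone recomputing the right-hand column at home, it's one more groupby on the same sketch as above, assuming a hypothetical boolean "correct" column that marks whether a pick matched the actual winner:

```python
# Continuing the sketch: assume picks also carries a boolean "correct" column
# marking whether the pick matched the eventual Oscar winner.
picks["correct"] = [True, False]   # hypothetical outcomes for the two toy picks

accuracy = (
    picks.groupby("bucket", observed=False)
         .agg(n_picks=("nominee", "size"), pct_correct=("correct", "mean"))
)
accuracy["pct_correct"] = (accuracy["pct_correct"] * 100).round(1)
print(accuracy)
```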
For example, 68,678 of the contestants' picks were nominees that at least 91% of the experts agreed would win. Those "sure thing" picks represent 38.6% of all the picks made by contestants. And when contestants picked those strong favorites, they were correct 94.1% of the time. Stick with nominees when the experts agree that much!
On the other extreme, 10.3% of all contestant picks were for nominees no expert had picked. I'm guessing these contestants filled out their ballots off the top of their heads and really had no idea what the experts were saying. Well, that's not a good thing in terms of winning an Oscar pool: these picks are right less than 1% of the time!
But it’s in the middle of the chart where things get interesting. When contestants pick a nominee 61 to 70% of the experts agree on, that nominee doesn’t even win 50% of the time. In fact, the four groups spanning 31% to 70% expert agreement are pretty similar in terms of their chance of winning. This suggests that in categories where expert opinion is relatively mixed, systematically picking the highest ranked nominee doesn’t really give you an advantage. Let’s say there’s a nominee 55% of the experts think will win, and another nominee 45% think will win. The odds suggest that 2nd-ranked nominee could be the hidden gem you’ve been looking for.
And for that matter, I wouldn’t completely rule out nominees with between 11% and 30% expert agreement. Those nominees don’t have a high likelihood of winning, but they do win more often than the experts expect. Do a little more research on some of these films and see if you can find a hidden gem on occasion.
But, please, don't pick any nominee that 10% or fewer of the experts are backing. Those are the longshots that will clobber your Oscar ballot.
So now all you have to do is figure out how to identify those hidden gems!
Here are the two tasks I’d suggest:
1. Pay attention to which nominees the experts are picking. This isn't really hard: just look at one of the consensus sources (Gold Derby, Awards Daily, Metacritic, Gurus o' Gold) and take note of what portion of the experts are picking each nominee in each category. If 70% or more of the experts are picking a nominee, go with that choice. If 10% or fewer pick a nominee, rule it out. If the leading nominee falls anywhere in between, proceed to task 2. (A minimal code sketch of this screening rule follows the list.)
2. Use some of the tricks I describe in the soon-to-be published part 2 of this article! (Hopefully, I can finish it before the weekend!)
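For task 1, the screening rule is mechanical enough to write down. Here's a minimal sketch: the thresholds come from the rule above, while the function name and the consensus numbers are made up for illustration.

```python
# Hypothetical helper implementing the task-1 rule: expert_pcts maps each
# nominee in one category to the % of experts picking it.
def screen_category(expert_pcts: dict) -> str:
    favorite, pct = max(expert_pcts.items(), key=lambda kv: kv[1])
    if pct >= 70:
        return f"Pick {favorite} ({pct:.0f}% of experts agree)"
    # Rule out longshots that 10% or fewer experts back; the rest need task-2 research.
    candidates = [n for n, p in expert_pcts.items() if p > 10]
    return f"Mixed category: research {', '.join(candidates)} for a hidden gem (task 2)"

# Hypothetical consensus numbers for two categories.
print(screen_category({"Nominee A": 85, "Nominee B": 10, "Nominee C": 5}))
print(screen_category({"Nominee D": 55, "Nominee E": 40, "Nominee F": 5}))
```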
To be continued…