By Dave Guthmann
In Part I of How Much Risk Do You Need to Take to Win Your Oscar Pool, I argued that you should steer clear of high-risk picks, especially those where fewer than 10% of the experts are predicting them. But I also argued that you should look for “hidden gems,” i.e., nominees that aren’t favorites but have a strong case to win.
So how many viable hidden gem candidates are out there? Over the last 18 years, 432 categories have been decided. Experts were in at least 70% agreement on the likely winner in 313 (72%) of those categories. In most of those instances, the favorite’s chance of winning is high enough that you probably won’t find a hidden gem (though a few nominees in the 10-30% range do occasionally pay off). That leaves, in effect, 119 viable categories over the 18-year period where an upset seemed possible—about 6 or 7 categories a year where a viable gem might be hidden.
In those 119 categories, an underdog won 64 times (54%). Of those underdogs, 9 would not have been viable hidden gem candidates—they were winners that fewer than 10% of the experts thought would win. That means only about 55 successful hidden gem picks could have been made over the 18-year period—only about 3 a year.
Here’s worse news: 16 of those 55 candidates were nominated in one of the three short categories. Picking those is not easy. I offer some advice below, but suffice it to say you’ll really only have 39 viable hidden gem candidates that you’ll know much about. Yep, you’re now down to perhaps 2 a year.
And if you really want to be bummed, remember that I mentioned you should be very cautious about picking nominees with 10% to 30% support among the experts. Well, 36 of the 55 hidden gems are that much more hidden. You’ll need really good information to pick one of those. So how many remain if you don’t count the short categories and the 10-30% longshots? Only 15—not even 1 a year.
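For the stat-minded, the funnel above is just arithmetic; here’s a quick sketch (every number comes straight from the text):

```python
YEARS = 18
total = 432                       # categories decided over the last 18 years
high_consensus = 313              # >= 70% expert agreement on the winner (72%)
viable = total - high_consensus   # 119 categories where an upset seemed possible
upset_wins = 64                   # ...and an underdog actually won (54%)
too_hidden = 9                    # winners fewer than 10% of experts picked
gems = upset_wins - too_hidden    # 55 findable hidden gems
in_shorts = 16                    # gems hiding in the three short categories
knowable = gems - in_shorts       # 39 gems in categories you can research

print(round(viable / YEARS, 1))   # about 6.6 viable categories a year
print(round(gems / YEARS, 1))     # about 3.1 gems a year
```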
Tough task? You bet.
But finding those gems is how you’ll win your pool. There are ways to find gems even among the short categories and the 10-30% longshots. Here are some possible methods I use for finding those hidden gems.
- Use the track record of individual forecasters in specific categories to see if they’re picking any non-favorites.
All those Oscar predictions in my database show that some forecasters have a particularly successful track record in certain categories. I’m not exactly sure why that is. Maybe some forecasters have inside information about certain categories, e.g., they know lots of voting actors who are willing to talk. Maybe they’ve covered documentaries for several years and have a knack for knowing the direction of the Academy. Maybe they’ve made a point of watching all the shorts over the years just so they can finally figure it all out. Or maybe some luck is involved.
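Computing those per-category track records is straightforward if you keep your prediction history as (forecaster, category, pick, winner) records; here’s a minimal sketch—the expert names and films are made up for illustration:

```python
from collections import defaultdict

def category_track_records(history):
    """history: iterable of (forecaster, category, pick, winner) tuples.
    Returns {(forecaster, category): accuracy over that history}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for forecaster, category, pick, winner in history:
        key = (forecaster, category)
        totals[key] += 1
        hits[key] += (pick == winner)
    return {key: hits[key] / totals[key] for key in totals}

# Hypothetical records for illustration only.
history = [
    ("Expert A", "Documentary", "Film X", "Film X"),
    ("Expert A", "Documentary", "Film Y", "Film Y"),
    ("Expert A", "Documentary", "Film Z", "Film Q"),
    ("Expert B", "Documentary", "Film X", "Film X"),
]
records = category_track_records(history)
```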
Regardless, below are my go-to experts (or expert groups) for each of this year’s 23 categories. I didn’t include any forecasters I categorized as a “blogger” or a “journalist” in my earlier analysis. Some of those folks initially landed in my top 5 for some categories, but I made the hard decision to include only well-regarded forecasters who are easy to find.
No numbers, percentages, etc. are shown—the numerical differences between these folks are minute. But each of these folks (or groups) landed at the very top in terms of accuracy, and each has made predictions in the relevant category for at least 8 years. FYI, they are listed in alphabetical order.
You might just use one of these folks as your guide in each category. But, personally, I like to think of the five experts as a sort of “panel of judges.” The majority wins. If the majority picks someone other than the favorite, then that’s very likely a hidden gem. (If there’s a tie, I go to the 6th most accurate expert on my list.)
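That “panel of judges” rule is easy to mechanize; here’s a minimal sketch, assuming the simplest reading of the tiebreak (on a tie for first place, defer to the 6th expert’s pick):

```python
from collections import Counter

def panel_pick(five_picks, sixth_expert_pick):
    """Majority vote among the five panel experts; on a tie for first
    place, defer to the 6th most accurate expert's pick."""
    ranked = Counter(five_picks).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return sixth_expert_pick
    return ranked[0][0]

def is_hidden_gem(five_picks, sixth_expert_pick, favorite):
    """The panel's choice is a likely hidden gem if it isn't the favorite."""
    return panel_pick(five_picks, sixth_expert_pick) != favorite
```

So a 3-2 split for a non-favorite flags a gem, while a 2-2-1 split hands the call to expert number six.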
Picture | Director |
Mark Johnson (Awards Daily) | Thelma Adams (Yahoo) |
Tariq Khan (Fox) | Tariq Khan (Fox) |
Scott Mantz (Collider) | Paul Sheehan (Gold Derby) |
Metacritic Consensus of Experts | Slant Magazine (Slant Magazine) |
Tom O’Neill (Gold Derby) | Ben Zauzmer (Hollywood Reporter) |
Most of the folks in the above charts, and the ones that follow, can be found simply by looking at one of Gold Derby’s lists, the Awards Daily Big Bad Prediction Chart, or the Gurus o’ Gold final consensus chart.
Actor | Actress |
Chris Beachum (Gold Derby) | Greg Ellwood (Playlist) |
Marcus James Dixon (Gold Derby) | Guardian Staff Consensus |
Daniel Montgomery (Gold Derby) | Kevin Polowy (Yahoo) |
Brian Whisenant (Awards Wiz) | Nathaniel Rogers (Film Experience) |
Ben Zauzmer (Hollywood Reporter) | Ben Zauzmer (Hollywood Reporter) |
Special honorable mention is needed for the Actress category. As I mentioned earlier, I didn’t include any non-expert bloggers or journalists in these lists. However, one “blogger” deserves a mention in the Actress group. Nick Rhodes, keyboardist for the 80s rock group Duran Duran, has offered Oscar predictions for many years on the duranduran.com website. Over the last 18 years, Rhodes has correctly predicted the Best Actress winner EVERY one of those years. Nothing equals that record—in any of these categories. He’s not on my expert panel for this category, but his pick will definitely be of interest—especially this year when Best Actress is such a close race. (I just checked and Nick says this year’s winner will be Frances McDormand. So now we know the winner…oops…I mean his prediction.)
Supporting Actor | Supporting Actress |
Chris Beachum (Gold Derby) | Thelma Adams (Yahoo) |
Adam Chitwood (Collider) | Bovada Betting Odds |
Marcus James Dixon (Gold Derby) | Kyle Buchanan (NY Times Carpetbagger) |
Tom O’Neill (Gold Derby) | Gold Derby Expert Consensus |
Ben Zauzmer (Hollywood Reporter) | Dave Karger (IMDB) |
Glenn Whipp (LA Times) |
There’s an extra expert for the Supporting Actress category—I couldn’t toss one of those experts because they’ve each been perfect, or near perfect, over many years.
Adapted Screenplay | Original Screenplay |
Erik Anderson (Awards Watch) | Erik Anderson (Awards Watch) |
Bovada Betting Odds | Bovada Betting Odds |
Adam Chitwood (Collider) | Greg Ellwood (Playlist) |
Marcus James Dixon (Gold Derby) | Daniel Montgomery (Gold Derby) |
Gold Derby Expert Consensus | Ben Zauzmer (Hollywood Reporter) |
Glenn Whipp (LA Times) |
Animated Feature | Documentary | International Film |
Riley Chow (Gold Derby) | Adam Chitwood (Collider) | Mark Johnson (Awards Circuit) |
Metacritic Survey | Pete Hammond (Deadline Hollywood) | Metacritic Expert Consensus |
Sasha Stone (Awards Daily) | Mark Johnson (Awards Circuit) | David Poland (Twitter) |
Anne Thompson (Indiewire) | Metacritic Survey | Sasha Stone (Awards Daily) |
Brian Whisenant (Awards Wiz) | Tom O’Neill (Gold Derby) | Susan Wloszczyna (Gold Derby) |
FYI, the Metacritic survey and the Metacritic Expert Consensus are two different things. The survey combines users’ predicted winners, while the Expert Consensus is a tally based on a group of experts they identify.
Cinematography | Costume Design | Film Editing |
Daniel Montgomery (Gold Derby) | Adam Chitwood (Collider) | Kyle Buchanan (NY Times Carpetbagger) |
Matt Noble (Gold Derby) | Pete Hammond (Deadline Hollywood) | Marcus James Dixon (Gold Derby) |
David Rothschild (Predictwise) | Metacritic Expert Consensus | Greg Ellwood (Playlist) |
Peter Travers (Rolling Stone) | Metacritic Survey | David Poland (Twitter) |
Keith Simanton (IMDB) | Tom O’Neill (Gold Derby) | Sasha Stone (Awards Daily) |
Makeup | Production Design | Visual Effects |
Scott Feinberg (Hollywood Reporter) | Riley Chow (Gold Derby) | Chris Beachum (Gold Derby) |
Matt Noble (Gold Derby) | Dave Karger (IMDB) | Adam Chitwood (Collider) |
David Rothschild (Predictwise) | Metacritic Expert Consensus | Scott Feinberg (Hollywood Reporter) |
Keith Simanton (IMDB) | Sasha Stone (Awards Daily) | Tariq Khan (Fox) |
Peter Travers (Rolling Stone) | Anne Thompson (Indiewire) | Sasha Stone (Awards Daily) |
Brian Whisenant (Awards Wiz) |
Score | Song | Sound (Previously Sound Editing) |
Riley Chow (Gold Derby) | Pete Hammond (Deadline Hollywood) | Tariq Khan (Fox) |
Tariq Khan (Fox) | Metacritic Survey | David Poland (Twitter) |
Metacritic Survey | Tom O’Neill (Gold Derby) | Sasha Stone (Awards Daily) |
Daniel Montgomery (Gold Derby) | Michael Patterson (Telluridenews) | Peter Travers (Rolling Stone) |
Brian Whisenant (Awards Wiz) | Keith Simanton (IMDB) | Susan Wloszczyna (Gold Derby) |
Documentary Short | Animated Short | Live Action Short |
Awards Daily General Consensus | Adam Chitwood (Collider) | Richard Brody (New Yorker) |
Scott Feinberg (Hollywood Reporter) | Edward Douglas (Coming Soon) | Kyle Buchanan (NY Times Carpetbagger) |
Gurus O’ Gold Consensus | Greg Ellwood (Playlist) | Riley Chow (Gold Derby) |
Tariq Khan (Fox) | Gold Derby Expert Consensus | Kevin Polowy (Yahoo) |
Nathaniel Rogers (Film Experience) | Sasha Stone (Awards Daily) | Paul Sheehan (Gold Derby) |
- Read the explanations of why a forecaster is going out on a limb.
I’ve learned quite a bit over the years by reading forecasters’ explanations of how they reached their conclusions. A good writer will support his/her claims with good data. You may not always be convinced, even after a thorough analysis of an Oscar category, but the read will at least help clarify your muddled mind when you’re trying to decide between two close choices.
The only problem is that these articles mix in a lot of “should win” talk, e.g., “Titanic will win Best Picture, but Beverly Hills Ninja should win.” After all, many Oscar predictors are film critics (or wannabe film critics), so picking their personal favorite only comes naturally. But it really has nothing to do with the “will win” conversation found in the same article. Ignore the editorializing and focus on the prognosticating.
- Take note of any last minute changes.
Take note if there’s a recent trend of folks jumping on a particular nominee’s bandwagon. This worked really well for me a few times. A good example: Julie Christie was the heavy favorite for Best Actress at the 2008 Oscars. (The experts had her on 82.2% of their ballots.) Then, as late as the day before the Oscars, a major Oscar expert switched to Marion Cotillard. And then a few more experts switched. Why? Was there word on the “street”? Did PricewaterhouseCoopers somehow screw up? I don’t really know. But the switch helped me win my Oscar pool. I was the only one who got that category right.
That said, be careful with this technique. It also burned me a few times. Switching to Emmanuelle Riva for Best Actress in 2012 didn’t work out, and I’m pretty sure I made other dumb last minute changes.
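If you log pick changes along with how many days remain before the ceremony, spotting a late bandwagon takes just a few lines; here’s a sketch (the log, names, and two-day threshold are all hypothetical):

```python
from collections import Counter

def late_momentum(switches, final_days=2):
    """switches: list of (days_before_ceremony, forecaster, new_pick),
    one entry per pick *change*. Returns nominees gaining late switches,
    biggest gainer first."""
    gains = Counter(pick for days, _, pick in switches if days <= final_days)
    return gains.most_common()

# Hypothetical log of pick changes.
switches = [
    (10, "Expert A", "Julie Christie"),
    (1,  "Expert B", "Marion Cotillard"),
    (0,  "Expert C", "Marion Cotillard"),
]
momentum = late_momentum(switches)   # [("Marion Cotillard", 2)]
```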
- Find differences in the expert consensus picks.
Let’s say the Gold Derby expert consensus says Nominee A will win, while the Gurus o’ Gold say it will be Nominee B. This conflict is ideal for finding a hidden gem. You want to find good forecasts that go against the grain. But you should use the other techniques described in this article to confirm you want to go for the underdog—a split between major groups only means more research is needed.
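Flagging those splits is trivial if you store each consensus as a category-to-pick mapping; a sketch with hypothetical data:

```python
def consensus_splits(source_a, source_b):
    """Categories where two consensus sources disagree -- candidates for
    hidden-gem research, not automatic picks."""
    return {
        cat: (source_a[cat], source_b[cat])
        for cat in source_a.keys() & source_b.keys()
        if source_a[cat] != source_b[cat]
    }

# Hypothetical consensus lists for illustration.
gold_derby = {"Picture": "Film A", "Director": "Person X"}
gurus = {"Picture": "Film A", "Director": "Person Y"}
splits = consensus_splits(gold_derby, gurus)   # {"Director": ("Person X", "Person Y")}
```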
- Do your own research by looking at various factors associated with past winners.
Here’s where I reveal my dirty little secret: I’m one of those “stat geeks” I described in the first article in this series. (Gad, you would have never guessed, right?) Every year, I compile all the new data and run my little program to predict the winners in those categories that don’t have “short” in their name. (I’m collecting data for the short categories, but the computer just gives me a dirty look when I try to predict a winner.)
My computer model is not bad, generally picking the same group of films that will likely end up on one of the expert consensus lists. I’m not sure it can beat the famous Ben Zauzmer’s algorithm, but it’s in the ballpark. I don’t follow my computer’s “advice” strictly—I usually change a couple of categories based on the more human-like techniques described elsewhere in this article.
So, below are the best predictors I found for each non-short category. I won’t bore you with the details of how much weight I assign to each predictor—I’ll just tell you they’re listed in order of importance. They’re here for your own use, e.g., at least consider looking at the recent history of the top predictor if you’re having trouble picking between two nominees.
Picture | Director |
Producers Guild Win | Directors Guild Win |
National Society of Film Critics Win | Director in the Film |
Number of Critic Award Wins | Director From Non-English Country |
SAG Ensemble Win | Number of Critic Award Wins |
Crime or Espionage Theme |
Actor | Actress |
Screen Actors Guild Win | Screen Actors Guild Win |
Golden Globe Drama Win | BAFTA Win |
Higher IMDB User Score | Golden Globe Win |
Critics Choice Win | Significant Physical Change in Role |
National Society of Film Critics Win | Las Vegas Film Critics Win |
Supporting Actor | Supporting Actress |
Screen Actors Guild Win | BAFTA Win |
Golden Globe Win | Screen Actors Guild Win |
BAFTA Win | Golden Globe Win |
Plays a Drunk or Junkie | Chicago Critics Win |
Chicago Critics Win | Dallas Critics Win |
No Prior Acting Wins |
Adapted Screenplay | Original Screenplay |
Writers Guild Win | Writers Guild Win |
Producers Guild Win | Golden Globe Win |
Important Social Issue | BAFTA Win |
Dallas Critics Win | Critics Choice Win |
Satellite Award Win | Dallas Critics Win |
Higher IMDB User Score | Not a Big Budget Film |
Animated Feature | Documentary | International Film |
Dallas Critics Win | Editors Guild Documentary Win | Golden Globe Win |
Editors Guild Animation Win | Writers Guild Documentary Win | BAFTA Win |
Number of Visual Effects Guild Wins | Los Angeles Film Critics Win | Disability/Dying Theme |
Los Angeles Critics Win | Sports Theme | Not R or NC-17 Rated |
National Board of Review Win |
Cinematography | Costume Design | Film Editing |
Critics Choice Win | Critics Choice Win | Editors Guild Drama Win |
Cinematography Guild Win | Producers Guild Win | BAFTA Win |
Las Vegas Film Critics Win | Story Takes Place Prior to 1950 | Has Several Fast Action Scenes |
Most Nominations in This Category | Number of Critic Award Wins | Show Business Theme |
Major Use of Computer Graphics | Bigger Budget |
Makeup | Production Design | Visual Effects |
BAFTA Win | Critics Choice Win | BAFTA Win |
Online Film/Television Critics Win | San Diego Critics Win | Most Nominations in This Category |
Most Nominations in This Category | Los Angeles Critics Win | Satellite Award Win |
Guild Period Makeup Win | Musical Theme |
Most Nominations in This Category |
Score | Song | Sound (Previously Sound Editing) |
Number of Critic Award Wins | Golden Globe Win | BAFTA Win |
Golden Globe Win | Online Film/Television Critics Win | Sound Editors Guild Effects Win |
Uses Untraditional Film Music | Not a Traditional Film Song | Also Nominated Best Film Editing |
Most Nominations in This Category | In a Disney or Pixar Film | Lots of Explosions |
Has No Previous Nominations |
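For the curious, the scoring idea behind the tables above boils down to a weighted sum of predictors per nominee; here’s a sketch (the weights and nominee data are made up for illustration—my real weights stay secret):

```python
def score_nominee(features, weights):
    """Weighted sum of binary predictors for one nominee.
    features: {predictor: True/False}; weights: {predictor: importance}."""
    return sum(w for p, w in weights.items() if features.get(p, False))

# Hypothetical Best Picture weights, in the article's order of importance.
weights = {
    "Producers Guild Win": 5.0,
    "National Society of Film Critics Win": 3.0,
    "Number of Critic Award Wins": 2.0,   # treated as binary here for simplicity
    "SAG Ensemble Win": 1.5,
}
nominee = {"Producers Guild Win": True, "SAG Ensemble Win": True}
score = score_nominee(nominee, weights)   # 5.0 + 1.5 = 6.5
```

Run it on every nominee in a category and the highest score is the model’s pick.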
- With much apprehension, use your gut for categories like Best Documentary and the three short categories.
We’re often moved by films the same way the voters are. This seems particularly true (at least for me) in the last few years when powerful films like 12 Years a Slave, Moonlight, and Parasite have been honored as Best Picture.
So when I see a film that particularly moves me—especially in the Documentary or short categories—I tend to think the Academy voters will feel the same way.
It’s worked for me a few times. For example, I thought a Best Live Action Short nominee, Stutterer, really stood out in 2016. Only 21.7% of the experts picked that film, compared to 36.2% for the frontrunner Ave Maria. And it worked in 2018 when I picked Heaven is a Traffic Jam on the 405 for Best Documentary Short. It was closer to the frontrunner Edith+Eddie—31.9% compared to 38.7%.
I suppose there are a few others I can’t think of now, but the reality is that all serious Oscar fans have at least a few of these in their memory. We tend to remember the ones we nailed more than the ones we screwed up. Even so, I still remember these painful misses: Kubo and the Two Strings over Zootopia (Animated Film), Garden Party over Dear Basketball (Animated Short), and For Sama over American Factory (Documentary).
So proceed with caution with this technique. I’d at least validate your gut feeling pick with one of the other methods listed in this article.
- Be lucky, my friend.
Sometimes 50-50 picks really are just coin flips, i.e., no matter how much you study each side of the coin, you will still be wrong half the time.
Finally, I’ll finish with just a few don’ts:
- Don’t pay attention to any votes by secret academy voters.
In a sort of annual event, Hollywood Reporter, Gold Derby, the Los Angeles Times, Entertainment Weekly, and/or Indiewire clue us in on “secret Oscar ballots.” These are reportedly actual anonymous Oscar voters. Maybe 10-20 “ballots” usually show up each year. So, maybe, just maybe, you can add up all the results and consider this a representative sample of the entire voting Academy. Right?
Wrong. Not only is 10-20 a very, very poor sample of almost 9000 Academy voters, but these folks in no way represent other Oscar voters. Some use the exercise as a way to gripe about something or other, e.g., “Little Women is un-American.” Others are intended to shock readers by how few films they’ve seen. Who knows—maybe some are studio hacks encouraged to “secretly” promote their studio’s nominees.
Whatever they are, they are not even intended to be predictions. Ignore them completely.
- Don’t be swayed by how “sure” a forecaster is of his/her pick.
Phrases like “sure thing,” “a lock,” and a “shoo-in” are signs that the forecaster is overstating his/her case. None of those things are true in the Oscar prediction business. Just ask Glenn Close after she lost Best Actress to Olivia Colman.
When reading prediction articles, watch for substantive evidence that supports the forecaster’s case. Just making your case louder doesn’t make it better, EVEN IF IT’S IN ALL CAPS!
- Don’t use your personal judgement if you can help it.
I know, I just said above that it’s possible to watch a film and get a sense of whether the Academy will vote for it. Categories like Documentary and the shorts tend to go to films with “feel good” endings, or “aha” moments. But what feels good to you might not strike the average Academy voter the same way. And that “aha” moment for you, may be old hat for a grizzled veteran voter. I’d only use this kind of gut feeling if other information supported a win for the nominee.
If personal judgement has any value, it might be for ruling contenders out. Have you ever seen one of those abstract animated shorts that just bored the daylights out of you? Well, that might be an indication that a lot of voters were also bored and it probably won’t win. (Yeah, right.)
- Don’t pay attention to who’s “due.”
Gad, I hear this one every year. (Quite often in any conversation about Glenn Close.) Of course, there’s no rule that says you automatically become a winner after a certain number of tries. Peter O’Toole, Richard Burton, Deborah Kerr, and Thelma Ritter all had at least six acting nominations without ever winning one before they died. Amy Adams also has six, but at least she’s alive and has a chance.
The great cinematographer Roger Deakins was particularly “due” when he finally won an Oscar on his 14th try. Of course, folks were talking about him being “due” for most of those prior unsuccessful tries.
Thomas Newman is a composer nominated 15 times without a win. Like Roger Deakins, he’s hoping he’ll eventually be sufficiently “due” to be a winner.
And then there’s Greg P. Russell, who has been nominated 17 times for Sound Mixing. And now that they’ve eliminated that category, poor Greg will never win it!
I’ve actually tracked this in my spreadsheets. There is no such thing as being “due.” Oscars are given for lots of odd reasons, but no bloc of voters gets together and decrees it’s so-and-so’s turn.
- Sorry, but don’t believe you’re going to suddenly dominate all contests just because you read this or any other article.
You’ll have a great shot at winning your 5-10 person pool—maybe even one a little larger (especially if your competitors mostly go on gut instinct). But this Oscar prediction stuff is not easy. Winning bigger contests is usually a matter of luck. Hey, if you win, you can thank me. But if you lose, please blame lady luck.
Finally, below are my 2021 picks based on this methodology. They may not be perfect, but they’re a whole lot better than picking nominees alphabetically.
Picture: Nomadland
Director: Chloe Zhao (Nomadland)
Actress: Carey Mulligan
Actor: Chadwick Boseman
Supporting Actor: Daniel Kaluuya
Supporting Actress: Youn Yuh-Jung
Adapted Screenplay: Nomadland
Original Screenplay: Promising Young Woman
Animated Feature: Soul
Documentary: My Octopus Teacher
International Film: Another Round
Cinematography: Nomadland
Costume Design: Ma Rainey’s Black Bottom
Film Editing: The Trial of the Chicago 7
Makeup/Hairstyling: Ma Rainey’s Black Bottom
Production Design: Mank
Score: Soul
Song: Speak Now (One Night in Miami)
Sound: Sound of Metal
Visual Effects: Tenet
Animated Short: If Anything Happens I Love You
Documentary Short: A Concerto is a Conversation
Live Action Short: Feeling Through (Two Distant Strangers is the smarter pick, but this is one of those rare times I let my experience watching the movie persuade me.)