February 25, 2016
Data Marketing

How Data-Driven Marketing is Like Predicting the Oscars

The Academy Awards are less than a week away, which means movie enthusiasts are making their final picks for this year’s Oscar pools.

Will Revenant director Alejandro Iñárritu become the first back-to-back Best Director winner in more than six decades? And will Leonardo DiCaprio, winless in four previous acting nominations, finally earn his gold statue?

We’ll know soon enough. But in the meantime, some people are putting serious math behind their predictions. Like sports, elections and other contests that command public interest, the Oscars have inspired a cottage industry of data-driven prognosticators. According to leading Oscar-math experts, here are the leaders headed into this weekend’s big race:

  • Best Picture: The Revenant [1]
  • Best Director: Alejandro Iñárritu, The Revenant [2]
  • Best Actor and Actress: Leonardo DiCaprio, The Revenant [3]; Brie Larson, Room [4]
  • Best Supporting Actor and Actress: Sylvester Stallone, Creed [5]; Kate Winslet, Steve Jobs [6]

If you’re a marketer trying to turn consumer data into strategic insights, there are lessons to be learned from these Oscar watchers’ efforts.

1. Key performance indicators are mandatory.

Oscar prediction techniques vary tremendously (more on this later), but almost all of them involve at least one common ingredient: correlations to past winners.

Critical reception might seem like an important factor, for example, until you consider that this year’s top-reviewed film, Carol, wasn’t even nominated for Best Picture, and that the Academy’s Best Picture winner hasn’t matched critics’ “best reviewed” film this decade.

Popularity might also seem important. But despite its status as the highest-grossing film in U.S. history, Star Wars: The Force Awakens earned nominations only for technical awards [7], and Inside Out, the film that might boast the best balance of critical raves and box office bona fides, was relegated to the Animated Film category.

Put simply, if you’re going to use data to make a prediction, your data needs to reflect whatever you want to predict. If you want to predict this year’s winners, your formula needs to include past years’ winners.

What does this teach us about data-driven marketing? At some level, all marketing metrics are an attempt to explain movement in a core KPI, such as revenue or user growth. We often assume things like high tweet volume are relevant to these KPIs, but when we do, we’re acting like those Oscar fans who think box office is a prevailing factor with the Academy. If you want to use social media data to devise revenue-growing strategies, you need to understand how your data correlates to revenue.
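That correlation check is something you can sanity-test in a few lines before building a strategy on a metric. Here is a minimal sketch; the weekly figures and variable names are invented for illustration, not real campaign data:

```python
# Sketch: does a social metric (weekly mentions) actually track a core KPI
# (weekly revenue)? All numbers below are made up for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed without external libraries."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

weekly_mentions = [120, 340, 150, 900, 880, 200]        # hypothetical mention counts
weekly_revenue = [10.1, 10.4, 10.0, 10.3, 10.2, 10.1]   # hypothetical $k per week

r = pearson_r(weekly_mentions, weekly_revenue)
# A value near +/-1 suggests a real relationship worth investigating;
# a value near 0 suggests the metric is noise relative to this KPI.
print(f"mentions-vs-revenue correlation: {r:.2f}")
```

Correlation alone doesn’t prove causation, of course, but a metric that doesn’t even correlate with your KPI is a weak foundation for strategy.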

2. A lot of the data won’t matter.

Though past winners are important, they’re not equally important. If you want to predict this year’s winners, it’s not very useful to know that How Green Was My Valley won Best Picture over Citizen Kane in 1941. Since then, most - maybe all - of the Academy’s voting membership has changed, and the way votes are counted has been altered several times.

It is useful to know Directors Guild and Producers Guild winners have been highly predictive of recent Best Picture winners. Past data is only useful if it says something about the present.

What does this teach us about data-driven marketing? Only about 20% of consumer social data has explanatory and predictive value. Spam bots, marketing messages, and endlessly recycled song lyrics are legion. Only a sliver of the data involves organic consumer conversations relevant to a brand. If you don’t know whom your analytics represent and how the data’s been filtered, you’re like an Oscar fan who thinks old data is just as relevant as new data.
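A crude first-pass filter along those lines might look like the sketch below. Real analytics pipelines use far richer signals; the rules, field names, and sample posts here are invented for illustration:

```python
# Sketch: separating organic consumer posts from spam, marketing copy,
# and recycled content. Rules and example posts are hypothetical.

SPAM_MARKERS = ("buy now", "click here", "free followers")

def looks_organic(post):
    """Very rough heuristic: keep posts that avoid obvious noise signals."""
    text = post["text"].lower()
    if any(marker in text for marker in SPAM_MARKERS):
        return False                      # marketing / spam language
    if post.get("is_bot"):
        return False                      # flagged automated account
    if post.get("duplicates", 0) > 100:
        return False                      # endlessly recycled content (e.g. song lyrics)
    return True

posts = [
    {"text": "Loved the new camera on this phone", "is_bot": False},
    {"text": "CLICK HERE for free followers!!!", "is_bot": False},
    {"text": "la la la, same lyric again", "is_bot": False, "duplicates": 5000},
]

organic = [p for p in posts if looks_organic(p)]
print(len(organic), "of", len(posts), "posts kept")  # → 1 of 3 posts kept
```

The point isn’t these particular rules; it’s that some filter is always operating on your data, and you should know what it keeps and what it throws away.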

3. Methodology matters.

As mentioned, Oscar prediction approaches vary wildly. Some experts, such as Sasha Stone at Awards Daily, regularly survey the historical landscape but sometimes pick with their guts. Other experts, such as Nate Silver’s team at FiveThirtyEight or The Hollywood Reporter’s Ben Zauzmer, build mathematical models that rely on some combination of past winners at not only the Academy Awards but also other awards shows.

The models measure the various other shows for predictive value and assign them a weight. This weight would indicate, for example, that The Big Short winning the Producers Guild award is more important than it losing at the Golden Globes. The models tend to score much better than average, but they’re not without flaws. FiveThirtyEight’s model suggests Idris Elba should be preparing an acceptance speech for his performance in Beasts of No Nation, but he controversially was not nominated.
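The weighting mechanic is simple to sketch. The weights below are hypothetical, not any forecaster’s actual numbers; the award results, though, match those mentioned in this article (The Revenant’s Directors Guild and BAFTA wins, The Big Short’s Producers Guild win):

```python
# Sketch of a weighted precursor-awards score for Best Picture.
# Weights are hypothetical: industry guild awards count for more than
# press awards, per the pattern the forecasters describe.
PRECURSOR_WEIGHTS = {
    "producers_guild": 3.0,
    "directors_guild": 3.0,
    "bafta": 2.0,
    "golden_globes": 1.0,   # press award: historically less predictive
}

def score(wins):
    """Sum the weights of the precursor awards a film actually won."""
    return sum(PRECURSOR_WEIGHTS[award] for award in wins)

films = {
    "The Revenant": ["directors_guild", "bafta", "golden_globes"],
    "The Big Short": ["producers_guild"],
}

for film, wins in sorted(films.items(), key=lambda kv: -score(kv[1])):
    print(f"{film}: {score(wins):.1f}")
# → The Revenant: 6.0
# → The Big Short: 3.0
```

Real models fit those weights from historical data rather than asserting them, which is exactly why the choice of history (point 2 above) matters so much.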

These flaws leave room for improvement, and a new breed of creative stats experts -- profiled in an excellent FiveThirtyEight series -- is pushing the methodologies in diverse new directions. Some intriguing examples:

  • A Yale chemist’s model predicts The Revenant will win, but not necessarily because it won the Directors Guild and Screen Actors Guild prizes. Rather, her model looks for the online language patterns used to describe past Oscar winners, then uses those patterns to predict this year’s winners.
  • An Ernst & Young team’s technique attempts to model the “worldview” of Oscar voters by analyzing publications those voters are likely to read, evaluating the publications’ perspectives, and conducting a text analysis to find nominated films that appeal to those perspectives. This model favors The Big Short, whose satiric take on the financial crisis seems like a match for left-leaning Hollywood, but the model also predicts that Bryan Cranston will win the Best Actor statue for his performance in Trumbo, which would be a colossal upset.
  • Another model analyzes not only past data from the Oscars and other awards but also variables such as whether a film is nominated in other categories. A Best Picture win without a Best Director nomination is rare but not impossible, as Argo showed a few years ago. Without the slew of industry awards that Argo accrued, the odds become even slimmer, so don’t count on The Martian to walk away with the big prize.

The variety of techniques and results demonstrates why it’s important to understand methodology. The Ernst & Young model, for example, defines “worldview” in terms of only two Los Angeles-area publications. It’s something of a stretch to assume these publications truly reflect Hollywood, let alone the somewhat more geographically dispersed Academy voting body. It’s also unclear how precisely the textual analysis can define a “worldview.” It’s easy to see how Trumbo, a Hollywood-centric biopic, could over-index under this sort of methodology.

In short, some methodological approaches are accurate for some categories but not for others. Some approaches might yield insights into other questions, such as media impact on Oscar voting, but not necessarily into the likely winners themselves.

What does this teach us about data-driven marketing? Don’t just know what your analytics solution seems to measure, or even what the vendor says it measures. Many metrics have value, but only if they’re viewed in the right context. You don’t have to be a statistician, but you have to understand how your data methodology creates insights.

Most analytics solutions are useful at measuring something—but it’s not always the thing you want to measure. Remember, the only thing worse than the wrong answer is the right answer to the wrong question.

Takeaway Questions:

  1. When I build a strategy around a data insight, what KPI is that insight connected to?
  2. How does my analytics methodology distinguish data that matters from data that doesn’t?
  3. How do I know if my analytics solution is measuring online noise, abstracted metrics, or revenue-relevant data?

--

1: With a win at the BAFTAs (the British Academy) and leading Director and Actor contenders in Iñárritu and DiCaprio, The Revenant is the best-positioned in most statistical models.

Still, many of those models indicate this is a relatively close year. As Ben Zauzmer notes, The Revenant lacks several important predictors, such as a Writers Guild nomination. With its Producers Guild win, The Big Short could play spoiler. Spotlight also can’t be counted out; it lacks some of the other films’ passionate advocates but is relatively broadly admired, which could make a difference, given that the Academy’s arcane vote-counting methods typically penalize more divisive films.

2: With the Directors Guild win, Iñárritu is the clear favorite, though Mad Max: Fury Road director George Miller has some statistical heat from early critics awards. Industry awards are usually better Oscar predictors than press awards, however.

3, 4: In most Oscar forecasters’ views, this race really isn’t close.

5, 6: Supporting actor is a mess from a statistical viewpoint. The winner of the Screen Actors Guild award would normally be the favorite, but as mentioned, the Academy somehow forgot to nominate Idris Elba. Stallone is a sentimental favorite, but don’t be surprised by an unexpected result. Supporting actress is also a bit murky: most models are split between Winslet and Alicia Vikander in The Danish Girl. We went with Winslet on a coin toss.

7: Something to watch for: With five nominations, Star Wars: The Force Awakens will have several chances to win, with Best Score and Best Visual Effects being particularly likely. But if it strikes out, it will be the first time a film managed to set a new record for “biggest film in U.S. history” while failing to win an Oscar.