The other important thing I want to say is that if the Comey quote is true, he should have actually listened to the good election forecasts, which showed the number was over 70 percent. So that becomes an argument for paying attention to forecasts.
Well, what is a “good” forecast? If we go back to 2016, as you say, Nate Silver’s model gave Trump a 30 percent chance of winning, while other models put his odds at 1 percent or in the low single digits. The feeling is that because Trump won, Nate Silver was “right.” But of course we can’t really say that. If you say something has a 1-in-100 chance and it happens, that could mean you were underestimating it, or it could just mean the 1-in-100 event occurred.
This is the problem with figuring out whether election prediction models are properly calibrated against real-world events. Even going back to 1940, we only have about 20 presidential elections in our sample. So there is no real statistical justification for a very precise probability here. A 97 percent chance versus 96: with such a limited sample it’s insanely difficult to know whether these models are calibrated to within 1 percentage point. This whole exercise is much more uncertain than I think the press would lead the consumers of polls and forecasts to believe.
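To make the sample-size point concrete, here is a minimal sketch (not from the interview, just an illustration) of why roughly 20 elections cannot distinguish a forecast calibrated at 97 percent from one calibrated at 96 percent: the statistical noise in the observed hit rate is several times larger than the 1-point difference being tested.

```python
import math

def hit_rate_std_error(p: float, n: int) -> float:
    """Standard error of the observed frequency when n independent
    events are each forecast at probability p (binomial model)."""
    return math.sqrt(p * (1 - p) / n)

n = 20  # roughly the number of presidential elections since 1940
for p in (0.97, 0.96):
    se = hit_rate_std_error(p, n)
    print(f"forecast p={p}: expected hit rate {p:.2f} +/- {se:.3f}")

# The two claimed probabilities differ by only 0.01, but the standard
# error at n=20 is about 0.04 -- four times larger -- so the data
# cannot tell the two calibrations apart.
```

The exact binomial model is a simplification (elections are not independent, identically forecast events), but the order-of-magnitude gap between a 1-point calibration difference and the noise from a 20-event sample is the point.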
In your book, you talk about Franklin Roosevelt’s pollster, who was a genius at polls early on — but even his career went up in flames later, right?
This man, Emil Hurja, was Franklin Roosevelt’s pollster and election forecaster. He invented the first aggregate of polls and the first tracking poll. He’s a fascinating character in the history of polling, and insanely accurate at first. In 1932, he predicts that Franklin Roosevelt will win by 7.5 million votes, even though other people predict Roosevelt will lose. Roosevelt wins by 7.1 million votes, so Hurja is better calibrated than the other pollsters of the time. But then he flops in 1940, and later on he’s no more accurate than the average pollster.
In investing, it is difficult to beat the market over a long period of time. Likewise, polling requires you to constantly rethink your methods and your assumptions. Although Emil Hurja was called “the wizard of Washington” and “the Crystal Gazer of Crystal Falls, Michigan” early on, his record deteriorated over time. Or maybe he just got lucky early on. In retrospect, it’s hard to know whether he really was a genius predictor.
I bring this up because – well, I’m not trying to scare you, but it could be that your biggest mistake is still ahead of you.
That’s kind of the lesson here. What I want people to understand is that just because the polls were biased in one direction in the past few elections doesn’t mean they will be biased the same way, for the same reasons, in the next election. The smartest thing we can do is read each poll with an eye on how the data was generated. Are the questions well formulated? Does the sample reflect Americans’ demographic and political makeup? Is the outlet reputable? Is there something going on in the political environment that could cause Democrats or Republicans to pick up the phone or answer online surveys at higher or lower rates than the other party? You should weigh all these possibilities before accepting the data. And so that’s an argument for treating polls with more uncertainty than we have in the past. I think that’s a pretty obvious conclusion from the last election. But more important is understanding how pollsters arrive at their numbers. They are ultimately uncertain estimates; they are not a basic truth about public opinion. And that’s how I want people to think about them.