
I'm on the nerd side too, but I was quite surprised. Sampling isn't the only source of error in election polling - people also change their minds. In this election, people decided whom they were going to vote for historically early - the smallest share of respondents in the history of exit polls said they decided in the week before the election. In theory, that should narrow the probability distribution, and yet the polls were off again, in the same direction they are always off, by a little more than average.
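For context on the sampling part alone: the standard 95% margin of error for a simple random sample is 1.96 * sqrt(p(1-p)/n). A minimal sketch (the poll size is illustrative), which shows why sampling error alone can't explain repeated multi-point misses in the same direction:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of 1,000 respondents at 50/50 support:
moe = margin_of_error(0.5, 1000)
print(round(moe * 100, 1))  # ~3.1 points, and random - not systematically one-sided
```

Random sampling error should cancel out across many polls; a consistent one-directional miss points at something else, like the weighting issues below.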

A little scrutiny here is worthwhile. In 2016, for example, the polls were off in part because pollsters weighted the black demographic in proportion to its turnout in 2008 and 2012 (when a black candidate was running, boosting turnout). Actual 2016 black turnout looked more like 2004 or 2000 - other elections where a black candidate wasn't running. Though I haven't dug into the 2020 polling data, it wouldn't surprise me if a similar effect were at play.
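To illustrate why that weighting choice matters: a poll's topline is roughly a turnout-weighted average of group-level support, so the turnout assumption directly moves the headline number. A hedged sketch with made-up figures (none of these are real 2016 or 2020 numbers):

```python
def weighted_support(support_by_group: dict, turnout_shares: dict) -> float:
    """Topline support implied by group-level support and a turnout model."""
    return sum(support_by_group[g] * turnout_shares[g] for g in support_by_group)

# Illustrative only: support within each group, and two turnout assumptions.
support = {"black": 0.89, "other": 0.42}
turnout_2012_style = {"black": 0.13, "other": 0.87}  # high-turnout assumption
turnout_2004_style = {"black": 0.11, "other": 0.89}  # lower-turnout assumption

print(round(weighted_support(support, turnout_2012_style), 3))  # 0.481
print(round(weighted_support(support, turnout_2004_style), 3))  # 0.472
```

A two-point shift in one group's assumed turnout share moves the topline by about a point here - enough to flip a close race's forecast without any sampling error at all.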

It's also become noticeably harder to do polling. Once upon a time, everybody had a landline, so random dialing worked really well. Now some people have more phones than others, and people are increasingly unwilling to talk to pollsters, both of which increase the error.

These are all important things to talk about - when you dismiss it as "oh yeah, polls are never perfect," you prematurely shut down the conversation. Don't let perfect be the enemy of the good. The polls could be better and it's important to talk about how.



There is also "what polls directly measure" and "what we infer from them".

Besides the usual adjustments for likely voter models, we also do things like infer how people will vote in House and Senate elections based on Presidential polling, because we do a lot more Presidential polling than down ballot, because money.

So one early thing that seems to have happened this year (and again, it's still too early in the process for final "what happened" verdicts) is that people split their ballots more than anticipated, which means the inferences we were making about down-ballot races from Presidential performance were off.
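The kind of inference described above can be sketched as a toy model: assume some fixed rate of ticket splitting and project a down-ballot share from Presidential support. All numbers here are hypothetical, and real forecasters use far richer models:

```python
def inferred_downballot(pres_support: float, split_rate: float) -> float:
    """Down-ballot share implied by Presidential support under a simple,
    symmetric ticket-splitting model: a fraction `split_rate` of each
    candidate's Presidential voters defects to the other party down ballot."""
    return pres_support * (1 - split_rate) + (1 - pres_support) * split_rate

pres = 0.52  # hypothetical Presidential poll share
print(round(inferred_downballot(pres, 0.03), 3))  # assume 3% splitting -> 0.519
print(round(inferred_downballot(pres, 0.08), 3))  # actual 8% splitting -> 0.517
```

Even in this symmetric toy version, more splitting than assumed pulls the down-ballot estimate toward 50/50; asymmetric splitting (voters of one side defecting more) moves it further still, which is how a decent Presidential poll can still produce a bad Senate forecast.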

So not quite a polling error, but an example of uncertainty that may not have been correctly taken into account in forecasting.


<< These are all important things to talk about - when you dismiss it as "oh yeah, polls are never perfect," you prematurely shut down the conversation. Don't let perfect be the enemy of the good. The polls could be better and it's important to talk about how.

I think the complaint is that here the polls were not just imperfect. At best, they were misleading - raising questions about methodology (lessons, apparently, not learned from 2016), wishful thinking, and their usefulness - or, at worst, attempts at influencing the desired outcome.

I can absolutely agree that polling is not an exact science, but this is now twice in a row that polling has grossly misread the mood of the nation.



