Monday, January 24, 2011

how to do an exit-poll

There are several ways to conduct an exit poll, but they all come down to asking people how they voted, right after they leave the voting booth. The first successful modern exit poll was conducted in 1967 to predict the Kentucky governor's election.

One of the difficulties in exit polling is that some people might not want to say whom they voted for, especially if that person is politically controversial. This might be one of the reasons why Geert Wilders, and the PVV in general, always underperform in Dutch exit polls.

A second difficulty is selecting the polling stations. Good exit polls do this either randomly or, even better, by stratified sampling. Stratified sampling is particularly important when voting behavior has a strong regional component. For example, a random selection of polling stations in the Netherlands might by chance exclude all localities in the 'bible belt', where people often vote for the SGP, leading to an under-representation of SGP voters. Stratifying on past voting behavior in polling stations increases statistical power, so that fewer polling stations are needed to achieve the same margin of error.
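The gain from stratification can be seen in a small simulation. This is only an illustrative sketch with made-up numbers (a hypothetical 'bible belt' stratum with a high SGP vote share, and all other stations), not real Dutch data: it compares the standard error of a simple random sample of polling stations with a stratified sample of the same size.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 1,000 polling stations in two strata:
# 100 'bible belt' stations with a high SGP vote share, 900 others.
bible_belt = [random.gauss(0.35, 0.05) for _ in range(100)]
other = [random.gauss(0.01, 0.005) for _ in range(900)]
stations = bible_belt + other
true_share = statistics.fmean(stations)

def srs_estimate(n=50):
    """Estimate the national SGP share from a simple random sample of n stations."""
    return statistics.fmean(random.sample(stations, n))

def stratified_estimate(n=50):
    """Sample each stratum separately (proportional allocation), then weight by stratum size."""
    n_bb = max(1, round(n * len(bible_belt) / len(stations)))
    n_ot = n - n_bb
    est_bb = statistics.fmean(random.sample(bible_belt, n_bb))
    est_ot = statistics.fmean(random.sample(other, n_ot))
    return (est_bb * len(bible_belt) + est_ot * len(other)) / len(stations)

# Standard error of each design over repeated samples
srs_se = statistics.stdev(srs_estimate() for _ in range(2000))
strat_se = statistics.stdev(stratified_estimate() for _ in range(2000))
print(f"true share {true_share:.3f}, SRS s.e. {srs_se:.4f}, stratified s.e. {strat_se:.4f}")
```

Because stratification removes the between-stratum component of the variance (the chance of drawing too many or too few bible-belt stations), the stratified design achieves a noticeably smaller standard error with the same number of stations.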

In the past, exit polls were conducted exactly like this: interviewers asking voters at the polling station. Over time, market research firms switched first to telephone surveys, and later to Internet surveys, to do their exit polls. TNS NIPO, among others, relied on its access panel to predict the election results. This once again shows that people who voluntarily join access panels cannot be used to produce good statistics for the general population.
Wisely, the Dutch news stations (ANP, NOS, RTL) chose to do a proper, old-school exit poll in 2010. See this post for details (in Dutch).

So what, one might ask? Why worry about the crappy polls? Can't we just ignore them and focus on the polls that do a good job? Alas, people are heavily influenced by the polls in the media in the period leading up to an election. More on this, and on strategic voting, next time.

Monday, January 17, 2011

predicting elections

Opinion pollsters do a lousy job of predicting elections. For a good read, see for example the New Hampshire primary in 2008, when all polls predicted Obama to win, but it was Clinton who won (albeit by a slim margin).

In the Dutch context, there are three main polling firms, which each do equally well (or badly). Out of a hundred and fifty parliamentary seats, one firm mispredicted 20, while TNS NIPO and Synovate shared the honor of only missing the target by 16 seats in the 2010 parliamentary election. These polls were conducted the day before the election, and some of the pollsters said that people might have changed their vote at the last minute. That may very well be, but even the exit polls on the night of the election were wrong: one was 17 seats off and TNS NIPO 15. Only Synovate did a lot better, missing the true result by just 3 seats. I will discuss why this is in a later post, but in short it is a matter of speed and low costs versus quality.
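One common way to score such a prediction is the sum of absolute seat differences across parties. The post does not spell out which convention the quoted numbers use, so this is just a sketch of that tally, with made-up party names and seat counts:

```python
def seats_off(predicted, actual):
    """Sum of absolute seat differences across all parties in either dict."""
    parties = set(predicted) | set(actual)
    return sum(abs(predicted.get(p, 0) - actual.get(p, 0)) for p in parties)

# Hypothetical example (made-up numbers, not the 2010 results):
pred = {"A": 30, "B": 25, "C": 20}
actual = {"A": 33, "B": 24, "C": 18}
print(seats_off(pred, actual))  # 3 + 1 + 2 = 6
```

Note that under this convention every misallocated seat is counted twice (once for the party that got it in excess, once for the party it was taken from); some analysts divide the total by two for that reason.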

And we have known for a long, long time how to do exit polls. Although there was a public outcry in the UK when the exit poll predicted that the Liberal Democrats would not win the election, it was spot on. If we know how to do it, then why don't we?

Monday, January 10, 2011

first past the post

Dear all,

With a new year come new year's resolutions. I have been working as a survey methodologist for about the last five years. I teach and I do research. Teaching gives instant rewards, or at least instant feedback. I like that. Doing research, however, is a different matter. It is a slow and sometimes agonizing process of muddling through (for me).

Studies remain in review forever, some never make it into a publication at all, and some of my ideas or views never make it onto paper. I hope this blog fills that gap.

I will write in English, but might occasionally do so in Dutch if I feel like it. As far as content goes, I'm not sure where all of this will lead. I might post very academic things frequently, or I might publish only every once in a while.

As a survey methodologist, my view is that data matter. Policy makers and academics too often use data without really knowing how the data were gathered, and whether they are trustworthy. Over the past five years, my experience has been that data quality is often low, leading to badly informed or even wrong decisions. Data quality is far more important than fancy statistical models or cool graphs. Hopefully you will enjoy my adventures in the jungle of improving survey data quality.

Your singalong survey methodologist,