Friday, August 24, 2012

Electoral volatility due to different questions?

Poll volatility 

Below you find the summed changes in parliamentary seats across all parties between consecutive opinion polls for the four main polling firms in the Netherlands in the lead-up to the 2012 elections. I will update the table over the coming weeks.
This overview follows from my earlier post on Dependent Interviewing. Maurice de Hond (peil.nl) is the only pollster that pre-loads earlier voter preferences into its survey questions. I expect this to lead to less volatility in voter preferences for Maurice de Hond than for the other polling firms.

Update September 12th: With the final polls out on election day, it seems that the polls of Maurice de Hond are indeed the most stable over time. In my previous post I argued this was because he uses Dependent Interviewing in his question "what would you vote if there were elections today?". Still, I would have expected a larger effect. Let's see tomorrow which polling firm did best. My bet: Synovate, because they use the most sound (although still not perfect) polling methodology. More on that tomorrow...



                de Hond      Intomart/     Synovate   TNS-NIPO
                (peil.nl)    de stemming
week 23 (03-06) 4            -             8          -
10-06           8            -             -          -
17-06           6            -             8*         -
24-06           4            -             -          14**
01-07           10           -             12*        8
08-07           12           -             12         12
15-07           6            8             10         12
22-07           2            10            -          6
29-07           4            -             8*         6
05-08           2            -             -          10
12-08           10           10**          8*         16
19-08           8            8             12         8
26-08           6            20            8          16
03-09           22           20            12         14
10-09           26           10            16         20

average change  8.7          12.2***       10.3***    11.8

* two-week difference
** three-week difference
*** rounded down due to inclusion of multi-week changes
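For readers who want to reproduce the measure, here is a minimal sketch of how I read it: sum the absolute seat changes over all parties between two consecutive polls, then average those sums across the campaign. The party names and seat numbers below are hypothetical, not actual poll results.

```python
def poll_volatility(prev, curr):
    """Summed absolute change in projected seats across all parties
    between two consecutive polls."""
    parties = set(prev) | set(curr)
    return sum(abs(curr.get(p, 0) - prev.get(p, 0)) for p in parties)

# Hypothetical seat projections for three consecutive weekly polls
polls = [
    {"VVD": 36, "PvdA": 24, "SP": 30, "PVV": 18},
    {"VVD": 34, "PvdA": 28, "SP": 28, "PVV": 18},
    {"VVD": 36, "PvdA": 30, "SP": 22, "PVV": 18},
]

changes = [poll_volatility(a, b) for a, b in zip(polls, polls[1:])]
average_change = sum(changes) / len(changes)
print(changes)         # [8, 10]
print(average_change)  # 9.0
```

Note that the two- and three-week differences in the table are single observations over longer intervals, which is why the starred averages are not directly comparable to weekly ones.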

Wednesday, August 15, 2012

Dependent Interviewing, and stability in opinion polls

I was re-reading one of the papers I wrote as part of my dissertation on survey data quality in panel surveys. The paper deals with the effects of the introduction of an interviewing technique called Dependent Interviewing in the British Household Panel Survey. I wrote this paper together with Annette Jackle, and if you are interested after reading the next bit, you can download a working paper version of it here.

Dependent Interviewing feeds respondents' answers from earlier interviews back into the survey questions. Instead of asking respondents every year "what types of income do you receive?", you can also ask them:

"last year, you told us that you receive income from your private pension plan, the state pension, as well as income from renting out a house. Is this still the same?"

There are of course multiple ways in which you can use information like this, and the BHPS actually uses Dependent Interviewing in a slightly more sophisticated way, but the basic idea is in my opinion quite intuitive. Why would you ask the same questions time and time again, when you already know so much about respondents?
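The basic mechanics can be sketched in a few lines of code. This is my own illustration of the idea, not the BHPS instrument; the wording and the helper name are invented:

```python
def dependent_question(prior_sources):
    """Build a proactive Dependent Interviewing question that feeds
    last wave's answers back to the respondent (hypothetical wording)."""
    if not prior_sources:
        # First wave, or no prior data: fall back to the standard question
        return "What types of income do you receive?"
    listed = ", ".join(prior_sources[:-1])
    if listed:
        listed += ", as well as " + prior_sources[-1]
    else:
        listed = prior_sources[-1]
    return ("Last year, you told us that you receive income from "
            f"{listed}. Is this still the same?")

print(dependent_question(
    ["your private pension plan", "the state pension",
     "renting out a house"]))
```

A confirming answer keeps last wave's values; only respondents whose situation changed need to report anew.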

The paper we wrote documents the effects on data quality, and specifically investigates the effect of Dependent Interviewing on measures of household income. In short, the effects are not huge, but Dependent Interviewing turns out to be especially effective for poorer households. These households depend to a large extent on various government transfers, which are easily forgotten or underreported. When the effects of Dependent Interviewing are taken into account, the poorer households become a little richer, and so, all in all, poverty is a little lower than previously estimated.

Perhaps interesting to Dutch readers: one of the main pollsters in the Netherlands, Maurice de Hond, also uses Dependent Interviewing in his surveys (on all questions!). I am a member of his panel, and when I complete a survey, I only have to change answers if I want to; otherwise I just confirm my answers from previous waves.

I see why Maurice de Hond has chosen to do this. Electoral preferences are very volatile, and panel surveys on voter preferences are perhaps too volatile. But I have serious doubts whether Dependent Interviewing solves volatility here; it rather creates artificial stability. In the first week of July, Maurice de Hond polled an average weekly change of 10 seats; Ipsos Synovate (see my earlier posts on why I trust them most) polled 12. That is actually a small difference. Many newspapers follow and criticise the actual poll results. I'll try to keep you updated on volatility across the polls, meanwhile trying to answer the question whether one should trust stable polls or volatile polls.