Monday, February 28, 2011

studying mode effects

Mode effects - the fact that respondents answer a survey question differently solely because of the mode of interviewing - are hard to study. This is because mode effects interact with nonresponse effects: an Internet survey will attract different respondents than a telephone survey. As a result, any difference between the two surveys could be due either to differences in the types of respondents or to a genuine mode effect.

There are three basic methods to study mode effects. The most common one is:

1. to experimentally assign respondents to a survey mode. The results from the two surveys are then compared: the response rates, the demographic composition of the samples, and finally differences in the dependent variables. Sometimes, demographic differences between the samples are corrected using weighting or another multivariate adjustment. For an overview, see the results of this Google Scholar search.

This type of design is popular, but in my view it has a serious drawback: we know that Internet samples and telephone surveys each cover only part of the population. Landline telephone coverage is rapidly declining, while Internet use remains limited to about 80 per cent of the general population in Western countries. There are two alternative approaches that deal with this issue.
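As a minimal sketch of the kind of comparison described under 1, suppose we have a dataset with a randomly assigned mode indicator, a few demographic background variables, and one attitudinal outcome. All variable names below are hypothetical, and the regression adjustment stands in for whatever weighting or multivariate correction one prefers:

```python
# Sketch only: compare an attitudinal outcome across experimentally assigned modes,
# first unadjusted, then adjusted for demographic composition.
# Hypothetical columns: 'mode' (0 = telephone, 1 = web), 'age', 'female',
# 'education', and the outcome 'attitude'.
import pandas as pd
import statsmodels.formula.api as smf

def compare_modes(df: pd.DataFrame) -> None:
    # Unadjusted difference: mixes the mode effect with compositional differences
    print(df.groupby("mode")["attitude"].mean())

    # Regression adjustment: the coefficient on 'mode' is the mode difference
    # net of the measured demographics (only a partial correction, of course)
    fit = smf.ols("attitude ~ mode + age + female + C(education)", data=df).fit()
    print(fit.params["mode"], fit.bse["mode"])
```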

2. One can make respondents switch modes during the interview, for example from the telephone to the Internet, or from face-to-face to paper-and-pencil. Although this approach sounds very simple, relatively few studies have been conducted in this manner. See Heerwegh (2009) for a nice example.
More experimental studies are definitely welcome and necessary if we want to understand how problematic mode effects are.

3. The third way of studying mode effects relies on more sophisticated statistical modeling to separate the different sources of survey error. The most relevant errors in mixed-mode surveys are coverage, nonresponse and response errors (i.e. the mode effect). Separating these could be done using a) validation data, b) repeated measurements using the same or different modes, or c) matching.
I am not aware of any mixed-mode studies that have used validation data to study mode effects, and since mode effects occur mainly for attitudinal questions, such data are hard to find. The other two approaches both offer more practical ways of assessing mode effects. I will discuss both the modeling approach using longitudinal data and matching more extensively in coming posts.
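To give a flavour of the matching idea: one could match web respondents to telephone respondents on observed background variables, so that any remaining difference in the answers is more plausibly a measurement (mode) effect rather than a composition effect. Below is a rough sketch using nearest-neighbour matching on a propensity score; the variable names are hypothetical and this is only one of many possible implementations:

```python
# Sketch only: match web respondents to telephone respondents on a propensity
# score estimated from observed covariates, then compare the outcome.
# Hypothetical columns: 'mode' (1 = web, 0 = telephone), covariates, 'attitude'.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_mode_difference(df: pd.DataFrame, covariates: list) -> float:
    # Estimate the propensity of responding by web, given the observed covariates
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["mode"])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    web = df[df["mode"] == 1]
    tel = df[df["mode"] == 0]

    # 1-to-1 nearest-neighbour matching (with replacement) on the propensity score
    nn = NearestNeighbors(n_neighbors=1).fit(tel[["ps"]])
    _, idx = nn.kneighbors(web[["ps"]])
    matched_tel = tel.iloc[idx.ravel()]

    # The remaining difference is a (tentative) estimate of the mode effect,
    # valid only to the extent that the covariates capture the selection
    return web["attitude"].mean() - matched_tel["attitude"].mean()
```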

Monday, February 21, 2011

mode effects

One of the most interesting issues in survey research is the mode effect. A mode effect can occur in mixed-mode surveys, where different questionnaire administration methods are combined. The reasons for mixing survey modes are manifold, but usually survey researchers mix modes to limit nonresponse, to reach particular hard-to-reach types of respondents, or to limit measurement error. It is more common today to mix modes than not to mix them, for some good reasons:

1. nonresponse to survey requests is ever increasing. In the 1970s it was feasible to achieve a 70% response rate without too much effort in the U.S. and the Netherlands. Nowadays, this is very difficult. In order to limit costs and increase the likelihood of a response, survey organisations use a mix of consecutive modes. For example, a survey may start with a cheap mailed questionnaire, perhaps with a URL to a web version included in the letter. Nonrespondents are then followed up in more expensive modes: they are phoned, and/or later visited at home, to make sure response rates go up (a rough cost illustration follows this list).

2. there are few survey modes that are able to reach everyone. In the 1990s almost everyone had a landline phone; now only about 65% of households do. Internet penetration is at about 85%, but does not seem to be rising any further. In order to reach everyone, we have to mix modes. On top of that, certain types of respondents may have mode preferences. Young people are commonly believed to like web surveys (I'm not too sure of that), while older people prefer phone or face-to-face surveys.

3. for some questions, we know it is better to ask them in particular modes. Sensitive behaviors and attitudes, like drug use, committing fraud, or attitudes towards relationships, are better measured when the survey is anonymous (i.e. when no interviewer is present). For questions that are difficult and require explanation, the opposite is true: interviewers are necessary, for example, to get a detailed view of someone's income.
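As a back-of-the-envelope illustration of the cost logic behind a consecutive mixed-mode design (point 1 above), consider the small sketch below. All response rates and unit costs are invented purely for illustration; the point is only that cheap modes handle the bulk of the sample while expensive follow-ups push the cumulative response rate up:

```python
# Back-of-the-envelope sketch of a sequential (consecutive) mixed-mode design.
# All response rates and costs are invented for illustration only.

phases = [
    # (phase, response rate among those approached, cost per case approached)
    ("mail/web invitation", 0.30, 2.0),
    ("telephone follow-up", 0.25, 15.0),
    ("face-to-face follow-up", 0.35, 60.0),
]

sample_size = 1000
remaining = sample_size
completes = 0.0
cost = 0.0

for phase, rate, unit_cost in phases:
    cost += remaining * unit_cost          # everyone still outstanding is approached
    responded = remaining * rate
    completes += responded
    remaining -= responded
    print(f"after {phase}: cumulative response rate {completes / sample_size:.0%}")

print(f"cost per completed interview: {cost / completes:.2f}")
```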

Mixing survey modes seems to be a good idea from all these angles. One problematic feature, however, is that people react differently when they answer a question on the web or on the phone. This is because it makes a difference whether a question is read out to you (phone), or whether you can read the question yourself. It also matters whether an interviewer is present or not, and whether you have to say your answer out loud or can write it down. These differences between survey modes lead to all kinds of differences in the data: the mode effect.

Although differences between survey modes are well documented, the problem is that mode effects and other effects are confounded: the different modes attract different people. People on the phone might be less likely to give a negative answer because an interviewer is present, but it could also be that phone surveys attract older people, who are also less likely to answer negatively. The fact that measurement errors and non-measurement errors interact in mixed-mode surveys makes it very difficult to estimate how problematic mode effects are in practice, and whether we should be worried about them. In my next post I will outline some ways in which mode effects could, in my view, be studied and better understood.
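To make the confounding concrete, here is a toy simulation with invented numbers: phone respondents are on average older, older people give less negative answers anyway, and on top of that the phone itself pushes answers in a more positive direction. The naive phone-web difference mixes both sources, while comparing within narrow age bands gets much closer to the pure mode effect:

```python
# Toy simulation (invented numbers) of how selection and mode effects confound.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 20_000

mode = rng.integers(0, 2, n)                                    # 0 = web, 1 = phone
age = np.where(mode == 1, rng.normal(60, 10, n), rng.normal(40, 10, n))

# Older respondents answer less negatively (selection), and the interviewer on the
# phone adds a social-desirability bonus of 0.3 (the true mode effect)
answer = 0.02 * age + 0.3 * mode + rng.normal(0, 1, n)

df = pd.DataFrame({"mode": mode, "age": age, "answer": answer})

naive = df[df["mode"] == 1]["answer"].mean() - df[df["mode"] == 0]["answer"].mean()
print(f"naive phone-web difference: {naive:.2f}")     # mixes selection and mode effect

# Within narrow age bands the difference is much closer to the true mode effect
df["band"] = pd.cut(df["age"], bins=range(30, 75, 5))
within = df.groupby("band", observed=True).apply(
    lambda g: g.loc[g["mode"] == 1, "answer"].mean()
              - g.loc[g["mode"] == 0, "answer"].mean()
)
print(f"average within-age-band difference: {within.mean():.2f}")  # closer to 0.3
```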

Thursday, February 10, 2011

how to use Internet panels for polling

Before people start to believe I'm old-fashioned: I do think that Internet surveys, and even panel surveys, are the future of survey research. John Krosnick makes some good points in a video shot by the people at www.pollster.com.

checklist for quality of opinion polls

1. Is it clear who ordered and financed the poll?
2. Is there a report documenting the poll's procedures?
3. Is the target population clearly described?
4. Is the questionnaire available and has it been tested?
5. What were the sampling procedures?
* the sample should be drawn from the target population. If it only covers, for example, people with Internet access, be careful
6. What is the number of respondents?
7. Is the response percentage sufficient?
* it is difficult to say what percentage is sufficient. Higher response percentages do not automatically lead to better data quality. A response of 10 or 20%, however, is too low.
8. Have the data been weighted?
9. Are the margins of error being reported?

For more info, check the website of Jelke Bethlehem (in Dutch), or download the checklist here, with an explanation (in Dutch).
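To illustrate what item 9 of the checklist refers to: under the textbook assumption of a simple random sample (which self-selected web polls emphatically are not), the 95% margin of error of a reported percentage can be approximated as follows. This is just the standard formula, not part of the checklist itself:

```python
# Approximate 95% margin of error for a proportion under simple random sampling.
# Note: this does not apply to self-selected (opt-in) polls.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# e.g. a poll of 1,000 respondents reporting 40% support for a party:
print(f"{margin_of_error(0.40, 1000):.1%}")   # about +/- 3 percentage points
```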

the influence of polls on voting behavior

Many opinion pollsters do badly when it comes to predicting elections. This is mainly because they let respondents self-select into their polls. So what, who cares? The polls make for some good entertainment and easily fill the talk shows on television. If everyone knows they cannot be trusted, why care?

We should care. In the Dutch electoral system - with proportional representation - every vote counts. If only a small percentage of voters lets their vote depend on the published polls, this can result in shifts of several seats in parliament. It is unclear how many voters decide how to vote based on the opinion polls, but it is a fact that many voters consider voting for two or more parties, and many do vote strategically. The Dutch Parliamentary Election Study (DPES) of 2006 found that 18% of voters indicated that they let their vote be influenced by the election polls. This amounts to a total of 27 parliamentary seats: almost the number of seats of the largest party in the current parliament.
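The arithmetic behind that figure is straightforward, assuming the 18% is spread over the 150 seats of the Dutch lower house:

```python
# 18% of the electorate mapped onto the 150 seats of the Dutch parliament
seats_in_parliament = 150
print(round(0.18 * seats_in_parliament))   # 27 seats
```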

As long as voters choose strategically in different ways, this may not matter. If someone votes strategically to make sure a new government has the Greens in it, but someone else votes strategically for Labour to make sure his or her favourite candidate becomes prime minister, the net effect of strategic voting might be zero or very small. There is evidence, however, that this is not the case. People like to vote for winners. This is called the bandwagon effect. Whenever Labour does well in the (biased) opinion polls, more voters will consider voting for them. This means that, in the end, political parties (and pollsters) have a strong interest in doing well in the polls. In fact, it may be tempting to publish fraudulent polls on purpose to make public opinion shift in your favor. This seems to be increasingly common in the United States, where such polls are called "push polls".

So, what to do about it? First, I think it would be fair not to publish any opinion polls for some time before election day, as is done in France, for example (albeit only for two days). Second, journalists and newsreaders should be very critical of opinion polls, and only publish them when some basic quality criteria have been assessed and met. The Dutch Organisation on Survey Research has taken the initiative to develop a checklist for journalists. I will put it online soon.