Mode effects

One of the most interesting issues in survey research is the mode effect. Mode effects can occur in mixed-mode surveys, where different methods of questionnaire administration are combined. The reasons for mixing survey modes are manifold, but survey researchers usually mix modes to limit nonresponse, to reach particular hard-to-reach types of respondents, or to limit measurement error. Today it is more common to mix modes than not to mix them, for some good reasons:

1. Nonresponse to survey requests is ever increasing. In the 1970s it was feasible to achieve a 70% response rate without too much effort in the U.S. and the Netherlands. Nowadays, this is very difficult. In order to limit costs and increase the likelihood of a response, survey organisations use a mix of consecutive modes. For example, a survey might start with a cheap paper questionnaire sent by mail, perhaps with a URL included so respondents can complete it online. Nonrespondents are then followed up in more expensive modes: they are phoned, and/or later visited at home, to make sure response rates go up.

2. There are few survey modes that are able to reach everyone. In the 1990s almost everyone had a landline phone; now only 65% do. Internet penetration is at about 85%, but does not seem to be rising further. In order to reach everyone, we have to mix modes. On top of that, certain types of respondents may have mode preferences. Young people are commonly believed to like web surveys (I'm not too sure of that), while older people like phone or face-to-face surveys.

For some questions, we know it is better to ask them in particular modes. Sensitive behaviors and attitudes, like drug use, committing fraud, or attitudes towards relationships, are better measured when the survey feels anonymous (i.e. when no interviewer is present). For questions that are difficult and require explanation, the opposite is true: interviewers are necessary, for example, to get a detailed view of someone's income.

Mixing survey modes seems to be a good idea from all these angles. One problematic feature, however, is that people react differently when they answer a question on the web than when they answer it on the phone. It makes a difference whether a question is read out to you (phone) or whether you can read it yourself. It also matters whether an interviewer is present or not, and whether you have to say your answer out loud or can write it down. These differences between survey modes lead to all kinds of differences in the data: the mode effect.

Although differences between survey modes are well documented, the problem is that mode effects and other effects are confounded: the different modes attract different people. People on the phone might be less likely to give a negative answer because an interviewer is present, but it could also be that phone surveys attract older people, who are also less likely to answer negatively. The fact that measurement errors and selection errors interact in mixed-mode surveys makes it very difficult to estimate how problematic mode effects are in practice, and whether we should be worried about them. In my next post I will outline some ways in which mode effects could, in my view, be studied and better understood.
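To make the confounding concrete, here is a small simulation. All numbers in it are invented for illustration, not real survey estimates: suppose older people both prefer the phone and are intrinsically less likely to answer negatively, while the interviewer's presence on the phone additionally suppresses negative answers by 10 percentage points (the true mode effect). The naive phone-vs-web comparison then mixes the two sources of difference.

```python
import random

random.seed(42)

def simulate(n=10000):
    """Simulate a mixed-mode survey where mode choice and true
    attitudes both depend on age (hypothetical numbers throughout)."""
    phone_neg = phone_n = web_neg = web_n = 0
    for _ in range(n):
        old = random.random() < 0.5            # half the sample is older
        p_negative = 0.2 if old else 0.4       # older people answer negatively less often
        phone = random.random() < (0.7 if old else 0.3)  # older people prefer the phone
        if phone:
            # Measurement (mode) effect: interviewer presence suppresses
            # negative answers by a further 10 percentage points.
            p_negative = max(p_negative - 0.1, 0.0)
        negative = random.random() < p_negative
        if phone:
            phone_n += 1
            phone_neg += negative
        else:
            web_n += 1
            web_neg += negative
    return phone_neg / phone_n, web_neg / web_n

phone_rate, web_rate = simulate()
print(f"negative answers by phone: {phone_rate:.2f}, by web: {web_rate:.2f}")
print(f"naive mode difference: {web_rate - phone_rate:.2f} (true mode effect: 0.10)")
```

With these made-up numbers the naive web-minus-phone gap comes out well above the true 0.10 measurement effect, because the phone sample is disproportionately older: selection and measurement differences are added together, and nothing in the observed data separates them.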

Peter Lugtig
Associate Professor of Survey Methodology

I am an associate professor at Utrecht University, Department of Methodology and Statistics.