Saturday, January 12, 2013

Designing mixed-mode surveys

This weekend is the deadline for submitting a presentation proposal to this year's conference of the European Survey Research Association. That's one of the two major conferences for people who love to talk about things like nonresponse bias, total survey error, and mixing survey modes.

As in previous years, it looks like the most heated debates will be on mixed-mode surveys. As survey methodologists, we have been struggling to combine multiple survey modes (Internet, telephone, face-to-face, mail) in a good way. We can no longer rely on just one survey mode to do a good survey: many people don't have Internet access or landline phones, and face-to-face interviewing is becoming too expensive.

When we start mixing survey modes, we run into all kinds of problems. Different people respond in different survey modes, leading to differences in the variables we're interested in (selection effects). We also know that people respond differently to the same survey question when it is asked in two different modes (measurement or 'mode' effects).

We have been trying to assess the size of selection and mode effects in mixed-mode surveys for a long time. I have written about this issue before, and although we have made progress, I don't think we will ever be able to 'correct' for differences between survey modes. That is because the sizes of selection and mode effects will always differ from survey to survey, depending on the topic of the study and the population studied.

So, I suggest we try something different. Why don't we try to make the survey modes themselves similar? If you think of combining the Internet with face-to-face interviewing (two very different modes), these could be some ways to make the surveys equivalent:

- Approach people in the same way. For example, use advance letters in both modes, and then call respondents to either make an appointment (face-to-face) or ask for an e-mail address and do an Internet interview. Stupid as it may sound, mixed-mode surveys or experiments often change the entire protocol, so we have no idea what is caused by what.
- Use showcards in a face-to-face survey for sensitive questions and let respondents self-complete them, as they do on the Internet.
- Use a virtual-reality interviewer in an Internet survey for difficult questions, to help the respondent answer them.
- Use an audio recording feature on the Internet, so respondents can give answers in the same way as they do in face-to-face surveys.
- Use short and simple questions that can easily be asked in any survey mode in more or less the same manner. This implies minimizing the number of response categories to only a few for any question.

Surely, this approach is not fault-free, and I do see lots of practical issues. I do think, however, that it is a better way forward than just trying to fix up the mess with statistics after data collection. Such correction methods will always be necessary, but relying on them too heavily puts too much faith in statistics.