Why we should throw out most of what we know about how to visually design web surveys
In 2000, web surveys looked like postal surveys stuck onto a screen. Survey researchers needed time to get used to the idea that web surveys should perhaps look different from mail surveys. When I got into survey methodology in 2006, everyone was still figuring out, for example, whether to use drop-down menus (no), how many questions to put on one screen (a few at most), let alone whether to use slider bars (they’re not going to reduce breakoffs).
Mixed-mode research is still a hot topic among survey methodologists. At least, it comes up at about every meeting I attend (some selection bias is likely here). Although we have learned a lot from experiments in the last decade, there is also a lot we don’t know. For example, which designs reduce total survey error most? What is the optimal mix of survey modes when data quality and survey costs are both important? And how can we compare mixed-mode studies across time, or across countries, when the proportions of mode assignments change over time or vary between countries?
Some colleagues in the United Kingdom have started a half-year initiative to discuss the possibilities of conducting web surveys among the general population. Their website can be found here. One aspect of their discussions focused on whether any web survey among the general population should be complemented with another, secondary survey mode. This would, for example, enable those without Internet access to participate. Obviously, this means mixing survey modes.
This weekend is the deadline for submitting a presentation proposal to this year’s conference of the European Survey Research Association. That’s one of the two major conferences for people who love to talk about things like nonresponse bias, total survey error, and mixing survey modes.
As in previous years, it looks like the most heated debates will be about mixed-mode surveys. As survey methodologists, we have been struggling to combine multiple survey modes (Internet, telephone, face-to-face, mail) in a good way.
Instead of separating out mode effects from nonresponse and noncoverage effects through statistical modeling, it is perhaps better to design our mixed-mode surveys in such a way that mode effects do not occur in the first place. The key principle in preventing mode effects is to make sure that questionnaires are cognitively equivalent to respondents. This means that respondents would give the same answer no matter in which survey mode they participate.
Mixed-mode surveys have been shown to attract different types of respondents. This may imply that they are successful. Internet surveys attract the young and telephone surveys the old, so any combination of the two can lead to better population estimates for the variable you’re interested in. In other words, mixed-mode surveys can potentially ameliorate the problem that neither telephone nor Internet surveys are able to cover the entire population.
The bad news is that mode effects (see posts below) coincide with selection effects in mixed-mode surveys.
Mode effects - the fact that respondents respond differently to a survey question solely because of the mode of interviewing - are hard to study. This is because mode effects interact with nonresponse effects. An Internet survey will attract different respondents than a telephone survey. Because of this, any differences found between the two surveys could be due either to differences in the types of respondents, or to a mode effect.
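This confounding is easy to see in a small simulation. The sketch below is purely illustrative - the population, the response model, and the size of the measurement effect are all made-up assumptions, not estimates from any real survey. It builds a hypothetical population in which age drives both the choice of mode (selection) and the true answer, then adds a fixed measurement effect to the telephone mode. The naive difference between the two mode means then overstates the measurement effect, because it also absorbs the selection effect.

```python
import random

random.seed(42)

# Hypothetical population: age drives both mode selection and the true answer.
population = [{"age": random.randint(18, 80)} for _ in range(100_000)]

web_scores, phone_scores = [], []
for person in population:
    # Selection effect (assumed): younger people are more likely to
    # respond via the web.
    responds_by_web = random.random() < (1.0 - person["age"] / 100)
    # True attitude score (assumed): increases linearly with age.
    true_score = person["age"] / 8
    if responds_by_web:
        web_scores.append(true_score)          # web: no measurement effect
    else:
        phone_scores.append(true_score + 0.5)  # phone: +0.5 mode effect
                                               # (e.g. social desirability)

web_mean = sum(web_scores) / len(web_scores)
phone_mean = sum(phone_scores) / len(phone_scores)

# The observed gap mixes the selection effect (phone respondents are
# older, hence score higher) with the 0.5-point measurement effect,
# so it comes out well above 0.5.
print(web_mean, phone_mean, phone_mean - web_mean)
```

Comparing the modes head-on therefore cannot tell us how much of the gap is measurement and how much is selection; that is exactly the identification problem described above.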
One of the most interesting issues in survey research is the mode effect. A mode effect can occur in mixed-mode surveys, where different questionnaire administration methods are combined. The reasons for mixing survey modes are manifold, but usually survey researchers mix modes to limit nonresponse, reach particular hard-to-reach types of respondents, or limit measurement error. It is more common today to mix modes than not to mix them, for some good reasons: