Mixed mode

Summer school 'Advanced Survey Design'

The course in survey design takes students beyond the introductory courses offered in BA and MA programmes, and discusses the state of the art of one of the most important data collection techniques: surveys. The course focuses on the methodology of how to do surveys, and on the use of statistical techniques to analyse and correct for specific survey errors. It combines short one-hour lectures with exercises on most of the topics discussed. We assume course participants are proficient in working with R.

Push-to-web and the role of incentives

With Vera Toepoel, and colleagues at NIDI and the German Institute for Demographic Research, we conducted a large experiment in 2018 to transfer the Generations and Gender Survey to a push-to-web survey. Several papers will document what we did, and which particular designs worked and did not work. These papers will come out later in 2021, and I may publish some more posts about this design. In this post I want to focus on the role of incentives in push-to-web surveys.

The traditional web survey is dead

Why we should throw out most of what we know about how to visually design web surveys. In 2000, web surveys looked like postal surveys stuck onto a screen. Survey researchers needed time to get used to the idea that web surveys should perhaps look different from mail surveys. When I got into survey methodology in 2006, everyone was, for example, still figuring out whether to use drop-down menus (no), how many questions to put on one screen (a few at most), let alone whether to use slider bars (they’re not going to reduce breakoffs).

Satisficing in mobile web surveys. Device-effect or selection effect?

Last week, I wrote about the fact that respondents in panel surveys are now using tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and to skip participation in some months. The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?
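
To make the device-versus-selection distinction concrete, here is a minimal Python sketch on entirely hypothetical panel data (the column names and values are illustrative, not the actual LISS files). A naive between-person comparison of data quality by device confounds the two effects; a within-person comparison among device switchers holds stable respondent traits constant.

```python
import pandas as pd

# Hypothetical long-format panel data: one row per respondent per wave,
# with the device used and a simple data-quality indicator
# (share of item nonresponse in that wave).
df = pd.DataFrame({
    "resp_id": [1, 1, 2, 2, 3, 3],
    "device":  ["pc", "phone", "pc", "pc", "phone", "pc"],
    "item_nonresponse": [0.02, 0.08, 0.01, 0.02, 0.10, 0.03],
})

# Naive between-person comparison: mixes device and selection effects,
# because different kinds of people choose different devices.
print(df.groupby("device")["item_nonresponse"].mean())

# Within-person comparison among device switchers: each respondent is
# their own control, so selection on stable traits drops out and the
# remaining gap is closer to a pure device effect.
switchers = df.groupby("resp_id").filter(lambda g: g["device"].nunique() > 1)
within = switchers.pivot_table(index="resp_id", columns="device",
                               values="item_nonresponse")
print((within["phone"] - within["pc"]).mean())
```

The within-person contrast removes selection on stable characteristics, though it still assumes switchers do not change their answering behaviour for other reasons.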

Which devices do respondents use over the course of a panel study?

Vera Toepoel and I have been writing a few articles over the last two years about how survey respondents are taking up tablet computers and smartphones. We were interested in studying whether people in a probability-based web panel (the LISS panel) use different devices over time, and whether switches in devices for completing surveys are associated with more or less measurement error. In order to answer this question, we have coded the User Agent Strings of the devices used by more than 6.
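
For readers curious what coding User Agent Strings can look like in practice, here is a minimal, illustrative Python sketch. The string patterns are simplified assumptions of my own; careful research coding would rely on a maintained user-agent parsing library.

```python
# Classify a raw User Agent String into a coarse device type.
def classify_device(ua: str) -> str:
    ua = ua.lower()
    # Heuristic: Android tablets usually lack the "mobile" token phones carry.
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "iphone" in ua or "mobile" in ua:
        return "smartphone"
    return "desktop/laptop"

# Example: code the device for each completed interview.
uas = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36",
]
print([classify_device(ua) for ua in uas])  # ['smartphone', 'desktop/laptop']
```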

Interested in new mixed mode research project?

Mixed-mode research is still a hot topic among survey methodologists; at least, it comes up at just about every meeting I attend (some selection bias is likely here). Although we have learned a lot from experiments in the last decade, there is also a lot we don’t know. For example, which designs reduce total survey error most? What is the optimal mix of survey modes when data quality and survey costs are both important? And how can we compare mixed-mode studies across time, or across countries, when the proportions of mode assignments change over time or vary between countries?

Mixed-mode surveys: where will we be 5 years from now?

Some colleagues in the United Kingdom have started a half-year initiative to discuss the possibilities of conducting web surveys among the general population. Their website can be found here. One aspect of their discussions focused on whether any web survey among the general population should be complemented with another, secondary survey mode. This would, for example, enable those without Internet access to participate. Obviously, this means mixing survey modes.

Designing mixed-mode surveys

This weekend is the deadline for submitting a presentation proposal to this year’s conference of the European Survey Research Association. That’s one of the two major conferences for people who love to talk about things like nonresponse bias, total survey error, and mixing survey modes. As in previous years, it looks like the most heated debates will be about mixed-mode surveys. As survey methodologists, we have been struggling to combine multiple survey modes (Internet, telephone, face-to-face, mail) in a good way.

Matching to correct for self-selection bias in mixed-mode surveys

Mixed-mode surveys have been shown to attract different types of respondents. This may imply that they are successful: Internet surveys attract the young and telephone surveys the old, so any combination of the two can lead to better population estimates for the variable you’re interested in. In other words, mixed-mode surveys can potentially ameliorate the problem that neither telephone nor Internet surveys are able to cover the entire population. The bad news is that mode effects (see posts below) coincide with selection effects in mixed-mode surveys.
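
As a rough illustration of the kind of matching this refers to, here is a minimal Python sketch of 1:1 propensity-score matching on simulated data. Everything here is hypothetical: the covariates, the data-generating step, and the variable names are my own assumptions, not the method from any specific paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: X holds respondent covariates (e.g. age, education),
# mode is 1 for web respondents and 0 for telephone respondents,
# y is the survey outcome of interest (here with a built-in mode effect of 0.5).
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
mode = rng.integers(0, 2, size=n)
y = X[:, 0] + 0.5 * mode + rng.normal(size=n)

# Step 1: estimate propensity scores, i.e. the probability of
# responding by web given the observed covariates.
ps = LogisticRegression().fit(X, mode).predict_proba(X)[:, 1]

# Step 2: match each web respondent to the telephone respondent
# with the nearest propensity score (1:1 nearest neighbour).
web = ps[mode == 1].reshape(-1, 1)
tel = ps[mode == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(tel)
_, idx = nn.kneighbors(web)

# Step 3: compare outcomes within matched pairs; the difference estimates
# the mode effect net of selection on the observed covariates.
mode_effect = y[mode == 1].mean() - y[mode == 0][idx.ravel()].mean()
print(f"estimated mode effect: {mode_effect:.2f}")
```

Within matched pairs, web and telephone respondents are comparable on the observed covariates, so the remaining outcome difference is a cleaner estimate of the mode effect; selection on unobserved characteristics can of course remain.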