why panel surveys need to go 'adaptive'

Last week, I gave a talk at Statistics Netherlands (slides here) about panel attrition. Initial nonresponse and dropout from panel surveys have always been a problem. A famous study by Groves and Peytcheva (here) showed that in cross-sectional studies, nonresponse rates and nonresponse bias are only weakly correlated. In panel surveys, however, all the signs are there that dropout is often related to change: the respondents undergoing the most change are also the most likely to drop out. This is probably partly down to the respondents themselves (a move of house, for example, can be a good reason to change other things as well, such as survey participation), but it is also because of how surveys deal with such moves. Movers are much harder to contact if we no longer have accurate contact details, and they are often assigned to a different interviewer. All of this leads to an underestimate of the number of people who move house in panel studies. And moving house is associated with many other life events (changes in household composition, work, income, etc.). In short, dropout is a serious problem in longitudinal studies.

The figure below shows the cumulative response rates for some large-scale panel studies. The selection is admittedly somewhat arbitrary: I have focused on large panel studies (so excluding cohort studies) that are still running today, with an emphasis on Western Europe.

Cumulative response rates in large panel surveys

The oldest study in the figure (PSID) has the highest initial response rate, followed by studies started in the 1980s (GSOEP), the 1990s (BHPS), and the early 2000s (HILDA). The more recent studies all have higher initial nonresponse rates. But not only that: they also have higher dropout rates (their lines go down much faster). This is problematic.

I think these differences are not due to the fact that we, as survey methodologists, are doing a worse job now than 20 years ago. If anything, we have been using more resources, professionalizing tracking, offering higher incentives, and being more persistent. In my view, the increasing dropout rates are due to changes in society (the survey climate). A further increase in our efforts (e.g. even higher incentives) could perhaps help somewhat to reduce future dropout, but I don't think that is the way to go, especially as budgets for data collection face pressure everywhere.

The way to reduce panel dropout is to collect data in a smarter way. First, we need to understand why people drop out. This is something we already know quite a bit about (though more can be done). For example, we know that likely movers are at risk. So what we need are tailored strategies that target specific groups of people (e.g. likely movers): we could send extra mailings between waves only to them, we could use preventive tracking methods, and we could put these into the field earlier.
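To make this concrete, here is a minimal sketch of what such targeting could look like, assuming we have wave-1 covariates plus an observed move/dropout outcome from an earlier panel or pilot to train on. This is not the method from the talk; the variable names, the logistic model, and the flagging threshold are all hypothetical.

```python
# A minimal sketch of flagging likely movers after wave 1 for
# tailored follow-up. All column names, the toy data, and the
# 0.5 threshold are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per panel member, with an
# observed "moved before wave 2" outcome from an earlier panel.
train = pd.DataFrame({
    "age": [24, 31, 45, 58, 67, 29, 38, 52],
    "renter": [1, 1, 0, 0, 0, 1, 0, 1],
    "years_at_address": [1, 2, 10, 20, 30, 1, 8, 3],
    "moved": [1, 1, 0, 0, 0, 1, 0, 1],  # outcome: moved before wave 2
})

# Fit a simple move-propensity model on wave-1 covariates.
model = LogisticRegression()
model.fit(train[["age", "renter", "years_at_address"]], train["moved"])

# Score the current panel and flag high-risk members for the
# tailored treatment (extra mailings, preventive tracking).
panel = train.drop(columns="moved")  # stand-in for current wave-1 data
panel["move_risk"] = model.predict_proba(panel)[:, 1]
flagged = panel[panel["move_risk"] > 0.5]
print(f"{len(flagged)} members flagged for preventive tracking")
```

In a real adaptive design, the flagged group would then receive the tailored treatment while everyone else gets the standard protocol, and the model would be re-estimated as each new wave of data comes in.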

I am not the first to suggest such strategies. We have been tailoring our surveys to specific groups for ages, but have mostly done so on an ad-hoc basis, never systematically. Responsive or adaptive designs try to apply tailoring systematically, to those groups that benefit most from it. Because we know so much about our respondents after wave 1, panel studies offer plenty of opportunities to implement responsive designs.
