attrition

why panel surveys need to go 'adaptive'

Last week, I gave a talk at Statistics Netherlands (slides here) about panel attrition. Initial nonresponse and dropout from panel surveys have always been a problem. A famous study by Groves and Peytcheva (here) showed that in cross-sectional studies, nonresponse rates and nonresponse bias are only weakly correlated. In panel surveys, however, there are many indications that dropout is often related to change.

Are item-missings related to later attrition?

A follow-up on last month's post. Respondents do seem to become less compliant in the waves before they drop out of a panel survey. This may, however, not necessarily lead to worse data. So, what else do we see before attrition takes place? Let's have a look at missing data. First, we look at missing data on a sensitive question about income amounts. Earlier studies (here, here, here) have already found that item nonresponse on sensitive questions predicts later attrition.
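A minimal sketch of how one could test that association, assuming a person-wave data set with an item-missing flag and a next-wave attrition indicator (all names and data below are hypothetical stand-ins, not the actual panel variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    # 1 = skipped/refused the income-amount question in the current wave
    "income_item_missing": rng.binomial(1, 0.15, n),
    "age": rng.integers(18, 85, n),
})
# Simulated outcome: attrition is more likely after an item-missing.
logit = -2.0 + 1.2 * df["income_item_missing"] + 0.01 * (df["age"] - 50)
df["attrited_next_wave"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression of next-wave attrition on the item-missing flag.
X = sm.add_constant(df[["income_item_missing", "age"]])
fit = sm.Logit(df["attrited_next_wave"], X).fit(disp=0)
print(fit.summary())
```

A positive, significant coefficient on the item-missing flag would mirror the pattern those earlier studies report.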

Do respondents become sloppy before attrition?

I am working on a paper that aims to link measurement errors to attrition error in a panel survey. For this, I am using the British Household Panel Survey. In an earlier post I already argued that attrition can occur for many reasons, which I summarized in five categories:

1. Noncontact
2. Refusal
3. Inability (due to old age or infirmity), as judged by the interviewer, also called 'other non-interview'
4. Ineligibility (due to death, or a move into an institution or abroad)
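As a toy illustration of this grouping, a lookup that maps a wave's fieldwork outcome to one of the categories (the outcome labels are invented for the example, not the actual BHPS fieldwork codes):

```python
# Hypothetical outcome labels; real BHPS fieldwork codes differ.
ATTRITION_CATEGORY = {
    "noncontact": {"no one at home", "unable to locate"},
    "refusal": {"refused by respondent", "refused by proxy"},
    "inability": {"too ill", "language problems", "other non-interview"},
    "ineligible": {"deceased", "moved abroad", "moved to institution"},
}

def classify_outcome(fieldwork_outcome: str) -> str:
    """Map a wave's fieldwork outcome to an attrition category."""
    for category, outcomes in ATTRITION_CATEGORY.items():
        if fieldwork_outcome in outcomes:
            return category
    return "interviewed"

print(classify_outcome("refused by proxy"))  # -> "refusal"
```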

Personality predicts the likelihood and process of attrition in a panel survey

Studies into the correlates of nonresponse often have to rely on socio-demographic variables to study whether respondents and nonrespondents in surveys differ. Often there is no other information available on sampling frames that researchers can use. That is unfortunate, for two reasons. First, the variables we currently use to predict nonresponse usually explain only a very limited share of the variance in survey nonresponse. As a consequence, these variables are also not effective correctors for nonresponse.
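One way to see how limited that explained variance is: fit a response propensity model on socio-demographics only and inspect a fit measure such as McFadden's pseudo-R². A sketch with simulated frame data (all variables and effect sizes are assumptions for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
frame = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "female": rng.binomial(1, 0.5, n),
    "urban": rng.binomial(1, 0.7, n),
})
# Simulated truth: response depends only weakly on observed demographics.
p = 1 / (1 + np.exp(-(0.3 + 0.005 * (frame["age"] - 50) + 0.1 * frame["female"])))
frame["responded"] = rng.binomial(1, p)

X = sm.add_constant(frame[["age", "female", "urban"]])
fit = sm.Logit(frame["responded"], X).fit(disp=0)
print(f"McFadden pseudo-R2: {fit.prsquared:.3f}")  # typically very small
```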

Longitudinal interview outcome data reduction: Latent Class and Sequence analyses

Frauke Kreuter once commented on a presentation I gave that I should really be looking at sequence analysis for studying attrition in panel surveys. She had written an article on the topic with Ulrich Kohler (here) in 2009, and lately more people have been exploring the technique (e.g. Mark Hanly at Bristol and Gabi Durrant at Southampton). I am working on a project on attrition in the British Household Panel, linking attrition errors to measurement errors.
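The core idea of sequence analysis here is to treat each respondent's wave-by-wave interview outcomes as a string of states and then cluster respondents with similar trajectories. A minimal sketch using a simple Hamming distance (the states and sequences are invented; the cited work uses dedicated tools such as TraMineR in R, with richer distance measures):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Interview outcome per wave: 0 = interviewed, 1 = noncontact,
# 2 = refusal, 3 = attrited (hypothetical coding).
sequences = np.array([
    [0, 0, 0, 0, 0, 0],  # stable respondent
    [0, 0, 1, 0, 0, 0],  # one noncontact, then back
    [0, 0, 2, 3, 3, 3],  # refusal followed by permanent dropout
    [0, 1, 1, 3, 3, 3],  # repeated noncontact, then dropout
    [0, 0, 0, 0, 2, 3],  # late refusal
])

# Hamming distance = share of waves in which two sequences differ.
dist = pdist(sequences, metric="hamming")
tree = linkage(dist, method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(clusters)  # e.g. stayer-type vs. dropout-type trajectories
```

Latent class analysis offers an alternative, model-based route to the same kind of data reduction.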

Imagine we have great covariates for correcting for unit nonresponse...

I am continuing with the recent article and commentaries on weighting to correct for unit nonresponse by Michael Brick, as published in the recent issue of the Journal of Official Statistics (here). The article is by no means only about whether one should impute or weight; I am just picking out one issue that got me thinking. Michael Brick rightly says that in order to correct successfully for unit nonresponse using covariates, we want the covariates to do two things:
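In the standard weighting framework, covariates are useful exactly when they predict both the probability of response and the survey variables of interest. A hedged inverse-propensity-weighting sketch of that idea (simulated data and hypothetical variable names; not necessarily Brick's preferred estimator):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
sample = pd.DataFrame({"age": rng.integers(18, 90, n)})
# Simulated truth: older people respond more AND have higher incomes,
# so the unweighted respondent mean is biased downward.
p_resp = 1 / (1 + np.exp(-(-1.0 + 0.04 * (sample["age"] - 50))))
sample["responded"] = rng.binomial(1, p_resp)
sample["income"] = 20000 + 400 * sample["age"] + rng.normal(0, 5000, n)

# Estimate response propensities from a frame variable, weight by 1/p-hat.
X = sm.add_constant(sample[["age"]])
phat = sm.Logit(sample["responded"], X).fit(disp=0).predict(X)

resp = sample["responded"] == 1
naive = sample.loc[resp, "income"].mean()
weighted = np.average(sample.loc[resp, "income"], weights=1 / phat[resp])
print(f"naive {naive:.0f} | weighted {weighted:.0f} | full sample {sample['income'].mean():.0f}")
```

Because age drives both response and income here, weighting on it pulls the respondent mean back toward the full-sample mean.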

measurement and nonresponse error in panel surveys

I am spending time at the Institute for Social and Economic Research in Colchester, UK, where I will work on a research project that investigates whether there is a trade-off between nonresponse and measurement errors in panel surveys. Survey methodologists have long believed that multiple survey errors can have a common cause. For example, when a respondent is less motivated, this may result in nonresponse (in a panel study, attrition), or in reduced cognitive effort during the interview, which in turn leads to measurement errors.
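A toy simulation of that common-cause mechanism (my illustration under assumed parameters, not the project's model): low motivation raises both the attrition probability and the noise in reported answers, so attrition and measurement error end up correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000
motivation = rng.normal(0, 1, n)

# Less motivated respondents are more likely to drop out ...
p_attrit = 1 / (1 + np.exp(2.0 * motivation))
attrited = rng.binomial(1, p_attrit).astype(bool)

# ... and report with more noise while they are still in the panel.
true_value = rng.normal(50, 10, n)
error_sd = np.exp(-0.5 * motivation)  # larger noise at low motivation
reported = true_value + rng.normal(0, error_sd, n)

abs_error = np.abs(reported - true_value)
print("mean abs. error, stayers:  ", abs_error[~attrited].mean())
print("mean abs. error, attriters:", abs_error[attrited].mean())
```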

Is panel attrition the same as nonresponse?

All of my research focuses on the methods of collecting and analyzing panel survey data. One of the primary problems of panel surveys is attrition, or dropout: over the course of a panel survey, many respondents decide to no longer participate. Last July I visited the panel survey methods workshop in Melbourne, where we had extensive discussions about panel attrition: how to study it, what its consequences (bias) are for survey estimates, and how to prevent it from happening altogether.