Responsive design

Why panel surveys need to go ‘adaptive’

Last week, I gave a talk at Statistics Netherlands (slides here) about panel attrition. Initial nonresponse and dropout from panel surveys have always been a problem. A famous study by Groves and Peytcheva (here) showed that in cross-sectional studies, nonresponse rates and nonresponse bias are only weakly correlated. In panel surveys, however, all the signs are there that dropout is often related to change.
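
As a quick aside (standard textbook notation, not something taken from the talk itself): the bias of the respondent mean relative to the full-sample mean decomposes as

$$\operatorname{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}_n \;=\; \frac{m}{n}\,(\bar{y}_r - \bar{y}_m),$$

where $m/n$ is the nonresponse rate, $\bar{y}_r$ the mean among respondents and $\bar{y}_m$ the mean among nonrespondents. A high nonresponse rate therefore produces a large bias only when nonrespondents actually differ from respondents on the survey variable, which fits the weak correlation that Groves and Peytcheva report.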

Are item-missings related to later attrition?

A follow-up on last month’s post. Respondents do seem to be less compliant in the waves before they drop out from a panel survey. This may, however, not necessarily lead to worse data. So, what else do we see before attrition takes place? Let’s have a look at missing data. First, we look at missing data in a sensitive question on income amounts. Earlier studies (here, here, here) have already found that item nonresponse on sensitive questions predicts later attrition.
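
As a rough illustration of the idea (not the actual analysis from the paper, and with made-up column names and toy data), one could flag item-missing income values in each wave and regress next-wave dropout on that flag:

```python
# Illustrative sketch only: hypothetical toy data, not the real panel data.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format panel data: one row per respondent per wave.
# 'dropped_next' = 1 if the respondent did not take part in the next wave.
df = pd.DataFrame({
    "pid":          [1, 1, 2, 2, 3, 3, 4, 5, 5],
    "wave":         [1, 2, 1, 2, 1, 2, 1, 1, 2],
    "income":       [2500, 2600, None, None, 1800, None, 3100, 2200, 2100],
    "dropped_next": [0,    0,    0,    1,    0,    1,    1,    0,    0],
})

# Flag item nonresponse on the sensitive income question.
df["income_missing"] = df["income"].isna().astype(int)

# Simple logistic regression: does an item-missing on income in wave t
# predict dropping out before wave t+1?
fit = smf.logit("dropped_next ~ income_missing", data=df).fit()
print(fit.summary())
```

In a real analysis one would of course add controls and respect the panel structure; the sketch only shows the basic link between an item-missing flag and subsequent dropout.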

Do respondents become sloppy before attrition?

I am working on a paper that aims to link measurement error to attrition error in a panel survey. For this, I am using the British Household Panel Survey. In an earlier post I already argued that attrition can occur for many reasons, which I summarized in 5 categories.
1. Noncontact
2. Refusal
3. Inability (due to old age or infirmity) as judged by the interviewer, also called ‘other non-interview’
4. Ineligibility (due to death, or a move into an institution or abroad)
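
Purely as an illustration of how such categories could be operationalised (the outcome labels below are hypothetical, not the actual BHPS fieldwork codes), a wave outcome could be mapped to an attrition category along these lines:

```python
# Illustrative only: hypothetical outcome labels, not the real BHPS codes.
OUTCOME_TO_CATEGORY = {
    "no contact made":                     "noncontact",
    "refusal":                             "refusal",
    "too ill / other non-interview":       "inability",
    "deceased":                            "ineligible",
    "moved abroad or into an institution": "ineligible",
    "full interview":                      "response",
}

def attrition_category(outcome: str) -> str:
    """Return the attrition category for a wave outcome, or 'unknown' if unmapped."""
    return OUTCOME_TO_CATEGORY.get(outcome, "unknown")

print(attrition_category("refusal"))  # -> refusal
```

In practice the mapping would be built from the numeric outcome codes documented for the BHPS rather than from free-text labels like these.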

Nonresponse Workshop 2013

One of the greatest challenges in survey research is declining response rates. Around the globe, it seems to be getting harder and harder to convince people to participate in surveys. Researchers are unsure why response rates are declining. A general worsening of the ‘survey climate’, due to increased time pressure on people and to direct marketing, is usually blamed. This year’s Nonresponse Workshop was held in London last week.

AAPOR 2013

The AAPOR conference last week gave an overview of what survey methodologists worry about. There were relatively few people from Europe this year, and I found that the issues methodologists worry about are sometimes different in Europe and the USA. At the upcoming ESRA conference, for example, there are more than 10 sessions on the topic of mixing survey modes. At AAPOR, mixing modes was definitely not ‘hot’. With 8 parallel sessions at most times, I saw only bits and pieces of everything that went on.