Nonresponse Workshop 2013

One of the greatest challenges in survey research is declining response rates. Around the globe, it seems to be getting harder and harder to convince people to participate in surveys. Researchers are unsure why response rates are declining; a general worsening of the ‘survey climate’, increased time pressure on people, and the rise of direct marketing are usually blamed.

This year’s Nonresponse Workshop was held in London last week. It was the 24th edition, and everything we talk about at the workshop revolves around how to predict, prevent, or adjust for nonresponse in panel surveys.

Even though we are all concerned about declining response rates, presenters at the Nonresponse Workshop have found over the years that nonresponse can hardly be predicted using respondent characteristics: the explained variance of any model rarely exceeds 0.20. Because of this, we don’t really know how to predict or adjust for nonresponse either. Fortunately, we also find that the link between nonresponse rates and nonresponse bias is generally weak. In other words, high nonresponse rates do not per se bias our substantive research findings.
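
To make that 0.20 figure concrete, here is a minimal sketch of the kind of model behind such numbers: a logistic regression of response on a few sampling-frame variables, evaluated with McFadden’s pseudo R-squared. The data and variable names are simulated assumptions for illustration, not results from any actual survey.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical frame data for 5,000 sampled persons; whether each person
# responded (1) or not (0) is simulated with deliberately weak effects,
# as is typical in nonresponse research.
rng = np.random.default_rng(42)
n = 5_000
age = rng.integers(18, 90, n)
urban = rng.integers(0, 2, n)
household_size = rng.integers(1, 6, n)
logit_p = -0.2 + 0.005 * age - 0.3 * urban + 0.05 * household_size
responded = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of response on the frame variables.
X = sm.add_constant(np.column_stack([age, urban, household_size]))
fit = sm.Logit(responded, X).fit(disp=False)

# McFadden's pseudo R-squared; with weak effects like these it stays
# near zero, mirroring the low explained variances reported above.
print(f"pseudo R-squared: {fit.prsquared:.3f}")
```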

At this year’s Nonresponse Workshop, presentations focused on two topics. At other survey methods conferences (ESRA, AAPOR), I see a similar trend:

1. Noncontacts: whereas refusals can usually not be predicted at all (explained variances below 0.10), noncontacts can be predicted to some extent. Presentations therefore focused on:
- Increasing contact rates among ‘difficult’ groups.
- Using paradata and call-record data to improve the prediction of contact times and successful contacts (see the first sketch after this list).
- Changing the contact strategy, either based on pre-defined (and often experimental) strategies for subgroups in the population (adaptive designs), or based on paradata and decision rules during fieldwork (responsive designs).

2. Efficiency: Responsive designs can be used to increase response rates or limit nonresponse bias, but they can also be used to limit survey costs. If respondents can be contacted with fewer contact attempts, this saves money. Similarly, we can limit the effort we put into groups of cases for which we already have a high response rate, and devote our resources to hard-to-get cases (see the second sketch below).
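
As a toy illustration of the call-record bullet above, the sketch below estimates contact rates per time slot from past attempts and schedules a case’s next attempt in its most promising untried slot. The records, slot labels, and helper functions are all hypothetical.

```python
from collections import defaultdict

# Hypothetical call records: (case_id, time_slot, contact_made).
# Time slots are a coarse paradata feature derived from call timestamps.
call_records = [
    ("A01", "weekday_day", False),
    ("A01", "weekday_evening", True),
    ("B02", "weekday_day", False),
    ("B02", "weekend", True),
    ("C03", "weekday_evening", True),
]

def slot_contact_rates(records):
    """Estimate the contact probability for each time slot."""
    attempts, contacts = defaultdict(int), defaultdict(int)
    for _, slot, contacted in records:
        attempts[slot] += 1
        contacts[slot] += contacted
    return {slot: contacts[slot] / attempts[slot] for slot in attempts}

def next_attempt_slot(case_id, records):
    """Schedule the next call in the most promising slot not yet tried
    for this case; fall back to the overall best slot."""
    rates = slot_contact_rates(records)
    tried = {slot for cid, slot, _ in records if cid == case_id}
    untried = {s: r for s, r in rates.items() if s not in tried}
    candidates = untried or rates
    return max(candidates, key=candidates.get)

print(next_attempt_slot("B02", call_records))  # -> "weekday_evening"
```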
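
And here is an equally stripped-down sketch of the efficiency idea: a responsive-design decision rule that compares each subgroup’s current response rate to a target and reallocates effort accordingly. The group names, counts, and the 0.65 target are invented for illustration.

```python
# Hypothetical mid-fieldwork snapshot per subgroup.
fieldwork = {
    "urban_young":  {"sampled": 400, "completed": 120},
    "rural_old":    {"sampled": 300, "completed": 210},
    "suburban_mid": {"sampled": 500, "completed": 330},
}

TARGET_RATE = 0.65  # assumed per-group response-rate target

def allocate_effort(groups, target=TARGET_RATE):
    """Toy decision rule: once a group reaches the target response
    rate, routine effort is reduced; other groups get extra attempts."""
    decisions = {}
    for name, g in groups.items():
        rate = g["completed"] / g["sampled"]
        decisions[name] = "reduce effort" if rate >= target else "boost effort"
    return decisions

for group, action in allocate_effort(fieldwork).items():
    print(f"{group}: {action}")
```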

There are many interesting studies that can be done in both these areas. With time, I think we will see successful strategies developed that limit noncontact rates, nonresponse, and even nonresponse bias to some extent. Surveys might also become cheaper using responsive designs, especially surveys that use face-to-face or telephone interviewing. At this year’s workshop, there were no presentations on using a responsive-design approach for converting soft refusals, but I can see the field moving in that direction too eventually.

Just one note of general disappointment with myself and our field remains after attending the workshop (and I’ve had this feeling before):

If we cannot predict nonresponse at all, and if we find that nonresponse generally has a weak effect on our survey estimates, what are we doing wrong? What are we not understanding? It feels, in philosophical terms, as if we survey methodologists are perhaps all using the wrong paradigm for studying and understanding the problem. Perhaps we need radically different ideas and analytical models to study the problem of nonresponse. What these should be is perhaps anyone’s guess. And if not anyone’s, at least my guess.

Peter Lugtig
Associate Professor of Survey Methodology

I am an associate professor at Utrecht University, Department of Methodology and Statistics.
