Nine years ago I started blogging, though I have been quiet the last few years. Perhaps I will pick it up again, perhaps not. What has changed quite a bit is how I work as a scientist: I now use R as my default software for analysis, and I have also started to use GitHub for version control, as the cool kids nowadays do. Anyway, my website was long overdue for an overhaul.
As a survey methodologist, I get paid to develop survey methods that minimize survey errors, and to advise people on how to field surveys in a specific setting. A question that has been bugging me for a long time is which survey error we should worry about most. The Total Survey Error (TSE) framework is very helpful for thinking about which types of survey error may affect survey estimates.
But which error source is generally larger?
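One way to make that question concrete is the mean squared error decomposition that underlies the TSE framework. The grouping below is a common textbook form (the symbols are my own shorthand, not from any specific source):

```latex
% MSE of a survey estimate \hat{\theta}: squared total bias plus variance,
% with terms contributed by different error sources in the TSE framework.
\mathrm{MSE}(\hat{\theta})
  = \underbrace{\bigl(B_{\mathrm{cov}} + B_{\mathrm{nr}} + B_{\mathrm{meas}}\bigr)^{2}}_{\text{coverage, nonresponse, and measurement bias}}
  + \underbrace{V_{\mathrm{samp}} + V_{\mathrm{meas}}}_{\text{sampling and measurement variance}}
```

Asking which error source is "generally larger" then amounts to asking which of these terms dominates the MSE in practice.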
Back after a long pause. Panel surveys traditionally interview respondents at regular intervals, for example monthly or yearly. This interval is mostly chosen for practical reasons: interviewing people more frequently would increase respondent burden, as well as the burden of data processing and dissemination. For these practical reasons, panel surveys often space their interviews one year apart. Many of the changes we as researchers are interested in (e.g. changes in household composition) occur slowly, and annual interviews suffice to capture them.
Last week, I wrote about the fact that respondents in panel surveys now use tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and to not participate in some months.
The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?
This week, I have been reading the most recent issue of the Journal of Official Statistics, a journal that has been open access since the 1980s. This issue contains a critical review article of weighting procedures authored by Michael Brick, with commentaries by Olena Kaminska, Philipp Kott, Roderick Little, and Geert Loosveldt, and a rejoinder.
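For readers who have not worked with weighting procedures before, the simplest member of the family reviewed there is post-stratification: up- or down-weight each respondent so that sample cell shares match known population shares. A minimal sketch (the population and sample figures are made up for illustration):

```python
# Post-stratification sketch: weight respondents so the weighted sample
# reproduces known population shares. All numbers below are hypothetical.
population_share = {"male": 0.49, "female": 0.51}  # e.g. known from a census
sample_counts = {"male": 300, "female": 700}       # realized sample, skewed

n = sum(sample_counts.values())

# weight = population share / sample share, per cell
weights = {
    group: population_share[group] / (sample_counts[group] / n)
    for group in sample_counts
}

# Men are underrepresented in this sample, so they get a weight above 1;
# women get a weight below 1. The weighted gender distribution now matches
# the population distribution.
print(weights)
```

Real weighting procedures (raking, calibration, propensity weighting) generalize this idea to many variables at once, which is where the trade-offs discussed in the review come in.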
This morning, an official enquiry into the scientific conduct of professor Mart Bax concluded that he had committed large-scale scientific fraud over a period of 15 years. Mart Bax is a now-retired professor of political anthropology at the Free University Amsterdam. In 2012, a journalist first accused him of fraud, and this spring the Volkskrant, one of the big newspapers in the Netherlands, reported that it was unable to find any of the informants Mart Bax had used in his studies.
The social sciences (and psychology in particular) have had something of a bad press in recent years, both in- and outside academia. To give some examples:
- There is a sense among some people that social science provides little societal or economic value.
- There is controversy over research findings within social science: for example, the findings of Bem et al. about the existence of precognition, or the estimates of the number of casualties in the Iraq war (2003-2007).
Longitudinal surveys ask the same people the same questions over time, so questionnaires tend to get rather boring for respondents after a while. "Are you asking me this again? You asked that last year as well!" is what many respondents probably think during an interview. As methodologists who manage panel surveys, we know this process may be repetitive, but in order to document change over time, we simply need to ask respondents the same questions over and over.
Mixed-mode research is still a hot topic among survey methodologists; it comes up at about every meeting I attend (some selection bias is likely here). Although we have learned a lot from experiments in the last decade, there is also a lot we don't know. For example, which designs reduce total survey error most? What is the optimal mix of survey modes when data quality and survey costs are both important? And how can we compare mixed-mode studies across time, or across countries, when the proportions of mode assignments change over time or vary between countries?
All of my research focuses on methods for collecting and analyzing panel survey data. One of the primary problems of panel surveys is attrition, or drop-out: over the course of a panel survey, many respondents decide to stop participating.
Last July I visited the panel survey methods workshop in Melbourne, where we had extensive discussions about panel attrition: how to study it, what its consequences (bias) are for survey estimates, and how to prevent it from happening altogether.