Monday, December 8, 2014

Satisficing in mobile web surveys. Device-effect or selection effect?


Last week, I wrote about how respondents in panel surveys now use tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and to skip participation in some months.
The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?

To answer it, we used six months of data from the LISS panel and, in each month, coded the User Agent String. We then coded types of satisficing behavior that occur in surveys: the percentage of item missings, whether respondents complete (non-mandatory) open questions, how long their answers are, whether respondents straightline, whether they pick the first answers in a check-all-that-apply question, and how many answers they tick in a check-all-that-apply question. We also looked at interview duration and at how much respondents liked the survey.
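For readers who want to compute similar indicators themselves, here is a minimal sketch of two of them, the item-missing rate and straightlining, on a made-up response vector. The function names and the coding of missing answers as `None` are my own illustrative choices, not the coding used in the paper.

```python
def item_missing_rate(responses):
    """Share of items a respondent left unanswered (coded None here)."""
    missing = sum(1 for r in responses if r is None)
    return missing / len(responses)

def is_straightlining(grid_responses):
    """True if a respondent gives the identical answer to every
    answered item in a grid (battery) question."""
    answered = [r for r in grid_responses if r is not None]
    return len(answered) > 1 and len(set(answered)) == 1

# Hypothetical respondent: a 5-item grid answered entirely with
# category 3, plus three further items of which one was skipped.
grid = [3, 3, 3, 3, 3]
all_items = grid + [None, 2, 4]

print(item_missing_rate(all_items))  # 0.125 (1 of 8 items missing)
print(is_straightlining(grid))       # True
```

In a real analysis these per-respondent indicators would then be aggregated by device type, as in the figure below.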

We found that respondents on a smartphone seem to do much worse. They take longer to complete the survey, are more negative about it, have more item missings, and have a much higher tendency to pick the first answer option. On the other indicators, differences were small and sometimes in favor of smartphone users.

Figure: indicators of satisficing per device in the LISS survey (click to enlarge).
Is satisficing higher because smartphones and tablets are poorly suited to completing surveys (a device effect)? Or is it a person effect, with worse respondents more inclined to complete a survey on a tablet or smartphone?

To answer this final question, we looked at the device transitions that respondents make within the LISS panel. Over the six months of LISS data, each respondent can make five transitions from the device used in one month to another (or the same) device in the next. For 7 of the 9 possible transition types (we have too few observations to analyze the tablet -> phone and phone -> tablet transitions), we can then examine the difference in measurement error associated with a change in device.
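Tallying these month-to-month transitions is straightforward once each respondent's device per wave is known. The sketch below uses toy data with three respondents and four waves; the respondent IDs and device labels are invented for illustration, and the actual analysis used six LISS waves.

```python
from collections import Counter

# Toy data: device used by each respondent in four consecutive waves.
waves = {
    "resp1": ["pc", "pc", "tablet", "tablet"],
    "resp2": ["phone", "pc", "pc", "pc"],
    "resp3": ["pc", "pc", "pc", "phone"],
}

# Count every (device in month t, device in month t+1) pair.
transitions = Counter()
for devices in waves.values():
    for prev, nxt in zip(devices, devices[1:]):
        transitions[(prev, nxt)] += 1

print(transitions[("pc", "pc")])      # 5
print(transitions[("pc", "tablet")])  # 1
```

Measurement-error indicators can then be compared within each transition type, which is what the figure below summarizes.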

Figure: changes in data quality (positive is better) associated with a change in device (click to enlarge).


The red bars indicate no significant change in measurement error associated with a device change. Our conclusion is that device changes do not lead to more measurement error, with two exceptions:
1. A transition from tablet -> PC or phone -> PC in two consecutive months leads to a better evaluation of the questionnaire. This implies that the user experience of completing web surveys on a mobile device should be improved.
2. We find that people check more answers in a check-all-that-apply question when they move from a tablet -> PC or phone -> PC.

So, in short: satisficing seems to be more problematic when surveys are completed on tablets and phones, but this can almost fully be explained by a selection effect. Respondents who are worse at completing surveys more often choose to do so on tablets and smartphones.

The full paper can be found here.

Tuesday, December 2, 2014

Which devices do respondents use over the course of a panel study?


Vera Toepoel and I have been writing a few articles over the last two years about how survey respondents are taking up tablet computers and smartphones. We were interested in studying whether people in a probability-based web panel (the LISS panel) use different devices over time, and whether switches in devices for completing surveys are associated with more or less measurement error.

In order to answer this question, we coded the User Agent Strings of the devices used by more than 6,000 respondents over a six-month period (see the publication tab for R syntax showing how to do this).
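The authors provide R syntax for this on the publication tab; purely to illustrate the idea, here is a rough Python sketch of the same kind of classification. The handful of substring rules below are my own simplification (real User Agent parsing needs a far larger rule set, since UA formats vary widely).

```python
def classify_device(ua: str) -> str:
    """Crude device classification from a User Agent String.

    Illustrative rules only: iPads and non-mobile Android devices
    are treated as tablets, iPhones and mobile Android devices as
    smartphones, and everything else as a PC.
    """
    ua = ua.lower()
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "iphone" in ua or ("android" in ua and "mobile" in ua):
        return "smartphone"
    return "pc"

print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X)"))  # smartphone
print(classify_device("Mozilla/5.0 (Windows NT 6.1; WOW64)"))                    # pc
```

Applied per wave, this yields the device-per-respondent-per-month records that the transition table below is built from.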

We find, as others have, that in every wave about 10% of respondents use either a tablet or a smartphone. What is new in our analyses is that we focus on whether respondents persistently use the same device.

The table below shows that PC users largely stick to their PC in all waves. For example, 77.4% of PC respondents in April again use a PC in May. Only 1.5% of April's PC respondents switch to a tablet or smartphone to complete the May questionnaire.

Table: devices used between April and September 2013 by LISS panel respondents (N = 6,226; click to enlarge).
The proportion of respondents switching from a PC to either a tablet or a smartphone is similarly low in the other months, and never exceeds 5%. This stability in device use is, however, not found for tablets and smartphones. Smartphone users in particular are not very likely to use a smartphone again in the next waves of LISS: only 29 per cent of smartphone users in July 2013 use a smartphone again in August, for example. The consistency of tablet use increases over the course of the panel; 24% of respondents are consistent tablet users in April-May, but this rises to 64% in July-August.

Finally, it is worth noting that using either a smartphone or a tablet is more likely to lead to non-participation in the next wave of the survey. This may, however, be a sample selection effect: more loyal panel members may favor the PC for completing the questionnaires.

More in the next post on how respondents' answer behavior differs over time when they switch devices. Do respondents give worse answers when they complete a survey on a smartphone or tablet?

You can download the full paper here.