Tuesday, June 26, 2018

Which survey error source is larger: measurement or nonresponse?

As a survey methodologist I get paid to develop survey methods that generally minimize survey errors, and to advise people on how to field surveys in a specific setting. A question that has been bugging me for a long time is which survey error we should worry about most. The Total Survey Error (TSE) framework is very helpful for thinking about which types of survey error may affect survey estimates.
But which error source is generally larger: nonresponse or measurement error?

Thankfully, no one has asked me this question yet, because I would find it impossible to answer anything other than "well, that depends".

The reason why we don't know which error source is larger is that we can usually assess observational errors only for the people we have actually observed. There are several ways to do this. Sometimes we know the truth, and so we can compare survey answers ("do you have a valid driver's license?") to data we know from administrative records. If we are interested in attitudes, we can use psychometric models. The people behind the computer programme SQP have summarised a huge number of question experiments and MTMM models to predict the quality of specific survey questions. By asking different forms of the same question (e.g. "how interested are you in politics?") we can gauge the reliability and validity of that question under different question wordings and answer scales.

The problem of course is that if we are indeed interested in the concept "interest in politics", we would ideally also like to know what the people we have not observed would have answered. In order to estimate errors of non-observation (nonresponse), we would need to actually observe these people!

There are of course some situations where we actually do know something about nonrespondents. Cannell and Fowler (1963) provide an early example: they knew something about nonrespondents (hospital visits) and could compare different respondent and nonrespondent groups. A great more recent example is Kreuter, Muller and Trappmann (2010). They did a survey among people whose employment status they already knew. They showed that nonresponse and measurement error in employment status were of about equal size, and went in different directions.
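
As an aside for readers who like to see the logic spelled out: below is a minimal sketch, in R and with simulated data (not the Kreuter et al. records), of how a validation study can separate the two error sources when the true value - here employment status from a register - is known for everyone on the sampling frame. The response and misreporting probabilities are assumptions for illustration.

```r
# Minimal sketch with simulated data: separating nonresponse error from
# measurement error when the register value is known for the full sample.
set.seed(1)
n <- 5000
employed_true <- rbinom(n, 1, 0.6)                       # register value, known for all
# Hypothetical: employed people respond somewhat more often
responded <- rbinom(n, 1, ifelse(employed_true == 1, 0.55, 0.45))
# Hypothetical: 5% of respondents misreport their status
misreport <- rbinom(n, 1, 0.05)
employed_report <- ifelse(responded == 1,
                          ifelse(misreport == 1, 1 - employed_true, employed_true),
                          NA)

true_mean         <- mean(employed_true)                 # target quantity
respondent_true   <- mean(employed_true[responded == 1]) # respondents' register values
respondent_report <- mean(employed_report, na.rm = TRUE) # what respondents report

c(nonresponse_error = respondent_true - true_mean,
  measurement_error = respondent_report - respondent_true,
  total_error       = respondent_report - true_mean)
```

The decomposition also shows why the two error sources can either cancel or reinforce each other, depending on their signs.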

There are several other studies, among students or in the context of mixed-mode designs, that have looked at factual questions and estimated both measurement and nonresponse error in the same study. So, what do we learn? From my reading of the literature, there is no clear pattern in the findings. Sometimes measurement errors are larger, sometimes nonresponse is larger. And sometimes these survey errors go in the same direction, and sometimes in different directions. A further problem is that these validation studies use factual questions, not the attitudinal questions that surveys are more often interested in. In conclusion, that means that:

1. For factual questions, it is not clear whether nonresponse or measurement errors are the larger problem. There is large variation across studies.
2. Because the measurement quality of attitudinal questions is generally lower than that of factual questions, measurement errors may pose a relatively larger problem than nonresponse in attitudinal questions.
3. BUT, we then have to assume that nonresponse bias is generally the same for attitudinal and factual questions, which may not be true. Stoop (2005) and others have shown that if you are interested in measuring "interest in politics", late and hard-to-reach respondents are very different from early and easy respondents.

So, what to do? How do we make progress, so that I can at some point give an answer to the question of which error source we should worry about most?

1. We could find studies with a very high response rate (100% ideally) and study the differences between the early, easy respondents and the late, hard-to-reach respondents, like Stoop did.
2. We should do more validation studies for factual questions, which should become more feasible, as more and more register data are available.
3. And, we should try to link MTMM studies and other psychometric models to nonresponse models. I recently did this for a panel study, but what is really needed is similar work in cross-sectional studies.

Thursday, April 27, 2017

Mobile-only web survey respondents

My breaks between posts are getting longer and longer. Sorry, my dear readers. Today, I am writing about research I did over a year ago with Vera Toepoel and Alerk Amin.

Our study was about a group of respondents we can no longer ignore: mobile-only web survey respondents. These are people who no longer use a laptop or desktop PC, but instead use their smartphone for most or all of their Internet browsing. If we as survey methodologists want these people to answer our surveys, we have to design our surveys for smartphones as well.

Who are these mobile-only web survey respondents? This population may of course differ across countries. We used data from the American Life Panel, run by RAND, to investigate what this group looked like in the United States, using data from 2014 (so the situation today may be a bit different). The full paper can be found here.

We find that of all people participating in 7 surveys conducted in the panel, 7% are mobile-only in practice. This is not a huge proportion, but it may matter a lot if these 7% of respondents are very different from other types of respondents. We find that they are.

In order to study how different these respondents are, we define 5 groups based on the device they use for responding to surveys:
1. Respondents who always use a PC for completing surveys. This is the largest group (68%) and therefore serves as the reference group.
2. Respondents who always use a tablet (5%)
3. Respondents who always use a smartphone (7% - our group of interest)
4. Respondents who mix tablets and PCs (7%)
5. Respondents who mix phones and PCs (10%)
A further 1% use all devices, but we ignore these respondents here.

Click Figure to enlarge. The effects shown are always in comparison to the reference group, which is the ‘always PC’ group.


The 5 groups serve as our dependent variable in a multinomial logit regression. The average marginal effects shown in the figure above represent the change in the likelihood of being in each of the other groups as compared to the 'always PC' reference group. The negative age coefficient of -.03 for the always-phone group means that with every decade respondents get younger (a negative effect), they have a .03 higher probability of being in the always-phone group rather than the always-PC group. These effects seem small, but they are not. An imaginary respondent aged 60 has a predicted probability of 92 percent of being in the always-PC group as opposed to the always-phone group, but this probability is about 80 percent for someone aged 20, controlling for the effects of other covariates.
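
To give a flavour of the kind of model behind the figure, here is a minimal sketch in R on simulated data. The variable names, group sizes and age effect are assumptions for illustration only, not the American Life Panel data, and the sketch shows predicted probabilities rather than the average marginal effects plotted above.

```r
# Minimal sketch: multinomial logit for device group on simulated data.
library(nnet)

set.seed(42)
n <- 2000
df <- data.frame(
  age              = sample(18:80, n, replace = TRUE),
  educ_ba          = rbinom(n, 1, 0.40),
  married          = rbinom(n, 1, 0.50),
  hispanic         = rbinom(n, 1, 0.15),
  african_american = rbinom(n, 1, 0.12)
)
groups <- c("always PC", "always tablet", "always phone", "tablet and PC", "phone and PC")
# Hypothetical assignment: older respondents more often in the "always PC" group
is_pc <- runif(n) < plogis(-1 + 0.04 * df$age)
df$device_group <- factor(ifelse(is_pc, "always PC",
                                 sample(groups[-1], n, replace = TRUE)),
                          levels = groups)     # first level = reference group

fit <- multinom(device_group ~ age + educ_ba + married + hispanic + african_american,
                data = df, trace = FALSE)

# Predicted group probabilities for two otherwise identical respondents aged 60 and 20
profile <- data.frame(age = c(60, 20), educ_ba = 0, married = 1,
                      hispanic = 0, african_american = 0)
predict(fit, newdata = profile, type = "probs")
```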

Our take-away? Apart from age, 'always phone' respondents are also less likely to have a higher education (Bachelor's degree or higher), more likely to be married, and more likely to be of Hispanic or African American ethnicity. These characteristics coincide with some of the most important characteristics of hard-to-recruit respondents. While designing your surveys for smartphones will not get these hard-to-recruit respondents into your panel, you can easily lose them by not designing your surveys for smartphones.

Wednesday, February 10, 2016

The traditional web survey is dead

Why we should throw out most of what we know on how to visually design web surveys

In 2000, web surveys looked like postal surveys stuck onto a screen. Survey researchers needed time to get used to the idea that web surveys should perhaps look different from mail surveys. When I got into survey methodology in 2006, everyone was, for example, still figuring out whether to use drop-down menus (no), how many questions to put on one screen (a few at most), let alone whether to use slider bars (they're not going to reduce breakoffs).
We now know that good web surveys don't look like mail surveys. Proponents of 'gamification' in surveys argue that survey researchers are still underusing the potential of web surveys. Perhaps they are right. So, while web surveys are ubiquitous, and have been around for almost 20 years, we still don't know exactly how to visually design them.

The next challenge is already upon us. How are we going to deal with the fact that so many surveys are now being completed on devices other than computers? Touch screens are getting more popular, and screens for smartphones can be as small as 4 inches. People are nowadays using all kinds of devices to complete surveys. The rules for how a questionnaire should look on a 21-inch desktop monitor are, in my view, not the same as for a 5-inch iPhone.

Just take the following picture as an example. It was taken from the Dutch LISS panel in 2013, before they came up with a new and much better design for mobile surveys. Yes, this is still what many people see when they answer a survey question on a smartphone. Is this the way to do it? Probably not. Why should there be a 'next' (verder) button at the bottom of the page? Can't we redesign the answer scale so that we use more space on the screen?
 

Small touchscreens ask for a different design. Take the picture below. Tiles are used instead of small buttons, so that the entire screen is used. There is no next button; once you press a tile, the next question shows up. To me this looks better, but is it really? Only proper (experimental) research can tell.

I also still see lots of problems with adapting our visual design to smartphones. Where to put the survey question in the example below? On a separate page? What if respondents tick a wrong answer? And, what if some respondents get this question on a PC? Should the questions look the same to minimize measurement differences, or be optimized for each device separately? Web surveys are dead. Mixed-device surveys are here to stay.


I think there are lots of opportunities for exciting research on how to design surveys for the new era of mixed-device surveys. For anyone interested, I recently edited an open-access special issue on mixed-device surveys together with Vera Toepoel, where different authors for example study:

- why using slider bars is a bad idea for (mobile) surveys
- the effects of device switching within longitudinal surveys (study 1 and study 2) on data quality
- different possible visual designs for implementing mixed-device surveys, and their effects on data quality

These studies are just the beginning. Much more is necessary, and it will take trial, error and probably another 15 years before we have figured out how to design questionnaires for mixed-device surveys properly.

Tuesday, December 1, 2015

Retrospective reporting

Back after a long pause. Panel surveys traditionally interview respondents at regular intervals, for example monthly or yearly. This interval is mostly chosen for practical reasons: interviewing people more frequently would lead to a large respondent burden, and a burden on data processing and dissemination. For these practical reasons, panel surveys often space their interviews one year apart. Many of the changes (e.g. changes in household composition) we as researchers are interested in occur slowly, and annual interviews suffice to capture these changes.
Sometimes we want to get reports at a more detailed level, however. For example, we would like to know how often a respondent visits a doctor (general practitioner) in one year, or when a respondent went on holidays. In order to get at such an estimate, survey researchers can do one of several things:

1. We can ask a respondent to remember all doctor visits in the past year, and rely on retrospective recall. We know that doesn't work very well, because respondents cannot remember all visits. Instead, respondents will rely on rounding, guessing and estimation to come to an answer.
2. We can ask respondents for visits in, say, the past month, and rely on extrapolation to get to an annual estimate. This strategy only works well if doctor visits are stable throughout the year (which they are not; the sketch after this list illustrates this).
3. We can try to break down the year into months, and instead of asking for doctor visits in the last year, ask for doctor visits in each month, reducing the reference period. We can stimulate the retrieval of the correct information further by using a timeline or by the use of landmarks.
4. We can interview respondents more often. So, we conduct 12 monthly interviews instead of one annual interview, thereby reducing both the reference and recall period.
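
The sketch below illustrates, with simulated visit data and purely hypothetical recall and seasonality assumptions, why strategies 1 and 2 can go wrong: long recall loses visits, and extrapolating from a single month ignores seasonal variation.

```r
# Minimal sketch with simulated doctor visits; the seasonal pattern and recall
# probabilities are hypothetical, for illustration only.
set.seed(7)
n <- 1000
monthly_rate <- c(0.50, 0.45, 0.40, 0.30, 0.25, 0.20,
                  0.20, 0.25, 0.30, 0.40, 0.50, 0.60)   # winter peak
visits <- sapply(monthly_rate, function(r) rpois(n, r)) # n x 12 matrix of true visits

true_annual <- rowSums(visits)

# Strategy 1: one annual question; assume older visits are recalled less often
recall_prob <- 0.90 ^ (12:1)                            # month 1 is furthest back
annual_recall <- sapply(seq_len(n), function(i) sum(rbinom(12, visits[i, ], recall_prob)))

# Strategy 2: ask about the last month only and multiply by 12
extrapolated <- visits[, 12] * 12

# Strategies 3/4: twelve monthly reports, here with (near-)perfect recall
monthly_sum <- rowSums(visits)

round(c(truth = mean(true_annual), annual_recall = mean(annual_recall),
        extrapolation = mean(extrapolated), monthly_reports = mean(monthly_sum)), 2)
```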

With Tina Glasner and Anja Boeve, I recently published a paper that compared methods 1, 3 and 4 within the same study to estimate the total number of doctor (family physician) visits. This study is unique in the sense that we used all three methods with the same respondents, so for each respondent we can see how reporting differs when we rely on annual or monthly recall, and on whether we use an annual or monthly reference period. Our study also included an experiment to see whether timelines and landmarks improved recall.

You can find the full paper here.

We find that respondents give different answers about their doctor visits depending on how we ask them. The estimates for annual visits are:
- annual estimate (1 question): 1.96 visits
- monthly estimate with 12-month recall: 2.62 visits
- monthly estimate with 1-month recall: 3.90 visits

The average number of doctor visits in population registers is 4.66, so the monthly estimate with a 1-month recall period comes closest to our population estimate.
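
Expressed as shares of the register benchmark, a quick back-of-the-envelope calculation:

```r
# How much of the register benchmark does each question format miss?
estimates <- c(annual_1_question  = 1.96,
               monthly_12m_recall = 2.62,
               monthly_1m_recall  = 3.90)
register <- 4.66
round(100 * (1 - estimates / register), 1)   # percent of registered visits not reported
```

Roughly 58, 44 and 16 per cent of registered visits go unreported under the three formats, respectively.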

As a final step, we were interested in understanding which respondents give different answers depending on the question format. For this, we studied the within-person absolute difference between the monthly estimates with a 12-month and a 1-month recall period. The table below shows that the more frequent doctor visits are, the larger the differences between the 1-month and 12-month recall periods. This implies that respondents with more visits tend to underreport them more often when the recall period is long. The same holds for people in moderate and good health. People in bad health often go to the doctor regularly, and remember these visits; more infrequent visits are more easily forgotten. Finally, we find that the results of our experiment are non-significant. Offering respondents personal landmarks, and putting these next to the timeline to improve recall, does not lead to smaller differences.


In practice, these findings may be useful when one is interested in estimating frequencies of behavior over an annual period. Breaking up the 'annual-estimate' question into twelve monthly questions helps to improve data quality. Asking about such frequencies on 12 separate occasions helps further, but this is unlikely to be feasible due to the greatly increased costs of more frequent data collection. In self-administered web surveys this might, however, be feasible. Splitting up questionnaires into multiple shorter ones may not only reduce burden, but can increase data quality for specific survey questions as well.


Wednesday, May 20, 2015

Adaptive designs: 4 ways to improve panel surveys

This is a follow-up on why I think panel surveys need to adapt their data collection strategies to target individual respondents. Let me first note that apart from limiting nonresponse error, there are other reasons why we would want to do this. We can limit survey costs by using expensive survey resources only for people who need them.
A focus on nonresponse alone can be too limited. For example: imagine we want to measure our respondents' health. We can maybe do this cheaply by using web interviews, and then try to limit nonresponse error by using interviewers to convert initial nonrespondents. But what about measurement? If we use web surveys, we largely have to rely on self-reports of respondents' health. But if we use interviewers for everyone and do a face-to-face survey among all our respondents, we can use the interviewers to obtain objective health measures. These objective measurements could be much better than the self-reports. So face-to-face interviews may not be 'worth' the cost if we look at nonresponse alone, but if we also include the effects on measurement, they may be a viable option, even if we have to reduce the sample size.
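
A back-of-the-envelope sketch of this trade-off, with all numbers purely hypothetical (the costs, biases and variance below are assumptions for illustration, not estimates from any study):

```r
# Hypothetical trade-off: many cheap web interviews with biased self-reports
# versus fewer, expensive face-to-face interviews with objective measurements.
budget   <- 100000   # total data collection budget (hypothetical)
cost_web <- 20       # cost per completed web interview (hypothetical)
cost_f2f <- 200      # cost per completed face-to-face interview (hypothetical)
sd_y     <- 10       # population SD of the health measure (hypothetical)
bias_web <- 1.5      # assumed measurement bias of self-reports
bias_f2f <- 0.2      # assumed measurement bias of objective measures

n_web <- budget / cost_web
n_f2f <- budget / cost_f2f

rmse <- function(bias, n) sqrt(bias^2 + sd_y^2 / n)   # total error of the estimated mean
c(web = rmse(bias_web, n_web), f2f = rmse(bias_f2f, n_f2f))
```

With these made-up numbers the smaller face-to-face sample still wins on total error; with other numbers it would not. The point is only that design decisions should weigh measurement error against sampling and nonresponse error and costs, not look at response rates alone.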
 
I think a focus just on any one type of survey error can have adverse effects, and it is Total Survey Error as well as costs we need to keep in mind. Having said this, I really believe nonresponse errors in panel surveys are a huge problem. What could we do (and have others done)?

1.  Targeted mode. Some respondents are easy to reach in all modes, and some are difficult in all modes. There is also a 'middle' group, who may participate in some modes, but not others. I, for example, dislike web surveys (because I get so many), but appreciate mail surveys (because I never get them). In a panel survey, we can ask respondents about mode preferences. Some studies (here, here) have found that stated mode preferences are not very predictive of response in that mode in the next wave, as people tend to report a preference for the mode they are currently interviewed in. This means we probably need a better model than just 'mode preference' to make this work.

Probably wants a different survey mode next time.
 
2.  Targeted incentives. We know some respondents are 'in it' for the money, or at least are sensitive to offers of incentives. In panel surveys, we can learn about this quickly by experimenting with incentive amounts both between and within persons. For example, does it help to offer higher incentives to hard-to-reach respondents? Does that help just once, or is there a persistent effect? It may be unethical to offer higher incentives only to hard-to-reach respondents, as we then put a premium on bad respondent behavior. We could, however, use different metrics for deciding whom to offer the incentive; nonresponse bias, for example, is a much better indicator. We do not yet know much about how to do this, although there is a nice example here.

3. Targeted advance letters. We know quite a lot about the effects different types of advance letters have on subsequent response. Young people can, for example, be targeted with a letter with a flashy lay-out and short bits of information about the study, while older people may prefer a more 'classic' letter with more extensive information about the study setup and results.
The effects of targeted letters on response in panel surveys are often small, and only present for specific subgroups. See here and here for two examples. Still, this type of targeting costs little, and perhaps we can find bigger effects if we know which groups to target with which message. As with other targeting methods, we need a combination of data mining and experimentation to develop knowledge about this.

4. Targeted tracking. I am not aware of any survey doing targeted tracking. Tracking is done during fieldwork. Respondents who are not located by an interviewer (or whose advance letters bounce) are sent back to the study coordinating team, after which tracking methods are used to locate the respondent at an alternative address. From the literature we know that it is mainly people who move house who need tracking. If we can successfully predict the likelihood of moving, we could potentially save time (and money) in fieldwork by putting cases into preventive tracking. We could also use a targeted order of tracking procedures, as James Wagner has done.
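
A minimal sketch of what such targeted, preventive tracking could look like: predict the probability of moving from earlier-wave information and flag the highest-risk cases before fieldwork starts. The data, predictors and cut-off below are hypothetical.

```r
# Minimal sketch: flag likely movers for preventive tracking (simulated data).
set.seed(3)
n <- 5000
panel <- data.frame(
  age           = sample(18:85, n, replace = TRUE),
  renter        = rbinom(n, 1, 0.35),
  years_at_addr = rpois(n, 6)
)
# Hypothetical outcome: moved before the next wave, more likely for young renters
p_move <- plogis(-2.5 - 0.02 * (panel$age - 40) + 1.0 * panel$renter -
                 0.05 * panel$years_at_addr)
panel$moved <- rbinom(n, 1, p_move)

fit <- glm(moved ~ age + renter + years_at_addr, family = binomial, data = panel)
panel$p_hat <- predict(fit, type = "response")

# Flag, say, the 10% of cases with the highest predicted move probability
panel$preventive_tracking <- panel$p_hat >= quantile(panel$p_hat, 0.90)
table(flagged = panel$preventive_tracking, moved = panel$moved)
```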





Wednesday, January 21, 2015

Why panel surveys need to go 'adaptive'

Last week, I gave a talk at Statistics Netherlands (slides here) about panel attrition. Initial nonresponse and dropout from panel surveys have always been a problem. A famous study by Groves and Peytcheva (here) showed that in cross-sectional studies, nonresponse rates and nonresponse bias are only weakly correlated. In panel surveys, however, all the signs are there that dropout is often related to change. Those respondents undergoing the most change are also the most likely to drop out. This is probably partly because of the respondents themselves (e.g. a move of house could be a good reason to change other things as well, like survey participation), but it is also because of how surveys deal with such moves. Movers are much harder to contact (if we don't have accurate contact details anymore), and movers are often assigned to a different interviewer. This all leads to an underestimate of the number of people who move house in panel studies. Moving house is associated with lots of other life events (change in household composition, change in work, income, etc.). In short, dropout is a serious problem in longitudinal studies.

The figure below shows the cumulative response rates for some large-scale panel studies. The selection of panel studies is somewhat arbitrary: I have tried to focus on large panel studies (so excluding cohort studies) that still exist today, with a focus on Western Europe.


Cumulative response rates in large panel surveys (click to enlarge)

The oldest study in the figure (PSID) has the highest initial response rate, followed by studies which were started in the 1980s (GSOEP), 1990s (BHPS), and early 2000s (HILDA). The more recent studies all have higher initial nonresponse rates. But not only that. They also have higher dropout rates (the lines go down much faster). This is problematic.

I think these differences are not due to the fact that we, as survey methodologists, are doing a worse job now than 20 years ago. If anything, we have been trying to use more resources, professionalize tracking, offer higher incentives, and be more persistent. In my view, the increasing dropout rates are due to changes in society (the survey climate). A further increase of our efforts (e.g. higher incentives) could perhaps help somewhat to reduce future dropout. I think this is, however, not the way to go, especially as budgets for data collection face pressures everywhere.

The way to reduce panel dropout is to collect data in a smarter way. First, we need to understand why people drop out. This is something we know quite well (but more can be done). For example, we know that likely movers are at risk. So, what we need are tailored strategies that focus on specific groups of people (e.g. likely movers). For example, we could send extra mailings in between waves only to them. We could use preventive tracking methods. We could put these into the field earlier.

I am not the first to suggest such strategies. We have been tailoring our surveys for ages to specific groups, but have mostly done so on an ad-hoc basis, never systematically. Responsive or adaptive designs try to apply tailoring systematically, for those groups that benefit most from it. Because we know so much about our respondents after wave 1, panel studies offer lots of opportunities to implement responsive designs.



Monday, December 8, 2014

Satisficing in mobile web surveys. Device-effect or selection effect?


Last week, I wrote about the fact that respondents in panel surveys are now using tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and not participate in some months.
The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?

To do this, we used 6 months of data from the LISS panel, and in each month we coded the User Agent String. We then coded types of satisficing behavior that occur in surveys: the percentage of item missings, whether respondents complete (non-mandatory) open questions, how long their answers were, whether respondents straightline, whether they pick the first answers in check-all-that-apply questions, and how many answers they click in a check-all-that-apply question. We also looked at interview duration, and at how much respondents liked the survey.
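
For concreteness, here is a minimal sketch of how a few of these indicators can be coded from a response matrix. The grid items, answer scale and open answers are made up; this is not the LISS data or our actual coding syntax.

```r
# Minimal sketch: item nonresponse and straightlining for a hypothetical
# 8-item grid on a 5-point scale (simulated data).
set.seed(5)
n <- 100
grid <- as.data.frame(matrix(sample(c(1:5, NA), n * 8, replace = TRUE,
                                    prob = c(rep(0.18, 5), 0.10)),
                             nrow = n))
names(grid) <- paste0("item", 1:8)

item_missing_pct <- rowMeans(is.na(grid)) * 100       # % item missings per respondent

straightline <- apply(grid, 1, function(x) {          # identical answers on all items
  x <- x[!is.na(x)]
  length(x) > 1 && length(unique(x)) == 1
})

open_answer    <- sample(c("", "no opinion", "surveys could be shorter"),
                         n, replace = TRUE)
open_completed <- nchar(open_answer) > 0              # completed the open question?
open_length    <- nchar(open_answer)                  # length of the open answer

summary(item_missing_pct)
mean(straightline)
```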

We found that respondents on a smartphone seem to do much worse. They take longer to complete the survey, are more negative about the survey, have more item missings, and have a much higher tendency to pick the first answer. On the other questions, differences were small, sometimes in favor of the smartphone user.

Click to enlarge: indicators of satisficing per device in LISS survey
Is this effect due to the fact that the smartphone and tablet are not made to complete surveys, and is satisficing higher because of a device-effect? Or is it a person effect, and are worse respondents more inclined to do a survey on a tablet or smartphone?

In order to answer this final question, we looked at the device transitions that respondents make within the LISS panel. In the 6 months of the LISS, respondents can make 5 transitions from using one device in one month to another (or the same) device in the next. For 7 out of 9 transition types (we have too few observations to analyze the tablet -> phone and phone -> tablet transitions), we can then look at the difference in measurement error that is associated with a change in device.

Click to enlarge. Changes in data quality (positive is better) associated with change in device.


The red bars indicate that there is no significant change in measurement error associated with a device change. Our conclusion is that device changes do not lead to more measurement error, with 2 exceptions:
1. A transition from tablet -> PC or phone -> PC in two consecutive months leads to a better evaluation of the questionnaire. This implies that the user experience of completing web surveys on a mobile device should be improved.
2. We find that people check more answers in a check-all-that-apply question when they move from a tablet -> PC, or from a phone -> PC.

So, in short: satisficing seems to be more problematic when surveys are completed on tablets and phones. But this can almost fully be explained by a selection effect. Those respondents who are worse at completing surveys choose to complete them more often on tablets and smartphones.

The full paper can be found here.

Tuesday, December 2, 2014

Which devices do respondents use over the course of a panel study?


Vera Toepoel and I have been writing a few articles over the last two years about how survey respondents are taking up tablet computers and smartphones. We were interested in studying whether people in a probability-based web panel (the LISS panel) use different devices over time, and whether switches in devices for completing surveys are associated with more or less measurement error.

In order to answer this question, we coded the User Agent Strings of the devices used by more than 6,000 respondents over a six-month period (see the publication tab for R syntax on how to do this).
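
For readers who want to try something similar: below is a minimal, illustrative sketch of classifying devices from User Agent Strings with regular expressions. It is not the published syntax from the publication tab, and a real classification needs a far longer list of patterns (and ideally a maintained parser).

```r
# Minimal sketch: classify a User Agent String as pc, tablet or smartphone.
classify_device <- function(ua) {
  ua <- tolower(ua)
  ifelse(grepl("ipad|tablet|android(?!.*mobile)", ua, perl = TRUE), "tablet",
  ifelse(grepl("iphone|mobile", ua), "smartphone",
  ifelse(grepl("windows nt|macintosh|x11|linux", ua), "pc", "unknown")))
}

classify_device(c(
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36",
  "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) Mobile/11A465",
  "Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) AppleWebKit/537.51"
))
```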

We find, as others have done, that in every wave about 10% of respondents use either a tablet or a smartphone. What is new in our analyses is that we focus on the question of whether respondents persistently use the same device.

The table below shows that PC users largely stick to their PC in all waves. For example, we see that 77.4% of PC respondents in April again use a PC in May. Only 1.5% of April's PC respondents switch to either a tablet or smartphone to complete a questionnaire in May.

Table. Devices used between April and September 2013 by LISS panel respondents.
N = 6,226. Click to enlarge
The proportion of respondents switching a PC for either a tablet or smartphone is similarly low in the other months, and is never more than 5%. This stability in device use for PCs is, however, not found for tablets and smartphones. Once people are using a smartphone in particular, they are not very likely to use a smartphone in the next waves of LISS. Only 29 per cent of smartphone users in July 2013 use a smartphone again in August, for example. The consistency of tablet usage increases over the course of the panel: 24% of respondents are consistent tablet users in April-May, but this increases to 64% in July-August.
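
The underlying computation is simply a month-to-month cross-tabulation with row proportions; a minimal sketch on hypothetical device data (not the LISS figures):

```r
# Minimal sketch: month-to-month device transition table (simulated data).
set.seed(9)
n <- 6226
devices <- c("PC", "tablet", "smartphone")
april <- sample(devices, n, replace = TRUE, prob = c(0.85, 0.07, 0.08))
# Hypothetical: PC users mostly stay with the PC, mobile users switch more often
may <- ifelse(april == "PC",
              sample(devices, n, replace = TRUE, prob = c(0.92, 0.04, 0.04)),
              sample(devices, n, replace = TRUE, prob = c(0.50, 0.25, 0.25)))

# Of April's users of each device, which device do they use in May?
round(prop.table(table(april, may), margin = 1), 3)
```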

Finally, it is worth noting that the use of either a smartphone or a tablet is more likely to lead to non-participation in the next wave of the survey. This may, however, be a sample selection effect: more loyal panel members may favor the PC to complete the questionnaires.

More in a next post on differences in respondents' answer behavior over time when they switch devices. Do respondents give worse answers when they complete a survey on a smartphone or tablet?

You can download the full paper here.

Wednesday, August 13, 2014

Can you push web survey respondents to complete questionnaires on their mobile phones?

I am back from some great holidays, and am revisiting some of the research I did over the last 2 years. Back then, I would not have expected that I would become so interested in doing survey research on mobile phones. I do think that a little change of research topic does one good.

I have written two papers with Vera Toepoel on how to do surveys on mobile phones. The first question we had was whether people were actually likely to do a survey on a mobile phone. Last year, Marketresponse, a probability-based web panel in the Netherlands, changed their survey software so that questionnaires would be dynamically adapted to mobile phone screen settings and navigation methods. They then informed their respondents about it, and encouraged them to try a short survey on shopping behavior on their smartphone (if they had one).

We found that of those respondents who owned a smartphone, 59% chose to use it when encouraged; we were positively surprised by this finding. Even with quite little encouragement, survey respondents are willing to try completing the survey on their mobile phone. Also, we found little reason to be worried about side-effects of encouraging mobile survey response.

- We found few differences in terms of demographics between those who did the survey on a mobile phone and those who did it on a desktop (including tablets).
- We found no differences in terms of response behavior.
- We found no difference in how mobile and desktop respondents evaluated the questionnaire.
- We found no difference in the time it took them to complete the survey (see the figure below). In fact, the timings were so similar, we could scarcely believe the differences were so small.


The full paper can be found here. There are a few potential caveats in our study: we used a sample of experienced survey respondents and did not use experimental assignment, so self-selection into devices could be selective beyond the variables we studied. So far, however, it really seems that web surveys on a mobile phone are not very different for respondents than traditional web surveys.

Monday, May 26, 2014

AAPOR 2014

Big data and new technologies for doing survey research: these were in my view the two themes of the 2014 AAPOR conference. The conference organisation tried to push the theme 'Measurement and the role of public opinion in a democracy', but I don't think that theme was really reflected in the talks at the conference. Or perhaps I missed those talks; the conference was huge as always (>1,000 participants).

The profession of survey research is surely changing. Mick Couper argued last year that the 'sky wasn't falling' on survey research, but that it is evolving. Big data may potentially replace parts of survey research, especially if we don't adapt to new technologies (mobile) and learn to use some of the data that are now found everywhere. Big data and survey research in fact have the same basic goal: to extract meaningful information from datasets (big data) or people (survey research), and to use it to inform policy making.

Big data can certainly be useful for policy-making. Of the 10 or so presentations that I saw at AAPOR, however, most just talked about the potential of using big data to inform policy makers.
What was in my opinion missing at AAPOR were good case studies showing how big data can replace survey research and provide valid inferences. I have seen many good earlier examples when it comes to predictions at the level of an individual using big data. When Amazon recommends me books related to a book I have previously bought, I find these useful and accurate predictions of what I really like. In politics, voter registration records can help politicians target likely voters for their party, as the 2012 Obama campaign showed.

But when it comes to aggregating big data to the level of the population, big data are often in trouble (the Obama election campaign is an outlier here, as they collected data on the whole population). Survey research has relied on the principle of random sampling from the population to draw inferences, but for big data, coverage and nonresponse errors are often unknown and impossible to estimate for the convenience samples that big data usually are. Paul Biemer made this point in an excellent talk.

Most of the other big data presentations at AAPOR were, to me, either in the category 'bar talk' - anecdotes without a scientific empirical strategy - or just talked about the potential of big data. And don't get me wrong: I do think that big data are very useful, especially if they cover a large proportion of the population (e.g. voter records), or if the goal is prediction at the level of an individual.

The other conference theme seemed to be mobile surveys. With Vera Toepoel, I gave a presentation on this topic, which may be the subject of a next blogpost. Here, I think survey researchers are much better equipped to deal with the challenge mobile devices pose. I saw many excellent presentations on questionnaire design for mobile surveys and on selection bias.

Finally, this is just my conference take-away. Some other bloggers (here, here) seem to have a slightly different view on the conference. Probably this is because I could only see 1 out of the 8 presentations given at any time. So be sure to check out their posts if you want to know more about the conference.