Abstract: Effective and simple screening tools are needed to detect behaviors that are established early in life and strongly influence weight gain later in life. Crowdsourcing could be a novel and useful tool for assessing childhood predictors of adult obesity. This exploratory study examined whether crowdsourcing could generate well-documented predictors in obesity research and, moreover, whether it could uncover new directions for future research. Participants were recruited through social media to a question-generation website, on which they answered questions and could pose new questions that they thought might predict obesity. During the two weeks of data collection, 532 participants (62% female; mean age = 26.5±6.7 years; mean body mass index (BMI) = 29.0±7.0) registered on the website and suggested a total of 56 unique questions. Nineteen of these questions correlated with BMI and covered several themes identified by prior research, such as parenting styles and healthy lifestyle. More importantly, participants identified potential determinants that were related to a lower BMI but have not been the subject of extensive research, such as parents packing their children’s lunch for school or talking to them about nutrition. The findings indicate that crowdsourcing can reproduce existing hypotheses and also generate ideas that are less well documented. The crowdsourced predictors discovered in this study emphasize the importance of family interventions in fighting obesity. The questions generated by participants also suggest new ways to express known predictors.
Abstract: This paper identifies trends within, and relationships between, the amount of participation and the quality of contributions in three crowdsourced surveys. Participants were asked to perform a collective problem-solving task that lacked any explicit incentive: they were instructed not only to respond to survey questions but also to pose new questions that they thought might, if answered by others, predict an outcome variable of interest to them. While the three surveys had very different outcome variables, target audiences, methods of advertisement, and lengths of deployment, we found very similar patterns of collective behavior. In particular, we found that the rate at which participants submitted new survey questions followed a heavy-tailed distribution; the distribution of the types of questions posed was similar across surveys; and many users posed non-obvious yet predictive questions. By analyzing responses to questions that contained a built-in range of valid responses, we found that fewer than 0.2% of responses lay outside those ranges, indicating that most participants tend to respond honestly to surveys of this form, even without explicit incentives for honesty. While we did not find a significant relationship between the quantity of participation and the quality of contribution for either response submissions or question submissions, we did find several more nuanced patterns of participant behavior that correlated with contribution quality in one of the three surveys. We conclude that there exists an optimal time for users to pose questions: early in their participation, but only after they have submitted a few responses to other questions. This suggests that future crowdsourced surveys may attract more predictive questions by prompting users to pose new questions at specific times during their participation and limiting question submission at non-optimal times.
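The range-validity check described in the second abstract (counting responses that fall outside a question's built-in range of valid answers) can be sketched as follows. This is a minimal illustration only; the question names, valid ranges, and responses below are hypothetical and do not come from the surveys themselves.

```python
# Hypothetical valid-response ranges for two numeric survey questions.
valid_ranges = {
    "age": (0, 120),       # years
    "hours_tv": (0, 24),   # hours per day
}

# Hypothetical (question, response) pairs submitted by participants.
responses = [
    ("age", 27),
    ("age", 340),          # out of range: implausible age
    ("hours_tv", 3),
    ("hours_tv", 5),
]

# Count responses lying outside the valid range for their question.
out_of_range = sum(
    1
    for question, value in responses
    if not (valid_ranges[question][0] <= value <= valid_ranges[question][1])
)

fraction = out_of_range / len(responses)
print(f"{fraction:.1%} of responses are out of range")  # 25.0% for this toy data
```

In the actual study this fraction was below 0.2%, which the authors interpret as evidence that participants respond largely honestly even without incentives.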