Evaluating mobile-based data collection for crowdsourcing behavioral research

Date published

2025-04

Free to read from

2025-03-03

Journal Title

Behavior Research Methods
Publisher

Springer

Type

Article

Citation

Esch DT, Mylonopoulos N, Theoharakis V (2025) Evaluating mobile-based data collection for crowdsourcing behavioral research. Behavior Research Methods, 57(4), Article 106

Abstract

Online crowdsourcing platforms such as MTurk and Prolific have revolutionized how researchers recruit human participants. However, since these platforms primarily recruit computer-based respondents, they risk missing respondents who have access only to, or spend more of their time on, mobile devices, which are more widely available. Additionally, there have been concerns that respondents who use such platforms heavily to earn an income provide lower-quality responses. We therefore conducted two studies, collecting data from the popular MTurk and Prolific platforms, from Pollfish, a self-proclaimed mobile-first crowdsourcing platform, and from the Qualtrics audience panel. By distributing the same study across these platforms, we examine data quality and the factors that may affect it. In contrast to MTurk and Prolific, most Pollfish and Qualtrics respondents were mobile-based. Using an attentiveness composite score we constructed, we find mobile-based responses comparable to computer-based responses, demonstrating that mobile devices are suitable for crowdsourcing behavioral research. However, platforms differ significantly in attentiveness, which is also affected by factors such as respondents’ incentive for completing the survey, their activity before engaging with it, environmental distractions, and having recently completed a similar study. Further, we find that stronger system 1 thinking is associated with lower attentiveness and mediates the relationship between attentiveness and some of the factors explored, including the device used. In addition, we raise the concern that most MTurk users pass frequently used attention checks but fail less commonly used measures, such as the infrequency scale.

Keywords

Experimental Psychology, 4905 Statistics, 5202 Biological psychology, 5204 Cognitive and computational psychology

Rights

Attribution 4.0 International
