Evaluating mobile-based data collection for crowdsourcing behavioral research

dc.contributor.author: Esch, Dennis T.
dc.contributor.author: Mylonopoulos, Nikolaos
dc.contributor.author: Theoharakis, Vasilis
dc.date.accessioned: 2025-03-03T12:47:05Z
dc.date.available: 2025-03-03T12:47:05Z
dc.date.freetoread: 2025-03-03
dc.date.issued: 2025-04
dc.date.pubOnline: 2025-02-28
dc.description.abstract: Online crowdsourcing platforms such as MTurk and Prolific have revolutionized how researchers recruit human participants. However, since these platforms primarily recruit computer-based respondents, they risk not reaching respondents who have exclusive access to, or spend more time on, mobile devices, which are more widely available. Additionally, there have been concerns that respondents who heavily utilize such platforms with the incentive to earn an income provide lower-quality responses. Therefore, we conducted two studies by collecting data from the popular MTurk and Prolific platforms, Pollfish, a self-proclaimed mobile-first crowdsourcing platform, and the Qualtrics audience panel. By distributing the same study across these platforms, we examine data quality and the factors that may affect it. In contrast to MTurk and Prolific, most Pollfish and Qualtrics respondents were mobile-based. Using an attentiveness composite score we constructed, we find mobile-based responses comparable with computer-based responses, demonstrating that mobile devices are suitable for crowdsourcing behavioral research. However, platforms differ significantly in attentiveness, which is also affected by factors such as the respondents’ incentive for completing the survey, their activity before engaging with it, environmental distractions, and having recently completed a similar study. Further, we find that stronger System 1 thinking is associated with lower attentiveness and mediates the relationship between some of the factors explored, including the device used, and attentiveness. In addition, we raise the concern that most MTurk users can pass frequently used attention checks but fail less commonly used measures, such as the infrequency scale.
dc.description.journalName: Behavior Research Methods
dc.identifier.citation: Esch DT, Mylonopoulos N, Theoharakis V. (2025) Evaluating mobile-based data collection for crowdsourcing behavioral research. Behavior Research Methods, Volume 57, Issue 4, April 2025, Article number 106
dc.identifier.eissn: 1554-3528
dc.identifier.elementsID: 565571
dc.identifier.issueNo: 4
dc.identifier.paperNo: 106
dc.identifier.uri: https://doi.org/10.3758/s13428-025-02618-1
dc.identifier.uri: https://dspace.lib.cranfield.ac.uk/handle/1826/23557
dc.identifier.volumeNo: 57
dc.language: English
dc.language.iso: en
dc.publisher: Springer
dc.publisher.uri: https://link.springer.com/article/10.3758/s13428-025-02618-1
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Experimental Psychology
dc.subject: 4905 Statistics
dc.subject: 5202 Biological psychology
dc.subject: 5204 Cognitive and computational psychology
dc.title: Evaluating mobile-based data collection for crowdsourcing behavioral research
dc.type: Article
dcterms.dateAccepted: 2024-11-07

Files

Original bundle
Name: Evaluating_mobile-based-2025.pdf
Size: 844.69 KB
Format: Adobe Portable Document Format
Description: Published version
License bundle
Name: license.txt
Size: 1.63 KB
Format: Plain Text
Description: