Document Type

Article

Publication Date

3-1-2022

Abstract

Increasingly, marketing and consumer researchers rely on online data collection services. While actively-managed data collection services directly assist with the sampling process, minimally-managed data collection services, such as Amazon's Mechanical Turk (MTurk), leave researchers solely responsible for recruiting, screening, cleaning, and evaluating responses. The research reported here proposes a 2 × 2 framework based on sampling goal and methodology for screening and evaluating the quality of online samples. By sampling goal, screeners can be categorized as selection, which involves matching the sample with the targeted population, or as accuracy, which involves ensuring that participants are appropriately attentive. By methodology, screeners can be categorized as direct, which screens individual responses, or as statistical, which provides quantitative signals of low quality. Multiple screeners for each of the four categories are compared across three MTurk samples, two actively-managed data collection samples (Qualtrics and Dynata), and a student sample. The results suggest the need for screening in every online sample, particularly for the MTurk samples, which have the fewest supplier-provided filters. Recommendations are offered for researchers and journal reviewers to promote greater transparency with respect to sampling practices.

Relational Format

journal article

DOI

10.1016/j.ijresmar.2021.05.001

Accessibility Status

Searchable text


