Abstract
Background
Models of satisficing suggest that study participants may not fully process survey items or provide accurate responses when survey burden is higher and participant motivation is lower. Participants who do not fully process survey instructions can reduce a study’s power and hinder generalizability. Data quality and participant compliance are therefore common concerns among researchers using self-report measures. Similarly, attrition can hurt a study’s power and generalizability.
Objective
Given that college students comprise most samples in psychological studies, especially examinations of student issues and psychological health, it is critical to understand how college student recruitment sources affect data quality (operationalized as performance on attention check items with directive instructions and correct answers) and retention (operationalized as completion of follow-up surveys over time). This study aimed to examine the following: whether data quality varies across recruitment sources, whether study retention varies across recruitment sources, the impact of data quality on study variable associations, the impact of data quality on measures of internal consistency, and whether the demographic characteristics of participants who failed attention checks differed significantly from those of participants who did not.
Methods
This examination was a follow-up analysis of 2 previously published studies to explore data quality and study compliance. Study 1 was a cross-sectional, web-based survey examining college stressors and psychological health (282/407, 69.3% female; 230/407, 56.5% White; 113/407, 27.8% Black; mean age 22.65, SD 6.73 years). Study 2 was a longitudinal college drinking intervention trial with an in-person baseline session and 2 web-based follow-up surveys (378/528, 71.6% female; 213/528, 40.3% White; 277/528, 52.5% Black; mean age 19.85, SD 1.65 years). Attention checks were included in both studies to assess data quality. Participants in both studies were recruited from a psychology participation pool (a pull-in method; for course credit) and from the general student body (a push-out method; for monetary payment or raffle entry).
Results
A greater proportion of participants recruited through the psychology pool failed attention checks in both studies, suggesting poorer data quality. The psychology pool was also associated with lower retention rates over time. After screening out participants who failed attention checks, some correlations among the study variables were stronger, some were weaker, and some were largely unchanged, potentially suggesting bias introduced by including these participants. Differences among the indicators of internal consistency for the study measures were negligible. Finally, attention check failure was not significantly associated with most demographic characteristics but varied across some racial identities. This suggests that filtering out data from participants who failed attention checks may not substantially limit sample diversity.
Conclusions
Investigators conducting college student research should carefully consider recruitment and include attention checks or other means of detecting poor quality data. Recommendations for researchers are discussed.
Subject
Health Informatics, Medicine (miscellaneous)