Abstract
Background
With the growing interest in mobile health (mHealth), behavioral medicine researchers are increasingly conducting intervention studies that use mobile technology (eg, to support healthy behavior change). Although such studies' scientific premises are often sound, there is a dearth of implementation data on which to base mHealth research methodologies. Notably, mHealth approaches must be designed to be acceptable to research participants to support meaningful engagement, yet little empirical evidence exists about the design factors that influence acceptability in such studies.
Objective
This study aims to evaluate the impact of two common design factors in mHealth intervention research on reported participant acceptability: requiring multiple devices (eg, a study smartphone and a wrist sensor) versus a single device, and providing individually tailored feedback versus generic content.
Methods
A diverse US adult convenience sample (female: 104/255, 40.8%; White: 208/255, 81.6%; aged 18-74 years) was recruited to complete a web-based experiment using a 2×2 factorial design (number of devices × nature of feedback). Participants first completed a learning module explaining the necessary concepts (eg, behavior change interventions, acceptability, and tailored content) and then viewed four vignettes, one for each factorial cell, in random order. Each vignette described a hypothetical mHealth intervention study featuring a different combination of the two design factors (a single device vs multiple devices; tailored vs generic content). Participants rated each study on acceptability dimensions (interest, benefit, enjoyment, utility, confidence, difficulty, and overall likelihood of participating).
Results
Reported interest, benefit, enjoyment, confidence in completing study requirements, and perceived utility were each significantly higher for studies featuring tailored (vs generic) content, as was the overall estimated likelihood of participation. Ratings of interest, benefit, and perceived utility were also significantly higher for studies requiring multiple devices (vs a single device); however, multiple-device studies received significantly lower ratings of confidence in completing study requirements, were perceived as more difficult, and were associated with a lower estimated likelihood of participation. There was no evidence of a statistical interaction between the two factors for any of the outcomes tested.
Conclusions
The results suggest that potential research participants are sensitive to mHealth design factors. These design factors may shape initial perceptions of acceptability (in research or clinical settings), which in turn may be associated with participant selection processes (eg, self-selection), differential compliance with study or treatment procedures, and retention over time.