Affiliation:
1. RAND
2. Harris Interactive
3. California HealthCare Foundation
4. University of California, San Francisco
5. Mt. Sinai School of Medicine
6. Ingenix
7. The Wellcome Trust
Abstract
The authors conducted a large-scale survey about health care twice, once as a web survey and once as a random digit dialing (RDD) telephone survey. The web survey used a statistical technique, propensity scoring, to adjust for selection bias. When the weighted responses from the two surveys were compared, there were no significant response differences for 8 of 37 questions. Web survey responses were significantly more likely to agree with RDD responses when the question asked about the respondent’s personal health (9 times more likely), was a factual question (9 times more likely), or had only two rather than multiple response categories (17 times more likely). For three questions, significant differences became nonsignificant when adjacent categories of multicategory questions were combined. Factual questions also tended to be questions with two rather than multiple response categories; more study is needed to isolate the effects of these two factors.
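The propensity-scoring adjustment described in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' actual procedure: the covariates, selection model, and sample sizes are invented, and a plain gradient-descent logistic regression stands in for whatever software the study used. The idea is to model each respondent's probability of being in the web sample given their demographics, then weight web respondents so their covariate mix matches the RDD reference sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled sample: rows are respondents, columns are
# standardized demographic covariates (e.g., age, education).
n = 1000
X = rng.normal(size=(n, 2))

# Mode indicator: 1 = web panel, 0 = RDD phone (reference sample).
# Web membership here depends on the first covariate, creating
# the kind of selection bias propensity scoring aims to correct.
mode = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5)))).astype(float)


def fit_logistic(X, y, steps=2000, lr=0.1):
    """Gradient-descent logistic regression (no external dependencies)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w


# Propensity score: estimated probability of being a web respondent.
w = fit_logistic(X, mode)
propensity = 1 / (1 + np.exp(-np.column_stack([np.ones(n), X]) @ w))

# Weight each web respondent by the odds of belonging to the RDD
# sample; high-propensity (overrepresented) web respondents are
# down-weighted, so the weighted web sample resembles the RDD sample.
web_weights = (1 - propensity[mode == 1]) / propensity[mode == 1]
```

After weighting, the covariate means of the web sample should move toward those of the RDD sample; survey responses are then compared using these weights rather than raw counts.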
Subject
Law, Library and Information Sciences, Computer Science Applications, General Social Sciences
Cited by 125 articles.