Funding reductions combined with increasing data-collection costs required that Wave V of the USA’s National Longitudinal Study of Adolescent to Adult Health (Add Health) abandon its traditional approach of in-person interviewing and adopt a more cost-effective method. This approach used the mail/web mode in Phase 1 of data collection and in-person interviewing for a random sample of nonrespondents in Phase 2. In addition, to facilitate the comparison of modes, a small random subsample served as the control and received the traditional in-person interview. Based on an analysis of the survey data, we show that concerns about reduced data quality as a result of the redesign were unfounded. In several important respects, the new two-phase, mixed-mode design outperformed the traditional design, with greater measurement accuracy, improved weighting adjustments for mitigating the risk of nonresponse bias, reduced residual (or post-adjustment) nonresponse bias, and substantially reduced total mean squared error of the estimates. This good news was largely unexpected, given the preponderance of literature suggesting that data quality could be adversely affected by the transition to a mixed-mode design. The bad news is that the transition carries a high risk of mode effects when Wave V estimates are compared with those from prior waves. Analytical results suggest that significant differences in longitudinal change estimates can occur about 60% of the time purely as an artifact of the redesign. This raises the question: how should a data analyst interpret significant findings in a longitudinal analysis in the presence of mode effects? This chapter presents the analytical results and attempts to address this question.
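As a brief sketch of the quantities referenced above (the symbols $\hat{\theta}$, $\bar{y}_{V}$, $\bar{y}_{IV}$, and $\delta$ are illustrative notation, not necessarily the chapter's own), the total mean squared error of an estimator follows the standard decomposition into variance plus squared bias,
$$\mathrm{MSE}(\hat{\theta}) \;=\; \mathrm{Var}(\hat{\theta}) \;+\; \bigl[\mathrm{Bias}(\hat{\theta})\bigr]^{2},$$
where, under the two-phase design, the bias term includes residual (post-adjustment) nonresponse bias and any measurement (mode) effect. Likewise, for a longitudinal change estimate of the form $\widehat{\Delta} = \bar{y}_{V} - \bar{y}_{IV}$, a mode effect $\delta$ in the Wave V mean shifts the expected change from $\Delta$ to $\Delta + \delta$, which is one way a statistically significant difference can arise purely as an artifact of the redesign rather than from true change.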