Affiliation:
1. Pharmacometrics and Systems Pharmacology, Pfizer, Groton, Connecticut, USA
Abstract
It is often a goal of model development to predict data from which a variety of outcomes can be derived, such as threshold‐based categorization or change from baseline (CFB) transformations. This approach can improve power or support multiple decisions. Because these derivations are indirectly predicted from the model, they are valuable tests for misspecification when used in visual or numeric predictive checks (V/NPCs). However, derived outcome V/NPCs (especially if primary or key secondary) are often overly scrutinized and held to an uncommonly strict standard when comparing model predictions to point estimates, even if by conventional standards both the directly and indirectly modeled data are captured well. Here, simulations of directly modeled data were used to determine where apparent issues in V/NPCs of derived outcomes are expected. Two types of datasets were simulated: (1) a simple pre–post study and (2) pharmacokinetic/pharmacodynamic data from a dose‐ranging study. A psoriasis exposure–response model case study was also assessed. V/NPCs were generated on the raw data, CFB data, and placebo‐corrected CFB (dCFB) data, and binned summary statistics of the observed data for each trial were graded as being strongly or weakly supportive of a predictive model (within the interquartile range or the 95% central distribution of all simulated trials, respectively). In all cases, the strength of support in direct data V/NPCs was minimally related to that in derived outcome V/NPCs. There are myriad benefits to modeling the underlying data of a derived measure, and these results support caution in discarding adequate models based on overly strict derived measure predictive checks.
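The workflow the abstract describes can be sketched in a few lines: simulate raw endpoint data directly, derive CFB and placebo-corrected CFB (dCFB) from the simulated values, and grade an observed summary statistic against the simulated trial distribution using the interquartile range (strongly supportive) and the central 95% interval (weakly supportive). All parameter values, sample sizes, and function names below are illustrative assumptions, not the paper's actual simulation settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative pre-post study: the model directly describes the raw endpoint;
# CFB and dCFB are derived afterwards. All values are assumed for the sketch.
n_per_arm = 50       # subjects per arm (assumed)
n_trials = 500       # simulated trial replicates (assumed)
drug_effect = -2.0   # true treatment effect on the endpoint (assumed)
sd = 4.0             # residual standard deviation (assumed)

def simulate_trial(rng):
    """Simulate one trial; return the mean drug-arm CFB and the dCFB."""
    pre = rng.normal(20.0, sd, size=(2, n_per_arm))       # baseline, both arms
    post = pre + rng.normal([[0.0], [drug_effect]], sd)   # placebo vs. drug
    cfb = post - pre                                      # change from baseline
    dcfb = cfb[1].mean() - cfb[0].mean()                  # placebo-corrected CFB
    return cfb[1].mean(), dcfb

sims = np.array([simulate_trial(rng) for _ in range(n_trials)])

def grade(observed, simulated):
    """Grade an observed statistic against the simulated trial distribution:
    'strong' inside the IQR, 'weak' inside the central 95%, else 'flagged'."""
    q25, q75 = np.percentile(simulated, [25, 75])
    lo, hi = np.percentile(simulated, [2.5, 97.5])
    if q25 <= observed <= q75:
        return "strong"
    if lo <= observed <= hi:
        return "weak"
    return "flagged"

obs_cfb, obs_dcfb = simulate_trial(rng)   # stand-in for observed trial data
print(grade(obs_cfb, sims[:, 0]), grade(obs_dcfb, sims[:, 1]))
```

The point the abstract makes follows from this setup: an observed direct statistic can fall inside the IQR while the derived dCFB statistic falls only inside the 95% band (or outside it) purely by chance, so the two grades need not agree even under a correctly specified model.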