Affiliations:
1. Psychology and the Marshall School of Business, University of Southern California, Los Angeles, California, USA
2. Psychology, Universität Würzburg, Würzburg, Germany
3. Department of Statistics, Columbia University, New York, New York, USA
4. Samuel Curtis Johnson Graduate School of Management, Cornell SC Johnson College of Business, Cornell University, Ithaca, New York, USA
5. The Fuqua School of Business, Duke University, Durham, North Carolina, USA
Abstract
Three commentaries below provide different perspectives on data analysis and reporting. They generally focus on how the quality of the measures and manipulations determines the value of the analysis. Norbert Schwarz and Fritz Strack's comment dwells less on choosing the right statistic and more on how "sloppy reasoning, gaps between theoretical concepts and their operationalizations, and blissful ignorance of the situated nature of human thinking, feeling, and doing contribute more to the limited reproducibility of empirical findings than the choice of a particular test statistic." They propose that particular effects are contextual and are inappropriately labeled as true or false. Instead, our job is to focus on general constructs that make sense of the diversity of human experience and psychological reactions. Too often, studies replicating psychological effects under the noisy and confounded conditions of the marketplace produce a garbage-in, garbage-out form of statistical uncertainty. Researchers instead need to look toward tests of specific interactions, which can clarify the influencing factors on the basis of theoretical considerations. The second comment is by Andrew Gelman, an outstanding statistician. He proposes that "once the data have been collected, the most important decisions have already been made." He then provides four recommendations that enable the statistics to work appropriately. The first is to be sure that the measures address the construct of interest; similar to the position of Schwarz and Strack, it is important to articulate the relevance of a statistically significant finding. The second recommendation seeks to curb the large number of studies with inflated effect sizes built from narrow designs and unwarranted optimism. The third recommendation is to simulate data from a model and consider the distribution of possible results. That is often done to test a new analysis method, but it can be even more important in marketplace studies where novel characteristics of the sample and experimental conditions enter the analysis. Finally, he recommends considering the likely analyses before collecting the data. Such foresight would encourage, for example, thinking about the kind of data needed to defend the equivalence of the demographics of the control and treatment groups. The final commentary is by Stijn van Osselaer. He agrees that p-values reflect the detailed methods of a given study but do not address the problem of generalizability. Like Gelman, he sees that designs focused on effect sizes may have generated too many studies that do not replicate. He contrasts broad explorations with narrowly defined existence tests, which provide evidence that an effect exists somewhere but are silent about the other contexts in which it may apply. For theoretical problems relevant to applications, it is important to identify moderators through broad sampling across population characteristics, stimuli, and situations. He proposes that consumer psychologists should not try to do everything in one paper but should build practically relevant, applicable knowledge across multiple articles. Different articles, authors, and research methods play various roles, with each article focusing on an important stage in the process: generating hypotheses, providing existence proofs, or exploring their broad applicability. That pragmatic approach can integrate theoretical silos in the service of resolving complex human problems and holds promise as a criterion for relevant publications.
Subject
Marketing, Applied Psychology