Traditional metric indicators of scientific productivity (e.g., the journal impact factor, the h-index) have been heavily criticized as invalid and as fueling a culture that focuses on the quantity, rather than the quality, of a person’s scientific output. There is now a widespread demand for concrete alternatives to current academic evaluation practices. In a previous report, we laid out four basic principles of more responsible research assessment in academic hiring and promotion processes (Schönbrodt et al., 2022). The present paper offers a specific proposal for how these principles may be implemented in practice: We argue in favor of broadening the range of relevant research contributions and propose concrete quality criteria (including ready-to-use online templates) for published research articles, data sets, and research software. These criteria are intended for use primarily in the first phase of the assessment process, where their function is to establish a minimum threshold of methodological rigor that candidates must pass in order to be considered further for hiring and promotion. The second phase of the assessment process, in contrast, focuses more on the actual content of candidates’ research output and necessarily relies on more narrative means of assessment. We hope that this proposal will engage our colleagues in the field in a discussion of how to replace current invalid evaluation criteria with ones that relate more closely to scientific quality.